Solution: (Course X071570: Stochastic Processes)

Solution I (Course X071570: Stochastic Processes) October 24, 2013

Exercise 1.1: Find all functions f from the integers to the real numbers satisfying

f(n) = (1/2) f(n + 1) + (1/2) f(n - 1) - 1.

A special solution to the equation is f(n) = n^2. The corresponding homogeneous difference equation has solutions f_1(n) = 1, f_2(n) = n. Hence, a general solution is

f(n) = n^2 + c_1 + c_2 n,  c_1, c_2 ∈ R.

Exercise 1.2: Let the one-step transition probability matrix of a two-state Markov chain be given by

P = [ p      1 - p ]
    [ 1 - p  p     ].

Show by induction that the n-step transition probability matrix is given by

P^n = [ 1/2 + (1/2)(2p - 1)^n   1/2 - (1/2)(2p - 1)^n ]
      [ 1/2 - (1/2)(2p - 1)^n   1/2 + (1/2)(2p - 1)^n ].

Omitted.

Exercise 1.3: Suppose four Markov chains have the following one-step transition probability matrices respectively. Specify the communication classes of the Markov chains, and determine whether they are transient, recurrent and/or absorbing:

P_1 = …,  P_2 = …,  P_3 = …,  P_4 = …
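Since the induction in Exercise 1.2 is omitted, the closed form for P^n can at least be checked numerically; the sketch below uses p = 0.7, an arbitrary illustrative choice, and plain Python matrix multiplication:

```python
# Numerical check of the closed form for P^n in Exercise 1.2.
# p = 0.7 is an arbitrary illustrative choice, not part of the exercise.
p = 0.7

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = [[p, 1 - p], [1 - p, p]]
Pn = [[1.0, 0.0], [0.0, 1.0]]          # P^0 = identity
for n in range(1, 30):
    Pn = matmul(Pn, P)
    d = 0.5 + 0.5 * (2 * p - 1) ** n   # diagonal entry of the closed form
    o = 0.5 - 0.5 * (2 * p - 1) ** n   # off-diagonal entry
    for i in range(2):
        for j in range(2):
            expected = d if i == j else o
            assert abs(Pn[i][j] - expected) < 1e-12
```

Each row of P^n still sums to 1, and the entries converge to the uniform distribution (1/2, 1/2) as n grows, since |2p - 1| < 1.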

P_1: one communication class; it is positive recurrent because there is a unique invariant probability vector π = (1/3, 1/3, 1/3).

P_2: one communication class; it is positive recurrent because there is a unique invariant probability vector π = (1/6, 1/6, 1/3, 1/3).

P_3: denote the labels of the columns by A, B, C, D, E sequentially. There are three communication classes: (1) {A, C}, recurrent (irreducible, aperiodic); (2) {D, E}, recurrent (irreducible, aperiodic); (3) {B}, transient.

P_4: denote the labels of the columns by A, B, C, D, E sequentially. There are four communication classes: (1) {A, B}, recurrent (irreducible, aperiodic); (2) {C}, recurrent (absorbing); (3) {D}, transient; (4) {E}, transient.

Exercise 1.4: Let X_n be an irreducible Markov chain on the state space {1, 2, …, N}. Show that there exist C < +∞ and ρ < 1 such that for any states i, j,

P(X_m ≠ j, 0 ≤ m ≤ n | X_0 = i) ≤ C ρ^n.

Show that this implies E(T) < +∞, where T is the first time that the Markov chain reaches the state j when it starts in state i.

For fixed i, j, denote S' = {1, 2, …, N} \ {j} and Z_0 = i. For i ≠ j, irreducibility implies that there exist Z_1, Z_2, …, Z_m ∈ S' such that

P(Z_0, Z_1) P(Z_1, Z_2) ⋯ P(Z_m, j) > 0.

If m ≥ N - 1, then at least two of the Z's are identical; without loss of generality, assume Z_{k_1} = Z_{k_2} with k_1 < k_2. Since Z_{k_1} = Z_{k_2}, the shortened sequence (Z_0, …, Z_{k_1}, Z_{k_2+1}, …, Z_m) again carries positive probability, so we have found a shorter path for the Markov chain to reach j when started from i. Iterating, there is a positive-probability path of length at most N - 1. Hence there exists δ_{i,j} > 0 such that

P(X_m = j for some 0 ≤ m ≤ N | X_0 = i) ≥ δ_{i,j}.

Let δ = min{δ_{i,j} : i, j}; since the state space is finite, δ > 0 and for all i, j,

P(X_m = j for some 0 ≤ m ≤ N | X_0 = i) ≥ δ.

Write n = pN + q, where p, q are nonnegative integers and q < N. Conditioning successively on X_N, X_{2N}, …, X_{pN} and applying the Markov property to each block of length N (each block, started anywhere in S', avoids j with probability at most 1 - δ),

P(X_m ≠ j, 0 ≤ m ≤ n | X_0 = i) ≤ ∏_{k=1}^{p} max{ P(X_l ≠ j, (k-1)N + 1 ≤ l ≤ kN | X_{(k-1)N} = x) : x ∈ S' } ≤ (1 - δ)^p ≤ (1 - δ)^{n/N - 1}.

Since (1 - δ)^{n/N - 1} ≥ 1 when n < N, the bound holds for all n. Let C = (1 - δ)^{-1} and ρ = (1 - δ)^{1/N}; we get the first part of the desired result. The second part follows: P(T > n) ≤ C ρ^n, so E(T) = Σ_{n ≥ 0} P(T > n) ≤ C/(1 - ρ) < +∞.

Exercise 1.5: Consider three urns, a red one, a white one and a blue one. The red urn contains 1 red and 4 blue balls; the white urn contains 3 white balls, 2 red balls, and 2 blue balls; the blue urn contains 4 white balls, 3 red balls and 2 blue balls. At the initial step a ball is randomly picked from the red urn. At every subsequent stage, a ball is randomly selected from the urn whose colour is the same as that of the ball previously picked, and is then returned to that urn. Argue that the sequence of ball colours is a Markov chain by finding the transition matrix. What if the balls are not returned to the respective urns?

If returned, with states ordered (red, white, blue), the transition matrix is

P = [ 1/5  0    4/5 ]
    [ 2/7  3/7  2/7 ]
    [ 3/9  4/9  2/9 ].

If not returned, it is not a Markov chain; for instance,

P(X_3 = red | X_2 = blue, X_1 = blue, X_0 = red) = 3/8, while
P(X_3 = red | X_2 = blue, X_1 = red, X_0 = red) = 3/9.

Exercise 1.6: M balls are initially distributed among m urns. At each stage one of the balls is selected at random, taken from whichever urn it is in, and then placed, at random, in one

of the other m - 1 urns. Consider the Markov chain whose state at any time is the vector (n_1, …, n_m), where n_i denotes the number of balls in urn i. Guess the limiting probabilities for this Markov chain and then verify your guess.

Guess: the limiting distribution is the multinomial distribution

π((n_1, n_2, …, n_m)) = M! / (n_1! n_2! ⋯ n_m! m^M),

where n_i ≥ 0 and n_1 + n_2 + ⋯ + n_m = M. For two vectors (n_1, …, n_m) and (n_1, …, n_i + 1, …, n_j - 1, …, n_m) with i ≠ j and n_j > 0, the transition probability is

P((n_1, …, n_i + 1, …, n_j - 1, …, n_m), (n_1, …, n_m))
  = P(a ball is picked from the i-th urn and placed in the j-th urn)
  = (n_i + 1)/M · 1/(m - 1).

Therefore we have

Σ_{1 ≤ i ≠ j ≤ m} π((n_1, …, n_i + 1, …, n_j - 1, …, n_m)) P((n_1, …, n_i + 1, …, n_j - 1, …, n_m), (n_1, …, n_m))
  = Σ_{1 ≤ i ≠ j ≤ m} M!/(n_1! ⋯ (n_i + 1)! ⋯ (n_j - 1)! ⋯ n_m! m^M) · (n_i + 1)/(M(m - 1))
  = Σ_{1 ≤ i ≠ j ≤ m} M!/(n_1! n_2! ⋯ n_m! m^M) · n_j/(M(m - 1))
  = M!/(n_1! n_2! ⋯ n_m! m^M) = π((n_1, …, n_m)),

where the last equality uses Σ_{1 ≤ i ≠ j ≤ m} n_j/(M(m - 1)) = Σ_j (m - 1) n_j/(M(m - 1)) = 1. Or, you can also check the detailed-balance condition

π((n_1, …, n_i + 1, …, n_j - 1, …, n_m)) P((n_1, …, n_i + 1, …, n_j - 1, …, n_m), (n_1, …, n_m)) = π((n_1, …, n_m)) P((n_1, …, n_m), (n_1, …, n_i + 1, …, n_j - 1, …, n_m)).

Exercise 1.7: Consider simple random walk on the graph below (the figure is not reproduced here; reading off the transition matrix in the solution, the vertices are A, B, C, D, E and the edges are A-B, A-C, A-D, B-C, B-E, D-E).

(a). In the long run, about what fraction of time is spent in vertex A?

(b). Suppose a walker starts in vertex A. What is the expected number of steps until the walker returns to A?

(c). Suppose a walker starts in vertex C. What is the expected number of visits to B before the walker reaches A?

(d). Suppose the walker starts in vertex B. What is the probability that the walker reaches A before the walker reaches C?

(e). Again assume the walker starts in vertex C. What is the expected number of steps until the walker reaches A?

First, the one-step transition probability matrix (states ordered A, B, C, D, E) can be easily obtained:

P = [ 0    1/3  1/3  1/3  0   ]
    [ 1/3  0    1/3  0    1/3 ]
    [ 1/2  1/2  0    0    0   ]
    [ 1/2  0    0    0    1/2 ]
    [ 0    1/2  0    1/2  0   ].

We can further claim that this is an irreducible, aperiodic, discrete-time Markov chain on a finite state space. Solving πP = π gives the invariant distribution π = (1/4, 1/4, 1/6, 1/6, 1/6). Or, you can also obtain π by explaining clearly that the unique invariant distribution is proportional to the degree of each vertex. Now define

τ_x = inf{n ≥ 1 : X_n = x},  x ∈ {A, B, C, D, E}.

(a). 1/4.

(b). 1/π(A) = 4.

(c). Define f(x) = P(τ_B < τ_A | X_0 = x). By the one-step analysis strategy,

f(C) = P(τ_B < τ_A, X_1 = A | X_0 = C) + P(τ_B < τ_A, X_1 = B | X_0 = C) = 0 + 1/2 = 1/2.

For the other values of f we similarly obtain the system of equations

f(A) = 1/3 + (1/3) f(C) + (1/3) f(D)
f(B) = (1/3) f(C) + (1/3) f(E)
f(E) = 1/2 + (1/2) f(D)
f(D) = (1/2) f(E).

Therefore f(B) = 7/18, f(D) = 1/3, f(E) = 2/3. Now let N be the number of visits to B before reaching A when started in C. The Markov property shows that for any k ≥ 1, P(N = k) = f(C) f(B)^{k-1} (1 - f(B)), and P(N = 0) = 1 - f(C) = 1/2. Thus

E N = f(C) Σ_{k ≥ 1} k f(B)^{k-1} (1 - f(B)) = f(C) · 1/(1 - f(B)) = (1/2)/(11/18) = 9/11

(given at least one visit, the number of visits is geometric with parameter 1 - f(B)).
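The answers to Exercise 1.7 can be cross-checked numerically. The sketch below encodes the graph by adjacency lists and solves the hitting equations by fixed-point iteration, a numerical shortcut rather than the exact algebra used above:

```python
# Cross-check of Exercise 1.7: simple random walk on the graph with
# vertices A..E and edges A-B, A-C, A-D, B-C, B-E, D-E.
adj = {'A': 'BCD', 'B': 'ACE', 'C': 'AB', 'D': 'AE', 'E': 'BD'}

# (a), (b): the invariant distribution is proportional to vertex degree.
total_degree = sum(len(nb) for nb in adj.values())
pi = {v: len(nb) / total_degree for v, nb in adj.items()}
assert abs(pi['A'] - 1 / 4) < 1e-12      # long-run fraction of time at A
assert abs(1 / pi['A'] - 4) < 1e-12      # mean return time to A

def hit_prob(target, avoid, steps=500):
    """P(reach target before avoid), for every start vertex, by
    iterating the one-step equations to their fixed point."""
    h = {v: 0.0 for v in adj}
    h[target] = 1.0
    for _ in range(steps):
        h = {v: 1.0 if v == target else
                0.0 if v == avoid else
                sum(h[w] for w in adj[v]) / len(adj[v])
             for v in adj}
    return h

# (c): expected visits to B, started from C, before hitting A.
hb = hit_prob('B', 'A')
fC = hb['C']                             # P(tau_B < tau_A | X_0 = C)
fB = sum(hb[w] for w in adj['B']) / 3    # return to B before A, from B
EN = fC / (1 - fB)                       # geometric argument from the text
assert abs(fC - 1 / 2) < 1e-9
assert abs(fB - 7 / 18) < 1e-9
assert abs(EN - 9 / 11) < 1e-9

# (d): P(reach A before C | start B).
assert abs(hit_prob('A', 'C')['B'] - 4 / 7) < 1e-9

def hit_time(target, steps=5000):
    """Expected steps to reach target, again by fixed-point iteration."""
    h = {v: 0.0 for v in adj}
    for _ in range(steps):
        h = {v: 0.0 if v == target else
                1.0 + sum(h[w] for w in adj[v]) / len(adj[v])
             for v in adj}
    return h

# (e): expected number of steps from C to A.
h = hit_time('A')
assert abs(h['C'] - 29 / 11) < 1e-9
```

The iterations converge geometrically because the sub-stochastic part of the transition matrix (with the absorbing vertices removed) has spectral radius strictly below 1.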

(d). Define g(x) = P(τ_A < τ_C | X_0 = x). By the one-step analysis strategy we obtain the system of equations

g(B) = 1/3 + (1/3) g(E)
g(E) = (1/2) g(B) + (1/2) g(D)
g(D) = 1/2 + (1/2) g(E).

Therefore g(B) = 4/7, g(D) = 6/7, g(E) = 5/7. The answer is 4/7.

(e). Define T = inf{k ≥ 0 : X_k = A} and h(x) = E(T | X_0 = x). By the one-step analysis strategy we obtain the system of equations

h(C) = (1/2) · 1 + (1/2)(1 + h(B))
h(B) = (1/3) · 1 + (1/3)(1 + h(C)) + (1/3)(1 + h(E))
h(E) = (1/2)(1 + h(B)) + (1/2)(1 + h(D))
h(D) = (1/2) · 1 + (1/2)(1 + h(E)).

Therefore h(B) = 36/11, h(C) = 29/11, h(D) = 34/11, h(E) = 46/11. The answer is 29/11.

Exercise 1.8: Let X_1, X_2, … be the successive values from independent rolls of a standard six-sided die. Let S_n = Σ_{i=1}^n X_i and

T_1 = min{n ≥ 1 : S_n is divisible by 8},
T_2 = min{n ≥ 1 : S_n - 1 is divisible by 8}.

Find E(T_1) and E(T_2).

Define X_n now to denote the remainder after division of S_n by 8 (reusing the symbol X), with X_0 = 0; then X_n is a Markov chain on {0, 1, …, 7} with transition probability matrix

P = [ 0    1/6  1/6  1/6  1/6  1/6  1/6  0   ]
    [ 0    0    1/6  1/6  1/6  1/6  1/6  1/6 ]
    [ 1/6  0    0    1/6  1/6  1/6  1/6  1/6 ]
    [ 1/6  1/6  0    0    1/6  1/6  1/6  1/6 ]
    [ 1/6  1/6  1/6  0    0    1/6  1/6  1/6 ]
    [ 1/6  1/6  1/6  1/6  0    0    1/6  1/6 ]
    [ 1/6  1/6  1/6  1/6  1/6  0    0    1/6 ]
    [ 1/6  1/6  1/6  1/6  1/6  1/6  0    0   ].

The invariant probability distribution is π = (1/8, 1/8, 1/8, 1/8, 1/8, 1/8, 1/8, 1/8). Define τ_x = inf{n ≥ 1 : X_n = x}; then E(T_1) = E(τ_0 | X_0 = 0) = 1/π(0) = 8 and E(T_2) = E(τ_1 | X_0 = 0). Let

h(x) = E(τ_1 | X_0 = x). The one-step analysis gives h(1) = 8 (the mean return time to 1) and

h(0) = 1 + (1/6)(h(2) + h(3) + h(4) + h(5) + h(6))
h(2) = 1 + (1/6)(h(0) + h(3) + h(4) + h(5) + h(6) + h(7))
h(3) = 1 + (1/6)(h(0) + h(4) + h(5) + h(6) + h(7))
h(4) = 1 + (1/6)(h(0) + h(2) + h(5) + h(6) + h(7))
h(5) = 1 + (1/6)(h(0) + h(2) + h(3) + h(6) + h(7))
h(6) = 1 + (1/6)(h(0) + h(2) + h(3) + h(4) + h(7))
h(7) = 1 + (1/6)(h(0) + h(2) + h(3) + h(4) + h(5)).

Solving this linear system gives h(0) ≈ 6.8572, h(2) ≈ 7.8367, h(3) ≈ 6.7172, h(4) ≈ 6.8771, h(5) ≈ 6.8543, h(6) ≈ 6.8576, h(7) ≈ 6.8571. That is, E(T_2) = h(0) = 205886/30025 ≈ 6.8572.

Exercise 1.9: Let X_n be the Markov chain with state space Z and transition probabilities

P(n, n + 1) = p,  P(n, n - 1) = 1 - p,

where p > 1/2. Assume X_0 = 0.

(a). Let Y = min{X_0, X_1, …}. What is the distribution of Y?

(b). For a positive integer k, let T_k = min{n : X_n = k} and let e(k) = E(T_k). Explain why e(k) = k e(1).

(c). Find e(1).

(d). Use (c) to give another proof that e(1) = +∞ if p = 1/2.

(a). Define τ_i = inf{k ≥ 0 : X_k = i} and write r = (1 - p)/p < 1. For a fixed, sufficiently large positive integer N and a fixed positive integer k, define f(x) = P(τ_{-k-1} > τ_N | X_0 = x) for -k - 1 ≤ x ≤ N. Then we find the recursive formula

f(x) = (1 - p) f(x - 1) + p f(x + 1),

which implies f(x) = c_1 + c_2 r^x. Combined with the boundary conditions f(-k-1) = 0 and f(N) = 1, we have

c_1 = -r^{-k-1}/(r^N - r^{-k-1}),  c_2 = 1/(r^N - r^{-k-1}),

and therefore

f(0) = (1 - r^{-k-1})/(r^N - r^{-k-1}).

Note that f(0) is decreasing in N and bounded, and the events {τ_{-k-1} > τ_N} decrease to {Y ≥ -k} (the walk drifts to +∞ since p > 1/2); by continuity of probability,

P(Y ≥ -k) = lim_{N → +∞} f(0) = 1 - r^{k+1}.

That is, P(Y ≤ -k) = r^k, so

P(Y = -k) = P(Y ≤ -k) - P(Y ≤ -k-1) = (1 - r) r^k,  k ≥ 0;

in other words, -Y is geometric with parameter 1 - r = (2p - 1)/p.
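The limit in Exercise 1.9(a) can be watched numerically; p = 0.6 and k = 3 below are arbitrary illustrative choices:

```python
# Exercise 1.9(a): f(0) = P(tau_N < tau_{-k-1} | X_0 = 0) should decrease
# to its limit 1 - r^(k+1), r = (1-p)/p, as N grows.
# p = 0.6 and k = 3 are arbitrary illustrative choices.
p = 0.6
r = (1 - p) / p
k = 3

def f0(N):
    """Closed-form gambler's-ruin probability from the solution."""
    return (1 - r ** (-k - 1)) / (r ** N - r ** (-k - 1))

limit = 1 - r ** (k + 1)                  # P(Y >= -k)
values = [f0(N) for N in (5, 10, 20, 40, 80)]
assert all(a > b for a, b in zip(values, values[1:]))  # decreasing in N
assert all(v > limit for v in values)                  # from above
assert abs(values[-1] - limit) < 1e-12

# The law of -Y is geometric on {0, 1, 2, ...} with parameter 1 - r.
pmf = [(1 - r) * r ** j for j in range(500)]
assert abs(sum(pmf) - 1) < 1e-9
assert abs(sum(j * q for j, q in enumerate(pmf)) - r / (1 - r)) < 1e-6
```

The last assertion checks the geometric mean E(-Y) = r/(1 - r), which equals 2 for p = 0.6.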

(b). Write X_n = Σ_{k=1}^n ξ_k, where ξ_1, ξ_2, … are i.i.d. random variables distributed as ξ, with P(ξ = 1) = p, P(ξ = -1) = 1 - p. Note that T_1 = min{m : ξ_1 + ⋯ + ξ_m = 1} and, for general k,

T_k - T_{k-1} = min{m : ξ_{T_{k-1}+1} + ⋯ + ξ_{T_{k-1}+m} = 1};

therefore T_1, T_2 - T_1, …, T_k - T_{k-1} are i.i.d., which implies E(T_k) = k E(T_1).

(c). With the same notation, by definition,

E(X_{T_1}) = E( Σ_{k=1}^{T_1} ξ_k ) = E( Σ_{k=1}^{∞} ξ_k I(T_1 ≥ k) ).

Note that {T_1 ≥ k} = { ξ_1 + ⋯ + ξ_m ≤ 0 for all m ≤ k - 1 }, hence ξ_k and I(T_1 ≥ k) are independent. Therefore,

E(X_{T_1}) = E(ξ) Σ_{k=1}^{∞} P(T_1 ≥ k) = E(ξ) E(T_1).

Since X_{T_1} = 1 and E(ξ) = 2p - 1, we get e(1) = 1/(2p - 1).

(d). Omitted.

Exercise 1.10: Suppose J_1, J_2, … are independent random variables with P(J_i = 1) = 1 - P(J_i = 0) = p. Let k be a positive integer and let T_k be the first time that k consecutive 1s have appeared. In other words, T_k = n if J_i = 1 for all n - (k-1) ≤ i ≤ n and there is no m < n such that J_i = 1 for all m - (k-1) ≤ i ≤ m. Let X_0 = 0 and for n > 0, let X_n be the number of consecutive 1s in the last run, i.e. X_n = k if J_{n-k} = 0 and J_i = 1 for n - (k-1) ≤ i ≤ n.

(a). Explain why X_n is a Markov chain with state space {0, 1, 2, …} and give the transition probabilities.

(b). Show that the chain is irreducible and positive recurrent and give the invariant probability π.

(c). Find E(T_k) by writing an equation for E(T_k) in terms of E(T_{k-1}) and then solving the recursive equation.

(d). Find E(T_k) in a different way. Suppose the chain starts in state k; let T̂_k be the time until returning to state k and T̂_0 the time until the chain reaches state 0. Explain why E(T̂_k) = E(T̂_0) + E(T_k), find E(T̂_0), and use part (b) to determine E(T̂_k).

(a). For every k ≥ 0, P(k, k + 1) = p and P(k, 0) = 1 - p.
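The value e(1) = 1/(2p - 1) from Exercise 1.9(c) can be cross-checked against the series E(T_1) = Σ_{n ≥ 0} (2n + 1) C_n p^{n+1} (1 - p)^n, where C_n is the n-th Catalan number. This uses the standard first-passage probability P(T_1 = 2n + 1) = C_n p^{n+1} (1 - p)^n, which is not derived in the text, and p = 0.6 is an arbitrary choice:

```python
# Exercise 1.9(c): cross-check e(1) = 1/(2p-1) against the first-passage
# series, using the standard formula P(T_1 = 2n+1) = C_n p^(n+1) q^n
# with C_n the n-th Catalan number (an outside fact, not from the text).
# p = 0.6 is an arbitrary illustrative choice.
p, q = 0.6, 0.4

total_prob = 0.0
mean = 0.0
term = p                     # n = 0 term: C_0 * p^1 * q^0 = p
n = 0
while n < 20000:
    total_prob += term
    mean += (2 * n + 1) * term
    # C_{n+1}/C_n = 2(2n+1)/(n+2); each step also adds one factor p*q.
    term *= 2 * (2 * n + 1) / (n + 2) * p * q
    n += 1

assert abs(total_prob - 1) < 1e-9          # T_1 < infinity a.s. for p > 1/2
assert abs(mean - 1 / (2 * p - 1)) < 1e-6  # e(1) = 1/(2p-1) = 5
```

The terms shrink like (4pq)^n = 0.96^n, so the truncation at n = 20000 is far below floating-point precision.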

(b). For any two states i, j, assume without loss of generality i < j; then P^{j-i}(i, j) ≥ P(i, i+1) P(i+1, i+2) ⋯ P(j-1, j) = p^{j-i} > 0, and every state leads back to 0 in one step with probability 1 - p. Therefore the chain is irreducible. Now solve the equations Σ_k π(k) P(k, m) = π(m): using (a), we get the recursive formula

for m ≥ 1, p π(m - 1) = π(m);  (1 - p) Σ_{k ≥ 0} π(k) = π(0).

Under the constraint Σ_{k ≥ 0} π(k) = 1, we have π(m) = p^m (1 - p), m ≥ 0.

(c). Each time k consecutive 1s have appeared, the next variable equals 1 with probability p, independently of everything before; if it is 0 instead, the run is broken and we must wait for k consecutive 1s again. So the number of completed k-runs needed until one is immediately followed by a 1 is geometric with parameter p, and each attempt costs E(T_k) + 1 steps on average (the k-run plus the extra toss). By Wald's identity,

E(T_{k+1}) = (E(T_k) + 1) E(T_1),

where T_1 follows a geometric distribution with parameter p, i.e. P(T_1 = k) = (1 - p)^{k-1} p for k ≥ 1, so E(T_1) = 1/p. The equation can be rewritten as

E(T_{k+1}) - 1/(p - 1) = (1/p)(E(T_k) - 1/(p - 1)).

Solving the formula recursively, we obtain

E(T_k) = (1 - p^k) / (p^k (1 - p)).

(d). By (b), E(T̂_k) = 1/π(k) = 1/(p^k (1 - p)). Since T̂_0 follows a geometric distribution with parameter 1 - p (obtained from (a)), we have E(T̂_0) = 1/(1 - p). Then E(T_k) = E(T̂_k) - E(T̂_0) = (1 - p^k)/(p^k (1 - p)), recovering the formula from (c).
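As a final check, the closed form from (c) satisfies the recursion and matches the renewal identity from (d); p = 1/2 (a fair coin) is an illustrative choice, for which the formula reduces to the familiar 2^(k+1) - 2:

```python
# Exercise 1.10: E(T_k) = (1 - p^k) / (p^k (1 - p)) should satisfy the
# recursion E(T_{k+1}) = (E(T_k) + 1)/p from (c) and match part (d),
# E(T_k) = E(That_k) - E(That_0). p = 1/2 is an illustrative choice.
p = 0.5

def ET(k):
    """Expected time until k consecutive 1s."""
    return (1 - p ** k) / (p ** k * (1 - p))

assert abs(ET(1) - 1 / p) < 1e-12         # T_1 is geometric(p)
for k in range(1, 21):
    assert abs(ET(k + 1) - (ET(k) + 1) / p) < 1e-6
    hat_k = 1 / (p ** k * (1 - p))        # E(That_k) = 1/pi(k), part (b)
    hat_0 = 1 / (1 - p)                   # That_0 is geometric(1 - p)
    assert abs(ET(k) - (hat_k - hat_0)) < 1e-6

# For a fair coin the formula reduces to 2^(k+1) - 2.
assert ET(3) == 14.0
```

For example, the expected number of fair-coin tosses until three consecutive heads is 14.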


More information

Examples of Countable State Markov Chains Thursday, October 16, :12 PM

Examples of Countable State Markov Chains Thursday, October 16, :12 PM stochnotes101608 Page 1 Examples of Countable State Markov Chains Thursday, October 16, 2008 12:12 PM Homework 2 solutions will be posted later today. A couple of quick examples. Queueing model (without

More information

Math 180B Homework 4 Solutions

Math 180B Homework 4 Solutions Math 80B Homework 4 Solutions Note: We will make repeated use of the following result. Lemma. Let (X n ) be a time-homogeneous Markov chain with countable state space S, let A S, and let T = inf { n 0

More information

By Evan Chen OTIS, Internal Use

By Evan Chen OTIS, Internal Use Solutions Notes for DNY-NTCONSTRUCT Evan Chen January 17, 018 1 Solution Notes to TSTST 015/5 Let ϕ(n) denote the number of ositive integers less than n that are relatively rime to n. Prove that there

More information

Markov Processes Hamid R. Rabiee

Markov Processes Hamid R. Rabiee Markov Processes Hamid R. Rabiee Overview Markov Property Markov Chains Definition Stationary Property Paths in Markov Chains Classification of States Steady States in MCs. 2 Markov Property A discrete

More information

MATHEMATICAL MODELLING OF THE WIRELESS COMMUNICATION NETWORK

MATHEMATICAL MODELLING OF THE WIRELESS COMMUNICATION NETWORK Comuter Modelling and ew Technologies, 5, Vol.9, o., 3-39 Transort and Telecommunication Institute, Lomonosov, LV-9, Riga, Latvia MATHEMATICAL MODELLIG OF THE WIRELESS COMMUICATIO ETWORK M. KOPEETSK Deartment

More information

Functions. Given a function f: A B:

Functions. Given a function f: A B: Functions Given a function f: A B: We say f maps A to B or f is a mapping from A to B. A is called the domain of f. B is called the codomain of f. If f(a) = b, then b is called the image of a under f.

More information

Uniform Law on the Unit Sphere of a Banach Space

Uniform Law on the Unit Sphere of a Banach Space Uniform Law on the Unit Shere of a Banach Sace by Bernard Beauzamy Société de Calcul Mathématique SA Faubourg Saint Honoré 75008 Paris France Setember 008 Abstract We investigate the construction of a

More information

Math 242: Principles of Analysis Fall 2016 Homework 6 Part B Solutions. x 2 +2x = 15.

Math 242: Principles of Analysis Fall 2016 Homework 6 Part B Solutions. x 2 +2x = 15. Math 242: Principles of Analysis Fall 2016 Homework 6 Part B Solutions 1. Use the definition of a it to prove that x 2 +2x = 15. Solution. First write x 2 +2x 15 = x 3 x+5. Next let δ 1 = 1. If 0 < x 3

More information

Chapter 7. Markov chain background. 7.1 Finite state space

Chapter 7. Markov chain background. 7.1 Finite state space Chapter 7 Markov chain background A stochastic process is a family of random variables {X t } indexed by a varaible t which we will think of as time. Time can be discrete or continuous. We will only consider

More information

PETER J. GRABNER AND ARNOLD KNOPFMACHER

PETER J. GRABNER AND ARNOLD KNOPFMACHER ARITHMETIC AND METRIC PROPERTIES OF -ADIC ENGEL SERIES EXPANSIONS PETER J. GRABNER AND ARNOLD KNOPFMACHER Abstract. We derive a characterization of rational numbers in terms of their unique -adic Engel

More information

Dalal-Schmutz (2002) and Diaconis-Freedman (1999) 1. Random Compositions. Evangelos Kranakis School of Computer Science Carleton University

Dalal-Schmutz (2002) and Diaconis-Freedman (1999) 1. Random Compositions. Evangelos Kranakis School of Computer Science Carleton University Dalal-Schmutz (2002) and Diaconis-Freedman (1999) 1 Random Compositions Evangelos Kranakis School of Computer Science Carleton University Dalal-Schmutz (2002) and Diaconis-Freedman (1999) 2 Outline 1.

More information

MATH 829: Introduction to Data Mining and Analysis Consistency of Linear Regression

MATH 829: Introduction to Data Mining and Analysis Consistency of Linear Regression 1/9 MATH 829: Introduction to Data Mining and Analysis Consistency of Linear Regression Dominique Guillot Deartments of Mathematical Sciences University of Delaware February 15, 2016 Distribution of regression

More information

Sums of independent random variables

Sums of independent random variables 3 Sums of indeendent random variables This lecture collects a number of estimates for sums of indeendent random variables with values in a Banach sace E. We concentrate on sums of the form N γ nx n, where

More information

Chapter 2: Markov Chains and Queues in Discrete Time

Chapter 2: Markov Chains and Queues in Discrete Time Chapter 2: Markov Chains and Queues in Discrete Time L. Breuer University of Kent 1 Definition Let X n with n N 0 denote random variables on a discrete space E. The sequence X = (X n : n N 0 ) is called

More information

P (A G) dp G P (A G)

P (A G) dp G P (A G) First homework assignment. Due at 12:15 on 22 September 2016. Homework 1. We roll two dices. X is the result of one of them and Z the sum of the results. Find E [X Z. Homework 2. Let X be a r.v.. Assume

More information

Exact Simulation of the Stationary Distribution of M/G/c Queues

Exact Simulation of the Stationary Distribution of M/G/c Queues 1/36 Exact Simulation of the Stationary Distribution of M/G/c Queues Professor Karl Sigman Columbia University New York City USA Conference in Honor of Søren Asmussen Monday, August 1, 2011 Sandbjerg Estate

More information

12 Markov chains The Markov property

12 Markov chains The Markov property 12 Markov chains Summary. The chapter begins with an introduction to discrete-time Markov chains, and to the use of matrix products and linear algebra in their study. The concepts of recurrence and transience

More information

Ergodic Properties of Markov Processes

Ergodic Properties of Markov Processes Ergodic Properties of Markov Processes March 9, 2006 Martin Hairer Lecture given at The University of Warwick in Spring 2006 1 Introduction Markov processes describe the time-evolution of random systems

More information

Section 4.2 The Mean Value Theorem

Section 4.2 The Mean Value Theorem Section 4.2 The Mean Value Theorem Ruipeng Shen October 2nd Ruipeng Shen MATH 1ZA3 October 2nd 1 / 11 Rolle s Theorem Theorem (Rolle s Theorem) Let f (x) be a function that satisfies: 1. f is continuous

More information

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 MS&E 321 Spring 12-13 Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 Section 3: Regenerative Processes Contents 3.1 Regeneration: The Basic Idea............................... 1 3.2

More information

where x i is the ith coordinate of x R N. 1. Show that the following upper bound holds for the growth function of H:

where x i is the ith coordinate of x R N. 1. Show that the following upper bound holds for the growth function of H: Mehryar Mohri Foundations of Machine Learning Courant Institute of Mathematical Sciences Homework assignment 2 October 25, 2017 Due: November 08, 2017 A. Growth function Growth function of stum functions.

More information

IMPROVED BOUNDS IN THE SCALED ENFLO TYPE INEQUALITY FOR BANACH SPACES

IMPROVED BOUNDS IN THE SCALED ENFLO TYPE INEQUALITY FOR BANACH SPACES IMPROVED BOUNDS IN THE SCALED ENFLO TYPE INEQUALITY FOR BANACH SPACES OHAD GILADI AND ASSAF NAOR Abstract. It is shown that if (, ) is a Banach sace with Rademacher tye 1 then for every n N there exists

More information

CHAPTER 5 TANGENT VECTORS

CHAPTER 5 TANGENT VECTORS CHAPTER 5 TANGENT VECTORS In R n tangent vectors can be viewed from two ersectives (1) they cature the infinitesimal movement along a ath, the direction, and () they oerate on functions by directional

More information

MARKOV PROCESSES. Valerio Di Valerio

MARKOV PROCESSES. Valerio Di Valerio MARKOV PROCESSES Valerio Di Valerio Stochastic Process Definition: a stochastic process is a collection of random variables {X(t)} indexed by time t T Each X(t) X is a random variable that satisfy some

More information

International Competition in Mathematics for Universtiy Students in Plovdiv, Bulgaria 1994

International Competition in Mathematics for Universtiy Students in Plovdiv, Bulgaria 1994 International Competition in Mathematics for Universtiy Students in Plovdiv, Bulgaria 1994 1 PROBLEMS AND SOLUTIONS First day July 29, 1994 Problem 1. 13 points a Let A be a n n, n 2, symmetric, invertible

More information

1 Continuous-time chains, finite state space

1 Continuous-time chains, finite state space Université Paris Diderot 208 Markov chains Exercises 3 Continuous-time chains, finite state space Exercise Consider a continuous-time taking values in {, 2, 3}, with generator 2 2. 2 2 0. Draw the diagramm

More information

arxiv: v2 [math.ac] 5 Jan 2018

arxiv: v2 [math.ac] 5 Jan 2018 Random Monomial Ideals Jesús A. De Loera, Sonja Petrović, Lily Silverstein, Desina Stasi, Dane Wilburne arxiv:70.070v [math.ac] Jan 8 Abstract: Insired by the study of random grahs and simlicial comlexes,

More information

Lecture 20 : Markov Chains

Lecture 20 : Markov Chains CSCI 3560 Probability and Computing Instructor: Bogdan Chlebus Lecture 0 : Markov Chains We consider stochastic processes. A process represents a system that evolves through incremental changes called

More information

10.1 Sequences. Example: A sequence is a function f(n) whose domain is a subset of the integers. Notation: *Note: n = 0 vs. n = 1.

10.1 Sequences. Example: A sequence is a function f(n) whose domain is a subset of the integers. Notation: *Note: n = 0 vs. n = 1. 10.1 Sequences Example: A sequence is a function f(n) whose domain is a subset of the integers. Notation: *Note: n = 0 vs. n = 1 Examples: EX1: Find a formula for the general term a n of the sequence,

More information

BEST POSSIBLE DENSITIES OF DICKSON m-tuples, AS A CONSEQUENCE OF ZHANG-MAYNARD-TAO

BEST POSSIBLE DENSITIES OF DICKSON m-tuples, AS A CONSEQUENCE OF ZHANG-MAYNARD-TAO BEST POSSIBLE DENSITIES OF DICKSON m-tuples, AS A CONSEQUENCE OF ZHANG-MAYNARD-TAO ANDREW GRANVILLE, DANIEL M. KANE, DIMITRIS KOUKOULOPOULOS, AND ROBERT J. LEMKE OLIVER Abstract. We determine for what

More information

Applied Stochastic Processes

Applied Stochastic Processes Applied Stochastic Processes Jochen Geiger last update: July 18, 2007) Contents 1 Discrete Markov chains........................................ 1 1.1 Basic properties and examples................................

More information

Marcinkiewicz-Zygmund Type Law of Large Numbers for Double Arrays of Random Elements in Banach Spaces

Marcinkiewicz-Zygmund Type Law of Large Numbers for Double Arrays of Random Elements in Banach Spaces ISSN 995-0802, Lobachevskii Journal of Mathematics, 2009, Vol. 30, No. 4,. 337 346. c Pleiades Publishing, Ltd., 2009. Marcinkiewicz-Zygmund Tye Law of Large Numbers for Double Arrays of Random Elements

More information

Interpolating between random walk and rotor walk

Interpolating between random walk and rotor walk Interolating between random walk and rotor walk Wilfried Huss *, Lionel Levine, Ecaterina Sava-Huss Aril 7, 2016 Abstract We introduce a family of stochastic rocesses on the integers, deending on a arameter

More information

Mathematical Methods for Computer Science

Mathematical Methods for Computer Science Mathematical Methods for Computer Science Computer Science Tripos, Part IB Michaelmas Term 2016/17 R.J. Gibbens Problem sheets for Probability methods William Gates Building 15 JJ Thomson Avenue Cambridge

More information

GOOD MODELS FOR CUBIC SURFACES. 1. Introduction

GOOD MODELS FOR CUBIC SURFACES. 1. Introduction GOOD MODELS FOR CUBIC SURFACES ANDREAS-STEPHAN ELSENHANS Abstract. This article describes an algorithm for finding a model of a hyersurface with small coefficients. It is shown that the aroach works in

More information

Interlude: Practice Final

Interlude: Practice Final 8 POISSON PROCESS 08 Interlude: Practice Final This practice exam covers the material from the chapters 9 through 8. Give yourself 0 minutes to solve the six problems, which you may assume have equal point

More information

Spring 2012 Math 541B Exam 1

Spring 2012 Math 541B Exam 1 Spring 2012 Math 541B Exam 1 1. A sample of size n is drawn without replacement from an urn containing N balls, m of which are red and N m are black; the balls are otherwise indistinguishable. Let X denote

More information

2. Transience and Recurrence

2. Transience and Recurrence Virtual Laboratories > 15. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 2. Transience and Recurrence The study of Markov chains, particularly the limiting behavior, depends critically on the random times

More information

Markov Chains (Part 3)

Markov Chains (Part 3) Markov Chains (Part 3) State Classification Markov Chains - State Classification Accessibility State j is accessible from state i if p ij (n) > for some n>=, meaning that starting at state i, there is

More information