Excited random walks in cookie environments with Markovian cookie stacks


1 Excited random walks in cookie environments with Markovian cookie stacks
Jonathon Peterson, Department of Mathematics, Purdue University
Joint work with Elena Kosygina
December 5, 2015

2-11 Excited (Cookie) Random Walks
Cookie environment:
- M cookies at each site, with strengths p_1, p_2, ..., p_M ∈ (0, 1).
- Eating a cookie induces a drift for the next step: on the j-th visit to a site (j ≤ M) the walk steps right with probability p_j and left with probability 1 − p_j; once the stack at a site is exhausted, the walk steps from that site like a simple symmetric random walk.
[animation: the walk moves along Z, eating the cookies of strengths p_1, p_2, p_3 at the sites it visits]

12 Excited (Cookie) Random Walks: random iid cookie environments
ω_x(j) = strength of the j-th cookie at site x.
The cookie environment ω = {ω_x} is iid over sites; cookies within a stack may be dependent.
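The model is straightforward to simulate. Below is a minimal sketch for a deterministic M-cookie stack (the function name `erw` and the stack (0.9, 0.9, 0.9) are illustrative choices, not from the talk):

```python
import random

def erw(p_stack, n_steps, seed=0):
    """Excited random walk on Z with the deterministic cookie stack
    p_stack at every site; after the stack at a site is exhausted the
    walk steps from there like a simple symmetric walk (strength 1/2)."""
    rng = random.Random(seed)
    eaten = {}                     # site -> number of cookies already eaten there
    x = 0
    for _ in range(n_steps):
        j = eaten.get(x, 0)
        p = p_stack[j] if j < len(p_stack) else 0.5
        eaten[x] = j + 1
        x += 1 if rng.random() < p else -1
    return x

p_stack = (0.9, 0.9, 0.9)
delta = sum(2 * p - 1 for p in p_stack)   # average drift per site, here 2.4
x_n = erw(p_stack, 10_000, seed=1)
print(delta, x_n)
```

Since δ = 2.4 > 2 here, the theorems on the next slides say this walk is ballistic to the right.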

13 Recurrence/Transience and LLN
Average drift per site: δ = E[ Σ_{j=1}^M (2 ω_0(j) − 1) ]
(for deterministic cookie environments, δ = Σ_{j=1}^M (2 p_j − 1)).

Theorem (Zerner '05, Zerner & Kosygina '08). The cookie RW is recurrent if and only if δ ∈ [−1, 1].

14 Recurrence/Transience and LLN
Theorem (Basdevant & Singh '07, Zerner & Kosygina '08). The limit v_0 = lim_{n→∞} X_n/n exists, and v_0 > 0 if and only if δ > 2.
No explicit formula is known for v_0.
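The two theorems above classify the walk by δ alone; a small helper makes the case split explicit (the function names are illustrative):

```python
def delta(p_stack):
    """Average drift stored at each site, for a deterministic cookie stack."""
    return sum(2 * p - 1 for p in p_stack)

def regime(d):
    """Classification from the recurrence and LLN theorems; by the
    left-right symmetry of the model, |delta| decides the regime."""
    if abs(d) <= 1:
        return "recurrent"
    if abs(d) <= 2:
        return "transient, zero speed"
    return "transient, positive speed"

print(regime(delta((0.7, 0.7))))       # delta = 0.8 -> recurrent
print(regime(delta((0.9, 0.9))))       # delta = 1.6 -> transient, zero speed
print(regime(delta((0.9, 0.9, 0.9))))  # delta = 2.4 -> transient, positive speed
```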

15 Limiting Distributions
Theorem (Basdevant & Singh '08, Kosygina & Zerner '08, Kosygina & Mountford '11). Excited random walks have the following limiting distributions:
- δ ∈ (1, 2): re-scaling X_n / n^{δ/2}, limit ((δ/2)-stable)^{−δ/2}
- δ ∈ (2, 4): re-scaling (X_n − n v_0) / n^{2/δ}, limit totally asymmetric (δ/2)-stable
- δ > 4: re-scaling (X_n − n v_0) / (A√n), limit Gaussian
Results are also known for other values of δ.
Note: similar scaling limits hold for RWRE.

16 Periodic cookie stacks
Periodic cookie sequence: p_1, p_2, ..., p_M, p_1, p_2, ... (the stack never runs out).
Average value: p̄ = (1/M) Σ_{j=1}^M p_j.

17 Periodic cookie stacks
Questions: recurrence/transience? limiting speed? limiting distributions?

18 Periodic cookie stacks - recurrence/transience
Critical case: p̄ = (1/M) Σ_{j=1}^M p_j = 1/2.

19 Periodic cookie stacks - recurrence/transience
Critical case p̄ = 1/2. Define

θ = [ Σ_{j=1}^M Σ_{i=1}^j (1 − p_j)(2 p_i − 1) ] / [ 2 Σ_{j=1}^M p_j(1 − p_j) ],
θ̃ = [ Σ_{j=1}^M Σ_{i=1}^j p_j(1 − 2 p_i) ] / [ 2 Σ_{j=1}^M p_j(1 − p_j) ].

Theorem (Kozma, Shinkar, Orenshtein). For excited random walks with periodic cookie stacks and p̄ = 1/2:
- θ > 1 implies X_n → +∞
- θ̃ > 1 implies X_n → −∞
- max{θ, θ̃} ≤ 1 implies X_n is recurrent.
Note: θ + θ̃ < 1, so the first two cases cannot hold simultaneously.
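These parameters are easy to evaluate numerically. The sketch below implements the two double sums as reconstructed on this slide and checks the reflection symmetry θ̃(p_1, ..., p_M) = θ(1 − p_1, ..., 1 − p_M), which holds term by term:

```python
def theta_pair(p):
    """theta and theta-tilde for a periodic cookie sequence p_1..p_M
    with average value 1/2 (the critical case)."""
    M = len(p)
    denom = 2 * sum(pj * (1 - pj) for pj in p)
    num = sum((1 - p[j]) * (2 * p[i] - 1)
              for j in range(M) for i in range(j + 1))
    num_t = sum(p[j] * (1 - 2 * p[i])
                for j in range(M) for i in range(j + 1))
    return num / denom, num_t / denom

p = (0.8, 0.2)                 # p-bar = 1/2, a critical example
th, th_t = theta_pair(p)
print(th, th_t)                # approx 0.1875 and -0.75 for this example

# reflecting the walk (p_j -> 1 - p_j) swaps theta and theta-tilde
th2, th2_t = theta_pair(tuple(1 - q for q in p))
assert abs(th2 - th_t) < 1e-12 and abs(th2_t - th) < 1e-12
```

For this example max{θ, θ̃} = 0.1875 ≤ 1, so the walk is recurrent.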

20 Periodic cookie stacks - LLN
The limiting speed exists (Kosygina & Zerner '13): v_0 = lim_{n→∞} X_n/n.

21 Periodic cookie stacks - LLN
Theorem (Kosygina & P '15). For excited random walks with periodic cookie stacks and p̄ = 1/2:
- If θ > 2 then v_0 > 0.
- If θ̃ > 2 then v_0 < 0.
- If max{θ, θ̃} ≤ 2 then v_0 = 0.

22 Periodic cookie stacks - limiting distributions
Limiting distributions when transient to the right (θ > 1).

Theorem (Kosygina & P '15). For excited random walks with periodic cookie stacks and p̄ = 1/2, the following limiting distributions hold:
- θ ∈ (1, 2): re-scaling X_n / n^{θ/2}, limit ((θ/2)-stable)^{−θ/2}
- θ ∈ (2, 4): re-scaling (X_n − n v_0) / n^{2/θ}, limit totally asymmetric (θ/2)-stable
- θ > 4: re-scaling (X_n − n v_0) / (A√n), limit Gaussian
Results are also known for θ = 1 and θ = 2.

23-26 Markovian cookie stacks
- Markov chain {R_j}_{j≥1} on a finite state space {1, 2, ..., N}.
- Function p : {1, 2, ..., N} → (0, 1).
- Cookie stack at x = 0: ω_0(j) = p(R_j).
- Cookie stacks are iid spatially.
[diagram: a two-state example with cookie strengths p and 1 − p, the chain switching states with probability α]
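Sampling one cookie stack from this construction takes only a few lines. A sketch (the function name and the two-state parameters below are illustrative):

```python
import random

def markov_cookie_stack(K, eta, p, depth, seed=0):
    """Sample the first `depth` cookie strengths at a single site:
    run the chain R_1, R_2, ... with initial law eta and transition
    matrix K, and record omega(j) = p[R_j]."""
    rng = random.Random(seed)
    def draw(w):                           # sample an index from weights w
        u, acc = rng.random(), 0.0
        for s, ws in enumerate(w):
            acc += ws
            if u < acc:
                return s
        return len(w) - 1
    r = draw(eta)
    stack = [p[r]]
    for _ in range(depth - 1):
        r = draw(K[r])
        stack.append(p[r])
    return stack

# two-type example: strengths p and 1-p, switch states with probability alpha
alpha, strength = 0.25, 0.8
K = [[1 - alpha, alpha], [alpha, 1 - alpha]]
stack = markov_cookie_stack(K, eta=[1.0, 0.0], p=[strength, 1 - strength], depth=12)
print(stack)
```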

27-29 Markovian cookie stacks - results
Markov chain input: transition matrix K, initial distribution η, function p : {1, 2, ..., N} → (0, 1).
Critical case: p̄ = Σ_{j=1}^N π_j p(j) = 1/2 (π is the stationary distribution of the Markov chain).

Theorem (Kosygina & P '15). The previous results for periodic cookie stacks extend to Markovian cookie stacks. In particular, in the critical case:
- Recurrence ⟺ max{θ, θ̃} ≤ 1
- Positive speed ⟺ θ > 2
- CLT ⟺ θ > 4 or θ̃ > 4
with explicit formulas for θ = θ(K, η, p) and θ̃ = θ̃(K, η, p).

30-31 Explicit formulas for θ and θ̃
Input parameters: K, η, p.
µ = stationary distribution of the Markov chain (a row vector).
D_p = diagonal matrix with diagonal entries p = (p(1), ..., p(N)).

r = (I − K + 2(1 − p)µ(I − D_p)K)^{−1}(2p − 1)
ν = 2 + 4 µ D_p K r
θ = 2 η r / ν,  θ̃ = θ(K, η, 1 − p)

(Here 1 denotes the all-ones vector, so (1 − p)µ(I − D_p)K is a rank-one matrix, η r is a scalar, and 2p − 1 is the vector of cookie drifts.)
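As a sanity check on these formulas as reconstructed here, the sketch below evaluates them for a 2-state chain (2×2 linear algebra done by hand to stay dependency-free) and recovers θ = (2p − 1)/α for the geometric cookie stacks of a later slide, where state 2 is an absorbing placebo state of strength 1/2:

```python
def theta_markov(K, eta, p):
    """theta from the matrix formulas on this slide, for a 2-state chain."""
    # stationary distribution mu of K (2-state formula via switch probabilities)
    a, b = K[0][1], K[1][0]
    mu = (b / (a + b), a / (a + b))
    # row vector mu (I - D_p) K
    row = [sum(mu[i] * (1 - p[i]) * K[i][j] for i in range(2)) for j in range(2)]
    # A = I - K + 2 (1 - p) mu (I - D_p) K   (the second term is rank one)
    A = [[(1 if i == j else 0) - K[i][j] + 2 * (1 - p[i]) * row[j]
          for j in range(2)] for i in range(2)]
    # solve A r = 2p - 1 with the explicit 2x2 inverse
    v = [2 * p[0] - 1, 2 * p[1] - 1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    r = [(A[1][1] * v[0] - A[0][1] * v[1]) / det,
         (A[0][0] * v[1] - A[1][0] * v[0]) / det]
    mDpK = [sum(mu[i] * p[i] * K[i][j] for i in range(2)) for j in range(2)]
    nu = 2 + 4 * sum(mDpK[j] * r[j] for j in range(2))
    return 2 * sum(eta[j] * r[j] for j in range(2)) / nu

# geometric cookie stacks: strength p until an alpha-coin ends the stack
p_, alpha = 0.8, 0.3
K = [[1 - alpha, alpha], [0.0, 1.0]]    # state 2 (placebo, strength 1/2) absorbing
th = theta_markov(K, (1.0, 0.0), (p_, 0.5))
print(th, (2 * p_ - 1) / alpha)         # both approx 2.0
```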

32-33 Markov cookie stacks - examples
- Periodic cookie stacks.
- Deterministic M-cookie stacks: in this case θ = δ and θ̃ = −δ.

34-35 Markov cookie stacks - examples
Two-type, critical: cookie strengths p and 1 − p; the chain switches state with probability α.

θ = (2p − 1)((2α − 1)p − α) / (4(2α − 1)(p − 1)p + α − 1)

[phase diagram in the (α, p) plane showing the regions θ < 1, 1 < θ < 2, 2 < θ < 4, θ > 4]

36-37 Markov cookie stacks - examples
Geometric cookie stacks: cookies of strength p; after each cookie the stack ends with probability α (all later cookies are placebos of strength 1/2).

θ = (2p − 1)/α

Note that also δ = E[ Σ_{j≥1} (2 ω_0(j) − 1) ] = (2p − 1)/α.
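A quick Monte Carlo check of δ = (2p − 1)/α for geometric stacks (a sketch; the sample size and parameter values are arbitrary):

```python
import random

def geometric_stack_drift(p, alpha, n_samples=100_000, seed=0):
    """Monte Carlo estimate of delta = E[sum_j (2 omega_0(j) - 1)] for a
    geometric cookie stack: strength-p cookies until the stack ends (a
    geometric number of them), placebo cookies (zero drift) afterwards."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        n = 1                        # at least one strength-p cookie
        while rng.random() >= alpha:
            n += 1
        total += n * (2 * p - 1)     # placebo cookies contribute 0
    return total / n_samples

p, alpha = 0.8, 0.3
est = geometric_stack_drift(p, alpha)
print(est, (2 * p - 1) / alpha)      # estimate vs. exact value 2.0
```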

38-48 Backward branching-like process
D_i^n = number of steps from i to i − 1 (down-steps from site i) before T_n, the first time the walk hits n.
T_n = n + 2 Σ_{i<n} D_i^n.

[worked example on a path with n = 5: D_{−1}^5 = 0, D_0^5 = 2, D_1^5 = 1, D_2^5 = 2, D_3^5 = 4, D_4^5 = 1, D_5^5 = 0]
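The identity T_n = n + 2 Σ_{i<n} D_i^n is deterministic bookkeeping valid for any nearest-neighbour path (up-steps minus down-steps equals n, up-steps plus down-steps equals T_n). The sketch below checks it on a simulated path; the generating walk is an arbitrary biased walk, not the ERW itself:

```python
import random

def down_steps_before_hitting(path, n):
    """D_i^n = number of steps i -> i-1 before the walk first hits n.
    Returns (T_n, {i: D_i^n})."""
    D = {}
    for t, x in enumerate(path):
        if x == n:
            return t, D              # t = T_n, the first hitting time of n
        if path[t + 1] == x - 1:     # a down-step from site x
            D[x] = D.get(x, 0) + 1
    raise ValueError("walk never reached n")

# a right-biased nearest-neighbour path that hits 5
rng = random.Random(3)
path = [0]
while path[-1] != 5:
    path.append(path[-1] + (1 if rng.random() < 0.7 else -1))

T5, D = down_steps_before_hitting(path, 5)
assert T5 == 5 + 2 * sum(D.values())   # the identity from the slide
print(T5, D)
```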

49-55 Branching-like processes - LLN
Backward branching process: a Markov chain {V_i}_{i≥0} such that
(V_0, V_1, ..., V_n) =(Law)= (D_n^n, D_{n−1}^n, ..., D_0^n).

T_n/n ≈ (1/n)( n + 2 Σ_{i=0}^{n−1} D_i^n ) =(Law)= 1 + (2/n) Σ_{i=1}^n V_i
(≈ because the down-steps from sites i < 0 are neglected).

LLN for the excited random walk:
- X_n/n → v ⟺ T_n/n → 1/v
- T_n/n → 1 + 2 E[V_0]
- Ballistic: v > 0 ⟺ E[V_0] < ∞.

56-60 Branching-like processes - LLN
Ballistic: v > 0 ⟺ E[V_0] < ∞.
Regeneration time: σ = inf{i > 0 : V_i = 0}.
E[V_0] = E[ Σ_{i=1}^σ V_i ] / E[σ].

Theorem (Kosygina & P '15). There exist constants C_1, C_2 > 0 such that
P(σ > n) ∼ C_1 n^{−θ}  and  P( Σ_{i=1}^σ V_i > n ) ∼ C_2 n^{−θ/2}.

Key idea: V_i behaves like a squared Bessel process of dimension 2(1 − θ).

61-63 Branching-like processes - comparison with a squared Bessel process
Mean: |E[V_1 − V_0 | V_0 = n] − α| ≤ C e^{−cn}, where α = α(K, p, η).
Variance: |Var(V_1 | V_0 = n) − ν n| ≤ C√n, where ν = ν(K, p).

{V_{tn}/n}_{t≥0} is similar to the diffusion dY_t = α dt + √(ν Y_t) dW_t, and (4/ν)Y_t is a squared Bessel process of dimension 4α/ν =: 2(1 − θ).
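The comparison diffusion is simple to simulate with an Euler scheme (a sketch; the values of α, ν, the step size, and the reflection at 0 are illustrative choices):

```python
import math
import random

def simulate_diffusion(alpha, nu, y0, t_max, dt=1e-3, seed=0):
    """Euler-Maruyama scheme for dY_t = alpha dt + sqrt(nu * Y_t) dW_t,
    the diffusion approximating V_{tn}/n; clamped at 0 to keep Y >= 0."""
    rng = random.Random(seed)
    y = y0
    for _ in range(int(t_max / dt)):
        y += alpha * dt + math.sqrt(nu * y * dt) * rng.gauss(0.0, 1.0)
        y = max(y, 0.0)
    return y

# (4/nu) Y_t is a squared Bessel process of dimension 4*alpha/nu = 2(1 - theta)
alpha, nu = 0.5, 4.0
theta = 1 - 2 * alpha / nu          # = 0.75 for these values
y = simulate_diffusion(alpha, nu, y0=1.0, t_max=1.0)
print(theta, y)
```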


More information

The range of tree-indexed random walk

The range of tree-indexed random walk The range of tree-indexed random walk Jean-François Le Gall, Shen Lin Institut universitaire de France et Université Paris-Sud Orsay Erdös Centennial Conference July 2013 Jean-François Le Gall (Université

More information

Averaging principle and Shape Theorem for growth with memory

Averaging principle and Shape Theorem for growth with memory Averaging principle and Shape Theorem for growth with memory Amir Dembo (Stanford) Paris, June 19, 2018 Joint work with Ruojun Huang, Pablo Groisman, and Vladas Sidoravicius Laplacian growth & motion in

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Stationary independent increments. 1. Random changes of the form X t+h X t fixed h > 0 are called increments of the process.

Stationary independent increments. 1. Random changes of the form X t+h X t fixed h > 0 are called increments of the process. Stationary independent increments 1. Random changes of the form X t+h X t fixed h > 0 are called increments of the process. 2. If each set of increments, corresponding to non-overlapping collection of

More information

STA205 Probability: Week 8 R. Wolpert

STA205 Probability: Week 8 R. Wolpert INFINITE COIN-TOSS AND THE LAWS OF LARGE NUMBERS The traditional interpretation of the probability of an event E is its asymptotic frequency: the limit as n of the fraction of n repeated, similar, and

More information

Lecture 7. µ(x)f(x). When µ is a probability measure, we say µ is a stationary distribution.

Lecture 7. µ(x)f(x). When µ is a probability measure, we say µ is a stationary distribution. Lecture 7 1 Stationary measures of a Markov chain We now study the long time behavior of a Markov Chain: in particular, the existence and uniqueness of stationary measures, and the convergence of the distribution

More information

Irreducibility. Irreducible. every state can be reached from every other state For any i,j, exist an m 0, such that. Absorbing state: p jj =1

Irreducibility. Irreducible. every state can be reached from every other state For any i,j, exist an m 0, such that. Absorbing state: p jj =1 Irreducibility Irreducible every state can be reached from every other state For any i,j, exist an m 0, such that i,j are communicate, if the above condition is valid Irreducible: all states are communicate

More information

Probability Models in Electrical and Computer Engineering Mathematical models as tools in analysis and design Deterministic models Probability models

Probability Models in Electrical and Computer Engineering Mathematical models as tools in analysis and design Deterministic models Probability models Probability Models in Electrical and Computer Engineering Mathematical models as tools in analysis and design Deterministic models Probability models Statistical regularity Properties of relative frequency

More information

18.600: Lecture 32 Markov Chains

18.600: Lecture 32 Markov Chains 18.600: Lecture 32 Markov Chains Scott Sheffield MIT Outline Markov chains Examples Ergodicity and stationarity Outline Markov chains Examples Ergodicity and stationarity Markov chains Consider a sequence

More information

Concentration inequalities for Feynman-Kac particle models. P. Del Moral. INRIA Bordeaux & IMB & CMAP X. Journées MAS 2012, SMAI Clermond-Ferrand

Concentration inequalities for Feynman-Kac particle models. P. Del Moral. INRIA Bordeaux & IMB & CMAP X. Journées MAS 2012, SMAI Clermond-Ferrand Concentration inequalities for Feynman-Kac particle models P. Del Moral INRIA Bordeaux & IMB & CMAP X Journées MAS 2012, SMAI Clermond-Ferrand Some hyper-refs Feynman-Kac formulae, Genealogical & Interacting

More information

MATH 564/STAT 555 Applied Stochastic Processes Homework 2, September 18, 2015 Due September 30, 2015

MATH 564/STAT 555 Applied Stochastic Processes Homework 2, September 18, 2015 Due September 30, 2015 ID NAME SCORE MATH 56/STAT 555 Applied Stochastic Processes Homework 2, September 8, 205 Due September 30, 205 The generating function of a sequence a n n 0 is defined as As : a ns n for all s 0 for which

More information

P i [B k ] = lim. n=1 p(n) ii <. n=1. V i :=

P i [B k ] = lim. n=1 p(n) ii <. n=1. V i := 2.7. Recurrence and transience Consider a Markov chain {X n : n N 0 } on state space E with transition matrix P. Definition 2.7.1. A state i E is called recurrent if P i [X n = i for infinitely many n]

More information

1. Stochastic Process

1. Stochastic Process HETERGENEITY IN QUANTITATIVE MACROECONOMICS @ TSE OCTOBER 17, 216 STOCHASTIC CALCULUS BASICS SANG YOON (TIM) LEE Very simple notes (need to add references). It is NOT meant to be a substitute for a real

More information

Review. DS GA 1002 Statistical and Mathematical Models. Carlos Fernandez-Granda

Review. DS GA 1002 Statistical and Mathematical Models.   Carlos Fernandez-Granda Review DS GA 1002 Statistical and Mathematical Models http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall16 Carlos Fernandez-Granda Probability and statistics Probability: Framework for dealing with

More information

The Distribution of Mixing Times in Markov Chains

The Distribution of Mixing Times in Markov Chains The Distribution of Mixing Times in Markov Chains Jeffrey J. Hunter School of Computing & Mathematical Sciences, Auckland University of Technology, Auckland, New Zealand December 2010 Abstract The distribution

More information

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 MS&E 321 Spring 12-13 Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 Section 3: Regenerative Processes Contents 3.1 Regeneration: The Basic Idea............................... 1 3.2

More information

Average-cost temporal difference learning and adaptive control variates

Average-cost temporal difference learning and adaptive control variates Average-cost temporal difference learning and adaptive control variates Sean Meyn Department of ECE and the Coordinated Science Laboratory Joint work with S. Mannor, McGill V. Tadic, Sheffield S. Henderson,

More information

State-space Model. Eduardo Rossi University of Pavia. November Rossi State-space Model Financial Econometrics / 49

State-space Model. Eduardo Rossi University of Pavia. November Rossi State-space Model Financial Econometrics / 49 State-space Model Eduardo Rossi University of Pavia November 2013 Rossi State-space Model Financial Econometrics - 2013 1 / 49 Outline 1 Introduction 2 The Kalman filter 3 Forecast errors 4 State smoothing

More information

4 Branching Processes

4 Branching Processes 4 Branching Processes Organise by generations: Discrete time. If P(no offspring) 0 there is a probability that the process will die out. Let X = number of offspring of an individual p(x) = P(X = x) = offspring

More information

4.7.1 Computing a stationary distribution

4.7.1 Computing a stationary distribution At a high-level our interest in the rest of this section will be to understand the limiting distribution, when it exists and how to compute it To compute it, we will try to reason about when the limiting

More information

Problem Points S C O R E Total: 120

Problem Points S C O R E Total: 120 PSTAT 160 A Final Exam December 10, 2015 Name Student ID # Problem Points S C O R E 1 10 2 10 3 10 4 10 5 10 6 10 7 10 8 10 9 10 10 10 11 10 12 10 Total: 120 1. (10 points) Take a Markov chain with the

More information

Section Notes 9. Midterm 2 Review. Applied Math / Engineering Sciences 121. Week of December 3, 2018

Section Notes 9. Midterm 2 Review. Applied Math / Engineering Sciences 121. Week of December 3, 2018 Section Notes 9 Midterm 2 Review Applied Math / Engineering Sciences 121 Week of December 3, 2018 The following list of topics is an overview of the material that was covered in the lectures and sections

More information

µ n 1 (v )z n P (v, )

µ n 1 (v )z n P (v, ) Plan More Examples (Countable-state case). Questions 1. Extended Examples 2. Ideas and Results Next Time: General-state Markov Chains Homework 4 typo Unless otherwise noted, let X be an irreducible, aperiodic

More information

Lecture 12. F o s, (1.1) F t := s>t

Lecture 12. F o s, (1.1) F t := s>t Lecture 12 1 Brownian motion: the Markov property Let C := C(0, ), R) be the space of continuous functions mapping from 0, ) to R, in which a Brownian motion (B t ) t 0 almost surely takes its value. Let

More information

Introduction to MCMC. DB Breakfast 09/30/2011 Guozhang Wang

Introduction to MCMC. DB Breakfast 09/30/2011 Guozhang Wang Introduction to MCMC DB Breakfast 09/30/2011 Guozhang Wang Motivation: Statistical Inference Joint Distribution Sleeps Well Playground Sunny Bike Ride Pleasant dinner Productive day Posterior Estimation

More information

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015 Part IA Probability Theorems Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.

More information

JONATHON PETERSON AND GENNADY SAMORODNITSKY

JONATHON PETERSON AND GENNADY SAMORODNITSKY WEAK WEAK QUENCHED LIMITS FOR THE PATH-VALUED PROCESSES OF HITTING TIMES AND POSITIONS OF A TRANSIENT, ONE-DIMENSIONAL RANDOM WALK IN A RANDOM ENVIRONMENT JONATHON PETERSON AND GENNADY SAMORODNITSKY Abstract.

More information

Random Toeplitz Matrices

Random Toeplitz Matrices Arnab Sen University of Minnesota Conference on Limits Theorems in Probability, IISc January 11, 2013 Joint work with Bálint Virág What are Toeplitz matrices? a0 a 1 a 2... a1 a0 a 1... a2 a1 a0... a (n

More information

Two Classical models of Quantum Dynamics

Two Classical models of Quantum Dynamics Two Classical models of Quantum Dynamics Tom Spencer Institute for Advanced Study Princeton, NJ May 1, 2018 Outline: Review of dynamics and localization for Random Schrödinger H on l 2 (Z d ) H = + λv

More information

July 31, 2009 / Ben Kedem Symposium

July 31, 2009 / Ben Kedem Symposium ing The s ing The Department of Statistics North Carolina State University July 31, 2009 / Ben Kedem Symposium Outline ing The s 1 2 s 3 4 5 Ben Kedem ing The s Ben has made many contributions to time

More information

Renewal processes and Queues

Renewal processes and Queues Renewal processes and Queues Last updated by Serik Sagitov: May 6, 213 Abstract This Stochastic Processes course is based on the book Probabilities and Random Processes by Geoffrey Grimmett and David Stirzaker.

More information

1.2. Markov Chains. Before we define Markov process, we must define stochastic processes.

1.2. Markov Chains. Before we define Markov process, we must define stochastic processes. 1. LECTURE 1: APRIL 3, 2012 1.1. Motivating Remarks: Differential Equations. In the deterministic world, a standard tool used for modeling the evolution of a system is a differential equation. Such an

More information

Markov Model. Model representing the different resident states of a system, and the transitions between the different states

Markov Model. Model representing the different resident states of a system, and the transitions between the different states Markov Model Model representing the different resident states of a system, and the transitions between the different states (applicable to repairable, as well as non-repairable systems) System behavior

More information

When is a Markov chain regenerative?

When is a Markov chain regenerative? When is a Markov chain regenerative? Krishna B. Athreya and Vivekananda Roy Iowa tate University Ames, Iowa, 50011, UA Abstract A sequence of random variables {X n } n 0 is called regenerative if it can

More information

Why Hyperbolic Symmetry?

Why Hyperbolic Symmetry? Why Hyperbolic Symmetry? Consider the very trivial case when N = 1 and H = h is a real Gaussian variable of unit variance. To simplify notation set E ɛ = 0 iɛ and z, w C = G(E ɛ ) 2 h = [(h iɛ)(h + iɛ)]

More information

An Introduction to Stochastic Modeling

An Introduction to Stochastic Modeling F An Introduction to Stochastic Modeling Fourth Edition Mark A. Pinsky Department of Mathematics Northwestern University Evanston, Illinois Samuel Karlin Department of Mathematics Stanford University Stanford,

More information

Resistance Growth of Branching Random Networks

Resistance Growth of Branching Random Networks Peking University Oct.25, 2018, Chengdu Joint work with Yueyun Hu (U. Paris 13) and Shen Lin (U. Paris 6), supported by NSFC Grant No. 11528101 (2016-2017) for Research Cooperation with Oversea Investigators

More information

Stochastic Processes

Stochastic Processes Stochastic Processes 8.445 MIT, fall 20 Mid Term Exam Solutions October 27, 20 Your Name: Alberto De Sole Exercise Max Grade Grade 5 5 2 5 5 3 5 5 4 5 5 5 5 5 6 5 5 Total 30 30 Problem :. True / False

More information

Model Counting for Logical Theories

Model Counting for Logical Theories Model Counting for Logical Theories Wednesday Dmitry Chistikov Rayna Dimitrova Department of Computer Science University of Oxford, UK Max Planck Institute for Software Systems (MPI-SWS) Kaiserslautern

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 7

MATH 56A: STOCHASTIC PROCESSES CHAPTER 7 MATH 56A: STOCHASTIC PROCESSES CHAPTER 7 7. Reversal This chapter talks about time reversal. A Markov process is a state X t which changes with time. If we run time backwards what does it look like? 7.1.

More information

Asymptotic behaviour of randomly reflecting billiards in unbounded tubular domains arxiv: v2 [math.pr] 12 May 2008

Asymptotic behaviour of randomly reflecting billiards in unbounded tubular domains arxiv: v2 [math.pr] 12 May 2008 Asymptotic behaviour of randomly reflecting billiards in unbounded tubular domains arxiv:0802.1865v2 [math.pr] 12 May 2008 M.V. Menshikov,1 M. Vachkovskaia,2 A.R. Wade,3 28th May 2008 1 Department of Mathematical

More information

Homework # , Spring Due 14 May Convergence of the empirical CDF, uniform samples

Homework # , Spring Due 14 May Convergence of the empirical CDF, uniform samples Homework #3 36-754, Spring 27 Due 14 May 27 1 Convergence of the empirical CDF, uniform samples In this problem and the next, X i are IID samples on the real line, with cumulative distribution function

More information

Econ 424 Time Series Concepts

Econ 424 Time Series Concepts Econ 424 Time Series Concepts Eric Zivot January 20 2015 Time Series Processes Stochastic (Random) Process { 1 2 +1 } = { } = sequence of random variables indexed by time Observed time series of length

More information

Stochastic Processes (Week 6)

Stochastic Processes (Week 6) Stochastic Processes (Week 6) October 30th, 2014 1 Discrete-time Finite Markov Chains 2 Countable Markov Chains 3 Continuous-Time Markov Chains 3.1 Poisson Process 3.2 Finite State Space 3.2.1 Kolmogrov

More information