Strong approximation for additive functionals of geometrically ergodic Markov chains
1 Strong approximation for additive functionals of geometrically ergodic Markov chains

Florence Merlevède, joint work with E. Rio
Université Paris-Est-Marne-la-Vallée (UPEM)

Cincinnati Symposium on Probability Theory and Applications, September 2014
2 Strong approximation in the iid setting (1)

Assume that $(X_i)_{i \ge 1}$ is a sequence of iid centered real-valued random variables with a finite second moment $\sigma^2$, and define $S_n = X_1 + X_2 + \cdots + X_n$. The ASIP says that a sequence $(Z_i)_{i \ge 1}$ of iid standard Gaussian variables may be constructed in such a way that, setting $B_k = Z_1 + \cdots + Z_k$,
$$\sup_{1 \le k \le n} |S_k - \sigma B_k| = o(b_n) \quad \text{almost surely},$$
where $b_n = (n \log \log n)^{1/2}$ (Strassen (1964)).

When $(X_i)_{i \ge 1}$ is assumed in addition to be in $\mathbb{L}^p$ with $p > 2$, then we can obtain rates in the ASIP: $b_n = n^{1/p}$ (see Major (1976) for $p \in \, ]2,3]$ and Komlós, Major and Tusnády for $p > 3$).
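As a loose numerical illustration of the statement (not the KMT block construction itself, which couples partial sums dyadically), one can couple each $X_i$ with a Gaussian through a common uniform via the quantile transform and compare the two partial-sum paths on Strassen's scale $b_n$. The centered-exponential law and all sizes below are ad hoc choices for the sketch.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
n = 20_000
U = rng.uniform(size=n)

# Centered exponential via the quantile transform: F^{-1}(u) = -log(1-u) - 1
# (mean 0, variance 1, so sigma = 1 here).
X = -np.log1p(-U) - 1.0
# Standard Gaussians built from the same uniforms (quantile coupling).
Z = np.array([NormalDist().inv_cdf(u) for u in U])

S = np.cumsum(X)                         # partial sums S_k
T = np.cumsum(Z)                         # coupled Gaussian partial sums
k = np.arange(1, n + 1)
b = np.sqrt(k * np.log(np.log(k + 3)))   # Strassen normalization b_k

corr = float(np.corrcoef(X, Z)[0, 1])
ratio = float(np.max(np.abs(S - T) / b))
print(round(corr, 2), round(ratio, 2))   # corr close to 0.9; ratio stays bounded
```

The pairwise coupling is far weaker than KMT's, but it already keeps $\sup_k |S_k - T_k|/b_k$ of order one on this sample.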
3 Strong approximation in the iid setting (2)

When $(X_i)_{i \ge 1}$ is assumed to have a finite moment generating function in a neighborhood of 0, the famous Komlós–Major–Tusnády theorem (1975 and 1976) says that one can construct a standard Brownian motion $(B_t)_{t \ge 0}$ in such a way that
$$\mathbb{P}\Big( \sup_{k \le n} |S_k - \sigma B_k| \ge x + c \log n \Big) \le a \exp(-bx), \qquad (1)$$
where $a$, $b$ and $c$ are positive constants depending only on the law of $X_1$. In particular, (1) implies that
$$\sup_{1 \le k \le n} |S_k - \sigma B_k| = O(\log n) \quad \text{almost surely}.$$
It follows from the Erdős–Rényi law of large numbers (1970) that this rate cannot be improved.
4 Strong approximation in the multivariate iid setting

Einmahl (1989) proved that the rate $O((\log n)^2)$ can be obtained in the almost sure approximation of partial sums of iid random vectors with a finite moment generating function in a neighborhood of 0 by Gaussian partial sums.

Zaitsev (1998) removed the extra logarithmic factor and obtained the KMT inequality in the case of iid random vectors.

What about KMT-type results in the dependent setting?
5 An extension to functions of iid (Berkes, Liu and Wu (2014))

Let $(X_k)_{k \in \mathbb{Z}}$ be a stationary process defined as follows. Let $(\varepsilon_k)_{k \in \mathbb{Z}}$ be a sequence of iid r.v.'s and $g : \mathbb{R}^{\mathbb{Z}} \to \mathbb{R}$ a measurable function such that, for all $k \in \mathbb{Z}$, $X_k = g(\xi_k)$ with $\xi_k := (\ldots, \varepsilon_{k-1}, \varepsilon_k)$ is well defined, $\mathbb{E}(g(\xi_k)) = 0$ and $\|g(\xi_k)\|_p < \infty$ for some $p > 2$.

Let $(\varepsilon'_k)_{k \in \mathbb{Z}}$ be an independent copy of $(\varepsilon_k)_{k \in \mathbb{Z}}$. For any integer $k \ge 0$, let $\xi^*_k = (\xi_{-1}, \varepsilon'_0, \varepsilon_1, \ldots, \varepsilon_{k-1}, \varepsilon_k)$ and $X^*_k = g(\xi^*_k)$. For $k \ge 0$, let $\delta(k)$ be the physical dependence measure introduced by Wu (2005): $\delta(k) = \|X_k - X^*_k\|_p$.

Berkes, Liu and Wu (2014): the almost sure strong approximation holds with the rate $o(n^{1/p})$ and $\sigma^2 = \sum_{k \in \mathbb{Z}} \mathbb{E}(X_0 X_k)$, provided that $\delta(k) = O(k^{-\alpha})$ with $\alpha > 2$ if $p \in \, ]2,4]$ and $\alpha > f(p)$ if $p > 4$, where
$$f(p) = 1 + \frac{p^2 - 4 + (p-2)\sqrt{p^2 + 20p + 4}}{8p}.$$
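For a concrete (hypothetical) choice of $g$, a causal linear function of the innovations, $\delta(k)$ has a closed form: swapping $\varepsilon_0$ for its copy changes only one term of the sum, so $\delta(k)$ decays geometrically and the condition $\delta(k) = O(k^{-\alpha})$ holds for every $\alpha$. A minimal sketch, with Gaussian innovations and coefficients $\theta^i$ chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 3.0
theta = 0.5     # toy linear process X_k = sum_{i>=0} theta**i * eps_{k-i}

# Swapping eps_0 for an independent copy eps_0' changes X_k by
# theta**k * (eps_0 - eps_0'), so delta(k) = theta**k * ||eps_0 - eps_0'||_p.
d = rng.standard_normal(200_000) - rng.standard_normal(200_000)
base = float(np.mean(np.abs(d) ** p) ** (1.0 / p))   # ||eps_0 - eps_0'||_3, Monte Carlo
delta = [theta ** k * base for k in range(6)]
print([round(x, 2) for x in delta])    # geometric decay in k
```

Since $\varepsilon_0 - \varepsilon'_0 \sim \mathcal{N}(0,2)$, the base norm is $\sqrt{2}\,(\mathbb{E}|Z|^3)^{1/3} \approx 1.65$.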
6 What about strong approximation in the Markov setting?

Let $(\xi_n)$ be an irreducible and aperiodic Harris recurrent Markov chain on a countably generated measurable state space $(E, \mathcal{B})$. Let $P(x, \cdot)$ be the transition probability. We assume that the chain is positive recurrent, and we let $\pi$ be its (unique) invariant probability measure.

Then there exist a positive integer $m$, a measurable function $h$ with values in $[0,1]$ with $\pi(h) > 0$, and a probability measure $\nu$ on $E$, such that
$$P^m(x, A) \ge h(x)\,\nu(A).$$
We assume that $m = 1$.

The Nummelin splitting technique (1984) allows one to extend the Markov chain in such a way that the extended chain has a recurrent atom. This allows regeneration.
7 The Nummelin splitting technique (1)

Let $Q(x, \cdot)$ be the sub-stochastic kernel defined by $Q = P - h \otimes \nu$. The minorization condition allows one to define an extended chain $(\bar\xi_n, U_n)$ in $E \times [0,1]$ as follows. At time 0, $U_0$ is independent of $\bar\xi_0$ and has the uniform distribution over $[0,1]$; for any $n \in \mathbb{N}$,
$$\mathbb{P}(\bar\xi_{n+1} \in A \mid \bar\xi_n = x, U_n = y) = \mathbf{1}_{y \le h(x)}\, \nu(A) + \mathbf{1}_{y > h(x)}\, \frac{Q(x, A)}{1 - h(x)} =: \bar P((x,y), A),$$
and $U_{n+1}$ is independent of $(\bar\xi_{n+1}, \bar\xi_n, U_n)$ and has the uniform distribution over $[0,1]$.

With transition kernel $\bar P \otimes \lambda$ ($\lambda$ the Lebesgue measure on $[0,1]$), the chain $(\bar\xi_n, U_n)$ is irreducible, aperiodic and Harris recurrent, with unique invariant probability measure $\pi \otimes \lambda$. Moreover $(\bar\xi_n)$ is a homogeneous Markov chain with transition probability $P(x, \cdot)$.
8 Regeneration

Define the set $C$ in $E \times [0,1]$ by $C = \{(x,y) \in E \times [0,1] : y \le h(x)\}$. For any $(x,y) \in C$, $\mathbb{P}(\bar\xi_{n+1} \in A \mid \bar\xi_n = x, U_n = y) = \nu(A)$. Since $\pi \otimes \lambda(C) = \pi(h) > 0$, the set $C$ is an atom of the extended chain, and it can be proven that this atom is recurrent.

Define $T_0 = \inf\{n \ge 1 : U_n \le h(\bar\xi_n)\}$ and $T_k = \inf\{n > T_{k-1} : U_n \le h(\bar\xi_n)\}$, and the return times $(\tau_k)_{k>0}$ by $\tau_k = T_k - T_{k-1}$. Note that $T_0$ is a.s. finite and the return times $\tau_k$ are iid and integrable.

Let $S_n(f) = \sum_{k=1}^n f(\bar\xi_k)$. The random vectors $(\tau_k, S_{T_k}(f) - S_{T_{k-1}}(f))_{k>0}$ are iid and their common law is the law of $(\tau_1, S_{T_1}(f) - S_{T_0}(f))$ under $\mathbb{P}_C$.
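The splitting and the resulting regeneration blocks are easy to simulate. The sketch below uses a hand-picked two-state kernel with a constant minorization function $h \equiv 0.7$ and $\nu = (4/7, 3/7)$ (both ad hoc choices satisfying $P(x,\cdot) \ge h\,\nu(\cdot)$); the return-time mean should then be close to $1/\pi(h) = 1/0.7$, and block sums of a $\pi$-centered $f$ average to zero.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-state chain with a Doeblin-type minorization P(x,.) >= h * nu(.).
P = np.array([[0.7, 0.3], [0.4, 0.6]])
h = 0.7
nu = np.array([4 / 7, 3 / 7])
R = (P - h * nu) / (1 - h)        # normalized residual kernel Q(x,.)/(1 - h)

n = 50_000
xi = np.zeros(n, dtype=int)
regen = np.zeros(n, dtype=bool)
for t in range(n - 1):
    regen[t] = rng.uniform() <= h             # U_t <= h: the atom is visited
    row = nu if regen[t] else R[xi[t]]        # regenerate from nu, else residual
    xi[t + 1] = rng.choice(2, p=row)

T = np.flatnonzero(regen)                     # regeneration epochs T_k
tau = np.diff(T)                              # iid return times, mean 1/pi(h)
f_vals = (xi == 1) - 3 / 7                    # f with pi(f) = 0 (pi = (4/7, 3/7))
S = np.cumsum(f_vals)
blocks = np.diff(S[T])                        # iid block sums S_{T_k} - S_{T_{k-1}}
print(round(float(tau.mean()), 2), round(float(blocks.mean()), 2))
```

Because $h$ is constant here, the $\tau_k$ are exactly geometric with success probability $0.7$; for a non-constant $h$ the same code works with `h` replaced by `h[xi[t]]`.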
9 Csáki and Csörgő (1995)

If the r.v.'s $S_{T_k}(f) - S_{T_{k-1}}(f)$ have a finite moment of order $p$ for some $p \in \, ]2,4]$ and if $\mathbb{E}(\tau_k^{p/2}) < \infty$, then one can construct a standard Wiener process $(W_t)_{t \ge 0}$ such that
$$S_n(f) - n\pi(f) - \sigma(f) W_n = O(a_n) \quad \text{a.s.},$$
with $a_n = n^{1/p} (\log n)^{1/2} (\log\log n)^{\alpha}$ and $\sigma^2(f) = \lim_n \frac{1}{n} \operatorname{Var} S_n(f)$.

In particular, the above result holds for any bounded function $f$ provided only that the return times have a finite moment of order $p$.

The proof is based on the regeneration properties of the chain, on the Skorohod embedding, and on an application of the results of KMT (1975) to the partial sums of the iid random variables $S_{T_{k+1}}(f) - S_{T_k}(f)$, $k > 0$.
10 On the proof of Csáki and Csörgő

For any $i \ge 1$, let $X_i = \sum_{l=T_{i-1}+1}^{T_i} f(\bar\xi_l)$. Since the $(X_i)_{i>0}$ are iid, if $\mathbb{E}|X_1|^{2+\delta} < \infty$, there exists a standard Brownian motion $(W(t))_{t>0}$ such that
$$\sup_{k \le n} \Big| \sum_{i=1}^k X_i - \sigma(f) W(k) \Big| = o(n^{1/(2+\delta)}) \quad \text{a.s.}$$
Let $\rho(n) = \max\{k : T_k \le n\}$. If $\mathbb{E}|\tau_1|^q < \infty$ for some $1 \le q \le 2$, then
$$\rho(n) = \frac{n}{\mathbb{E}(\tau_1)} + O(n^{1/q} (\log\log n)^{\alpha}) \quad \text{a.s.},$$
$$\sum_{i=1}^{\rho(n)} X_i - S_n(f) = o(n^{1/(2+\delta)}) \quad \text{a.s.},$$
$$\Big| W(\rho(n)) - W\Big(\frac{n}{\mathbb{E}(\tau_1)}\Big) \Big| = O\big(n^{1/(2q)} (\log n)^{1/2} (\log\log n)^{\alpha}\big) \quad \text{a.s.}$$
With this method, there is no way to do better than $O(n^{1/(2q)} (\log n)^{1/2})$ ($1 \le q \le 2$), even if $f$ is bounded and $\tau_1$ has an exponential moment.
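The renewal inversion step $\rho(n) = \max\{k : T_k \le n\} \approx n/\mathbb{E}(\tau_1)$ is easy to check numerically; the geometric return-time law and the constants below are arbitrary stand-ins for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
mu = 1.5                  # E(tau_1), arbitrary for the sketch
n = 100_000

tau = rng.geometric(1 / mu, size=200_000)        # iid return times (all moments finite)
T = np.cumsum(tau)                               # regeneration epochs T_k
rho = int(np.searchsorted(T, n, side="right"))   # rho(n) = max{k : T_k <= n}
print(rho, round(n / mu))                        # rho(n) close to n / E(tau_1)
```

The fluctuation of $\rho(n)$ around $n/\mathbb{E}(\tau_1)$ is of order $\sqrt{n}$ here, consistent with the $O(n^{1/q})$ term for $q = 2$.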
11 Link between the moments of the return times and the coefficients of absolute regularity

For positive measures $\mu$ and $\nu$, let $\|\mu - \nu\|$ denote the total variation of $\mu - \nu$. Set
$$\beta_n = \int \|P^n(x, \cdot) - \pi\| \, d\pi(x).$$
The coefficients $\beta_n$ are called the absolute regularity (or $\beta$-mixing) coefficients of the chain.

Bolthausen: for any $p > 1$,
$$\mathbb{E}(\tau_1^p) = \mathbb{E}_C(T_0^p) < \infty \quad \text{if and only if} \quad \sum_{n>0} n^{p-2} \beta_n < \infty.$$
Hence, according to the strong approximation result of M.–Rio (2012), if $f$ is bounded and $\mathbb{E}(\tau_1^p) < \infty$ for some $p \in \, ]2,3[$, then the strong approximation result holds with the rate $o(n^{1/p} (\log n)^{(p-2)/(2p)})$.
12 Main result: M.–Rio (2014)

Assume that $\beta_n = O(\rho^n)$ for some real $\rho$ with $0 < \rho < 1$. If $f$ is bounded and such that $\pi(f) = 0$, then there exist a standard Wiener process $(W_t)_{t \ge 0}$ and positive constants $a$, $b$ and $c$ depending on $f$ and on the transition probability $P(x, \cdot)$ such that, for any positive real $x$ and any integer $n \ge 2$,
$$\mathbb{P}_\pi\Big( \sup_{k \le n} |S_k(f) - \sigma(f) W_k| \ge c \log n + x \Big) \le a \exp(-bx),$$
where $\sigma^2(f) = \pi(f^2) + 2\sum_{n>0} \pi(f P^n f) > 0$. Therefore
$$\sup_{k \le n} |S_k(f) - \sigma(f) W_k| = O(\log n) \quad \text{a.s.}$$
13 Some comments on the condition

The condition $\beta_n = O(\rho^n)$ for some real $\rho$ with $0 < \rho < 1$ is equivalent to saying that the Markov chain is geometrically ergodic (see Nummelin and Tuominen (1982)).

If the Markov chain is geometrically ergodic, then there exists a positive real $\delta$ such that $\mathbb{E}(e^{t\tau_1}) < \infty$ and $\mathbb{E}_\pi(e^{tT_0}) < \infty$ for any $t \le \delta$.

Let $\mu$ be any law on $E$ such that $\int \|P^n(x,\cdot) - \pi\| \, d\mu(x) = O(r^n)$ for some $r < 1$. Then $\mathbb{P}_\mu(T_0 > n)$ decreases exponentially fast (see Nummelin and Tuominen (1982)), and the result extends to the Markov chain $(\xi_n)$ with transition probability $P$ and initial law $\mu$.
14 Some insights into the proof

Let $S_n(f) = \sum_{l=1}^n f(\bar\xi_l)$ and $X_i = \sum_{l=T_{i-1}+1}^{T_i} f(\bar\xi_l)$. Recall that the $(X_i, \tau_i)_{i>0}$ are iid. Let $\alpha$ be the unique real number such that
$$\operatorname{Cov}(X_k - \alpha\tau_k, \tau_k) = 0.$$
The random vectors $(X_i - \alpha\tau_i, \tau_i)_{i>0}$ of $\mathbb{R}^2$ are then iid and their marginals are uncorrelated. By the multidimensional strong approximation theorem of Zaitsev (1998), there exist two independent standard Brownian motions $(B_t)_t$ and $(\tilde B_t)_t$ such that
$$S_{T_n}(f) - \alpha(T_n - n\mathbb{E}(\tau_1)) - v B_n = O(\log n) \quad \text{a.s.} \qquad (1)$$
and
$$T_n - n\mathbb{E}(\tau_1) - \tilde v \tilde B_n = O(\log n) \quad \text{a.s.} \qquad (2)$$
where $v^2 = \operatorname{Var}(X_1 - \alpha\tau_1)$ and $\tilde v^2 = \operatorname{Var}(\tau_1)$. We associate to $T_n$ a Poisson process via (2).
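The decorrelation condition $\operatorname{Cov}(X_k - \alpha\tau_k, \tau_k) = 0$ pins down $\alpha = \operatorname{Cov}(X_1, \tau_1)/\operatorname{Var}(\tau_1)$, an ordinary regression coefficient. A quick sketch with an invented joint law for the blocks:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Toy correlated pairs (X_i, tau_i); the joint law is made up for the sketch.
tau = rng.geometric(0.5, size=n).astype(float)    # mean 2, variance 2
X = 0.8 * (tau - 2.0) + rng.standard_normal(n)

# Cov(X - alpha*tau, tau) = 0 forces alpha = Cov(X, tau) / Var(tau).
alpha = np.cov(X, tau)[0, 1] / np.var(tau, ddof=1)
resid_cov = np.cov(X - alpha * tau, tau)[0, 1]
print(round(float(alpha), 2), abs(resid_cov) < 1e-8)
```

With matching degrees of freedom in `cov` and `var`, the residual covariance vanishes up to floating-point error, which is the whole point of choosing $\alpha$ this way.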
15 Let $\lambda = (\mathbb{E}(\tau_1))^2 / \operatorname{Var}(\tau_1)$ and $\gamma = \mathbb{E}(\tau_1)/\lambda$. Via KMT, one can construct a Poisson process $N$ (depending on $\tilde B$) with parameter $\lambda$ in such a way that
$$\gamma N(n) - n\mathbb{E}(\tau_1) - \tilde v \tilde B_n = O(\log n) \quad \text{a.s.}$$
Therefore, via (2), $T_n - \gamma N(n) = O(\log n)$ a.s., and then, via (1),
$$S_{\gamma N(n)}(f) - \alpha\gamma N(n) + \alpha n\mathbb{E}(\tau_1) - v B_n = O(\log n) \quad \text{a.s.} \qquad (3)$$
The processes $(B_t)_t$ and $(N_t)_t$ appearing here are independent. Via (3), setting $N^{-1}(k) = \inf\{t > 0 : N(t) \ge k\} = \sum_{l=1}^k E_l$ (the $E_l$ being the iid exponential inter-arrival times of $N$),
$$S_n(f) = v B_{N^{-1}(n/\gamma)} + \alpha n - \alpha\mathbb{E}(\tau_1) N^{-1}(n/\gamma) + O(\log n) \quad \text{a.s.}$$
If $v = 0$, the proof is finished. Indeed, by KMT, there exists a Brownian motion $W$ (depending on $N$) such that
$$\alpha n - \alpha\mathbb{E}(\tau_1) N^{-1}(n/\gamma) = W_n + O(\log n) \quad \text{a.s.}$$
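The inverse process used above, $N^{-1}(k) = \inf\{t > 0 : N(t) \ge k\}$, is just the $k$-th arrival time of the Poisson process, i.e. a sum of iid exponential inter-arrival times. A minimal numerical check (the intensity is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
lam = 2.0       # Poisson intensity, arbitrary for the sketch
k = 50_000

# N^{-1}(k) is the k-th arrival time: the sum of k iid Exp(lam) variables E_l.
E = rng.exponential(1 / lam, size=k)
Ninv = np.cumsum(E)                     # Ninv[k-1] = N^{-1}(k)
print(round(float(Ninv[-1] / k), 2))    # close to E(E_1) = 1/lam
```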
16 If $v \ne 0$ and $\alpha = 0$, we have $S_n(f) = v B_{N^{-1}(n/\gamma)} + O(\log n)$ a.s. Using Csörgő, Deheuvels and Horváth (1987) ($B$ and $N$ are independent), one can construct a Brownian motion $\tilde W$ (depending on $N$) such that
$$v B_{N^{-1}(n/\gamma)} - \tilde W_n = O(\log n) \quad \text{a.s.} \qquad (*)$$
which leads to the expected result when $\alpha = 0$.

However, in the case $\alpha \ne 0$ and $v \ne 0$, we still have
$$S_n(f) = v B_{N^{-1}(n/\gamma)} + \alpha n - \alpha\mathbb{E}(\tau_1) N^{-1}(n/\gamma) + O(\log n) \quad \text{a.s.},$$
and then $S_n(f) = \tilde W_n + W_n + O(\log n)$ a.s. Since $\tilde W$ and $W$ are not independent, we cannot conclude. Can we construct $\tilde W_n$ independent of $N$ (and hence of $W$) such that $(*)$ still holds?
17 The key lemma

Let $(B_t)_{t \ge 0}$ be a standard Brownian motion on the line and $\{N(t) : t \ge 0\}$ a Poisson process with parameter $\lambda > 0$, independent of $(B_t)_{t \ge 0}$. Then one can construct a standard Brownian motion $(W_t)_{t \ge 0}$ independent of $N(\cdot)$ and such that, for any integer $n \ge 2$ and any positive real $x$,
$$\mathbb{P}\Big( \sup_{k \le n} \Big| B_k - \tfrac{1}{\sqrt{\lambda}} W_{N(k)} \Big| \ge C \log n + x \Big) \le A \exp(-Bx),$$
where $A$, $B$ and $C$ are positive constants depending only on $\lambda$.

$(W_t)_{t \ge 0}$ may be constructed from the processes $(B_t)_{t \ge 0}$, $N(\cdot)$ and some auxiliary atomless random variable $\delta$ independent of the $\sigma$-field generated by $(B_t)_{t \ge 0}$ and $N(\cdot)$.
18 Construction of W (1/3)

The process $W$ will be constructed from $B$ by expanding $B$ in the Haar basis. For $j \in \mathbb{Z}$ and $k \in \mathbb{N}$, let
$$e_{j,k} = 2^{-j/2} \big( \mathbf{1}_{]k2^j, (k+\frac12)2^j]} - \mathbf{1}_{](k+\frac12)2^j, (k+1)2^j]} \big)$$
and
$$Y_{j,k} = \int_0^\infty e_{j,k}(t) \, dB(t) = 2^{-j/2} \big( 2B_{(k+\frac12)2^j} - B_{k2^j} - B_{(k+1)2^j} \big).$$
Then, since $(e_{j,k})_{j \in \mathbb{Z}, k \ge 0}$ is a total orthonormal system of $L^2(\mathbb{R}_+)$, for any $t \in \mathbb{R}_+$,
$$B_t = \sum_{j \in \mathbb{Z}} \sum_{k \ge 0} \Big( \int_0^t e_{j,k}(s) \, ds \Big) Y_{j,k}.$$
To construct $W$, we modify the $e_{j,k}$.
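The orthonormality of the Haar system means the coefficients $Y_{j,k}$ read off a Brownian path are iid standard Gaussians. This can be checked numerically at a single scale ($j = 0$, so $2^{-j/2} = 1$) on a path sampled at half-integers; grid and sample sizes are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(6)
M = 50_000

# Brownian path sampled at the half-integers 0, 1/2, 1, 3/2, ...
inc = rng.standard_normal(2 * M) * np.sqrt(0.5)
B = np.concatenate([[0.0], np.cumsum(inc)])    # B[i] plays the role of B_{i/2}

# Haar coefficients at scale j = 0:
# Y_{0,k} = 2 B_{k+1/2} - B_k - B_{k+1}, expected to be iid N(0,1).
Y = 2 * B[1::2][:M] - B[0:-1:2][:M] - B[2::2][:M]
print(round(float(Y.mean()), 2), round(float(Y.var()), 2))   # near 0 and 1
```

Each $Y_{0,k}$ is a difference of two independent $\mathcal{N}(0, 1/2)$ increments, hence $\mathcal{N}(0,1)$, and coefficients over disjoint dyadic intervals are independent.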
19 Construction of W (2/3)

Let $E_j = \{k \in \mathbb{N} : N(k2^j) < N((k+\tfrac12)2^j) < N((k+1)2^j)\}$. For $j \in \mathbb{Z}$ and $k \in E_j$, let
$$f_{j,k} = c_{j,k}^{-1/2} \big( b_{j,k}\, \mathbf{1}_{]N(k2^j), N((k+\frac12)2^j)]} - a_{j,k}\, \mathbf{1}_{]N((k+\frac12)2^j), N((k+1)2^j)]} \big),$$
where
$$a_{j,k} = N((k+\tfrac12)2^j) - N(k2^j), \quad b_{j,k} = N((k+1)2^j) - N((k+\tfrac12)2^j), \quad c_{j,k} = a_{j,k} b_{j,k} (a_{j,k} + b_{j,k}).$$
$(f_{j,k})_{j \in \mathbb{Z}, k \in E_j}$ is an orthonormal system whose closed linear span contains the vectors $\mathbf{1}_{]0, N(t)]}$ for $t \in \mathbb{R}_+$, and hence the vectors $\mathbf{1}_{]0, l]}$ for $l \in \mathbb{N}$. Setting $f_{j,k} = 0$ if $k \notin E_j$, we define
$$W_l = \sum_{j \in \mathbb{Z}} \sum_{k \ge 0} \Big( \int_0^l f_{j,k}(t) \, dt \Big) Y_{j,k} \quad \text{for any } l \in \mathbb{N}, \qquad W_0 = 0.$$
20 Construction of W (3/3)

Recall that $W_l = \sum_{j \in \mathbb{Z}} \sum_{k \ge 0} \big( \int_0^l f_{j,k}(t) \, dt \big) Y_{j,k}$ for any $l \in \mathbb{N}$ and $W_0 = 0$.

Conditionally on $N$, $(f_{j,k})_{j \in \mathbb{Z}, k \in E_j}$ is an orthonormal system and $(Y_{j,k})$ is a sequence of iid $\mathcal{N}(0,1)$ random variables, independent of $N$. Hence, conditionally on $N$, $(W_l)_{l \ge 0}$ is a Gaussian sequence such that $\operatorname{Cov}(W_l, W_m) = l \wedge m$. Therefore this Gaussian sequence is independent of $N$.

By the Skorohod embedding theorem, we can extend it to a standard Wiener process $(W_t)_t$ still independent of $N$.
21 Take $\lambda = 1$. For any $l \in \mathbb{N}$,
$$B_l - W_{N(l)} = \sum_{j \ge 0} \sum_{k \ge 0} \Big( \int_0^l e_{j,k}(t) \, dt - \int_0^{N(l)} f_{j,k}(t) \, dt \Big) Y_{j,k}.$$
If $l \notin \, ]k2^j, (k+1)2^j[$, then
$$\int_0^l e_{j,k}(t) \, dt = \int_0^{N(l)} f_{j,k}(t) \, dt = 0.$$
Hence, setting $l_j = [l 2^{-j}]$,
$$B_l - W_{N(l)} = \sum_j \Big( \int_0^l e_{j,l_j}(t) \, dt - \int_0^{N(l)} f_{j,l_j}(t) \, dt \Big) Y_{j,l_j}.$$
Let $t_j = (l - l_j 2^j)/2^j$. We then have
$$B_l - W_{N(l)} = \sum_{j \ge 0} U_{j,l_j} Y_{j,l_j} \mathbf{1}_{t_j \in ]0,1/2]} + \sum_{j \ge 0} V_{j,l_j} Y_{j,l_j} \mathbf{1}_{t_j \in ]1/2,1[}.$$
22 The rest of the proof consists in showing that
$$\mathbb{P}\Big( \sum_{j \ge 0} U_{j,l_j}^2 \mathbf{1}_{t_j \in ]0,1/2]} \ge c_1 \log n + c_2 x \Big) \le c_3 e^{-c_4 x},$$
and the same with $\sum_{j \ge 0} V_{j,l_j}^2 \mathbf{1}_{t_j \in ]1/2,1[}$.

Let $\Pi_{j,k} = N((k+1)2^j) - N(k2^j)$. We use in particular the fact that the conditional law of $\Pi_{j-1,2k}$ given $\Pi_{j,k}$ is $\mathcal{B}(\Pi_{j,k}, 1/2)$. This property allows one to construct a sequence $(\xi_{j,k})_{j,k}$ of iid $\mathcal{N}(0,1)$ random variables such that
$$\Big| \Pi_{j-1,2k} - \frac12 \Pi_{j,k} \Big| \le 1 + \frac12 \Pi_{j,k}^{1/2} |\xi_{j,k}|.$$
This comes from Tusnády's lemma: denoting by $\Phi_m$ the d.f. of $\mathcal{B}(m, 1/2)$ and by $\Phi$ the d.f. of $\mathcal{N}(0,1)$,
$$\Big| \Phi_m^{-1}(\Phi(\xi)) - \frac{m}{2} \Big| \le 1 + \frac{\sqrt{m}}{2} |\xi|.$$
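The binomial-splitting fact used here follows from independence of Poisson counts on disjoint intervals: given the total count $\Pi_{j,k} = m$ over a dyadic interval, the count on its left half is $\mathcal{B}(m, 1/2)$. A quick simulation check, with an arbitrary intensity:

```python
import numpy as np

rng = np.random.default_rng(7)
mu = 4.0        # mean count per half-interval, arbitrary for the sketch
n = 200_000

# Poisson counts on the two halves of a dyadic interval are independent,
# so given the total Pi_{j,k} = m, the left count Pi_{j-1,2k} is B(m, 1/2).
left = rng.poisson(mu, size=n)
total = left + rng.poisson(mu, size=n)

m = 8
sel = left[total == m]
print(round(float(sel.mean()), 2), round(float(sel.var()), 2))   # near m/2 and m/4
```

The conditional mean $m/2$ and variance $m/4$ match $\mathcal{B}(m, 1/2)$, which is exactly what the Tusnády coupling quantifies at the Gaussian scale.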
23 Thank you for your attention!
More informationEstimating the efficient price from the order flow : a Brownian Cox process approach
Estimating the efficient price from the order flow : a Brownian Cox process approach - Sylvain DELARE Université Paris Diderot, LPMA) - Christian ROBER Université Lyon 1, Laboratoire SAF) - Mathieu ROSENBAUM
More informationA mathematical framework for Exact Milestoning
A mathematical framework for Exact Milestoning David Aristoff (joint work with Juan M. Bello-Rivas and Ron Elber) Colorado State University July 2015 D. Aristoff (Colorado State University) July 2015 1
More informationSolution: (Course X071570: Stochastic Processes)
Solution I (Course X071570: Stochastic Processes) October 24, 2013 Exercise 1.1: Find all functions f from the integers to the real numbers satisfying f(n) = 1 2 f(n + 1) + 1 f(n 1) 1. 2 A secial solution
More informationSTOCHASTIC PROCESSES Basic notions
J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving
More informationModelling Dependent Credit Risks
Modelling Dependent Credit Risks Filip Lindskog, RiskLab, ETH Zürich 30 November 2000 Home page:http://www.math.ethz.ch/ lindskog E-mail:lindskog@math.ethz.ch RiskLab:http://www.risklab.ch Modelling Dependent
More informationDepartment of Econometrics and Business Statistics
ISSN 440-77X Australia Department of Econometrics and Business Statistics http://www.buseco.monash.edu.au/depts/ebs/pubs/wpapers/ Nonlinear Regression with Harris Recurrent Markov Chains Degui Li, Dag
More informationOn the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables
On the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables Deli Li 1, Yongcheng Qi, and Andrew Rosalsky 3 1 Department of Mathematical Sciences, Lakehead University,
More informationMarkov Chain Monte Carlo (MCMC)
Markov Chain Monte Carlo (MCMC Dependent Sampling Suppose we wish to sample from a density π, and we can evaluate π as a function but have no means to directly generate a sample. Rejection sampling can
More information25.1 Ergodicity and Metric Transitivity
Chapter 25 Ergodicity This lecture explains what it means for a process to be ergodic or metrically transitive, gives a few characterizes of these properties (especially for AMS processes), and deduces
More informationLesson 4: Stationary stochastic processes
Dipartimento di Ingegneria e Scienze dell Informazione e Matematica Università dell Aquila, umberto.triacca@univaq.it Stationary stochastic processes Stationarity is a rather intuitive concept, it means
More informationOn the local time of random walk on the 2-dimensional comb
Stochastic Processes and their Applications 2 (20) 290 34 www.elsevier.com/locate/spa On the local time of random walk on the 2-dimensional comb Endre Csáki a,, Miklós Csörgő b, Antónia Földes c, Pál Révész
More informationNon-Essential Uses of Probability in Analysis Part IV Efficient Markovian Couplings. Krzysztof Burdzy University of Washington
Non-Essential Uses of Probability in Analysis Part IV Efficient Markovian Couplings Krzysztof Burdzy University of Washington 1 Review See B and Kendall (2000) for more details. See also the unpublished
More informationUPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES
Applied Probability Trust 7 May 22 UPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES HAMED AMINI, AND MARC LELARGE, ENS-INRIA Abstract Upper deviation results are obtained for the split time of a
More informationLecture 21 Representations of Martingales
Lecture 21: Representations of Martingales 1 of 11 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 21 Representations of Martingales Right-continuous inverses Let
More informationMarkov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains
Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time
More informationarxiv:math/ v1 [math.pr] 24 Apr 2003
ICM 2002 Vol. III 1 3 arxiv:math/0304373v1 [math.pr] 24 Apr 2003 Estimates for the Strong Approximation in Multidimensional Central Limit Theorem A. Yu. Zaitsev Abstract In a recent paper the author obtained
More informationPoisson Processes. Stochastic Processes. Feb UC3M
Poisson Processes Stochastic Processes UC3M Feb. 2012 Exponential random variables A random variable T has exponential distribution with rate λ > 0 if its probability density function can been written
More informationA review of Continuous Time MC STA 624, Spring 2015
A review of Continuous Time MC STA 624, Spring 2015 Ruriko Yoshida Dept. of Statistics University of Kentucky polytopes.net STA 624 1 Continuous Time Markov chains Definition A continuous time stochastic
More informationCDA5530: Performance Models of Computers and Networks. Chapter 3: Review of Practical
CDA5530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic ti process X = {X(t), t T} is a collection of random variables (rvs); one
More informationChange detection problems in branching processes
Change detection problems in branching processes Outline of Ph.D. thesis by Tamás T. Szabó Thesis advisor: Professor Gyula Pap Doctoral School of Mathematics and Computer Science Bolyai Institute, University
More informationMarkov processes Course note 2. Martingale problems, recurrence properties of discrete time chains.
Institute for Applied Mathematics WS17/18 Massimiliano Gubinelli Markov processes Course note 2. Martingale problems, recurrence properties of discrete time chains. [version 1, 2017.11.1] We introduce
More informationRANDOM WALKS AND PERCOLATION IN A HIERARCHICAL LATTICE
RANDOM WALKS AND PERCOLATION IN A HIERARCHICAL LATTICE Luis Gorostiza Departamento de Matemáticas, CINVESTAV October 2017 Luis GorostizaDepartamento de Matemáticas, RANDOM CINVESTAV WALKS Febrero AND 2017
More informationLet (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t
2.2 Filtrations Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of σ algebras {F t } such that F t F and F t F t+1 for all t = 0, 1,.... In continuous time, the second condition
More informationA generalization of Strassen s functional LIL
A generalization of Strassen s functional LIL Uwe Einmahl Departement Wiskunde Vrije Universiteit Brussel Pleinlaan 2 B-1050 Brussel, Belgium E-mail: ueinmahl@vub.ac.be Abstract Let X 1, X 2,... be a sequence
More informationLecture Notes 3 Convergence (Chapter 5)
Lecture Notes 3 Convergence (Chapter 5) 1 Convergence of Random Variables Let X 1, X 2,... be a sequence of random variables and let X be another random variable. Let F n denote the cdf of X n and let
More information