Kesten's power law for difference equations with coefficients induced by chains of infinite order
Arka P. Ghosh 1,2, Diana Hay 1, Vivek Hirpara 1,3, Reza Rastegar 1, Alexander Roitershtein 1,*, Ashley Schulteis 4, Jiyeon Suh 5

Abstract

We consider the equation R_n = Q_n + M_n R_{n-1}, with random coefficients (Q_n, M_n)_{n ∈ Z} induced by chains of infinite order. We study the asymptotic behavior of the distribution tails of the stationary solution R to this equation and prove a power law for P(R > t).

Keywords: random equations, power tails, chains of infinite order, chains with complete connections, regenerative structure, Markov representation

MSC: 60K15, 60J

1. Introduction and statement of results

Let (Q_n, M_n)_{n ∈ Z} be a stationary and ergodic sequence of pairs of real-valued random variables, and consider the recurrence relation

    R_n = Q_n + M_n R_{n-1},  n ∈ N,  R_n ∈ R.    (1)

This equation has a wide variety of real-world and theoretical applications; see for instance (Embrechts and Goldie, 1993; Vervaat, 1979). If E(log |M_0|) < 0 and E(log^+ |Q_0|) < ∞ (where we denote x^+ := max{x, 0} for x ∈ R), then for any R_0 the limit law of R_n is that of

    R = Q_0 + Σ_{n=1}^∞ Q_{-n} Π_{i=0}^{n-1} M_{-i},

and this is the unique initial distribution under which (R_n)_{n ≥ 0} is stationary (Brandt, 1986). The distribution tails of R are shown to be power tails in (Kesten, 1973; Goldie, 1991), provided that the pairs (Q_n, M_n)_{n ∈ Z} form an i.i.d. sequence. Extensions of this result to Markovian coefficients have been obtained in (de Saporta, 2005; Roitershtein, 2007). The goal of this paper is to consider a non-Markov setup which is desirable in

This work was partially done during the Summer 2009 Research Experience for Undergraduates (REU) Program at Iowa State University. D.H., R.R., and A.S. were partially supported by NSF Award DMS. J.S. thanks the Department of Mathematics at Iowa State University for the hospitality and support during her visit there. * Corresponding author.
roiterst@iastate.edu
1 Department of Mathematics, Iowa State University, Ames, IA 50011, USA
2 Department of Statistics, Iowa State University, Ames, IA 50011, USA
3 Department of Economics, Iowa State University, Ames, IA 50011, USA
4 Department of Math, Physics and CS, Wartburg College, Waverly, IA 50677, USA
5 Department of Math, Grand Valley State University, Allendale, MI 49401, USA

Preprint submitted to Statistics & Probability Letters, April 2, 2010
many, especially financial, applications; see for instance the concluding discussion in (Perrakis and Henin, 1974).

Definition 1.1. (Lalley, 1986) A C-chain is a stationary process (X_n)_{n ∈ Z} taking values in a finite set (alphabet) D such that:

(i) For any i_1, i_2, ..., i_n ∈ D, P(X_1 = i_1, X_2 = i_2, ..., X_n = i_n) > 0.

(ii) For any i_0 ∈ D and any sequence (i_{-n})_{n ≥ 1} ∈ D^N, the following limit exists:

    lim_{n→∞} P(X_0 = i_0 | X_{-k} = i_{-k}, 1 ≤ k ≤ n) = P(X_0 = i_0 | X_{-k} = i_{-k}, k ≥ 1),

where the right-hand side is a regular version of the conditional probabilities.

(iii) (fading memory) For n ≥ 0, let

    γ_n := sup { | P(X_0 = i_0 | X_{-k} = i_{-k}, k ≥ 1) / P(X_0 = j_0 | X_{-k} = j_{-k}, k ≥ 1) − 1 | : i_0 = j_0, i_{-k} = j_{-k}, k = 1, ..., n }.

Then the numbers γ_n are all finite and lim sup_{n→∞} (log γ_n)/n < 0.

C-chains are a particular case of chains of infinite order (chains with complete connections); for general accounts of these processes we refer to (Iosifesku and Grigoresku, 1990; Kaijser, 1981). The distributions of C-chains (a particular case of the g-measures introduced in (Keane, 1972)) are Gibbs states in the sense of Bowen, also known as Dobrushin-Lanford-Ruelle states; see e.g. (Bowen, 1975; Lalley, 1986). We have (see also (Berbee, 1987; Fernández and Maillard, 2004)):

Theorem A. (Lalley, 1986) Let (X_n)_{n ∈ Z} be a C-chain with alphabet D. Let S = ∪_{n ≥ 1} D^n and let ζ : S → D be the projection onto the last coordinate, i.e. ζ((s_1, ..., s_n)) = s_n. Then there exist a stationary irreducible Markov chain (Y_n)_{n ∈ Z} defined on the state space S and constants r ∈ N, δ > 0 such that:

(i) The following Markov representation holds: X_n = ζ(Y_n), n ∈ Z.

Furthermore, for any word (y_1, ..., y_s) ∈ S and any n ∈ Z:

(ii) P(Y_{n+1} = (x_1, x_2, ..., x_t) | Y_n = (y_1, y_2, ..., y_s)) = 0 unless either t = 1, or t = s + 1 and x_i = y_i for all i ≤ s,

(iii) P(Y_{n+1} = (y) | Y_{n+1} ∈ D, Y_n = (y_1, y_2, ..., y_s)) = P(Y_0 = (y) | Y_0 ∈ D),

(iv) P(Y_{n+1} ∈ D | Y_n = (y_1, y_2, ..., y_{mr})) = δ for all m ∈ N.
Moreover, without loss of generality we can assume that:

(v) r is even (see the beginning of the proof in (Lalley, 1986, p. 1266)).

Let (X_n)_{n ∈ Z} be a C-chain and denote X = (X_{-n})_{n ≥ 0}. Our main result is:

Theorem 1.2. Let (Q_{n,i}, M_{n,i})_{n ∈ Z, i ∈ D} ⊂ R^2 be a family of random pairs, independent of (X_n)_{n ∈ Z}, such that for any j ∈ D the pairs (Q_{n,j}, M_{n,j})_{n ∈ Z} are i.i.d., and set

    Q_n = Σ_{j ∈ D} Q_{n,j} 1_{{X_n = j}} = Q_{n,X_n}  and  M_n = Σ_{j ∈ D} M_{n,j} 1_{{X_n = j}} = M_{n,X_n}.    (2)

Assume that:
(i) P(|Q_0| < q_0) = 1 and P(m_0^{-1} < |M_0| < m_0) = 1 for some constants q_0 > 0, m_0 > 0.

(ii) lim sup_{n→∞} (1/n) log E(Π_{i=0}^{n-1} |M_i|^{β_1}) ≥ 0 and lim sup_{n→∞} (1/n) log E(Π_{i=0}^{n-1} |M_i|^{β_2}) < 0 for some constants β_1 > 0 and β_2 > 0.

(iii) P(log |M_0| = δk for some k ∈ Z | M_0 ≠ 0) < 1 for all δ > 0.

Then, with the parameter α > 0 defined below in (4), the following hold:

(a) lim_{t→∞} t^α P(ηR > t | X) = K_η(X) for bounded functions K_η : D^{Z_+} → R_+, η = −1, 1. Moreover, the convergence is uniform in X.

(b) If P(M_0 < 0) > 0 then P(K_1(X) = K_{−1}(X)) = 1.

(c) If P(Q_0 > 0, M_0 > 0) = 1 then P(K_1(X) > 0) = 1.

(d) For η ∈ {−1, 1}, if P(K_η(X) > 0) > 0 then P(K_η(X) > 0) = 1.

(e) P(K_1(X) = K_{−1}(X) = 0) = 1 if and only if there is a function Γ : D → R such that

    P(Q_0 + Γ(X_0) M_0 = Γ(X_1)) = 1.    (3)

Moreover, if P(Q_{0,i} = q, M_{0,i} = m) < 1 for all pairs (q, m) ∈ R^2 and i ∈ D, then P(K_1(X) = K_{−1}(X) = 0) = 1 if and only if P(Q_0 + c M_0 = c) = 1 for a constant c ∈ R.

It turns out (see Section 2 below) that both lim sups in (ii) are in fact limits, and the parameter α > 0 is uniquely determined by

    Λ(α) = 0,  where  Λ(β) = lim_{n→∞} (1/n) log E(Π_{i=0}^{n-1} |M_i|^β).    (4)

We remark that condition (3) is a natural generalization of the degeneracy criterion that appears in the i.i.d. case; see Theorem B below. Note that (2) defines a Hidden Markov Model (HMM) when X_n is a Markov chain. In fact, (Q_n, M_n, Y_n)_{n ∈ Z} can be thought of as a HMM associated with Y_n through X_n = ζ(Y_n). The proofs in (Kesten, 1973; Goldie, 1991) and (Roitershtein, 2007; de Saporta, 2005) rely on renewal theory for the Markov additive process (Markov random walk) V_n = Σ_{i=0}^{n-1} log |M_i| under an exponential tilt. Following the idea of (Mayer-Wolf et al., 2004), Theorem 1.2 is obtained in Section 2 rather directly from its i.i.d. prototype (cited as Theorem B below) by applying Doeblin's cyclic trick and associating R with a linear recursion with i.i.d. coefficients.
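To see concretely how (4) pins down the tail index, consider a far simpler special case than the one treated in this paper (every number below is an illustrative assumption, not taken from the paper): X_n is a two-state Markov chain with transition matrix P, and M_n = m_{X_n} is a positive constant depending only on the current state. Then E(Π_{i=0}^{n-1} M_i^β) grows like the n-th power of the Perron root of the tilted matrix with entries P(i, j) m_j^β, so Λ(β) is the logarithm of that root and the nontrivial root α of Λ(α) = 0 can be located by bisection:

```python
import math

# Illustrative two-state example (all numbers are assumptions):
P = [[0.7, 0.3], [0.2, 0.8]]   # transition matrix of X_n
m = [1.4, 0.4]                 # M_n = m[X_n]; the stationary mean of log M_0 is negative

def Lambda(beta):
    """Log of the Perron root of the tilted 2x2 matrix A[i][j] = P[i][j] * m[j]**beta."""
    a = [[P[i][j] * m[j] ** beta for j in range(2)] for i in range(2)]
    tr = a[0][0] + a[1][1]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    # A nonnegative 2x2 matrix has real eigenvalues; take the larger one.
    return math.log((tr + math.sqrt(tr * tr - 4 * det)) / 2)

def solve_alpha(lo=0.1, hi=2.0, tol=1e-10):
    """Bisection for the root of Lambda on [lo, hi]; here Lambda(lo) < 0 < Lambda(hi)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if Lambda(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

alpha = solve_alpha()
```

Since Λ(0) = 0 for any stochastic matrix, Λ is negative just above 0 (the stationary mean of log M_0 is negative) and eventually positive, and the bisection lands on the strictly positive root, here between 0.85 and 0.9; claim (a) then predicts tails of exact order t^{-α}.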
Though a reduction to the main results of (Roitershtein, 2007) is possible, it would anyway entail considerable extra work. The approach taken here exploits the finite range and mixing properties of X_n, in addition to the existence of Y_n. It is much lighter than those used in (Roitershtein, 2007) for general Markov chains and in (de Saporta, 2005) for the finite-state case, both built on techniques of (Goldie, 1991).

2. Proof of Theorem 1.2

Consider the backward chain Z_n = Y_{-n}, n ∈ Z, with transition kernel H given by

    H(x, y) = P(Y_n = y | Y_{n+1} = x) = [P(Y_0 = y)/P(Y_0 = x)] P(Y_{n+1} = y | Y_n = x).

Fix y* ∈ S and a constant r ∈ (0, 1). Let (η_n)_{n ∈ Z} be a sequence of i.i.d. coins, independent of both
(Z_n)_{n ∈ Z} and (Q_{n,i}, M_{n,i})_{n ∈ Z, i ∈ D}, such that P(η_0 = 1) = r and P(η_0 = 0) = 1 − r. Let N_0 = 0 and N_i = inf{n > N_{i-1} : Z_n = y*, η_n = 1}, i ∈ N. The blocks (Z_{N_i}, ..., Z_{N_{i+1}-1}) are independent for i ≥ 0 and identically distributed for i ≥ 1. Between two successive regeneration times N_i, the chain (Z_n)_{n ≥ 0} evolves according to a sub-Markov kernel Θ given by

    H(x, y) = Θ(x, y) + r 1_{{y = y*}} H(x, y),  i.e.  Θ(x, y) = P(Z_1 = y, N_1 > 1 | Z_0 = x).    (5)

Theorem A implies that E(e^{βN_1} | Z_0) is uniformly bounded for some β > 0 (see the paragraph following Theorem 1 in (Lalley, 1986)). For i ≥ 0, let

    A_i = Q_{N_i} + Q_{N_i+1} M_{N_i} + ... + Q_{N_{i+1}-1} M_{N_i} ··· M_{N_{i+1}-2},
    B_i = M_{N_i} M_{N_i+1} ··· M_{N_{i+1}-1}.

The pairs (A_i, B_i) are independent for i ≥ 0, identically distributed for i ≥ 1, and

    R = A_0 + Σ_{n=1}^∞ A_n Π_{i=0}^{n-1} B_i.

To prove Theorem 1.2 we will verify the conditions of the following theorem for (A_i, B_i)_{i ≥ 1}.

Theorem B. (Kesten, 1973; Goldie, 1991) Let (A_i, B_i)_{i ≥ 1} be i.i.d. and assume:

(i) For some α > 0, E(|B_1|^α) = 1, E(|B_1|^α log^+ |B_1|) < ∞, and E(|A_1|^α) < ∞.

(ii) P(log |B_1| = δk for some k ∈ Z | B_1 ≠ 0) < 1 for all δ > 0.

Let R = A_1 + Σ_{n=2}^∞ A_n Π_{i=1}^{n-1} B_i. Then

(a) lim_{t→∞} t^α P(R > t) = K_+ and lim_{t→∞} t^α P(R < −t) = K_− for some K_+, K_− ≥ 0.

(b) If P(B_1 < 0) > 0, then K_+ = K_−.

(c) K_+ + K_− > 0 if and only if P(A_1 = (1 − B_1)c) < 1 for all c ∈ R.

Let H_β(x, y) = H(x, y) E(|M_{0,ζ(y)}|^β) and Θ_β(x, y) = Θ(x, y) E(|M_{0,ζ(y)}|^β). Notice that

    E(Π_{i=0}^n |M_i|^β 1_{{Z_n = y}} | Z_0 = x) = E(|M_0|^β | Z_0 = x) H_β^n(x, y)

and

    E(1_{{n < N_1}} Π_{i=0}^n |M_i|^β 1_{{Z_n = y}} | Z_0 = x) = E(|M_0|^β | Z_0 = x) Θ_β^n(x, y),  y ≠ y*.

(Roitershtein, 2007, Lemma 2.3 and Proposition 2.4) show that (4) holds and that the spectral radius of H_α is 1, while the spectral radius of Θ_α is less than 1. Hence, as in the proof of (2.43) in (Mayer-Wolf et al., 2004), we have:

Lemma 2.1. E(|B_1|^α) = 1.

We next show that log |B_0| is a non-lattice random variable if y* ∈ D.

Lemma 2.2. Assume y* ∈ D and let V = Σ_{n=0}^{N_1-1} log |M_n| = log |B_0|.
Then for any δ > 0 we have P(V ∈ δZ | Z_0 = y*) < 1, where δZ := {kδ : k ∈ Z}.

Proof. For y ∈ D and k ∈ N, let y^{(k)} denote the word (y_1, ..., y_k) ∈ D^k such that y_2 = ··· = y_k = y* and y_1 = y. By Lalley's Theorem A, for any y ∈ D,

    P(Z_0 = y^{(1)}, Z_1 = y^{(m)}, Z_2 = y^{(m-1)}, ..., Z_{m-1} = Z_{N_1-1} = y^{(2)}) > 0    (6)
for some m > 1. Therefore, P(V ∈ δZ | Z_0 = (y*)) = 1 together with (6) would imply: (i) taking first y = y*, that P(m log |M_{0,y*}| ∈ δZ | Z_0 = (y*)) = 1; (ii) taking then a general y ∈ D, that P((m − 1) log |M_{0,y*}| + log |M_{0,y}| ∈ δZ | Z_0 = (y*)) = 1. Therefore, we would have P(m(m − 1) log |M_{0,y}| ∈ δZ | Z_0 = (y*)) = 1 for any y ∈ D, contradicting condition (iii) of Theorem 1.2.

The proof of the next lemma is verbatim the proof of (2.44) in (Mayer-Wolf et al., 2004) and is therefore omitted.

Lemma 2.3. There exists β > α such that E[(Σ_{n=0}^{N_1-1} Π_{i=0}^{n-1} |M_i|)^β | Z_0] is bounded.

Completion of the proof of Theorem 1.2. (a)-(d): It follows from Lemmas 2.1-2.3 that the conclusions of Theorem B apply to (A_1, B_1). Claims (a) through (d) of Theorem 1.2 then follow from the identity R = A_0 + B_0 R̃ and the independence of (A_0, B_0) from R̃ under the conditional measures P(· | Z_0 = x). In particular, for η ∈ {−1, 1},

    lim_{t→∞} t^α P(ηR > t | Z_0) = E(|B_0|^α (1_{{ηB_0 > 0}} K_+ + 1_{{ηB_0 < 0}} K_−) | Z_0).    (7)

Notice that Theorem A-(v) implies P(N_1 is odd) > 0 and P(N_1 is even) > 0. Therefore, P(M_0 < 0) > 0 yields P(B_0 < 0 | Z_0 = x) > 0 for all x ∈ S.

(e): First, (3) implies P(R_n = Γ(X_n)) = 1 provided that R_0 = Γ(X_0). Hence, if (3) holds and R_0 = Γ(X_0), then R_n takes only finitely many values, all in {Γ(i) : i ∈ D}, and cannot converge in distribution to a power-tailed random variable. Therefore, (3) implies P(K_1(X) + K_{−1}(X) = 0) = 1.

Assume now that P(K_1(X) + K_{−1}(X) = 0) = 1. Then (7) along with Theorem 1.2-(b) implies that K_+ = K_− = 0. By Theorem B, K_+ = K_− = 0 if and only if P(A_1 + c(y*) B_1 = c(y*)) = 1 for a constant c(y*) ∈ R, in which case

    1 = P(R = c(y*)) = P(R = c(y*) | Z_0 = y*) = P(Q_0 + M_0 R̃ = c(y*) | Z_0 = y*),

with R̃ = (R − Q_0)/M_0. Since R̃ is conditionally independent of (Q_0, M_0) given Z_0, it follows that for some constant c_1(y*) ∈ R,

    P(R̃ = c_1(y*) | Z_0 = y*) = P(R = c_1(y*) | Z_1 = y*) = 1    (8)

and

    P(Q_{0,ζ(y*)} + M_{0,ζ(y*)} c_1(y*) = c(y*)) = 1.
(9)

It follows from (8) and the Markov property that P(R̃ = c_1(y*) | Z_0 = y) = 1 for any y ∈ S such that P(Z_0 = y | Z_1 = y*) > 0, and hence

    P(R = c(y*) | Z_1 = x) = 1 for all x ∈ S such that P(Z_0 = y* | Z_1 = x) > 0.    (10)

Thus P(R = c_1(x) = c(y*) | Z_1 = x) = 1 whenever P(Z_0 = y* | Z_1 = x) > 0. Since y* ∈ S is arbitrary, P(Q_{0,ζ(Z_0)} + M_{0,ζ(Z_0)} c_1(Z_0) = c_1(Z_1)) = 1 due to (9). This is exactly (3) once one sets Γ(Y_n) = c_1(Z_n).

Suppose now, in addition, that (Q_{0,i}, M_{0,i}) is non-degenerate for every i ∈ D. Since the system of equations q + m c_1 = c, q + m c_1' = c' in the unknown variables q and m has a unique solution unless c_1 = c_1', it follows from (9) that c_1 depends on ζ(y*) only. Let Γ(i) = c_1(y*) for i = ζ(y*). Then (9) and (10) imply P(Q_0 + M_0 Γ(X_0) = Γ(X_1)) = 1. Hence, by virtue of (i) of Definition 1.1, P(Γ(X_n) = c) = P(Q_0 + M_0 c = c) = 1 for some constant c ∈ R.
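The degenerate case in claim (e) can be watched in a simulation: if Q_n = (1 − M_n)c for a constant c, i.e. (3) holds with Γ identically equal to c, then R_n − c = M_n(R_{n-1} − c), so R_n collapses onto the constant c and no power tail can arise. A minimal sketch, with an arbitrary choice of c and of the law of M_n (both made up for the demo, not taken from the paper):

```python
import math
import random

def iterate_degenerate(r0, steps=200, c=3.0, seed=2):
    """Iterate R_n = Q_n + M_n R_{n-1} with Q_n = (1 - M_n) * c, i.e. the
    degenerate case (3) with Gamma identically equal to c.  Then
    R_n - c = M_n (R_{n-1} - c), which contracts to 0 since E(log M_0) < 0."""
    rng = random.Random(seed)
    r = r0
    for _ in range(steps):
        m = math.exp(rng.gauss(-0.5, 0.3))   # M_n > 0 with E(log M_0) = -0.5
        r = (1.0 - m) * c + m * r            # Q_n + M_n * R_{n-1}
    return r

r_from_c = iterate_degenerate(3.0)      # started at the constant solution
r_from_far = iterate_degenerate(100.0)  # started far away: still collapses to c
```

Both runs end within floating-point error of c = 3, in contrast with the non-degenerate case, where claim (a) gives P(ηR > t | X) of exact order t^{-α}.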
References

Berbee, H., 1987. Chains with infinite connections: uniqueness and Markov representation. Probab. Th. Rel. Fields 76.

Bowen, R., 1975. Equilibrium States and the Ergodic Theory of Anosov Diffeomorphisms. Lecture Notes in Mathematics 470, Springer-Verlag.

Brandt, A., 1986. The stochastic equation Y_{n+1} = A_n Y_n + B_n with stationary coefficients. Adv. Appl. Prob. 18.

Embrechts, P., Goldie, C.M., 1993. Perpetuities and random equations. In: Asymptotic Statistics (Proc. 5th Prague Symp.), eds. P. Mandl and M. Hušková. Physica, Heidelberg.

Fernández, R., Maillard, G., 2004. Chains with complete connections and one-dimensional Gibbs measures.

Goldie, C.M., 1991. Implicit renewal theory and tails of solutions of random equations. Ann. Appl. Prob. 1.

Iosifesku, M., Grigoresku, S., 1990. Dependence with Complete Connections and its Applications. Cambridge University Press.

Kaijser, T., 1981. On a new contraction condition for random systems with complete connections. Rev. Roum. Math. Pures Appl. 26.

Keane, M., 1972. Strongly mixing g-measures. Invent. Math. 16.

Kesten, H., 1973. Random difference equations and renewal theory for products of random matrices. Acta Math. 131.

Lalley, S., 1986. Regenerative representation for one-dimensional Gibbs states. Ann. Prob. 14.

Mayer-Wolf, E., Roitershtein, A., Zeitouni, O., 2004. Limit theorems for one-dimensional transient random walks in Markov environments. Ann. Inst. H. Poincaré B 40.

Perrakis, S., Henin, C., 1974. The evaluation of risky investments with random timing of cash returns. Management Science 21 (Theory Series).

Roitershtein, A., 2007. One-dimensional linear recursions with Markov-dependent coefficients. Ann. Appl. Prob. 17.

de Saporta, B., 2005. Tails of the stationary solution of the stochastic equation Y_{n+1} = a_n Y_n + b_n with Markovian coefficients. Stoch. Process. Appl. 115.

Vervaat, W., 1979. On a stochastic difference equation and a representation of non-negative infinitely divisible random variables. Adv. Appl. Prob. 11.
More informationAsymptotic Irrelevance of Initial Conditions for Skorohod Reflection Mapping on the Nonnegative Orthant
Published online ahead of print March 2, 212 MATHEMATICS OF OPERATIONS RESEARCH Articles in Advance, pp. 1 12 ISSN 364-765X print) ISSN 1526-5471 online) http://dx.doi.org/1.1287/moor.112.538 212 INFORMS
More informationConsistency of Quasi-Maximum Likelihood Estimators for the Regime-Switching GARCH Models
Consistency of Quasi-Maximum Likelihood Estimators for the Regime-Switching GARCH Models Yingfu Xie Research Report Centre of Biostochastics Swedish University of Report 2005:3 Agricultural Sciences ISSN
More informationGärtner-Ellis Theorem and applications.
Gärtner-Ellis Theorem and applications. Elena Kosygina July 25, 208 In this lecture we turn to the non-i.i.d. case and discuss Gärtner-Ellis theorem. As an application, we study Curie-Weiss model with
More informationPrecise Large Deviations for Sums of Negatively Dependent Random Variables with Common Long-Tailed Distributions
This article was downloaded by: [University of Aegean] On: 19 May 2013, At: 11:54 Publisher: Taylor & Francis Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer
More informationMATH 56A SPRING 2008 STOCHASTIC PROCESSES 65
MATH 56A SPRING 2008 STOCHASTIC PROCESSES 65 2.2.5. proof of extinction lemma. The proof of Lemma 2.3 is just like the proof of the lemma I did on Wednesday. It goes like this. Suppose that â is the smallest
More informationNecessary and sufficient conditions for strong R-positivity
Necessary and sufficient conditions for strong R-positivity Wednesday, November 29th, 2017 The Perron-Frobenius theorem Let A = (A(x, y)) x,y S be a nonnegative matrix indexed by a countable set S. We
More informationTheory and Applications of Stochastic Systems Lecture Exponential Martingale for Random Walk
Instructor: Victor F. Araman December 4, 2003 Theory and Applications of Stochastic Systems Lecture 0 B60.432.0 Exponential Martingale for Random Walk Let (S n : n 0) be a random walk with i.i.d. increments
More informationMarkov Chain Monte Carlo
Chapter 5 Markov Chain Monte Carlo MCMC is a kind of improvement of the Monte Carlo method By sampling from a Markov chain whose stationary distribution is the desired sampling distributuion, it is possible
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science Transmission of Information Spring 2006
MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science 6.44 Transmission of Information Spring 2006 Homework 2 Solution name username April 4, 2006 Reading: Chapter
More informationInformation Theory. Lecture 5 Entropy rate and Markov sources STEFAN HÖST
Information Theory Lecture 5 Entropy rate and Markov sources STEFAN HÖST Universal Source Coding Huffman coding is optimal, what is the problem? In the previous coding schemes (Huffman and Shannon-Fano)it
More informationA Note on the Approximation of Perpetuities
Discrete Mathematics and Theoretical Computer Science (subm.), by the authors, rev A Note on the Approximation of Perpetuities Margarete Knape and Ralph Neininger Department for Mathematics and Computer
More informationON THE ZERO-ONE LAW AND THE LAW OF LARGE NUMBERS FOR RANDOM WALK IN MIXING RAN- DOM ENVIRONMENT
Elect. Comm. in Probab. 10 (2005), 36 44 ELECTRONIC COMMUNICATIONS in PROBABILITY ON THE ZERO-ONE LAW AND THE LAW OF LARGE NUMBERS FOR RANDOM WALK IN MIXING RAN- DOM ENVIRONMENT FIRAS RASSOUL AGHA Department
More informationRandom Walk in Periodic Environment
Tdk paper Random Walk in Periodic Environment István Rédl third year BSc student Supervisor: Bálint Vető Department of Stochastics Institute of Mathematics Budapest University of Technology and Economics
More informationOn Optimal Stopping Problems with Power Function of Lévy Processes
On Optimal Stopping Problems with Power Function of Lévy Processes Budhi Arta Surya Department of Mathematics University of Utrecht 31 August 2006 This talk is based on the joint paper with A.E. Kyprianou:
More informationSome functional (Hölderian) limit theorems and their applications (II)
Some functional (Hölderian) limit theorems and their applications (II) Alfredas Račkauskas Vilnius University Outils Statistiques et Probabilistes pour la Finance Université de Rouen June 1 5, Rouen (Rouen
More informationLinear-fractional branching processes with countably many types
Branching processes and and their applications Badajoz, April 11-13, 2012 Serik Sagitov Chalmers University and University of Gothenburg Linear-fractional branching processes with countably many types
More informationPITMAN S 2M X THEOREM FOR SKIP-FREE RANDOM WALKS WITH MARKOVIAN INCREMENTS
Elect. Comm. in Probab. 6 (2001) 73 77 ELECTRONIC COMMUNICATIONS in PROBABILITY PITMAN S 2M X THEOREM FOR SKIP-FREE RANDOM WALKS WITH MARKOVIAN INCREMENTS B.M. HAMBLY Mathematical Institute, University
More informationPositive and null recurrent-branching Process
December 15, 2011 In last discussion we studied the transience and recurrence of Markov chains There are 2 other closely related issues about Markov chains that we address Is there an invariant distribution?
More informationLIST OF MATHEMATICAL PAPERS
LIST OF MATHEMATICAL PAPERS 1961 1999 [1] K. Sato (1961) Integration of the generalized Kolmogorov-Feller backward equations. J. Fac. Sci. Univ. Tokyo, Sect. I, Vol. 9, 13 27. [2] K. Sato, H. Tanaka (1962)
More informationOn the Borel-Cantelli Lemma
On the Borel-Cantelli Lemma Alexei Stepanov, Izmir University of Economics, Turkey In the present note, we propose a new form of the Borel-Cantelli lemma. Keywords and Phrases: the Borel-Cantelli lemma,
More informationUniversity of Toronto Department of Statistics
Norm Comparisons for Data Augmentation by James P. Hobert Department of Statistics University of Florida and Jeffrey S. Rosenthal Department of Statistics University of Toronto Technical Report No. 0704
More informationSome open problems related to stability. 1 Multi-server queue with First-Come-First-Served discipline
1 Some open problems related to stability S. Foss Heriot-Watt University, Edinburgh and Sobolev s Institute of Mathematics, Novosibirsk I will speak about a number of open problems in queueing. Some of
More informationIntroduction to self-similar growth-fragmentations
Introduction to self-similar growth-fragmentations Quan Shi CIMAT, 11-15 December, 2017 Quan Shi Growth-Fragmentations CIMAT, 11-15 December, 2017 1 / 34 Literature Jean Bertoin, Compensated fragmentation
More informationMARKOV PROCESSES. Valerio Di Valerio
MARKOV PROCESSES Valerio Di Valerio Stochastic Process Definition: a stochastic process is a collection of random variables {X(t)} indexed by time t T Each X(t) X is a random variable that satisfy some
More informationSimultaneous drift conditions for Adaptive Markov Chain Monte Carlo algorithms
Simultaneous drift conditions for Adaptive Markov Chain Monte Carlo algorithms Yan Bai Feb 2009; Revised Nov 2009 Abstract In the paper, we mainly study ergodicity of adaptive MCMC algorithms. Assume that
More informationTAIL ASYMPTOTICS FOR A RANDOM SIGN LINDLEY RECURSION
J. Appl. Prob. 47, 72 83 (21) Printed in England Applied Probability Trust 21 TAIL ASYMPTOTICS FOR A RANDOM SIGN LINDLEY RECURSION MARIA VLASIOU, Eindhoven University of Technology ZBIGNIEW PALMOWSKI,
More informationZero Temperature Limits of Gibbs-Equilibrium States for Countable Alphabet Subshifts of Finite Type
Journal of Statistical Physics, Vol. 119, Nos. 3/4, May 2005 ( 2005) DOI: 10.1007/s10955-005-3035-z Zero Temperature Limits of Gibbs-Equilibrium States for Countable Alphabet Subshifts of Finite Type O.
More informationCountable state discrete time Markov Chains
Countable state discrete time Markov Chains Tuesday, March 18, 2014 2:12 PM Readings: Lawler Ch. 2 Karlin & Taylor Chs. 2 & 3 Resnick Ch. 1 Countably infinite state spaces are of practical utility in situations
More informationChapter 2. Markov Chains. Introduction
Chapter 2 Markov Chains Introduction A Markov chain is a sequence of random variables {X n ; n = 0, 1, 2,...}, defined on some probability space (Ω, F, IP), taking its values in a set E which could be
More informationDispersion of the Gilbert-Elliott Channel
Dispersion of the Gilbert-Elliott Channel Yury Polyanskiy Email: ypolyans@princeton.edu H. Vincent Poor Email: poor@princeton.edu Sergio Verdú Email: verdu@princeton.edu Abstract Channel dispersion plays
More informationTHE CENTER OF MASS OF THE ISE AND THE WIENER INDEX OF TREES
Elect. Comm. in Probab. 9 (24), 78 87 ELECTRONIC COMMUNICATIONS in PROBABILITY THE CENTER OF MASS OF THE ISE AND THE WIENER INDEX OF TREES SVANTE JANSON Departement of Mathematics, Uppsala University,
More informationINTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING
INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING ERIC SHANG Abstract. This paper provides an introduction to Markov chains and their basic classifications and interesting properties. After establishing
More informationNonparametric regression with martingale increment errors
S. Gaïffas (LSTA - Paris 6) joint work with S. Delattre (LPMA - Paris 7) work in progress Motivations Some facts: Theoretical study of statistical algorithms requires stationary and ergodicity. Concentration
More informationThe cost/reward formula has two specific widely used applications:
Applications of Absorption Probability and Accumulated Cost/Reward Formulas for FDMC Friday, October 21, 2011 2:28 PM No class next week. No office hours either. Next class will be 11/01. The cost/reward
More information