A Type of Shannon-McMillan Approximation Theorems for Second-Order Nonhomogeneous Markov Chains Indexed by a Double Rooted Tree


Caspian Journal of Applied Mathematics, Ecology and Economics
V. 2, No. 1, 2014, July

A Type of Shannon-McMillan Approximation Theorems for Second-Order Nonhomogeneous Markov Chains Indexed by a Double Rooted Tree

Wang Kangkang*, Zong Decai

* Corresponding author.

© 2013 CJAMEE. All rights reserved.

Abstract. In this paper, a class of small deviation theorems for the relative entropy densities of an arbitrary random field on a double rooted tree is established by comparing an arbitrary measure µ with the second-order nonhomogeneous Markov measure µ_Q on the double rooted tree. As corollaries, some Shannon-McMillan theorems for the arbitrary random field and the second-order Markov chain field, as well as a limit property for the random conditional entropy of a second-order homogeneous Markov chain on the double rooted tree, are obtained. An existing result is extended.

Key Words and Phrases: Shannon-McMillan theorem, double rooted tree, arbitrary random field, second-order nonhomogeneous Markov chain, relative entropy density

Mathematics Subject Classification: 60F15

1. Introduction

A tree is a graph S = {T, E} which is connected and contains no circuits. Given any two distinct vertices σ, t ∈ T, let σt denote the unique path connecting σ and t, and define the graph distance d(σ, t) as the number of edges contained in the path σt.

Let T_o be an arbitrary infinite, locally finite tree (i.e. it has infinitely many vertices and each vertex has finitely many neighbors) with a root o. Meanwhile, we consider another kind of tree, the double rooted tree T, formed by connecting the root o of T_o with an additional vertex, denoted by the root -1. To illustrate the double rooted tree T, take the Cayley tree T_{C,N} as an example. It is a special case of the tree T_o: the root o of the Cayley tree has N neighbors, and every other vertex has N + 1 neighbors. The double rooted tree T_{C,N} (see Fig. 1) is formed by connecting the root o of the tree T_{C,N} with another root -1.

Let σ, t (σ, t ≠ o, -1) be vertices of the double rooted tree T. Write t ≤ σ if t is on the unique path connecting o to σ, and write |σ| for the number of edges on this path. For any

two vertices σ, t (σ, t ≠ o, -1) of the tree T, denote by σ ∧ t the vertex farthest from o satisfying σ ∧ t ≤ σ and σ ∧ t ≤ t.

The set of all vertices with distance n from the root o is called the n-th generation of T, which is denoted by L_n. We say that L_n is the set of all vertices on level n; in particular, the root -1 is on level -1 of the tree T. We denote by T^(n) the subtree of the tree T containing the vertices from level -1 (the root -1) to level n, and by T_o^(n) the subtree of the tree T_o containing the vertices from level 0 (the root o) to level n. Let t (t ≠ o, -1) be a vertex of the tree T. We denote the first predecessor of t by 1_t, the second predecessor of t by 2_t, and the n-th predecessor of t by n_t. Let X^A = {X_t, t ∈ A}, let x^A be a realization of X^A, and denote by |A| the number of vertices of A.

[Fig. 1. The double rooted tree T_{C,2}: levels -1 through 3 are shown, with the root -1, the root o, and a vertex t on level 3 together with its predecessors 1_t and 2_t.]

Definition 1. Let S = {s_1, s_2, ..., s_M} and let Q(z|y, x) be a nonnegative function on S³. Let Q = (Q(z|y, x)), Q(z|y, x) ≥ 0, x, y, z ∈ S. If

∑_{z∈S} Q(z|y, x) = 1,

then Q is called a second-order transition matrix.

Definition 2. Let T be a double rooted tree, S = {s_1, s_2, ..., s_M} a finite state space, and {X_t, t ∈ T} a collection of S-valued random variables defined on the probability space (Ω, F, Q). Let

q = (q(x, y)), x, y ∈ S (1)

be a distribution on S², and let

Q_t = (Q_t(z|y, x)), x, y, z ∈ S, t ∈ T\{o, -1} (2)

be a collection of second-order transition matrices. For any vertex t (t ≠ o, -1), if

Q(X_t = z | X_{1_t} = y, X_{2_t} = x and X_σ for σ ∧ t ≤ 2_t)
= Q(X_t = z | X_{1_t} = y, X_{2_t} = x) = Q_t(z|y, x), x, y, z ∈ S, (3)

and

Q(X_{-1} = x, X_o = y) = q(x, y), x, y ∈ S, (4)

then {X_t, t ∈ T} is called an S-valued second-order nonhomogeneous Markov chain indexed by the tree T with initial distribution (1) and second-order transition matrices (2), or a T-indexed second-order nonhomogeneous Markov chain.

Definition 3. Let (Q_t = (Q_t(z|y, x)), t ∈ T^(n)\{o, -1}) and q = (q(x, y)) be defined as before, and let µ_Q be a second-order nonhomogeneous Markov measure on (Ω, F). If

µ_Q(x_{-1}, x_0) = q(x_{-1}, x_0), (5)

µ_Q(x^{T^(n)}) = q(x_{-1}, x_0) ∏_{t ∈ T^(n)\{o,-1}} Q_t(x_t | x_{1_t}, x_{2_t}), n ≥ 1, (6)

then µ_Q is called a second-order Markov chain field on the infinite tree T determined by the stochastic matrices Q_t and the initial distribution q.

Let µ be an arbitrary probability measure defined on (Ω, F), and let log denote the natural logarithm. Denote

f_n(ω) = -(1/|T^(n)|) log µ(X^{T^(n)}). (7)

f_n(ω) is called the entropy density on the subgraph T^(n) with respect to the measure µ. If µ = µ_Q, then by (6) and (7) we get

f_n(ω) = -(1/|T^(n)|) [log q(X_{-1}, X_0) + ∑_{t ∈ T^(n)\{o,-1}} log Q_t(X_t | X_{1_t}, X_{2_t})]. (8)

The convergence of f_n(ω) in a certain sense (L_1 convergence, convergence in probability, or almost sure convergence) is called the Shannon-McMillan theorem, or the asymptotic equipartition property (AEP), in information theory.

There have been some works on limit theorems for tree-indexed stochastic processes. Benjamini and Peres [1] introduced the notion of tree-indexed Markov chains and studied recurrence and ray-recurrence for them. Berger and Ye [2] studied the existence of the entropy rate for some stationary random fields on a homogeneous tree. Ye and Berger (see [4], [5]), by using Pemantle's result [3] and a combinatorial approach, studied the Shannon-McMillan theorem with convergence in probability for a PPS-invariant and ergodic random field on a homogeneous tree.
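Definitions 1-3 and the entropy density (7)-(8) can be made concrete with a small numerical sketch. The snippet below is illustrative only: the state space S = {0, 1}, the toy double rooted tree of depth 2, and the numerical values of q and Q are hypothetical choices of ours, not part of the paper. It checks that Q is a second-order transition matrix (Definition 1), that the measure µ_Q of (6) assigns total mass 1 to the configurations on T^(2), and it evaluates f_n(ω) from (7) for one configuration.

```python
import itertools
import math

# Toy setup (hypothetical values): S = {0, 1} and a double rooted binary tree
# of depth 2. Vertices: -1 (root -1), 0 (root o); o has children 1, 2 on
# level 1, which have children 3..6 on level 2.
# parent[v] is the first predecessor 1_v; parent[parent[v]] is 2_v.
parent = {0: -1, 1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2}

S = (0, 1)
q = {(x, y): 0.25 for x in S for y in S}  # initial distribution on S^2, cf. (4)
Q = {(z, y, x): (0.7 if z == y else 0.3) for z in S for y in S for x in S}  # Q(z|y,x)

# Definition 1: every row of the second-order transition matrix sums to 1.
for y in S:
    for x in S:
        assert abs(sum(Q[(z, y, x)] for z in S) - 1.0) < 1e-12

def mu_Q(conf):
    """Joint measure of eq. (6); conf maps each vertex to a state in S."""
    p = q[(conf[-1], conf[0])]
    for t in range(1, 7):  # t in T^(2) \ {o, -1}
        y, x = conf[parent[t]], conf[parent[parent[t]]]
        p *= Q[(conf[t], y, x)]
    return p

verts = [-1, 0, 1, 2, 3, 4, 5, 6]
total = sum(mu_Q(dict(zip(verts, c)))
            for c in itertools.product(S, repeat=len(verts)))
print(round(total, 10))  # total mass over all configurations

# Entropy density (7) for one fixed configuration omega
omega = dict(zip(verts, [0, 1, 0, 1, 1, 0, 0, 1]))
f_n = -math.log(mu_Q(omega)) / len(verts)
print(round(f_n, 4))
```

The total-mass check mirrors the consistency computation used in the proof of Theorem 1, where summing over the states of the last level leaves the measure of the smaller subtree.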
Yang and Liu [8] studied a strong law of large numbers for the frequency of occurrence of states for Markov chain fields on a homogeneous tree (a particular case of tree-indexed Markov chain fields and PPS-invariant random fields). Yang (see [6]) studied the

strong law of large numbers for the frequency of occurrence of states and the Shannon-McMillan theorem for homogeneous Markov chains indexed by a homogeneous tree. Recently, Yang (see [13]) studied the strong law of large numbers and the Shannon-McMillan theorem for nonhomogeneous Markov chains indexed by a homogeneous tree. Huang and Yang (see [11]) studied the strong law of large numbers for Markov chains indexed by an infinite tree with uniformly bounded degree. Peng and Yang studied a class of small deviation theorems for functionals of arbitrary random fields on a homogeneous tree (see [9]). Wang studied some Shannon-McMillan approximation theorems for arbitrary random fields on the generalized Bethe tree (see [10]).

In this paper, we study a class of Shannon-McMillan random approximation theorems for arbitrary random fields on the double rooted tree by comparing the arbitrary measure with the second-order nonhomogeneous Markov measure and constructing a supermartingale on the double rooted tree. As corollaries, a class of Shannon-McMillan theorems for arbitrary random fields and second-order Markov chain fields on the double rooted tree is obtained. A limit property for the expectation of the random conditional entropy of a second-order homogeneous Markov chain indexed by the double rooted tree is studied. Yang and Ye's result (see [13]) is extended.

2. Main result and its proof

Lemma 1 (see [8]). Let µ_1 and µ_2 be two probability measures defined on (Ω, F), let D ∈ F, and let {τ_n, n ≥ 0} be a sequence of positive-valued random variables such that

liminf_n τ_n / |T^(n)| > 0, µ_1-a.s. on D. (10)

Then

limsup_n (1/τ_n) log [µ_2(X^{T^(n)}) / µ_1(X^{T^(n)})] ≤ 0, µ_1-a.s. on D. (11)

In particular, let τ_n = |T^(n)|; then

limsup_n (1/|T^(n)|) log [µ_2(X^{T^(n)}) / µ_1(X^{T^(n)})] ≤ 0, µ_1-a.s. (12)

Proof. See reference [8].

Let

ϕ(µ|µ_Q) = limsup_n (1/|T^(n)|) log [µ(X^{T^(n)}) / µ_Q(X^{T^(n)})]; (13)

ϕ(µ|µ_Q) is called the sample relative entropy rate of X^{T^(n)} with respect to µ and µ_Q; it is also called the asymptotic logarithmic likelihood ratio. By (12) and (13),

ϕ(µ|µ_Q) ≥ liminf_n (1/|T^(n)|) log [µ(X^{T^(n)}) / µ_Q(X^{T^(n)})] ≥ 0, µ-a.s. (14)

Hence ϕ(µ|µ_Q) can be regarded as a type of measure of the deviation between the arbitrary random field and the second-order nonhomogeneous Markov chain field on the double rooted tree. Although ϕ(µ|µ_Q) is not a proper metric between two probability measures, we nevertheless think of it as a measure of dissimilarity between the joint distribution µ and the second-order Markov distribution µ_Q. Obviously, ϕ(µ|µ_Q) = 0 if µ = µ_Q. It has been shown in (14) that ϕ(µ|µ_Q) ≥ 0 a.s. in any case. Hence ϕ(µ|µ_Q) can be used as a random measure of the deviation between the true joint distribution µ(x^{T^(n)}) and the second-order Markov distribution µ_Q(x^{T^(n)}). Roughly speaking, this deviation may be regarded as the one between the coordinate stochastic process x^{T^(n)} and the Markov case. The smaller ϕ(µ|µ_Q) is, the smaller the deviation is.

Theorem 1. Let X = {X_t, t ∈ T} be an arbitrary random field on a double rooted tree, and let f_n(ω) and ϕ(µ|µ_Q) be defined by (7) and (13), respectively. Denote by H_t^Q(X_t | X_{1_t}, X_{2_t}) the random conditional entropy of X_t relative to (X_{1_t}, X_{2_t}) under the measure µ_Q, that is,

H_t^Q(X_t | X_{1_t}, X_{2_t}) = -∑_{x_t∈S} Q_t(x_t | X_{1_t}, X_{2_t}) log Q_t(x_t | X_{1_t}, X_{2_t}), t ∈ T^(n)\{o, -1}. (15)

Let

D(c) = {ω : ϕ(µ|µ_Q) ≤ c}, (16)

α(c) = min { 2x e^{-2}/(1-x)² + c/x : 0 < x < 1 }, c > 0; α(0) = 0, (17)

β(c) = max { 2x e^{-2}/(1+x)² + c/x : -1 < x < 0 }, c > 0; β(0) = 0. (18)

Then

limsup_n [f_n(ω) - (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} H_t^Q(X_t | X_{1_t}, X_{2_t})] ≤ α(c)M, µ-a.s., ω ∈ D(c), (19)

liminf_n [f_n(ω) - (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} H_t^Q(X_t | X_{1_t}, X_{2_t})] ≥ β(c)M - c, µ-a.s., ω ∈ D(c). (20)
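The constants α(c) and β(c) of (17)-(18) are easy to evaluate numerically. The sketch below is ours, not from the paper; it uses a simple grid search (the step count is an arbitrary choice) to compute both constants for a few values of c and to observe the qualitative behavior the theorem relies on: α(c) ≥ 0 ≥ β(c), and both tend to 0 as c → 0.

```python
import math

def alpha(c, steps=100000):
    """alpha(c) = min over 0 < x < 1 of 2x e^{-2}/(1-x)^2 + c/x, cf. (17)."""
    if c == 0:
        return 0.0
    return min(2 * (i / steps) * math.exp(-2) / (1 - i / steps) ** 2
               + c / (i / steps)
               for i in range(1, steps))

def beta(c, steps=100000):
    """beta(c) = max over -1 < x < 0 of 2x e^{-2}/(1+x)^2 + c/x, cf. (18).

    With x = -i/steps we have (1 + x)^2 = (1 - i/steps)^2 and c/x = -c/(i/steps).
    """
    if c == 0:
        return 0.0
    return max(2 * (-i / steps) * math.exp(-2) / (1 - i / steps) ** 2
               - c / (i / steps)
               for i in range(1, steps))

for c in (0.0, 0.01, 0.1, 0.5):
    print(c, round(alpha(c), 4), round(beta(c), 4))
```

For 0 < c < 1, substituting x = ±√c into (17)-(18) yields the explicit bound [2e^{-2}/(1-√c)² + 1]√c used in Corollary 1, against which the grid minimum can be checked.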

Proof. Consider the probability space (Ω, F, µ). Let λ be a constant, let δ_j(·) be the Kronecker delta function, and denote g_t(j) = -log Q_t(j | X_{1_t}, X_{2_t}). We construct the following product distribution:

µ_Q(x^{T^(n)}; λ) = q(x_{-1}, x_0) ∏_{t ∈ T^(n)\{o,-1}} Q_t(x_t | x_{1_t}, x_{2_t}) exp{λ g_t(j) δ_j(x_t)} / [1 + (e^{λ g_t(j)} - 1) Q_t(j | x_{1_t}, x_{2_t})]. (22)

By (22) we can write

∑_{x^{L_n} ∈ S^{|L_n|}} µ_Q(x^{T^(n)}; λ)
= µ_Q(x^{T^(n-1)}; λ) ∏_{t ∈ L_n} ∑_{x_t ∈ S} Q_t(x_t | x_{1_t}, x_{2_t}) exp{λ g_t(j) δ_j(x_t)} / [1 + (e^{λ g_t(j)} - 1) Q_t(j | x_{1_t}, x_{2_t})]
= µ_Q(x^{T^(n-1)}; λ) ∏_{t ∈ L_n} [e^{λ g_t(j)} Q_t(j | x_{1_t}, x_{2_t}) + 1 - Q_t(j | x_{1_t}, x_{2_t})] / [1 + (e^{λ g_t(j)} - 1) Q_t(j | x_{1_t}, x_{2_t})]
= µ_Q(x^{T^(n-1)}; λ). (23)

Therefore µ_Q(x^{T^(n)}; λ), n = 1, 2, ..., form a family of consistent distributions on S^{T^(n)}. Let

U_n(λ, ω) = µ_Q(X^{T^(n)}; λ) / µ(X^{T^(n)}). (24)

By (22) and (24) we obtain

U_n(λ, ω) = q(X_{-1}, X_0) exp{λ ∑_{t ∈ T^(n)\{o,-1}} g_t(j) δ_j(X_t)} ∏_{t ∈ T^(n)\{o,-1}} { Q_t(X_t | X_{1_t}, X_{2_t}) / [1 + (e^{λ g_t(j)} - 1) Q_t(j | X_{1_t}, X_{2_t})] } / µ(X^{T^(n)}). (25)

Since µ and µ_Q(·; λ) are two probability measures, it is easy to see that U_n(λ, ω) is a nonnegative supermartingale. Moreover, by Doob's martingale convergence theorem (see [12]),

lim_n U_n(λ, ω) = U_∞(λ, ω) < ∞, µ-a.s. (26)

By (26) we have

limsup_n (1/|T^(n)|) log U_n(λ, ω) ≤ 0, µ-a.s. (27)

According to (6) and (25), we can rewrite (27) as

limsup_n (1/|T^(n)|) { ∑_{t ∈ T^(n)\{o,-1}} λ g_t(j) δ_j(X_t) - ∑_{t ∈ T^(n)\{o,-1}} log[1 + (e^{λ g_t(j)} - 1) Q_t(j | X_{1_t}, X_{2_t})] + log [µ_Q(X^{T^(n)}) / µ(X^{T^(n)})] } ≤ 0, µ-a.s. (28)

Letting λ = 0 in (28), we have

limsup_n (1/|T^(n)|) log [µ_Q(X^{T^(n)}) / µ(X^{T^(n)})] ≤ 0, µ-a.s., ω ∈ D(c). (29)

By use of (16) and (28) we obtain

limsup_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} {λ g_t(j) δ_j(X_t) - log[1 + (e^{λ g_t(j)} - 1) Q_t(j | X_{1_t}, X_{2_t})]} ≤ ϕ(µ|µ_Q) ≤ c, µ-a.s., ω ∈ D(c). (30)

By virtue of (30), the properties of the superior limit, and the inequalities log x ≤ x - 1 (x > 0) and e^x - 1 - x ≤ (x²/2) e^{|x|}, we can write

limsup_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} λ [g_t(j) δ_j(X_t) - g_t(j) Q_t(j | X_{1_t}, X_{2_t})]
≤ limsup_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} {log[1 + (e^{λ g_t(j)} - 1) Q_t(j | X_{1_t}, X_{2_t})] - λ g_t(j) Q_t(j | X_{1_t}, X_{2_t})} + c
≤ limsup_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} Q_t(j | X_{1_t}, X_{2_t}) [e^{λ g_t(j)} - 1 - λ g_t(j)] + c
≤ (λ²/2) limsup_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} Q_t(j | X_{1_t}, X_{2_t}) g_t(j)² e^{|λ| g_t(j)} + c

= (λ²/2) limsup_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} Q_t(j | X_{1_t}, X_{2_t})^{1-λ} log² Q_t(j | X_{1_t}, X_{2_t}) + c, µ-a.s., ω ∈ D(c), (31)

where the last equality uses e^{|λ| g_t(j)} = Q_t(j | X_{1_t}, X_{2_t})^{-λ} for 0 < λ < 1. In the case 0 < λ < 1, dividing both sides of (31) by λ, we have

limsup_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} [g_t(j) δ_j(X_t) - g_t(j) Q_t(j | X_{1_t}, X_{2_t})]
≤ (λ/2) limsup_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} Q_t(j | X_{1_t}, X_{2_t})^{1-λ} log² Q_t(j | X_{1_t}, X_{2_t}) + c/λ, µ-a.s., ω ∈ D(c). (32)

Consider the function

φ(x) = x^{1-λ} log² x, 0 < x ≤ 1, 0 < λ < 1 (set φ(0) = 0). (33)

It can be concluded that on the interval [0, 1],

max{φ(x), 0 ≤ x ≤ 1} = φ(e^{2/(λ-1)}) = (2/(1-λ))² e^{-2}. (34)

By (32) and (34) we have that, when 0 < λ < 1,

limsup_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} [g_t(j) δ_j(X_t) - g_t(j) Q_t(j | X_{1_t}, X_{2_t})]
≤ (λ/2) (2/(1-λ))² e^{-2} + c/λ = 2λ e^{-2}/(1-λ)² + c/λ, µ-a.s., ω ∈ D(c). (35)

When c > 0, the function h(λ) = 2λ e^{-2}/(1-λ)² + c/λ attains its smallest value α(c) at some λ_0 ∈ (0, 1). Hence, letting λ = λ_0 in (35), we obtain from (17) that

limsup_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} [g_t(j) δ_j(X_t) - g_t(j) Q_t(j | X_{1_t}, X_{2_t})] ≤ α(c), µ-a.s., ω ∈ D(c). (36)

By (7), (15), (16), (29) and (36), noticing that g_t(j) = -log Q_t(j | X_{1_t}, X_{2_t}), we can deduce

limsup_n [f_n(ω) - (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} H_t^Q(X_t | X_{1_t}, X_{2_t})]

= limsup_n { -(1/|T^(n)|) log µ(X^{T^(n)}) - (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} H_t^Q(X_t | X_{1_t}, X_{2_t}) }
≤ limsup_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} { -log Q_t(X_t | X_{1_t}, X_{2_t}) - H_t^Q(X_t | X_{1_t}, X_{2_t}) }
+ limsup_n (1/|T^(n)|) [ ∑_{t ∈ T^(n)\{o,-1}} log Q_t(X_t | X_{1_t}, X_{2_t}) - log µ(X^{T^(n)}) ]
= limsup_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} ∑_{j ∈ S} { -δ_j(X_t) log Q_t(j | X_{1_t}, X_{2_t}) + Q_t(j | X_{1_t}, X_{2_t}) log Q_t(j | X_{1_t}, X_{2_t}) }
+ limsup_n (1/|T^(n)|) log [µ_Q(X^{T^(n)}) / µ(X^{T^(n)})]
≤ ∑_{j ∈ S} limsup_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} [g_t(j) δ_j(X_t) - g_t(j) Q_t(j | X_{1_t}, X_{2_t})]
+ limsup_n (1/|T^(n)|) log [µ_Q(X^{T^(n)}) / µ(X^{T^(n)})]
≤ α(c) M, µ-a.s., ω ∈ D(c), (37)

where the last inequality follows from (36) and (29); thus, in the case c > 0, (19) follows from (37).

In the case -1 < λ < 0, the argument leading to (31) (now with e^{|λ| g_t(j)} = Q_t(j | X_{1_t}, X_{2_t})^{λ}, so that the exponent becomes 1 + λ), together with division by λ < 0, which reverses the inequality, gives

liminf_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} [g_t(j) δ_j(X_t) - g_t(j) Q_t(j | X_{1_t}, X_{2_t})]
≥ (λ/2) limsup_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} Q_t(j | X_{1_t}, X_{2_t})^{1+λ} log² Q_t(j | X_{1_t}, X_{2_t}) + c/λ, µ-a.s., ω ∈ D(c). (38)

By (34) (with 1 + λ in place of 1 - λ) and (38), we gain

liminf_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} [g_t(j) δ_j(X_t) - g_t(j) Q_t(j | X_{1_t}, X_{2_t})]
≥ (λ/2) (2/(1+λ))² e^{-2} + c/λ = 2λ e^{-2}/(1+λ)² + c/λ,

µ-a.s., ω ∈ D(c). (39)

In the case c > 0, the function u(λ) = 2λ e^{-2}/(1+λ)² + c/λ attains its largest value β(c) at some λ_0 ∈ (-1, 0). Thereby, letting λ = λ_0 in (39), we have

liminf_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} [g_t(j) δ_j(X_t) - g_t(j) Q_t(j | X_{1_t}, X_{2_t})] ≥ β(c), µ-a.s., ω ∈ D(c). (40)

By (7), (13), (15), (16) and (40), noticing that g_t(j) = -log Q_t(j | X_{1_t}, X_{2_t}), we can write

liminf_n [f_n(ω) - (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} H_t^Q(X_t | X_{1_t}, X_{2_t})]
≥ ∑_{j ∈ S} liminf_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} [g_t(j) δ_j(X_t) - g_t(j) Q_t(j | X_{1_t}, X_{2_t})]
- limsup_n (1/|T^(n)|) [ log µ(X^{T^(n)}) - ∑_{t ∈ T^(n)\{o,-1}} log Q_t(X_t | X_{1_t}, X_{2_t}) ]
≥ ∑_{j ∈ S} liminf_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} [g_t(j) δ_j(X_t) - g_t(j) Q_t(j | X_{1_t}, X_{2_t})] - ϕ(µ|µ_Q)
≥ β(c) M - c, µ-a.s., ω ∈ D(c). (41)

In accordance with (41), we see that (20) also holds in the case c > 0.

When c = 0, take 0 < λ_i < 1 (i = 1, 2, ...) such that λ_i → 0 as i → ∞; by (35) we acquire

limsup_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} [g_t(j) δ_j(X_t) - g_t(j) Q_t(j | X_{1_t}, X_{2_t})] ≤ 0, µ-a.s., ω ∈ D(0). (42)

Imitating the proof of (37), we have by (7) and (42)

limsup_n [f_n(ω) - (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} H_t^Q(X_t | X_{1_t}, X_{2_t})] ≤ 0, µ-a.s., ω ∈ D(0). (43)

Since α(0) = 0, we know from (43) that (19) also holds in the case c = 0. By similar means, we can obtain that (20) holds in the case c = 0.

Corollary 1. Under the assumptions of Theorem 1, we have that in the case 0 ≤ c < 1,

limsup_n [f_n(ω) - (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} H_t^Q(X_t | X_{1_t}, X_{2_t})] ≤ M [2e^{-2}/(1-√c)² + 1] √c, µ-a.s., ω ∈ D(c), (44)

liminf_n [f_n(ω) - (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} H_t^Q(X_t | X_{1_t}, X_{2_t})] ≥ -M [2e^{-2}/(1-√c)² + 1] √c - c, µ-a.s., ω ∈ D(c). (45)

Proof. Letting x = √c in (17) and x = -√c in (18), we have

α(c) ≤ [2e^{-2}/(1-√c)² + 1] √c, β(c) ≥ -[2e^{-2}/(1-√c)² + 1] √c.

Therefore (44) and (45) follow from (19) and (20), respectively.

Corollary 2. Under the assumptions of Theorem 1, we have

lim_n [f_n(ω) - (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} H_t^Q(X_t | X_{1_t}, X_{2_t})] = 0, µ-a.s., ω ∈ D(0). (46)

Proof. Letting c = 0 in Corollary 1, (46) follows from (44) and (45).

Corollary 3. Let X = {X_t, t ∈ T} be the second-order nonhomogeneous Markov chain field indexed by the double rooted tree with the initial distribution (5) and the joint distribution (6), and let f_n(ω) be defined by (8). Then

lim_n [f_n(ω) - (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} H_t^Q(X_t | X_{1_t}, X_{2_t})] = 0, µ_Q-a.s. (47)

Proof. Let µ = µ_Q in Theorem 1; then ϕ(µ|µ_Q) = 0, so that D(0) = Ω, and (47) follows from (46) correspondingly.

Remark 1. When the second-order nonhomogeneous Markov chain indexed by the tree degenerates into a first-order nonhomogeneous Markov chain indexed by the tree, we see easily that Q_t(X_t | X_{1_t}, X_{2_t}) = Q_t(X_t | X_{1_t}) and H_t^Q(X_t | X_{1_t}, X_{2_t}) = H_t^Q(X_t | X_{1_t}). In this case, (47) becomes

lim_n [f_n(ω) - (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} H_t^Q(X_t | X_{1_t})] = 0, µ_Q-a.s.

This is a main result of Yang and Ye (see [13]).
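Corollary 3 lends itself to a quick Monte Carlo check. The sketch below is entirely our own illustration, under hypothetical choices: a binary state space, one particular homogeneous second-order matrix Q, a uniform initial distribution, and the Cayley tree T_{C,2}. It grows the double rooted tree level by level, samples X_t from Q(·|X_{1_t}, X_{2_t}), and compares the entropy density f_n(ω) of (8) with the averaged random conditional entropies (1/|T^(n)|) ∑ H_t^Q; by (47), the difference should be near 0 for large n.

```python
import math
import random

random.seed(7)
S = (0, 1)
# Hypothetical homogeneous second-order transition matrix Q(z|y,x):
# the next state equals the XOR of the two predecessors with probability 0.8.
Q = {(z, y, x): (0.8 if z == (y ^ x) else 0.2) for z in S for y in S for x in S}
q0 = {(x, y): 0.25 for x in S for y in S}  # initial distribution of (X_{-1}, X_o)

def simulate(levels, branching=2):
    """Sample the chain on the double rooted Cayley tree T_{C,2} up to the
    given level; return f_n(omega) and (1/|T^(n)|) * sum of H_t^Q."""
    x_m1, x_o = random.choices(list(q0.keys()), weights=list(q0.values()))[0]
    log_mu = math.log(q0[(x_m1, x_o)])
    h_sum = 0.0
    n_vertices = 2                      # the roots -1 and o
    frontier = [(x_o, x_m1)]            # (state of 1_t, state of 2_t) for children
    for _ in range(levels):
        new_frontier = []
        for y, x in frontier:
            for _ in range(branching):
                probs = [Q[(z, y, x)] for z in S]
                z = random.choices(S, weights=probs)[0]
                log_mu += math.log(Q[(z, y, x)])
                # random conditional entropy H_t^Q(X_t | X_{1_t}, X_{2_t}), cf. (15)
                h_sum -= sum(p * math.log(p) for p in probs)
                n_vertices += 1
                new_frontier.append((z, y))
        frontier = new_frontier
    return -log_mu / n_vertices, h_sum / n_vertices

f_n, h_bar = simulate(levels=14)
diff = f_n - h_bar
print(round(f_n, 4), round(h_bar, 4), round(diff, 4))
```

With roughly 3x10^4 vertices the gap is already small, as (47) predicts; Theorem 2 below covers the related situation where the true transition matrix P differs from Q by a small perturbation.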

3. Shannon-McMillan theorem for homogeneous Markov chain fields on a double rooted tree

Let X = {X_t, t ∈ T} be another second-order homogeneous Markov chain indexed by a double rooted tree, with initial distribution and joint distribution under the measure µ_P as follows:

µ_P(x_{-1}, x_0) = p(x_{-1}, x_0), (48)

µ_P(x^{T^(n)}) = p(x_{-1}, x_0) ∏_{t ∈ T^(n)\{o,-1}} P(x_t | x_{1_t}, x_{2_t}), n ≥ 1, (49)

where P = (P(z|y, x)), x, y, z ∈ S, is a strictly positive second-order stochastic matrix on S³ and p = (p(x, y)) is a strictly positive distribution on S². Thereby the relative entropy density of X = {X_t, t ∈ T} under the measure µ_P is

f_n(ω) = -(1/|T^(n)|) [log p(X_{-1}, X_0) + ∑_{t ∈ T^(n)\{o,-1}} log P(X_t | X_{1_t}, X_{2_t})]. (50)

For a real number a, denote [a]⁺ = max{a, 0}. We have the following result.

Theorem 2. Let X = {X_t, t ∈ T} be the second-order homogeneous Markov chain field with initial distribution (48) and joint distribution (49) under the measure µ_P, and let f_n(ω) be defined by (50). Let Q = (Q(z|y, x)), x, y, z ∈ S, be defined as in Definition 1, with α = min{Q(j|i, k) : i, k, j ∈ S} > 0. Denote

H^Q(X_t | X_{1_t}, X_{2_t}) = -∑_{x_t∈S} Q(x_t | X_{1_t}, X_{2_t}) log Q(x_t | X_{1_t}, X_{2_t}), t ∈ T^(n)\{o, -1}.

If

∑_{i∈S} ∑_{k∈S} ∑_{j∈S} [P(j|i, k) - Q(j|i, k)]⁺ ≤ αc, (51)

then

limsup_n [f_n(ω) - (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} H^Q(X_t | X_{1_t}, X_{2_t})] ≤ α(c)M, µ_P-a.s., (52)

liminf_n [f_n(ω) - (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} H^Q(X_t | X_{1_t}, X_{2_t})] ≥ β(c)M - c, µ_P-a.s. (53)

Proof. Let µ = µ_P and Q_t(z|y, x) ≡ Q(z|y, x), x, y, z ∈ S, t ∈ T^(n)\{o, -1}. We obtain that H_t^Q(X_t | X_{1_t}, X_{2_t}) = H^Q(X_t | X_{1_t}, X_{2_t}), t ∈ T^(n)\{o, -1}; thus (50) follows from (7)

and (49). By the inequalities log x ≤ x - 1 (x > 0) and a ≤ [a]⁺, and by (51), noticing that α = min{Q(j|i, k) : i, k, j ∈ S} > 0, we can conclude

ϕ(µ_P|µ_Q) = limsup_n (1/|T^(n)|) log [µ_P(X^{T^(n)}) / µ_Q(X^{T^(n)})]
= limsup_n (1/|T^(n)|) log { [p(X_{-1}, X_0) / q(X_{-1}, X_0)] ∏_{t ∈ T^(n)\{o,-1}} [P(X_t | X_{1_t}, X_{2_t}) / Q(X_t | X_{1_t}, X_{2_t})] }
= limsup_n (1/|T^(n)|) { log [p(X_{-1}, X_0) / q(X_{-1}, X_0)] + ∑_{t ∈ T^(n)\{o,-1}} ∑_{i∈S} ∑_{k∈S} ∑_{j∈S} δ_i(X_{1_t}) δ_k(X_{2_t}) δ_j(X_t) log [P(j|i, k) / Q(j|i, k)] }
≤ limsup_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} ∑_{i∈S} ∑_{k∈S} ∑_{j∈S} δ_i(X_{1_t}) δ_k(X_{2_t}) δ_j(X_t) [P(j|i, k) - Q(j|i, k)] / Q(j|i, k)
≤ ∑_{i∈S} ∑_{k∈S} ∑_{j∈S} [P(j|i, k) - Q(j|i, k)]⁺ / Q(j|i, k)
≤ (1/α) ∑_{i∈S} ∑_{k∈S} ∑_{j∈S} [P(j|i, k) - Q(j|i, k)]⁺. (54)

By (51) and (54) we have

ϕ(µ_P|µ_Q) ≤ c, a.s. (55)

By (16) and (55) we know D(c) = Ω. Hence (52) and (53) follow from (19) and (20), respectively.

Theorem 3. Let X = {X_t, t ∈ T} be a second-order homogeneous Markov chain field indexed by the double rooted tree with initial distribution (1) and transition matrix Q = (Q(z|y, x)), x, y, z ∈ S. Then

lim_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} E_Q[H^Q(X_t | X_{1_t}, X_{2_t})] = -∑_{i∈S} ∑_{j∈S} ∑_{k∈S} q(i, j) Q(k|i, j) log Q(k|i, j),

µ_Q-a.s., (56)

where E_Q represents the expectation under the measure µ_Q.

Proof. By the definition of H^Q(X_t | X_{1_t}, X_{2_t}) in Theorem 2 and the properties of the conditional expectation, we can write

lim_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} E_Q[H^Q(X_t | X_{1_t}, X_{2_t})]
= lim_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} E_Q[E_Q(-log Q(X_t | X_{1_t}, X_{2_t}) | X_{1_t}, X_{2_t})]
= lim_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} E_Q[-log Q(X_t | X_{1_t}, X_{2_t})]
= lim_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} [ -∑_{i∈S} ∑_{j∈S} ∑_{k∈S} q(i, j, k) log Q(k|i, j) ]
= lim_n (1/|T^(n)|) ∑_{t ∈ T^(n)\{o,-1}} [ -∑_{i∈S} ∑_{j∈S} ∑_{k∈S} q(i, j) Q(k|i, j) log Q(k|i, j) ]
= -∑_{i∈S} ∑_{j∈S} ∑_{k∈S} q(i, j) Q(k|i, j) log Q(k|i, j) · lim_n (|T^(n)| - 2)/|T^(n)|
= -∑_{i∈S} ∑_{j∈S} ∑_{k∈S} q(i, j) Q(k|i, j) log Q(k|i, j). (57)

Therefore, (56) follows from (57) directly.

4. Acknowledgements

The authors would like to thank the referee for the valuable suggestions. The work is supported by the Project of Higher Schools' Natural Science Basic Research of Jiangsu Province of China (3KJB0006). Wang Kangkang is the corresponding author.

References

[1] I. Benjamini and Y. Peres, Markov chains indexed by trees. Ann. Probab. 22(2).

[2] T. Berger and Z. Ye, Entropic aspects of random fields on trees. IEEE Trans. Inform. Theory 36(3).

[3] R. Pemantle, Automorphism invariant measures on trees. Ann. Probab. 20(3).

[4] Z.X. Ye and T. Berger, Ergodic regularity and asymptotic equipartition property of random fields on trees. J. Combin. Inform. System Sci.

[5] Z.X. Ye and T. Berger, Information Measures for Discrete Random Fields. Science Press, New York.

[6] W.G. Yang, Some limit properties for Markov chains indexed by a homogeneous tree. Statist. Probab. Lett. 65(2).

[7] W. Liu and W.G. Yang, An extension of Shannon-McMillan theorem and some limit properties for nonhomogeneous Markov chains. Stochastic Process. Appl. 61(1).

[8] W.G. Yang and W. Liu, Strong law of large numbers and Shannon-McMillan theorem for Markov chain fields on trees. IEEE Trans. Inform. Theory 48(1).

[9] W.C. Peng, W.G. Yang and B. Wang, A class of small deviation theorems for functionals of random fields on a homogeneous tree. J. Math. Anal. Appl. 361.

[10] K.K. Wang and D.C. Zong, Some Shannon-McMillan approximation theorems for Markov chain fields on the generalized Bethe tree. J. Inequal. Appl. (2011), Article ID 470910, 18 pages.

[11] H.L. Huang and W.G. Yang, Strong law of large numbers for Markov chains indexed by an infinite tree with uniformly bounded degree. Science in China 51(2).

[12] J.L. Doob, Stochastic Processes. Wiley, New York.

[13] W.G. Yang and Z.X. Ye, The asymptotic equipartition property for nonhomogeneous Markov chains indexed by a homogeneous tree. IEEE Trans. Inform. Theory 53(5).

Wang Kangkang
School of Mathematics and Physics, Jiangsu University of Science and Technology, Zhenjiang 212003, China
E-mail: wkk.cn@126.com

Zong Decai
Department of Computer Science and Engineering, Changshu Institute of Technology, Changshu 215500, China

Received April 2012; Accepted 7 October 2013


More information

INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING

INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING ERIC SHANG Abstract. This paper provides an introduction to Markov chains and their basic classifications and interesting properties. After establishing

More information

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales.

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. Lecture 2 1 Martingales We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. 1.1 Doob s inequality We have the following maximal

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution

More information

Conditional independence, conditional mixing and conditional association

Conditional independence, conditional mixing and conditional association Ann Inst Stat Math (2009) 61:441 460 DOI 10.1007/s10463-007-0152-2 Conditional independence, conditional mixing and conditional association B. L. S. Prakasa Rao Received: 25 July 2006 / Revised: 14 May

More information

INTRODUCTION TO MARKOV CHAIN MONTE CARLO

INTRODUCTION TO MARKOV CHAIN MONTE CARLO INTRODUCTION TO MARKOV CHAIN MONTE CARLO 1. Introduction: MCMC In its simplest incarnation, the Monte Carlo method is nothing more than a computerbased exploitation of the Law of Large Numbers to estimate

More information

Necessary and sufficient conditions for strong R-positivity

Necessary and sufficient conditions for strong R-positivity Necessary and sufficient conditions for strong R-positivity Wednesday, November 29th, 2017 The Perron-Frobenius theorem Let A = (A(x, y)) x,y S be a nonnegative matrix indexed by a countable set S. We

More information

The Markov Chain Monte Carlo Method

The Markov Chain Monte Carlo Method The Markov Chain Monte Carlo Method Idea: define an ergodic Markov chain whose stationary distribution is the desired probability distribution. Let X 0, X 1, X 2,..., X n be the run of the chain. The Markov

More information

Convergence Rate of Markov Chains

Convergence Rate of Markov Chains Convergence Rate of Markov Chains Will Perkins April 16, 2013 Convergence Last class we saw that if X n is an irreducible, aperiodic, positive recurrent Markov chain, then there exists a stationary distribution

More information

Central limit theorems for ergodic continuous-time Markov chains with applications to single birth processes

Central limit theorems for ergodic continuous-time Markov chains with applications to single birth processes Front. Math. China 215, 1(4): 933 947 DOI 1.17/s11464-15-488-5 Central limit theorems for ergodic continuous-time Markov chains with applications to single birth processes Yuanyuan LIU 1, Yuhui ZHANG 2

More information

Diff. Eq. App.( ) Midterm 1 Solutions

Diff. Eq. App.( ) Midterm 1 Solutions Diff. Eq. App.(110.302) Midterm 1 Solutions Johns Hopkins University February 28, 2011 Problem 1.[3 15 = 45 points] Solve the following differential equations. (Hint: Identify the types of the equations

More information

Convex Optimization CMU-10725

Convex Optimization CMU-10725 Convex Optimization CMU-10725 Simulated Annealing Barnabás Póczos & Ryan Tibshirani Andrey Markov Markov Chains 2 Markov Chains Markov chain: Homogen Markov chain: 3 Markov Chains Assume that the state

More information

Uniformly Uniformly-ergodic Markov chains and BSDEs

Uniformly Uniformly-ergodic Markov chains and BSDEs Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,

More information

RESEARCH ARTICLE. An extension of the polytope of doubly stochastic matrices

RESEARCH ARTICLE. An extension of the polytope of doubly stochastic matrices Linear and Multilinear Algebra Vol. 00, No. 00, Month 200x, 1 15 RESEARCH ARTICLE An extension of the polytope of doubly stochastic matrices Richard A. Brualdi a and Geir Dahl b a Department of Mathematics,

More information

Power Series and Analytic Function

Power Series and Analytic Function Dr Mansoor Alshehri King Saud University MATH204-Differential Equations Center of Excellence in Learning and Teaching 1 / 21 Some Reviews of Power Series Differentiation and Integration of a Power Series

More information

Stochastic process. X, a series of random variables indexed by t

Stochastic process. X, a series of random variables indexed by t Stochastic process X, a series of random variables indexed by t X={X(t), t 0} is a continuous time stochastic process X={X(t), t=0,1, } is a discrete time stochastic process X(t) is the state at time t,

More information

The range of tree-indexed random walk

The range of tree-indexed random walk The range of tree-indexed random walk Jean-François Le Gall, Shen Lin Institut universitaire de France et Université Paris-Sud Orsay Erdös Centennial Conference July 2013 Jean-François Le Gall (Université

More information

Probability & Computing

Probability & Computing Probability & Computing Stochastic Process time t {X t t 2 T } state space Ω X t 2 state x 2 discrete time: T is countable T = {0,, 2,...} discrete space: Ω is finite or countably infinite X 0,X,X 2,...

More information

Probability Theory. Richard F. Bass

Probability Theory. Richard F. Bass Probability Theory Richard F. Bass ii c Copyright 2014 Richard F. Bass Contents 1 Basic notions 1 1.1 A few definitions from measure theory............. 1 1.2 Definitions............................. 2

More information

Oscillatory Behavior of Third-order Difference Equations with Asynchronous Nonlinearities

Oscillatory Behavior of Third-order Difference Equations with Asynchronous Nonlinearities International Journal of Difference Equations ISSN 0973-6069, Volume 9, Number 2, pp 223 231 2014 http://campusmstedu/ijde Oscillatory Behavior of Third-order Difference Equations with Asynchronous Nonlinearities

More information

On the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables

On the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables On the Set of Limit Points of Normed Sums of Geometrically Weighted I.I.D. Bounded Random Variables Deli Li 1, Yongcheng Qi, and Andrew Rosalsky 3 1 Department of Mathematical Sciences, Lakehead University,

More information

Research Article Exponential Inequalities for Positively Associated Random Variables and Applications

Research Article Exponential Inequalities for Positively Associated Random Variables and Applications Hindawi Publishing Corporation Journal of Inequalities and Applications Volume 008, Article ID 38536, 11 pages doi:10.1155/008/38536 Research Article Exponential Inequalities for Positively Associated

More information

Viscosity approximation method for m-accretive mapping and variational inequality in Banach space

Viscosity approximation method for m-accretive mapping and variational inequality in Banach space An. Şt. Univ. Ovidius Constanţa Vol. 17(1), 2009, 91 104 Viscosity approximation method for m-accretive mapping and variational inequality in Banach space Zhenhua He 1, Deifei Zhang 1, Feng Gu 2 Abstract

More information

Han-Ying Liang, Dong-Xia Zhang, and Jong-Il Baek

Han-Ying Liang, Dong-Xia Zhang, and Jong-Il Baek J. Korean Math. Soc. 41 (2004), No. 5, pp. 883 894 CONVERGENCE OF WEIGHTED SUMS FOR DEPENDENT RANDOM VARIABLES Han-Ying Liang, Dong-Xia Zhang, and Jong-Il Baek Abstract. We discuss in this paper the strong

More information

Concentration Inequalities

Concentration Inequalities Chapter Concentration Inequalities I. Moment generating functions, the Chernoff method, and sub-gaussian and sub-exponential random variables a. Goal for this section: given a random variable X, how does

More information

The Game of Normal Numbers

The Game of Normal Numbers The Game of Normal Numbers Ehud Lehrer September 4, 2003 Abstract We introduce a two-player game where at each period one player, say, Player 2, chooses a distribution and the other player, Player 1, a

More information

ON THE DIFFUSION OPERATOR IN POPULATION GENETICS

ON THE DIFFUSION OPERATOR IN POPULATION GENETICS J. Appl. Math. & Informatics Vol. 30(2012), No. 3-4, pp. 677-683 Website: http://www.kcam.biz ON THE DIFFUSION OPERATOR IN POPULATION GENETICS WON CHOI Abstract. W.Choi([1]) obtains a complete description

More information

Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales

Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Prakash Balachandran Department of Mathematics Duke University April 2, 2008 1 Review of Discrete-Time

More information

Elementary Differential Equations, Section 2 Prof. Loftin: Practice Test Problems for Test Find the radius of convergence of the power series

Elementary Differential Equations, Section 2 Prof. Loftin: Practice Test Problems for Test Find the radius of convergence of the power series Elementary Differential Equations, Section 2 Prof. Loftin: Practice Test Problems for Test 2 SOLUTIONS 1. Find the radius of convergence of the power series Show your work. x + x2 2 + x3 3 + x4 4 + + xn

More information

EXTINCTION TIMES FOR A GENERAL BIRTH, DEATH AND CATASTROPHE PROCESS

EXTINCTION TIMES FOR A GENERAL BIRTH, DEATH AND CATASTROPHE PROCESS (February 25, 2004) EXTINCTION TIMES FOR A GENERAL BIRTH, DEATH AND CATASTROPHE PROCESS BEN CAIRNS, University of Queensland PHIL POLLETT, University of Queensland Abstract The birth, death and catastrophe

More information

arxiv: v1 [math.pr] 11 Dec 2017

arxiv: v1 [math.pr] 11 Dec 2017 Local limits of spatial Gibbs random graphs Eric Ossami Endo Department of Applied Mathematics eric@ime.usp.br Institute of Mathematics and Statistics - IME USP - University of São Paulo Johann Bernoulli

More information

Lecture 4: Applications: random trees, determinantal measures and sampling

Lecture 4: Applications: random trees, determinantal measures and sampling Lecture 4: Applications: random trees, determinantal measures and sampling Robin Pemantle University of Pennsylvania pemantle@math.upenn.edu Minerva Lectures at Columbia University 09 November, 2016 Sampling

More information

PRINTABLE VERSION. Practice Final. Question 1. Find the coordinates of the y-intercept for 5x 9y + 6 = 0. 2 (0, ) 3 3 (0, ) 2 2 (0, ) 3 6 (0, ) 5

PRINTABLE VERSION. Practice Final. Question 1. Find the coordinates of the y-intercept for 5x 9y + 6 = 0. 2 (0, ) 3 3 (0, ) 2 2 (0, ) 3 6 (0, ) 5 PRINTABLE VERSION Practice Final Question Find the coordinates of the y-intercept for 5x 9y + 6 = 0. (0, ) (0, ) (0, ) 6 (0, ) 5 6 (0, ) 5 Question Find the slope of the line: 7x 4y = 0 7 4 4 4 7 7 4 4

More information

Statistics 992 Continuous-time Markov Chains Spring 2004

Statistics 992 Continuous-time Markov Chains Spring 2004 Summary Continuous-time finite-state-space Markov chains are stochastic processes that are widely used to model the process of nucleotide substitution. This chapter aims to present much of the mathematics

More information

Weak convergence in Probability Theory A summer excursion! Day 3

Weak convergence in Probability Theory A summer excursion! Day 3 BCAM June 2013 1 Weak convergence in Probability Theory A summer excursion! Day 3 Armand M. Makowski ECE & ISR/HyNet University of Maryland at College Park armand@isr.umd.edu BCAM June 2013 2 Day 1: Basic

More information

A Note on the Glauber Dynamics for Sampling Independent Sets

A Note on the Glauber Dynamics for Sampling Independent Sets A Note on the Glauber Dynamics for Sampling Independent Sets Eric Vigoda Division of Informatics King s Buildings University of Edinburgh Edinburgh EH9 3JZ vigoda@dcs.ed.ac.uk Submitted: September 25,

More information

H(X) = plog 1 p +(1 p)log 1 1 p. With a slight abuse of notation, we denote this quantity by H(p) and refer to it as the binary entropy function.

H(X) = plog 1 p +(1 p)log 1 1 p. With a slight abuse of notation, we denote this quantity by H(p) and refer to it as the binary entropy function. LECTURE 2 Information Measures 2. ENTROPY LetXbeadiscreterandomvariableonanalphabetX drawnaccordingtotheprobability mass function (pmf) p() = P(X = ), X, denoted in short as X p(). The uncertainty about

More information

The Azéma-Yor Embedding in Non-Singular Diffusions

The Azéma-Yor Embedding in Non-Singular Diffusions Stochastic Process. Appl. Vol. 96, No. 2, 2001, 305-312 Research Report No. 406, 1999, Dept. Theoret. Statist. Aarhus The Azéma-Yor Embedding in Non-Singular Diffusions J. L. Pedersen and G. Peskir Let

More information

A CLASSROOM NOTE: ENTROPY, INFORMATION, AND MARKOV PROPERTY. Zoran R. Pop-Stojanović. 1. Introduction

A CLASSROOM NOTE: ENTROPY, INFORMATION, AND MARKOV PROPERTY. Zoran R. Pop-Stojanović. 1. Introduction THE TEACHING OF MATHEMATICS 2006, Vol IX,, pp 2 A CLASSROOM NOTE: ENTROPY, INFORMATION, AND MARKOV PROPERTY Zoran R Pop-Stojanović Abstract How to introduce the concept of the Markov Property in an elementary

More information

A NEW PROOF OF THE WIENER HOPF FACTORIZATION VIA BASU S THEOREM

A NEW PROOF OF THE WIENER HOPF FACTORIZATION VIA BASU S THEOREM J. Appl. Prob. 49, 876 882 (2012 Printed in England Applied Probability Trust 2012 A NEW PROOF OF THE WIENER HOPF FACTORIZATION VIA BASU S THEOREM BRIAN FRALIX and COLIN GALLAGHER, Clemson University Abstract

More information

Ollivier Ricci curvature for general graph Laplacians

Ollivier Ricci curvature for general graph Laplacians for general graph Laplacians York College and the Graduate Center City University of New York 6th Cornell Conference on Analysis, Probability and Mathematical Physics on Fractals Cornell University June

More information

Central Limit Theorem for Non-stationary Markov Chains

Central Limit Theorem for Non-stationary Markov Chains Central Limit Theorem for Non-stationary Markov Chains Magda Peligrad University of Cincinnati April 2011 (Institute) April 2011 1 / 30 Plan of talk Markov processes with Nonhomogeneous transition probabilities

More information

arxiv: v1 [math.pr] 6 Jan 2014

arxiv: v1 [math.pr] 6 Jan 2014 Recurrence for vertex-reinforced random walks on Z with weak reinforcements. Arvind Singh arxiv:40.034v [math.pr] 6 Jan 04 Abstract We prove that any vertex-reinforced random walk on the integer lattice

More information

General Glivenko-Cantelli theorems

General Glivenko-Cantelli theorems The ISI s Journal for the Rapid Dissemination of Statistics Research (wileyonlinelibrary.com) DOI: 10.100X/sta.0000......................................................................................................

More information

From trees to seeds:

From trees to seeds: From trees to seeds: on the inference of the seed from large random trees Joint work with Sébastien Bubeck, Ronen Eldan, and Elchanan Mossel Miklós Z. Rácz Microsoft Research Banff Retreat September 25,

More information

Feedback Capacity of a Class of Symmetric Finite-State Markov Channels

Feedback Capacity of a Class of Symmetric Finite-State Markov Channels Feedback Capacity of a Class of Symmetric Finite-State Markov Channels Nevroz Şen, Fady Alajaji and Serdar Yüksel Department of Mathematics and Statistics Queen s University Kingston, ON K7L 3N6, Canada

More information

Statistical Physics on Sparse Random Graphs: Mathematical Perspective

Statistical Physics on Sparse Random Graphs: Mathematical Perspective Statistical Physics on Sparse Random Graphs: Mathematical Perspective Amir Dembo Stanford University Northwestern, July 19, 2016 x 5 x 6 Factor model [DM10, Eqn. (1.4)] x 1 x 2 x 3 x 4 x 9 x8 x 7 x 10

More information

SIMILAR MARKOV CHAINS

SIMILAR MARKOV CHAINS SIMILAR MARKOV CHAINS by Phil Pollett The University of Queensland MAIN REFERENCES Convergence of Markov transition probabilities and their spectral properties 1. Vere-Jones, D. Geometric ergodicity in

More information

Lecture 9: Counting Matchings

Lecture 9: Counting Matchings Counting and Sampling Fall 207 Lecture 9: Counting Matchings Lecturer: Shayan Oveis Gharan October 20 Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications.

More information

Shih-sen Chang, Yeol Je Cho, and Haiyun Zhou

Shih-sen Chang, Yeol Je Cho, and Haiyun Zhou J. Korean Math. Soc. 38 (2001), No. 6, pp. 1245 1260 DEMI-CLOSED PRINCIPLE AND WEAK CONVERGENCE PROBLEMS FOR ASYMPTOTICALLY NONEXPANSIVE MAPPINGS Shih-sen Chang, Yeol Je Cho, and Haiyun Zhou Abstract.

More information

An Introduction to Entropy and Subshifts of. Finite Type

An Introduction to Entropy and Subshifts of. Finite Type An Introduction to Entropy and Subshifts of Finite Type Abby Pekoske Department of Mathematics Oregon State University pekoskea@math.oregonstate.edu August 4, 2015 Abstract This work gives an overview

More information

Information Theory: Entropy, Markov Chains, and Huffman Coding

Information Theory: Entropy, Markov Chains, and Huffman Coding The University of Notre Dame A senior thesis submitted to the Department of Mathematics and the Glynn Family Honors Program Information Theory: Entropy, Markov Chains, and Huffman Coding Patrick LeBlanc

More information

arxiv: v1 [math.co] 5 Apr 2019

arxiv: v1 [math.co] 5 Apr 2019 arxiv:1904.02924v1 [math.co] 5 Apr 2019 The asymptotics of the partition of the cube into Weyl simplices, and an encoding of a Bernoulli scheme A. M. Vershik 02.02.2019 Abstract We suggest a combinatorial

More information

Bootstrap Percolation on Periodic Trees

Bootstrap Percolation on Periodic Trees Bootstrap Percolation on Periodic Trees Milan Bradonjić Iraj Saniee Abstract We study bootstrap percolation with the threshold parameter θ 2 and the initial probability p on infinite periodic trees that

More information

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321 Lecture 11: Introduction to Markov Chains Copyright G. Caire (Sample Lectures) 321 Discrete-time random processes A sequence of RVs indexed by a variable n 2 {0, 1, 2,...} forms a discretetime random process

More information

On Approximate Pattern Matching for a Class of Gibbs Random Fields

On Approximate Pattern Matching for a Class of Gibbs Random Fields On Approximate Pattern Matching for a Class of Gibbs Random Fields J.-R. Chazottes F. Redig E. Verbitskiy March 3, 2005 Abstract: We prove an exponential approximation for the law of approximate occurrence

More information

On asymptotic behavior of a finite Markov chain

On asymptotic behavior of a finite Markov chain 1 On asymptotic behavior of a finite Markov chain Alina Nicolae Department of Mathematical Analysis Probability. University Transilvania of Braşov. Romania. Keywords: convergence, weak ergodicity, strong

More information

LOCAL CONVERGENCE OF GRAPHS AND ENUMERATION OF SPANNING TREES. 1. Introduction

LOCAL CONVERGENCE OF GRAPHS AND ENUMERATION OF SPANNING TREES. 1. Introduction LOCAL CONVERGENCE OF GRAPHS AND ENUMERATION OF SPANNING TREES MUSTAZEE RAHMAN 1 Introduction A spanning tree in a connected graph G is a subgraph that contains every vertex of G and is itself a tree Clearly,

More information

1 Series Solutions Near Regular Singular Points

1 Series Solutions Near Regular Singular Points 1 Series Solutions Near Regular Singular Points All of the work here will be directed toward finding series solutions of a second order linear homogeneous ordinary differential equation: P xy + Qxy + Rxy

More information

. Find E(V ) and var(v ).

. Find E(V ) and var(v ). Math 6382/6383: Probability Models and Mathematical Statistics Sample Preliminary Exam Questions 1. A person tosses a fair coin until she obtains 2 heads in a row. She then tosses a fair die the same number

More information

STOCHASTIC PROCESSES Basic notions

STOCHASTIC PROCESSES Basic notions J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving

More information

Nonparametric inference for ergodic, stationary time series.

Nonparametric inference for ergodic, stationary time series. G. Morvai, S. Yakowitz, and L. Györfi: Nonparametric inference for ergodic, stationary time series. Ann. Statist. 24 (1996), no. 1, 370 379. Abstract The setting is a stationary, ergodic time series. The

More information

OPTIMIZATION BY SIMULATED ANNEALING: A NECESSARY AND SUFFICIENT CONDITION FOR CONVERGENCE. Bruce Hajek* University of Illinois at Champaign-Urbana

OPTIMIZATION BY SIMULATED ANNEALING: A NECESSARY AND SUFFICIENT CONDITION FOR CONVERGENCE. Bruce Hajek* University of Illinois at Champaign-Urbana OPTIMIZATION BY SIMULATED ANNEALING: A NECESSARY AND SUFFICIENT CONDITION FOR CONVERGENCE Bruce Hajek* University of Illinois at Champaign-Urbana A Monte Carlo optimization technique called "simulated

More information

Strong convergence theorems for total quasi-ϕasymptotically

Strong convergence theorems for total quasi-ϕasymptotically RESEARCH Open Access Strong convergence theorems for total quasi-ϕasymptotically nonexpansive multi-valued mappings in Banach spaces Jinfang Tang 1 and Shih-sen Chang 2* * Correspondence: changss@yahoo.

More information

Susceptible-Infective-Removed Epidemics and Erdős-Rényi random

Susceptible-Infective-Removed Epidemics and Erdős-Rényi random Susceptible-Infective-Removed Epidemics and Erdős-Rényi random graphs MSR-Inria Joint Centre October 13, 2015 SIR epidemics: the Reed-Frost model Individuals i [n] when infected, attempt to infect all

More information