The Annals of Applied Probability 2008, Vol. 18, No. 4, 1588–1618
DOI: 10.1214/07-AAP0498
© Institute of Mathematical Statistics, 2008

STEIN'S METHOD FOR DISCRETE GIBBS MEASURES

BY PETER EICHELSBACHER AND GESINE REINERT

Ruhr-Universität Bochum and University of Oxford

Stein's method provides a way of bounding the distance of a probability distribution to a target distribution μ. Here we develop Stein's method for the class of discrete Gibbs measures with a density e^V, where V is the energy function. Using size bias couplings, we treat an example of Gibbs convergence for strongly correlated random variables due to Chayes and Klein [Helv. Phys. Acta 67 (1994) 30–42]. We obtain estimates of the approximation to a grand-canonical Gibbs ensemble. As side results, we slightly improve on the Barbour, Holst and Janson [Poisson Approximation (1992)] bounds for Poisson approximation to the sum of independent indicators, and in the case of the geometric distribution we derive better nonuniform Stein bounds than Brown and Xia [Ann. Probab. 29 (2001) 1373–1403].

Received April 2006; revised October 2007.
1 Supported in part by the DAAD Foundation and the British Council, ARC Project, Contract 33-ARC-00/27749, and by the RiP program at Oberwolfach, Germany.
AMS 2000 subject classifications. Primary 60E05; secondary 60F05, 60E15, 82B05.
Key words and phrases. Stein's method, Gibbs measures, birth and death processes, size bias coupling.

0. Introduction. Stein [17] introduced an elegant method for proving convergence of random variables toward a standard normal variable. Barbour [2, 3] and Götze [10] developed a dynamical point of view of Stein's method using time-reversible Markov processes. If μ is the stationary distribution of a homogeneous Markov process with generator A, then X ∼ μ if and only if EAg(X) = 0 for all functions g in the domain D(A) of the operator A. For any random variable W and for any suitable function f, to assess the distance |Ef(W) − ∫ f dμ| we first find a solution g of the equation Ag(x) = f(x) − ∫ f dμ. If g is in the domain D(A) of A, then we obtain
(0.1)   Ef(W) − ∫ f dμ = EAg(W).
Bounding the right-hand side of (0.1) for a sufficiently large class of functions f leads to bounds on distances between the distributions. Here we will mainly focus on the total variation distance, where indicator functions are the test functions to consider.
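As a quick numerical illustration of the characterization EAg(X) = 0 (not part of the paper; the generator below is Barbour's immigration–death generator for the Poisson distribution, and the test functions are chosen arbitrarily), one can check that the expectation vanishes when X ∼ Po(λ):

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def immigration_death_generator(g, j, lam):
    # Barbour's generator for Po(lam): (Ag)(j) = lam*(g(j+1)-g(j)) + j*(g(j-1)-g(j))
    down = g(j - 1) if j > 0 else g(j)   # the j*(g(j-1)-g(j)) term vanishes at j = 0 anyway
    return lam * (g(j + 1) - g(j)) + j * (down - g(j))

lam = 3.0
K = 80  # truncation point; the Po(3) mass beyond 80 is negligible

for g in (lambda j: j, lambda j: j * j, lambda j: math.sin(j)):
    mean_Ag = sum(poisson_pmf(j, lam) * immigration_death_generator(g, j, lam)
                  for j in range(K))
    print(f"E[Ag(X)] = {mean_Ag:+.2e}")   # all close to 0 for X ~ Po(lam)
```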

Such distributional bounds are not only useful for limits, but they are also of particular interest, for example, when the distance to the target distribution is not negligible. Such relatively large distances occur, for example, when only few observations are available, or when there is considerable dependence in the data slowing down the convergence. The bound on the distance can then be taken into account explicitly, for example when deriving confidence intervals.

In Section 1 we introduce Stein's method using the generator approach for discrete Gibbs measures, which are probability measures on N_0 with probability weights μ(k) proportional to exp(V(k)) for some function V : N_0 → R. The discrete Gibbs measures include the classical distributions Poisson, binomial, geometric, negative binomial, hypergeometric and the discrete uniform, to name but a few. One can construct simple birth–death processes, which are time-reversible, and which have a discrete Gibbs measure as their equilibrium measure. In the context of spatial Gibbs measures this connection was introduced by Preston [15].

In Section 2 we not only recall bounds for the increments of the solution of the Stein equation from [5], but we also derive bounds on the solution itself, in terms of the potential function of the Gibbs measure; see Lemmas 2.1, 2.4 and 2.5. The bounds, which to our knowledge are new, are illustrated for the Poisson, the binomial and the geometric distribution.

For nonnegative random variables, the size bias coupling is a very useful approach to disentangle dependence. Its formulation and its application to assess the distance to Gibbs measures are described in Section 3. We compare the distributions by comparing their respective generators, an idea also used in [11], while paying special attention to the case that the domains of the two generators are not identical. The size bias coupling then naturally leads to Theorem 3.5 and Corollary 3.8.

Section 4 applies these theoretical results to assess the distance to a Gibbs distribution for the law of a sum of possibly strongly correlated random variables. The main results are Theorem 4.2 and Proposition 4.4, where we give general bounds for the total variation distance between the distribution of certain sums of strongly correlated random variables and discrete Gibbs distributions of a grand-canonical form; see [16], Section 1.2.3. In particular, Theorem 4.2 complements the qualitative results in [6] by bounding the rate of convergence. Considering two examples with nontrivial interaction, we obtain bounds to limiting nonclassical Gibbs distributions. Our bound on the approximation error is phrased in terms of the particle number and the average density of the particles.

Summarizing, the main advantage of our considerations is the application of Stein's method to models with interaction described by Gibbs measures. When applying our bounds on the solution of the Stein equation to the Poisson distribution and the geometric distribution, surprisingly we obtain improved bounds for these well-studied distributions. Thus our investigation of discrete Gibbs measures serves also as a vehicle for obtaining results for classical discrete distributions.

The results presented here will also provide a foundation for introducing Stein's method for spatial Gibbs measures and Gibbs point processes, in forthcoming work.

1. Gibbs measures and birth–death processes.

1.1. Birth–death processes. A birth–death process {X(t), t ≥ 0} is a Markov process on the state space {0, 1, ..., N}, where N ∈ N_0 ∪ {∞}, characterized by (nonnegative) birth rates {b_j, j ∈ {0, 1, ..., N}} and (nonnegative) death rates {d_j, j ∈ {1, ..., N}}, and has a generator
(1.1)   (Ah)(j) = b_j (h(j+1) − h(j)) − d_j (h(j) − h(j−1))
with j ∈ {0, 1, ..., N}. It is well known that for any given probability distribution μ on N_0 one can construct a birth–death process which has this distribution as its stationary distribution. For N = ∞, recurrence of the birth–death process X(·) is equivalent to
Σ_{n≥1} (d_1 ··· d_n)/(b_1 ··· b_n) = ∞.
The process X(·) is ergodic if and only if the process is recurrent and
c := 1 + Σ_{n≥1} (b_0 ··· b_{n−1})/(d_1 ··· d_n) < ∞.
For N < ∞, irreducibility and hence ergodicity holds if b_j > 0, j = 0, 1, ..., N−1, b_N = 0 and d_j > 0, j = 1, ..., N. In either case the stationary distribution of the ergodic process is given by μ(0) = 1/c and
(1.2)   μ(n) = μ(0) (b_0 ··· b_{n−1})/(d_1 ··· d_n).
For any given probability distribution μ on N_0 these recursive formulas give the corresponding class of birth–death processes which have μ as the stationary distribution. For the choice of a unit per capita death rate d_j = j one simply obtains that
(1.3)   b_j = (j+1) μ(j+1)/μ(j),
for j ∈ N. Here and throughout, if N = ∞, then by j ∈ N we mean j = 0, 1, .... The choice of these rates corresponds to the case where the detailed balance condition
(1.4)   μ(j) b_j = μ(j+1) d_{j+1},   j = 0, 1, ..., N,
holds; see, for example, [1] and [12]. We will apply these well-known facts to the discrete Gibbs measure introduced in the next subsection. Now might be a good time to note that not all probability distributions on N_0 are given in closed form expressions; notable exceptions occur, for example, in compound Poisson distributions.
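A small sketch of (1.2)–(1.4) (illustrative only; the target distribution below is an arbitrary probability vector, not an example from the paper): given μ on {0, ..., N}, take unit per capita death rates d_j = j and birth rates b_j = (j+1)μ(j+1)/μ(j) as in (1.3), and check detailed balance and that (1.2) recovers μ.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = rng.random(8)
mu /= mu.sum()                      # an arbitrary distribution on {0,...,7}
N = len(mu) - 1

d = np.arange(N + 1, dtype=float)                 # unit per capita death rates d_j = j
b = np.zeros(N + 1)
b[:N] = np.arange(1, N + 1) * mu[1:] / mu[:N]     # (1.3): b_j = (j+1) mu(j+1)/mu(j); b_N = 0

# detailed balance (1.4): mu(j) b_j = mu(j+1) d_{j+1}
assert np.allclose(mu[:N] * b[:N], mu[1:] * d[1:])

# stationary distribution via (1.2): mu(n) = mu(0) * b_0...b_{n-1} / (d_1...d_n)
ratio = np.ones(N + 1)
for n in range(1, N + 1):
    ratio[n] = ratio[n - 1] * b[n - 1] / d[n]
mu_rec = ratio / ratio.sum()        # mu(0) = 1/c normalizes the products
assert np.allclose(mu_rec, mu)
print("detailed balance and (1.2) verified")
```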

1.2. Gibbs measures as stationary distributions of birth–death processes. Gibbs measures can be viewed as stationary measures of birth–death processes, as follows. We start with a discrete Gibbs measure μ; for convenience we assume that μ has support supp(μ) = {0, ..., N}, where N ∈ N_0 ∪ {∞}, so that μ is given by
(1.5)   μ(k) = (1/Z) e^{V(k)} ω^k/k!,   k = 0, 1, ..., N,
for some function V : N_0 → R. Here Z = Σ_{k=0}^{N} exp(V(k)) ω^k/k!, and ω > 0 is fixed. We assume that Z exists; Z is known as the partition function in models of statistical mechanics. We set V(k) = −∞ for k > N. In terms of statistical mechanics, μ is a grand-canonical ensemble, ω is the activity and V is the potential energy; see [16], Chapter 1.2.

The class of discrete Gibbs measures in (1.5) is equivalent to the class of all discrete probability distributions on N_0 by the following simple identification: For a given probability distribution (μ(k))_{k∈N_0} we have
(1.6)   V(k) = log μ(k) + log k! + log Z − k log ω,   k = 0, 1, ..., N,
with V(0) = log μ(0) + log Z. Hence Z = e^{V(0)}/μ(0). The latter formula gives the possibility of proving convergence for a sequence of partition functions Z_n by using the convergence of the corresponding sequence μ_n(0). However, the representation of a probability measure as a Gibbs measure is not unique. For example, the Poisson distribution with parameter λ, Po(λ), can be written in the form (1.5) with ω = λ, V(k) = −λ, k ≥ 0, Z = 1. Alternatively, we could have chosen V(k) = 0, ω = λ, Z = e^λ.

From (1.3), if we choose a unit per capita death rate d_k = k, and if we choose the birth rate
(1.7)   b_k = ω e^{V(k+1)−V(k)} = (k+1) μ(k+1)/μ(k),
for k, k+1 ∈ supp(μ), then
(1.8)   (Ah)(k) = (h(k+1) − h(k)) ω e^{V(k+1)−V(k)} + k (h(k−1) − h(k)),
for k, k+1, k−1 ∈ supp(μ), k ∈ N [set h(−1) = 0], is the generator of the time-reversible birth–death process with invariant measure μ. Note that this choice of d_k and b_k ensures that the detailed balance condition (1.4) is satisfied. Hence we have chosen a birth–death process with generator of the type (1.1) which is easily seen to be an ergodic process. Namely, if N = ∞, then the recurrence of the corresponding process is given since
Σ_{n≥1} (d_1 ··· d_n)/(b_1 ··· b_n) = Σ_{n≥1} μ(1)/((n+1) μ(n+1)) ≥ Σ_{n≥1} μ(1)/(n+1) = ∞.

For N < ∞ we have that V(k) > −∞ for k ∈ supp(μ), so that b_k > 0 for k = 0, ..., N−1, and as V(N+1) = −∞, we obtain b_N = 0. Due to the unit per capita death rate the ergodicity follows. From 0 ∈ supp(μ) we obtain c = 1/μ(0) < ∞. Hence μ is indeed the unique stationary distribution of the birth–death process.

In the development of Stein's method unit per capita is a common and useful choice for the death rate; see [2]. It is worth noting that there are modifications of this choice in [5] and [11]. To compare with the approach in Barbour [2] we reformulate the generator: let h(k+1) − h(k) =: g(k+1); then (1.8) yields
(1.9)   (Ag)(k) = g(k+1) ω e^{V(k+1)−V(k)} − k g(k),   k = 0, 1, ..., N.
The generalization to the case of arbitrary death rates d_k is straightforward; we omit it here to streamline the paper.

2. Stein identity for Gibbs measures and bounds. In view of the generator approach to Stein's method, for a test function f : supp(μ) → R the appropriate Stein equation for μ given in (1.5) is
(2.1)   (Ag)(j) = f(j) − μ(f)
for j ∈ {0, ..., N} and A the generator given by (1.8). Here,
μ(f) := Σ_{k=0}^{N} f(k) μ(k)
is the expectation of f under μ. We are interested in indicator functions f(j) = I[j ∈ A] for some A ⊂ supp(μ). Thus if W is a random variable on supp(μ), we obtain
(2.2)   EAg(W) = P(W ∈ A) − μ(A).
The right-hand side of (2.2) links in nicely with the total variation distance. Recall that for P and Q being probability distributions on N_0, we define the total variation distance (metric) by
d_TV(P, Q) := sup_{A⊂N_0} |P(A) − Q(A)| = sup_{f∈B} |P(f) − Q(f)| = (1/2) Σ_{k∈N_0} |P({k}) − Q({k})|,
where B denotes the set of measurable functions f with 0 ≤ f ≤ 1. Hence bounding the left-hand side of (2.2) uniformly in A ⊂ N_0 gives a bound on the total variation distance.
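The equivalence of the three expressions for d_TV above is easy to check numerically; the following sketch (illustrative only, with arbitrary distributions P and Q on a small support) also verifies that the supremum over sets is attained at A = {k : P({k}) > Q({k})}.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
P = rng.random(6); P /= P.sum()
Q = rng.random(6); Q /= Q.sum()

# half the L1 distance
half_l1 = 0.5 * np.abs(P - Q).sum()

# supremum over all subsets A of the support (brute force on a small support)
sup_sets = max(abs(P[list(A)].sum() - Q[list(A)].sum())
               for r in range(7) for A in combinations(range(6), r))

# the supremum is attained at A = {k : P(k) > Q(k)}
A_star = P > Q
assert np.isclose(half_l1, sup_sets)
assert np.isclose(half_l1, abs(P[A_star].sum() - Q[A_star].sum()))
print(f"d_TV(P, Q) = {half_l1:.4f}")
```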

In the following we give a Stein characterization for μ given in (1.5) and a solution of the corresponding Stein equation. Let Z be a random variable distributed according to the Gibbs measure μ defined in (1.5) and for a function g : N_0 → R assume E|Zg(Z)| < ∞. From (1.9) we obtain the Stein characterization for μ: if Z is distributed according to μ, given by (1.5), then for every function g : N_0 → R with E|Zg(Z)| < ∞,
(2.3)   E{ω e^{V(Z+1)−V(Z)} g(Z+1) − Z g(Z)} = 0.
If f : supp(μ) → R is an arbitrary function, and μ is given by (1.5), then there exists a solution g_{f,V} : N_0 → R for (2.1) with operator as in (1.9),
(2.4)   g_{f,V}(k+1) ω e^{V(k+1)−V(k)} − k g_{f,V}(k) = f(k) − μ(f),   k ∈ N;
see [5]. This solution g_{f,V} is such that g_{f,V}(0) = 0, and for j = 0, ..., N the solution g_{f,V} can be represented by recursion as
(2.5)   g_{f,V}(j+1) = (j!/(ω^{j+1} e^{V(j+1)})) Σ_{k=0}^{j} e^{V(k)} (ω^k/k!) (f(k) − μ(f))
(2.6)               = −(j!/(ω^{j+1} e^{V(j+1)})) Σ_{k=j+1}^{N} e^{V(k)} (ω^k/k!) (f(k) − μ(f)).
We may set g_{f,V}(N+1) = 0.

Having a suitable Stein equation for Gibbs measures and its solution at our disposal, the next step in Stein's method is to bound the increments of the solutions; it will turn out advantageous to bound the solutions themselves as well. For any function g : N_0 → R we define Δg(j) := g(j+1) − g(j). In applications often only bounds on the increments are needed, hence we start with these. Uniform bounds on the increments are also called Stein factors or magic factors. Nonuniform Stein factor bounds may yield better overall bounds on distributional distances and are therefore of particular interest. Lemma 2.1 gives such a nonuniform bound. The proof is given in [5], Lemma 2.4 and Theorem 2.10.

We introduce the class of functions
(2.7)   B := {f : supp(μ) → [0, 1]}
and we define
(2.8)   F(k) := Σ_{i=0}^{k} μ(i),   F̄(k) := Σ_{i=k}^{N} μ(i).
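The representation (2.5) and the Stein equation (2.4) can be verified directly on an example. The sketch below (illustrative; it uses the binomial distribution in its Gibbs form from Example 2.11 later in this section, together with an arbitrary indicator test function) solves (2.4) by the equivalent one-step recursion g(k+1) = (f(k) − μ(f) + kg(k))/b_k, cross-checks against (2.5), and confirms that (Ag)(k) = f(k) − μ(f) on the support.

```python
import numpy as np
from math import comb, factorial, log

# Binomial(n, p) written in the Gibbs form (1.5): omega = p/(1-p), V(k) = -log((n-k)!)
n, p = 10, 0.3
omega = p / (1 - p)
V = np.array([-log(factorial(n - k)) for k in range(n + 1)])
w = np.exp(V) * omega ** np.arange(n + 1) / np.array([factorial(k) for k in range(n + 1)])
mu = w / w.sum()                                   # equals the Binomial(n, p) pmf
assert np.allclose(mu, [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)])

f = (np.arange(n + 1) <= 3).astype(float)          # an indicator test function, f in B
mu_f = float(f @ mu)
b = np.zeros(n + 1)
b[:n] = omega * np.exp(V[1:] - V[:n])              # birth rates (1.7); b_n = 0

# solve the Stein equation (2.4) step by step: g(0) = 0, b_k g(k+1) - k g(k) = f(k) - mu(f)
g = np.zeros(n + 2)
for k in range(n):
    g[k + 1] = (f[k] - mu_f + k * g[k]) / b[k]

# cross-check with the explicit representation (2.5)
for j in range(n):
    lhs = factorial(j) / (omega ** (j + 1) * np.exp(V[j + 1])) * sum(
        np.exp(V[k]) * omega**k / factorial(k) * (f[k] - mu_f) for k in range(j + 1))
    assert np.isclose(lhs, g[j + 1])

# and verify (Ag)(k) = f(k) - mu(f) for k = 0, ..., n
residual = b * g[1:] - np.arange(n + 1) * g[:n + 1] - (f - mu_f)
assert np.allclose(residual, 0)
print("Stein equation solved; max residual", float(np.abs(residual).max()))
```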

7 594 P. EICHELSBACHER AND G. REINERT LEMMA 2. (Nonuniform bounds for increments). Assume that the death rates are unit per capita and assume that the birth rates in (.7) fulfill, for each k =, 2,...,N, F(k) k F(k ) ωev(k+) V(k) k F(k+ ) (2.9). F(k) Let f B and let g f,v be its solution to the Stein equation (2.4). Then, for every j {0,...,N}, (2.0) sup g f,v (j) = f B ω ev(j) V(j+) F(j + ) + F(j ). j Moreover, for every j {0,...,N}, sup g f,v (j) f B j ev(j) ωe V(j+). REMARK 2.2. Condition (2.9) is Condition (C2) in [5], Lemma 2.4. In this paper, three more conditions are formulated, which are all equivalent to (C2). For example, if the death rates are unit per capita and the birth rates are nonincreasing: (2.) e V(k+) V(k) e V(k) V(k ), k= 0,,...,N, Condition (C4) in [5] is satisfied and (2.0) holds. REMARK 2.3. Reference [] gives an elegant recursive proof of (2.0) fora choice of birth rates and death rates which make the method of exchangeable pairs work. In particular, unit per capita death rates are not used in her results. Under a slightly weaker condition than (2.9) it is possible to derive nonuniform bounds on the solution g f,v of the Stein equation itself, as follows. LEMMA 2.4. Consider the solution g f,v of the Stein equation (2.4), where f B, given in (2.7). Assume that, for each k =, 2,...,N, ωe V(k+) V(k) k F(k+ ) (2.2) F(k) is satisfied. Then we obtain for every j {,...,N} that ( { g f,v (j) min ln(j), } kμ(k) + ) ω ev(0) V() k F(j). The proof is related to ideas of [4], pages 7 8, used in Poisson approximation.

8 STEIN S METHOD FOR DISCRETE GIBBS MEASURES 595 PROOF OF LEMMA 2.4. Forj {0,...,N} let U j ={0,,...,j}.Weusethe notation f A (x) = f(x)(x A). It is easy to see from (2.5) that for f B and A {0,...,N},andforj {0,...,N }, and hence g f,v (j + ) = j! ω j+ e V(j+) {μ(f Uj )μ(u c j ) μ(f U c j )μ(u j )} (2.3) g f,v (j + ) j! ω j+ e V(j+) μ(u j )μ(uj c ). From (2.2) wehavethat Alternatively, j! ω j+ e V(j+) μ(u j ) = ω j! ω j+ e V(j+) μ(u j ) proving the assertion. j k=0 j e V(k) V(j+) ωk j j! k! F(0) k= F(j + ) k + ω ev(0) V() (ln(j) + ) ω ev(0) V() = j k= ( N k= ( N F(j + ). F(k) F(j + ) k + ω ev(0) V() F(k)+ ) ω ev(0) V() kμ(k) + ) ω ev(0) V() k= F(0) F(j + ) F(0) F(j + ) F(j + ) F(j + ), We now prove a crude but usable bound for the supremum norm g f,v which does not require (2.9) or(2.2) to be satisfied. To this purpose we introduce the quantities (2.4) λ := ω inf 0 k N ev(k+) V(k) = inf b k, 0 k N λ 2 := ω sup 0 k N e V(k+) V(k) = sup 0 k N b k. Note that λ 2 λ by construction.

9 596 P. EICHELSBACHER AND G. REINERT LEMMA 2.5. Consider the solution g f,v of the Stein equation (2.4), where f B, given in (2.7). Assume that λ > 0 and λ 2 <. Then g f,v 2 + ( ) λ2 λ2 λ 2 (λ 2 2 λ ). 2 λ + PROOF. As the proof follows the ideas of [4], pages 7 8, used in Poisson approximation, we only sketch it here. With the notation as for (2.3) we obtain the bounds g f,v (j + ) j! ω j+ e V(j+) μ(u j ) Similarly we have = ω λ j k=0 e V(k) V(j+) ωk j j! k! j l=0 λ l j! (j l)!. g f,v (j + ) j! ω j+ e V(j+) μ(u c j ) = ω N k=j+ N k=j+ e V(k) V(j+) ωk j j! k! λ k j j! 2 k! = j!e λ 2 λ j 2 Po(λ 2 )(U c j ). This puts us in the situation of (.20) and (.2) in Chapter of [4]. We obtain that for j<λ gf,v (j + ) 2min(,λ /2 ) 2, and for j>λ 2 2 g f,v (j + ) j + 2 (j + )(j + 2 λ 2 ) 5 4 < 2. Thus we have proved the assertion for j<λ and for j>λ 2 2. If λ >λ 2 2, then our bound covers the whole domain. Now assume that S ={ λ +,..., λ 2 2} is nonempty. For j S we have g f,v (j + ) j! ω j+ e V(j+)( μ(u c j S) + μ(u c j \ S)) μ(u j )

10 STEIN S METHOD FOR DISCRETE GIBBS MEASURES j! ω j+ e V(j+) μ(u c j S) λ j! k=j+ λ k j 2 k! 2 + j!λ j 2 e λ 2 Po(λ 2 ){0,..., λ 2 2}. From [4], Proposition A.2.3(iii), page 259, the Poisson probabilities can be bounded as λ 2 Po(λ 2 ){0,..., λ 2 2} λ λ 2 Po(λ 2)( λ 2 2) and so, as λ 2 j forj λ 2 2, g f,v (j + ) 2 + j! 2 ( λ 2 2)! λ λ 2 2 j (j + ) ( λ 2 2 j) λ λ 2 2 j 2 ( λ2 ) λ2 2 j This finishes the proof λ + ( λ2 λ + ) λ2 λ 2. REMARK 2.6. As λ and λ 2 stay invariant under the reparametrization ω ω = αω, we argue that these are reasonable quantities to employ. REMARK 2.7. While the examples below will show that the bound in Lemma 2.5 may not be informative, in particular examples better bounds may be obtainable in a straightforward manner. Lemma 2.5 is nevertheless useful as it gives conditions on the birth rates so that g f,v is bounded, and these conditions do not involve monotonicity of the birth rates. REMARK 2.8. Note that neither the last bound in Lemma 2. nor the bound in Lemma 2.5 use the normalizing constant Z explicitly. Again, the generalization of the bounds to the case of arbitrary death rates d k would be straightforward. A complication arises when we compare two distributions with nonidentical supports. Therefore it will turn out to be useful to consider the following extension from finite to infinite support for a generator A. For convenience assume that the

11 598 P. EICHELSBACHER AND G. REINERT corresponding measure μ has supp(μ) ={0,,...,n}, for some finite n, so that A is only defined for functions with support {0,,...,n} [recall that we set g(n + ) = 0]. We extend A to be defined for functions on {0,, 2,...} as follows: (2.5) (Ãg)(k) := g(k + )ωe V(k+) V(k) kg(k), (Ãg)(k) := kg(k), k n +. k = 0,,...,n, Thus when there are more than n individuals the process is pure death. Now if X μ, then still EÃf(X)= 0 for all functions f D(Ã), the domain of Ã,since the operator à represents a birth death process with the same invariant distribution as A; see(.2). Next we extend the solution of the Stein equation, so that for f : {0,,...,n} R, the solution g f,v is defined by (2.5) fork {0,,...,n}, and by (2.6) g f,v (k) := μ(f ), k n +. k [Note that our formula (2.5) would have yielded g f,v (n + ) = 0.] For a related suggestion see [4], Chapter 9.2. The above definition ensures that the Stein equation (2.4) is still satisfied. However, the bounds on the solution of (2.4) change slightly. In contrast to (2.7), let (2.7) B 0 := {f : N 0 [0, ],f(x)= 0forx/ supp(μ)}. LEMMA 2.9. Let Ã, associated with μ, V, and ω be given as in (2.5), where μ has supp(μ) ={0,,...,n}. Let f B 0 and let g f,v be the solution of the Stein equation (2.4) given in (2.5) for k n, and as in (2.6) for k n +. Defining the sum n l=j+ as 0 for j n, with λ and λ 2 being given in (2.4) for μ, g f,v (j) ( λ2 λ + g f,v (j) j, j n +. ) λ2 λ 2 (λ 2 2 λ ), j = 0,,...,n; In the case that (2.2) is satisfied we also have ( { g f,v (j) min ln(j), } kμ(k) + ) ω ev(0) V() k F(j), j = 0,,...,n, and if (2.9) is satisfied, then sup g f,v (j) j ev(j), j n, ωev(j+) f B j, j >n.

PROOF. The first assertion follows directly from Lemma 2.5 and (2.16). The second assertion follows from Lemma 2.1 for j ≤ n, and for j ≥ n+1 it follows from (2.16) that
g_{f,V}(j+1) − g_{f,V}(j) = μ(f) {1/(j+1) − 1/j},
so that
|g_{f,V}(j+1) − g_{f,V}(j)| < 1/j.
For j = n we obtain
g_{f,V}(n+1) − g_{f,V}(n) = μ(f)/(n+1) − (1/n)(μ(f) − f(n)) = f(n)/n − μ(f)/(n(n+1)),
so that
|g_{f,V}(n+1) − g_{f,V}(n)| < 1/n.

We conclude this section with some examples.

EXAMPLE 2.10 (Poisson distribution Po(λ) with parameter λ > 0). We use ω = λ, V(k) = −λ, Z = 1. The Stein operator is
(Ag)(k) = g(k+1) λ − k g(k).
Applying Lemma 2.1 (b_k = λ is nonincreasing in k) gives the nonuniform bound
(2.18)   sup_{f∈B} |Δg_{f,Pois}(k)| ≤ min{1/k, 1/λ},
which leads to the well-known uniform bound 1/λ; see [4], Lemma 1.1.1. Yet [4], Lemma 1.1.1, gives the better bound λ^{−1}(1 − e^{−λ}) ≤ min(1, 1/λ). In the Poisson case the right-hand side of (2.10) gives
(1/k) Σ_{l=0}^{k−1} e^{−λ} λ^l/l! + (1/λ) Σ_{l=k+1}^{∞} e^{−λ} λ^l/l!.
This sum can be bounded by
(1/λ) Σ_{l=1}^{∞} e^{−λ} λ^l/l! = (e^{−λ}/λ)(e^λ − 1) = λ^{−1}(1 − e^{−λ}),

and therefore we obtain for any k ≥ 1
(2.19)   sup_{f∈B} |Δg_{f,Pois}(k)| ≤ λ^{−1}(1 − e^{−λ}),
as in [4], Lemma 1.1.1. The bounds (2.18) and (2.19) lead to
(2.20)   sup_{f∈B} |Δg_{f,Pois}(k)| ≤ min{1/k, λ^{−1}(1 − e^{−λ})},
which is a slight improvement of (2.18) and of [5]. For the Poisson distribution, λ_1 = λ_2 = λ in Lemma 2.5, and the bound 2 from Lemma 2.5 is poor compared to the bound ‖g‖ ≤ min(1, λ^{−1/2}) from Lemma 1.1.1 on page 7 in [4]. The bound in Lemma 2.4 is cumbersome to compute; but for λ ≥ 2 and for j = 1, say, the nonuniform bound |g(1)| ≤ (λ(1 − e^{−λ}))^{−1} is slightly more informative than the uniform bound ‖g‖ ≤ λ^{−1/2}. The nonuniform bound improves with increasing λ, and deteriorates with increasing j.

EXAMPLE 2.11 (Binomial distribution with parameters n and 0 < p < 1). We use ω = p/(1−p), V(k) = −log((n−k)!), and Z = (n!(1−p)^n)^{−1}. The Stein operator is
(Ag)(k) = g(k+1) p(n−k)/(1−p) − k g(k).
In [4] and [7] we find (Ag)(k) = g(k+1) p(n−k) − (1−p) k g(k) as the operator, which is equivalent to our formulation. The birth rates (b_k)_k are nonincreasing and we obtain from Lemma 2.1 that
sup_{f∈B} |Δg_{f,Bin}(k)| ≤ min{1/k, (1−p)/(p(n−k))}.
The proof of [4], Lemma 9.2.1, implicitly contains the same nonuniform bound. Formula (8) in [7] gives a bound of the form
sup_{f∈B} |Δg_{f,Bin}(k)| ≤ (1 − p^{n+1} − (1−p)^{n+1}) / ((n+1)(1−p)p)
for every 0 < k < n. Simple calculations show that the bound (8) in [7] is for some cases better and for some cases worse in comparison to our nonuniform result. From Lemma 2.5, where λ_2/λ_1 = n, we obtain a bound on ‖g_{f,Bin}‖ which is of order O(e^{n ln n}) and therefore in most applications not useful. The nonuniform bound of Lemma 2.4,
|g(j)| ≤ (Σ_{k=j}^{n} C(n,k) p^k (1−p)^{n−k})^{−1} (min{ln(j), np} + (1−p)/(np)),
will still be informative for j small and np large.
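The nonuniform Poisson bound (2.20), as read above, can be checked numerically. Since g_f is linear in f, the supremum of |Δg_f(k)| over f ∈ B is attained at indicator functions and equals the larger of the summed positive and negative increments produced by point-mass test functions; the sketch below (illustrative, with the support truncated) compares this with min{1/k, λ^{−1}(1 − e^{−λ})}.

```python
import math

def sup_poisson_increment(lam, k, K=80):
    """sup over f in B of |g_f(k+1) - g_f(k)| for the Po(lam) Stein operator
    (Ag)(j) = lam*g(j+1) - j*g(j); uses linearity of g_f in f, so the supremum
    is a sum of positive (or negative) parts of increments for point masses."""
    pmf = [math.exp(-lam) * lam**m / math.factorial(m) for m in range(K)]
    incs = []
    for m in range(K):                       # f = indicator of the single point {m}
        mu_f = pmf[m]
        g_prev, g_next = 0.0, 0.0            # g(0) = 0
        for j in range(k + 1):               # recursion g(j+1) = (f(j) - mu(f) + j*g(j)) / lam
            f_j = 1.0 if j == m else 0.0
            g_prev, g_next = g_next, (f_j - mu_f + j * g_next) / lam
        incs.append(g_next - g_prev)         # g(k+1) - g(k)
    pos = sum(c for c in incs if c > 0)
    neg = -sum(c for c in incs if c < 0)
    return max(pos, neg)

lam = 4.0
for k in (1, 2, 5, 10, 15):
    exact = sup_poisson_increment(lam, k)
    bound = min(1.0 / k, (1.0 - math.exp(-lam)) / lam)   # the right-hand side of (2.20)
    print(f"k={k:2d}: sup_f |Delta g_f(k)| = {exact:.4f} <= {bound:.4f}")
    assert exact <= bound + 1e-9
```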

EXAMPLE 2.12 (Geometric distribution). Consider μ(k) = p(1−p)^k for k = 0, 1, .... The Stein operator is
(Ag)(k) = g(k+1)(1−p)(k+1) − k g(k).
The birth rates b_k = (1−p)(k+1) fulfill b_k/b_{k−1} ≤ k/(k−1) = d_k/d_{k−1}. This is condition (C4) in [5], which is sufficient for (2.9). Hence applying [5], Theorem 2.10, one obtains
sup_{f∈B} |Δg_{f,geo}(k)| ≤ min{1/k, 1/((1−p)(k+1))},
which leads to the uniform bound 1/(1−p). We can improve their bounds by calculating the right-hand side of (2.10) explicitly:
sup_{f∈B} Δg_{f,V}(j) = (1/((1−p)(j+1))) (1−p)^{j+1} + (1/j)(1 − (1−p)^j)
= (1−p)^j/(j+1) + (1/j)(1 − (1−p)^j)
= 1/j − (1−p)^j/(j(j+1)).
Obviously Δg_{f,V}(j) ≤ 1/j. Using Bernoulli's inequality (1−x)^n ≥ 1−nx for x < 1 and n ∈ N it follows that
Δg_{f,V}(j) ≤ 1/j − (1−pj)/(j(j+1)) = (j + pj)/(j(j+1)) = (1+p)/(j+1).
We obtain
sup_{f∈B} |Δg_{f,geo}(k)| ≤ (1+p)/(k+1),
which leads to the uniform bound (1+p). In Lemma 2.5 we obtain λ_1 = 1−p, λ_2 = ∞, and hence the lemma does not give informative bounds. Lemma 2.4 gives, with F̄(j) = (1−p)^j, the bound
|g_{f,geo}(j)| ≤ (1−p)^{−j} (min{ln(j), (1−p)/p} + 1/(1−p)).
However, with (2.6) and f ∈ B we may bound directly
|g_{f,geo}(j+1)| ≤ (j!/((1−p)^{j+1}(j+1)!)) Σ_{k=j+1}^{∞} (1−p)^k = 1/(p(j+1)),
and using that g_{f,geo}(0) = 0 we obtain the improved bound ‖g_{f,geo}‖ ≤ 1/p. To our knowledge this bound is new. In [13] the author considered another Stein operator. Hence the results cannot be compared. In [14] the authors considered the same Stein operator. They obtain sup_{f∈B} |Δg_{f,geo}(k)| ≤ 1/k. Hence [5] already improved this result.
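The bound ‖g_{f,geo}‖ ≤ 1/p can be checked in the same way (a sketch, not from the paper; the same linearity-in-f argument reduces the supremum over f ∈ B to point-mass test functions):

```python
import numpy as np

def sup_geometric_solution(p, j, K=2000):
    """sup over f in B of |g_f(j)| for the geometric Stein operator
    (Ag)(k) = (1-p)*(k+1)*g(k+1) - k*g(k), mu(k) = p*(1-p)^k (support truncated at K)."""
    q = 1.0 - p
    pmf = p * q ** np.arange(K)
    vals = []
    for m in range(K):                       # f = indicator of {m}
        mu_f = pmf[m]
        g = 0.0                              # g(0) = 0
        for k in range(j):                   # g(k+1) = (f(k) - mu(f) + k*g(k)) / ((1-p)*(k+1))
            f_k = 1.0 if k == m else 0.0
            g = (f_k - mu_f + k * g) / (q * (k + 1))
        vals.append(g)
    vals = np.array(vals)
    return max(vals[vals > 0].sum(), -vals[vals < 0].sum())

p = 0.2
for j in (1, 3, 10, 30):
    s = sup_geometric_solution(p, j)
    print(f"j={j:2d}: sup_f |g_f(j)| = {s:.4f} <= 1/p = {1/p:.1f}")
    assert s <= 1.0 / p + 1e-8
```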

3. Size-bias couplings and Gibbs measures.

3.1. Size-bias couplings and Stein characterizations. Stein's method is most powerful in the presence of dependence. To treat global dependence, couplings have proved a useful tool. For discrete, nonnegative random variables, so-called size-bias couplings fit nicely into our framework as they link in with unit per capita death rate generators.

For any random variable X ≥ 0 with EX > 0 we say that a random variable X*, defined on the same probability space as X, has the X-size-biased distribution if
(3.1)   EXf(X) = EX Ef(X*)
for all functions f such that both sides of (3.1) exist. Size-bias couplings (X, X*) have been studied in connection with Stein's method for Poisson approximation (see [4], e.g.) and for normal approximations (see [9], e.g.); a general framework is given in [8]. If X is discrete, then for all x we have
P(X* = x) = (x/EX) P(X = x).
This illustrates that size biasing corresponds to sampling proportional to size; the larger a subpopulation, the more likely it is to be contained in the sample.

EXAMPLE 3.1 (Poisson distribution). If X ∼ Po(λ), then from the Stein operator in Example 2.10 we read off that X* = X + 1 has the Po(λ)-size-biased distribution. In [4] a related coupling is used, namely the reduced size-biased coupling (X, X̃), with X̃ = X* − 1.

EXAMPLE 3.2 (Bernoulli distribution). If X ∼ Be(p), it is easy to see that X* = 1 has the Be(p)-size-biased distribution, for all p ∈ (0, 1]. As an aside, this shows that X does not uniquely define X*.

EXAMPLE 3.3 (Sum of nonnegative random variables). Let X_1, ..., X_n be nonnegative, EX_i = μ_i > 0, W = Σ_{i=1}^{n} X_i, EW = μ, Var(W) = σ². Goldstein and Rinott [9] give the following construction, valid also in the presence of dependence. Choose an index I from {1, ..., n} according to P(I = i) = μ_i/μ. If I = i, replace X_i by a variate X_i* having the X_i-size-biased distribution. If X_i* = x, construct X̂_j, j ≠ i, such that L(X̂_j, j ≠ i | X_i* = x) = L(X_j, j ≠ i | X_i = x). Then
(3.2)   W* = Σ_{j≠I} X̂_j + X_I*
has the W-size-biased distribution; see [9], construction after Lemma 2.1.
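Construction (3.2) is straightforward to simulate. The sketch below (illustrative only; it uses independent Bernoulli summands so that, as in Example 3.2, the size-biased summand is identically 1 and the remaining coordinates keep their law) checks the defining identity (3.1) for W by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(2)
p = np.array([0.1, 0.3, 0.5, 0.2, 0.4])     # independent Bernoulli(p_i) summands
mu = p.sum()                                 # EW

def sample_W(n):
    return (rng.random((n, p.size)) < p).sum(axis=1)

def sample_W_star(n):
    # Construction (3.2): choose I with P(I = i) = p_i / mu, replace X_I by its
    # size-biased version (identically 1 for a Bernoulli, Example 3.2); because
    # the summands are independent here, the other coordinates keep their law.
    I = rng.choice(p.size, size=n, p=p / mu)
    X = rng.random((n, p.size)) < p
    X[np.arange(n), I] = True                # X_I* = 1
    return X.sum(axis=1)

f = np.sin                                   # any bounded test function
n = 400_000
w = sample_W(n)
w_star = sample_W_star(n)
lhs = np.mean(w * f(w))                      # E[W f(W)]
rhs = mu * np.mean(f(w_star))                # EW * E[f(W*)]
print(f"E[W f(W)] = {lhs:.4f},  EW*E[f(W*)] = {rhs:.4f}")   # agree up to Monte Carlo error
```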

As in the Poisson case, the size-bias coupling can be used to derive a characterization of a Gibbs measure, as follows.

LEMMA 3.4. Let X ≥ 0 be such that 0 < E(X) < ∞, and let μ be a discrete Gibbs measure given in (1.5). If X ∼ μ and X*, having the X-size-biased distribution, is defined on the same probability space as X, then for every function g : N_0 → R such that E|Xg(X)| exists,
(3.3)   ω E e^{V(X+1)−V(X)} g(X+1) = ω E e^{V(X+1)−V(X)} E g(X*).

PROOF. In view of (2.3), for any X ≥ 0 with EX > 0 and (X, X*) a size-biased coupling we have, for any bounded function g : N_0 → R,
E{ω e^{V(X+1)−V(X)} g(X+1) − X g(X)} = E{ω e^{V(X+1)−V(X)} g(X+1) − EX Eg(X*)}.
For X having distribution (1.5), the expectation is given by EX = ω E e^{V(X+1)−V(X)}. From EX > 0 it follows that ω ≠ 0. The result now follows from the Stein characterization (2.3) of (1.5) for ω ≠ 0.

Indeed in (3.3) the factor ω cancels. However, in a moment we shall relate (3.3) to the Stein equation. In order not to get confused about the solutions for the Stein equation with and without the ω involved, we have decided to keep the ω.

Lemma 3.4 provides a new formulation of the Stein approach (2.2): Let W ≥ 0 with 0 < EW < ∞. If W has distribution μ, then for all f ∈ B,
(3.4)   Ef(W) − μ(f) = ω{E e^{V(W+1)−V(W)} g_{f,V}(W+1) − E e^{V(W+1)−V(W)} E g_{f,V}(W*)},
where g_{f,V}, given in (2.5), is the solution of the Stein equation (2.4).

3.2. Comparison of distributions via their generators. From (2.2), for any random variable W and any measurable set A we can find a function f = f_μ such that P(W ∈ A) − μ(A) = EAf(W), where A is the generator associated with the target distribution μ. Applications of Stein's method usually continue by bounding the right-hand side EAf(W). Nevertheless, Stein's method can also be employed to compare two distributions by comparing their generators. While generators may in general not be available, for any discrete distribution we have implicitly constructed a generator in (1.9). It is from this angle that we shall bound distances between Gibbs distributions. Assume that μ_1 and μ_2 are two distributions with

generators A_1 and A_2, respectively. Then, for W ∼ μ_2, EA_2 f_{μ_1}(W) = 0, and therefore
P(W ∈ A) − μ_1(A) = EA_1 f_{μ_1}(W) − EA_2 f_{μ_1}(W) = E(A_1 − A_2) f_{μ_1}(W).
Hence we can compare two discrete Gibbs distributions by comparing their birth rates and their death rates, as follows.

THEOREM 3.5. Let μ_1 have generator A_1 as in (1.9) and corresponding (ω_1, V_1), and let μ_2 have generator A_2, and corresponding (ω_2, V_2), both described in terms of unit per capita death rates. Suppose that D(A_1) = D(A_2). Then, for X_2 ∼ μ_2, f ∈ B, if g_{f,V_1} is the solution of the Stein equation for μ_1,
(3.5)   |Ef(X_2) − ∫ f dμ_1|
≤ min{ ‖g_{f,V_1}‖ E(X_2) ( |ω_1 − ω_2|/ω_2 + (ω_1/ω_2) E| e^{(V_1(X_2*) − V_1(X_2* − 1)) − (V_2(X_2*) − V_2(X_2* − 1))} − 1| ),
‖g_{f,V_2}‖ E(X_1) ( |ω_2 − ω_1|/ω_1 + (ω_2/ω_1) E| e^{(V_2(X_1*) − V_2(X_1* − 1)) − (V_1(X_1*) − V_1(X_1* − 1))} − 1| ) }.

PROOF. For X_2 ∼ μ_2, f ∈ B, if g_{f,V_1} is the solution of the Stein equation for μ_1, we have Ef(X_2) − μ_1(f) = EA_1 g_{f,V_1}(X_2) = E(A_1 − A_2) g_{f,V_1}(X_2). Using the size biasing and (3.3) we obtain
Ef(X_2) − μ_1(f) = E g_{f,V_1}(X_2 + 1) (ω_1 e^{V_1(X_2+1) − V_1(X_2)} − ω_2 e^{V_2(X_2+1) − V_2(X_2)})
= ω_1 E g_{f,V_1}(X_2 + 1) e^{V_2(X_2+1) − V_2(X_2)} e^{(V_1(X_2+1) − V_1(X_2)) − (V_2(X_2+1) − V_2(X_2))} − E(X_2) E g_{f,V_1}(X_2*)
= (ω_1/ω_2) E(X_2) E g_{f,V_1}(X_2*) e^{(V_1(X_2*) − V_1(X_2*−1)) − (V_2(X_2*) − V_2(X_2*−1))} − E(X_2) E g_{f,V_1}(X_2*)
= ((ω_1 − ω_2)/ω_2) E(X_2) E g_{f,V_1}(X_2*) + (ω_1/ω_2) E(X_2) E[g_{f,V_1}(X_2*) {e^{(V_1(X_2*) − V_1(X_2*−1)) − (V_2(X_2*) − V_2(X_2*−1))} − 1}].
Taking absolute values together with the triangle inequality, and observing that the argument is symmetric in the indices 1 and 2, we obtain (3.5).
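For two Poisson distributions the potentials V_1 and V_2 are constant, the exponential factor in (3.5) equals 1, and the bound reduces to |λ_1 − λ_2| · ‖g_{f,V_1}‖ (compare Remark 3.7 below). The sketch below (illustrative only; the supremum norm is computed numerically from the backward representation (2.6) and the linearity of g_f in f) compares the resulting total variation bound with the exact distance.

```python
import math
import numpy as np

def poisson_solution_sup_norm(lam, K=60):
    """sup over f in B and over j >= 1 of |g_f(j)| for the Po(lam) Stein operator
    (Ag)(k) = lam*g(k+1) - k*g(k), using the backward representation (2.6):
    g_f(j) = -(1/(j*pmf(j))) * sum_{k>=j} pmf(k)*(f(k) - mu(f)).
    By linearity in f, the sup over f in B comes from point-mass test functions."""
    pmf = np.array([math.exp(-lam) * lam**k / math.factorial(k) for k in range(K)])
    best = 0.0
    for j in range(1, K):
        tail = pmf[j:].sum()
        c = np.array([-(pmf[m] * (1.0 if m >= j else 0.0) - pmf[m] * tail)
                      for m in range(K)]) / (j * pmf[j])
        best = max(best, c[c > 0].sum(), -c[c < 0].sum())
    return best

lam1, lam2 = 3.0, 3.4
K = 40
# For two Poisson laws the bound (3.5) becomes |lam1 - lam2| * ||g_{f,V_1}||.
bound = abs(lam1 - lam2) * poisson_solution_sup_norm(lam1)
pmf1 = np.array([math.exp(-lam1) * lam1**k / math.factorial(k) for k in range(K)])
pmf2 = np.array([math.exp(-lam2) * lam2**k / math.factorial(k) for k in range(K)])
d_tv = 0.5 * np.abs(pmf1 - pmf2).sum()
print(f"d_TV(Po({lam1}), Po({lam2})) = {d_tv:.4f} <= generator-comparison bound {bound:.4f}")
```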

18 STEIN S METHOD FOR DISCRETE GIBBS MEASURES 605 REMARK 3.6. Theorem 3.5 has the advantage that the partition function Z may not be needed to assess the distance. REMARK 3.7. Note that it is not surprising that g f,v appears in the bound; even if we compare two Poisson distributions, with different means λ and λ 2,the bound would be of the type λ λ 2 g f,v. When D(A ) D(A 2 ),wefirstextenda to Ã, defined for functions on {0,, 2,...} as in (2.5), and apply then Lemma 2.9. For convenience assume that supp(μ ) ={0,,...,n} and supp(μ 2 ) ={0,, 2,...}. Using similar calculations as for Theorem 3.5, we derive the following result. COROLLARY 3.8. Let μ have generator A and corresponding (ω,v ), and let μ 2 have generator A 2, and corresponding (ω 2,V 2 ), both described in terms of unit per capita death rates. Suppose that supp(μ ) ={0,,...,n} and that supp(μ 2 ) ={0,, 2,...}. Then, for X 2 μ 2 and f B 0 as in (2.7), if g f,v is the solution of the Stein equation for μ as in (2.5), Ef (X 2) fdμ { min g f,v E(X 2 ) (3.6) ( ω ω 2 + ω E e (V (X2 ) V (X2 )) (V 2(X2 ) V 2(X2 )) ), ω 2 ω 2 g f,v2 E(X ) ( ω2 ω + ω 2 E e (V 2(X ) V 2(X )) (V (X ) V (X )) )} ω ω + μ (f ) μ 2 (k). k=n+ The extra term μ (f ) k=n+ μ 2 (k) in the bound (3.6) as compared to (3.5) accounts for the extension à of the generator A. PROOF OF COROLLARY 3.8. We proceed as in the proof of Theorem 3.5.Let à be the extension of A as in (2.5). For X 2 μ 2, f B,ifg f,v as in (2.6) is the solution of the Stein equation for μ,wehave Ef (X 2 ) μ (f ) = Eà g f,v (X 2 ) = E(à A 2 )g f,v (X 2 )

19 606 P. EICHELSBACHER AND G. REINERT = Eg f,v (X 2 + ) ( ω e V (X 2 +) V (X 2 ) ω 2 e V 2(X 2 +) V 2 (X 2 ) ) n = k=0 P(X 2 = k)g f,v (k + ) ( ω e V (k+) V (k) ω 2 e V 2(k+) V 2 (k) ) k=n P(X 2 = k) μ (f ) k + ω 2e V 2(k+) V 2 (k), whereweused(2.6). Now the summand n k=0 P(X 2 = k)g f,v (k + ) ( ω e V (k+) V (k) ω 2 e V 2(k+) V 2 (k) ) can be treated exactly as in Theorem 3.5; from Lemma 2.9 we have the same bounds on the solution of the Stein equation when k n. For the second summand we have k=n Thus we obtain (3.6). P(X 2 = k) μ (f ) k + ω 2e V 2(k+) V 2 (k) = ω 2 μ (f ) = μ (f ) k=n k=n+ Z ev 2(k) ω k 2 k!(k + ) ev 2(k+) V 2 (k) μ 2 (k). Note that the bound in Corollary 3.8 contains the measure μ 2 explicitly; hence examples will require the calculation of the normalizing constant Z 2. We now treat the instructive example that the two generators A and A 2 have the same birth and death rates, but live on different domains. EXAMPLE 3.9. Suppose that the distributions μ and μ 2 have supports supp(μ ) ={0,,...,n} and supp(μ 2 ) ={0,, 2,...}, respectively. Let μ have generator A and corresponding (ω, V ), andletμ 2 have generator A 2,andcorresponding same (ω, V ), both described in terms of unit per capita death rates, so that the two generators have the same birth rates and death rates for k = 0,...,n. Then, for X 2 μ 2 and f B 0,ifg f,v is the solution of the Stein equation for μ, it follows from (3.6) that Ef (X 2) fdμ μ (f ) k=n+ μ 2 (k).

20 In particular it follows that STEIN S METHOD FOR DISCRETE GIBBS MEASURES 607 d TV (μ,μ 2 ) k=n+ μ 2 (k), stating that the difference between the two distributions can be bounded by the total mass of the domain, under the respective distribution, which is in the support of the one distribution, but not of the other distribution. Indeed the bound is sharp, as from (.2) wehavethatμ (k) = αμ 2 (k) for k = 0,,...,n, with α = ( n k=0 μ 2 (k)) >. Hence n d TV (μ,μ 2 ) = 2 (α )μ 2 (k) + 2 μ 2 (k) = μ 2 (k). k=0 k=n+ k=n+ Theorem 3.5 and Corollary 3.8 invite the study of the total variation distance of discrete probability distributions with different supports of the corresponding operators, for example considering the distance between a binomial and a hypergeometric distribution. As this leads away from the flow of the paper, we do not pursue it here. Instead we turn to the application of our approach to Gibbs measures arising in interacting particle systems. 4. Application to lattice approximation in statistical physics. Now we apply our results to the model studied by Chayes and Klein [6]. Their setup is as follows. Assume A R d is a rectangle; denote its volume by A. Consider the intersection of A with the d-dimensional lattice n Z d. For each site m in this intersection we associate a Bernoulli random variable Xm n which takes value, with probability pm n, if a particle is present at m and 0 otherwise. If the collection (Xm n ) m is independent, the joint distribution can be interpreted as the Gibbs distribution for an ideal gas on the lattice. The Poisson convergence theorem states that, for n going to infinity, when preserving the average density of particles the lattice ideal gas distribution converges weakly to a Poisson distribution, which is the standard Gibbs distribution for an ideal gas in the continuum. On physical grounds one expects that a similar result might hold for interacting particles. The model is as follows. Pick n N and suppose that A can be partitioned into a regular array of d(n) sub-rectangles {S n,...,sn d(n) } with volumes v(sn m ) = zn m zn. Here, zm n > 0for each m, n. Thus A = d(n) z n As a guideline motivated by [6] we choose zm n > 0andz n such that zm n 0and z n z>0forn. For each m, n choose a point qm n Sn m. Now the following class of functions is considered. m= z n m.

21 608 P. EICHELSBACHER AND G. REINERT ASSUMPTION 4.. Let (f k ) k be a sequence of functions satisfying f 0 and for each k, f k (x,...,x k ) is a nonnegative function, Riemann integrable on A k, such that f k is a symmetric function for each k; that is, for any permutation σ, f k (x σ(),...,x σ(k) ) = f k (x,...,x k ). Let Xm n, m d(n), be 0 random variables with joint density function given as follows. If a,...,a d(n) are such that a i {0, },andif k = d(n) m= a m = k a im m= is the sum of the nonzero a i s, then P ( X n = a,...,xd(n) n = a ) d(n) where the normalizing constant is Define K(d(n)) = d(n) = K(d(n)) f k (qi n,...,qi n k ) (zm n )a m, a {0,} d(n) Then, due to the symmetry of f k, k m= d(n) { ai =k}f k (qi n,...,qi n k ) (zm n )a m. S n = d(n) m= X n m. (z n ) k d(n) d(n) k! i = i P(S n = k) = k = f k(qi n,...,qi n k ) k m= v(si n (4.) m ) d(n) k=0 (z n) k d(n) d(n) k! i = i k = f k(qi n,...,qi n k ) k m= v(si n m ). Note that we can write (4.) as a Gibbs measure μ n of the form (.5) with (4.2) ω n = z n, V n (k) = log(w n (k)), k {0,,... d(n)}, m= where d(n) d(n) k W n (k) = f k (qi n,...,qi n k ) v(si n m ), k {0,...,d(n)}. i = i k = m= Let S be a nonnegative integer-valued random variable defined by z k k! A P(S= k) = k f k(x,...,x k )dx dx k (4.3) k=0 z k k! A k f. k(x,...,x k )dx dx k

22 STEIN S METHOD FOR DISCRETE GIBBS MEASURES 609 We write (4.3) as a Gibbs measure μ of the form (.5) with where ω = z, V (k) = log(w(k)), k {0,,...}, W(k)= f k(x,...,x k )dx dx k, k {0,,...}. A k In [6] the following class of functions is considered. Let (f k ) k be a sequence of functions satisfying Assumption 4. and, in addition: (a) There exists a constant C such that f k (x,...,x k ) C k for all k 0. (b) f k (x,...,x k ) = 0ifx i = x j for some i j. This is not a necessary condition; see [6]. In [6] it is shown that under these additional conditions, S n S, thatis,s n converges weakly to S, under the conditions that lim n z n = z and that ( ) lim = 0. n max m d(n) zn m For the proof of this convergence, condition (a) on (f k ) k enters to ensure that the Riemann sum converges, so that the normalizing constants Z n and Z for μ n and for μ, respectively, are finite. Condition (b) avoids some measure-theoretic considerations. In [6] a bound on the rate of convergence is given for the special case that f k =, all k 0, resulting in a Poisson approximation. Here we give a general bound on the distance, not only in the case f k =, k 0. Note that in [6] the generalization of Poisson convergence allows the authors to develop a lattice-to-continuum theory of classical statistical mechanics; similarly our results could be applied to obtain bounds on such lattice-to-continuum theory. Using Corollary 3.8 with μ = μ n and μ 2 = μ, we obtain Ef (S n) fdμ { ( zn z min E(S n ) g f,v z n ( z zn E(S) g f,vn z + μ n (f ) k=d(n)+ μ(k). + z W(S E n )W n(sn ) ) z n W(Sn )W n(sn ), + z n z E W n (S )W(S )} ) W n (S )W(S ) Writing out the size-biased distribution gives the following result.

23 60 P. EICHELSBACHER AND G. REINERT THEOREM 4.2. Let S n,μ,z n be as above. Let us assume that (f k ) k satisfies Assumption 4. and condition (a). Then d TV (L(S n ), μ) max( g f,v, g f,vn ) {( min E(S n ) z n z + z ) W(k)W kμ n (k) n (k ) z n z n W(k )W k n (k), ( E(S) z z n + z )} n W kμ(k) n (k)w(k ) z z W k n (k )W(k) + μ n (f ) k=d(n)+ μ(k). REMARK 4.3. Note that for Theorem 4.2, no monotonicity assumptions on the birth and death rates in the corresponding birth death process are needed. For the bound to be informative, though, we need that max( g f,v, g f,vn )< ; conditions to ensure this behavior are given in Lemma 2.5. For the case of nonincreasing birth rates and unit per capita death rates, we also obtain bounds based on the increments g. Recall the notation of Example 3.3. PROPOSITION 4.4. Let S n be the sum of X,...,X n, where X i Be(p i ), and construct Sn via (3.2), and let μ be given by (.5). Assume that the birth rates in (.7) are nonincreasing, and assume that the death rates are unit per capita. Then d TV (L(S n ), μ) ωe { e V(S n+) V(S n ) min { S(S n,s n ), (4.4) ( Sn S n + /ω)ev(min(s n,s n )) V(min(S n,s n )+)}} + g f,v ωe e V(S n+) V(S n ) Ee V(S n+) V(S n ), whereweputs(x,y) = max(x,y) l=min(x,y)+ l. PROOF. With the notation as in Example 3.3, denote the conditional expectation given I = i by E i.letf B. IfX i Be(p i ), i =,...,n,thenxi =. Put Ŝ n,i = Xˆ j. j i

24 By (3.4) and(3.2), we have STEIN S METHOD FOR DISCRETE GIBBS MEASURES 6 Ef (S n ) μ(f ) { n p i [ = ω nj= E i e V(S n +) V(S n ) g f,v (S n + ) ] p j i= n = ω i= ω nj= p j ( Ee V(S n+) V(S n ) Eg f,v Xˆ j + )} j i p i nj= p j E i [ e V(S n +) V(S n )) ( g f,v (S n + ) g f,v (Ŝ n,i + ) )] + E i g f,v (Ŝ n,i + ) ( e V(S n+) V(S n ) Ee V(S n+) V(S n ) ) n { p i E i e V(S n +) V(S n ) i= min { S(S n, Ŝ n,i ), P(S n Ŝ n,i ) ( S n Ŝ n,i /ω) e V(min(S n,ŝ n,i )) V(min(S n,ŝ n,i )+) } S n Ŝ n,i } + g f,v ωe e V(S n+) V(S n ) Ee V(S n+) V(S n ), where we used Lemma 2. for the last inequality. Note that in contrast to Theorem 4.2, we did not employ the difference between generators in the proof. 4.. Poisson approximation for a sum of indicators. In the Poisson case we can use the bound (2.20) on g instead of Lemma 2. to obtain improved overall bounds, as follows. Let X i Be(p i ), i =,...,n;thenasinexample3.3 we have for S n = n j= X j that Sn = Ŝ n,i +, where as above Ŝ n,i = ˆ j i X j.let λ = ES n = n j= p j. Recall that if μ = Po(λ),thenω = λ and V(k)= λ. Hence using (4.5) we obtain n ( Ef (S n ) Po(λ)(f ) = p i E i gf,v (S n + ) g f,v (Ŝ n,i ) ). i=

25 62 P. EICHELSBACHER AND G. REINERT Calculating as in the proof of Proposition 4.4 and using (2.20) weget (4.5) d TV (L(S n ), Po(λ)) n { p i E i {min S(S n, Ŝ n,i ), S n Ŝ n,i ( e λ )} } Sn Ŝ n,i λ i= P(S n Ŝ n,i ). In particular we recover the bound for couplings in [4], Theorem.B, (4.6) d TV (L(S n ), Po(λ)) e λ λ n p i E i S n Ŝ n,i. i= EXAMPLE 4.5. Assume that the variables S n and Sn can be coupled such that (4.7) S n Ŝ n,i, i =,...,n. Then we can improve the bound (4.6), using E i {S(S n, Ŝ n,i ) S n Ŝ n,i }P(S n Ŝ n,i ) { Sn Ŝ n,i E i S n Ŝ n,i }P(S n Ŝ n,i ) + Ŝ n,i [ P(S n Ŝ n,i ) P {Ŝ n,i = 0 S n Ŝ n,i }+ ] 2 P {Ŝ n,i S n Ŝ n,i } = P(S n Ŝ n,i ) 2 ( + P {Ŝ n,i = 0 S n Ŝ n,i }). EXAMPLE 4.6. In case that the X i s are independent and we have Ŝ n,i = S n X i,sothat(4.7) is satisfied, and (4.6) yields that n d TV (L(S n ), Po(λ)) p 2 e λ (4.8) i. λ i= The bound (4.8) coincides with the bound (.23) given in [4]. We can improve on it by using that ( ) P {Ŝ n,i = 0 S n Ŝ n,i }=P X j = 0 X i = = p j ). j i j i( This results in the bound for the independent case { ( n d TV (L(S n ), Po(λ)) pi 2 min + ) p j ), 2 i= j i( } e λ. λ

26 STEIN S METHOD FOR DISCRETE GIBBS MEASURES 63 To see when the above bound is an improvement on the bound (.23) given in [4], consider the inequality e λ ( + x) <. 2 λ We rearrange this inequality to give λx < 2 λ 2e λ. Numerical calculations yield that the right-hand side is positive when 0 <λ< For such λ and some y such that 0 <y<2 λ 2e λ we can construct strongly inhomogeneous cases such that ( p j ) λ y j i for some indices i. Examples are, for λ =, the vector p = (p,...,p n ) = (, 0, 0,...,0), or the vector p = (p,...,p n ) = ( /(n ), /(n ) 2, /(n ) 2,...,/(n ) 2 ) for n 3. In such cases, our bound provides an improvement. Example 4.6 can only be an improvement on (4.6) when the underlying random variables are not identically distributed; in the independent and identically distributed case, we recover exactly the bound given in [4], (.23), page An example with repelling interaction. Our first example with interaction partitions A =[0, ] into n intervals, the ith interval given by Si n =[ i n, n i ],and we choose qi n = 2i 2n, the midpoint of the interval Sn i. Then each Sn i has volume n = λ λn ; we choose zn i = λ n and z n = λ. Note the freedom of choice in λ>0. We consider the set of functions f k given by f 0 =,f (x) = forallx,andfork 2, f k (x,...,x k ) = 0ifx i = x j for some i j, and otherwise f k (x,...,x k ) = (x i x j ) 2, i j k so that values x,...,x n which are far away from each other are preferred. Then clearly f k satisfies Assumption 4. as well as conditions (a) and (b). To avoid trivialities we assume that n 2. In our setup we choose 0 random variables Xm n, m n, with density function P(X n = a,...,xn n = a n) n k λ k n (qi n qj n )2 if k = a m 2; i j k m= P(X n = a,...,x n n = a n) λ n if k = ; P(X n = 0,...,Xn n = 0).

27 64 P. EICHELSBACHER AND G. REINERT With S n = n m= Xm n we have with (4.) that where P(S n = k) λk k! W n(k), n n W n (k) = n k f k (qi n,...,qi n k ), i = i k = k=0 λ k k! k = 0,...,n. Let S be a nonnegative integer-valued random variable defined as in (4.3) by λ k k! [0,] P(S= k) = k f k(x,...,x k )dx dx k [0,] k f λk k(x,...,x k )dx dx k k! W(k), with W(k)= f k(x,...,x k )dx dx k, k 0. [0,] k It is immediate that W(0) = W() =, and for k 2, k W(k)= (x i x j ) 2 dx i dx j = i= j i 0 0 Thus our limiting Gibbs measure μ has normalizing constant Z = + λ + 6 k 2 and is therefore given by μ(0) = ( + λ + λ2 ) 6 eλ, λ μ() = + λ + λ2 6 eλ, k(k ). 6 λ k (k 2)! = + λ + λ2 6 eλ μ(k) = ( + λ + λ2 ) λ k 6 eλ, k 2. (k 2)! To assess the distance between the distributions of S n and of S we note that W n (0) = W n () = = W(0) = W(), and some algebra yields, for k 2, that n n W n (k) = n k ( 2is 2i ) j 2 2n 2n i = i k = l s = n k 2 n n (i j) 2 n k 2 l s k i= j=

28 Thus STEIN S METHOD FOR DISCRETE GIBBS MEASURES 65 { n ( = 2n 4 n ) 2 } k(k ) i 2 i i= i= = k(k )(n + )(n ) 6n 2. W n (k) W n (k ) = W(k) W(k ) for k = 0,...,n, and we are in the situation of Example 3.9. Therefore it follows that d TV (μ n,μ)= μ(k) Summarizing, we have: COROLLARY 4.7. = k=n+ ( + λ + λ2 ) 6 eλ k=n+ λ n+ e λ (n + )!( + λ + λ2 6 eλ ). λ k k! Let S n and S be as constructed above. Then λ n+ e λ d TV (L(S n ), μ) (n + )!( + λ + λ2 6 eλ ) A second example with interaction. Using the same partition of A =[0, ] and the same notation as in the previous example, and choosing for simplicity ω = λ =, we now consider the set of functions (f k ) k given by f 0 (x) = f (x) = for all x, andfork 2, f k (x,...,x k ) = x i x j. i j k Clearly (f k ) k satisfy Assumption 4. as well as conditions (a) and (b). We take qi n = i n, the left endpoint of the interval Sn i. Assume that n 3. Along the lines of the calculations in the previous example we obtain now W(0) = W() = and for k 2 W(k)= k k, so that the distribution of S given in (4.3) is μ(k) = Z k! k k, k 0.

29 66 P. EICHELSBACHER AND G. REINERT We note that Z = + k= k! e k ln k e /e. For the distribution of S n given in (4.) we obtain W n (0) = W n () =, and for k 2, ( n ) k W n (k) = n k2 i k. i=0 We employ an integral approximation, obtaining ( n Observe that for k = 0,...,n, n ) k 2 k k <W n (k) < k k, k 2. ( n n ) k 2 < W n(k) W(k) <. For Theorem 4.2, we may thus bound for k = 2,...,n, W(k)W n (k ) W(k )W n (k) { max ( n n ( ) n k 2 =. n Now we calculate W(k)W kμ n (k) n (k ) W(k )W k n (k) k Taylor s expansion gives the inequality ) k 2 ( ) n k 2 } ; n {( ) n k 2 } kμ n (k). n ( ) n k 2 0 k2 ( + ) k 2 < k2 n n n n ek2 /(n ) < k2 n ek, for k n and n 3. Applying this inequality, we obtain that W(k)W kμ n (k) n (k ) W(k )W k n (k) n log(k)+k e k (n )Z n k! k= e e < < 2ee (n )Z n n,

30 STEIN S METHOD FOR DISCRETE GIBBS MEASURES 67 whereweusedz n = + k= k! W n (k) in the last inequality. Next we estimate μ(k) = Z k! k k k! n k /n n (n+) e (n + )!, k=n+ k=n+ k=n+ where we used that Z >. (k+) Note that λ 2 = sup k+ k =, so that we cannot apply Lemma 2.5; instead k we find an alternative bound k on g f,v as follows. From (2.6) wehave g f,v (j + ) j! ω j+ e V(j+) N k=j+ e V(k)ωk k! k! k k = Z e /e. k=0 COROLLARY 4.8. Let S n and S be as constructed above, and let n 3. Then d TV (L(S n ), μ) 2 ee+/e n /n n (n+) + e (n + )!. Acknowledgments. The authors would like to thank Hans Zessin for pointing out the work of Preston as well as the work of Chayes and Klein. We would also like to thank Andrew Barbour and two anonymous referees for helpful comments. REFERENCES [] ASMUSSEN, S. (987). Applied Probability and Queues. Wiley, Chichester. MR [2] BARBOUR, A. D. (988). Stein s method and Poisson process convergence. J. Appl. Probab. 25A MR [3] BARBOUR, A. D. (990). Stein s method for diffusion approximations. Probab. Theory Related Fields MR [4] BARBOUR, A. D., HOLST, L. and JANSON, S. (992). Poisson Approximation. Oxford Univ. Press. MR63825 [5] BROWN, T. C. and XIA, A. (200). Stein s method and birth death processes. Ann. Probab MR [6] CHAYES, L. and KLEIN, D. (994). A generalization of Poisson convergence to Gibbs convergence with applications to statistical mechanics. Helv. Phys. Acta MR28458 [7] EHM, W. (99). Binomial approximation to the Poisson binomial distribution. Statist. Probab. Lett MR09342 [8] GOLDSTEIN, L. and REINERT, G. (2005). Distributional transformations, orthogonal polynomials, and Stein characterizations. J. Theoret. Probab MR [9] GOLDSTEIN, L. and RINOTT, Y. (996). Multivariate normal approximations by Stein s method and size bias couplings. J. Appl. Probab MR37949 [0] GÖTZE, F. (99). On the rate of convergence in the multivariate CLT. Ann. Probab MR06283 [] HOLMES, S. (2004). Stein s method for birth and death chains. In Stein s Method: Expository Lectures and Applications (P. Diaconis and S. Holmes, eds.) IMS, Beachwood, OH. MR28602 [2] KARLIN, S. and MCGREGOR, J. (957). The classification of birth and death processes. Trans. Amer. Math. Soc MR

31 68 P. EICHELSBACHER AND G. REINERT [3] PEKÖZ, E. A. (996). Stein s method for geometric approximation. J. Appl. Probab MR40468 [4] PHILLIPS, M. J. and WEINBERG, G. V. (2000). Non-uniform bounds for geometric approximation. Statist. Probab. Lett MR [5] PRESTON, C. (975). Spatial birth-and-death processes (with discussion). Proceedings of the 40th Session of the International Statistical Institute (Warsaw, 975) 2. Invited Papers , MR [6] RUELLE, D. (999). Statistical Mechanics. World Scientific, River Edge, NJ. Reprint of the 989 edition. MR [7] STEIN, C. (972). A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. Proc. Sixth Berkeley Symp. Math. Statist. Probab. II. Probability Theory Univ. California Press, Berkeley. MR FAKULTÄT FÜR MATHEMATIK RUHR-UNIVERSITÄT BOCHUM NA 3/68 D BOCHUM GERMANY peter.eichelsbacher@ruhr-uni-bochum.de DEPARTMENT OF STATISTICS UNIVERSITY OF OXFORD SOUTH PARKS ROAD OXFORD OX 3TG UNITED KINGDOM reinert@stats.ox.ac.uk


More information

Received: 2/7/07, Revised: 5/25/07, Accepted: 6/25/07, Published: 7/20/07 Abstract

Received: 2/7/07, Revised: 5/25/07, Accepted: 6/25/07, Published: 7/20/07 Abstract INTEGERS: ELECTRONIC JOURNAL OF COMBINATORIAL NUMBER THEORY 7 2007, #A34 AN INVERSE OF THE FAÀ DI BRUNO FORMULA Gottlieb Pirsic 1 Johann Radon Institute of Computational and Applied Mathematics RICAM,

More information

Key words. Feedback shift registers, Markov chains, stochastic matrices, rapid mixing

Key words. Feedback shift registers, Markov chains, stochastic matrices, rapid mixing MIXING PROPERTIES OF TRIANGULAR FEEDBACK SHIFT REGISTERS BERND SCHOMBURG Abstract. The purpose of this note is to show that Markov chains induced by non-singular triangular feedback shift registers and

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

Simultaneous drift conditions for Adaptive Markov Chain Monte Carlo algorithms

Simultaneous drift conditions for Adaptive Markov Chain Monte Carlo algorithms Simultaneous drift conditions for Adaptive Markov Chain Monte Carlo algorithms Yan Bai Feb 2009; Revised Nov 2009 Abstract In the paper, we mainly study ergodicity of adaptive MCMC algorithms. Assume that

More information

Part II Probability and Measure

Part II Probability and Measure Part II Probability and Measure Theorems Based on lectures by J. Miller Notes taken by Dexter Chua Michaelmas 2016 These notes are not endorsed by the lecturers, and I have modified them (often significantly)

More information

Partition of Integers into Distinct Summands with Upper Bounds. Partition of Integers into Even Summands. An Example

Partition of Integers into Distinct Summands with Upper Bounds. Partition of Integers into Even Summands. An Example Partition of Integers into Even Summands We ask for the number of partitions of m Z + into positive even integers The desired number is the coefficient of x m in + x + x 4 + ) + x 4 + x 8 + ) + x 6 + x

More information

7 Convergence in R d and in Metric Spaces

7 Convergence in R d and in Metric Spaces STA 711: Probability & Measure Theory Robert L. Wolpert 7 Convergence in R d and in Metric Spaces A sequence of elements a n of R d converges to a limit a if and only if, for each ǫ > 0, the sequence a

More information

Entropy and Ergodic Theory Lecture 15: A first look at concentration

Entropy and Ergodic Theory Lecture 15: A first look at concentration Entropy and Ergodic Theory Lecture 15: A first look at concentration 1 Introduction to concentration Let X 1, X 2,... be i.i.d. R-valued RVs with common distribution µ, and suppose for simplicity that

More information

MATH 117 LECTURE NOTES

MATH 117 LECTURE NOTES MATH 117 LECTURE NOTES XIN ZHOU Abstract. This is the set of lecture notes for Math 117 during Fall quarter of 2017 at UC Santa Barbara. The lectures follow closely the textbook [1]. Contents 1. The set

More information

Lecture 5: Expectation

Lecture 5: Expectation Lecture 5: Expectation 1. Expectations for random variables 1.1 Expectations for simple random variables 1.2 Expectations for bounded random variables 1.3 Expectations for general random variables 1.4

More information

Poisson Approximation for Independent Geometric Random Variables

Poisson Approximation for Independent Geometric Random Variables International Mathematical Forum, 2, 2007, no. 65, 3211-3218 Poisson Approximation for Independent Geometric Random Variables K. Teerapabolarn 1 and P. Wongasem Department of Mathematics, Faculty of Science

More information

MEASURE-THEORETIC ENTROPY

MEASURE-THEORETIC ENTROPY MEASURE-THEORETIC ENTROPY Abstract. We introduce measure-theoretic entropy 1. Some motivation for the formula and the logs We want to define a function I : [0, 1] R which measures how suprised we are or

More information

460 HOLGER DETTE AND WILLIAM J STUDDEN order to examine how a given design behaves in the model g` with respect to the D-optimality criterion one uses

460 HOLGER DETTE AND WILLIAM J STUDDEN order to examine how a given design behaves in the model g` with respect to the D-optimality criterion one uses Statistica Sinica 5(1995), 459-473 OPTIMAL DESIGNS FOR POLYNOMIAL REGRESSION WHEN THE DEGREE IS NOT KNOWN Holger Dette and William J Studden Technische Universitat Dresden and Purdue University Abstract:

More information

Monotonicity and Aging Properties of Random Sums

Monotonicity and Aging Properties of Random Sums Monotonicity and Aging Properties of Random Sums Jun Cai and Gordon E. Willmot Department of Statistics and Actuarial Science University of Waterloo Waterloo, Ontario Canada N2L 3G1 E-mail: jcai@uwaterloo.ca,

More information

Abstract. 2. We construct several transcendental numbers.

Abstract. 2. We construct several transcendental numbers. Abstract. We prove Liouville s Theorem for the order of approximation by rationals of real algebraic numbers. 2. We construct several transcendental numbers. 3. We define Poissonian Behaviour, and study

More information

1 + lim. n n+1. f(x) = x + 1, x 1. and we check that f is increasing, instead. Using the quotient rule, we easily find that. 1 (x + 1) 1 x (x + 1) 2 =

1 + lim. n n+1. f(x) = x + 1, x 1. and we check that f is increasing, instead. Using the quotient rule, we easily find that. 1 (x + 1) 1 x (x + 1) 2 = Chapter 5 Sequences and series 5. Sequences Definition 5. (Sequence). A sequence is a function which is defined on the set N of natural numbers. Since such a function is uniquely determined by its values

More information

Log-Convexity Properties of Schur Functions and Generalized Hypergeometric Functions of Matrix Argument. Donald St. P. Richards.

Log-Convexity Properties of Schur Functions and Generalized Hypergeometric Functions of Matrix Argument. Donald St. P. Richards. Log-Convexity Properties of Schur Functions and Generalized Hypergeometric Functions of Matrix Argument Donald St. P. Richards August 22, 2009 Abstract We establish a positivity property for the difference

More information

A Note on Poisson Approximation for Independent Geometric Random Variables

A Note on Poisson Approximation for Independent Geometric Random Variables International Mathematical Forum, 4, 2009, no, 53-535 A Note on Poisson Approximation for Independent Geometric Random Variables K Teerapabolarn Department of Mathematics, Faculty of Science Burapha University,

More information

Measure and Integration: Concepts, Examples and Exercises. INDER K. RANA Indian Institute of Technology Bombay India

Measure and Integration: Concepts, Examples and Exercises. INDER K. RANA Indian Institute of Technology Bombay India Measure and Integration: Concepts, Examples and Exercises INDER K. RANA Indian Institute of Technology Bombay India Department of Mathematics, Indian Institute of Technology, Bombay, Powai, Mumbai 400076,

More information

On the mean connected induced subgraph order of cographs

On the mean connected induced subgraph order of cographs AUSTRALASIAN JOURNAL OF COMBINATORICS Volume 71(1) (018), Pages 161 183 On the mean connected induced subgraph order of cographs Matthew E Kroeker Lucas Mol Ortrud R Oellermann University of Winnipeg Winnipeg,

More information

Submitted to the Brazilian Journal of Probability and Statistics

Submitted to the Brazilian Journal of Probability and Statistics Submitted to the Brazilian Journal of Probability and Statistics Multivariate normal approximation of the maximum likelihood estimator via the delta method Andreas Anastasiou a and Robert E. Gaunt b a

More information

Mathematical Institute, University of Utrecht. The problem of estimating the mean of an observed Gaussian innite-dimensional vector

Mathematical Institute, University of Utrecht. The problem of estimating the mean of an observed Gaussian innite-dimensional vector On Minimax Filtering over Ellipsoids Eduard N. Belitser and Boris Y. Levit Mathematical Institute, University of Utrecht Budapestlaan 6, 3584 CD Utrecht, The Netherlands The problem of estimating the mean

More information

Equivalence constants for certain matrix norms II

Equivalence constants for certain matrix norms II Linear Algebra and its Applications 420 (2007) 388 399 www.elsevier.com/locate/laa Equivalence constants for certain matrix norms II Bao Qi Feng a,, Andrew Tonge b a Department of Mathematical Sciences,

More information

An Inverse Problem for Gibbs Fields with Hard Core Potential

An Inverse Problem for Gibbs Fields with Hard Core Potential An Inverse Problem for Gibbs Fields with Hard Core Potential Leonid Koralov Department of Mathematics University of Maryland College Park, MD 20742-4015 koralov@math.umd.edu Abstract It is well known that

More information

ON THE COMPLETE CONVERGENCE FOR WEIGHTED SUMS OF DEPENDENT RANDOM VARIABLES UNDER CONDITION OF WEIGHTED INTEGRABILITY

ON THE COMPLETE CONVERGENCE FOR WEIGHTED SUMS OF DEPENDENT RANDOM VARIABLES UNDER CONDITION OF WEIGHTED INTEGRABILITY J. Korean Math. Soc. 45 (2008), No. 4, pp. 1101 1111 ON THE COMPLETE CONVERGENCE FOR WEIGHTED SUMS OF DEPENDENT RANDOM VARIABLES UNDER CONDITION OF WEIGHTED INTEGRABILITY Jong-Il Baek, Mi-Hwa Ko, and Tae-Sung

More information

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory Part V 7 Introduction: What are measures and why measurable sets Lebesgue Integration Theory Definition 7. (Preliminary). A measure on a set is a function :2 [ ] such that. () = 2. If { } = is a finite

More information

GAUSSIAN MEASURE OF SECTIONS OF DILATES AND TRANSLATIONS OF CONVEX BODIES. 2π) n

GAUSSIAN MEASURE OF SECTIONS OF DILATES AND TRANSLATIONS OF CONVEX BODIES. 2π) n GAUSSIAN MEASURE OF SECTIONS OF DILATES AND TRANSLATIONS OF CONVEX BODIES. A. ZVAVITCH Abstract. In this paper we give a solution for the Gaussian version of the Busemann-Petty problem with additional

More information

Course Notes. Part IV. Probabilistic Combinatorics. Algorithms

Course Notes. Part IV. Probabilistic Combinatorics. Algorithms Course Notes Part IV Probabilistic Combinatorics and Algorithms J. A. Verstraete Department of Mathematics University of California San Diego 9500 Gilman Drive La Jolla California 92037-0112 jacques@ucsd.edu

More information

DECAY AND GROWTH FOR A NONLINEAR PARABOLIC DIFFERENCE EQUATION

DECAY AND GROWTH FOR A NONLINEAR PARABOLIC DIFFERENCE EQUATION PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 133, Number 9, Pages 2613 2620 S 0002-9939(05)08052-4 Article electronically published on April 19, 2005 DECAY AND GROWTH FOR A NONLINEAR PARABOLIC

More information

Lebesgue Measure on R n

Lebesgue Measure on R n 8 CHAPTER 2 Lebesgue Measure on R n Our goal is to construct a notion of the volume, or Lebesgue measure, of rather general subsets of R n that reduces to the usual volume of elementary geometrical sets

More information

Lebesgue Measure on R n

Lebesgue Measure on R n CHAPTER 2 Lebesgue Measure on R n Our goal is to construct a notion of the volume, or Lebesgue measure, of rather general subsets of R n that reduces to the usual volume of elementary geometrical sets

More information

Total variation error bounds for geometric approximation

Total variation error bounds for geometric approximation Bernoulli 192, 2013, 610 632 DOI: 10.3150/11-BEJ406 Total variation error bounds for geometric approximation EROL A. PEKÖZ 1, ADRIAN RÖLLIN 2 and NATHAN ROSS 3 1 School of Management, Boston University,

More information

Problem List MATH 5143 Fall, 2013

Problem List MATH 5143 Fall, 2013 Problem List MATH 5143 Fall, 2013 On any problem you may use the result of any previous problem (even if you were not able to do it) and any information given in class up to the moment the problem was

More information

Normal approximation of Poisson functionals in Kolmogorov distance

Normal approximation of Poisson functionals in Kolmogorov distance Normal approximation of Poisson functionals in Kolmogorov distance Matthias Schulte Abstract Peccati, Solè, Taqqu, and Utzet recently combined Stein s method and Malliavin calculus to obtain a bound for

More information

If Y and Y 0 satisfy (1-2), then Y = Y 0 a.s.

If Y and Y 0 satisfy (1-2), then Y = Y 0 a.s. 20 6. CONDITIONAL EXPECTATION Having discussed at length the limit theory for sums of independent random variables we will now move on to deal with dependent random variables. An important tool in this

More information

Proofs for Large Sample Properties of Generalized Method of Moments Estimators

Proofs for Large Sample Properties of Generalized Method of Moments Estimators Proofs for Large Sample Properties of Generalized Method of Moments Estimators Lars Peter Hansen University of Chicago March 8, 2012 1 Introduction Econometrica did not publish many of the proofs in my

More information

Yunhi Cho and Young-One Kim

Yunhi Cho and Young-One Kim Bull. Korean Math. Soc. 41 (2004), No. 1, pp. 27 43 ANALYTIC PROPERTIES OF THE LIMITS OF THE EVEN AND ODD HYPERPOWER SEQUENCES Yunhi Cho Young-One Kim Dedicated to the memory of the late professor Eulyong

More information

ON THE MOMENTS OF ITERATED TAIL

ON THE MOMENTS OF ITERATED TAIL ON THE MOMENTS OF ITERATED TAIL RADU PĂLTĂNEA and GHEORGHIŢĂ ZBĂGANU The classical distribution in ruins theory has the property that the sequence of the first moment of the iterated tails is convergent

More information

arxiv: v1 [math.co] 17 Dec 2007

arxiv: v1 [math.co] 17 Dec 2007 arxiv:07.79v [math.co] 7 Dec 007 The copies of any permutation pattern are asymptotically normal Milós Bóna Department of Mathematics University of Florida Gainesville FL 36-805 bona@math.ufl.edu Abstract

More information

Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems

Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 1 of 45 Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems Peter J. Hammond latest revision 2017 September

More information

Sharp threshold functions for random intersection graphs via a coupling method.

Sharp threshold functions for random intersection graphs via a coupling method. Sharp threshold functions for random intersection graphs via a coupling method. Katarzyna Rybarczyk Faculty of Mathematics and Computer Science, Adam Mickiewicz University, 60 769 Poznań, Poland kryba@amu.edu.pl

More information

SMOOTHNESS PROPERTIES OF SOLUTIONS OF CAPUTO- TYPE FRACTIONAL DIFFERENTIAL EQUATIONS. Kai Diethelm. Abstract

SMOOTHNESS PROPERTIES OF SOLUTIONS OF CAPUTO- TYPE FRACTIONAL DIFFERENTIAL EQUATIONS. Kai Diethelm. Abstract SMOOTHNESS PROPERTIES OF SOLUTIONS OF CAPUTO- TYPE FRACTIONAL DIFFERENTIAL EQUATIONS Kai Diethelm Abstract Dedicated to Prof. Michele Caputo on the occasion of his 8th birthday We consider ordinary fractional

More information

Continued Fraction Digit Averages and Maclaurin s Inequalities

Continued Fraction Digit Averages and Maclaurin s Inequalities Continued Fraction Digit Averages and Maclaurin s Inequalities Steven J. Miller, Williams College sjm1@williams.edu, Steven.Miller.MC.96@aya.yale.edu Joint with Francesco Cellarosi, Doug Hensley and Jake

More information

Banach Spaces II: Elementary Banach Space Theory

Banach Spaces II: Elementary Banach Space Theory BS II c Gabriel Nagy Banach Spaces II: Elementary Banach Space Theory Notes from the Functional Analysis Course (Fall 07 - Spring 08) In this section we introduce Banach spaces and examine some of their

More information

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales.

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. Lecture 2 1 Martingales We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. 1.1 Doob s inequality We have the following maximal

More information

A Generalization of Wigner s Law

A Generalization of Wigner s Law A Generalization of Wigner s Law Inna Zakharevich June 2, 2005 Abstract We present a generalization of Wigner s semicircle law: we consider a sequence of probability distributions (p, p 2,... ), with mean

More information

Czechoslovak Mathematical Journal

Czechoslovak Mathematical Journal Czechoslovak Mathematical Journal Oktay Duman; Cihan Orhan µ-statistically convergent function sequences Czechoslovak Mathematical Journal, Vol. 54 (2004), No. 2, 413 422 Persistent URL: http://dml.cz/dmlcz/127899

More information

STABILIZATION OF AN OVERLOADED QUEUEING NETWORK USING MEASUREMENT-BASED ADMISSION CONTROL

STABILIZATION OF AN OVERLOADED QUEUEING NETWORK USING MEASUREMENT-BASED ADMISSION CONTROL First published in Journal of Applied Probability 43(1) c 2006 Applied Probability Trust STABILIZATION OF AN OVERLOADED QUEUEING NETWORK USING MEASUREMENT-BASED ADMISSION CONTROL LASSE LESKELÄ, Helsinki

More information

4 Expectation & the Lebesgue Theorems

4 Expectation & the Lebesgue Theorems STA 205: Probability & Measure Theory Robert L. Wolpert 4 Expectation & the Lebesgue Theorems Let X and {X n : n N} be random variables on a probability space (Ω,F,P). If X n (ω) X(ω) for each ω Ω, does

More information

Gaussian Random Fields

Gaussian Random Fields Gaussian Random Fields Mini-Course by Prof. Voijkan Jaksic Vincent Larochelle, Alexandre Tomberg May 9, 009 Review Defnition.. Let, F, P ) be a probability space. Random variables {X,..., X n } are called

More information

THE HYPERBOLIC METRIC OF A RECTANGLE

THE HYPERBOLIC METRIC OF A RECTANGLE Annales Academiæ Scientiarum Fennicæ Mathematica Volumen 26, 2001, 401 407 THE HYPERBOLIC METRIC OF A RECTANGLE A. F. Beardon University of Cambridge, DPMMS, Centre for Mathematical Sciences Wilberforce

More information

Newton, Fermat, and Exactly Realizable Sequences

Newton, Fermat, and Exactly Realizable Sequences 1 2 3 47 6 23 11 Journal of Integer Sequences, Vol. 8 (2005), Article 05.1.2 Newton, Fermat, and Exactly Realizable Sequences Bau-Sen Du Institute of Mathematics Academia Sinica Taipei 115 TAIWAN mabsdu@sinica.edu.tw

More information

( f ^ M _ M 0 )dµ (5.1)

( f ^ M _ M 0 )dµ (5.1) 47 5. LEBESGUE INTEGRAL: GENERAL CASE Although the Lebesgue integral defined in the previous chapter is in many ways much better behaved than the Riemann integral, it shares its restriction to bounded

More information

Short cycles in random regular graphs

Short cycles in random regular graphs Short cycles in random regular graphs Brendan D. McKay Department of Computer Science Australian National University Canberra ACT 0200, Australia bdm@cs.anu.ed.au Nicholas C. Wormald and Beata Wysocka

More information

MAGIC010 Ergodic Theory Lecture Entropy

MAGIC010 Ergodic Theory Lecture Entropy 7. Entropy 7. Introduction A natural question in mathematics is the so-called isomorphism problem : when are two mathematical objects of the same class the same (in some appropriately defined sense of

More information

The Hopf argument. Yves Coudene. IRMAR, Université Rennes 1, campus beaulieu, bat Rennes cedex, France

The Hopf argument. Yves Coudene. IRMAR, Université Rennes 1, campus beaulieu, bat Rennes cedex, France The Hopf argument Yves Coudene IRMAR, Université Rennes, campus beaulieu, bat.23 35042 Rennes cedex, France yves.coudene@univ-rennes.fr slightly updated from the published version in Journal of Modern

More information

I. ANALYSIS; PROBABILITY

I. ANALYSIS; PROBABILITY ma414l1.tex Lecture 1. 12.1.2012 I. NLYSIS; PROBBILITY 1. Lebesgue Measure and Integral We recall Lebesgue measure (M411 Probability and Measure) λ: defined on intervals (a, b] by λ((a, b]) := b a (so

More information

Mod-φ convergence I: examples and probabilistic estimates

Mod-φ convergence I: examples and probabilistic estimates Mod-φ convergence I: examples and probabilistic estimates Valentin Féray (joint work with Pierre-Loïc Méliot and Ashkan Nikeghbali) Institut für Mathematik, Universität Zürich Summer school in Villa Volpi,

More information

Analysis Qualifying Exam

Analysis Qualifying Exam Analysis Qualifying Exam Spring 2017 Problem 1: Let f be differentiable on R. Suppose that there exists M > 0 such that f(k) M for each integer k, and f (x) M for all x R. Show that f is bounded, i.e.,

More information

Mean convergence theorems and weak laws of large numbers for weighted sums of random variables under a condition of weighted integrability

Mean convergence theorems and weak laws of large numbers for weighted sums of random variables under a condition of weighted integrability J. Math. Anal. Appl. 305 2005) 644 658 www.elsevier.com/locate/jmaa Mean convergence theorems and weak laws of large numbers for weighted sums of random variables under a condition of weighted integrability

More information

THE ALTERNATIVE DUNFORD-PETTIS PROPERTY FOR SUBSPACES OF THE COMPACT OPERATORS

THE ALTERNATIVE DUNFORD-PETTIS PROPERTY FOR SUBSPACES OF THE COMPACT OPERATORS THE ALTERNATIVE DUNFORD-PETTIS PROPERTY FOR SUBSPACES OF THE COMPACT OPERATORS MARÍA D. ACOSTA AND ANTONIO M. PERALTA Abstract. A Banach space X has the alternative Dunford-Pettis property if for every

More information

GENERALIZED CANTOR SETS AND SETS OF SUMS OF CONVERGENT ALTERNATING SERIES

GENERALIZED CANTOR SETS AND SETS OF SUMS OF CONVERGENT ALTERNATING SERIES Journal of Applied Analysis Vol. 7, No. 1 (2001), pp. 131 150 GENERALIZED CANTOR SETS AND SETS OF SUMS OF CONVERGENT ALTERNATING SERIES M. DINDOŠ Received September 7, 2000 and, in revised form, February

More information

A VERY BRIEF REVIEW OF MEASURE THEORY

A VERY BRIEF REVIEW OF MEASURE THEORY A VERY BRIEF REVIEW OF MEASURE THEORY A brief philosophical discussion. Measure theory, as much as any branch of mathematics, is an area where it is important to be acquainted with the basic notions and

More information

Random Bernstein-Markov factors

Random Bernstein-Markov factors Random Bernstein-Markov factors Igor Pritsker and Koushik Ramachandran October 20, 208 Abstract For a polynomial P n of degree n, Bernstein s inequality states that P n n P n for all L p norms on the unit

More information

Independence of some multiple Poisson stochastic integrals with variable-sign kernels

Independence of some multiple Poisson stochastic integrals with variable-sign kernels Independence of some multiple Poisson stochastic integrals with variable-sign kernels Nicolas Privault Division of Mathematical Sciences School of Physical and Mathematical Sciences Nanyang Technological

More information

UNIFORM BOUNDS FOR BESSEL FUNCTIONS

UNIFORM BOUNDS FOR BESSEL FUNCTIONS Journal of Applied Analysis Vol. 1, No. 1 (006), pp. 83 91 UNIFORM BOUNDS FOR BESSEL FUNCTIONS I. KRASIKOV Received October 8, 001 and, in revised form, July 6, 004 Abstract. For ν > 1/ and x real we shall

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

AN ELEMENTARY PROOF OF THE TRIANGLE INEQUALITY FOR THE WASSERSTEIN METRIC

AN ELEMENTARY PROOF OF THE TRIANGLE INEQUALITY FOR THE WASSERSTEIN METRIC PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 136, Number 1, January 2008, Pages 333 339 S 0002-9939(07)09020- Article electronically published on September 27, 2007 AN ELEMENTARY PROOF OF THE

More information

Math212a1413 The Lebesgue integral.

Math212a1413 The Lebesgue integral. Math212a1413 The Lebesgue integral. October 28, 2014 Simple functions. In what follows, (X, F, m) is a space with a σ-field of sets, and m a measure on F. The purpose of today s lecture is to develop the

More information

Math 117: Infinite Sequences

Math 117: Infinite Sequences Math 7: Infinite Sequences John Douglas Moore November, 008 The three main theorems in the theory of infinite sequences are the Monotone Convergence Theorem, the Cauchy Sequence Theorem and the Subsequence

More information

d(x n, x) d(x n, x nk ) + d(x nk, x) where we chose any fixed k > N

d(x n, x) d(x n, x nk ) + d(x nk, x) where we chose any fixed k > N Problem 1. Let f : A R R have the property that for every x A, there exists ɛ > 0 such that f(t) > ɛ if t (x ɛ, x + ɛ) A. If the set A is compact, prove there exists c > 0 such that f(x) > c for all x

More information