On Dirichlet Multinomial Distributions

Dedicated to Professor Y. S. Chow on the Occasion of his 80th Birthday

By Robert W. Keener¹ and Wei Biao Wu²

Abstract. Let $Y$ have a symmetric Dirichlet multinomial distribution in $\mathbb{R}^m$, and let $S_m = h(Y_1) + \cdots + h(Y_m)$. We derive a central limit theorem for $S_m$ as the sample size $n$ and the number of cells $m$ tend to infinity at the same rate. The rate of convergence is shown to be of order $m^{-1/6}$. The approach is based on approximation of marginal distributions for the Dirichlet multinomial distribution by negative binomial distributions, and a blocking technique similar to that used to study renormalization groups in statistical physics. These theorems generalize and refine results for the classical occupancy problem.

Keywords: occupancy problems; central limit theorem; exchangeable distributions

1 Introduction and main results.

Let $Y$ have a multinomial distribution $M(n, p)$ with $n$ trials and success probabilities $p = (p_1, \dots, p_m)$. Classical occupancy problems concern the counts $l_k = \#\{j : Y_j = k\}$ and the coverage $m - l_0$. If $m$ and $n$ tend to infinity at the same rate and the multinomial distribution is symmetric, $p_1 = \cdots = p_m = 1/m$, Weiss (1958) gives a central limit theorem for the number of cells covered, $m - l_0$. This result has been extended in various directions. Rényi (1962) gives proofs extending Weiss's result to more general limits, Kopocinska and Kopocinski (1992) prove joint asymptotic normality for a collection of the $l_k$, and Englund (1981) gives a Berry-Esseen bound for the error of normal approximation. In asymmetric cases where the cell probabilities $p_j$ vary, Esty (1983) gives a central limit theorem for the coverage and Quine and Robinson (1982) obtain a Berry-Esseen bound. Most relevant to the results presented here, Chen (1980) introduces mixture models, described below, in which the multinomial cell probabilities are sampled from a Dirichlet distribution. Extensions are presented in Chen (1981a, 1981b).
¹ Department of Statistics, University of Michigan, Ann Arbor, MI 48109, USA. keener@umich.edu
² Department of Statistics, University of Chicago, Chicago, IL 60637, USA. wbwu@galton.uchicago.edu

This work was supported by National Science Foundation Grant DMS-.
The models and results here are of some interest in statistics, particularly in situations where multinomial data divide up a sample, but cells are discovered as the experiment is performed. Then quantities like $l_0$, the number of unobserved cells, will be unknown parameters, but the other counts $l_k$, $k \ge 1$, are observed, and various proposed estimators are based on these. See Fisher, Corbet and Williams (1943), Good and Toulmin (1956) and Keener, Rothman and Starr (1987) for further details. Related models also arise studying bootstrap procedures, although the distributional questions of interest there are not that related to the results here. See Rubin (1981) or Csörgő and Wu (2000).

If $G_1, \dots, G_m$ are independent with $G_i \sim \Gamma(A_i, 1)$, then

$$p = \frac{G}{G_1 + \cdots + G_m} \sim D_m(A),$$

the Dirichlet distribution. If the conditional distribution of $Y$ given $p$ is multinomial, $Y \mid p \sim M(n, p)$, then $Y$ has the Dirichlet multinomial distribution, $Y \sim DM(n, A)$. By smoothing (the law of total probability),

$$P(Y = y) = E\,P(Y = y \mid p) = E\binom{n}{y_1, \dots, y_m}\prod_{i=1}^m p_i^{y_i} = \frac{n!\,\Gamma(A_1 + \cdots + A_m)}{\Gamma(n + A_1 + \cdots + A_m)}\prod_{i=1}^m \frac{\Gamma(y_i + A_i)}{y_i!\,\Gamma(A_i)}.$$

In the sequel, we will be particularly interested in the symmetric case in which $A = (a, \dots, a) = a\mathbf{1}_m$. Special cases of interest include the Bose-Einstein distribution, in which $a = 1$ and $P(Y = y)$ is independent of $y$, and the Maxwell-Boltzmann distribution, which arises in the limit as $a \to \infty$. When $m = 2$, $p_1$ has the beta distribution $B(A_1, A_2)$ and $Y_1$ has the beta-binomial distribution $BB(n, A_1, A_2)$.

In the limit theory developed in this paper, the negative binomial distribution $NB(a, \eta)$ with mass function

$$P(Z = y) = \frac{\Gamma(a + y)\,a^a\,\eta^y}{y!\,\Gamma(a)\,(a + \eta)^{a+y}}, \qquad y = 0, 1, \dots,$$
will play a central role. The shape parameter $a$ here is not restricted to be an integer, and the parameter $\eta$ is the mean, instead of the usual success probability $\eta/(a + \eta)$. The variance with this parameterization is $\eta(a + \eta)/a$.

Let $h$ be a function from $\mathbb{N}_+ = \{0, 1, \dots\}$ to $\mathbb{R}$, and define

$$S_m = \sum_{i=1}^m h(Y_i).$$

The main result below is a central limit theorem for $S_m$ as $m \to \infty$ with $Y$ from a symmetric Dirichlet multinomial distribution. Note that when $h(y) = I\{y = k\}$, $S_m$ equals $l_k$, showing the connection to the occupancy problems mentioned above.

Theorem 1. Consider a limiting situation in which $m \to \infty$ and $n \to \infty$, with

$$\eta = \frac{n}{m} \to \eta_\infty \in (0, \infty). \qquad (1)$$

Assume $Y \sim DM(n, a\mathbf{1}_m)$, and that $h$ is a nonlinear function with

$$\sup_{y \in \mathbb{N}_+} h^4(y)\,e^{-\Lambda y} < \infty,$$

where $\Lambda < \log(1 + a/\eta_\infty)$. Take $Z \sim NB(a, \eta)$ and define $\hat\mu = \hat\mu(\eta) = E_\eta h(Z)$. Also, let $\beta = \mathrm{Cov}[h(Z), Z]/\mathrm{Var}(Z)$, so that $\hat\mu + \beta(Z - \eta)$ is the best linear predictor of $h(Z)$, and define $\hat\sigma^2 = \hat\sigma^2(\eta) = \mathrm{Var}[h(Z) - \hat\mu - \beta(Z - \eta)]$. (Note that since $h$ is nonlinear, $\hat\sigma^2 > 0$.) Then

$$\sup_{x \in \mathbb{R}}\left| P(S_m \le x) - \Phi\!\left(\frac{x - m\hat\mu}{\sqrt{m}\,\hat\sigma}\right) \right| = O(1/m^{1/6}).$$

This result remains true if the corresponding moments of $h(Y_1)$, or the mean and standard deviation of $S_m$, are used to center and scale the normal approximation. The version stated seems a bit more convenient for explicit calculation.

The next result complements Theorem 1, providing an exponential bound for tail probabilities of $S_m$.
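The centering and scaling in Theorem 1 are moments of $h(Z)$ under $NB(a, \eta)$, so they can be computed by direct summation of the mass function. The following sketch (ours, not part of the paper; all function names are our own) does this for the occupancy choice $h(y) = I\{y = 0\}$, for which $S_m = l_0$.

```python
# Computing the centering and scaling of Theorem 1 (our sketch, not from the
# paper): mu_hat, beta, and sigma_hat^2 are moments of h(Z) for Z ~ NB(a, eta),
# obtained here by direct summation of the mean-parameterized mass function.
from math import exp, lgamma, log

def nb_pmf(y: int, a: float, eta: float) -> float:
    """Mass function of NB(a, eta) in the mean parameterization."""
    return exp(lgamma(a + y) - lgamma(a) - lgamma(y + 1)
               + a * log(a) + y * log(eta) - (a + y) * log(a + eta))

def theorem1_constants(h, a: float, eta: float, ymax: int = 2000):
    """Return (mu_hat, beta, sigma_hat^2) for the approximation of S_m."""
    ps = [nb_pmf(y, a, eta) for y in range(ymax + 1)]
    mu_hat = sum(h(y) * p for y, p in enumerate(ps))
    cov_hz = sum(h(y) * (y - eta) * p for y, p in enumerate(ps))
    var_z = eta * (a + eta) / a          # variance of NB(a, eta)
    beta = cov_hz / var_z
    sigma2 = sum((h(y) - mu_hat - beta * (y - eta)) ** 2 * p
                 for y, p in enumerate(ps))
    return mu_hat, beta, sigma2

h = lambda y: 1.0 if y == 0 else 0.0     # h(y) = I{y = 0}, so S_m = l_0
mu_hat, beta, sigma2 = theorem1_constants(h, a=1.0, eta=2.0)
# For a = 1, eta = 2: mu_hat = (a/(a+eta))^a = 1/3 and sigma2 = 4/27
```

With $a = 1$ and $\eta = 2$ this gives $\hat\mu = 1/3$, $\beta = -1/9$ and $\hat\sigma^2 = 4/27$, so Theorem 1 approximates $l_0$ by a normal variable with mean $m/3$ and variance $4m/27$.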
Theorem 2. Assume

$$E_{\eta_\infty} e^{\epsilon_0 |h(Z)|} < \infty \qquad (2)$$

for some $\epsilon_0 > 0$. Then for some constant $c > 0$,

$$P[|S_m - m\hat\mu| > m\epsilon] = O(\sqrt{m}\,e^{-c\epsilon^2 m})$$

in limit (1), uniformly for $\epsilon$ in any bounded subset of $[0, \infty)$.

2 Marginal Distributions

This section provides approximations for marginal distributions in the limiting situation described in Theorem 1. Throughout, $Y$ will have the symmetric Dirichlet multinomial distribution $DM(n, a\mathbf{1}_m)$, and $Z, Z_1, Z_2, \dots$ will be i.i.d. from $NB(a, \eta)$ with $\eta = n/m$. In limit (1), joint distributions for $(Y_1, \dots, Y_k)$ and $(Z_1, \dots, Z_k)$ converge. We will be interested in approximating moments of $S_k = h(Y_1) + \cdots + h(Y_k)$, and it will be convenient to measure distance between measures using a variant of the total variation norm with exponential weights for large values. Specifically, if $\nu$ is a signed measure on $\mathbb{N}^k$, define

$$\|\nu\|_\Lambda = \sum_{y \in \mathbb{N}^k} |\nu(\{y\})|\,e^{\Lambda(y_1 + \cdots + y_k)}.$$

If $Q$ and $\hat Q$ are finite signed measures on $\mathbb{N}^k$, then

$$\int f\,dQ - \int f\,d\hat Q = \sum_{y \in \mathbb{N}^k}\left[f(y)\,e^{-\Lambda(y_1 + \cdots + y_k)}\right]e^{\Lambda(y_1 + \cdots + y_k)}\left[Q(\{y\}) - \hat Q(\{y\})\right],$$

so

$$\left|\int f\,dQ - \int f\,d\hat Q\right| \le \|Q - \hat Q\|_\Lambda \sup_{y \in \mathbb{N}^k}|f(y)|\,e^{-\Lambda(y_1 + \cdots + y_k)}. \qquad (3)$$

The likelihood ratio between the marginal distribution of $(Y_1, \dots, Y_k)$ and that of $(Z_1, \dots, Z_k)$ is

$$L(y) = \frac{P(Y_1 = y_1, \dots, Y_k = y_k)}{P(Z = y_1)\cdots P(Z = y_k)} = \frac{\Gamma(n+1)}{\Gamma(n+1-v)\,n^v} \times \frac{\Gamma(ma)}{\Gamma(ma-ka)\,(ma)^{ka}} \times \frac{\Gamma(n+ma-ka-v)}{\Gamma(n+ma)}\,(n+ma)^{ka+v},$$
where $v = y_1 + \cdots + y_k$. The three factors here can be approximated using the following lemma, which follows fairly easily from Stirling's approximation for the gamma function.

Lemma 1. If $a = a_x$ and $b = b_x$ are both $o(\sqrt{x})$, then

$$\frac{\Gamma(x + a)}{x^{a-b}\,\Gamma(x + b)} = 1 + \frac{(a - b)(a + b - 1)}{2x} + O\!\left(\frac{1 + a^4 + b^4}{x^2}\right) \qquad (4)$$

as $x \to \infty$.

Using this lemma,

$$L(y) = 1 + \frac{1}{m}\left[\frac{v(1 - v)}{2\eta} - \frac{k(1 + ka)}{2} + \frac{(ka + v)(ka + v + 1)}{2(a + \eta)}\right] + O\!\left(\frac{1 + v^4}{m^2}\right) \qquad (5)$$

in limit (1), provided $v = o(\sqrt{m})$. When $v$ is large, the approximation to $L$ breaks down, and errors from these values will be estimated using Bernstein inequality bounds based on moment generating functions. The moment generating function for the negative binomial distribution,

$$M_Z(u) = E e^{uZ} = \left(\frac{a}{a + \eta - \eta e^u}\right)^a,$$

is finite provided $e^u < 1 + a/\eta$.

Lemma 2. Let $V_k = Y_1 + \cdots + Y_k$. If $e^u < 1 + a/\eta_\infty$, then in limit (1),

$$E e^{uV_k} \to \left(\frac{a}{a + \eta_\infty - \eta_\infty e^u}\right)^{ka}.$$

Proof. The moment generating function for the binomial distribution with $n$ trials and success probability $p$ is $[1 + p(e^u - 1)]^n$. Since $V_k \sim BB(n, ka, ma - ka)$, its distribution is a beta mixture of binomial distributions and

$$E e^{uV_k} = \frac{\Gamma(ma)}{\Gamma(ka)\,\Gamma(ma - ka)}\int_0^1 [1 + x(e^u - 1)]^n\,x^{ka-1}(1 - x)^{ma-ka-1}\,dx = \frac{\Gamma(ma)}{\Gamma(ka)\,\Gamma(ma - ka)}\int_0^1 \frac{x^{ka-1}}{(1 - x)^{ka+1}}\,e^{-mf(x)}\,dx,$$
where

$$f(x) = -\eta\log[1 + x(e^u - 1)] - a\log(1 - x) = [a - \eta(e^u - 1)]x + O(x^2)$$

as $x \to 0$. A change of variables rescaling $x$ by $m$ gives

$$E e^{u(Y_1 + \cdots + Y_k)} = \frac{\Gamma(ma)}{\Gamma(ma - ka)\,(ma)^{ka}} \cdot \frac{a^{ka}}{\Gamma(ka)}\int_0^m \frac{x^{ka-1}}{(1 - x/m)^{ka+1}}\,e^{-mf(x/m)}\,dx.$$

The first factor here tends to one by Lemma 1, and the desired result then follows by a dominated convergence argument since $mf(x/m) \to [a + \eta_\infty - \eta_\infty e^u]x$, with the error uniformly bounded for $x \in (0, \sqrt{m})$. To be careful, there should be a separate argument to show that the contribution from integrating over $x \in (\sqrt{m}, m)$ is negligible. Since this is fairly routine, the details are omitted.

Considering the approximation for the likelihood ratio $L$ given in (5), it is natural to approximate the joint distribution $Q$ of $(Y_1, \dots, Y_k)$ by

$$\hat Q = \hat Q_0 + \frac{1}{m}\hat Q_1,$$

where $\hat Q_0 = NB(a, \eta)^k$, the joint distribution of $(Z_1, \dots, Z_k)$, and $\hat Q_1(\{z\}) = q_1(z)\,\hat Q_0(\{z\})$ with

$$q_1(z) = \frac{v(1 - v)}{2\eta} - \frac{k(1 + ka)}{2} + \frac{(ka + v)(ka + v + 1)}{2(a + \eta)} = \frac{(a + 2\eta)\tilde v}{2\eta(a + \eta)} - \frac{a\tilde v^2}{2\eta(a + \eta)} + \frac{k}{2},$$

where $v = z_1 + \cdots + z_k$ and $\tilde v = v - k\eta$.

Theorem 3. If $\Lambda < \log(1 + a/\eta_\infty)$, then in limit (1),

$$\|Q - \hat Q\|_\Lambda = O(1/m^2).$$
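Both Lemma 2 and the expansion (5) lend themselves to direct numerical checks. The sketch below (ours, not part of the paper; names are our own) sums exact beta-binomial masses to check the moment generating function limit, and evaluates the exact likelihood ratio $L(y)$ from the gamma-function formula of the previous section to check that $m(L(y) - 1)$ is captured by $q_1(y)$.

```python
# Two numerical checks (ours, not from the paper), standard library only.
# (i)  Lemma 2: with V_k ~ BB(n, ka, ma - ka) and eta = n/m fixed,
#      E exp(u V_k) should approach (a / (a + eta - eta e^u))^{ka}.
# (ii) Expansion (5): m (L(y) - 1) should approach q1(y) for fixed y.
from math import exp, lgamma, log

a, k, eta, u = 1.5, 3, 2.0, 0.3              # e^u ~ 1.35 < 1 + a/eta = 1.75

def bb_log_pmf(v, n, alpha, beta):
    """Log mass function of the beta-binomial BB(n, alpha, beta)."""
    return (lgamma(n + 1) - lgamma(v + 1) - lgamma(n - v + 1)
            + lgamma(v + alpha) + lgamma(n - v + beta)
            - lgamma(n + alpha + beta)
            + lgamma(alpha + beta) - lgamma(alpha) - lgamma(beta))

def bb_mgf(m):
    """E exp(u V_k) for V_k ~ BB(n, ka, ma - ka), summed on the log scale."""
    n = round(m * eta)
    return sum(exp(u * v + bb_log_pmf(v, n, k * a, (m - k) * a))
               for v in range(n + 1))

nb_limit = (a / (a + eta - eta * exp(u))) ** (k * a)
mgfs = {m: bb_mgf(m) for m in (50, 200, 800)}    # approaches nb_limit

def log_L(y, m):
    """Exact log likelihood ratio between the DM(n, a 1_m) marginal of
    (Y_1, ..., Y_k) and the product NB(a, eta)^k, with n = m * eta."""
    kk, v, n = len(y), sum(y), round(m * eta)
    return (lgamma(n + 1) - lgamma(n + 1 - v) - v * log(n)
            + lgamma(m * a) - lgamma(m * a - kk * a) - kk * a * log(m * a)
            + lgamma(n + m * a - kk * a - v) - lgamma(n + m * a)
            + (kk * a + v) * log(n + m * a))

def q1(y):
    """First-order correction from the expansion (5)."""
    kk, v = len(y), sum(y)
    return (v * (1 - v) / (2 * eta) - kk * (1 + kk * a) / 2
            + (kk * a + v) * (kk * a + v + 1) / (2 * (a + eta)))

y = (0, 0, 0)
errs = {m: abs(m * (exp(log_L(y, m)) - 1) - q1(y)) for m in (500, 5000)}
# errs[m] is O(1/m): the first-order term q1 captures m (L - 1)
```

The mgf values approach the negative binomial limit at rate $O(1/m)$, and the residual in the second check shrinks like $1/m$, consistent with the $O((1 + v^4)/m^2)$ error in (5).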
Proof. Let $B = B_m = \{y \in \mathbb{N}^k : y_1 + \cdots + y_k \ge n^{1/3}\}$. Then, since $Q(\{y\}) = L(y)\,\hat Q_0(\{y\})$,

$$\|Q - \hat Q\|_\Lambda \le \sum_{y \in B^c}\left|L(y) - 1 - \frac{q_1(y)}{m}\right|e^{\Lambda(y_1 + \cdots + y_k)}\,\hat Q_0(\{y\}) + \sum_{y \in B}\left[Q(\{y\}) + \hat Q_0(\{y\}) + \frac{|q_1(y)|}{m}\,\hat Q_0(\{y\})\right]e^{\Lambda(y_1 + \cdots + y_k)}.$$

The first term here is $O(1/m^2)$ by the approximation for $L$ in (5), and the second term is of order $e^{-\epsilon n^{1/3}}$ for some $\epsilon > 0$, since the moment generating functions in Lemma 2 converge for some $u > \Lambda$.

As a corollary, the next result provides approximations to moments of the $h(Y_i)$. Let $\mu(\eta) = Eh(Y_i)$ and $\hat\mu(\eta) = Eh(Z_i)$.

Corollary 1. Assume $\Lambda < \log(1 + a/\eta_\infty)$ and that

$$\sup_{y \in \mathbb{N}_+} h^4(y)\,e^{-\Lambda y} < \infty.$$

Then in limit (1),

$$\mu(\eta) = \hat\mu(\eta) + O(1/m),$$

$$\mathrm{Var}(h(Y_1)) = \mathrm{Var}(h(Z)) + O(1/m),$$

$$\mathrm{Cov}(h(Y_1), h(Y_2)) = -\frac{a\,[E h(Z)(Z - \eta)]^2}{m\eta(a + \eta)} + O(1/m^2),$$

$$E(h(Y_1) - \mu(\eta))^4 = E(h(Z) - \hat\mu(\eta))^4 + O(1/m),$$

$$E(h(Y_1) - \mu(\eta))^3(h(Y_2) - \mu(\eta)) = O(1/m),$$

$$E(h(Y_1) - \mu(\eta))^2(h(Y_2) - \mu(\eta))^2 = \mathrm{Var}^2(h(Z)) + O(1/m),$$

$$E(h(Y_1) - \mu(\eta))^2(h(Y_2) - \mu(\eta))(h(Y_3) - \mu(\eta)) = O(1/m),$$

and

$$E(h(Y_1) - \mu(\eta))(h(Y_2) - \mu(\eta))(h(Y_3) - \mu(\eta))(h(Y_4) - \mu(\eta)) = O(1/m^2).$$

As a consequence, in this limit

$$ES_m = m\hat\mu(\eta) + O(1),$$
8 ma[eh(z)(z η)]2 Var(S m ) = mvar(h(z)) η(a + η) = me [ h(z) ˆµ(η) = mˆσ 2 (η) + O(1), + O(1) 2 Cov(h(Z), Z) (Z η)] + O(1), Var(Z) and E(S m mµ(η)) 4 = O(m 2 ). Proof. Using (3) the initial assertions all follow from Theorem 3. Note that f d ˆQ = Ef(Z 1,..., Z k ) + 1 m Eq 1(Z 1,..., Z k )f(z 1,..., Z k ), and that q 1 is a quadratic function with Eq 1 (Z 1,..., Z k ) = 0. So, for instance, q 1 (Z 1,..., Z 4 ) can be written as a sum of quadratic functions of (Z i, Z j ), 1 i j 4 and Eq 1 (Z 1,..., Z 4 ) 4 (h(z i ) µ(η)) = 0. The results about moments of S m then follow directly after a bit of combinatorics. i=1 3 Partial Sums Given a subset B = {b 1,..., b j } (with b 1 < < b j ) of {1,..., m} let Y B denote the random vector (Y b1,..., Y bj ) and let Y +B = i B Y i. Define A B, G B and G +B similarly, and take S +B = i B h(y i). Lemma 3. Let B 1,..., B γ be sets partitioning {1,..., m}. If Y DM m (A), then Y B1,..., Y Bγ given Y +B1 = n 1,..., Y +Bγ = n γ are conditionally independent with Y Bj Y +B1 = n 1,..., Y +Bγ = n γ DM(n j, A Bj ). Proof. Since Y G = g M(n, p), the conditional joint mass function P (Y B1 = y B1,..., Y Bγ = y Bγ G = g, Y +B1 = n 1,..., Y +Bγ = n γ ) = P (Y = y G = g) P (Y +B1 = n 1,..., Y +Bγ = n γ ) 8
is a ratio of multinomial probabilities. Straightforward algebra then shows that $Y_{B_1}, \dots, Y_{B_\gamma}$ given $G = g$ and $Y_{+B_1} = n_1, \dots, Y_{+B_\gamma} = n_\gamma$ are conditionally independent with

$$Y_{B_i} \mid G = g,\ Y_{+B_1} = n_1, \dots, Y_{+B_\gamma} = n_\gamma \sim M(n_i, g_{B_i}/g_{+B_i}).$$

The stated result now follows by integrating against the distribution for $G$. Conditional independence is preserved because $G_{B_1}, \dots, G_{B_\gamma}$ are independent.

Given a partition $B_1, \dots, B_\gamma$ of $\{1, \dots, m\}$ we can write $S_m = S_{+B_1} + \cdots + S_{+B_\gamma}$, and by this lemma the summands are conditionally independent given $Y_{+B_1}, \dots, Y_{+B_\gamma}$. Theorem 1 will be established using a Berry-Esseen limit theorem (in an independent but non-identically distributed setting) to argue that the conditional distribution of $S_m$ is approximately normal. The following two technical lemmas will be needed.

Lemma 4. The mean of the beta-binomial distribution $BB(n, A_1, A_2)$ is

$$\frac{nA_1}{A_1 + A_2},$$

and the variance is

$$\frac{nA_1A_2(n + A_1 + A_2)}{(A_1 + A_2)^2(A_1 + A_2 + 1)}.$$

If $A_1 + A_2 = ma$, then the variance is at most $A_1\eta(a + \eta)/a^3$.

This result follows easily from a conditioning argument.

Lemma 5. For $c > 0$, $x \in \mathbb{R}$ and $y \in \mathbb{R}$,

$$|\Phi(cx) - \Phi(y)| \le \frac{|\log c|}{\sqrt{2\pi e}} + \frac{|x - y|}{\sqrt{2\pi}}.$$

Proof. By ordinary calculus, $|x|\phi(x) \le \phi(1) = 1/\sqrt{2\pi e}$. If $c \ge 1$, then

$$|\Phi(cx) - \Phi(x)| = \left|\int_1^c x\,\phi(ux)\,du\right| \le \int_1^c \frac{|ux|\,\phi(ux)}{u}\,du \le \phi(1)\log c.$$

Similarly, $|\Phi(cx) - \Phi(x)| \le \phi(1)\log(1/c)$ when $0 < c < 1$. Also, since $\Phi' = \phi \le 1/\sqrt{2\pi}$, $|\Phi(x) - \Phi(y)| \le |x - y|/\sqrt{2\pi}$. The desired result follows from these bounds because $|\Phi(cx) - \Phi(y)| \le |\Phi(cx) - \Phi(x)| + |\Phi(x) - \Phi(y)|$.
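The constants in Lemma 5 can be verified numerically on a grid; the sketch below (ours, not part of the paper) uses the error function from the standard library.

```python
# Grid check (ours) of Lemma 5:
#   |Phi(c x) - Phi(y)| <= |log c| / sqrt(2 pi e) + |x - y| / sqrt(2 pi).
from math import erf, exp, log, pi, sqrt

def Phi(x: float) -> float:
    """Standard normal distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def lemma5_bound(c: float, x: float, y: float) -> float:
    return abs(log(c)) / sqrt(2 * pi * exp(1)) + abs(x - y) / sqrt(2 * pi)

# Largest violation of the inequality over a grid of (c, x, y) values;
# it should be <= 0 up to floating-point rounding.
worst = max(
    abs(Phi(c * x) - Phi(y)) - lemma5_bound(c, x, y)
    for c in (0.2, 0.5, 1.0, 2.0, 5.0)
    for x in (t / 10 for t in range(-40, 41))
    for y in (t / 10 for t in range(-40, 41))
)
```

The bound is nearly attained for $c = 1$ and $x$, $y$ close together near zero, where $|\Phi(x) - \Phi(y)| \approx |x - y|\,\phi(0) = |x - y|/\sqrt{2\pi}$.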
Proof of Theorem 1. Let $\gamma = \lceil m^{1/3}\rceil$, and let $B_1, \dots, B_\gamma$ be an even partition of $\{1, \dots, m\}$, i.e., a partition chosen so that $m_i = \#B_i$ equals $\lfloor\bar m\rfloor$ or $\lceil\bar m\rceil$, where $\bar m = m/\gamma = m^{2/3} + O(m^{1/3})$. Then $|m_i - \bar m| \le 1$, so $m_i = \bar m + O(1)$. Define $n_i = Y_{+B_i}$ and $\eta_i = n_i/m_i$, and let $\mathcal F$ denote the sigma-field generated by $Y_{+B_1}, \dots, Y_{+B_\gamma}$. Conditional moments of $S_{+B_i}$ given $\mathcal F$ will be approximated using Corollary 1. The approximations will be accurate when the variables $\eta_i$ are near the limiting value $\eta_\infty$. Define the event

$$F = \{|\eta_i - \eta| \le m^{-1/12},\ i = 1, \dots, \gamma\}.$$

Since $Y_{+B_i} \sim BB(n, m_ia, (m - m_i)a)$, by Lemma 4, $\eta_i$ has mean $\eta$ and variance at most $m_i\eta(a + \eta)/(m_i^2a^2) = O(m^{-2/3})$. By Tchebysheff's inequality,

$$P(|\eta_i - \eta| > m^{-1/12}) = O(m^{-1/2}).$$

This bound, and the asymptotic expressions in the sequel for all quantities indexed by $i$, hold uniformly in $i$. By Boole's inequality,

$$P(F^c) \le \sum_{i=1}^\gamma P(|\eta_i - \eta| > m^{-1/12}) = \gamma\,O(m^{-1/2}) = O(m^{-1/6}).$$

Let $\mu_i = E[S_{+B_i} \mid \mathcal F]$, $\sigma_i^2 = \mathrm{Var}(S_{+B_i} \mid \mathcal F)$, and $\rho_i = E[|S_{+B_i} - \mu_i|^3 \mid \mathcal F]$. By Corollary 1, on $F$,

$$\mu_i = m_i\hat\mu(\eta_i) + O(1) \quad\text{and}\quad \sigma_i^2 = m_i\hat\sigma^2(\eta_i) + O(1).$$

Also, by the corollary, on $F$,

$$E[(S_{+B_i} - \mu_i)^4 \mid \mathcal F] = O(m_i^2),$$

and so $\rho_i = O(m_i^{3/2})$ on $F$. The function $\hat\mu(\cdot)$ has a bounded second derivative in some neighborhood of $\eta_\infty$. Taylor expansion about $\eta_i = \eta$ gives

$$\mu_i = m_i\hat\mu(\eta) + (n_i - m_i\eta)\hat\mu'(\eta) + O(m_i(\eta_i - \eta)^2) + O(1)$$
on $F$, and summing over $i$,

$$\sum_i \mu_i = m\hat\mu(\eta) + V\,O(m^{2/3}) + O(m^{1/3}) \qquad (6)$$

on $F$, where

$$V = \sum_i(\eta_i - \eta)^2.$$

Similarly, on $F$,

$$\sum_i \sigma_i^2 = m\hat\sigma^2(\eta) + V\,O(m^{2/3}) + O(m^{1/3}).$$

Next, by the Berry-Esseen theorem (cf. Feller (1971)),

$$\left|P(S_m \le x \mid \mathcal F) - \Phi\!\left(\frac{x - \sum_i\mu_i}{\sqrt{\sum_i\sigma_i^2}}\right)\right| \le \frac{6\sum_i\rho_i}{\left(\sum_i\sigma_i^2\right)^{3/2}}.$$

Now on $F$, $V = O(m^{1/6})$, and so $\sum_i\sigma_i^2 \sim m\hat\sigma^2(\eta)$. Hence on $F$,

$$\frac{\sum_i\rho_i}{\left(\sum_i\sigma_i^2\right)^{3/2}} = O(m^{-1/6}).$$

Since $P(S_m \le x) = E\,P(S_m \le x \mid \mathcal F)$, the theorem will follow from the bounds presented provided

$$E\left[\sup_x\left|\Phi\!\left(\frac{x - \sum_i\mu_i}{\sqrt{\sum_i\sigma_i^2}}\right) - \Phi\!\left(\frac{x - m\hat\mu}{\sqrt m\,\hat\sigma}\right)\right|\,1_F\right] = O(m^{-1/6}).$$

By Lemma 5, the left hand side here is bounded by the sum of

$$\frac{1}{\sqrt{2\pi e}}\,E\left[\left|\log\frac{\sqrt{\sum_i\sigma_i^2}}{\sqrt m\,\hat\sigma}\right|\,1_F\right] \quad\text{and}\quad \frac{1}{\sqrt{2\pi}}\,E\left[\frac{\left|\sum_i\mu_i - m\hat\mu\right|}{\sqrt m\,\hat\sigma}\,1_F\right].$$

Since $V = O(m^{1/6})$ on $F$, the argument of the expectation in the first of these expressions is $O(m^{-1/6})$. The second expression, by (6), is $O(m^{-1/6}) + O(m^{1/6}EV)$. But $\mathrm{Var}(\eta_i) = O(m^{-2/3})$, which implies $EV = O(m^{-1/3})$, and so the second expression is also $O(m^{-1/6})$.
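Theorem 1 can be illustrated by simulation. The sketch below is ours, purely illustrative, and no substitute for the proof: symmetric $DM(n, a\mathbf{1}_m)$ counts are drawn by a Pólya urn scheme, and with $h(y) = I\{y = 0\}$, $S_m = l_0$, the number of empty cells, whose simulated mean and standard deviation can be compared with $m\hat\mu$ and $\sqrt{m}\,\hat\sigma$.

```python
# Monte Carlo illustration (ours, not from the paper) of Theorem 1 with
# h(y) = 1{y = 0}, so S_m = l_0.  Symmetric DM(n, a 1_m) counts are drawn by
# a Polya urn: each of the n balls lands in cell j with probability
# proportional to a + (current count in cell j).
import random

def dm_counts(n: int, m: int, a: float, rng: random.Random) -> list:
    """One draw of Y ~ DM(n, a 1_m) via the Polya urn scheme."""
    counts = [0] * m
    for _ in range(n):
        j = rng.choices(range(m), weights=[a + c for c in counts])[0]
        counts[j] += 1
    return counts

a, m, n, reps = 1.0, 60, 120, 500
eta = n / m
# Moments of h(Z) = 1{Z = 0} under Z ~ NB(a, eta), computed in closed form:
p0 = (a / (a + eta)) ** a                  # mu_hat = P(Z = 0)
beta = -a * p0 / (a + eta)                 # Cov[h(Z), Z] / Var(Z)
sigma2 = p0 * (1 - p0) - beta ** 2 * eta * (a + eta) / a

rng = random.Random(1)
sims = []
for _ in range(reps):
    y = dm_counts(n, m, a, rng)
    sims.append(sum(1 for c in y if c == 0))   # S_m = l_0

mean_s = sum(sims) / reps
sd_s = (sum((s - mean_s) ** 2 for s in sims) / (reps - 1)) ** 0.5
# mean_s is near m * p0 = 20 and sd_s is near sqrt(m * sigma2) ~ 2.98
```

With $a = 1$ (the Bose-Einstein case) and $\eta = 2$, Theorem 1 predicts $l_0 \approx N(m/3, 4m/27)$, and the simulated moments track these values up to the $O(1)$ corrections allowed by Corollary 1.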
Proof of Theorem 2. Since $Z_1 + \cdots + Z_m \sim NB(ma, m\eta)$,

$$P_\eta(Z_1 + \cdots + Z_m = n) = \frac{\Gamma(ma + n)\,(ma)^{ma}\,(m\eta)^n}{n!\,\Gamma(ma)\,(ma + m\eta)^{ma+n}}.$$

Using this, it is easy to check that

$$Z_1, \dots, Z_m \mid Z_1 + \cdots + Z_m = n \sim DM(n, a\mathbf{1}_m), \qquad (7)$$

noted as Lemma 1 of Chen (1980). Also, by Stirling's formula,

$$P_\eta(Z_1 + \cdots + Z_m = n) \sim \sqrt{\frac{a}{2\pi m\eta(a + \eta)}}$$

in limit (1). Using (7),

$$P[|S_m - m\hat\mu| > m\epsilon] \le \frac{P_\eta\left[\left|\sum_{i=1}^m W_i\right| > m\epsilon\right]}{P_\eta(Z_1 + \cdots + Z_m = n)},$$

where $W_i = h(Z_i) - \hat\mu$ (and $W = h(Z) - \hat\mu$ below), and the theorem will follow if

$$P_\eta\left[\left|\sum_{i=1}^m W_i\right| > m\epsilon\right] = O(e^{-c\epsilon^2 m}).$$

This basically follows from Bernstein's inequality, but a bit of care is necessary to make sure the stated uniformity holds. Note that, adjusting $c$, it is sufficient to show that the asymptotic bound holds uniformly for all $\epsilon$ sufficiently small. Let $\delta = \epsilon_0/4$. Since $e^x \le 1 + x + x^2e^{|x|}/2$ and $x^2 \le 4e^{|x|}/e^2$, for $0 \le u \le \delta$,

$$E_\eta e^{uW} \le 1 + \frac{u^2}{2}E_\eta W^2e^{\delta|W|} \le 1 + \frac{2u^2}{\delta^2e^2}E_\eta e^{2\delta|W|}.$$

Introducing a likelihood ratio and using the Schwarz inequality,

$$E_\eta e^{2\delta|W|} = E_{\eta_\infty}\left[e^{2\delta|W|}\left(\frac{\eta}{\eta_\infty}\right)^Z\left(\frac{a + \eta_\infty}{a + \eta}\right)^{a+Z}\right] \le \left\{E_{\eta_\infty}\left(\frac{\eta}{\eta_\infty}\right)^{2Z}\left(\frac{a + \eta_\infty}{a + \eta}\right)^{2a+2Z}\right\}^{1/2}\left\{E_{\eta_\infty}e^{\epsilon_0|h(Z) - \hat\mu|}\right\}^{1/2}.$$

The first factor here converges to one by dominated convergence as $\eta \to \eta_\infty$, and the second factor remains bounded for $m$ and $n$ sufficiently large by (2). So there is a constant $c_0$ such that

$$E_\eta e^{uW} \le 1 + c_0u^2 \le e^{c_0u^2}, \qquad 0 \le u \le \delta,$$

for $m$ and $n$ sufficiently large in limit (1). By Bernstein's inequality,

$$P_\eta\left[\sum_{i=1}^m W_i > m\epsilon\right] \le e^{mc_0u^2 - m\epsilon u}$$
for $m$ and $n$ sufficiently large in limit (1). If $\epsilon \le 2c_0\delta$, taking $u = \epsilon/(2c_0)$ this bound becomes $e^{-m\epsilon^2/(4c_0)}$. The theorem then follows from this bound and a corresponding bound for $P_\eta[\sum_{i=1}^m W_i < -m\epsilon]$.

Acknowledgment. We thank the referee for a careful reading of our manuscript.

References

Chen, Wen-Chen (1980) On the weak form of Zipf's law, Journal of Applied Probability, 17.
Chen, Wen-Chen (1981a) Limit theorems for general size distributions, Journal of Applied Probability, 18.
Chen, Wen-Chen (1981b) Some local limit theorems in the symmetric Dirichlet-multinomial urn models, Annals of the Institute of Statistical Mathematics, 33.
Csörgő, S. and Wu, W. B. (2000) Random graphs and the strong convergence of bootstrap means, Combinatorics, Probability and Computing.
Englund, Gunnar (1981) A remainder term estimate for the normal approximation in classical occupancy, Annals of Probability, 9.
Esty, Warren W. (1983) A normal limit law for a nonparametric estimator of the coverage of a random sample, Annals of Statistics.
Feller, W. (1971) An Introduction to Probability Theory and its Applications. Wiley, New York.
Fisher, R. A., Corbet, A. S. and Williams, C. B. (1943) The relation between the number of species and the number of individuals in a random sample of an animal population, J. Animal Ecol., 12.
Good, I. J. and Toulmin, G. H. (1956) The number of new species, and the increase in population coverage, when a sample is increased, Biometrika, 43.
Keener, R., Rothman, E. and Starr, N. (1987) Distributions on partitions, Annals of Statistics, 15.
Kopocinska, I. and Kopocinski, B. (1992) A new proof of generalized theorem of Irving Weiss, Periodica Mathematica Hungarica, 25.
Quine, M. P. (1979) A functional central limit theorem for a generalized occupancy problem, Stochastic Processes and their Applications, 9.
Quine, M. P. and Robinson, J. (1982) A Berry-Esseen bound for an occupancy problem, Annals of Probability, 10.
Rényi, A. (1962) Three new proofs and a generalization of a theorem of Irving Weiss, Magyar Tud. Akad. Mat. Kutato Int. Kozl., 7.
Rubin, D. B. (1981) The Bayesian bootstrap, Annals of Statistics, 9.
Weiss, Irving (1958) Limiting distributions in some occupancy problems, Annals of Mathematical Statistics.
More informationLoose Hamilton Cycles in Random k-uniform Hypergraphs
Loose Hamilton Cycles in Random k-uniform Hypergraphs Andrzej Dudek and Alan Frieze Department of Mathematical Sciences Carnegie Mellon University Pittsburgh, PA 1513 USA Abstract In the random k-uniform
More information5 Introduction to the Theory of Order Statistics and Rank Statistics
5 Introduction to the Theory of Order Statistics and Rank Statistics This section will contain a summary of important definitions and theorems that will be useful for understanding the theory of order
More informationX n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2)
14:17 11/16/2 TOPIC. Convergence in distribution and related notions. This section studies the notion of the so-called convergence in distribution of real random variables. This is the kind of convergence
More informationMath212a1413 The Lebesgue integral.
Math212a1413 The Lebesgue integral. October 28, 2014 Simple functions. In what follows, (X, F, m) is a space with a σ-field of sets, and m a measure on F. The purpose of today s lecture is to develop the
More informationPattern Recognition and Machine Learning. Bishop Chapter 2: Probability Distributions
Pattern Recognition and Machine Learning Chapter 2: Probability Distributions Cécile Amblard Alex Kläser Jakob Verbeek October 11, 27 Probability Distributions: General Density Estimation: given a finite
More informationHyperparameter estimation in Dirichlet process mixture models
Hyperparameter estimation in Dirichlet process mixture models By MIKE WEST Institute of Statistics and Decision Sciences Duke University, Durham NC 27706, USA. SUMMARY In Bayesian density estimation and
More informationExistence and Uniqueness of Penalized Least Square Estimation for Smoothing Spline Nonlinear Nonparametric Regression Models
Existence and Uniqueness of Penalized Least Square Estimation for Smoothing Spline Nonlinear Nonparametric Regression Models Chunlei Ke and Yuedong Wang March 1, 24 1 The Model A smoothing spline nonlinear
More informationSubmitted to the Brazilian Journal of Probability and Statistics
Submitted to the Brazilian Journal of Probability and Statistics Multivariate normal approximation of the maximum likelihood estimator via the delta method Andreas Anastasiou a and Robert E. Gaunt b a
More informationA noninformative Bayesian approach to domain estimation
A noninformative Bayesian approach to domain estimation Glen Meeden School of Statistics University of Minnesota Minneapolis, MN 55455 glen@stat.umn.edu August 2002 Revised July 2003 To appear in Journal
More informationNonparametric Bayesian Methods (Gaussian Processes)
[70240413 Statistical Machine Learning, Spring, 2015] Nonparametric Bayesian Methods (Gaussian Processes) Jun Zhu dcszj@mail.tsinghua.edu.cn http://bigml.cs.tsinghua.edu.cn/~jun State Key Lab of Intelligent
More information2. Function spaces and approximation
2.1 2. Function spaces and approximation 2.1. The space of test functions. Notation and prerequisites are collected in Appendix A. Let Ω be an open subset of R n. The space C0 (Ω), consisting of the C
More information2 Inference for Multinomial Distribution
Markov Chain Monte Carlo Methods Part III: Statistical Concepts By K.B.Athreya, Mohan Delampady and T.Krishnan 1 Introduction In parts I and II of this series it was shown how Markov chain Monte Carlo
More informationBahadur representations for bootstrap quantiles 1
Bahadur representations for bootstrap quantiles 1 Yijun Zuo Department of Statistics and Probability, Michigan State University East Lansing, MI 48824, USA zuo@msu.edu 1 Research partially supported by
More informationMiscellanea Kernel density estimation and marginalization consistency
Biometrika (1991), 78, 2, pp. 421-5 Printed in Great Britain Miscellanea Kernel density estimation and marginalization consistency BY MIKE WEST Institute of Statistics and Decision Sciences, Duke University,
More informationA Bootstrap Test for Conditional Symmetry
ANNALS OF ECONOMICS AND FINANCE 6, 51 61 005) A Bootstrap Test for Conditional Symmetry Liangjun Su Guanghua School of Management, Peking University E-mail: lsu@gsm.pku.edu.cn and Sainan Jin Guanghua School
More informationStat 5101 Lecture Slides: Deck 8 Dirichlet Distribution. Charles J. Geyer School of Statistics University of Minnesota
Stat 5101 Lecture Slides: Deck 8 Dirichlet Distribution Charles J. Geyer School of Statistics University of Minnesota 1 The Dirichlet Distribution The Dirichlet Distribution is to the beta distribution
More informationThe expansion of random regular graphs
The expansion of random regular graphs David Ellis Introduction Our aim is now to show that for any d 3, almost all d-regular graphs on {1, 2,..., n} have edge-expansion ratio at least c d d (if nd is
More informationExperience Rating in General Insurance by Credibility Estimation
Experience Rating in General Insurance by Credibility Estimation Xian Zhou Department of Applied Finance and Actuarial Studies Macquarie University, Sydney, Australia Abstract This work presents a new
More informationOrder Statistics and Distributions
Order Statistics and Distributions 1 Some Preliminary Comments and Ideas In this section we consider a random sample X 1, X 2,..., X n common continuous distribution function F and probability density
More informationGoodness-of-fit tests for the cure rate in a mixture cure model
Biometrika (217), 13, 1, pp. 1 7 Printed in Great Britain Advance Access publication on 31 July 216 Goodness-of-fit tests for the cure rate in a mixture cure model BY U.U. MÜLLER Department of Statistics,
More informationUPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES
Applied Probability Trust 7 May 22 UPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES HAMED AMINI, AND MARC LELARGE, ENS-INRIA Abstract Upper deviation results are obtained for the split time of a
More informationOn rate of convergence in distribution of asymptotically normal statistics based on samples of random size
Annales Mathematicae et Informaticae 39 212 pp. 17 28 Proceedings of the Conference on Stochastic Models and their Applications Faculty of Informatics, University of Debrecen, Debrecen, Hungary, August
More informationBayesian nonparametrics
Bayesian nonparametrics 1 Some preliminaries 1.1 de Finetti s theorem We will start our discussion with this foundational theorem. We will assume throughout all variables are defined on the probability
More informationDistance between multinomial and multivariate normal models
Chapter 9 Distance between multinomial and multivariate normal models SECTION 1 introduces Andrew Carter s recursive procedure for bounding the Le Cam distance between a multinomialmodeland its approximating
More informationPCMI Introduction to Random Matrix Theory Handout # REVIEW OF PROBABILITY THEORY. Chapter 1 - Events and Their Probabilities
PCMI 207 - Introduction to Random Matrix Theory Handout #2 06.27.207 REVIEW OF PROBABILITY THEORY Chapter - Events and Their Probabilities.. Events as Sets Definition (σ-field). A collection F of subsets
More informationMathematics 426 Robert Gross Homework 9 Answers
Mathematics 4 Robert Gross Homework 9 Answers. Suppose that X is a normal random variable with mean µ and standard deviation σ. Suppose that PX > 9 PX
More informationChapter 1. Sets and probability. 1.3 Probability space
Random processes - Chapter 1. Sets and probability 1 Random processes Chapter 1. Sets and probability 1.3 Probability space 1.3 Probability space Random processes - Chapter 1. Sets and probability 2 Probability
More informationContinuity. Chapter 4
Chapter 4 Continuity Throughout this chapter D is a nonempty subset of the real numbers. We recall the definition of a function. Definition 4.1. A function from D into R, denoted f : D R, is a subset of
More informationThe properties of L p -GMM estimators
The properties of L p -GMM estimators Robert de Jong and Chirok Han Michigan State University February 2000 Abstract This paper considers Generalized Method of Moment-type estimators for which a criterion
More information5. Conditional Distributions
1 of 12 7/16/2009 5:36 AM Virtual Laboratories > 3. Distributions > 1 2 3 4 5 6 7 8 5. Conditional Distributions Basic Theory As usual, we start with a random experiment with probability measure P on an
More informationStochastic Comparisons of Weighted Sums of Arrangement Increasing Random Variables
Portland State University PDXScholar Mathematics and Statistics Faculty Publications and Presentations Fariborz Maseeh Department of Mathematics and Statistics 4-7-2015 Stochastic Comparisons of Weighted
More informationIn particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with
Appendix: Matrix Estimates and the Perron-Frobenius Theorem. This Appendix will first present some well known estimates. For any m n matrix A = [a ij ] over the real or complex numbers, it will be convenient
More informationPart IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015
Part IA Probability Theorems Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.
More informationLECTURE 11: EXPONENTIAL FAMILY AND GENERALIZED LINEAR MODELS
LECTURE : EXPONENTIAL FAMILY AND GENERALIZED LINEAR MODELS HANI GOODARZI AND SINA JAFARPOUR. EXPONENTIAL FAMILY. Exponential family comprises a set of flexible distribution ranging both continuous and
More informationLectures on Elementary Probability. William G. Faris
Lectures on Elementary Probability William G. Faris February 22, 2002 2 Contents 1 Combinatorics 5 1.1 Factorials and binomial coefficients................. 5 1.2 Sampling with replacement.....................
More informationHYPERGRAPHS, QUASI-RANDOMNESS, AND CONDITIONS FOR REGULARITY
HYPERGRAPHS, QUASI-RANDOMNESS, AND CONDITIONS FOR REGULARITY YOSHIHARU KOHAYAKAWA, VOJTĚCH RÖDL, AND JOZEF SKOKAN Dedicated to Professors Vera T. Sós and András Hajnal on the occasion of their 70th birthdays
More informationLecture 4: Dynamic models
linear s Lecture 4: s Hedibert Freitas Lopes The University of Chicago Booth School of Business 5807 South Woodlawn Avenue, Chicago, IL 60637 http://faculty.chicagobooth.edu/hedibert.lopes hlopes@chicagobooth.edu
More informationEstimation of the functional Weibull-tail coefficient
1/ 29 Estimation of the functional Weibull-tail coefficient Stéphane Girard Inria Grenoble Rhône-Alpes & LJK, France http://mistis.inrialpes.fr/people/girard/ June 2016 joint work with Laurent Gardes,
More informationIntroduction to Empirical Processes and Semiparametric Inference Lecture 02: Overview Continued
Introduction to Empirical Processes and Semiparametric Inference Lecture 02: Overview Continued Michael R. Kosorok, Ph.D. Professor and Chair of Biostatistics Professor of Statistics and Operations Research
More informationDependent hierarchical processes for multi armed bandits
Dependent hierarchical processes for multi armed bandits Federico Camerlenghi University of Bologna, BIDSA & Collegio Carlo Alberto First Italian meeting on Probability and Mathematical Statistics, Torino
More informationRATE-OPTIMAL GRAPHON ESTIMATION. By Chao Gao, Yu Lu and Harrison H. Zhou Yale University
Submitted to the Annals of Statistics arxiv: arxiv:0000.0000 RATE-OPTIMAL GRAPHON ESTIMATION By Chao Gao, Yu Lu and Harrison H. Zhou Yale University Network analysis is becoming one of the most active
More informationConvergence rates in weighted L 1 spaces of kernel density estimators for linear processes
Alea 4, 117 129 (2008) Convergence rates in weighted L 1 spaces of kernel density estimators for linear processes Anton Schick and Wolfgang Wefelmeyer Anton Schick, Department of Mathematical Sciences,
More informationTail bound inequalities and empirical likelihood for the mean
Tail bound inequalities and empirical likelihood for the mean Sandra Vucane 1 1 University of Latvia, Riga 29 th of September, 2011 Sandra Vucane (LU) Tail bound inequalities and EL for the mean 29.09.2011
More informationOn the Optimum Asymptotic Multiuser Efficiency of Randomly Spread CDMA
On the Optimum Asymptotic Multiuser Efficiency of Randomly Spread CDMA Ralf R. Müller Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) Lehrstuhl für Digitale Übertragung 13 December 2014 1. Introduction
More informationNotes on Poisson Approximation
Notes on Poisson Approximation A. D. Barbour* Universität Zürich Progress in Stein s Method, Singapore, January 2009 These notes are a supplement to the article Topics in Poisson Approximation, which appeared
More information