Probability Inequalities for the Sum of Random Variables When Sampling Without Replacement


International Journal of Statistics and Probability; Vol. 2, No. 4; 2013
Published by Canadian Center of Science and Education

Probability Inequalities for the Sum of Random Variables When Sampling Without Replacement

Jeremy Becnel (1), Kent Riggs (1) & Dean Young (2)

(1) Department of Mathematics and Statistics, Stephen F. Austin State University, Nacogdoches, Texas, USA
(2) Department of Statistical Science, Baylor University, Waco, TX, USA

Correspondence: Jeremy Becnel, Department of Mathematics and Statistics, Stephen F. Austin State University, Nacogdoches, Texas, USA. E-mail: becneljj@sfasu.edu

Received: August 11, 2013   Accepted: September 24, 2013   Online Published: October 11, 2013
doi:10.5539/ijsp.v2n4p75   URL: http://dx.doi.org/10.5539/ijsp.v2n4p75

Abstract

Exponential-type upper bounds are formulated for the probability that the maximum of the partial sample sums of discrete random variables having finite equispaced support exceeds, or differs from, the population mean by a specified positive constant. The new inequalities extend the work of Serfling (1974). An example is given to demonstrate their efficacy.

Keywords: probability bounds, discrete probability

1 Introduction

Serfling (1974) obtained upper bounds for the probability that the sum of observations sampled without replacement from a finite population exceeds its expected value by a specified quantity. Serfling (1974) also noted that his bound is crude due to the incorporation of the coarse variance upper bound $\sigma^2 < (b-a)^2/4$, and suggested that it would be desirable to sharpen this result by involving the quantity $\sigma^2$ in place of $(b-a)^2/4$. While this problem remains unsolved, we attempt to at least partially fulfill Serfling's suggestion: we tighten his bound by restricting ourselves to a particular class of discrete distributions.

The general problem we address may be stated as follows. Consider a finite population of size $N$ whose members are not necessarily distinct, and let $\Omega_N = \{x_1, x_2, \ldots, x_N\}$ be the set representation of this population. Denote by $X_1, X_2, \ldots, X_n$ the values of a sample of size $n$ drawn without replacement from $\Omega_N$. Define the statistics

\[ a = \min_{1 \le i \le N} x_i, \qquad b = \max_{1 \le i \le N} x_i, \qquad \mu = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad \sigma^2 = \frac{1}{N}\sum_{i=1}^{N} (x_i - \mu)^2, \tag{1.1} \]

and let the sampling fractions be $f_n = (n-1)/(N-1)$ and $g_n = (n-1)/N$. We are concerned with the behavior of the sum

\[ S_k = \sum_{i=1}^{k} X_i \tag{1.2} \]

for $1 \le k \le n$. In particular, we derive new parameter-free upper bounds on the probabilities

\[ P_n(\varepsilon) = P(S_n - n\mu \ge n\varepsilon) \tag{1.3} \]

and

\[ R_n(\varepsilon) = P\Bigl( \max_{n \le k \le N} \frac{S_k - k\mu}{k} \ge \varepsilon \Bigr), \tag{1.4} \]

where $\varepsilon > 0$. The most familiar upper bound for (1.3) is the Bienaymé–Chebyshev inequality, which is of the form

\[ P_n(\varepsilon) \le \frac{(1 - f_n)\,\sigma^2}{n\varepsilon^2}. \tag{1.5} \]
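To make (1.5) concrete, here is a minimal Python sketch (ours, not part of the paper); the function name and the toy population are illustrative assumptions:

```python
import numpy as np

def chebyshev_bound(population, n, eps):
    """Bienayme-Chebyshev bound (1.5) on P(S_n - n*mu >= n*eps) when
    drawing n values without replacement from `population`."""
    N = len(population)
    sigma2 = np.var(population)     # population variance (divisor N), as in (1.1)
    f_n = (n - 1) / (N - 1)         # sampling fraction
    return (1 - f_n) * sigma2 / (n * eps**2)

# Example: heights (inches) spread uniformly over 60, 61, ..., 78
pop = np.repeat(np.arange(60, 79), 200).astype(float)
print(chebyshev_bound(pop, n=100, eps=2.0))   # roughly 0.073 for this population
```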

Serfling (1974) derived an alternative upper bound for (1.3), which may be expressed as

\[ P_n(\varepsilon) \le \exp\Bigl( \frac{-2n\varepsilon^2}{(1 - g_n)(b-a)^2} \Bigr), \tag{1.6} \]

and an alternative bound for a two-sided version of (1.4), which is given as

\[ R_n^*(\varepsilon) = P\Bigl( \max_{n \le k \le N} \frac{|S_k - k\mu|}{k} \ge \varepsilon \Bigr) \le \frac{E(S_n - n\mu)^{2r}}{(n\varepsilon)^{2r}}, \tag{1.7} \]

where $r$ is a positive integer. We compare our new upper bound with these under the following scenario, which is similar to an example presented by Savage (1961). Suppose one wishes to study the average height of a finite population of people. Assume that all individuals in the population are between 60 and 78 inches tall and that their heights are measured to the nearest inch. The question we wish to answer is, "What is the probability that the average height of a sample of 100 from a population of 4,000 individuals is within two inches of the population mean height?"

The main vehicle we utilize for the sharpening of inequalities (1.6) and (1.7) is the additional assumption that the random variables $X_i \mid X_1, \ldots, X_{i-1}$ for $i = 1, \ldots, n$ are discrete random variables with probability functions having finite, equispaced support and a variance bounded above by the discrete uniform variance.

The remainder of the paper is as follows. In Section 2 we give some mathematical preliminaries, and in Section 2.1 we derive the main inequality results. Finally, in Section 3 we present the aforementioned application of the newly derived probability bounds.

2 The Set $V_{a,b,J}$

From this point forward we work almost exclusively with an equispaced set of $J$ points in the interval $[a, b]$, beginning at $a$ and ending at $b$. We denote this set by $\Omega_{a,b,J} = \{a, a+c, a+2c, \ldots, a+(J-1)c\}$, where $b = a+(J-1)c$ and $c = (b-a)/(J-1)$. We refer to $J$ as the support size. As before, $X_1, X_2, \ldots, X_n$ denote the values of a sample of size $n$ drawn without replacement from $\Omega_{a,b,J}$, and $S_k = \sum_{i=1}^{k} X_i$.

To sharpen the inequalities (1.6) and (1.7) we work with probability distributions whose variance is bounded by the variance of a discrete uniform probability distribution. This leads us to the following definition.

Definition 2.1 Let $V_{a,b,J}$ be the set of probability functions $f$ with support on $\Omega_{a,b,J}$ such that, for a random variable $X$ having probability function $f$, the variance is bounded as

\[ \operatorname{Var}_f(X) \le \frac{J+1}{J-1} \cdot \frac{(b-a)^2}{12}. \]

Remark 2.2 For $f \in V_{a,b,J}$, the above bound on $\operatorname{Var}_f(X)$ is simply the variance of a discrete uniform probability function on $\Omega_{a,b,J}$.

Remark 2.3 The definition of $V_{a,b,J}$, although appearing somewhat restrictive, still allows for a broad and rich range of distributions. In particular, it includes a broad range of discrete unimodal distributions.
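Definition 2.1 is easy to check numerically. The sketch below is an illustration of ours (the helper name and example pmfs are assumptions); it tests whether a probability function on the grid belongs to $V_{a,b,J}$:

```python
import numpy as np

def in_V(pmf, a, b):
    """Test Definition 2.1: is Var_f(X) at most the variance of the
    discrete uniform distribution on the same J equispaced points?"""
    pmf = np.asarray(pmf, dtype=float)
    J = len(pmf)
    support = np.linspace(a, b, J)        # a, a+c, ..., b
    mu = pmf @ support
    var = pmf @ (support - mu) ** 2
    return var <= (J + 1) / (J - 1) * (b - a) ** 2 / 12

# A symmetric triangular pmf on 19 points lies in V_{60,78,19};
# a two-point pmf concentrated at the endpoints does not.
tri = np.minimum(np.arange(1, 20), np.arange(19, 0, -1)).astype(float)
print(in_V(tri / tri.sum(), 60, 78))      # True
ends = np.r_[0.5, np.zeros(17), 0.5]
print(in_V(ends, 60, 78))                 # False (variance 81 > 30)
```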

2.1 Probability Inequalities for $V_{a,b,J}$

In this section we derive a new maximal probability inequality for sums of discrete random variables sampled without replacement from probability functions belonging to $V_{a,b,J}$. We begin with two lemmas used in the proof of the main theorem.

Lemma 2.4 Let $X$ be a random variable with probability function in $V_{a,b,J}$ and let $E(X) = \mu$. Then for any $\lambda \ge 0$ we have

\[ E\bigl(e^{\lambda(X-\mu)}\bigr) \le \exp\bigl(\alpha(e^{\lambda d} - \lambda d - 1)\bigr), \]

where $d = b-a$ and $\alpha = \dfrac{J+1}{12(J-1)}$.

Proof. Let $Z = X - \mu$ and notice that

\[ E(e^{\lambda Z}) = 1 + \sum_{j=1}^{\infty} \frac{\lambda^j}{j!} E(Z^j) = 1 + \sum_{j=2}^{\infty} \frac{\lambda^j}{j!} E(Z^j), \tag{2.1} \]

since $E(Z) = 0$. Now let $f$ be the probability function for $X$. Then for any $j \ge 2$ we have

\[ E(Z^j) = \sum_{x \in \Omega_{a,b,J}} (x-\mu)^j f(x) \le d^{\,j-2} \sum_{x \in \Omega_{a,b,J}} (x-\mu)^2 f(x) \qquad \text{(since $|x-\mu| \le d$)} \]
\[ = d^{\,j-2} \operatorname{Var}_f(X) \le d^{\,j-2} \cdot \frac{J+1}{J-1} \cdot \frac{d^2}{12} \qquad \text{(since $f \in V_{a,b,J}$)} \]
\[ = \frac{(J+1)\, d^{\,j}}{12(J-1)}. \]

Substituting the bound $E(Z^j) \le \frac{(J+1) d^{\,j}}{12(J-1)}$ into (2.1), we get

\[ E(e^{\lambda Z}) \le 1 + \sum_{j=2}^{\infty} \frac{J+1}{12(J-1)} \cdot \frac{\lambda^j d^{\,j}}{j!} = 1 + \alpha\bigl(e^{\lambda d} - \lambda d - 1\bigr) \le \exp\bigl(\alpha(e^{\lambda d} - \lambda d - 1)\bigr). \qquad \Box \]

Corollary 2.5 Let $X$ be a random variable with probability function in $V_{a,b,J}$ and let $E(X) = \mu$. Then for any $\lambda \ge 0$ we have

\[ E\bigl(e^{\lambda(X-\mu)}\bigr) \le \exp\bigl(\beta \lambda^2 \, r(\lambda, d)\bigr), \]

where $d = b-a$, $\beta = \dfrac{(J+1)(b-a)^2}{12(J-1)}$, and $r(\lambda, d) = \sum_{j=2}^{\infty} \dfrac{\lambda^{j-2} d^{\,j-2}}{j!}$.

Proof. By Lemma 2.4 we have

\[ E\bigl(e^{\lambda(X-\mu)}\bigr) \le \exp\Bigl( \frac{J+1}{12(J-1)} \bigl(e^{\lambda d} - \lambda d - 1\bigr) \Bigr). \]

Thus,

\[ \exp\Bigl( \frac{J+1}{12(J-1)} \bigl(e^{\lambda d} - \lambda d - 1\bigr) \Bigr) = \exp\Bigl( \frac{J+1}{12(J-1)} \sum_{j=2}^{\infty} \frac{\lambda^j d^{\,j}}{j!} \Bigr) = \exp\Bigl( \frac{(J+1)(b-a)^2}{12(J-1)} \, \lambda^2 \sum_{j=2}^{\infty} \frac{\lambda^{j-2} d^{\,j-2}}{j!} \Bigr) = \exp\bigl(\beta \lambda^2 \, r(\lambda, d)\bigr). \qquad \Box \]
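As a quick numerical sanity check (our illustration, with assumed grid parameters), one can compare the two sides of Lemma 2.4 for the discrete uniform pmf, which attains the variance bound defining $V_{a,b,J}$:

```python
import numpy as np

# Compare E[e^{lambda Z}] with exp(alpha*(e^{lambda d} - lambda d - 1))
# for the discrete uniform pmf on J = 11 points spanning [0, 1].
a, b, J = 0.0, 1.0, 11
x = np.linspace(a, b, J)
pmf = np.full(J, 1.0 / J)
mu, d = pmf @ x, b - a
alpha = (J + 1) / (12 * (J - 1))
for lam in (0.5, 1.0, 2.0):
    mgf = pmf @ np.exp(lam * (x - mu))
    bound = np.exp(alpha * (np.exp(lam * d) - lam * d - 1))
    print(f"lambda={lam}: {mgf:.4f} <= {bound:.4f}")
```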

The following lemma uses an argument similar to Theorem 2.2 of Serfling's (1974) paper.

Lemma 2.6 Let $\Omega_N = \{x_1, x_2, \ldots, x_N\}$, where each $x_k$ is in $\Omega_{a,b,J}$, and let $\mu = \sum_{k=1}^{N} x_k / N$. If the probability function of $X_1$ and the conditional probability functions of $X_k \mid X_1, X_2, \ldots, X_{k-1}$, $k = 2, \ldots, n$, are in $V_{a,b,J}$, then for any $\lambda \ge 0$ and any $n \in \{1, 2, \ldots, N\}$ we have

\[ E\bigl(e^{\lambda(S_n - n\mu)}\bigr) \le \exp\Bigl( n(1-g_n) \, \frac{J+1}{12(J-1)} \bigl(e^{\lambda d} - \lambda d - 1\bigr) \Bigr), \]

where $S_n = \sum_{k=1}^{n} X_k$.

Proof. For $\lambda = 0$ the result is obvious. Given $\lambda > 0$, let

\[ \lambda_k = \frac{N-n}{N-k}\,\lambda \quad \text{for } 1 \le k \le n. \]

Notice that $\lambda_k$ increases up to $\lambda$ as $k$ goes from $1$ to $n$. Because $X_k - \mu \mid X_{k-1}, \ldots, X_1$ and $X_1$ have probability functions in $V_{a,b,J}$, letting $\mu_k$ denote the conditional expectation of $X_k - \mu$ given $X_{k-1}, \ldots, X_1$, we have by Corollary 2.5 that

\[ E\bigl( \exp(\lambda_k (X_k - \mu - \mu_k)) \mid X_{k-1}, \ldots, X_1 \bigr) \le \exp\bigl(\beta \lambda_k^2 \, r(\lambda_k, d)\bigr) \le \exp\bigl(\beta \lambda_k^2 \, r(\lambda, d)\bigr), \tag{2.2} \]

because $\lambda_k \le \lambda$. Using the recursion of Corollary 6 (see the Appendix) and the fact that $S_{k-1}$ is fixed given $X_{k-1}, \ldots, X_1$, we obtain

\[ E\bigl( \exp(\lambda_k (S_k - k\mu)) \bigr) = E\Bigl( \exp\bigl(\lambda_{k-1}(S_{k-1} - (k-1)\mu)\bigr) \, E\bigl( \exp(\lambda_k (X_k - \mu - \mu_k)) \mid X_{k-1}, \ldots, X_1 \bigr) \Bigr). \tag{2.3} \]

Combining (2.3) with (2.2), we have

\[ E\bigl( \exp(\lambda_k (S_k - k\mu)) \bigr) \le E\bigl( \exp(\lambda_{k-1}(S_{k-1} - (k-1)\mu)) \bigr) \exp\bigl(\beta \lambda_k^2 \, r(\lambda, d)\bigr). \tag{2.4} \]

Recursively applying (2.4) we get

\[ E\bigl( \exp(\lambda_n (S_n - n\mu)) \bigr) \le \exp\bigl(\lambda^2 \Delta_n \beta \, r(\lambda, d)\bigr), \tag{2.5} \]

where

\[ \Delta_n = \sum_{k=1}^{n} \Bigl( \frac{N-n}{N-k} \Bigr)^2 = 1 + (N-n)^2 \sum_{k=1}^{n-1} \frac{1}{(N-k)^2}. \]

Next, using $\Delta_n \le n(1-g_n)$ in (2.5), we get

\[ E\bigl( \exp(\lambda_n (S_n - n\mu)) \bigr) \le \exp\bigl( \lambda^2 n(1-g_n)\beta \, r(\lambda, d) \bigr). \]

Since $\lambda_n = \lambda$, replacing $r(\lambda, d)$ with $(e^{\lambda d} - \lambda d - 1)/(\lambda^2 d^2)$ gives

\[ E\bigl( e^{\lambda(S_n - n\mu)} \bigr) \le \exp\Bigl( n(1-g_n)\, \frac{J+1}{12(J-1)} \bigl(e^{\lambda d} - \lambda d - 1\bigr) \Bigr). \qquad \Box \]

Theorem 2.7 Let $X_k$, $S_k$, and $\mu$ be as in Lemma 2.6. Then for any $\varepsilon > 0$, with $\lambda^*$ as given in (2.7) below, we have

\[ E\bigl( \exp(\lambda^*(S_n - n\mu - n\varepsilon)) \bigr) \le \exp\Bigl\{ -n \Bigl[ \frac{d(1-g_n)(J+1) + 12(J-1)\varepsilon}{12(J-1)d} \, \ln\Bigl( \frac{12(J-1)\varepsilon}{(1-g_n)(J+1)d} + 1 \Bigr) - \frac{\varepsilon}{d} \Bigr] \Bigr\}. \]

Proof. First note that by Lemma 2.6 we have, for any $\lambda > 0$,

\[ E\bigl( \exp(\lambda(S_n - n\mu - n\varepsilon)) \bigr) = e^{-\lambda n \varepsilon} \, E\bigl( e^{\lambda(S_n - n\mu)} \bigr) \le \exp\bigl( \alpha(e^{\lambda d} - \lambda d - 1) - \lambda n \varepsilon \bigr), \tag{2.6} \]

where $\alpha = n(1-g_n)\dfrac{J+1}{12(J-1)}$. In terms of $\lambda$, the right-hand side of (2.6) is minimized when $g(\lambda) = \alpha(e^{\lambda d} - \lambda d - 1) - \lambda n \varepsilon$ is minimized, which occurs at

\[ \lambda^* = \frac{1}{d} \ln\Bigl( \frac{n\varepsilon}{\alpha d} + 1 \Bigr). \tag{2.7} \]

Substituting the value $\lambda^*$ from (2.7) into (2.6), we get

\[ \exp\bigl( \alpha(e^{\lambda^* d} - \lambda^* d - 1) - \lambda^* n\varepsilon \bigr) = \exp\Bigl\{ \frac{n\varepsilon}{d} - \Bigl( \alpha + \frac{n\varepsilon}{d} \Bigr) \ln\Bigl( \frac{n\varepsilon}{\alpha d} + 1 \Bigr) \Bigr\}, \]

since $e^{\lambda^* d} - 1 = n\varepsilon/(\alpha d)$. Substituting for $\alpha$, we get

\[ = \exp\Bigl\{ -n \Bigl[ \frac{d(1-g_n)(J+1) + 12(J-1)\varepsilon}{12(J-1)d} \, \ln\Bigl( \frac{12(J-1)\varepsilon}{(1-g_n)(J+1)d} + 1 \Bigr) - \frac{\varepsilon}{d} \Bigr] \Bigr\}. \qquad \Box \]

2.2 Main Result

We now give the main result of the paper in the following theorem.

Theorem 2.8 For any $\varepsilon > 0$ we have

\[ R_n(\varepsilon) := P\Bigl( \max_{n \le k \le N} \frac{S_k - k\mu}{k} \ge \varepsilon \Bigr) \le \exp\Bigl\{ -n \Bigl[ \frac{d(1-g_n)(J+1) + 12(J-1)\varepsilon}{12(J-1)d} \, \ln\Bigl( \frac{12(J-1)\varepsilon}{(1-g_n)(J+1)d} + 1 \Bigr) - \frac{\varepsilon}{d} \Bigr] \Bigr\}. \]

Proof. Since $(S_k - k\mu)/k$, $n \le k \le N$, is a reverse martingale (see Proposition 5 in the Appendix), Proposition 4 of the Appendix gives, for any $\lambda > 0$,

\[ R_n(\varepsilon) \le e^{-\lambda\varepsilon} \, E \exp\Bigl( \lambda \, \frac{S_n - n\mu}{n} \Bigr) = E \exp\Bigl( \frac{\lambda}{n} (S_n - n\mu - n\varepsilon) \Bigr). \]

Taking $\lambda = n\lambda^*$, with $\lambda^*$ as in (2.7), and applying Theorem 2.7 yields the stated bound. $\Box$

As per (1.3) we have $P_n(\varepsilon) = P(S_n - n\mu \ge n\varepsilon)$. Noting that $P_n(\varepsilon) \le R_n(\varepsilon)$ and applying Theorem 2.8, we get the following corollary.

Corollary 2.9 For any $\varepsilon > 0$ we have

\[ P_n(\varepsilon) \le \exp\Bigl\{ -n \Bigl[ \frac{d(1-g_n)(J+1) + 12(J-1)\varepsilon}{12(J-1)d} \, \ln\Bigl( \frac{12(J-1)\varepsilon}{(1-g_n)(J+1)d} + 1 \Bigr) - \frac{\varepsilon}{d} \Bigr] \Bigr\}. \]

From (1.7) we have $R_n^*(\varepsilon) = P\bigl( \max_{n \le k \le N} |S_k - k\mu|/k \ge \varepsilon \bigr)$. Observe that

\[ R_n^*(\varepsilon) \le P\Bigl( \max_{n \le k \le N} \frac{S_k - k\mu}{k} \ge \varepsilon \Bigr) + P\Bigl( \max_{n \le k \le N} \frac{-(S_k - k\mu)}{k} \ge \varepsilon \Bigr). \]

Applying Theorem 2.8 to each term on the right, we get the following corollary.

Corollary 2.10 For any $\varepsilon > 0$,

\[ R_n^*(\varepsilon) \le 2\exp\Bigl\{ -n \Bigl[ \frac{d(1-g_n)(J+1) + 12(J-1)\varepsilon}{12(J-1)d} \, \ln\Bigl( \frac{12(J-1)\varepsilon}{(1-g_n)(J+1)d} + 1 \Bigr) - \frac{\varepsilon}{d} \Bigr] \Bigr\}. \]
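The bound of Corollaries 2.9 and 2.10 is straightforward to evaluate numerically. A small sketch of ours (the function name is an assumption) computing the right-hand side:

```python
import numpy as np

def corollary_2_9_bound(n, N, J, d, eps):
    """Right-hand side of Corollary 2.9: an upper bound on
    P(S_n - n*mu >= n*eps) under the V_{a,b,J} assumption, where
    d = b - a and g_n = (n - 1)/N.  Corollary 2.10's two-sided
    bound is twice this value."""
    g = (n - 1) / N
    coef = (d * (1 - g) * (J + 1) + 12 * (J - 1) * eps) / (12 * (J - 1) * d)
    inner = 12 * (J - 1) * eps / ((1 - g) * (J + 1) * d)
    return np.exp(-n * (coef * np.log(inner + 1) - eps / d))
```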

3 An Application

Corollaries 2.9 and 2.10 give parameter-free maximal inequalities which are sharper than Serfling's (1974) inequalities given in (1.6) and (1.7), respectively. This fact is not surprising, of course, because we utilize the additional assumption that $X_1, X_2, \ldots, X_n$ belong to the set of probability distributions whose variance is bounded by the variance of the discrete uniform distribution, as described in Definition 2.1. However, the extent of the improvement can be substantial, as demonstrated by the following example.

Consider the following scenario, which is similar to an example presented by Savage (1961). Suppose one wishes to study the average height of a finite population of people. Assume that all individuals in the population are between 60 and 78 inches tall and that their heights are measured to the nearest inch. The question we wish to answer is, "What is the probability that the average height of a sample of 100 from a population of 4,000 individuals is within two inches of the population mean height?"

One possible solution is to apply the Bienaymé–Chebyshev inequality given in (1.5), where $\sigma^2$ is replaced by the maximum possible variance of distributions with support on $\Omega_{60,78,19}$, which for this problem is 81. This solution gives

\[ P(\bar{X} - \mu \ge 2) \le 0.1975. \tag{3.1} \]

A second possible solution is to apply the probability inequality (1.6) given by Serfling (1974), which assumes only finite support of a discrete random variable. For our example, (1.6) yields

\[ P(\bar{X} - \mu \ge 2) \le 0.0795. \tag{3.2} \]

If we make the additional and, in this case, reasonable assumption that the probability functions of $X_1, X_2, \ldots, X_n$ are from $V_{60,78,19}$, then we may apply Chebyshev's inequality with the variance bound given in Definition 2.1 (which here equals 30). This method yields

\[ P(\bar{X} - \mu \ge 2) \le 0.0731. \tag{3.3} \]

Applying the newly derived inequality given in Corollary 2.9, we get

\[ P(\bar{X} - \mu \ge 2) \le 0.0064. \tag{3.4} \]

Clearly, inequality (3.4) not only yields a better bound than inequalities (3.1), (3.2), and (3.3), but considerably increases the degree of improvement. (A code sketch reproducing these four values follows the figure caption below.)

As an extension of the above example, let the amount of error $\varepsilon$ between $\bar{X}$ and $\mu$ be arbitrary, but retain all other values in the example. Figure 1 demonstrates the sharpening of Serfling's (1974) inequality (1.6) by the inequality given in Corollary 2.9 across values of $\varepsilon$ from 0 to 3.

[Figure 1. Comparison of our bound (OB) to Serfling's bound (SB) over values of $\varepsilon$ from 0 to 3.]
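A short sketch of ours, with the example's stated parameters, that reproduces the four bounds above:

```python
import numpy as np

# Height example: N = 4000, n = 100, grid of J = 19 inches from 60 to 78, eps = 2.
N, n, eps = 4000, 100, 2.0
J, d = 19, 18.0
f_n, g_n = (n - 1) / (N - 1), (n - 1) / N

cheby_max = (1 - f_n) * (d**2 / 4) / (n * eps**2)           # (3.1): sigma^2 <= 81
serfling = np.exp(-2 * n * eps**2 / ((1 - g_n) * d**2))     # (3.2): inequality (1.6)
cheby_V = (1 - f_n) * ((J + 1) / (J - 1) * d**2 / 12) / (n * eps**2)  # (3.3)
coef = (d * (1 - g_n) * (J + 1) + 12 * (J - 1) * eps) / (12 * (J - 1) * d)
inner = 12 * (J - 1) * eps / ((1 - g_n) * (J + 1) * d)
new = np.exp(-n * (coef * np.log(inner + 1) - eps / d))     # (3.4): Corollary 2.9
print(f"{cheby_max:.4f} {serfling:.4f} {cheby_V:.4f} {new:.4f}")
# -> 0.1975 0.0795 0.0731 0.0064
```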

Acknowledgements

The authors would like to thank the anonymous referee for the careful review and helpful comments pertaining to this article.

References

Dharmadhikari, S., & Joag-Dev, K. (1989). Upper bounds for the variances of certain random variables. Communications in Statistics - Theory and Methods, 18.

Feller, W. (1966). An Introduction to Probability Theory and Its Applications (Vol. 2). New York: Wiley.

Gray, H., & Odell, P. (1967). On least favorable density functions. SIAM Review, 9.

Jacobson, H. (1969). The maximum variance of restricted unimodal distributions. Annals of Mathematical Statistics, 40.

Klaassen, C. (1985). On an inequality of Chernoff. Annals of Probability, 13.

Moors, J., & Muilwijk, J. (1971). Note on a theorem of M. N. Murthy and V. K. Sethi. Sankhyā, 33.

Rayner, J. (1975). Variance bounds. Sankhyā, 37.

Savage, I. R. (1961). Probability inequalities of the Tchebycheff type. Journal of Research of the National Bureau of Standards - B, 65.

Seaman, J., Young, D., & Turner, D. (1987). On the variance of certain bounded random variables. The Mathematical Scientist, 12.

Serfling, R. J. (1974). Probability inequalities for the sum in sampling without replacement. The Annals of Statistics, 2.

Appendix

Here we present some results that are mostly standard but are compiled here for ease of reference. We begin with a brief review of martingales, including some results pertinent to this work.

1 Martingales

In the proofs of the results of this paper we need a few facts about submartingales, in particular reverse submartingales. For a more in-depth treatment of this subject, see Feller (1966). For our purposes we use the following definitions.

Definition 1 Let $\{Z_k\}$ be a sequence of random variables such that $E|Z_k| < \infty$. We say $\{Z_k\}$ is a martingale if $E(Z_k \mid Z_{k-1}, Z_{k-2}, \ldots) = Z_{k-1}$ for all $k$, and a submartingale if $E(Z_k \mid Z_{k-1}, Z_{k-2}, \ldots) \ge Z_{k-1}$ for all $k$. We say $\{Z_k\}$ is a reverse martingale if $E(Z_k \mid Z_{k+1}, Z_{k+2}, \ldots) = Z_{k+1}$ for all $k$, and a reverse submartingale if $E(Z_k \mid Z_{k+1}, Z_{k+2}, \ldots) \ge Z_{k+1}$ for all $k$.

We now present several results which exploit these properties.

Proposition 2 Let $\{Z_k\}_{k=1}^{N}$ be a non-negative reverse submartingale, i.e., $E(Z_k \mid Z_{k+1}, \ldots, Z_N) \ge Z_{k+1}$. Then for any $c \ge 0$ and $1 \le n \le N$ we have

\[ c \, P\Bigl( \max_{n \le k \le N} Z_k \ge c \Bigr) \le E(Z_n). \]

Proof. Let $F = \{\max_{n \le k \le N} Z_k \ge c\}$. Then $F$ can be expressed as the disjoint union of

\[ F_N = \{Z_N \ge c\}, \]
\[ F_{N-1} = \{Z_N < c\} \cap \{Z_{N-1} \ge c\}, \]
\[ F_{N-2} = \{Z_N < c\} \cap \{Z_{N-1} < c\} \cap \{Z_{N-2} \ge c\}, \]
\[ \vdots \]
\[ F_n = \{Z_N < c\} \cap \cdots \cap \{Z_{n+1} < c\} \cap \{Z_n \ge c\}. \]

Now observe, for each $k = n, \ldots, N$, that $F_k$ is determined by $Z_k, \ldots, Z_N$, so

\[ E(Z_n; F_k) = E\bigl( E(Z_n \mid Z_k, \ldots, Z_N); F_k \bigr) \ge E(Z_k; F_k) \ge c\,P(F_k), \]

where the first inequality holds since $\{Z_k\}$ is a reverse submartingale and the second since $Z_k \ge c$ on $F_k$. Summing over all $k$ we get

\[ \sum_{k=n}^{N} E(Z_n; F_k) \ge \sum_{k=n}^{N} c\,P(F_k), \]

or, equivalently, $E(Z_n; F) \ge c\,P(F)$, since $F = \bigcup_{k=n}^{N} F_k$. Therefore,

\[ E(Z_n) \ge E(Z_n; F) \ge c\,P(F) = c\,P\Bigl( \max_{n \le k \le N} Z_k \ge c \Bigr), \]

which yields the desired result. $\Box$

Remark 3 If $\{Z_k\}$ is a reverse martingale, then Jensen's inequality immediately gives that $\{e^{\lambda Z_k}\}$ is a reverse submartingale for any $\lambda > 0$.
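For intuition, here is a Monte Carlo sketch of ours (the toy population and parameters are assumptions) of the maximal inequality that Remark 3 and Proposition 4 below combine to give, using the reverse martingale $Z_k = (S_k - k\mu)/k$ from the proof of Theorem 2.8:

```python
import numpy as np

rng = np.random.default_rng(0)
pop = np.repeat(np.arange(60, 79), 200).astype(float)   # toy height population
N, n, lam, eps = len(pop), 100, 5.0, 1.0
mu = pop.mean()

reps, hits, mgf = 2000, 0, 0.0
for _ in range(reps):
    S = np.cumsum(rng.permutation(pop))       # one full draw without replacement
    k = np.arange(1, N + 1)
    Z = (S - k * mu) / k                      # reverse martingale in k
    hits += Z[n - 1:].max() >= eps            # event {max_{n<=k<=N} Z_k >= eps}
    mgf += np.exp(lam * Z[n - 1])             # accumulates E e^{lam Z_n}
print(hits / reps, "<=", (mgf / reps) * np.exp(-lam * eps))
```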

Proposition 4 Let $\{Z_k\}_{k=1}^{N}$ be a reverse martingale, i.e., $E(Z_k \mid Z_{k+1}, Z_{k+2}, \ldots) = Z_{k+1}$. For any $\varepsilon > 0$ and $\lambda > 0$ we have

\[ P\Bigl( \max_{n \le k \le N} Z_k \ge \varepsilon \Bigr) \le \frac{E(e^{\lambda Z_n})}{e^{\lambda \varepsilon}}. \]

Proof. By Remark 3, $\{e^{\lambda Z_k}\}$ is a reverse submartingale. Using $c = e^{\lambda \varepsilon}$ in Proposition 2, we have

\[ P\Bigl( \max_{n \le k \le N} e^{\lambda Z_k} \ge e^{\lambda \varepsilon} \Bigr) \le \frac{E(e^{\lambda Z_n})}{e^{\lambda \varepsilon}}. \tag{A.1} \]

Combining (A.1) with the fact that

\[ P\Bigl( \max_{n \le k \le N} e^{\lambda Z_k} \ge e^{\lambda \varepsilon} \Bigr) = P\Bigl( \max_{n \le k \le N} Z_k \ge \varepsilon \Bigr), \]

we get the desired result. $\Box$

2 Finite Population Drawn Without Replacement

Here we present some results which aid us in the sharpening of inequalities (1.6) and (1.7). These results were first used, without proof, in Serfling's (1974) paper; we give proofs here for the sake of completeness. We work under the same set-up as presented in Section 1. That is, $\Omega_N = \{x_1, x_2, \ldots, x_N\}$ is a finite population of size $N$, the members of which are not necessarily distinct; $X_1, X_2, \ldots, X_n$ denote the values of a sample of size $n$ drawn without replacement from $\Omega_N$; $S_k$ is the sum of the first $k$ sampled values, as in (1.2); and $\mu$ is as in (1.1).

Proposition 5 Let $\mu_k = E(X_k - \mu \mid X_1, X_2, \ldots, X_{k-1})$. Then

\[ \mu_{k+1} = -\frac{S_k - k\mu}{N - k}. \tag{A.2} \]

Proof. For $k = 1, 2, \ldots, N-1$, let

\[ T_k = \frac{S_k - k\mu}{N - k} \qquad \text{and} \qquad T_k^* = \frac{S_k - k\mu}{k}. \]

One can easily check that $T_k^*$ is a reverse martingale and $T_k$ is a martingale (Serfling, 1974). That is,

\[ E(T_k^* \mid T_{k+1}^*, \ldots, T_{N-1}^*) = T_{k+1}^*, \quad 1 \le k \le N-2, \]

and

\[ E(T_k \mid T_{k-1}, \ldots, T_1) = T_{k-1}, \quad 2 \le k \le N-1. \]

To prove (A.2), note that by the martingale property of $T_k$ (and because conditioning on $T_k, \ldots, T_1$ is the same as conditioning on $X_k, \ldots, X_1$),

\[ T_k = E(T_{k+1} \mid T_k, \ldots, T_1) = E\Bigl( \frac{S_{k+1} - (k+1)\mu}{N - k - 1} \Bigm| T_k, \ldots, T_1 \Bigr) \]
\[ = \frac{1}{N - k - 1} \, E\bigl( (X_{k+1} - \mu) + (S_k - k\mu) \bigm| T_k, \ldots, T_1 \bigr) \]
\[ = \frac{1}{N - k - 1} \Bigl[ E(X_{k+1} - \mu \mid X_k, \ldots, X_1) + (N - k) T_k \Bigr] = \frac{1}{N - k - 1} \bigl[ \mu_{k+1} + (N - k) T_k \bigr]. \]

Thus,

\[ T_k = \frac{\mu_{k+1}}{N - k - 1} + \frac{(N - k) T_k}{N - k - 1}. \tag{A.3} \]

Multiplying both sides of (A.3) by $N - k - 1$, we get

\[ (N - k - 1) T_k = \mu_{k+1} + (N - k) T_k, \]

or $\mu_{k+1} = -T_k = -(S_k - k\mu)/(N - k)$. Therefore (A.2) holds. $\Box$

The next corollary utilizes a nonobvious recursive relationship between the $S_k$'s.

Corollary 6 For a fixed integer $n$ and any $\lambda > 0$, let $\lambda_k = \dfrac{N-n}{N-k}\,\lambda$ for $1 \le k \le n$. Then

\[ \lambda_k (S_k - k\mu) = \lambda_{k-1} \bigl( S_{k-1} - (k-1)\mu \bigr) + \lambda_k (X_k - \mu - \mu_k) \tag{A.4} \]

for $k = 2, 3, \ldots, n$.

Proof. Using (A.2) in the form $\mu_k = -(S_{k-1} - (k-1)\mu)/(N - k + 1)$, we have

\[ \lambda_{k-1} \bigl( S_{k-1} - (k-1)\mu \bigr) + \lambda_k (X_k - \mu - \mu_k) \]
\[ = \lambda \frac{N-n}{N-k+1} \bigl( S_{k-1} - (k-1)\mu \bigr) + \lambda \frac{N-n}{N-k} \Bigl( X_k - \mu + \frac{S_{k-1} - (k-1)\mu}{N-k+1} \Bigr) \]
\[ = \lambda (N-n) \Bigl[ \bigl( S_{k-1} - (k-1)\mu \bigr) \Bigl( \frac{1}{N-k+1} + \frac{1}{(N-k)(N-k+1)} \Bigr) + \frac{X_k - \mu}{N-k} \Bigr] \]
\[ = \lambda \frac{N-n}{N-k} \bigl[ S_{k-1} - (k-1)\mu + X_k - \mu \bigr] = \lambda_k (S_k - k\mu), \]

since $\frac{1}{N-k+1} + \frac{1}{(N-k)(N-k+1)} = \frac{1}{N-k}$. This yields (A.4). $\Box$

Copyrights

Copyright for this article is retained by the authors, with first publication rights granted to the journal. This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license.
