THE LINDEBERG-FELLER CENTRAL LIMIT THEOREM VIA ZERO BIAS TRANSFORMATION


JAINUL VAGHASIA

Contents
1. Introduction
2. Notations
3. Background in Probability Theory
3.1. Expectation and Variance
3.2. Convergence
4. Zero Bias Transformation
5. Setting up the conditions
6. The Lindeberg-Feller Central Limit Theorem
7. The Feller-Lévy Partial Converse
8. Cycles in a Random Permutation: Application of Lindeberg-Feller CLT
References

1. Introduction

In 1738, Abraham de Moivre published his book The Doctrine of Chances, in which he provided techniques for solving gambling problems using a version of the Central Limit Theorem for Bernoulli trials with $p = 1/2$. Roughly a century later, Pierre-Simon Laplace published an extension of this result, although not a rigorous one, to $p \neq 1/2$ in his work Théorie Analytique des Probabilités. A more general statement of the Central Limit Theorem appeared only in 1922, when Lindeberg showed that the random variables in the sequence need not be identically distributed, provided they satisfy a certain condition now known as the Lindeberg condition. Feller explained that the Lindeberg condition requires that the individual variances be due mainly to masses in an interval whose length is small in comparison to the overall variance. In his paper [3], Goldstein offers an equivalent, seemingly simpler condition through Stein's zero bias transformation, introduced in [4]. As a result of this equivalence, the condition can be used to prove the Lindeberg-Feller Central Limit Theorem and its partial converse (due independently to Feller and Lévy). This paper outlines the properties of the zero bias transformation and describes its role in the proof of the Lindeberg-Feller Central Limit Theorem and its Feller-Lévy converse. For completeness, we shall also offer an application of the Central Limit Theorem, via the small zero bias condition, to the number of cycles in a random permutation chosen uniformly from all permutations of the set $\{1, 2, \ldots, n\}$, and show that this number is asymptotically Normal in $n$.

Date: June 3, 2018.

2. Notations

This paper uses the following notations.
We write $X \sim P$ to mean that $X$ has distribution $P$.
$Y_n \to_d Y$ means convergence in distribution.
$Y_n \to_p Y$ means convergence in probability.
$E$ denotes expected value and $\mathrm{Var}$ denotes variance, while $P$ is the probability.
$\mathbf{1}$ denotes the indicator function.
$X^*$ denotes the random variable with the $X$-zero biased distribution.

3. Background in Probability Theory

We begin by highlighting some elementary facts and definitions that will be used later in this paper. Let $\Omega$ be a sample space with a probability distribution (also called a probability measure) $P$. A random variable is a map $X : \Omega \to \mathbb{R}$. We write
$$P(X \in A) = P(\{\omega \in \Omega : X(\omega) \in A\}).$$

3.1. Expectation and Variance.

Definition 3.1 (Expected value). The expected value of $g(X)$ is
$$E(g(X)) = \int g(x)\, dP(x) = \begin{cases} \int g(x)\, p(x)\, dx & \text{if } X \text{ is continuous}, \\ \sum_j g(x_j)\, p(x_j) & \text{if } X \text{ is discrete}. \end{cases}$$
It has the following properties:
(1) Linearity: $E\left(\sum_{j=1}^k c_j g_j(X)\right) = \sum_{j=1}^k c_j E(g_j(X))$.
(2) If $X_1, \ldots, X_n$ are independent, then $E\left(\prod_{i=1}^n X_i\right) = \prod_{i=1}^n E(X_i)$.

Definition 3.2 (Variance). The variance is
$$\sigma^2 = \mathrm{Var}(X) = E\big((X - \mu)^2\big), \quad \text{where } \mu = E(X).$$
It has the following properties:
(1) $\mathrm{Var}(X) = E(X^2) - \mu^2$.
(2) If $X_1, \ldots, X_k$ are independent, then $\mathrm{Var}\left(\sum_{j=1}^k a_j X_j\right) = \sum_{j=1}^k a_j^2 \mathrm{Var}(X_j)$.
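As a quick sanity check of property (2) of Definition 3.2, the following Python sketch (our addition, not part of the original paper) estimates both sides by Monte Carlo; the three distributions and the coefficients $a_j$ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
a = np.array([2.0, -1.0, 0.5])

# Three independent, non-identical, mean-zero samples (arbitrary choices).
X = np.stack([rng.exponential(1.0, n) - 1.0,
              rng.uniform(-1.0, 1.0, n),
              rng.standard_normal(n)])

S = a @ X  # realizations of S = sum_j a_j X_j

# Empirical Var(S) versus sum_j a_j^2 Var(X_j); the two agree
# up to Monte Carlo error.
print(S.var(), sum(a[j]**2 * X[j].var() for j in range(3)))
```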

3.2. Convergence.

Definition 3.3 (Convergence in distribution). A sequence of random variables $Y_n$ is said to converge in distribution to $Y$ if
$$\lim_{n\to\infty} P(Y_n \le x) = P(Y \le x)$$
for all continuity points $x$ of the map $x \mapsto P(Y \le x)$.

Definition 3.4 (Convergence in probability). A sequence of random variables $Y_n$ converges in probability to $Y$ if
$$\lim_{n\to\infty} P(|Y_n - Y| \ge \epsilon) = 0 \quad \text{for all } \epsilon > 0.$$

We will also need the following equivalence theorems relating convergence of expectations to convergence in distribution.

Theorem 3.5. $Y_n \to_d Y$ implies $\lim_{n\to\infty} E(h(Y_n)) = E(h(Y))$ for all $h \in C_b$, where $C_b$ is the collection of all bounded, continuous functions.

Since the collection $C_{c,0}^\infty$ of all infinitely differentiable functions with compact support which integrate to zero satisfies $C_{c,0}^\infty \subset C_b$, Theorem 3.5 holds for $C_{c,0}^\infty$. The following converse holds for $C_{c,0}^\infty$ as well.

Theorem 3.6. If
$$\lim_{n\to\infty} E(h(Y_n)) = E(h(Y))$$
for all $h \in C_{c,0}^\infty$, then $Y_n \to_d Y$.

The proofs of Theorems 3.5 and 3.6 are too involved to be worth the digression, and therefore shall not be presented here.

4. Zero Bias Transformation

Definition 4.1. Let $X$ be a random variable with mean zero and finite, nonzero variance $\sigma^2$. We say that $X^*$ has the $X$-zero biased distribution if for all differentiable $f$ for which these expectations exist,
$$\sigma^2 E(f'(X^*)) = E(X f(X)).$$

In [3], the proof of existence of the zero biased distribution for any such $X$ is omitted for simplicity. We provide a brief idea of the argument here. For any given continuous function $g$ with compact support, let $G(x) = \int_0^x g(t)\, dt$. Then $Tg = \sigma^{-2} E(X G(X))$ exists since the variance $E(X^2) < \infty$. Note that $T$ is linear, as inherited from the linearity of expectation. Now take $g \ge 0$, so that $G$ is increasing, and therefore $X$ and $G(X)$ are positively correlated. Hence
$$Tg = \sigma^{-2} E(X G(X)) \ge \sigma^{-2} E(X)\, E(G(X)) = 0.$$
This shows that $T$ is a positive linear functional. The existence of a unique probability measure $\nu$ with $Tg = \int g\, d\nu$ is then obtained from the Riesz-Markov-Kakutani representation theorem (see e.g. [2]; [5] provides a category-theoretic proof).
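As a numerical illustration of Definition 4.1 (a sketch we add here, not from [3]), one can check the identity $\sigma^2 E(f'(X^*)) = E(X f(X))$ for a centered Bernoulli variable, whose zero biased distribution is uniform on $[-p, 1-p]$; this fact is derived in Section 8. The test function $f$ below is an arbitrary smooth choice.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 10**6, 0.3
sigma2 = p * (1 - p)

# X = B - p is a centered Bernoulli(p); its zero biased distribution
# is uniform on [-p, 1-p] (derived in Section 8).
X = rng.binomial(1, p, N) - p
Xstar = rng.uniform(-p, 1 - p, N)

f = lambda x: np.sin(x) + x**3           # arbitrary smooth test function
fprime = lambda x: np.cos(x) + 3 * x**2

# sigma^2 E[f'(X*)] versus E[X f(X)]: the two estimates agree up to
# Monte Carlo error, as the zero bias identity demands.
print(sigma2 * fprime(Xstar).mean(), (X * f(X)).mean())
```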

Next, we show that the zero bias transformation enjoys the following continuity property. This will be instrumental later in proving the Feller-Lévy converse.

Theorem 4.2. Let $Y$ and $Y_n$, $n = 1, 2, \ldots$, be mean zero random variables with finite, nonzero variances $\sigma^2 = \mathrm{Var}(Y)$ and $\sigma_n^2 = \mathrm{Var}(Y_n)$, respectively. If $Y_n \to_d Y$ and $\lim_{n\to\infty} \sigma_n^2 = \sigma^2$, then $Y_n^* \to_d Y^*$.

Proof. We shall use the alternative characterization of convergence in distribution provided by Theorems 3.5 and 3.6. Let $f \in C_{c,0}^\infty$ and $F(y) = \int_{-\infty}^y f(t)\, dt$. Since $Y$ and $Y_n$ have mean zero and finite variances, their zero bias distributions exist. In particular,
$$\sigma_n^2\, E(f(Y_n^*)) = E[Y_n F(Y_n)] \quad \text{for all } n.$$
By Theorem 3.5, since $y F(y)$ is in $C_b$, we obtain
$$\sigma^2 \lim_{n\to\infty} E(f(Y_n^*)) = \lim_{n\to\infty} E[Y_n F(Y_n)] = E[Y F(Y)] = \sigma^2 E(f(Y^*)).$$
Hence $E(f(Y_n^*)) \to E(f(Y^*))$ for all $f \in C_{c,0}^\infty$, so $Y_n^* \to_d Y^*$ by Theorem 3.6. □

The main utility of the zero bias transformation is that the Normal distribution is its unique fixed point. This is stated formally below.

Theorem 4.3. Let $X$ be a mean zero random variable with nonzero, finite variance $\sigma^2$. Then $X$ has distribution $\mathcal{N}(0, \sigma^2)$ if and only if
$$\sigma^2 E(f'(X)) = E(X f(X))$$
for all absolutely continuous functions $f$ for which these expectations exist.

Proof. For simplicity we take $\sigma^2 = 1$; the general case is identical after scaling. Recall the probability density function of the standard normal distribution:
$$\varphi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}.$$
We evaluate $E(f'(X))$ using integration by parts:
$$E(f'(X)) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f'(x)\, e^{-x^2/2}\, dx = \frac{1}{\sqrt{2\pi}} \Big[ f(x)\, e^{-x^2/2} \Big]_{-\infty}^{\infty} + \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x f(x)\, e^{-x^2/2}\, dx = E(X f(X)),$$
as claimed. Since the solution to the corresponding differential equation is unique up to an additive constant, the characterization holds precisely for the family of normal distributions. □
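Theorem 4.3 says in particular that $Z \sim \mathcal{N}(0,1)$ satisfies $E(f'(Z)) = E(Z f(Z))$, that is, $Z^* =_d Z$. Here is a minimal Monte Carlo check of this characterizing identity (our sketch; the test function is an arbitrary bounded, absolutely continuous choice).

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.standard_normal(10**6)

f = lambda x: np.tanh(x)                 # bounded, absolutely continuous
fprime = lambda x: 1.0 / np.cosh(x)**2

# For the standard normal, E[f'(Z)] = E[Z f(Z)]; equivalently Z* = Z
# in distribution, so Z is the fixed point of zero biasing.
print(fprime(Z).mean(), (Z * f(Z)).mean())
```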

This property can be exploited to argue that a distribution which gets mapped to one nearby is close to being a fixed point of the transformation. Hence, normal approximation can be applied whenever the distribution of a random variable is close to that of its zero bias transformation. With this in hand, consider the distribution of a sum $W_n$ of comparably sized independent random variables. We will show that $W_n$ can be zero biased by randomly selecting a single summand, chosen with probability proportional to its variance, and replacing it with a random variable of comparable size. This implies that $W_n$ and $W_n^*$ are close, and therefore that $W_n$ is approximately Normal. This is the gist of the proof of the Lindeberg-Feller Central Limit Theorem under the condition that Goldstein calls the small zero bias condition. We shall now formalize this argument.

5. Setting up the conditions

This section serves to list the conditions that go together, in some combination, into the statement of the Lindeberg-Feller Central Limit Theorem.

Condition 5.1. For every $n = 1, 2, \ldots$, the random variables making up the collection $\mathcal{X}_n = \{X_{i,n} : 1 \le i \le n\}$ are independent with mean zero and finite variances $\sigma_{i,n}^2 = \mathrm{Var}(X_{i,n})$, standardized so that $W_n = \sum_{i=1}^n X_{i,n}$ has variance
$$\mathrm{Var}(W_n) = \sum_{i=1}^n \sigma_{i,n}^2 = 1.$$

This condition is non-negotiable: it is assumed alongside both the Lindeberg condition and the small zero bias condition.

Condition 5.2 (Lindeberg condition). For every $\epsilon > 0$, $\lim_{n\to\infty} L_{n,\epsilon} = 0$, where
$$L_{n,\epsilon} = \sum_{i=1}^n E\big[X_{i,n}^2\, \mathbf{1}(|X_{i,n}| \ge \epsilon)\big].$$

As described in the introduction, this condition forces the random variables $X_{i,n}$ to have individual variances small compared to their sum.

Condition 5.3 (Small zero bias condition). Given $\mathcal{X}_n$ satisfying Condition 5.1, let $\mathcal{X}_n^* = \{X_{i,n}^* : 1 \le i \le n\}$ be a collection of random variables such that $X_{i,n}^*$ has the $X_{i,n}$-zero biased distribution and is independent of $\mathcal{X}_n$. Further, let $I_n$ be a random index, independent of $\mathcal{X}_n$ and $\mathcal{X}_n^*$, with distribution

(5.4) $P(I_n = i) = \sigma_{i,n}^2$,

and define

(5.5) $X_{I_n,n} = \sum_{i=1}^n \mathbf{1}(I_n = i)\, X_{i,n}$ and $X_{I_n,n}^* = \sum_{i=1}^n \mathbf{1}(I_n = i)\, X_{i,n}^*$.

The condition requires that $X_{I_n,n}^* \to_p 0$.

Theorem 5.6. Condition 5.2 and Condition 5.3 are equivalent.

Proof. Since the random index $I_n$ is independent of $\mathcal{X}_n$ and $\mathcal{X}_n^*$, we use (5.4) and (5.5) to obtain

(5.7) $E(f(X_{I_n,n})) = \sum_{i=1}^n \sigma_{i,n}^2\, E(f(X_{i,n}))$ and $E(f(X_{I_n,n}^*)) = \sum_{i=1}^n \sigma_{i,n}^2\, E(f(X_{i,n}^*))$.

Showing that Condition 5.2 implies Condition 5.3 relies on the function
$$f(x) = \mathrm{sgn}(x)\,(|x| - \epsilon)\,\mathbf{1}(|x| \ge \epsilon) = \begin{cases} x + \epsilon & \text{if } x \le -\epsilon, \\ 0 & \text{if } -\epsilon < x < \epsilon, \\ x - \epsilon & \text{if } x \ge \epsilon. \end{cases}$$
Notice that $f'(x) = \mathbf{1}(|x| \ge \epsilon)$. Then, using $E(\mathbf{1}_A) = P(A)$ for the first equality and the zero bias relation for the second, we obtain

(5.8) $\sum_{i=1}^n \sigma_{i,n}^2\, P(|X_{i,n}^*| \ge \epsilon) = \sum_{i=1}^n \sigma_{i,n}^2\, E(f'(X_{i,n}^*)) = \sum_{i=1}^n E\big[(X_{i,n}^2 - \epsilon |X_{i,n}|)\,\mathbf{1}(|X_{i,n}| \ge \epsilon)\big].$

Bounding this quantity from above, since $-\epsilon|X_{i,n}| \le 0$,
$$\sum_{i=1}^n \sigma_{i,n}^2\, P(|X_{i,n}^*| \ge \epsilon) \le \sum_{i=1}^n E\big[X_{i,n}^2\, \mathbf{1}(|X_{i,n}| \ge \epsilon)\big] = L_{n,\epsilon}.$$
Applying (5.7) to the indicator function $\mathbf{1}(|x| \ge \epsilon)$ and using $E(\mathbf{1}_A) = P(A)$, we obtain
$$P(|X_{I_n,n}^*| \ge \epsilon) = \sum_{i=1}^n \sigma_{i,n}^2\, P(|X_{i,n}^*| \ge \epsilon) \le L_{n,\epsilon}.$$
Hence the small zero bias condition is satisfied given the Lindeberg condition.

For the reverse implication, observe that
$$x^2\, \mathbf{1}(|x| \ge \epsilon) \le 2\left(x^2 - \frac{\epsilon}{2}|x|\right) \mathbf{1}\left(|x| \ge \frac{\epsilon}{2}\right),$$
since $x^2 - (\epsilon/2)|x| \ge x^2/2$ whenever $|x| \ge \epsilon$. Then
$$L_{n,\epsilon} = \sum_{i=1}^n E\big(X_{i,n}^2\, \mathbf{1}(|X_{i,n}| \ge \epsilon)\big) \le 2 \sum_{i=1}^n E\left[\left(X_{i,n}^2 - \frac{\epsilon}{2}|X_{i,n}|\right) \mathbf{1}\left(|X_{i,n}| \ge \frac{\epsilon}{2}\right)\right] = 2 \sum_{i=1}^n \sigma_{i,n}^2\, P\left(|X_{i,n}^*| \ge \frac{\epsilon}{2}\right) = 2\, P\left(|X_{I_n,n}^*| \ge \frac{\epsilon}{2}\right),$$
where we used (5.8), with $\epsilon/2$ in place of $\epsilon$, and (5.7) for the second and third equalities, respectively. Since the small zero bias condition gives $P(|X_{I_n,n}^*| \ge \epsilon/2) \to 0$, it follows that $L_{n,\epsilon} \to 0$. This finishes the equivalence argument. □

Now that we know that the two conditions are equivalent, it is worthwhile to see how the small zero bias condition can be used to prove the Central Limit Theorem.
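To make Condition 5.2 concrete, consider the classical iid setting $X_{i,n} = \xi_i/\sqrt{n}$ with $E(\xi_i) = 0$ and $\mathrm{Var}(\xi_i) = 1$, in which the Lindeberg sum collapses to $L_{n,\epsilon} = E[\xi^2\, \mathbf{1}(|\xi| \ge \epsilon\sqrt{n})]$. The sketch below (our addition; centered exponential variables are an arbitrary choice) estimates this quantity and watches it vanish as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(2)
eps = 0.1
xi = rng.exponential(1.0, 10**6) - 1.0   # iid, mean 0, variance 1

# For X_{i,n} = xi_i / sqrt(n), the Lindeberg sum reduces to
# L_{n,eps} = E[xi^2 * 1(|xi| >= eps * sqrt(n))].
for n in [10, 100, 1000, 10000]:
    L = (xi**2 * (np.abs(xi) >= eps * np.sqrt(n))).mean()
    print(n, L)                          # decreases to 0 as n grows
```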

6. The Lindeberg-Feller Central Limit Theorem

Lemma 6.1. Let $\mathcal{X}_n$, $n = 1, 2, \ldots$, satisfy Condition 5.1 and let $m_n = \max_{1 \le i \le n} \sigma_{i,n}^2$. Then $X_{I_n,n} \to_p 0$ whenever $\lim_{n\to\infty} m_n = 0$.

Proof. Using (5.7) with $f(x) = x$, we obtain $E(X_{I_n,n}) = 0$, and hence $\mathrm{Var}(X_{I_n,n}) = E(X_{I_n,n}^2)$. Using (5.7) again, now with $f(x) = x^2$,
$$\mathrm{Var}(X_{I_n,n}) = \sum_{i=1}^n \sigma_{i,n}^4.$$
Since $\sigma_{i,n}^4 \le \sigma_{i,n}^2 \max_{1 \le j \le n} \sigma_{j,n}^2 = \sigma_{i,n}^2\, m_n$, we have for all $\epsilon > 0$
$$P(|X_{I_n,n}| \ge \epsilon) \le \frac{\mathrm{Var}(X_{I_n,n})}{\epsilon^2} \le \frac{m_n}{\epsilon^2} \sum_{i=1}^n \sigma_{i,n}^2 = \frac{m_n}{\epsilon^2},$$
where we used Chebyshev's inequality. □

Lemma 6.2. If $\mathcal{X}_n$, $n = 1, 2, \ldots$, satisfies Condition 5.1 and the small zero bias condition 5.3, then $X_{I_n,n} \to_p 0$.

Proof. For all $n$, $1 \le i \le n$, and $\epsilon > 0$,
$$\sigma_{i,n}^2 = E\big(X_{i,n}^2\, \mathbf{1}(|X_{i,n}| < \epsilon)\big) + E\big(X_{i,n}^2\, \mathbf{1}(|X_{i,n}| \ge \epsilon)\big) \le \epsilon^2 + L_{n,\epsilon}.$$
It follows that $m_n \le \epsilon^2 + L_{n,\epsilon}$, and therefore $\limsup_{n\to\infty} m_n \le \epsilon^2$, where we used $L_{n,\epsilon} \to 0$, which holds by Theorem 5.6. Since $\epsilon > 0$ is arbitrary, $m_n \to 0$, and the claim follows by Lemma 6.1. □

Theorem 6.3. If $\mathcal{X}_n$, $n = 1, 2, \ldots$, satisfies Condition 5.1 and the small zero bias condition 5.3, then $W_n \to_d Z$, a standard normal random variable.

Proof (idea). Let $h \in C_{c,0}^\infty$. Stein has shown that there exists a twice differentiable solution $f$, with bounded derivatives, to the Stein equation
$$f'(w) - w f(w) = h(w) - E(h(Z)).$$
Then, using the zero bias relation, together with the fact from [3] that $W_n^* = W_n - X_{I_n,n} + X_{I_n,n}^*$ has the $W_n$-zero biased distribution,
$$E[h(W_n)] - E(h(Z)) = E[f'(W_n) - W_n f(W_n)] = E[f'(W_n) - f'(W_n^*)].$$
Since $f''$ is bounded, we have $\eta(\delta) = \sup_{|x - y| \le \delta} |f'(x) - f'(y)| \to 0$ as $\delta \to 0$. By the small zero bias condition and Lemma 6.2, $|W_n^* - W_n| \le |X_{I_n,n}^*| + |X_{I_n,n}| \to_p 0$, and hence $\eta(|W_n^* - W_n|) \to_p 0$. By the triangle inequality,
$$|E[h(W_n)] - E(h(Z))| \le E\big(\eta(|W_n^* - W_n|)\big),$$
and since $\eta$ is bounded, the right side tends to zero. This gives $\lim_{n\to\infty} E(h(W_n)) = E(h(Z))$, and the result follows by Theorem 3.6.
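As a sanity check of Theorem 6.3 (a simulation sketch we add; the scales $c_i$ and the use of scipy's Kolmogorov-Smirnov test are our choices, assuming scipy is available), the following samples $W_n$ for a triangular array of independent but non-identically distributed uniform summands and compares it with the standard normal.

```python
import numpy as np
from scipy import stats  # assumed available, used only for the KS test

rng = np.random.default_rng(3)
n, reps = 500, 20000

# Independent summands X_i ~ Uniform[-c_i, c_i] with varying scales;
# Var(X_i) = c_i^2 / 3, so dividing by sqrt(sum c_i^2 / 3) enforces
# Condition 5.1.
c = 1.0 + 0.5 * np.sin(np.arange(1, n + 1))
U = rng.uniform(-1.0, 1.0, (reps, n)) * c
W = U.sum(axis=1) / np.sqrt((c**2 / 3).sum())

print(W.mean(), W.var())         # approximately 0 and 1
print(stats.kstest(W, "norm"))   # small KS statistic: W_n is near N(0,1)
```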

7. The Feller-Lévy Partial Converse

Lemma 7.1 (Slutsky's Lemma). Let $U_n$ and $V_n$, $n = 1, 2, \ldots$, be two sequences of random variables. Then $U_n \to_d U$ and $V_n \to_p 0$ imply $U_n + V_n \to_d U$.

The proof of Slutsky's Lemma is an exercise in applying definitions and is left to the reader. When independence holds, we have the following converse.

Lemma 7.2. Let $U_n$ and $V_n$, $n = 1, 2, \ldots$, be two sequences of random variables such that $U_n$ and $V_n$ are independent for every $n$. Then $U_n \to_d U$ and $U_n + V_n \to_d U$ imply $V_n \to_p 0$.

Proof (idea). Consider the special case in which $U_n =_d U$ for all $n$ and $U$ is absolutely continuous. Assume for contradiction that $V_n$ does not tend to zero in probability; then, passing to a subsequence if necessary, there exist $\epsilon > 0$, $p > 0$ and an infinite set $K$ such that $P(|V_n| \ge \epsilon) \ge p$ for all $n \in K$. Using the density of $U$, the function $s(x) = P(x \le U \le x + 1)$ is continuous, and it can be shown to attain its maximum value in a bounded region, say at $y$. Then, using the independence of $U$ and $V_n$, we have
$$P(y \le U + V_n \le y + 1 \mid V_n) = s(y - V_n) \quad \text{for all } n.$$
On one hand, on the event $\{|V_n| \ge \epsilon\}$ this conditional probability is at most $\sup_{|x - y| \ge \epsilon} s(x)$ for all $n \in K$; on the other hand, since $U$ is absolutely continuous and $U_n + V_n \to_d U$, conditioning on $\{|V_n| \ge \epsilon\}$ and its complement yields the contradiction
$$\liminf_{n \in K} P(y \le U + V_n \le y + 1) < \lim_{n\to\infty} P(y \le U_n + V_n \le y + 1) = s(y).$$
Generalizing beyond this special case requires more sophisticated constructs, for which we refer to the Appendix of [3].

Theorem 7.3. If $\mathcal{X}_n$, $n = 1, 2, \ldots$, satisfies Condition 5.1 and $\lim_{n\to\infty} m_n = 0$, where $m_n = \max_{1 \le i \le n} \sigma_{i,n}^2$, then the small zero bias condition is necessary for $W_n \to_d Z$.

Proof. Since $W_n \to_d Z$ and $\mathrm{Var}(W_n) = \mathrm{Var}(Z) = 1$ identically, Theorem 4.2 implies $W_n^* \to_d Z$; the limit is technically $Z^*$, but $Z^* =_d Z$ because $Z$ is the fixed point of the zero bias transformation. Since $m_n \to 0$, Lemma 6.1 yields $X_{I_n,n} \to_p 0$. It then follows from Slutsky's Lemma 7.1 that
$$W_n^* + X_{I_n,n} = W_n + X_{I_n,n}^* \to_d Z.$$
Hence
$$W_n \to_d Z \quad \text{and} \quad W_n + X_{I_n,n}^* \to_d Z.$$
Since $W_n$ is a function of $\mathcal{X}_n$, which is independent of $I_n$ and $\mathcal{X}_n^*$ and therefore of $X_{I_n,n}^*$, Lemma 7.2 yields $X_{I_n,n}^* \to_p 0$. □
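Before moving on, here is a small numerical illustration of the mechanism behind Lemma 7.2 (our sketch; the perturbation $V_n = \pm 1/2$ is an arbitrary choice): if an independent perturbation does not vanish in probability, the sum cannot share the limit of $U_n$, as the inflated variance already reveals.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 10**6
U = rng.standard_normal(N)
V = 0.5 * rng.choice([-1.0, 1.0], N)   # independent of U, does not ->p 0

# U ~ N(0,1), but Var(U + V) = 1 + 0.25, so U + V is not N(0,1);
# by the contrapositive of Lemma 7.2, U_n + V_n ->d U would have
# forced V_n ->p 0.
print(U.var(), (U + V).var())          # approximately 1.0 versus 1.25
```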

8. Cycles in a Random Permutation: Application of Lindeberg-Feller CLT

Random permutations play an important role in many areas of probability and statistics. Here we provide one application that concerns the cycles of a random permutation. Consider a party with $n$ people, where everyone writes their name down on a piece of paper and puts it into a bag. The bag is mixed up, and each person draws one piece of paper. If you draw the name of someone else, you are considered to be in that person's group. We want to show that the distribution of the total number of groups is asymptotically Normal. This construct is an extension of the classical hat-check problem, in which each group is allowed at most two people and the motive is to find the distribution of the number of singleton groups.

Let us now formalize this problem. Define the symmetric group $S_n$ to be the set of all permutations $\pi$ of the set $\{1, 2, \ldots, n\}$. We can represent a permutation using cycle notation. For example, $\pi \in S_7$ may be represented as $\pi = (1, 4, 6)(2, 3, 7)(5)$, with the meaning that $\pi$ maps $1$ to $4$, $4$ to $6$, $6$ to $1$, and so forth. Notice that $\pi$ has two cycles of length $3$ and one of length $1$, for a total of three cycles. Hence, in the problem described above, each cycle represents a single group of people. We denote by $C_{r,n}(\pi)$ the number of cycles of length $r$ in the permutation $\pi \in S_n$, and define $K_n(\pi)$ to be the total number of cycles in $\pi$.

To begin, we show that $C_{1,n}$ converges in distribution to a random variable having the Poisson distribution with mean $1$. This solves the simplest of the hat-check problems.

Proposition 8.1. For $k = 0, 1, \ldots, n$,
$$P(C_{1,n} = k) \to \frac{e^{-1}}{k!} \quad \text{as } n \to \infty.$$

Proof (idea). Using an inclusion-exclusion argument, one can show the elementary equality
$$P(C_{1,n} = k) = \frac{1}{k!} \sum_{l=0}^{n-k} \frac{(-1)^l}{l!}.$$
The proposition then holds as we pass to the limit $n \to \infty$. □

Now we handle the extension of the problem; that is, we show that $K_n(\pi)$ is asymptotically Normal. We employ the Feller coupling [1], which constructs a random permutation $\pi$ uniformly from $S_n$ with the help of $n$ independent Bernoulli variables $X_1, \ldots, X_n$ with distributions

(8.2) $P(X_i = 0) = 1 - \dfrac{1}{i}$ and $P(X_i = 1) = \dfrac{1}{i}$, for $i = 1, \ldots, n$.

Begin the first cycle at step $1$ with the element $1$. At step $i$, $i = 1, \ldots, n$, if $X_{n-i+1} = 1$, close the current cycle and begin a new one starting with the smallest number not yet in any cycle; otherwise, choose an element uniformly from those not yet used. In this way, at step $i$ we complete a cycle with probability $1/(n - i + 1)$. As the total number $K_n(\pi)$ of cycles of $\pi$ is precisely the number of times an element closes the loop upon completing its cycle,
$$K_n(\pi) = \sum_{i=1}^n X_i.$$

Figure: An example of the Feller coupling for $\pi = (1, 4, 6)(2, 3, 7)(5)$, with $X_1 = X_2 = X_5 = 1$. [Figure omitted.]
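The identity $K_n(\pi) =_d \sum_{i=1}^n X_i$ is easy to probe numerically. The sketch below (our addition) counts cycles of uniformly random permutations directly and compares with sums of independent Bernoulli$(1/i)$ variables; both empirical means match the harmonic number $h_n = \sum_{i=1}^n 1/i$.

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 50, 20000

def num_cycles(perm):
    """Count the cycles of a permutation of {0, ..., n-1}."""
    seen, k = np.zeros(len(perm), dtype=bool), 0
    for s in range(len(perm)):
        if not seen[s]:
            k += 1
            j = s
            while not seen[j]:       # follow the cycle starting at s
                seen[j] = True
                j = perm[j]
    return k

K = [num_cycles(rng.permutation(n)) for _ in range(reps)]             # direct
B = (rng.random((reps, n)) < 1.0 / np.arange(1, n + 1)).sum(axis=1)   # coupling

h = (1.0 / np.arange(1, n + 1)).sum()
print(np.mean(K), B.mean(), h)       # all three approximately equal
```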

An important consequence of this construction is that the random variables $X_i$, $i = 1, \ldots, n$, though independent, are not identically distributed. As a result, the classical Central Limit Theorem does not apply to their sum. However, we will show that the small zero bias condition holds, which gives the required asymptotic Normality of $K_n(\pi)$.

First, we standardize $K_n(\pi)$ in order to satisfy Condition 5.1. Since a Bernoulli variable with success probability $p$ has expected value $p$ and variance $p(1-p)$, we use the linearity of expectation, and that of variance for independent variables, to obtain

(8.3) $E(K_n(\pi)) = \sum_{i=1}^n E(X_i) = \sum_{i=1}^n \dfrac{1}{i} = h_n$ and $\mathrm{Var}(K_n(\pi)) = \sigma_n^2 = \sum_{i=1}^n \dfrac{1}{i}\left(1 - \dfrac{1}{i}\right)$.

The standardized variable is then defined as

(8.4) $W_n = \dfrac{K_n(\pi) - h_n}{\sigma_n} = \sum_{i=1}^n X_{i,n}$, where $X_{i,n} = \dfrac{X_i - 1/i}{\sigma_n}$.

It is trivial to verify that $W_n$ satisfies Condition 5.1. Now we verify the small zero bias condition. Notice that

(8.5) $\lim_{n\to\infty} \dfrac{h_n}{\log n} = 1$ and $\lim_{n\to\infty} \dfrac{\sigma_n^2}{\log n} = 1$.

As a consequence of (8.2) and (8.4), we have $W_n = \sum_{i=2}^n X_{i,n}$, since $X_1 = 1$ identically makes $X_{1,n} = 0$ for all $n$.
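Using the coupling, sampling $W_n$ is cheap, and we can watch the asymptotic Normality emerge (our sketch; note the convergence is slow, at rate $1/\sigma_n \sim 1/\sqrt{\log n}$, so the match is only moderate even for large $n$).

```python
import numpy as np

rng = np.random.default_rng(6)
n, reps = 2000, 20000

i = np.arange(1, n + 1)
h = (1.0 / i).sum()                             # E K_n = h_n
s = np.sqrt((1.0 / i * (1 - 1.0 / i)).sum())    # sigma_n

K = (rng.random((reps, n)) < 1.0 / i).sum(axis=1)   # K_n via the coupling
W = (K - h) / s                                 # standardized cycle count

# Deciles of W versus those of N(0,1); the agreement improves only
# like 1/sqrt(log n) as n grows.
print(np.quantile(W, [0.1, 0.5, 0.9]))   # compare to (-1.2816, 0.0, 1.2816)
```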

It remains to identify the zero biased distribution of each summand. We claim that the zero bias transformation of a centered Bernoulli variable is uniformly distributed. Let $B$ be a Bernoulli random variable with success probability $p$ and let $X$ be the centered Bernoulli variable $B - p$, having variance $\sigma^2 = p(1-p)$. Then
$$E(X f(X)) = E\big((B - p) f(B - p)\big) = p(1-p) f(1-p) - (1-p) p f(-p) = \sigma^2 \big[f(1-p) - f(-p)\big] = \sigma^2 \int_{-p}^{1-p} f'(u)\, du = \sigma^2 E(f'(U)),$$
for $U$ having the uniform density over $[-p, 1-p]$, an interval of length one. Thus, since $(\alpha X)^* =_d \alpha X^*$ for all $\alpha \ne 0$, (8.2) and (8.4) give
$$X_{i,n}^* =_d \frac{U_i}{\sigma_n}, \quad \text{where } U_i \text{ has distribution } \mathcal{U}\left[-\frac{1}{i},\, 1 - \frac{1}{i}\right], \quad i = 2, \ldots, n.$$
It follows that $|U_i| \le 1$ for all $i = 2, \ldots, n$, and therefore, for the random index $I_n$,
$$|X_{I_n,n}^*| \le \frac{1}{\sigma_n} \to 0$$
by (8.5). Hence the small zero bias condition is satisfied, and the Lindeberg-Feller Central Limit Theorem gives us the result we want.

References

[1] W. Feller. The fundamental limit theorems in probability. Bull. Amer. Math. Soc. 51 (1945).
[2] G. Folland. Real Analysis: Modern Techniques and Their Applications. John Wiley and Sons (1984).
[3] L. Goldstein. A Probabilistic Proof of the Lindeberg-Feller Central Limit Theorem. The American Mathematical Monthly, Vol. 116, No. 1 (Jan. 2009).
[4] L. Goldstein and G. Reinert. Stein's Method and the Zero Bias Transformation with Application to Simple Random Sampling. Ann. Appl. Probab. 7 (1997).
[5] D. G. Hartig. The Riesz Representation Theorem Revisited. The American Mathematical Monthly, Vol. 90, No. 4 (Apr. 1983).
