1 Math 832 Fall 2013 University of Wisconsin at Madison Instructor: David F. Anderson

2 Pertinent information
Instructor: David Anderson
Office: Van Vleck
Office hours: Mondays from 9-10 AM and Wednesdays from 2:30-3:30 PM.
Lectures: Tuesdays and Thursdays, 11:00-12:15 PM
Room: B131 in Van Vleck.
I will use the class email list to send out corrections and announcements, so please check your wisc.edu email regularly.

3 Pertinent information
Required Text: Probability: Theory and Examples, by Rick Durrett. We will use the fourth edition, but earlier editions should be fine. Just make sure you are completing the correct homework assignments.
Course content: This is a graduate level introductory course on mathematical probability theory. We will cover chapters 5 through 8 of Durrett's book. Highlights include:
Review of chapters 1-3: measure theory, laws of large numbers, central limit theorems.
Conditional expectations and martingales.
Markov chains.
Brownian motion: construction and properties.

4 Pertinent information
Prerequisites: Some familiarity with key parts from the first semester, such as measure-theoretic foundations of probability, laws of large numbers, and central limit theorems.
Evaluation: Course grades will be based on regular homework assignments.

5 Instructions for Homework
Homework must be handed in at class time on the due date. Neatness and clarity are essential. Write one problem per page, except in cases of very short problems.
You are welcome to use LaTeX to typeset your solutions. This is encouraged.
You can use basic facts from analysis and measure theory in your homework, as well as the theorems we cover in class, without reproving them. If you do use other literature for help, cite your sources properly.
You may work with other students on the homework. This is encouraged. However, you must turn in your own assignment using your own words.

6 Review of material from first semester
A phenomenon is called random if the exact outcome is uncertain. The mathematical study of randomness is called the theory of probability.
A probability model has three essential pieces: $(\Omega, \mathcal{F}, P)$.
1. $\Omega$, the sample space, is the set of possible outcomes of an experiment.
2. $\mathcal{F}$ is a σ-algebra, a collection of subsets (events) satisfying
   (a) $\Omega \in \mathcal{F}$,
   (b) if $A \in \mathcal{F}$, then $A^c \in \mathcal{F}$,
   (c) if $A_i \in \mathcal{F}$ is a countable sequence of sets, then $\cup_{i=1}^\infty A_i \in \mathcal{F}$.
   (If we only require $\cup_{i=1}^n A_i \in \mathcal{F}$ for finite $n$, then $\mathcal{F}$ is called an algebra.)
3. A probability measure $P : \mathcal{F} \to \mathbb{R}_{\ge 0}$ satisfies
   (i) $P(\Omega) = 1$ (this makes it a probability measure; otherwise it is just a measure, $\mu$),
   (ii) $P(A) \ge P(\emptyset) = 0$ for all $A \in \mathcal{F}$,
   (iii) if $A_i \in \mathcal{F}$ is a countable sequence of disjoint sets, then
   $P\left(\cup_{i=1}^\infty A_i\right) = \sum_{i=1}^\infty P(A_i)$.

7 Some Examples:
1. Flipping a coin. $\Omega = \{T, H\}$, $\mathcal{F}_1 = \{\emptyset, \{H\}, \{T\}, \{H, T\}\}$, $P(\{T\}) = P(\{H\}) = 1/2$. Or, $\mathcal{F}_0 = \{\emptyset, \{H, T\}\}$, $P(\{H, T\}) = 1$.
2. Rolling a fair die. $\Omega = \{1, 2, 3, 4, 5, 6\}$, $\mathcal{F}_1 = \{\emptyset, \{1\}, \dots, \{6\}, \{1,2\}, \dots, \{2,4,6\}, \dots\}$, $\mathcal{F}_2 = \{\emptyset, \Omega, \{1,2,3\}, \{4,5,6\}\}$. These must have different probabilities put on them. For example, $P_1(\{j\}) = \frac{1}{6}$ for all $j$, and $P_2(\{1,2,3\}) = 1/2$.
3. We could have a random experiment with finitely many equally likely outcomes: $\Omega = \{\omega_1, \dots, \omega_n\}$, $P(A) = \frac{|A|}{n}$.
4. We could have $\Omega = [0, \infty)$...
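To make example 3 concrete, here is a minimal sketch in Python (an illustration, not course material): a finite sample space with equally likely outcomes, where an event is any subset and $P(A) = |A|/n$.

```python
# A finite probability space with equally likely outcomes (illustration only).
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}  # sample space for a fair die

def prob(event):
    """P(A) = |A| / |Omega| for equally likely outcomes."""
    assert event <= omega  # an event is a subset of the sample space
    return Fraction(len(event), len(omega))

print(prob({1, 2, 3}))  # 1/2, matching P_2({1, 2, 3}) above
print(prob({2, 4, 6}))  # 1/2, the even rolls
print(prob(omega))      # 1, i.e. P(Omega) = 1
```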

8 Facts about σ-algebras
Exercise: An arbitrary intersection of σ-algebras is a σ-algebra.
Definition. Let $\mathcal{C}$ be any collection of subsets. Then $\sigma(\mathcal{C})$ will denote the smallest σ-algebra containing $\mathcal{C}$. From the exercise above: this is the (non-empty) intersection of all σ-algebras containing $\mathcal{C}$.
Example.
1. For a single set $A$, $\sigma(A) = \{\emptyset, A, A^c, \Omega\}$.
2. If $\mathcal{C}$ is a σ-algebra, then $\sigma(\mathcal{C}) = \mathcal{C}$.
3. If $\mathcal{C}$ is the set of the open sets in $\mathbb{R}^d$, then $\sigma(\mathcal{C})$ is called the Borel σ-algebra and denoted $\mathcal{B}(\mathbb{R}^d)$.
4. Let $\{(\Omega_i, \mathcal{F}_i) : 1 \le i \le n\}$ be a set of measurable spaces; then the product σ-algebra on the space $\Omega_1 \times \cdots \times \Omega_n$ is $\sigma(\mathcal{F}_1 \times \cdots \times \mathcal{F}_n)$, where $\mathcal{F}_1 \times \cdots \times \mathcal{F}_n = \{A_1 \times \cdots \times A_n : A_i \in \mathcal{F}_i\}$.

9 Facts about measures
We have two forms of continuity for probability functions:
1. If $A_1 \subset A_2 \subset \cdots$ and $A = \cup_{n=1}^\infty A_n$, then $P(A) = \lim_{n \to \infty} P(A_n)$. (Holds for all measures if we allow $= \infty$.)
2. If $A_1 \supset A_2 \supset \cdots$ and $A = \cap_{n=1}^\infty A_n$, then $P(A) = \lim_{n \to \infty} P(A_n)$. (Holds in general if $\mu(A_1) < \infty$.)

10 Some important measures
1. (Counting measure, $\nu$) For $A \subset S$, let $\nu(A)$ give the number of elements in $A$. Thus, $\nu(A) = \infty$ if $A$ has infinitely many elements.
2. (Lebesgue measure $\lambda$ on $(\mathbb{R}^1, \mathcal{B}(\mathbb{R}^1))$) For the open interval $(a, b)$, set $\lambda((a,b)) = b - a$. Extend to $\mathcal{B}(\mathbb{R}^1)$ via Carathéodory ($\pi$-$\lambda$ theorem and all that).
3. (Product measure) Let $\{(\Omega_i, \mathcal{F}_i); 1 \le i \le k\}$ be $k$ σ-finite measure spaces. Then the product measure $\nu_1 \times \cdots \times \nu_k$ is the unique measure on $\sigma(\mathcal{F}_1 \times \cdots \times \mathcal{F}_k)$ such that
$\nu_1 \times \cdots \times \nu_k(A_1 \times \cdots \times A_k) = \nu_1(A_1) \cdots \nu_k(A_k)$, for all $A_i \in \mathcal{F}_i$, $i = 1, \dots, k$.
Lebesgue measure on $\mathbb{R}^k$ is the product measure of $k$ copies of Lebesgue measure on $\mathbb{R}^1$. The events $A_1 \times \cdots \times A_k$ are called measurable rectangles.

11 Random variables
Recall, if $(X_1, \mathcal{F}_1)$ and $(X_2, \mathcal{F}_2)$ are measurable spaces, then $f : X_1 \to X_2$ is measurable if $f^{-1}(E) \in \mathcal{F}_1$ for every $E \in \mathcal{F}_2$.
Definition. A real-valued function $X : \Omega \to \mathbb{R}$ is said to be a random variable if for every Borel set $B \in \mathcal{B}(\mathbb{R})$ we have that $X^{-1}(B) = \{\omega : X(\omega) \in B\} \in \mathcal{F}$. That is, if $X : (\Omega, \mathcal{F}) \to (\mathbb{R}, \mathcal{B}(\mathbb{R}))$ is measurable.

12 Examples
Example.
1. If $\Omega$ is discrete: any function is a random variable (since $\mathcal{F}$ can consist of all subsets, the power set).
2. If $X$ is the indicator of an event $A \in \mathcal{F}$, then it is a RV:
$X(\omega) = 1_A(\omega) = \begin{cases} 1 & \omega \in A \\ 0 & \omega \notin A \end{cases}$
Exercise: Check that an indicator function is a random variable.

13 Exercises
1. The composition of measurable functions is measurable.
2. If $X : (\Omega, \mathcal{F}) \to (\mathbb{R}, \mathcal{B}(\mathbb{R}))$ is a random variable, then the collection $\sigma(X) = \{X^{-1}(B) : B \in \mathcal{B}(\mathbb{R})\}$ is a σ-algebra in $\Omega$. (So, $X$ is a RV if and only if $\sigma(X) \subset \mathcal{F}$.)
3. The collection $\{B \subset \mathbb{R} : X^{-1}(B) \in \mathcal{F}\}$ is a σ-algebra in $\mathbb{R}$.
To check whether $X$ is a RV, we apparently have to check many sets. Can we reduce the work?
Lemma. Suppose $X : (\Omega, \mathcal{F}) \to (S, \mathcal{A})$. If $\mathcal{A}_0 \subset \mathcal{A}$ generates $\mathcal{A}$ (so $\mathcal{A} = \sigma(\mathcal{A}_0)$) and $X^{-1}(A) \in \mathcal{F}$ for all $A \in \mathcal{A}_0$, then $X$ is measurable.
This means that for $\mathbb{R}$ and $\mathbb{R}^d$ it's enough to check the inverse images of intervals/boxes.

14 Distribution of X
Let $X : (\Omega, \mathcal{F}) \to (\mathbb{R}, \mathcal{B}(\mathbb{R}))$ be a RV. Define $\mu$, a measure on the measurable space $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$, via
$\mu(A) \stackrel{def}{=} P(X^{-1}(A)) = P(X \in A) = P\{\omega \in \Omega : X(\omega) \in A\}$.
$\mu$ is the distribution of $X$.
Exercise: $\mu$ is a probability measure on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$.

15 Distribution function
The distribution function of a RV $X$ is defined by
$F_X(x) \stackrel{def}{=} \mu((-\infty, x]) = P(X \le x)$.
Properties:
(1) $F$ is nondecreasing,
(2) $F(-\infty) = 0$, $F(\infty) = 1$,
(3) $F$ is right continuous: $\lim_{y \downarrow x} F(y) = F(x)$,
(4) if we define $F(x-) = \lim_{y \uparrow x} F(y)$, then $F(x-) = P(X < x)$,
(5) $P(X = x) = F(x) - F(x-)$.
Theorem: Any $F$ satisfying properties 1, 2, and 3 is a distribution function for a RV. Proof on page 10.
Important: Let $\Omega = (0,1)$, $\mathcal{F}$ = Borel sets, $P$ = Lebesgue measure, and $X(\omega) = \sup\{y : F(y) < \omega\}$.
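The "Important" construction above can be watched numerically: drawing $\omega$ uniformly from $(0,1)$ and applying the generalized inverse of $F$ produces samples with distribution $F$. A small sketch (my illustration, using $F(y) = 1 - e^{-y}$, the Exp(1) distribution, whose inverse is explicit):

```python
# Sampling via X(omega) = sup{y : F(y) < omega} on Omega = (0,1) with Lebesgue measure.
import numpy as np

rng = np.random.default_rng(0)
omega = rng.uniform(0.0, 1.0, size=100_000)  # omega ~ P = Lebesgue on (0,1)
x = -np.log(1.0 - omega)                     # generalized inverse of F(y) = 1 - exp(-y)

print(x.mean())           # ~ 1.0 = E[Exp(1)]
print(np.mean(x <= 1.0))  # ~ F(1) = 1 - exp(-1) = 0.632...
```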

16 Equal in Distribution
Definition. If $X$ and $Y$ induce the same distribution on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$, then we say that they are equal in distribution. We write $X \stackrel{d}{=} Y$.
Note: they are not necessarily the same RV! In fact, they can even live on different probability spaces!

17 Discrete/continuous random variables
Definition. Let $X : \Omega \to \mathbb{R}$ be a random variable. Call $X$
1. discrete if there exists a countable set $D$ so that $P\{X \in D\} = 1$,
2. continuous if the distribution function $F$ is absolutely continuous.
Discrete random variables satisfy
$F(x) = P(X \le x) = \sum_{s \in D,\ s \le x} f(s)$,
under the requirements that $f(x) \ge 0$ for all $x \in D$ and $1 = \sum_{s \in D} f(s)$.

18 Discrete/continuous random variables
A continuous random variable has a density $f$ with respect to Lebesgue measure on $\mathbb{R}$:
$F(x) = \int_{-\infty}^x f(s)\, ds$,
with the requirement that $f(s) \ge 0$ for all $s \in \mathbb{R}$, and $1 = \int_{-\infty}^\infty f(s)\, ds$.

19 Important examples of discrete RVs
1. (Bernoulli) Ber($p$), $D = \{0, 1\}$: $f(x) = P\{X = x\} = p^x (1-p)^{1-x}$.
2. (Binomial) Bin($n, p$), $D = \{0, 1, \dots, n\}$: $f(x) = P\{X = x\} = \binom{n}{x} p^x (1-p)^{n-x}$. Note: Ber($p$) is Bin($1, p$).
3. (Geometric) Geo($p$), $D = \{1, 2, \dots\}$: $f(x) = P\{X = x\} = p(1-p)^{x-1}$.
4. (Poisson) Pois($\lambda$), $D = \{0, 1, 2, \dots\}$: $f(x) = P\{X = x\} = e^{-\lambda} \frac{\lambda^x}{x!}$.
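As a quick numerical sanity check (illustration only), each pmf above is nonnegative and sums to 1 over its support $D$:

```python
# Verify the pmf's above sum to 1 (the geometric/Poisson sums are truncated, so approximately).
import math

binom = sum(math.comb(10, x) * 0.3**x * 0.7**(10 - x) for x in range(11))
geom = sum(0.3 * 0.7**(x - 1) for x in range(1, 200))
pois = sum(math.exp(-4.0) * 4.0**x / math.factorial(x) for x in range(100))
print(binom, geom, pois)  # each ~ 1.0
```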

20 Important examples of continuous RVs
1. (Exponential) Exp($\lambda$) on $[0, \infty)$: $f(x) = \lambda e^{-\lambda x}$.
2. (Uniform) U($a, b$) on $[a, b]$: $f(x) = \frac{1}{b-a}$.
3. (Normal) N($\mu, \sigma^2$) on $\mathbb{R}$: $f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(x-\mu)^2}{2\sigma^2} \right)$.

21 Integration-expectation
Suppose that $\varphi : (\Omega, \mathcal{F}, \mu) \to (\mathbb{R}, \mathcal{B}(\mathbb{R}))$. We want to develop an object (integral) of the form
$\int_\Omega \varphi\, d\mu$.
The usual machinery is to develop it in the following manner:
1. simple functions,
2. nonnegative functions,
3. general measurable functions.
We are most interested in $X : (\Omega, \mathcal{F}, P) \to (\mathbb{R}, \mathcal{B}(\mathbb{R}))$, in which case $\int_\Omega X\, dP = EX = E_P X$.

22 Step 1: Simple functions.
The function $\varphi$ is said to be a simple function if there are disjoint measurable $A_i$ with $\mu(A_i) < \infty$ for which
$\varphi(\omega) = \sum_{i=1}^n a_i 1_{A_i}(\omega)$.
In this case we define
$\int \varphi\, d\mu = \sum_{i=1}^n a_i \mu(A_i)$.
Note that this already gives us expectations for discrete random variables (taking finitely many values): if
$X(\omega) = \sum_{i=1}^n a_i 1(X(\omega) = a_i)$,
then by definition
$EX = \sum_{i=1}^n a_i P(X = a_i)$.
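The simple-function formula already computes discrete expectations; a one-line illustration (not course code) for a fair die:

```python
# E X = sum_i a_i P(X = a_i) for a fair die: (1 + ... + 6)/6 = 3.5
values, probs = [1, 2, 3, 4, 5, 6], [1/6] * 6
print(sum(a * p for a, p in zip(values, probs)))  # 3.5
```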

23 Properties (check a few)
1. If $\varphi \ge 0$ a.e. then $\int \varphi\, d\mu \ge 0$.
2. (linearity I) For any $a \in \mathbb{R}$, $\int a\varphi\, d\mu = a \int \varphi\, d\mu$.
3. (linearity II) $\int (\varphi + \psi)\, d\mu = \int \varphi\, d\mu + \int \psi\, d\mu$.
4. (triangle inequality) $\left| \int \varphi\, d\mu \right| \le \int |\varphi|\, d\mu$.
5. If $\varphi \le \psi$ a.e., then $\int \varphi\, d\mu \le \int \psi\, d\mu$.
6. If $\psi = \varphi$ a.e. then $\int \varphi\, d\mu = \int \psi\, d\mu$.

24 Step 2: Nonnegative functions.
Now we may define, for measurable $f : \Omega \to [0, \infty]$,
$\int f\, d\mu = \sup\left\{ \int \varphi\, d\mu : 0 \le \varphi \le f,\ \varphi \text{ simple} \right\}$.
All previous properties hold.

25 Step 3: General functions.
Final step: We now say a function is integrable if $\int |f|\, d\mu < \infty$. For such functions, we let
$f^+(x) = f(x) \vee 0$, $f^-(x) = -(f(x) \wedge 0)$,
and note that $f = f^+ - f^-$. Since $|f| = f^+ + f^-$, each term has a finite integral and we can define
$\int f\, d\mu = \int f^+\, d\mu - \int f^-\, d\mu$.
This is well defined since the function is integrable (and both pieces are finite). All properties go through.

26 Expectations
If the underlying measure $\mu$ is a probability measure $P$, then we call the integral an expectation or the expected value and write
$E_P X = \int_\Omega X(\omega)\, P(d\omega) = \int X\, dP$.

27 Critically important theorems
Theorem (Monotone convergence theorem). If $f_n$ is a sequence of non-negative measurable functions such that $f_j \le f_{j+1}$ for all $j$, and (pointwise convergence) $f = \lim_n f_n$ ($= \sup_n f_n$), then
$\int f\, d\mu = \lim_n \int f_n\, d\mu$.
Theorem (Fatou's lemma). If $f_n \ge 0$ is a measurable sequence, then
$\int (\liminf f_n)\, d\mu \le \liminf \int f_n\, d\mu$.
Theorem (Dominated convergence theorem). If $f_n \to f$ a.e., $|f_n| \le g$ for all $n$, and $g$ is integrable, then
$\int f_n\, d\mu \to \int f\, d\mu$.

28 Special case of DCT
Theorem (Bounded convergence). Suppose that $f_n : \Omega \to \mathbb{R}$ are measurable functions satisfying $|f_n| \le M$ and $f_n \to f$ a.e. Then if $\mu(\Omega) < \infty$ we have
$\lim_n \int f_n\, d\mu = \int f\, d\mu$.

29 Chebyshev's/Markov's inequality
Notation: If we only integrate over $A \subset \Omega$, we write
$E(X; A) = E(X 1_A) = \int_A X\, dP$.
Theorem (Chebyshev's/Markov's inequality). Suppose that $\varphi : \mathbb{R} \to \mathbb{R}$ has $\varphi \ge 0$, let $A \in \mathcal{B}(\mathbb{R})$ and let $i_A = \inf\{\varphi(y) : y \in A\}$. Then,
$i_A\, P(X \in A) \le E(\varphi(X); X \in A) \le E\varphi(X)$.
The usual statement from 431: $A = [k, \infty)$, $\varphi(x) = x^2$, in which case
$P(X \ge k) \le \frac{EX^2}{k^2}$.
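A simulation check of the "431" statement (my illustration, with $X \sim N(0,1)$): the bound $P(X \ge k) \le EX^2/k^2$ holds, though it is typically far from sharp.

```python
# Markov's inequality with phi(x) = x^2 and A = [k, infinity), checked by simulation.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1_000_000)  # X ~ N(0,1), so E X^2 = 1
for k in [1.0, 2.0, 3.0]:
    lhs = np.mean(x >= k)        # estimate of P(X >= k)
    rhs = np.mean(x**2) / k**2   # estimate of E X^2 / k^2
    print(k, lhs, rhs)           # lhs <= rhs in each case
```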

30 More on expectations
If $X : \Omega \to \mathbb{R}$ has density $g$:
$P(X \in A) = \int_A g(y)\, dy$.
Then,
$Ef(X) = \int_{\mathbb{R}} f(y) g(y)\, dy$.
If $X$ is discrete with range $D$, then
$Ef(X) = \sum_{x \in D} f(x) P\{X = x\}$.

31 Product measures, Fubini's Theorem
Let $(X, \mathcal{A}, \mu_1)$ and $(Y, \mathcal{B}, \mu_2)$ be two σ-finite measure spaces (i.e. $X, Y$ are countable unions of sets of finite measure). Let
$\Omega = X \times Y = \{(x, y) : x \in X, y \in Y\}$,
$\mathcal{S} = \{A \times B : A \in \mathcal{A}, B \in \mathcal{B}\}$.
Sets in $\mathcal{S}$ are called rectangles. We let $\mathcal{F} = \mathcal{A} \times \mathcal{B}$ be the σ-algebra generated by $\mathcal{S}$.
Theorem (page 36): There is a unique measure $\mu$ on $\mathcal{F}$ with
$\mu(A \times B) = \mu_1(A)\mu_2(B)$
when $A \in \mathcal{A}$ and $B \in \mathcal{B}$.
Theorem (Fubini's theorem). If $f \ge 0$ or $\int |f|\, d\mu < \infty$, then
$\int_X \int_Y f(x, y)\, \mu_2(dy)\, \mu_1(dx) = \int_{X \times Y} f\, d\mu = \int_Y \int_X f(x, y)\, \mu_1(dx)\, \mu_2(dy)$.
For example, this will allow us to make statements of the form:
$E \int f(X_s)\, ds = \int E f(X_s)\, ds$.

32 Different types of convergence (except in distribution)
In all below, we let $X_n, X$ be RVs on $(\Omega, \mathcal{F}, P)$.
Definition. We say $X_n \to X$ almost surely if $P\{\omega : X_n(\omega) \to X(\omega)\} = 1$.
Definition. We say that $X_n \to X$ in $L^r$, $r > 0$, if
$E|X_n - X|^r \to 0$, as $n \to \infty$.
Definition. We say that $X_n \to X$ in probability (old terminology: converges in measure) if for each $\epsilon > 0$,
$P\{\omega : |X_n(\omega) - X(\omega)| > \epsilon\} \to 0$, as $n \to \infty$.

33 Different types of convergence (except in distribution)
1. Convergence in $L^r$ $\implies$ convergence in probability.
2. $L^r \implies L^s$ when $r \ge s$. Example: $L^2 \implies L^1$.
3. Almost sure convergence $\implies$ convergence in probability.
4. Dominated convergence: a.s. + dominated $\implies$ convergence in $L^1$.
5. If $X_n \to X$ in probability and the $X_n$ are uniformly bounded, then $X_n \to X$ in $L^r$ for $r \ge 1$.

34 Chapter 2: Independence and laws of large numbers
Beginning probability.
Definition. Let $(\Omega, \mathcal{F}, P)$ be a probability space. A collection of σ-algebras $\mathcal{F}_i$ are independent if for any finite subset $I \subset \{1, 2, \dots\}$,
$P\left( \cap_{i \in I} A_i \right) = \prod_{i \in I} P(A_i)$,
for any $A_i \in \mathcal{F}_i$.
A collection of random variables $\{X_i, i = 1, 2, \dots\}$ are said to be independent if $\{\sigma(X_1), \sigma(X_2), \dots\}$ are independent.
Events $\{A_1, A_2, \dots\}$ are said to be independent if $1_{A_1}, 1_{A_2}, \dots$ are independent. This is equivalent to saying that whenever $I$ is finite,
$P(\cap_{i \in I} A_i) = \prod_{i \in I} P(A_i)$.

35 Exercises
Exercise: Let $A_1, A_2, \dots, A_n$ be independent. Show
(i) $A_1^c, A_2, \dots, A_n$ are independent;
(ii) $1_{A_1}, \dots, 1_{A_n}$ are independent.
Exercise: If a sequence $\{X_k, k \ge 1\}$ of random variables is independent, then
$P\{X_1 \in A_1, X_2 \in A_2, \dots\} = \prod_{i=1}^\infty P\{X_i \in A_i\}$.
(Note the product is infinite.)

36 Independence
Theorem. Suppose $X_1, \dots, X_n$ are independent RVs and $X_i$ has distribution $\mu_i$. Then $(X_1, \dots, X_n)$ has distribution $\mu_1 \times \cdots \times \mu_n$.
Proof. We have
$P((X_1, \dots, X_n) \in A_1 \times A_2 \times \cdots \times A_n) = P(X_1 \in A_1, \dots, X_n \in A_n)$ (no independence used)
$= \prod_{i=1}^n P(X_i \in A_i)$ (independence used)
$= \prod_{i=1}^n \mu_i(A_i)$ (def.)
$= \mu_1 \times \cdots \times \mu_n(A_1 \times \cdots \times A_n)$ (def. of $\mu_1 \times \cdots \times \mu_n$).
So, the distribution of $(X_1, \dots, X_n)$ and the measure $\mu_1 \times \cdots \times \mu_n$ agree on sets of the form $A_1 \times \cdots \times A_n$, a π-system that generates $\mathcal{B}(\mathbb{R}^n)$. Thus, the measures are the same by the π-λ theorem.

37 Independence
Theorem. Suppose $X$ and $Y$ are independent and have distributions $\mu$ and $\nu$. If $h : \mathbb{R}^2 \to \mathbb{R}$ is measurable and $h \ge 0$ or $E|h(X, Y)| < \infty$, then
$Eh(X, Y) = \int \int h(x, y)\, \mu_X(dx)\, \nu_Y(dy)$.
If $h(x, y) = f(x)g(y)$ with $f, g \ge 0$ or $E|f(X)|, E|g(Y)| < \infty$, then
$Ef(X)g(Y) = Ef(X)\, Eg(Y)$.
Proof. Just use Fubini's theorem. We know now that the measure of $(X, Y)$ is $\mu_X \times \nu_Y$, so
$Eh(X, Y) = \int_{\mathbb{R}^2} h\, d(\mu_X \times \nu_Y) = \int \int h(x, y)\, \mu_X(dx)\, \nu_Y(dy)$.
The second part is immediate.

38 Weak law of large numbers (first major theorem)
Def: $X_i, X_j$ are uncorrelated if $E(X_i X_j) = EX_i\, EX_j$ (for free if independent).
Theorem ($L^2$ weak law). Let $X_1, X_2, \dots$ be uncorrelated random variables with $EX_i = \mu$ and $\text{Var}(X_i) \le C < \infty$. If $S_n = X_1 + \cdots + X_n$, then as $n \to \infty$,
$\frac{1}{n} S_n \to \mu$ in $L^2$,
and hence in probability.
Proof. This is immediate. We have that $ES_n/n = \mu$, so
$E(S_n/n - \mu)^2 = \text{Var}(S_n/n) = \frac{1}{n^2}\left( \text{Var}(X_1) + \cdots + \text{Var}(X_n) \right) \le \frac{Cn}{n^2} \to 0$.
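The rate in the proof, $E(S_n/n - \mu)^2 \le C/n$, is easy to see by simulation (an illustration with Uniform(0,1) variables, where $\mu = 1/2$ and $\text{Var}(X_1) = 1/12$):

```python
# L^2 weak law: the mean squared error of S_n/n decays like Var(X_1)/n.
import numpy as np

rng = np.random.default_rng(2)
for n in [10, 100, 10_000]:
    sn_over_n = rng.uniform(0, 1, size=(1_000, n)).mean(axis=1)
    print(n, np.mean((sn_over_n - 0.5) ** 2))  # ~ 1/(12 n)
```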

39 More general weak law (page 61)
Theorem. Let $X_1, X_2, \dots$ be i.i.d. with $E|X_i| < \infty$. Let $S_n = X_1 + \cdots + X_n$ and let $\mu = EX_1$. Then
$\frac{1}{n} S_n \to \mu$, in probability.

40 Strong Laws of Large Numbers (and Borel-Cantelli lemmas)
If $A_n$ is a sequence of subsets of $\Omega$, we let
$\limsup A_n = \lim_{m \to \infty} \left[ \cup_{n=m}^\infty A_n \right] = \cap_{m=1}^\infty \cup_{n=m}^\infty A_n = \{\omega : \omega \in A_n \text{ i.o.}\}$,
the set of $\omega$ that are in infinitely many $A_n$, and
$\liminf A_n = \lim_{m \to \infty} \left[ \cap_{n=m}^\infty A_n \right] = \cup_{m=1}^\infty \cap_{n=m}^\infty A_n$,
the set of $\omega$ in all but finitely many $A_n$.

41 First Borel-Cantelli lemma
Theorem (Borel-Cantelli lemma). If $\sum_{n=1}^\infty P(A_n) < \infty$, then $P(A_n \text{ i.o.}) = 0$.
Proof. Let $N = \sum_k 1_{A_k}$ be the number of events that occur. Then, from Fubini,
$EN = \sum_k P(A_k) < \infty$,
and so we must have that $N < \infty$ a.s.

42 Example: First Borel-Cantelli lemma
Lemma. If for every $\epsilon > 0$,
$\sum_{n=1}^\infty P(|X_n - X| > \epsilon) < \infty$,
then $X_n \to X$ a.s.
Proof. Let $A_n = \{|X_n - X| > \epsilon\}$. By BC, $P(A_n \text{ i.o.}) = 0$. Thus, with probability one, there is an $N > 0$ for which $n \ge N$ implies
$|X_n - X| \le \epsilon$.
Intersecting over a countable sequence $\epsilon = 1/k$, $k \in \mathbb{N}$, gives $|X_n - X| \to 0$ a.s.

43 Example: First Borel-Cantelli lemma
Theorem (Strong LLN with fourth moment; Theorem in text). Let $X_1, X_2, \dots$ be i.i.d. with finite mean, $EX_i = \mu$, and finite fourth moment, $EX_i^4 < \infty$. Then
$P\left( \frac{S_n}{n} \to \mu \right) = 1$.
Proof. We have
$E\left| \frac{1}{n} S_n - \mu \right|^4 = \frac{1}{n^4} E\left[ \sum_{i=1}^n (X_i - \mu) \right]^4 = \frac{1}{n^4}\left( n\, E(X_1 - \mu)^4 + 6\binom{n}{2}\left( E(X_1 - \mu)^2 \right)^2 \right) \le C n^{-2}$,
since the cross terms containing a lone factor $(X_i - \mu)$ vanish.
Now we use Chebyshev/Markov:
$P(|S_n/n - \mu| > \epsilon) \le \frac{1}{\epsilon^4} E|S_n/n - \mu|^4 \le \frac{C}{\epsilon^4} \frac{1}{n^2}$.
We can now use the previous lemma.

44 Second Borel-Cantelli lemma
Theorem (Second BC lemma). If the events $A_n$ are independent and
$\sum_{n=1}^\infty P(A_n) = \infty$,
then $P(A_n \text{ i.o.}) = 1$.
Example. A drunk monkey banging on a keyboard. Let $A_1$ be the event that in the first $k$ characters, it writes Moby Dick; this has positive probability. Let $A_n$ be the event that in the $n$th set of $k$ characters, it does it. Then $P(A_i) = \epsilon > 0$, and the sum is infinite. So it happens infinitely often.

45 Strong law of large numbers (2nd major theorem)
Theorem. Assume that $X_1, X_2, \dots$ are pairwise independent, identically distributed with $E|X_i| < \infty$. Then
$\frac{S_n}{n} = \frac{X_1 + \cdots + X_n}{n} \to \mu = EX_1$ a.s.

46 Application
Example (Monte Carlo integration). Let $X_1, X_2, \dots$ be independent random variables uniformly distributed on the interval $[0, 1]$. Then
$\overline{g(X)}_n \stackrel{def}{=} \frac{1}{n} \sum_{i=1}^n g(X_i) \to \int_0^1 g(x)\, dx = I(g)$,
with probability one as $n \to \infty$. The error in the estimate of the integral is supplied by the variance
$\text{Var}\left( \overline{g(X)}_n \right) = \frac{1}{n} \int_0^1 (g(x) - I(g))^2\, dx = \frac{\sigma^2}{n}$.
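A direct implementation of this estimator (an illustrative sketch; the function name is mine): the average estimates $I(g)$, and the sample standard deviation divided by $\sqrt{n}$ estimates the error $\sigma/\sqrt{n}$.

```python
# Monte Carlo integration of g over [0,1], with an error estimate of sigma/sqrt(n).
import numpy as np

rng = np.random.default_rng(3)

def mc_integrate(g, n):
    gx = g(rng.uniform(0.0, 1.0, size=n))
    return gx.mean(), gx.std(ddof=1) / np.sqrt(n)  # (estimate, standard error)

est, se = mc_integrate(np.exp, 100_000)
print(est, se)  # ~ e - 1 = 1.71828..., with error on the order of 1e-3
```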

47 Chapter 3: Central limit theorems
Definition. A sequence of distribution functions $F_n$ is said to converge weakly to a limit $F$, written $F_n \Rightarrow F$, if $F_n(y) \to F(y)$ for all $y$ that are continuity points of $F$. A sequence of random variables $X_n$ is said to converge weakly or converge in distribution to a limit $X$, written $X_n \Rightarrow X$, if their distribution functions satisfy $F_n \Rightarrow F$.
Note that we can have $X_n \Rightarrow X$ even when the $X_n$ are defined on completely different probability spaces!

48 First weak convergence results
Example (De Moivre-Laplace). Let $X_1, X_2, \dots$ be i.i.d. with $P(X_i = 1) = P(X_i = -1) = 1/2$, and let $S_n = X_1 + \cdots + X_n$. Then
$F_n(y) = P(S_n/\sqrt{n} \le y) \to \int_{-\infty}^y (2\pi)^{-1/2} e^{-x^2/2}\, dx$.
Thus, $S_n/\sqrt{n} \Rightarrow Z$, where $Z$ is a standard Normal RV.
Example (Glivenko-Cantelli). Let $X_i$ be i.i.d. with distribution $F$. Then for a.e. $\omega$,
$F_n(\omega, y) \stackrel{def}{=} n^{-1} \sum_{m=1}^n 1(X_m(\omega) \le y) \to F(y)$,
for all $y$.
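The Glivenko-Cantelli statement is easy to watch numerically (an illustration with Uniform(0,1) samples, where $F(y) = y$):

```python
# Empirical distribution function F_n(omega, y) -> F(y) for iid Uniform(0,1).
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(0, 1, size=10_000)
for y in [0.1, 0.5, 0.9]:
    print(y, np.mean(x <= y))  # F_n(y) ~ F(y) = y
```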

49 Where does convergence in distribution fit in with other forms?
Lemma. Convergence in probability implies convergence in distribution.
Since everything implies convergence in probability, convergence in distribution is the weakest... But...

50 Conv. in distr. $\implies$ a.s.?
Theorem. If $F_n \Rightarrow F$, then we can find random variables $X_n, X$ with these distributions so that $X_n \to X$ a.s.
Theorem (Fatou's lemma). Let $g \ge 0$ be continuous. If $X_n \Rightarrow X$, then
$Eg(X) \le \liminf_n Eg(X_n)$.
Proof. We know there exists a probability space and RVs $Y_n \to Y$ a.s. such that the distributions are correct. Then,
$Eg(X) = Eg(Y) = Eg(\lim_n Y_n) = E \lim_n g(Y_n)$ (by continuity)
$\le \liminf_n Eg(Y_n)$ (by the usual Fatou's lemma)
$= \liminf_n Eg(X_n)$.

51 Alternative definition of weak convergence
Theorem. $X_n \Rightarrow X$ if and only if for every bounded continuous function $g$, we have $Eg(X_n) \to Eg(X)$.
Proof. Key ingredients: bounded/dominated convergence theorem and the a.s. representation.

52 Alternative definition of weak convergence
Theorem (Portmanteau theorem). Equivalent definitions of weak convergence:
1. $X_n \Rightarrow X$.
2. If $G$ is open, then $\liminf_n P(X_n \in G) \ge P(X \in G)$.
3. If $F$ is closed, then $\limsup_n P(X_n \in F) \le P(X \in F)$.
4. If $P(X \in \partial A) = 0$, then $\lim_n P(X_n \in A) = P(X \in A)$.
(Note: $\partial A = \bar{A} \setminus \text{int}\, A$ is the boundary of $A$.)

53 How do we get weak convergence?
Theorem (Helly's selection theorem). For every sequence $F_n$ of distribution functions, there is a subsequence $F_{n(k)}$ and a right continuous nondecreasing function $F$ so that
$\lim_k F_{n(k)}(y) = F(y)$
at all continuity points $y$ of $F$. (Vague convergence: $F$ need not be a distribution function.)
Definition. We say that the sequence of distribution functions $F_n$ is tight if for every $\epsilon$ there is an $M = M(\epsilon)$ so that
$\limsup_n P(|X_n| > M) \le \epsilon$.
Theorem. $F_n$ tight $\iff$ every subsequential vague limit is a distribution function.

54 Characteristic functions - Fourier transforms
Definition. If $X$ is a random variable we define its characteristic function by $\varphi(t) = Ee^{itX}$.
Exercise:
1. Let $X$ be a random variable with characteristic function $\varphi_X(t)$. Let $Y = aX + b$. Show that $\varphi_Y(t) = e^{itb} \varphi_X(at)$.
2. Show that if $Z \sim N(0, 1)$, then $Z$ has characteristic function $\varphi_Z(t) = e^{-t^2/2}$. What is the characteristic function of a normal random variable with mean $\mu$ and variance $\sigma^2$?

55 Characteristic functions
Definition. If $X$ is a random variable we define its characteristic function by $\varphi(t) = Ee^{itX}$.
Important because:
1. Characteristic functions characterize distributions (so know one, know the other).
2. (Lévy's continuity theorem) Convergence in distribution is equivalent to (pointwise) convergence of characteristic functions!

56 De Moivre-Laplace in two lines
Recall, if $S_n$ is binomial($n$, 1/2) (a sum of Bernoullis), then
$\frac{1}{\sqrt{n}} S_n \Rightarrow N(0, 1)$.
Can we show this quickly using characteristic functions? Let $S_n = X_1 + \cdots + X_n$, with
$P(X_i = 1) = P(X_i = -1) = 1/2$.
What is the char. func. of $X_1$?
$\varphi_{X_1}(t) = Ee^{itX_1} = \frac{1}{2}\left( e^{it} + e^{-it} \right) = \cos(t)$.
Now use that
$\varphi_{S_n/\sqrt{n}}(t) = \left( \varphi_{X_1}(t/\sqrt{n}) \right)^n = \left( \cos(t/\sqrt{n}) \right)^n = \left( 1 - \frac{t^2}{2n} + O(t^4 n^{-2}) \right)^n \to e^{-t^2/2}$.
This actually has all the makings of a proof of the Central Limit Theorem.
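The convergence $(\cos(t/\sqrt{n}))^n \to e^{-t^2/2}$ used above can be checked directly (illustration):

```python
# Pointwise convergence of the characteristic functions of S_n / sqrt(n).
import numpy as np

t = 1.5
for n in [10, 100, 10_000, 1_000_000]:
    print(n, np.cos(t / np.sqrt(n)) ** n)
print("limit:", np.exp(-t**2 / 2))  # ~ 0.3246...
```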

57 Central limit theorem (third major limit theorem)
Theorem. Let $X_i$ be i.i.d. with $EX_i = \mu$ and finite variance $\sigma^2 \in (0, \infty)$. Then
$\frac{S_n - n\mu}{\sigma n^{1/2}} \Rightarrow N(0, 1)$.
Proof. More of the previous slide, with a few more bells and whistles.
Simple consequence of the CLT: if $X_i \sim$ Poisson($\lambda$) are i.i.d., then
$\frac{\sum_{i=1}^n (X_i - \lambda)}{\sqrt{\lambda n}} \Rightarrow N(0, 1)$.
Note that this gives a correction to the LLN. Let $Z \sim N(0, 1)$; then
$\frac{1}{n} S_n \approx \mu + \sigma \frac{1}{\sqrt{n}} Z$.

58 A more general version: Lindeberg-Feller
Theorem. For each $n$, let $X_{n,m}$, $1 \le m \le n$, be mean zero independent random variables. Suppose
(i) $\text{Var}\left( \sum_{m=1}^n X_{n,m} \right) = \sum_{m=1}^n EX_{n,m}^2 \to \sigma^2 > 0$, and
(ii) for all $\epsilon > 0$,
$\lim_n \sum_{m=1}^n E\left[ X_{n,m}^2 1(|X_{n,m}| > \epsilon) \right] = 0$.
Then
$S_n = X_{n,1} + \cdots + X_{n,n} \Rightarrow N(0, \sigma^2)$.
Says: a sum of a large number of small independent effects has approximately a normal distribution. No identically distributed assumption... amazing. The Lindeberg condition guarantees that no single random variable contributes significantly to the variance.

59 Application of Lindeberg-Feller: Lyapunov's CLT
Theorem. Let $X_1, X_2, \dots$ be independent with $EX_j = 0$ for all $j$. Let
$\alpha_n^2 = \sum_{j=1}^n \text{Var}(X_j)$, so $\alpha_n = \sqrt{\sum_{j=1}^n \text{Var}(X_j)}$.
If there exists a $\delta > 0$ (usually 1) such that
$\lim_n \frac{1}{\alpha_n^{2+\delta}} \sum_{j=1}^n E|X_j|^{2+\delta} = 0$,
then
$\frac{X_1 + \cdots + X_n}{\alpha_n} = \frac{S_n - ES_n}{\sqrt{\sum_{j=1}^n \text{Var}(X_j)}} \Rightarrow N(0, 1)$.
Proof. Define $X_{n,m} = (1/\alpha_n) X_m$ and check the Lindeberg-Feller conditions.

60 Other types of convergence: Poisson
Recall a result from intro probability. Let $n \gg 1$ and $p \ll 1$ with $np = \lambda = O(1)$. Then, if $X_n \sim$ binomial($n, p$) and $Y \sim$ Poisson($\lambda$), as $n \to \infty$, $X_n \Rightarrow Y$. Sometimes called the weak law of small numbers.
Usual proof:
$P\{X_n = k\} = \binom{n}{k} p^k (1-p)^{n-k} = \frac{n!}{(n-k)! k!} \left( \frac{\lambda}{n} \right)^k \left( 1 - \frac{\lambda}{n} \right)^{n-k}$
$= \frac{n(n-1)\cdots(n-(k-1))}{n^k} \frac{\lambda^k}{k!} \left( 1 - \frac{\lambda}{n} \right)^n \left( 1 - \frac{\lambda}{n} \right)^{-k}$.
Taking the limit as $n \to \infty$:
$\frac{n(n-1)\cdots(n-(k-1))}{n^k} \to 1$, $\left( 1 - \frac{\lambda}{n} \right)^n \to e^{-\lambda}$, $\left( 1 - \frac{\lambda}{n} \right)^{-k} \to 1$,
so $P\{X_n = k\} \to e^{-\lambda} \frac{\lambda^k}{k!}$.

61 Other types of convergence: Poisson
New proof: characteristic functions. We have (letting the $Z_i$ be Bernoulli($p$)),
$\varphi_{X_n}(t) = \left( \varphi_{Z_i}(t) \right)^n = \left( e^{it} p + (1-p) \right)^n = \left( 1 + p(e^{it} - 1) \right)^n = \left( 1 + \frac{\lambda}{n}(e^{it} - 1) \right)^n \to \exp\{\lambda(e^{it} - 1)\}$.
Is that the Poisson char. func.? We have, for $X \sim$ Pois($\lambda$),
$Ee^{itX} = \sum_{k=0}^\infty e^{itk} e^{-\lambda} \frac{\lambda^k}{k!} = e^{-\lambda} \sum_{k=0}^\infty \frac{(\lambda e^{it})^k}{k!} = \exp\{-\lambda + \lambda e^{it}\}$.

62 Other types of convergence: Poisson
Theorem. For each $n$, let $X_{n,m}$, $1 \le m \le n$, be independent random variables with $P(X_{n,m} = 1) = p_{n,m}$, $P(X_{n,m} = 0) = 1 - p_{n,m}$. So $X_{n,m}$ is Bernoulli($p_{n,m}$). Suppose
(i) $\sum_{m=1}^n p_{n,m} \to \lambda \in (0, \infty)$, and
(ii) $\max_{1 \le m \le n} p_{n,m} \to 0$.
If we let $S_n = X_{n,1} + \cdots + X_{n,n}$, then $S_n \Rightarrow Z$ where $Z$ is Poisson($\lambda$).
Examples:
1. Explains why typos on a page have a Poisson distribution.
2. Injuries at an amusement park are Poisson.
3. Page 148 has examples.
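Comparing the two pmfs directly shows how good the approximation is even for moderate $n$ (an illustration with $\lambda = 3$, $n = 1000$; the particular values are arbitrary choices):

```python
# Binomial(n, lambda/n) pmf versus Poisson(lambda) pmf, term by term.
import math

lam, n = 3.0, 1000
p = lam / n
for k in range(6):
    binom = math.comb(n, k) * p**k * (1 - p)**(n - k)
    pois = math.exp(-lam) * lam**k / math.factorial(k)
    print(k, round(binom, 6), round(pois, 6))  # the columns nearly agree
```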

63 Weak convergence in $\mathbb{R}^d$: random vectors
Let $X = (X_1, \dots, X_d)$ be a random vector. Its distribution function $F$ is $F(x) = P(X \le x)$, where $X \le x$ means $X_i \le x_i$ for all $i \in \{1, \dots, d\}$.
Definition. We say that $X_n \Rightarrow X$ if $F_n(x) \to F(x)$ for all points of continuity of $F$.
As before we have equivalent ways to characterize convergence in distribution. In particular, the Portmanteau theorem:
1. $Ef(X_n) \to Ef(X)$ for all bounded, continuous $f$.
2. For all closed sets $K$, $\limsup_n P(X_n \in K) \le P(X \in K)$.
3. For all open sets $G$, $\liminf_n P(X_n \in G) \ge P(X \in G)$.

64 Weak convergence in $\mathbb{R}^d$: random vectors
Tightness: same.
Definition. The sequence of probability measures $\mu_n$ is tight if for any $\epsilon > 0$, there is an $M > 0$ so that
$\liminf_n \mu_n([-M, M]^d) \ge 1 - \epsilon$,
OR if there is a compact $K_\epsilon$ for which
$\limsup_n \mu_n(K_\epsilon^c) \le \epsilon$.
Theorem (3.9.2 in text). If $\mu_n$ is tight, then there is a weakly convergent subsequence.

65 Weak convergence in $\mathbb{R}^d$: random vectors
Definition. The CF of a random vector $X = (X_1, \dots, X_d)$ is given by: for $t \in \mathbb{R}^d$,
$\varphi_X(t) = E[\exp(i(t_1 X_1 + \cdots + t_d X_d))] = E\exp(i(t \cdot X))$.
1. Again: $\varphi_X$ characterizes the distribution of a RV.
2. Again: convergence $X_n \Rightarrow X$ is equivalent to $\varphi_n \to \varphi$.
Theorem (CLT in $\mathbb{R}^d$). Let $X_1, X_2, \dots$ be i.i.d. with $EX_1 = \mu \in \mathbb{R}^d$ and finite covariances
$\Gamma_{i,j} \stackrel{def}{=} \text{Cov}(X_{1,i}, X_{1,j})$.
Let $S_n = X_1 + \cdots + X_n$; then
$\frac{S_n - n\mu}{\sqrt{n}} \Rightarrow \chi$,
where $\chi$ is a multivariate normal with mean 0 and covariance $\Gamma$.

66 Multivariate Normal
Definition (Multivariate Gaussian). A $d$-dimensional standard Gaussian is a random vector $X = (X_1, \dots, X_d)$ where the $X_i$'s are independent standard Gaussians. In particular, $X$ has mean 0 and covariance matrix $I$.
More generally, a random vector $X = (X_1, \dots, X_d)$ is Gaussian if there is a vector $b$, a $d \times r$ matrix $A$, and an $r$-dimensional standard Gaussian $Y$ such that $X = AY + b$. Then $X$ has mean $\mu = b$ and covariance matrix (you can check this)
$\Sigma = AA^T$.
The CF of $X$ is given by
$\varphi_X(t) = \exp\left( i\sum_{j=1}^d t_j \mu_j - \frac{1}{2} \sum_{j,k=1}^d t_j t_k \Sigma_{j,k} \right) = \exp\left( it^T \mu - \frac{1}{2} t^T \Sigma t \right)$.
(Compute it in the standard case, then transform it.)
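The representation $X = AY + b$ is also how Gaussian vectors are simulated in practice. A sketch (illustration; the particular $A$ and $b$ are arbitrary choices) confirming the mean is $b$ and the covariance is $AA^T$:

```python
# Sample X = A Y + b with Y a standard Gaussian vector; check mean b, covariance A A^T.
import numpy as np

rng = np.random.default_rng(4)
A = np.array([[1.0, 0.0], [0.8, 0.6]])
b = np.array([1.0, -2.0])
Y = rng.standard_normal((100_000, 2))
X = Y @ A.T + b

print(X.mean(axis=0))  # ~ b = [1, -2]
print(np.cov(X.T))     # ~ A @ A.T = [[1.0, 0.8], [0.8, 1.0]]
```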
