Measure-theoretic probability
1 Measure-theoretic probability
Koltay L.
VEGTMAM144B
November 28, 2012
(VEGTMAM144B) Measure-theoretic probability, November 28, 2012 — 1 / 27
2 The probability space
Definition. The measure space (Ω, A, P) is a probability space, where Ω is the set of elementary events; A is the σ-field of events (or simply the field of events); P is the probability measure, i.e. P(Ω) = 1.
Examples.
Combinatorial probability: Ω = {ω₁, ω₂, …, ω_n}, A = 2^Ω, P({ω_k}) = 1/n for k = 1, 2, …, n, so that
P(A) = Σ_{ω_k ∈ A} 1/n = "no. of elements in A" / "no. of elements in Ω", A ⊆ Ω.
Geometric probability: Ω ∈ B_n with 0 < m_n(Ω) < ∞, A = B_n ∩ Ω (the trace of the Borel σ-field on Ω), and P(A) is proportional to m_n(A), so that
P(A) = m_n(A) / m_n(Ω), A ∈ B_n ∩ Ω,
where m_n denotes the Lebesgue measure on R^n.
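Both examples can be checked numerically; a minimal sketch, where the fair die and the unit-square triangle are my own illustrative choices:

```python
from fractions import Fraction

# Combinatorial probability: a fair die, Omega = {1,...,6}, P({k}) = 1/6.
omega = set(range(1, 7))

def prob(event):
    """P(A) = |A| / |Omega| for the uniform (combinatorial) model."""
    return Fraction(len(event & omega), len(omega))

even = {2, 4, 6}
print(prob(even))  # 1/2

# Geometric probability: Omega = unit square, P proportional to area (m_2).
# For the triangle A = {(x, y) : y < x}: P(A) = m_2(A) / m_2(Omega).
area_omega = 1.0
area_A = 0.5
print(area_A / area_omega)  # 0.5
```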
3 Random variables
Definition. Let (Ω, A, P) be a probability space. An A-measurable function X: Ω → R^n is called a random variable (RV).
Remark. If n = 1 it is a scalar RV, otherwise a random vector variable (RVV).
The function X: Ω → R is a scalar RV iff for all x ∈ R
X⁻¹((−∞, x)) = {ω | X(ω) < x} ∈ A.
The function X = (X₁, X₂, …, X_n): Ω → R^n is a RVV iff X_k: Ω → R is a scalar RV for all k = 1, 2, …, n.
4 Distribution of random variables
Definition. The cumulative distribution function (cdf, or simply distribution function) of the RV(V) X = (X₁, X₂, …, X_n): Ω → R^n is the function
F(x₁, x₂, …, x_n) = P(X⁻¹((−∞, x₁) × (−∞, x₂) × ⋯ × (−∞, x_n))) = P(X₁ < x₁, X₂ < x₂, …, X_n < x_n), (x₁, x₂, …, x_n) ∈ R^n.
Remark. The cdf of X is the distribution function induced by the generated measure P_X on B_n; F and P_X uniquely determine each other, so that
P(X ∈ B) = P_X(B) = ∫_B dP_X = ∫_B dF, B ∈ B_n.
5 Properties of the cdf
If I ⊆ R^n is a bounded interval, then P(X ∈ I) = P_X(I) = [F]_I ≥ 0; e.g. if n = 1,
P_X([a, b)) = F(b) − F(a)
P_X([a, b]) = F(b + 0) − F(a)
P_X((a, b)) = F(b) − F(a + 0)
P_X((a, b]) = F(b + 0) − F(a + 0).
F is left-continuous and monotone increasing in each of its variables.
F has limit values: for (x₁, …, x_{k−1}, x_{k+1}, …, x_n) ∈ R^{n−1}, n > 1,
lim_{x_k → −∞} F(x₁, …, x_k, …, x_n) = 0,
lim_{x_k → +∞} F(x₁, …, x_k, …, x_n) = P(X₁ < x₁, …, X_{k−1} < x_{k−1}, X_{k+1} < x_{k+1}, …, X_n < x_n),
which is the cdf of the RV(V) (X₁, …, X_{k−1}, X_{k+1}, …, X_n); and if n = 1, then
lim_{x → −∞} F(x) = 0, lim_{x → +∞} F(x) = 1.
6 Discrete distribution
If P_X is dominated by the counting measure ν(A) = Σ_{z ∈ Z ∩ A} 1, where Z is a finite or countable subset of R^n, then, denoting p_z = (dP_X/dν)(z) at the points z ∈ Z — which is a subset of the range of X with P_X-measure 1 — and p_z = 0 otherwise, we have
F(x) = Σ_{z < x} p_z, p_z = lim_{x → z+0} F(x) − F(z),
where z ↦ p_z, z ∈ Z, is a discrete probability density function (discrete pdf) satisfying
p_z ≥ 0 and Σ_{z ∈ Z} p_z = 1;
moreover
P(X ∈ B) = P_X(B) = ∫_B dP_X = ∫_B p_z dν = Σ_{z ∈ B} p_z, B ∈ B_n.
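The left-continuous convention F(x) = P(X < x) and the jump formula can be sketched in code; the weights below are my own illustrative choice:

```python
# Discrete pdf on Z = {0, 1, 2} (hypothetical weights) and the induced
# left-continuous cdf F(x) = P(X < x) = sum of p_z over z < x.
p = {0: 0.2, 1: 0.5, 2: 0.3}
assert abs(sum(p.values()) - 1.0) < 1e-12  # p_z >= 0, sum p_z = 1

def F(x):
    return sum(pz for z, pz in p.items() if z < x)

print(F(1))                # 0.2  (strict inequality: the mass at 1 is excluded)
print(round(F(1.5), 6))    # 0.7
# the jump at z recovers p_z:  lim_{x -> z+0} F(x) - F(z)
eps = 1e-9
print(round(F(1 + eps) - F(1), 6))  # 0.5
```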
7 Continuous distribution
If P_X is dominated by the Lebesgue measure m_n, then, denoting f(z) = (dP_X/dm_n)(z) at the points z ∈ Z — which is a subset of the range of X with P_X-measure 1 — and f(z) = 0 otherwise, we have
F(x) = ∫_{z < x} f dm_n, x ∈ R^n, f(z) = (∂^n/∂z₁ ⋯ ∂z_n) F(z) at the continuity points z ∈ R^n of f,
where z ↦ f(z) is a continuous probability density function (continuous pdf) satisfying
f(z) ≥ 0 and ∫ f(z) dm_n = 1;
moreover
P(X ∈ B) = P_X(B) = ∫_B dP_X = ∫_B f(z) dm_n, B ∈ B_n,
where the relations <, ∈ are meant componentwise.
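A numeric check of F(x) = ∫_{z<x} f dm₁ and f = F′, using the exponential distribution with rate 1 as my example (f(z) = e^{−z} for z ≥ 0, F(x) = 1 − e^{−x}):

```python
import math

# Exponential pdf with rate 1 and its cdf (my illustrative choice).
def f(z):
    return math.exp(-z) if z >= 0 else 0.0

def F(x):
    return 1 - math.exp(-x) if x >= 0 else 0.0

# F(x) = integral over {z < x} of f dm_1, checked by a midpoint Riemann sum:
def integral(a, b, n=10000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

print(abs(integral(0, 2) - F(2)) < 1e-6)   # True
# f = F' at continuity points of f, via a central difference:
x, h = 1.0, 1e-6
print(abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < 1e-6)  # True
```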
8 The support of a pdf
Remark. In both cases the discrete and continuous pdfs dP_X/dν, dP_X/dm_n can be set to 0 except on a set Z ⊆ im(X) with P_X-measure 1, since P_X(B) = P_X(B ∩ Z), B ∈ B_n:
in the discrete case
P_X(B) = Σ_{z ∈ B ∩ Z} p_z + Σ_{z ∈ B ∩ Z^c} 0 = Σ_{z ∈ B ∩ Z} p_z;
in the continuous case
P_X(B) = ∫_{B ∩ Z} f(z) dm_n + ∫_{B ∩ Z^c} 0 dm_n = ∫_{B ∩ Z} f(z) dm_n.
9 Probability vs measure theory
If some property is valid at the points of Ω except on a set N of P-measure 0, i.e. it is valid a.e., then P(N^c) = 1 and we say that it is valid with probability 1, i.e. almost surely (a.s.).
If the sequence X₁, X₂, … of RVs converges to the RV X
1. a.e., then we say that the sequence converges strongly, or a.s., denoted (a.s.) lim_{n→∞} X_n = X;
2. in P-measure, then we say that the sequence converges in probability, denoted (P) lim_{n→∞} X_n = X;
3. in L^p, then we say that the sequence converges in p-th mean, denoted (L^p) lim_{n→∞} X_n = X; we speak of convergence in mean or in mean square in the cases p = 1 and p = 2 respectively, and mean-square convergence implies convergence in mean.
10 Weak convergence of distributions
Definition. The sequence of cdfs F_n: R^r → [0, 1], n = 1, 2, …, converges weakly to the distribution function F: R^r → [0, 1] if
lim_{n→∞} F_n(x) = F(x) at the continuity points x ∈ R^r of F,
denoted F_n →(w) F.
Remark. As a cdf can have at most countably many discontinuity points, the weak limit is unique.
F_n →(w) F is equivalent to
lim_{n→∞} [F_n]_I = [F]_I, I = [a₁; b₁] × ⋯ × [a_r; b_r] ⊆ R^r,
for a = (a₁, …, a_r) and b = (b₁, …, b_r) continuity points of F.
11 Independent events
We call two or more events A_i, i ∈ I, independent if for any finite collection of indices {i₁, i₂, …, i_k} ⊆ I
P(A_{i₁} ∩ A_{i₂} ∩ ⋯ ∩ A_{i_k}) = P(A_{i₁}) P(A_{i₂}) ⋯ P(A_{i_k}),
which means — according to the concept of the conditional probability measure P(A | B) = P(A ∩ B)/P(B), P(B) > 0 — that for any two disjoint collections of indices {i₁, i₂, …, i_k}, {j₁, j₂, …, j_l} ⊆ I
P(A_{i₁} ∩ A_{i₂} ∩ ⋯ ∩ A_{i_k} | A_{j₁} ∩ A_{j₂} ∩ ⋯ ∩ A_{j_l}) = P(A_{i₁} ∩ A_{i₂} ∩ ⋯ ∩ A_{i_k}),
i.e. some of them occur jointly with the same probability regardless of the occurrence of some others.
Remark. Any event can be substituted by its complement, and the independence remains true.
12 Independent collections of events and RVs
We call two or more collections of events A_i, i ∈ I, independent if the events A_i ∈ A_i, i ∈ I, are independent. It also means that a collection can be extended by the complements of its events.
We call two or more RV(V)s X_i, i ∈ I, independent if the generated σ-fields A_{X_i}, i ∈ I, are independent. This is equivalent to the following conditions: for any finite collection of indices {i₁, i₂, …, i_k} ⊆ I
P_{(X_{i₁}, X_{i₂}, …, X_{i_k})} = P_{X_{i₁}} ⊗ P_{X_{i₂}} ⊗ ⋯ ⊗ P_{X_{i_k}},
F_{(X_{i₁}, X_{i₂}, …, X_{i_k})}(x₁, x₂, …, x_k) = F_{X_{i₁}}(x₁) F_{X_{i₂}}(x₂) ⋯ F_{X_{i_k}}(x_k);
and if P_{(X_{i₁}, …, X_{i_k})} is dominated by the Lebesgue measure m_k, then
f_{(X_{i₁}, …, X_{i_k})}(x₁, x₂, …, x_k) = f_{X_{i₁}}(x₁) f_{X_{i₂}}(x₂) ⋯ f_{X_{i_k}}(x_k), m_k-a.e.;
and if P_{(X_{i₁}, …, X_{i_k})} is dominated by a counting measure ν_k, then
p_{(x₁, x₂, …, x_k)} = p_{x₁} p_{x₂} ⋯ p_{x_k}, ν_k-a.e.
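The discrete factorization criterion can be sketched on a small example of my own (two fair dice for the independent pair, a perfectly correlated pair as a counterexample):

```python
from fractions import Fraction
from itertools import product

# Two fair dice: the joint pmf of (X1, X2) factorizes, p_{(x1,x2)} = p_{x1} p_{x2},
# which is the discrete independence criterion from the slide.
px = {k: Fraction(1, 6) for k in range(1, 7)}
joint = {(a, b): Fraction(1, 36) for a, b in product(range(1, 7), repeat=2)}
print(all(joint[a, b] == px[a] * px[b] for a, b in joint))  # True

# A dependent pair fails the test: X2 = X1 (perfectly correlated).
joint_dep = {(a, a): Fraction(1, 6) for a in range(1, 7)}
print(joint_dep[1, 1] == px[1] * px[1])  # False: 1/6 != 1/36
```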
13 Expectation and standard deviation
Definition. The expected value of the scalar RV X: Ω → R is the integral
E(X) = ∫_Ω X dP
if its value is finite, i.e. X ∈ L¹(Ω, A, P).
The standard deviation of the scalar RV X: Ω → R is
D(X) = √(E[(X − E(X))²])
if E(X) and E[(X − E(X))²] exist, i.e. X ∈ L²(Ω, A, P).
V(X) = D²(X) = E[(X − E(X))²] = E(X²) − E²(X)
is called the variance of the RV X.
14 Properties of the expectation
Assuming the existence of the expectations of the RVs:
1. if 0 ≤ X then 0 ≤ E(X); moreover "=" holds if and only if X = 0 a.s.;
2. if α, β ∈ R then E(αX + βY) = αE(X) + βE(Y);
3. if A ∈ A then E(1_A) = P(A), especially E(1_Ω) = P(Ω) = 1, so a constant RV c = c·1_Ω has expectation E(c) = c, the same constant;
4. if h: R → R is a convex function, then h(E(X)) ≤ E(h(X));
5. if X, Y are independent RVs, then E(XY) = E(X) E(Y).
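Properties 2–5 can all be verified exactly on a finite probability space; a sketch using two independent fair dice (my example) and rational arithmetic:

```python
from fractions import Fraction
from itertools import product

# Finite probability space: two independent fair dice.
omega = list(product(range(1, 7), repeat=2))
P = Fraction(1, 36)

def E(h):
    """E(h) = sum over omega of h(w) * P({w})."""
    return sum(h(w) * P for w in omega)

X = lambda w: w[0]
Y = lambda w: w[1]

# 2. linearity: E(2X + 3Y) = 2E(X) + 3E(Y)
assert E(lambda w: 2*X(w) + 3*Y(w)) == 2*E(X) + 3*E(Y)
# 3. indicator: E(1_A) = P(A) for A = {X is even}
assert E(lambda w: 1 if X(w) % 2 == 0 else 0) == Fraction(1, 2)
# 4. Jensen: h(E(X)) <= E(h(X)) for the convex h(x) = x^2
assert E(X)**2 <= E(lambda w: X(w)**2)
# 5. independence: E(XY) = E(X) E(Y)
assert E(lambda w: X(w)*Y(w)) == E(X) * E(Y)
print(E(X))  # 7/2
```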
15 Properties of the standard deviation
Assuming the existence of the expectations of the RVs:
1. D(X) ≥ 0, and equality holds if and only if X = E(X) a.s., i.e. X is an a.s. constant function;
2. D(X) = √(E(X²) − E²(X)), D²(X) = V(X) = E(X²) − E²(X);
3. if X, Y are independent, then D(X + Y) = √(D²(X) + D²(Y)), V(X + Y) = V(X) + V(Y);
4. if a, b ∈ R then D(aX + b) = |a| D(X), V(aX + b) = a² V(X).
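Working with V = D² keeps the checks in exact rationals; a sketch, again on two independent fair dice (my example):

```python
from fractions import Fraction
from itertools import product

# Variance rules checked on two independent fair dice.
omega = list(product(range(1, 7), repeat=2))
P = Fraction(1, 36)
E = lambda h: sum(h(w) * P for w in omega)
V = lambda h: E(lambda w: h(w)**2) - E(h)**2   # V(X) = E(X^2) - E^2(X)

X = lambda w: w[0]
Y = lambda w: w[1]

print(V(X))                                    # 35/12 (variance of one die)
# 3. independence: V(X + Y) = V(X) + V(Y)
assert V(lambda w: X(w) + Y(w)) == V(X) + V(Y)
# 4. affine rule: V(aX + b) = a^2 V(X), hence D(aX + b) = |a| D(X)
a, b = -3, 7
assert V(lambda w: a*X(w) + b) == a**2 * V(X)
```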
16 Calculating the expected value of a RV
Let F denote the cdf of the RV X; then in the image space
E(X) = ∫_R x dP_X = ∫_{−∞}^{+∞} x dF(x)
if the integral is finite. Especially, if P_X is dominated by
the counting measure with pdf x_k ↦ p_k, k = 1, 2, …, then
E(X) = Σ_k x_k p_k,
assuming the series converges absolutely, i.e. Σ_k |x_k| p_k is finite;
the Lebesgue measure with pdf x ↦ f(x), x ∈ R, then
E(X) = ∫_{−∞}^{+∞} x f(x) dx,
assuming the (improper) integral converges absolutely, i.e. ∫_{−∞}^{+∞} |x| f(x) dx is finite.
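The two formulas, checked on standard distributions of my own choosing (geometric with p = 1/2, exponential with rate 1, both with known expectation):

```python
import math

# Counting measure: p_k = (1/2)^k on {1, 2, ...}; E(X) = sum k p_k = 2.
EX_discrete = sum(k * 0.5**k for k in range(1, 200))
print(round(EX_discrete, 10))  # 2.0

# Lebesgue measure: f(x) = e^{-x} for x >= 0; E(X) = integral x f(x) dx = 1.
# Midpoint Riemann sum over [0, 50]; the truncated tail is negligible.
n, b = 50000, 50.0
h = b / n
EX_cont = sum((i + 0.5) * h * math.exp(-(i + 0.5) * h) for i in range(n)) * h
print(abs(EX_cont - 1.0) < 1e-4)  # True
```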
17 Expectation of a Borel function of a RV(V)
Let F denote the cdf of the RV(V) X, and let h: R^n → R be Borel measurable; then
E(h(X)) = ∫_{−∞}^{+∞} ⋯ ∫_{−∞}^{+∞} h(x₁, x₂, …, x_n) dF(x₁, x₂, …, x_n)
if the integral is finite. Especially, if P_X is dominated by
the counting measure with pdf x_k ↦ p_k, k = 1, 2, …, then
E(h(X)) = Σ_k h(x_k) p_k,
assuming the series converges absolutely;
the Lebesgue measure with pdf x ↦ f(x), x = (x₁, x₂, …, x_n) ∈ R^n, then
E(h(X)) = ∫_{−∞}^{+∞} ⋯ ∫_{−∞}^{+∞} h(x₁, …, x_n) f(x₁, …, x_n) dx₁ ⋯ dx_n,
assuming the (improper) integral converges absolutely.
18 Moment-generating function
Definitions. The moment-generating function of the RV X: Ω → R is
M_X(t) = E(e^{tX}), t ∈ R, e^{tX} ∈ L¹.
The generating function of the RV X: Ω → N is
G_X(z) = E(z^X), z ∈ R, z^X ∈ L¹.
Remarks.
M_X(0) = G_X(1) = 1, and M_X(t) = G_X(e^t);
M_X(t) = 1 + t E(X) + (t²/2!) E(X²) + ⋯ + (t^n/n!) E(X^n) + ⋯;
G_X(z) = P(X = 0) + z P(X = 1) + ⋯ + z^n P(X = n) + ⋯.
If the moment E(X^k) of order k exists, then
M_X^{(k)}(0) = E(X^k), G_X^{(k)}(1) = E(X(X − 1) ⋯ (X − k + 1)).
If the RVs X, Y are independent, then
M_{X+Y}(t) = M_X(t) M_Y(t), G_{X+Y}(z) = G_X(z) G_Y(z).
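A sketch of the generating-function facts on a fair die (my example): G′(1) recovers E(X), and for independent summands the generating function of the sum — whose pmf is the convolution of the pmfs — is the product G_X G_Y:

```python
from fractions import Fraction

# Generating function of a fair die: G(z) = sum_k P(X = k) z^k.
p = {k: Fraction(1, 6) for k in range(1, 7)}
G  = lambda z: sum(pk * z**k for k, pk in p.items())
dG = lambda z: sum(pk * k * z**(k - 1) for k, pk in p.items())  # G'(z)

assert G(1) == 1
print(dG(1))  # 7/2, i.e. G'(1) = E(X)

# For independent X, Y: G_{X+Y} = G_X * G_Y. The pmf of X + Y is the
# convolution of the two pmfs, and its generating function is the product:
conv = {}
for a, pa in p.items():
    for b, pb in p.items():
        conv[a + b] = conv.get(a + b, 0) + pa * pb
G_sum = lambda z: sum(pk * z**k for k, pk in conv.items())
z = Fraction(1, 3)
assert G_sum(z) == G(z) * G(z)
```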
19 Convergent sequences of RVs
1. Weak law of large numbers: if X₁, X₂, …, X_n, … are independent, identically distributed (i.i.d.) RVs with common expectation and standard deviation m = E(X_k), σ = D(X_k), k = 1, 2, …, then
(P) lim_{n→∞} (1/n) Σ_{k=1}^n X_k = m.
2. Bernoulli law of large numbers: if the RVs Y_n are distributed according to the binomial law (k ↦ p_k = C(n, k) p^k (1 − p)^{n−k}, k = 0, 1, …, n), then
(P) lim_{n→∞} Y_n/n = p.
3. Strong law of large numbers: if X₁, X₂, …, X_n, … are i.i.d. RVs, then the limit (a.s.) lim_{n→∞} (1/n) Σ_{k=1}^n X_k exists if and only if X_k ∈ L¹, k = 1, 2, …, and in this case
(a.s.) lim_{n→∞} (1/n) Σ_{k=1}^n X_k = m = E(X_k).
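The weak law can be watched at work in a short simulation; fair-die rolls (m = 7/2) are my illustrative choice, and the seed is fixed only for reproducibility:

```python
import random

# Weak law of large numbers, simulated: the sample mean (1/n) sum X_k of
# fair-die rolls concentrates around m = 3.5 as n grows (spread ~ sigma/sqrt(n)).
random.seed(0)

def sample_mean(n):
    return sum(random.randint(1, 6) for _ in range(n)) / n

for n in (100, 10000, 100000):
    print(n, sample_mean(n))
```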
20 Weak convergence of distributions
1. The hypergeometric distribution k ↦ p_k = C(M, k) C(N − M, n − k) / C(N, n), k = 0, 1, …, n, converges weakly to the binomial distribution:
lim_{N,M→∞, M/N→p} p_k = C(n, k) p^k (1 − p)^{n−k}, k = 0, 1, …, n.
2. The binomial distribution k ↦ p_k = C(n, k) p^k (1 − p)^{n−k}, k = 0, 1, …, n, converges weakly to the Poisson distribution:
lim_{n→∞, np→λ} p_k = (λ^k/k!) e^{−λ}, k = 0, 1, ….
3. Central limit theorem: if X₁, X₂, …, X_n, … are square-integrable i.i.d. RVs with m = E(X_k), σ = D(X_k), k = 1, 2, …, then, denoting the cdf of the RV (Σ_{k=1}^n X_k − n m)/(σ √n) by F_n,
F_n →(w) Φ, i.e. lim_{n→∞} F_n(x) = Φ(x), x ∈ R.
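Limit 2 can be checked numerically; holding np = λ = 2 fixed (my choice) and letting n grow, the binomial probabilities approach the Poisson ones:

```python
from math import comb, exp, factorial

# Binomial(n, lambda/n) vs Poisson(lambda) for lambda = 2.
lam = 2.0

def binom_pmf(n, k):
    p = lam / n
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k):
    return lam**k / factorial(k) * exp(-lam)

errs = []
for n in (10, 100, 1000):
    errs.append(max(abs(binom_pmf(n, k) - poisson_pmf(k)) for k in range(10)))
    print(n, round(errs[-1], 5))

assert errs[0] > errs[1] > errs[2]   # the error shrinks roughly like 1/n
```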
21 Conditional expectation
Definition. Let X: Ω → R be a RV with existing expected value (X ∈ L¹), and let A₀ ⊆ A be a sub-σ-field. The conditional expectation of X with respect to the σ-field A₀ is the A₀-measurable function E(X | A₀): Ω → R for which
ν(A) = ∫_A X dP = ∫_A E(X | A₀) dP, A ∈ A₀.
Remark. The conditional expectation is the a.s. unique Radon–Nikodym derivative E(X | A₀) = dν/dP.
If Y: Ω → R^n is a RV(V) and A₀ = A_Y, then there is a P_Y-a.s. unique function E(X | Y = ·) ∈ L¹(R^n, B_n, P_Y), and substituting Y into it we get the RV E(X | Y) = E(X | A_Y) ∈ L¹(Ω, A_Y, P).
If X is square integrable, then E(X | Y = ·) ∈ L²(R^n, B_n, P_Y) is the function for which
∫∫_{R×R^n} |x − E(X | Y = y)|² dP_{XY} = ∫_Ω |X − E(X | Y)|² dP is minimal.
22 Properties of the conditional expectation
1. If X ∈ L¹ then E(E(X | A₀)) = E(X).
2. If 0 ≤ X ∈ L¹ then E(X | A₀) ≥ 0 a.s.
3. If X, Y ∈ L¹ and α, β ∈ R then E(αX + βY | A₀) = αE(X | A₀) + βE(Y | A₀) a.s.
4. If Y: Ω → R^n is a RV(V), h: R^n → R is a measurable function and X, h(Y)·X ∈ L¹, then E(h(Y)·X | Y) = h(Y)·E(X | Y) a.s., which is true for any constant function h(y) = c too.
5. If the RVs X, Y are independent and h(X, Y) ∈ L¹, then E(h(X, Y) | Y = y) = E(h(X, y)) P_Y-a.s.; especially, if X ∈ L¹ then E(X | Y) = E(X) a.s.
6. If X ∈ L¹ and A₀ ⊆ A₁ ⊆ A then E(E(X | A₁) | A₀) = E(X | A₀) a.s.; in particular E(E(X | Y₀, Y₁) | Y₀) = E(X | Y₀) a.s. if A₀ = A_{Y₀} ⊆ A₁ = A_{Y₀,Y₁}.
23 Conditional expectation as the best regression
Let X ∈ L² and let Y: Ω → R^n be a RV(V); then
E|X − E(X | Y)|² ≤ ∫_Ω |X − H(Y)|² dP, H ∈ L²(R^n, B_n, P_Y),
and equality holds if and only if E(X | Y) = H(Y) a.s., i.e. the best (in L² norm) regressor function is the conditional expectation E(X | Y).
The residual variance of X by the best regressor function of Y is
σ_R² = E|X − E(X | Y)|² = E(X²) − E(E²(X | Y)) = D²(X) − D²(E(X | Y)).
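The residual-variance identity can be verified exactly on a small discrete pair; the joint pmf below is my own illustrative choice:

```python
from fractions import Fraction

# sigma_R^2 = E|X - E(X|Y)|^2 = D^2(X) - D^2(E(X|Y)), checked on a
# hypothetical joint pmf p_{kl} with X, Y in {0, 1}.
F = Fraction
p = {(0, 0): F(1, 8), (1, 0): F(3, 8),
     (0, 1): F(2, 8), (1, 1): F(2, 8)}

pY = {0: F(1, 2), 1: F(1, 2)}                      # marginal of Y
# best regressor g(y) = E(X | Y = y)
g = {yl: sum(xk * pkl / pY[yl] for (xk, l), pkl in p.items() if l == yl)
     for yl in pY}

E = lambda h: sum(h(xk, yl) * pkl for (xk, yl), pkl in p.items())
var = lambda h: E(lambda x, y: h(x, y)**2) - E(h)**2

sigma_R2 = E(lambda x, y: (x - g[y])**2)
print(sigma_R2)  # 7/32
assert sigma_R2 == var(lambda x, y: x) - var(lambda x, y: g[y])
```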
24 Conditional probability
If A ∈ A then we denote by P(A | A₀) = E(1_A | A₀) or P(A | Y = y) = E(1_A | Y = y) the conditional probability of the event A for given A₀ or Y = y. It satisfies the following:
0 ≤ P(A | A₀) ≤ 1 and P(Ω | A₀) = 1 a.s.;
if A₁, A₂, … ∈ A are pairwise disjoint events, then P(∪_n A_n | A₀) = Σ_n P(A_n | A₀) a.s.
25 Discrete case
Let the RVV (X, Y): Ω → R × R^n have the discrete pdf (x_k, y_l) ↦ p_{kl}, k = 1, 2, …, l = 1, 2, …; then
E(X | Y = y_l) = Σ_k x_k p_{x_k|y_l}, l = 1, 2, …,
where for each l = 1, 2, … with 0 < p_{·l} = Σ_k p_{kl} = P(Y = y_l),
p_{x_k|y_l} = P(X = x_k | Y = y_l) = p_{kl}/p_{·l}, k = 1, 2, …,
is the conditional distribution of the RV X for given Y = y_l.
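A worked discrete example with a small joint pmf of my own choosing (X, Y ∈ {0, 1}), including the tower-property check E(E(X|Y)) = E(X):

```python
from fractions import Fraction

# Joint pmf p_{kl} (hypothetical): rows X in {0, 1}, columns Y in {0, 1}.
F = Fraction
p = {(0, 0): F(1, 8), (1, 0): F(3, 8),
     (0, 1): F(2, 8), (1, 1): F(2, 8)}

def E_X_given_Y(yl):
    p_l = sum(pkl for (xk, l), pkl in p.items() if l == yl)      # P(Y = y_l)
    # conditional pmf p_{x_k|y_l} = p_{kl} / p_l, then E = sum x_k p_{x_k|y_l}
    return sum(xk * pkl / p_l for (xk, l), pkl in p.items() if l == yl)

print(E_X_given_Y(0))  # 3/4
print(E_X_given_Y(1))  # 1/2

# Tower property: E(E(X|Y)) = E(X); here P(Y=0) = P(Y=1) = 1/2.
EX = sum(xk * pkl for (xk, l), pkl in p.items())
assert E_X_given_Y(0) * F(1, 2) + E_X_given_Y(1) * F(1, 2) == EX
```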
26 Continuous case
Let the RVV (X, Y): Ω → R × R^n have the continuous pdf (x, y) ↦ f(x, y), x ∈ R, y ∈ R^n; then
E(X | Y = y) = ∫ x f_{X|Y}(x|y) dx,
where for each y ∈ R^n with 0 < f_Y(y) = ∫ f(x, y) dx,
f_{X|Y}(x|y) = f(x, y)/f_Y(y), x ∈ R,
is the conditional distribution of the RV X for given Y = y.
27 Mixed case
If X = Σ_k x_k 1_{A_k}, where {X = x_k} = A_k ∈ A, P(A_k) > 0, k = 1, 2, …, is a partition of Ω, and the RV Y: Ω → R^n is distributed, depending on the occurrence of the event A_k, as
P(Y ∈ B | A_k) = ∫_B f_k dm_n, B ∈ B_n, k = 1, 2, …,
then Y is distributed with the continuous pdf
f(y) = Σ_k P(A_k) f_k(y), y ∈ R^n (theorem of total probability),
and
E(Y | X = x_k) = ∫ y f_k(y) dy, k = 1, 2, … (if n = 1);
P(A_k | Y = y) = P(A_k) f_k(y)/f(y), y ∈ R^n, f(y) > 0 (Bayes theorem);
E(X | Y = y) = Σ_k x_k P(A_k) f_k(y)/f(y), y ∈ R^n, f(y) > 0.
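A minimal numeric sketch of the mixed case; the priors and the two exponential conditional densities are my own illustrative assumptions:

```python
import math

# X in {0, 1} with priors P(A_0) = 0.3, P(A_1) = 0.7; given A_k, Y is
# exponential with rate r_k: f_k(y) = r_k e^{-r_k y} for y >= 0.
PA = {0: 0.3, 1: 0.7}
r = {0: 1.0, 1: 3.0}
f_k = lambda k, y: r[k] * math.exp(-r[k] * y)

# Theorem of total probability: f(y) = sum_k P(A_k) f_k(y)
f = lambda y: sum(PA[k] * f_k(k, y) for k in PA)

# Bayes theorem: P(A_k | Y = y) = P(A_k) f_k(y) / f(y)
post = lambda k, y: PA[k] * f_k(k, y) / f(y)
y = 0.2
print(round(post(0, y) + post(1, y), 12))  # 1.0 (posteriors sum to one)

# E(X | Y = y) = sum_k x_k P(A_k | Y = y), which here reduces to P(A_1 | Y = y):
print(post(1, y))
```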
More informationMotivation and Applications: Why Should I Study Probability?
Motivation and Applications: Why Should I Study Probability? As stated by Laplace, Probability is common sense reduced to calculation. You need to first learn the theory required to correctly do these
More informationProbability reminders
CS246 Winter 204 Mining Massive Data Sets Probability reminders Sammy El Ghazzal selghazz@stanfordedu Disclaimer These notes may contain typos, mistakes or confusing points Please contact the author so
More informationChapter 2. Discrete Distributions
Chapter. Discrete Distributions Objectives ˆ Basic Concepts & Epectations ˆ Binomial, Poisson, Geometric, Negative Binomial, and Hypergeometric Distributions ˆ Introduction to the Maimum Likelihood Estimation
More informationBMIR Lecture Series on Probability and Statistics Fall 2015 Discrete RVs
Lecture #7 BMIR Lecture Series on Probability and Statistics Fall 2015 Department of Biomedical Engineering and Environmental Sciences National Tsing Hua University 7.1 Function of Single Variable Theorem
More informationMATH 418: Lectures on Conditional Expectation
MATH 418: Lectures on Conditional Expectation Instructor: r. Ed Perkins, Notes taken by Adrian She Conditional expectation is one of the most useful tools of probability. The Radon-Nikodym theorem enables
More informationMTH 404: Measure and Integration
MTH 404: Measure and Integration Semester 2, 2012-2013 Dr. Prahlad Vaidyanathan Contents I. Introduction....................................... 3 1. Motivation................................... 3 2. The
More informationMATH MEASURE THEORY AND FOURIER ANALYSIS. Contents
MATH 3969 - MEASURE THEORY AND FOURIER ANALYSIS ANDREW TULLOCH Contents 1. Measure Theory 2 1.1. Properties of Measures 3 1.2. Constructing σ-algebras and measures 3 1.3. Properties of the Lebesgue measure
More informationQuantitative Methods in Economics Conditional Expectations
Quantitative Methods in Economics Conditional Expectations Maximilian Kasy Harvard University, fall 2016 1 / 19 Roadmap, Part I 1. Linear predictors and least squares regression 2. Conditional expectations
More informationNotes on Measure Theory. Let A 2 M. A function µ : A [0, ] is finitely additive if, A j ) =
Notes on Measure Theory Definitions and Facts from Topic 1500 For any set M, 2 M := {subsets of M} is called the power set of M. The power set is the set of all sets. Let A 2 M. A function µ : A [0, ]
More informationChapter 3: Random Variables 1
Chapter 3: Random Variables 1 Yunghsiang S. Han Graduate Institute of Communication Engineering, National Taipei University Taiwan E-mail: yshan@mail.ntpu.edu.tw 1 Modified from the lecture notes by Prof.
More informationMultivariate Random Variable
Multivariate Random Variable Author: Author: Andrés Hincapié and Linyi Cao This Version: August 7, 2016 Multivariate Random Variable 3 Now we consider models with more than one r.v. These are called multivariate
More informationChapter 1. Statistical Spaces
Chapter 1 Statistical Spaces Mathematical statistics is a science that studies the statistical regularity of random phenomena, essentially by some observation values of random variable (r.v.) X. Sometimes
More informationMultiple Random Variables
Multiple Random Variables This Version: July 30, 2015 Multiple Random Variables 2 Now we consider models with more than one r.v. These are called multivariate models For instance: height and weight An
More informationNotes on Measure, Probability and Stochastic Processes. João Lopes Dias
Notes on Measure, Probability and Stochastic Processes João Lopes Dias Departamento de Matemática, ISEG, Universidade de Lisboa, Rua do Quelhas 6, 1200-781 Lisboa, Portugal E-mail address: jldias@iseg.ulisboa.pt
More informationQuick Tour of Basic Probability Theory and Linear Algebra
Quick Tour of and Linear Algebra Quick Tour of and Linear Algebra CS224w: Social and Information Network Analysis Fall 2011 Quick Tour of and Linear Algebra Quick Tour of and Linear Algebra Outline Definitions
More informationChapter 1 Statistical Reasoning Why statistics? Section 1.1 Basics of Probability Theory
Chapter 1 Statistical Reasoning Why statistics? Uncertainty of nature (weather, earth movement, etc. ) Uncertainty in observation/sampling/measurement Variability of human operation/error imperfection
More information1 Measurable Functions
36-752 Advanced Probability Overview Spring 2018 2. Measurable Functions, Random Variables, and Integration Instructor: Alessandro Rinaldo Associated reading: Sec 1.5 of Ash and Doléans-Dade; Sec 1.3 and
More informationAdvanced Analysis Qualifying Examination Department of Mathematics and Statistics University of Massachusetts. Tuesday, January 16th, 2018
NAME: Advanced Analysis Qualifying Examination Department of Mathematics and Statistics University of Massachusetts Tuesday, January 16th, 2018 Instructions 1. This exam consists of eight (8) problems
More information2 n k In particular, using Stirling formula, we can calculate the asymptotic of obtaining heads exactly half of the time:
Chapter 1 Random Variables 1.1 Elementary Examples We will start with elementary and intuitive examples of probability. The most well-known example is that of a fair coin: if flipped, the probability of
More informationChapter 2. Random Variable. Define single random variables in terms of their PDF and CDF, and calculate moments such as the mean and variance.
Chapter 2 Random Variable CLO2 Define single random variables in terms of their PDF and CDF, and calculate moments such as the mean and variance. 1 1. Introduction In Chapter 1, we introduced the concept
More informationGiven a experiment with outcomes in sample space: Ω Probability measure applied to subsets of Ω: P[A] 0 P[A B] = P[A] + P[B] P[AB] = P(AB)
1 16.584: Lecture 2 : REVIEW Given a experiment with outcomes in sample space: Ω Probability measure applied to subsets of Ω: P[A] 0 P[A B] = P[A] + P[B] if AB = P[A B] = P[A] + P[B] P[AB] P[A] = 1 P[A
More informationChapter 1. Sets and probability. 1.3 Probability space
Random processes - Chapter 1. Sets and probability 1 Random processes Chapter 1. Sets and probability 1.3 Probability space 1.3 Probability space Random processes - Chapter 1. Sets and probability 2 Probability
More informationReal Analysis Problems
Real Analysis Problems Cristian E. Gutiérrez September 14, 29 1 1 CONTINUITY 1 Continuity Problem 1.1 Let r n be the sequence of rational numbers and Prove that f(x) = 1. f is continuous on the irrationals.
More informationActuarial Science Exam 1/P
Actuarial Science Exam /P Ville A. Satopää December 5, 2009 Contents Review of Algebra and Calculus 2 2 Basic Probability Concepts 3 3 Conditional Probability and Independence 4 4 Combinatorial Principles,
More informationRandom Process Lecture 1. Fundamentals of Probability
Random Process Lecture 1. Fundamentals of Probability Husheng Li Min Kao Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville Spring, 2016 1/43 Outline 2/43 1 Syllabus
More informationBrownian Motion and Conditional Probability
Math 561: Theory of Probability (Spring 2018) Week 10 Brownian Motion and Conditional Probability 10.1 Standard Brownian Motion (SBM) Brownian motion is a stochastic process with both practical and theoretical
More informationST5215: Advanced Statistical Theory
Department of Statistics & Applied Probability Thursday, August 15, 2011 Lecture 2: Measurable Function and Integration Measurable function f : a function from Ω to Λ (often Λ = R k ) Inverse image of
More informationTopics in Probability and Statistics
Topics in Probability and tatistics A Fundamental Construction uppose {, P } is a sample space (with probability P), and suppose X : R is a random variable. The distribution of X is the probability P X
More informationConvergence in Distribution
Convergence in Distribution Undergraduate version of central limit theorem: if X 1,..., X n are iid from a population with mean µ and standard deviation σ then n 1/2 ( X µ)/σ has approximately a normal
More informationProbability Review. Yutian Li. January 18, Stanford University. Yutian Li (Stanford University) Probability Review January 18, / 27
Probability Review Yutian Li Stanford University January 18, 2018 Yutian Li (Stanford University) Probability Review January 18, 2018 1 / 27 Outline 1 Elements of probability 2 Random variables 3 Multiple
More informationMathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )
Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca October 22nd, 2014 E. Tanré (INRIA - Team Tosca) Mathematical
More information) ) = γ. and P ( X. B(a, b) = Γ(a)Γ(b) Γ(a + b) ; (x + y, ) I J}. Then, (rx) a 1 (ry) b 1 e (x+y)r r 2 dxdy Γ(a)Γ(b) D
3 Independent Random Variables II: Examples 3.1 Some functions of independent r.v. s. Let X 1, X 2,... be independent r.v. s with the known distributions. Then, one can compute the distribution of a r.v.
More information7 Random samples and sampling distributions
7 Random samples and sampling distributions 7.1 Introduction - random samples We will use the term experiment in a very general way to refer to some process, procedure or natural phenomena that produces
More information