1 Section 21: Expected Values

Po-Ning Chen, Professor
Institute of Communications Engineering
National Chiao Tung University
Hsin Chu, Taiwan 30010, R.O.C.

2 Expected value as integral (21-1)

Definition (expected value) The expected value of a random variable X is the integral of X with respect to its cdf, i.e.,

  E[X] = E[X^+] − E[X^−] = ∫_0^∞ x dF_X(x) − ∫_{−∞}^0 (−x) dF_X(x) = ∫_{−∞}^∞ x dF_X(x).

Properties of expected value
1. E[X] exists if, and only if, at least one of E[X^+] and E[X^−] is finite.
2. X is integrable if E[|X|] < ∞.
3. For a B/B-measurable (or simply B-measurable) function g,

  E[g(X)] = E[g(X)^+] − E[g(X)^−]
          = ∫_{x∈R: g(x)≥0} g(x) dF_X(x) − ∫_{x∈R: g(x)<0} [−g(x)] dF_X(x)
          = ∫_R g(x) dF_X(x).

3 Absolute moments (21-2)

Definition (absolute moments) The kth absolute moment of X is

  E[|X|^k] = ∫_{−∞}^∞ |x|^k dF_X(x).

Properties of absolute moments
1. The absolute moment always exists (possibly infinite), since the integrand is non-negative.
2. If the kth absolute moment is finite, then the jth absolute moment is finite for every j ≤ k.
   Proof: This follows easily from |x|^j ≤ 1 + |x|^k for j ≤ k.

4 Moments (21-3)

Definition (moments) The kth moment of X is

  E[X^k] = ∫ x^k dF_X(x)
         = ∫ max{x^k, 0} dF_X(x) − ∫ (−min{x^k, 0}) dF_X(x)
         = E[(X^k)^+] − E[(X^k)^−].

Properties of moments
1. E[X^k] exists if, and only if, at least one of E[(X^k)^+] and E[(X^k)^−] is finite.
2. X^k is integrable if E[|X^k|] < ∞.
3. For a B/B-measurable (or simply B-measurable) function g,

  E[g(X^k)] = E[g(X^k)^+] − E[g(X^k)^−]
            = ∫_{x∈R: g(x^k)≥0} g(x^k) dF_X(x) − ∫_{x∈R: g(x^k)<0} [−g(x^k)] dF_X(x)
            = ∫ g(x^k) dF_X(x).

5 Moments (21-4)

Example 21.1 (moments of the standard normal) The pdf of the standard normal distribution is (1/√(2π)) e^{−x²/2}. Integration by parts shows that

  (1/√(2π)) ∫ x^k e^{−x²/2} dx = (1/√(2π)) ∫ (k−1) x^{k−2} e^{−x²/2} dx,

or equivalently, E[X^k] = (k−1) E[X^{k−2}]. As a result,

  E[X^k] = 0 for k odd, and E[X^k] = (k−1)(k−3)⋯3·1 for k even.
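
As a quick numerical cross-check (my addition, not part of the original slides), one can evaluate these moments by direct numerical integration against the standard normal pdf; the recursion predicts 0 for odd k and the double factorial (k−1)(k−3)⋯3·1 for even k.

```python
# Sketch: numerically integrate x^k * phi(x) and compare with the
# recursion E[X^k] = (k-1) E[X^{k-2}] for the standard normal.
import numpy as np
from scipy.integrate import quad

def moment(k):
    # E[X^k] = (1/sqrt(2*pi)) * integral of x^k exp(-x^2/2) over R
    val, _ = quad(lambda x: x**k * np.exp(-x**2 / 2) / np.sqrt(2 * np.pi),
                  -np.inf, np.inf)
    return val

for k in range(2, 9, 2):
    print(k, moment(k))  # 1, 3, 15, 105 = (k-1)(k-3)...3*1
```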

6 Computation of mean (21-5)

Theorem For a non-negative random variable X,

  E[X] = ∫_0^∞ Pr[X > t] dt = ∫_0^∞ Pr[X ≥ t] dt.

In other words, the area under the ccdf (complementary cdf) is the mean. (The theorem is valid even if E[X] = ∞.) (This is useful in connection with the law of large numbers when the empirical distribution is concerned!)

Geometric interpretation: Suppose Pr[X = x_i] = p_i for 1 ≤ i ≤ k with 0 ≤ x_1 < x_2 < ⋯ < x_k.

[Figure: staircase cdf jumping by p_i at each x_i; the region between the cdf and the level 1 decomposes into horizontal strips of width x_i and height p_i.]

The area of the ccdf = p_1 x_1 + p_2 x_2 + p_3 x_3 + ⋯ + p_{k−1} x_{k−1} + p_k x_k = E[X].
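
The theorem is easy to verify numerically; the sketch below (my own illustration, using an exponential random variable as an assumed example) integrates the ccdf and compares the area with the mean.

```python
# Sketch: for non-negative X, the area under Pr[X > t] equals E[X].
import numpy as np
from scipy import stats
from scipy.integrate import quad

X = stats.expon(scale=2.0)                    # assumed example: Exp with mean 2
area, _ = quad(lambda t: X.sf(t), 0, np.inf)  # sf(t) = Pr[X > t]
print(area, X.mean())                         # both ~ 2.0
```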

7 Computation of mean (21-6)

Theorem For α ≥ 0,

  ∫_{α^+}^∞ x dF_X(x) = α Pr[X > α] + ∫_α^∞ Pr[X > t] dt.

Note that it is always true that ∫_0^∞ x dF_X(x) = ∫_{0^+}^∞ x dF_X(x), but it is possible that ∫_α^∞ x dF_X(x) ≠ ∫_{α^+}^∞ x dF_X(x)!

Proof: Let Y = X·I_[X>α], where I_[X>α] = 1 if X > α, and zero otherwise. Hence, Y = 0 for X ≤ α, and Y = X for X > α. Consequently,

  ∫_{α^+}^∞ x dF_X(x) = E[Y]
    = ∫_0^∞ Pr[Y > t] dt
    = ∫_0^α Pr[Y > t] dt + ∫_α^∞ Pr[Y > t] dt
    = ∫_0^α Pr[X > α] dt + ∫_α^∞ Pr[X > t] dt
    = α Pr[X > α] + ∫_α^∞ Pr[X > t] dt.

The empirical approximation of Pr[X > t] (or Pr[X ≥ t]) is more easily obtained than dF(x). With the above result, E[X] (or E[Y]) can be established directly from Pr[X > t].

8 Inequalities regarding moments (21-7)

9 Markov's inequality (21-8)

Lemma (Markov's inequality) For any k > 0 (and implicitly α > 0),

  Pr[|X| ≥ α] ≤ (1/α^k) E[|X|^k].

Proof: The inequality below is valid for any x ∈ R:

  |x|^k ≥ α^k · 1{|x|^k ≥ α^k}.   (21.1)

Hence,

  E[|X|^k] = ∫ |x|^k dF_X(x) ≥ ∫ α^k 1{|x|^k ≥ α^k} dF_X(x) = α^k Pr[|X| ≥ α].

Equality holds if, and only if, equality in (21.1) is true with probability 1, i.e.,

  Pr[|X|^k = α^k · 1{|X|^k ≥ α^k}] = 1,

or equivalently, Pr[|X| = 0 or |X| = α] = 1.
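
A short simulation (mine, with an arbitrarily chosen exponential sample) illustrates that the empirical tail probability indeed stays below the Markov bound.

```python
# Sketch: empirical check of Pr[|X| >= a] <= E[|X|^k] / a^k with k = 2.
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=200_000)
a, k = 3.0, 2
lhs = np.mean(np.abs(x) >= a)            # empirical tail probability
rhs = np.mean(np.abs(x) ** k) / a ** k   # Markov bound
print(lhs, rhs, lhs <= rhs)
```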

10 Chebyshev-Bienaymé inequality (21-9)

Lemma (Chebyshev-Bienaymé inequality) For α > 0,

  Pr[|X − E[X]| ≥ α] ≤ (1/α²) Var[X].

Proof: By Markov's inequality with k = 2, we have:

  Pr[|X − E[X]| ≥ α] ≤ (1/α²) E[|X − E[X]|²].

Equality holds if, and only if,

  Pr[|X − E[X]| = 0] + Pr[|X − E[X]| = α] = 1,

which implies that for some p and α > 0,

  Pr[X = E[X] + α] = Pr[X = E[X] − α] = p  and  Pr[X = E[X]] = 1 − 2p.

11 Jensen's inequality

Definition (convexity) A function ϕ(x) is said to be convex over an interval (a, b) if for every x₁, x₂ ∈ (a, b) and 0 ≤ λ ≤ 1,

  ϕ(λx₁ + (1−λ)x₂) ≤ λϕ(x₁) + (1−λ)ϕ(x₂).

Furthermore, a function ϕ is said to be strictly convex if equality holds only when λ = 0 or λ = 1. (Can we replace (a, b) by a real set X?)

Definition (support line) A line y = ax + b is said to be a support line of a function ϕ(x) if, among all lines of the same slope a, it is the largest one satisfying ax + b ≤ ϕ(x) for every x.

A support line ax + b need not intersect ϕ(·); in other words, it is possible that no x₀ satisfies ax₀ + b = ϕ(x₀). However, the existence of an intersection between the function ϕ(·) and its support line is guaranteed if ϕ(·) is convex.

12 Jensen's inequality

[Figure: an example in which a function ϕ(x) never intersects its support line ax + b.]

13 Jensen's inequality

Lemma (Jensen's inequality) Suppose that the function ϕ(·) is convex on the domain X of X. (Implicitly, E[X] ∈ X.) Then

  ϕ(E[X]) ≤ E[ϕ(X)].

Proof: Let ax + b be a support line through the point (E[X], ϕ(E[X])). Thus, over the domain X of ϕ,

  ax + b ≤ ϕ(x).

(If equality holds on X in this step, then equality remains true for the subsequent steps.) By taking the expectation of both sides, we obtain

  a E[X] + b ≤ E[ϕ(X)],

but we know that a E[X] + b = ϕ(E[X]). Consequently, ϕ(E[X]) ≤ E[ϕ(X)].

Equality holds if, and only if, there exist a and b such that a E[X] + b = ϕ(E[X]) and Pr({x ∈ X : ax + b = ϕ(x)}) = 1.

14 Jensen's inequality

[Figure: the support line y = ax + b of the convex function ϕ(x), touching it at the point (E[X], ϕ(E[X])).]
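
A one-line simulation (my sketch, with ϕ(x) = e^x and normal samples as arbitrary choices) makes the inequality concrete: the exponential of the mean stays below the mean of the exponentials.

```python
# Sketch: phi(E[X]) <= E[phi(X)] for the convex phi(x) = exp(x).
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=0.5, scale=1.0, size=100_000)
print(np.exp(x.mean()), np.exp(x).mean())  # first value is smaller
```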

15 Hölder's inequality

Lemma (Hölder's inequality) For p > 1, q > 1 and 1/p + 1/q = 1,

  E[|XY|] ≤ E^{1/p}[|X|^p] · E^{1/q}[|Y|^q].

Proof: The inequality is trivially valid if E^{1/p}[|X|^p] E^{1/q}[|Y|^q] = 0. Without loss of generality, assume E^{1/p}[|X|^p] E^{1/q}[|Y|^q] > 0.

exp{x} is a convex function in x. Hence, by Jensen's inequality,

  exp{(1/p)s + (1/q)t} ≤ (1/p) exp{s} + (1/q) exp{t}.

Since e^x is strictly convex, equality holds iff s = t.

Let a = exp{s/p} and b = exp{t/q}. Then the above inequality becomes

  ab ≤ (1/p) a^p + (1/q) b^q,

whose validity extends from positive a and b to all non-negative a and b. Equality holds iff a^p = b^q.

By letting a = |x| / E^{1/p}[|X|^p] and b = |y| / E^{1/q}[|Y|^q], we obtain:

  |xy| / (E^{1/p}[|X|^p] E^{1/q}[|Y|^q]) ≤ (1/p) |x|^p / E[|X|^p] + (1/q) |y|^q / E[|Y|^q].

Equality holds if, and only if,

  Pr[ |X|^p / E[|X|^p] = |Y|^q / E[|Y|^q] ] = 1.

16 Hölder's inequality

Taking the expectation of both sides yields:

  E[|XY|] / (E^{1/p}[|X|^p] E^{1/q}[|Y|^q]) ≤ (1/p) E[|X|^p]/E[|X|^p] + (1/q) E[|Y|^q]/E[|Y|^q] = 1/p + 1/q = 1.

Lemma (Hölder's inequality) For p > 1, q > 1 and 1/p + 1/q = 1,

  E[|XY|] ≤ E^{1/p}[|X|^p] · E^{1/q}[|Y|^q].

Equality holds if, and only if,

  Pr[ |X|^p / E[|X|^p] = |Y|^q / E[|Y|^q] ] = 1, or Pr[X = 0] = 1, or Pr[Y = 0] = 1.

Example. Take p = q = 2 and the joint pmf

            Y = 0   Y = 1
  X = 0     p_00    p_01
  X = 1     p_10    p_11

Then

  E[|XY|] = p_11 = p_11^{1/2} p_11^{1/2} ≤ (p_10 + p_11)^{1/2} (p_01 + p_11)^{1/2} = E^{1/2}[|X|²] E^{1/2}[|Y|²],

with equality holding if, and only if, p_10 = p_01 = 0.
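
The general inequality can also be spot-checked by simulation; the sketch below (mine, with arbitrarily chosen p = 3 and normal/uniform samples) compares both sides.

```python
# Sketch: E|XY| <= E^{1/p}|X|^p * E^{1/q}|Y|^q for conjugate p, q.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100_000)
y = rng.uniform(-1, 1, size=100_000)
p = 3.0
q = p / (p - 1)                          # conjugate exponent: 1/p + 1/q = 1
lhs = np.mean(np.abs(x * y))
rhs = np.mean(np.abs(x)**p)**(1/p) * np.mean(np.abs(y)**q)**(1/q)
print(lhs, rhs, lhs <= rhs)
```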

17 Cauchy-Schwarz inequality

Suppose E[|X|] > 0 and E[|Y|] > 0.

Lemma (Hölder's inequality) For p > 1, q > 1 and 1/p + 1/q = 1,

  E[|XY|] ≤ E^{1/p}[|X|^p] · E^{1/q}[|Y|^q].

Equality holds if, and only if, there exists a constant a such that Pr[|X|^p = a|Y|^q] = 1.

Lemma (Cauchy-Schwarz inequality)

  E[|XY|] ≤ E^{1/2}[X²] · E^{1/2}[Y²].

Equality holds if, and only if, there exists a constant a such that Pr[X² = aY²] = 1.

Proof: A special case of Hölder's inequality with p = q = 2.

18 Lyapounov's inequality

Lemma (Lyapounov's inequality) For 0 < α < β,

  E^{1/α}[|Z|^α] ≤ E^{1/β}[|Z|^β].

Equality holds if, and only if, Pr[|Z| = a] = 1 for some constant a.

Proof: Letting X = |Z|^α, Y = 1, p = β/α and q = β/(β−α) in Hölder's inequality yields:

  E[|Z|^α] ≤ E^{α/β}[(|Z|^α)^{β/α}] · E^{(β−α)/β}[1^{β/(β−α)}] = E^{α/β}[|Z|^β].

Equality holds if, and only if,

  Pr[(|Z|^α)^{β/α} = a] = Pr[|Z|^β = a] = Pr[|Z| = a^{1/β}] = 1 for some a.

Notably, in the statement of the lemma, β is strictly larger than α. It is certain that if α = β, the inequality automatically becomes an equality.
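
Equivalently, the L^r "norm" E^{1/r}[|Z|^r] is non-decreasing in r; a quick sample-based check (my sketch, with lognormal samples as an assumed example) follows.

```python
# Sketch: E^{1/r}[|Z|^r] should be non-decreasing in r (Lyapounov).
import numpy as np

rng = np.random.default_rng(3)
z = rng.lognormal(size=100_000)
print([np.mean(np.abs(z)**r)**(1/r) for r in (0.5, 1, 2, 3)])  # increasing
```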

19 Joint integrals

Lemma For a B^k/B-measurable (or simply B^k-measurable) function g and a k-dimensional random vector X,

  E[g(X)] = ∫_{R^k} g(x^k) dF_X(x^k),

if one of E[(g(X))^+] and E[(g(X))^−] is finite.

Definition (covariance) The covariance of two random vectors X = (X₁, ..., X_k) and Y = (Y₁, ..., Y_l) is

  Cov[X, Y] = E[(X − E[X])(Y − E[Y])^T],

i.e., the k×l matrix whose (i, j) entry is

  E[(X_i − E[X_i])(Y_j − E[Y_j])],

where T represents the vector transpose operation, provided that one of E[((X_i − E[X_i])(Y_j − E[Y_j]))^+] and E[((X_i − E[X_i])(Y_j − E[Y_j]))^−] is finite for every i, j.

20 Joint integrals

Definition (uncorrelated) X and Y are uncorrelated if Cov[X, Y] = 0_{k×l}.

Definition (independence) X and Y are independent if

  Pr[(X₁ ≤ x₁, ..., X_k ≤ x_k) ∩ (Y₁ ≤ y₁, ..., Y_l ≤ y_l)]
    = Pr[X₁ ≤ x₁, ..., X_k ≤ x_k] · Pr[Y₁ ≤ y₁, ..., Y_l ≤ y_l].

Lemma (integrability of product) For independent X₁, X₂, ..., X_k, if each of the X_i is integrable, so is X₁X₂⋯X_k, and

  E[X₁X₂⋯X_k] = E[X₁]E[X₂]⋯E[X_k].

Lemma (sum of variances for pairwise-independent samples) If X₁, X₂, ..., X_k are pairwise independent and square-integrable,

  Var[X₁ + X₂ + ⋯ + X_k] = Var[X₁] + Var[X₂] + ⋯ + Var[X_k].

Notably, pairwise independence does not imply complete independence.

21 Joint integrals

Example (only pairwise independence) Toss a fair coin twice, and assume the tosses are independent. Define

  X = 1 if a head appears on the first toss, and 0 otherwise;
  Y = 1 if a head appears on the second toss, and 0 otherwise;
  Z = 1 if exactly one head and one tail appear on the two tosses, and 0 otherwise.

Then Pr[X = 1, Y = 1, Z = 1] = 0, but

  Pr[X = 1] Pr[Y = 1] Pr[Z = 1] = (1/2)(1/2)(1/2) = 1/8.

So X, Y and Z are not independent. Obviously, X ⊥ Y. In addition,

  Pr[X = 1 | Z = 1] = Pr[X = 1, Z = 1] / Pr[Z = 1] = (1/4)/(1/2) = 1/2 = Pr[X = 1]
  Pr[X = 1 | Z = 0] = Pr[X = 1, Z = 0] / Pr[Z = 0] = (1/4)/(1/2) = 1/2 = Pr[X = 1]

so X ⊥ Z. One can similarly show (or argue by symmetry) that Y ⊥ Z.

22 Joint integrals

Example (cont'd) The four equally likely outcomes give:

  head head → X + Y + Z = 2
  head tail → X + Y + Z = 2
  tail head → X + Y + Z = 2
  tail tail → X + Y + Z = 0

Hence

  Var[X + Y + Z] = (3/4)(2 − 3/2)² + (1/4)(0 − 3/2)² = 3/4.

This is equal to:

  Var[X] + Var[Y] + Var[Z] = 1/4 + 1/4 + 1/4 = 3/4.

This result matches the fact that "the variance of a sum equals the sum of variances" holds for pairwise-independent random variables.
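
The whole example is easy to replay by simulation (my sketch below); it also re-confirms that the triple (X, Y, Z) is not jointly independent.

```python
# Sketch: simulate the two coin tosses; Z indicates "exactly one head".
import numpy as np

rng = np.random.default_rng(4)
x = rng.integers(0, 2, size=200_000)
y = rng.integers(0, 2, size=200_000)
z = (x != y).astype(int)
print(np.var(x + y + z))                        # ~ 3/4
print(np.var(x) + np.var(y) + np.var(z))        # ~ 3/4
print(np.mean((x == 1) & (y == 1) & (z == 1)))  # exactly 0, not 1/8
```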

23 Moment generating function

Definition (moment generating function) The moment generating function of X is defined as:

  M_X(t) = E[e^{tX}] = ∫ e^{tx} dF_X(x),

for all t for which this is finite.

If M_X(t) is defined (i.e., finite) throughout an interval (−t₀, t₀), where t₀ > 0, then

  M_X(t) = Σ_{k=0}^∞ (t^k / k!) E[X^k].

In other words, M_X(t) has a Taylor expansion about t = 0 with positive radius of convergence if it is defined in some (−t₀, t₀).

In case M_X(t) has a Taylor expansion about t = 0 with positive radius of convergence, the moments of X can be computed from the derivatives of M_X(t) through:

  M^{(k)}(0) = E[X^k].

If M_{X_i}(t) is defined throughout an interval (−t₀, t₀) for each i, and X₁, X₂, ..., X_n are independent, then the moment generating function of X₁ + ⋯ + X_n is also defined on (−t₀, t₀), and is equal to Π_{i=1}^n M_{X_i}(t).
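
The moment-from-derivative relation can be checked symbolically; the sketch below (mine) uses the exponential distribution, whose MGF α/(α − t) appears on a later slide, and recovers E[X^k] = k!/α^k.

```python
# Sketch: differentiate an MGF at t = 0 to recover moments.
import sympy as sp

t = sp.symbols('t')
alpha = sp.Rational(2)
M = alpha / (alpha - t)                  # MGF of Exponential(alpha)
for k in range(1, 5):
    # k-th derivative at 0 should equal k! / alpha^k
    print(k, sp.diff(M, t, k).subs(t, 0), sp.factorial(k) / alpha**k)
```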

24 Moment generating function

Example (a random variable whose moment generating function is defined only at zero) The pdf of the Cauchy distribution is

  f(x) = 1 / (π(1 + x²)).

For t ≠ 0, the integral

  ∫_{−∞}^∞ e^{tx} / (π(1 + x²)) dx = ∫_0^∞ (e^{tx} + e^{−tx}) / (π(1 + x²)) dx
    ≥ ∫_0^∞ e^{|t|x} / (π(1 + x²)) dx
    ≥ ∫_0^∞ |t| x / (π(1 + x²)) dx      (by e^x ≥ 1 + x ≥ x for x ≥ 0)
    ≥ ∫_1^∞ |t| x / (π(1 + x²)) dx
    ≥ ∫_1^∞ |t| x / (π(x² + x²)) dx = (|t| / (2π)) ∫_1^∞ dx/x = ∞.

So the moment generating function is not defined for any t ≠ 0.

The Cauchy distribution is indeed the Student's t-distribution with 1 degree of freedom.

25 Moment generating function

Example (Student's t-distribution with n degrees of freedom) This distribution has moments of orders up to n − 1, but any higher moments either do not exist or are infinite. Some books regard infinite moments as non-existent, so they write: "This distribution has moments of orders up to n − 1, but no higher moments exist."

Let X, Y₁, Y₂, ..., Y_n be i.i.d. with standard normal marginal distribution. Then

  T_n = X√n / √(Y₁² + ⋯ + Y_n²)

is called the Student's t-distribution (on R) with n degrees of freedom.

The numerator X√n has a normal density with mean 0 and variance n. χ² = Y₁² + ⋯ + Y_n² has a chi-square distribution with n degrees of freedom, and has density f_{1/2, n/2}(y), where

  f_{α,ν}(x) = (α^ν / Γ(ν)) x^{ν−1} e^{−αx} on [0, ∞)

is the gamma density (sometimes named the Erlangian density when ν is a positive integer) with parameters ν > 0 and α > 0, and Γ(t) = ∫_0^∞ x^{t−1} e^{−x} dx is the gamma function.

26 Moment generating function

(Closure under convolution for the gamma density) f_{α,μ} * f_{α,ν} = f_{α,μ+ν}.

Proof:

  (f_{α,μ} * f_{α,ν})(x) = ∫_0^x f_{α,μ}(x − y) f_{α,ν}(y) dy
    = (α^{μ+ν} / (Γ(μ)Γ(ν))) ∫_0^x (x − y)^{μ−1} e^{−α(x−y)} y^{ν−1} e^{−αy} dy
    = (α^{μ+ν} / (Γ(μ)Γ(ν))) e^{−αx} ∫_0^x (x − y)^{μ−1} y^{ν−1} dy
    = (α^{μ+ν} / (Γ(μ)Γ(ν))) e^{−αx} ∫_0^1 (x − xt)^{μ−1} (xt)^{ν−1} x dt      (by y = xt)
    = (α^{μ+ν} / Γ(μ+ν)) x^{μ+ν−1} e^{−αx} · (Γ(μ+ν) / (Γ(μ)Γ(ν))) ∫_0^1 (1 − t)^{μ−1} t^{ν−1} dt
    = f_{α,μ+ν}(x),

where the fact that the last factor equals one follows from the beta integral: for μ > 0 and ν > 0,

  B(μ, ν) = ∫_0^1 (1 − t)^{μ−1} t^{ν−1} dt = Γ(μ)Γ(ν) / Γ(μ+ν)

is the so-called beta integral.
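
A sampling check of the closure property (my sketch, with arbitrary parameters) compares the first two moments of a sum of independent gamma variates with those of f_{α,μ+ν}.

```python
# Sketch: Gamma(alpha, mu) + Gamma(alpha, nu) should be Gamma(alpha, mu+nu).
import numpy as np

rng = np.random.default_rng(5)
alpha, mu, nu = 2.0, 1.5, 2.5
s = (rng.gamma(shape=mu, scale=1/alpha, size=200_000)
     + rng.gamma(shape=nu, scale=1/alpha, size=200_000))
print(s.mean(), (mu + nu) / alpha)       # mean of f_{alpha, mu+nu}
print(s.var(), (mu + nu) / alpha**2)     # its variance
```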

27 Moment generating function

That the pdf of the chi-square distribution with 1 degree of freedom equals f_{1/2,1/2}(y) can be derived as follows:

  Pr[Y₁² ≤ y] = Pr[−√y ≤ Y₁ ≤ √y] = Φ(√y) − Φ(−√y),

where Φ(·) represents the unit Gaussian cdf. So the derivative of the above equation is

  y^{−1/2} φ(y^{1/2}) = (1/Γ(1/2)) (1/2)^{1/2} y^{(1/2)−1} e^{−(1/2)y} for y ≥ 0.

Then, by closure under convolution for gamma densities, the distribution of the chi-square distribution with n degrees of freedom can be obtained.

In statistical mechanics, Y₁² + Y₂² + Y₃² appears as the square of the speed of a particle. So the density of the particle speed (on [0, ∞)) is equal to

  v(x) = 2x f_{1/2,3/2}(x²),

which is called the Maxwell density.

By letting Ȳ = √(Y₁² + ⋯ + Y_n²),

  Pr[T_n ≤ t] = Pr[X√n / Ȳ ≤ t] = ∫_0^∞ Pr[X ≤ yt/√n] dF_Ȳ(y) = ∫_0^∞ Φ(yt/√n) (2y f_{1/2,n/2}(y²)) dy.

28 Moment generating function

So the density of T_n is:

  f_{T_n}(t) = ∫_0^∞ (y/√n) φ(yt/√n) (2y f_{1/2,n/2}(y²)) dy
    = ∫_0^∞ (y e^{−y²t²/(2n)} / √(2πn)) · (1 / (2^{n/2−1} Γ(n/2))) y^{n−1} e^{−y²/2} dy
    = (1 / (2^{(n−1)/2} Γ(n/2) √(πn))) ∫_0^∞ y^n e^{−y²(1+t²/n)/2} dy
    = (1 / (2^{(n−1)/2} Γ(n/2) √(πn))) ∫_0^∞ (2^{n/2} s^{n/2} / (1+t²/n)^{n/2}) e^{−s} (2^{−1/2} s^{−1/2} / (1+t²/n)^{1/2}) ds
        (by y = √(2s) (1+t²/n)^{−1/2})
    = (1 / (Γ(n/2) √(πn))) (1+t²/n)^{−(n+1)/2} ∫_0^∞ s^{(n+1)/2 − 1} e^{−s} ds
    = C_n / (1+t²/n)^{(n+1)/2},

where C_n = Γ((n+1)/2) / (Γ(n/2) √(πn)).
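
The closed form can be compared against a standard library implementation (my sketch; scipy's parametrization of the Student t is assumed to match the one derived here).

```python
# Sketch: C_n (1 + t^2/n)^{-(n+1)/2} versus scipy's Student-t pdf.
import numpy as np
from scipy import stats
from scipy.special import gammaln

n = 5
t = np.linspace(-4, 4, 9)
Cn = np.exp(gammaln((n + 1) / 2) - gammaln(n / 2)) / np.sqrt(np.pi * n)
ours = Cn * (1 + t**2 / n) ** (-(n + 1) / 2)
print(np.max(np.abs(ours - stats.t.pdf(t, df=n))))  # ~ 0
```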

29 Moment generating function

For t ≠ 0, the integral

  ∫_{−∞}^∞ e^{tx} C_n / (1 + x²/n)^{(n+1)/2} dx
    ≥ C_n ∫_0^∞ e^{|t|x} / (1 + x²/n)^{(n+1)/2} dx
    ≥ (C_n |t|^n / n!) ∫_0^∞ x^n / (1 + x²/n)^{(n+1)/2} dx
        (by e^x ≥ 1 + x + x²/2 + ⋯ + x^n/n! ≥ x^n/n! for x ≥ 0)
    ≥ (C_n |t|^n / n!) ∫_{√n}^∞ x^n / (x²/n + x²/n)^{(n+1)/2} dx
    = (C_n |t|^n n^{(n+1)/2} / (n! 2^{(n+1)/2})) ∫_{√n}^∞ dx/x = ∞.

So the moment generating function is not defined for any t ≠ 0.

30 Moment generating function

However, for r < n,

  E[|T_n|^r] = ∫_{−∞}^∞ |t|^r C_n / (1 + t²/n)^{(n+1)/2} dt
    = 2 C_n ∫_0^∞ t^r / (1 + t²/n)^{(n+1)/2} dt
    = 2 C_n · n^{(r+1)/2} Γ((n−r)/2) Γ((r+1)/2) / (2 Γ((n+1)/2))
    = n^{r/2} Γ((n−r)/2) Γ((r+1)/2) / (Γ(n/2) √π) < ∞.

But

  E[(T_n^n)^+] = 2C_n ∫_0^∞ t^n / (1 + t²/n)^{(n+1)/2} dt = ∞, if n is even;
  E[(T_n^n)^+] = C_n ∫_0^∞ t^n / (1 + t²/n)^{(n+1)/2} dt = ∞, if n is odd,

and

  E[(T_n^n)^−] = 0, if n is even;
  E[(T_n^n)^−] = C_n ∫_0^∞ t^n / (1 + t²/n)^{(n+1)/2} dt = ∞, if n is odd.

This is a good example for which the moments, even if some of them exist (and are finite), cannot be obtained through the moment generating function.

31 Some seldom-known densities

Gamma distribution: f_{α,ν}(x) (on [0, ∞)) has mean ν/α and variance ν/α².

Snedecor's distribution or F-distribution: It is the distribution of

  Z = ((X₁² + ⋯ + X_m²)/m) / ((Y₁² + ⋯ + Y_n²)/n),

where the X_i and Y_j are all independent standard normal. Its pdf with positive integer parameters m and n is

  (m^{m/2} / n^{m/2}) (Γ((m+n)/2) / (Γ(m/2)Γ(n/2))) z^{m/2−1} (1 + mz/n)^{−(m+n)/2} on z ≥ 0.

Bilateral exponential distribution: It is the distribution of Z = X₁ − X₂, where X₁ and X₂ are independent and have common exponential density αe^{−αx} on x ≥ 0. Its pdf is (1/2) α e^{−α|x|} on R. It has zero mean and variance 2/α².
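
The F-distribution construction from normals is easy to validate by simulation (my sketch below; a Kolmogorov-Smirnov test against scipy's F cdf is used as the assumed yardstick).

```python
# Sketch: build Z = (chi2_m/m)/(chi2_n/n) from standard normals and
# compare with the F(m, n) distribution via a KS test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
m, n = 4, 7
x = rng.normal(size=(50_000, m))
y = rng.normal(size=(50_000, n))
z = ((x**2).sum(axis=1) / m) / ((y**2).sum(axis=1) / n)
print(stats.kstest(z, stats.f(dfn=m, dfd=n).cdf))  # large p-value expected
```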

32 Some seldom-known densities

Beta distribution: Its pdf with parameters μ > 0 and ν > 0 is

  β_{μ,ν}(x) = (Γ(μ+ν) / (Γ(μ)Γ(ν))) (1 − x)^{μ−1} x^{ν−1} on 0 < x < 1.

Its mean is ν/(μ+ν), and its variance is μν/[(μ+ν)²(μ+ν+1)].

Arc sine distribution: Its pdf is β_{1/2,1/2}(x) = 1/(π√(x(1−x))) on 0 < x < 1. Its cdf is given by (2/π) sin^{−1}(√x) for 0 < x < 1.

Generalized arc sine distribution: It is a beta distribution with μ + ν = 1.

Pareto distribution: It is the distribution of Z = X^{−1} − 1, where X is beta distributed with parameters μ > 0 and ν > 0. Its density is

  (Γ(μ+ν) / (Γ(μ)Γ(ν))) z^{μ−1} / (1 + z)^{μ+ν} on 0 < z < ∞.

This is often used to model incoming traffic with a heavy tail, since the density decays as z^{−(ν+1)}.

33 Some seldom-known densities

Cauchy distribution: It is the distribution of Z = αX/|Y|, where X and Y are independent standard normal. Its pdf with parameter α > 0 is

  γ_α(x) = (1/π) · α/(x² + α²) on R.

Its cdf is 1/2 + (1/π) tan^{−1}(x/α).

It is also closed under convolution, i.e., γ_s * γ_u = γ_{s+u}. It is interesting that the Cauchy distribution is also closed under scaling: aX has density γ_{aα}(·) if X has density γ_α(·) (for a > 0). Hence, we can easily obtain the density of a₁X₁ + a₂X₂ + ⋯ + a_nX_n as γ_{a₁α₁ + a₂α₂ + ⋯ + a_nα_n}(·), if the X_i are independent and X_i has density γ_{α_i}(·).
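
Closure under scaling and convolution together say that weighted sums of independent Cauchy variables stay Cauchy; the sketch below (mine, with arbitrary weights and scales) tests this with a KS statistic.

```python
# Sketch: a1*X1 + a2*X2 for independent Cauchy(alpha_i) should be
# Cauchy with scale a1*alpha1 + a2*alpha2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a1, a2, al1, al2 = 2.0, 3.0, 1.0, 0.5
x1 = stats.cauchy(scale=al1).rvs(size=50_000, random_state=rng)
x2 = stats.cauchy(scale=al2).rvs(size=50_000, random_state=rng)
z = a1 * x1 + a2 * x2
print(stats.kstest(z, stats.cauchy(scale=a1*al1 + a2*al2).cdf))
```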

34 Some seldom-known densities

One-sided stable distribution of index 1/2: It is the distribution of Z = α²/X², where X has a standard normal distribution. Its pdf is

  f_α(z) = (α / √(2π z³)) e^{−α²/(2z)} on z ≥ 0.

It is also closed under convolution, namely f_α * f_β = f_{α+β}. If Z₁, Z₂, ..., Z_n are i.i.d. with marginal density f_α(·), then (Z₁ + ⋯ + Z_n)/n² also has density f_α(·).

Weibull distribution: Its pdf and cdf are respectively given by

  (α/β) (y/β)^{α−1} e^{−(y/β)^α}  and  1 − e^{−(y/β)^α},

with α > 0, β > 0 and support (0, ∞). By defining X = 1/Y, where Y has the above distribution, we derive the pdf and cdf of X as respectively:

  αβ (βx)^{−α−1} e^{−1/(βx)^α}  and  e^{−1/(βx)^α}.

This is useful for order statistics.

35 Some seldom-known densities

For example, let X_(n) denote the largest among the i.i.d. X₁, X₂, ..., X_n. Then

  Pr[X_(n)/n ≤ x] → e^{−(α/π)(1/x)} as n → ∞, if the X_j are Cauchy distributed with parameter α.

Or

  Pr[X_(n)/n² ≤ x] → e^{−(α√(2/π))(1/x^{1/2})} as n → ∞, if the X_j follow a one-sided stable distribution of index 1/2 with parameter α.

Logistic distribution: Its cdf with parameters α > 0 and β ∈ R is

  1 / (1 + e^{−αx−β}) on R.

36 Distributions with nice moment generating functions

Gaussian distribution:

  M(t) = (1/√(2πσ²)) ∫_{−∞}^∞ e^{tx} e^{−(x−m)²/(2σ²)} dx = e^{mt + σ²t²/2},

which exists for all t ∈ R.

Exponential distribution:

  M(t) = ∫_0^∞ e^{tx} α e^{−αx} dx = α/(α − t) = Σ_{k=0}^∞ (k! α^{−k}) (t^k / k!),

which is defined for t < α. So the kth moment is k! α^{−k}.

Poisson distribution:

  M(t) = Σ_{r=0}^∞ e^{rt} e^{−λ} λ^r / r! = e^{λ(e^t − 1)},

which exists for all t ∈ R.
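
All three formulas can be verified with sample averages of e^{tX} at any t in the region of convergence (my sketch, with arbitrary parameter choices):

```python
# Sketch: empirical E[e^{tX}] versus the closed-form MGFs above.
import numpy as np

rng = np.random.default_rng(8)
t = 0.3                                   # inside all regions of convergence
m, sigma, alpha, lam = 1.0, 2.0, 1.0, 4.0
g = rng.normal(m, sigma, 400_000)
e = rng.exponential(1 / alpha, 400_000)
p = rng.poisson(lam, 400_000)
print(np.mean(np.exp(t * g)), np.exp(m * t + sigma**2 * t**2 / 2))
print(np.mean(np.exp(t * e)), alpha / (alpha - t))
print(np.mean(np.exp(t * p)), np.exp(lam * (np.exp(t) - 1)))
```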
