Chp 4. Expectation and Variance

1 Expectation

In this chapter, we introduce two quantities that directly reflect the properties of a random variable or vector: the Expectation and the Variance.

Definition 1.1 Assume a random variable $X$ has probability distribution $p_j = P(X = x_j)$, $j = 0, 1, \ldots$. If $\sum_j |x_j| p_j < \infty$, then
$$E(X) = \sum_j x_j p_j \qquad (1)$$
is the Expectation of the random variable $X$.

Definition 1.2 Assume the density function of a continuous random variable $X$ is $f(x)$. If $\int_{-\infty}^{\infty} |x| f(x)\,dx < \infty$ (absolute integrability), then
$$E(X) = \int_{-\infty}^{\infty} x f(x)\,dx \qquad (2)$$
is the Expectation of the random variable $X$.

Remark
1. The expectation is the weighted mean of the values that the random variable may take.
2. We often write the expectation of $X$ as $EX$ instead of $E(X)$, and $\mu$ is often used for $EX$.
3. We need $\sum_j |x_j| p_j < \infty$ to make sure that $\sum_j x_j p_j$ is well defined. For example, if $P(X_n = n^2) = \frac{1}{n}$ and $P(X_n = \sqrt{n}) = 1 - \frac{1}{n}$, then $EX_n = n + \sqrt{n}\,(1 - \frac{1}{n})$ goes to infinity as $n$ goes to infinity, even though the mass $1 - \frac{1}{n}$ at the moderate value $\sqrt{n}$ tends to $1$: the expectation is dominated by a rare but very large value. However, if every $x_j \ge 0$, then $\sum_j x_j p_j = \infty$ is not meaningless; we simply write $EX = \infty$. In general, we declare that $EX$ exists whenever $EX = EX^+ - EX^-$ makes sense, i.e., $EX^+ < \infty$ or $EX^- < \infty$, where $x^+ = \max\{x, 0\}$ is the positive part and $x^- = \max\{-x, 0\}$ is the negative part of $x$.

The expectations of several distributions and of the indicator function:

1. Bernoulli Distribution $B(1, p)$: $EX = p$.
Since $P(X = 1) = p$ and $P(X = 0) = 1 - p$, we have $EX = 1 \cdot P(X = 1) + 0 \cdot P(X = 0) = p$.

2. Binomial Distribution $B(n, p)$: $EX = np$.
Since $p_j = P(X = j) = C_n^j p^j (1-p)^{n-j}$, $0 \le j \le n$, we have
$$EX = \sum_{j=0}^{n} j C_n^j p^j (1-p)^{n-j} = np \sum_{j=1}^{n} C_{n-1}^{j-1} p^{j-1} (1-p)^{n-j} \qquad (j C_n^j = n C_{n-1}^{j-1})$$
$$= np \sum_{k=0}^{n-1} C_{n-1}^{k} p^{k} (1-p)^{n-1-k} \qquad (k = j - 1)$$
$$= np\,(p + 1 - p)^{n-1} = np.$$

3. Poisson Distribution $P(\lambda)$: $EX = \lambda$.
Since $P(X = k) = \frac{\lambda^k}{k!} e^{-\lambda}$, $k = 0, 1, \ldots$, we have
$$EX = \sum_{k=0}^{\infty} k \frac{\lambda^k}{k!} e^{-\lambda} = \lambda \sum_{k=1}^{\infty} \frac{\lambda^{k-1}}{(k-1)!} e^{-\lambda} = \lambda.$$

4. Geometric Distribution: $EX = \frac{1}{p}$.
Since $P(X = j) = p(1-p)^{j-1}$, $j = 1, 2, \ldots$, with $q = 1 - p$ we have
$$EX = \sum_{j=1}^{\infty} j p q^{j-1} = p \frac{d}{dq} \sum_{j=1}^{\infty} q^j = p \frac{d}{dq} \left( \frac{q}{1-q} \right) = p\,\frac{1}{(1-q)^2} = \frac{1}{p}.$$

5. Uniform Distribution $U(a, b)$: $EX = \frac{a+b}{2}$.
$$EX = \int_{-\infty}^{\infty} x f(x)\,dx = \int_a^b x\,\frac{1}{b-a}\,dx = \frac{a+b}{2}.$$

6. Exponential Distribution $\varepsilon(\lambda)$: $EX = \frac{1}{\lambda}$.
$$EX = \int_{-\infty}^{\infty} x f(x)\,dx = \int_0^{\infty} x \lambda e^{-\lambda x}\,dx = \frac{1}{\lambda} \int_0^{\infty} t e^{-t}\,dt = \frac{1}{\lambda} \qquad (t = \lambda x).$$

7. Normal Distribution $N(\mu, \sigma^2)$: $EX = \mu$.
$$EX = \int_{-\infty}^{\infty} x f(x)\,dx = \int_{-\infty}^{\infty} \mu f(x)\,dx + \int_{-\infty}^{\infty} (x - \mu) f(x)\,dx$$
$$= \mu + \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} (x - \mu)\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx = \mu + \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} t\,e^{-\frac{t^2}{2\sigma^2}}\,dt = \mu,$$
where the last equality holds because $e^{-\frac{t^2}{2\sigma^2}}$ is symmetric in $t$, so the integrand $t\,e^{-\frac{t^2}{2\sigma^2}}$ is odd and its integral vanishes.

8. Gamma Distribution $\Gamma(\alpha, \beta)$: $EX = \frac{\alpha}{\beta}$.
$$EX = \int_{-\infty}^{\infty} x f(x)\,dx = \int_0^{\infty} x\,\frac{\beta^\alpha x^{\alpha-1}}{\Gamma(\alpha)}\,e^{-\beta x}\,dx = \frac{1}{\Gamma(\alpha)\beta} \int_0^{\infty} t^{\alpha} e^{-t}\,dt \qquad (t = \beta x)$$
$$= \frac{\Gamma(\alpha+1)}{\Gamma(\alpha)\beta} = \frac{\alpha}{\beta},$$
where the last equality holds because $\Gamma(\alpha+1) = \alpha\,\Gamma(\alpha)$.

9. If $EX$ exists and the density of $X$ satisfies $f(\mu + x) = f(\mu - x)$, then $EX = \mu$.
Answer:
$$EX = \int_{-\infty}^{\infty} x f(x)\,dx = \int_{-\infty}^{\infty} \mu f(x)\,dx + \int_{-\infty}^{\infty} (x - \mu)\,f((x - \mu) + \mu)\,dx = \mu + \int_{-\infty}^{\infty} t\,f(t + \mu)\,dt = \mu,$$
since $t f(t + \mu) = -(-t) f(-t + \mu)$, i.e., the integrand is an odd function of $t$.

10. (Important) Assume $A$ is an event and $I_A$ is the indicator function of $A$; then $EI_A = P(I_A = 1) = P(A)$.

Several of the expectations above are verified by simulation in the sketch following this list.
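The sketch below is a minimal numerical check, assuming NumPy is available; all parameter values and the helper `expectation` are illustrative choices, not part of the notes. It evaluates Definition 1.1 for a finite pmf and compares Monte Carlo means against the closed forms above.

```python
import numpy as np

def expectation(xs, ps):
    """E(X) = sum_j x_j p_j for a finite pmf (Definition 1.1)."""
    xs, ps = np.asarray(xs, float), np.asarray(ps, float)
    assert np.isclose(ps.sum(), 1.0) and (ps >= 0).all()
    return float(xs @ ps)

p = 0.3
print(expectation([0, 1], [1 - p, p]))                # Bernoulli: EX = p

rng = np.random.default_rng(0)
n, lam, a, b = 10**6, 2.5, -1.0, 3.0
print(rng.binomial(10, p, n).mean(), 10 * p)          # binomial: np
print(rng.poisson(lam, n).mean(), lam)                # Poisson: lambda
print(rng.geometric(p, n).mean(), 1 / p)              # geometric: 1/p
print(rng.uniform(a, b, n).mean(), (a + b) / 2)       # uniform: (a+b)/2
print(rng.exponential(1 / lam, n).mean(), 1 / lam)    # exponential: 1/lambda
print(rng.gamma(2.0, 1 / lam, n).mean(), 2.0 / lam)   # Gamma(alpha, beta): alpha/beta
                                                      # (NumPy's scale = 1/beta)
```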

2 Properties of Expectation

2.1 Expectation of Functions of Random Vectors

Theorem 2.1 Assume $X = (X_1, X_2, \ldots, X_n)$ is a random vector and $x = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^n$.
(1) If $X$ has the joint density $f(x) = f(x_1, x_2, \ldots, x_n)$, and the real function $g(x)$ satisfies $\int_{\mathbb{R}^n} |g(x)| f(x)\,dx_1 dx_2 \cdots dx_n < \infty$, then the expectation of $Y = g(X)$ is
$$EY = \int_{\mathbb{R}^n} g(x) f(x)\,dx_1 dx_2 \cdots dx_n; \qquad (3)$$
(2) If $X$ is a discrete random vector with probability distribution $p_{j_1, j_2, \ldots, j_n} = P(X = (x_1(j_1), x_2(j_2), \ldots, x_n(j_n)))$, $j_1, j_2, \ldots, j_n \ge 1$, and the real function $h(x)$ satisfies $\sum_{j_1, j_2, \ldots, j_n} |h(x_1(j_1), x_2(j_2), \ldots, x_n(j_n))|\, p_{j_1, j_2, \ldots, j_n} < \infty$, then the expectation of $Y = h(X)$ is
$$EY = \sum_{j_1, j_2, \ldots, j_n} h(x_1(j_1), x_2(j_2), \ldots, x_n(j_n))\, p_{j_1, j_2, \ldots, j_n}. \qquad (4)$$

Remark 2.1 We can get the expectation of a function of a random vector directly, instead of first deriving the distribution of the function and then computing its expectation.

Now, we give some important examples.

Example 2.1 Assume $X \sim N(0, 1)$ and $\alpha$ is a fixed number; calculate $E|X|^\alpha$.
Answer:
$$E|X|^\alpha = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} |x|^\alpha e^{-x^2/2}\,dx = \frac{2}{\sqrt{2\pi}} \int_0^{\infty} x^\alpha e^{-x^2/2}\,dx = \frac{(\sqrt{2})^\alpha}{\sqrt{\pi}} \int_0^{\infty} t^{\alpha/2 - 1/2}\,e^{-t}\,dt \qquad \left(x = \sqrt{2t},\ dx = \frac{dt}{\sqrt{2t}}\right).$$
Therefore, when $\alpha > -1$,
$$E|X|^\alpha = \frac{(\sqrt{2})^\alpha}{\sqrt{\pi}}\,\Gamma\!\left(\frac{1+\alpha}{2}\right), \qquad \text{as } \Gamma(\alpha) = \int_0^\infty t^{\alpha-1} e^{-t}\,dt.$$
When $\alpha \le -1$, $E|X|^\alpha = \infty$, as $|x|^\alpha$ is then not integrable near $0$. Specially, when $\alpha = 2$,
$$EX^2 = \frac{2}{\sqrt{\pi}}\,\Gamma\!\left(\frac{3}{2}\right) = \frac{2}{\sqrt{\pi}} \cdot \frac{1}{2}\,\Gamma\!\left(\frac{1}{2}\right) = 1,$$
and when $\alpha = 1$,
$$E|X| = \frac{\sqrt{2}}{\sqrt{\pi}}\,\Gamma(1) = \sqrt{\frac{2}{\pi}},$$
as $\Gamma(\frac{1}{2}) = \sqrt{\pi}$ and $\Gamma(1) = 1$.
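The closed form $E|X|^\alpha = \frac{(\sqrt 2)^\alpha}{\sqrt\pi}\Gamma(\frac{1+\alpha}{2})$ can be checked against simulation; a sketch assuming SciPy is available for the Gamma function, with arbitrary test values of $\alpha$:

```python
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(1)
x = rng.standard_normal(10**6)     # X ~ N(0, 1)

for alpha in [0.5, 1.0, 2.0, 3.0]:
    mc = (np.abs(x)**alpha).mean()                          # Monte Carlo E|X|^alpha
    exact = 2**(alpha / 2) / np.sqrt(np.pi) * gamma((1 + alpha) / 2)
    print(f"alpha={alpha}: MC={mc:.4f}, exact={exact:.4f}")
```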

We can see that the main techniques in calculating expectations are integration and changes of variables.

2.2 Properties of Expectation

Theorem 2.2 Assume $E|X_j| < \infty$ ($1 \le j \le n$) and $c_0, c_1, \ldots, c_n$ are fixed numbers. Then:
(1) the expectation of $Y = c_0 + c_1 X_1 + c_2 X_2 + \cdots + c_n X_n$ exists, and
$$EY = c_0 + c_1 EX_1 + c_2 EX_2 + \cdots + c_n EX_n; \qquad E(X_1 + X_2 + \cdots + X_n) = EX_1 + EX_2 + \cdots + EX_n.$$
Note: $X_1, X_2, \ldots, X_n$ do not need to be independent.
(2) If $\{X_j\}$ are independent, then the expectation of $Z = X_1 X_2 \cdots X_n$ exists and $EZ = EX_1 EX_2 \cdots EX_n$.
Note: here $X_1, X_2, \ldots, X_n$ do need to be independent.
(3) If $X_1 \ge X_2$ a.s., then $EX_1 \ge EX_2$; equivalently, if $Y \ge 0$ a.s., then $EY \ge 0$.

Proof: We only prove the theorem when $X = (X_1, X_2, \ldots, X_n)$ has joint density $f(x)$.
(1)
$$E|Y| \le \int_{\mathbb{R}^n} \Big( |c_0| + \sum_{j=1}^n |c_j|\,|x_j| \Big) f(x)\,dx_1 dx_2 \cdots dx_n = |c_0| + \sum_{j=1}^n |c_j|\,E|X_j| < \infty.$$
And,
$$E(Y) = \int_{\mathbb{R}^n} \Big( c_0 + \sum_{j=1}^n c_j x_j \Big) f(x)\,dx_1 dx_2 \cdots dx_n = c_0 + \sum_{j=1}^n c_j \int_{\mathbb{R}^n} x_j f(x)\,dx_1 dx_2 \cdots dx_n = c_0 + \sum_{j=1}^n c_j EX_j,$$
where $\int_{\mathbb{R}^n} x_j f(x)\,dx_1 \cdots dx_n = EX_j$ because integrating $f$ over the other coordinates yields the marginal density of $X_j$.

(2)
$$E|Z| = \int_{\mathbb{R}^n} |x_1 x_2 \cdots x_n|\, f(x)\,dx_1 dx_2 \cdots dx_n = \left\{ \int |x_1| f_1(x_1)\,dx_1 \right\} \left\{ \int |x_2| f_2(x_2)\,dx_2 \right\} \cdots \left\{ \int |x_n| f_n(x_n)\,dx_n \right\}$$
$$= E|X_1|\,E|X_2| \cdots E|X_n| < \infty,$$
using $f(x) = f_1(x_1) f_2(x_2) \cdots f_n(x_n)$ by independence. Similarly, we get $EZ = EX_1 EX_2 \cdots EX_n$.
(3) It is obvious that if $Y(\omega) \ge 0$ for every $\omega \in \Omega$, then $EY \ge 0$. Now, we discuss the case $Y(\omega) \ge 0$ a.s. If $EY < 0$, then there must exist a fixed number $a < 0$ such that $P(Y < a) \ge c > 0$, which is contrary to $P(Y \ge 0) = 1$.

It is important to connect the vanishing of $E|X|$ with $X = 0$ a.s.

Lemma 2.1 If $E|X| = 0$, then $P(X = 0) = 1$.

Proof: We use $I[n|X| > 1]$ to denote the indicator function of the event $\{n|X| > 1\}$. Then we always have $I[n|X| > 1] \le n|X|$. Therefore,
$$P(|X| > 1/n) = P(n|X| > 1) = E(I[n|X| > 1]) \le nE(|X|) = 0,$$
and by the continuity of probability, we have
$$P(|X| > 0) = P\left( \bigcup_{n=1}^{\infty} \{|X| > 1/n\} \right) = \lim_{n \to \infty} P(|X| > 1/n) = 0.$$
So, $P(X = 0) = 1 - P(|X| > 0) = 1$.

On the other hand, we can prove that if $P(X = 0) = 1$, then $E|X| = 0$. Indeed, if $E|X| = a > 0$, then there must exist a fixed $b > 0$ such that $P(|X| > b) \ge c > 0$, and hence $P(X = 0) \le 1 - P(|X| > b) < 1$. Therefore, if $P(X = 0) = 1$, then $E|X| = 0$.

The indicator-function technique used in the above proof is very useful.
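Both halves of Theorem 2.2 above can be seen empirically: linearity needs no independence, while the product rule does. A sketch with illustrative distributions and coefficients:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10**6
x = rng.normal(1.0, 2.0, n)          # EX = 1
y = rng.exponential(3.0, n)          # EY = 3, sampled independently of x

# (1) Linearity holds with or without independence.
print((2 + 3 * x - y).mean(), 2 + 3 * x.mean() - y.mean())

# (2) Product rule: EXY = EX * EY for independent X, Y ...
print((x * y).mean(), x.mean() * y.mean())

# ... but not in general: with Z = X, EXZ = EX^2 = 5 while EX * EZ = 1.
z = x
print((x * z).mean(), x.mean() * z.mean())
```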

3 Variance of Random Variable

3.1 Definition of Variance

Definition 3.1 If the expectation $\mu = EX$ of a random variable $X$ exists and is finite, then we call $E(X - \mu)^2$ the Variance of $X$, denote it by $\mathrm{var}(X)$, and call $\sigma_X = \sqrt{\mathrm{var}(X)}$ the Standard Deviation.

We know that
$$E(X - EX)^2 = E(X^2 - 2X\,EX + (EX)^2) = EX^2 - 2(EX)^2 + (EX)^2 = EX^2 - (EX)^2.$$
This identity is used very often in the calculation of variances. Now we give the variances of several distributions:

1. Bernoulli Distribution $B(1, p)$: $\mathrm{var}(X) = p(1-p)$.
Since $P(X = 1) = p$ and $P(X = 0) = 1 - p$, we have $\mathrm{var}(X) = EX^2 - (EX)^2 = p - p^2 = p(1-p)$.

2. Binomial Distribution $B(n, p)$: $\mathrm{var}(X) = np(1-p)$.
Denote $q = 1 - p$. Since $EX = np$ and $p_j = P(X = j) = C_n^j p^j (1-p)^{n-j}$, $0 \le j \le n$, we have
$$EX^2 = E(X(X-1)) + EX = \sum_{j=0}^{n} j(j-1)\,C_n^j p^j q^{n-j} + np = p^2 \left. \frac{d^2}{dx^2} \sum_{j=0}^{n} C_n^j x^j q^{n-j} \right|_{x=p} + np$$
$$= p^2 \left. \frac{d^2}{dx^2} (x + q)^n \right|_{x=p} + np = n(n-1)p^2 + np.$$
Therefore, $\mathrm{var}(X) = EX^2 - (EX)^2 = n^2 p^2 - np^2 + np - n^2 p^2 = np(1-p)$.

3. Poisson Distribution $P(\lambda)$: $\mathrm{var}(X) = \lambda$.
Since $EX = \lambda$ and $P(X = k) = \frac{\lambda^k}{k!} e^{-\lambda}$, $k = 0, 1, \ldots$, we have
$$EX^2 = E(X(X-1)) + EX = \sum_{k=0}^{\infty} k(k-1)\,\frac{\lambda^k}{k!} e^{-\lambda} + \lambda = \lambda^2 \sum_{k=2}^{\infty} \frac{\lambda^{k-2}}{(k-2)!} e^{-\lambda} + \lambda = \lambda^2 + \lambda.$$

8 So, var(x) EX 2 (EX) 2 λ 2 + λ λ 2 λ. 4. Geometric Distribution: var(x) 1 p p 2. The calculating of EX 2 is similarly as Binomial distribution. 5. Uniform Distribution U(a, b): var(x) (b a)2 12. EX 2 x 2 f(x)dx b 6. Exponential Distribution ε(λ): var(x) 1 λ 2. a x 2 1 b a b3 a 3 3(b a). EX 2 x 2 λe λx dx 1 λ 2 t 2 e t dt (x t/λ) 1 λ 2 Γ(3) 2 λ 2 7. Normal Distribution N(µ, σ 2 ): var(x) σ 2. From above subsection, we know that EX 2 1 if X N(, 1). var(x) (x µ) 2 1 σ 2 1 2π σ 2 We have calculate it in last section. 8. Gamma Distribution Γ(α, β): var(x) α β 2. EX 2 1 Γ(α)β 2 Γ(α + 2) Γ(α)β x 2 f(x)dx exp ( 2πσ ) (x µ)2 2σ 2 dx ( ) t 2 exp t2 dt t x µ 2 σ t α+1 e t dt α(α + 1), β x 2 βα x α 1 Γ(α) e βx dx (t βx) where the last equation is because Γ(α + 2) (α + 1)αΓ(α). 8

3.2 Properties of Variance

Theorem 3.1 Assume $EX = \mu$, $\mathrm{var}(X) < \infty$, $\mu_j = EX_j$, $\mathrm{var}(X_j) < \infty$ ($1 \le j \le n$). Then:
(1) $\mathrm{var}(a + bX) = b^2\,\mathrm{var}(X)$;
(2) $\mathrm{var}(X) = E(X - \mu)^2 < E(X - c)^2$ if $c \ne \mu$;
(3) $\mathrm{var}(X) = 0$ if and only if $X = \mu$ a.s.;
(4) $\mathrm{var}(\sum_{j=1}^n X_j) = \sum_{i=1}^n \sum_{j=1}^n [EX_i X_j - \mu_i \mu_j]$;
(5) If $X_1, X_2, \ldots, X_n$ are independent, then $\mathrm{var}(\sum_{j=1}^n X_j) = \sum_{j=1}^n \mathrm{var}(X_j)$.

Proof: (1)
$$\mathrm{var}(a + bX) = E(a + bX - a - b\mu)^2 = b^2 E(X - \mu)^2 = b^2\,\mathrm{var}(X).$$
(2) If $c \ne \mu$, then, since $E(X - \mu) = 0$,
$$E(X - c)^2 = E(X - \mu + \mu - c)^2 = E(X - \mu)^2 + (\mu - c)^2 + 2(\mu - c)E(X - \mu) > E(X - \mu)^2.$$
(3) Suppose $\mathrm{var}(X) = 0$. Since $\mathrm{var}(|X - \mu|) = E|X - \mu|^2 - (E|X - \mu|)^2 \ge 0$, we get $(E|X - \mu|)^2 \le E(X - \mu)^2 = 0$, so $E|X - \mu| = 0$. Then by Lemma 2.1, we have $P(X = \mu) = 1$.
Conversely, we need to prove that if $X = \mu$ a.s., then $E(X - \mu)^2 = 0$. In fact, if $Y = (X - \mu)^2$ is a simple random variable, $Y = \sum_k y_k I_{A_k}(\omega)$, then whenever $y_k \ne 0$ we have $P(A_k) = 0$ by hypothesis, and therefore $EY = 0$. For general $Y = (X - \mu)^2 \ge 0$, any simple random variable $S$ with $0 \le S \le Y$ satisfies $S = 0$ a.s. and consequently $ES = 0$, so
$$EY = \sup_{S:\ 0 \le S \le Y} ES = 0.$$
(4) Omitted.

(5)
$$\mathrm{var}\left( \sum_{j=1}^n X_j \right) = \sum_{i=1}^n \sum_{j=1}^n [EX_i X_j - \mu_i \mu_j] = \sum_{i=1}^n [EX_i X_i - \mu_i \mu_i] = \sum_{i=1}^n \mathrm{var}(X_i),$$
where the second equality holds because $EX_i X_j = EX_i EX_j$ if $i \ne j$.

3.3 Three Important Inequalities

Theorem 3.2 (Markov Inequality) For any random variable $X$ and fixed number $\epsilon > 0$, we have
$$P(|X| \ge \epsilon) \le \frac{1}{\epsilon^\alpha}\,E|X|^\alpha, \qquad \alpha > 0. \qquad (5)$$
Applying this with $\alpha = 2$ to $X - EX$ gives the Chebyshev Inequality:
$$P(|X - EX| \ge \epsilon) \le \frac{1}{\epsilon^2}\,E|X - EX|^2 = \frac{1}{\epsilon^2}\,\mathrm{var}(X). \qquad (6)$$
Proof: Since $I[|X| \ge \epsilon] \le |X|^\alpha / \epsilon^\alpha$, by Theorem 2.2,
$$P(|X| \ge \epsilon) = E(I[|X| \ge \epsilon]) \le E(|X|^\alpha / \epsilon^\alpha) = \frac{1}{\epsilon^\alpha}\,E|X|^\alpha.$$

Theorem 3.3 (Cauchy-Schwarz Inequality) Assume $EX^2 < \infty$ and $EY^2 < \infty$. Then
$$|EXY| \le \sqrt{EX^2}\,\sqrt{EY^2}. \qquad (7)$$
Moreover, $|EXY| = \sqrt{EX^2}\,\sqrt{EY^2}$ if and only if there exist two fixed numbers $a$ and $b$ ($ab \ne 0$) such that $aX + bY = 0$ a.s.

Proof: For any $a, b$ with $ab \ne 0$,
$$0 \le E(aX + bY)^2 = a^2 EX^2 + b^2 EY^2 + 2ab\,EXY = (a, b)\,\Sigma\,(a, b)',$$
where
$$\Sigma = \begin{pmatrix} EX^2 & EXY \\ EXY & EY^2 \end{pmatrix};$$
therefore $\det(\Sigma) = EX^2 EY^2 - (EXY)^2 \ge 0$. In addition, it is obvious that $(EXY)^2 = EX^2 EY^2$ if and only if $E(aX + bY)^2 = 0$ for some such $a, b$, and $E(aX + bY)^2 = 0$ if and only if $aX + bY = 0$ a.s. by Lemma 2.1.
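The Chebyshev bound (6) is usually far from tight but always valid; a quick empirical comparison (exponential samples and ε values are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(1.0, 10**6)      # EX = 1, var(X) = 1

for eps in [1.0, 2.0, 3.0]:
    empirical = np.mean(np.abs(x - x.mean()) >= eps)
    bound = x.var() / eps**2         # Chebyshev: P(|X - EX| >= eps) <= var(X)/eps^2
    print(f"eps={eps}: empirical={empirical:.4f}, bound={bound:.4f}")
```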

Theorem 3.4 (Jensen Inequality) Suppose $g(x)$ is a convex function, that is, for each $x_0$ there exists a number $\lambda(x_0)$ such that
$$g(x) \ge g(x_0) + (x - x_0)\lambda(x_0) \qquad (8)$$
for all $x \in \mathbb{R}$. Then
$$E(g(\xi)) \ge g(E\xi), \qquad (9)$$
provided both expectations exist, i.e., $E|\xi| < \infty$ and $E|g(\xi)| < \infty$.

Proof: Putting $x = \xi$ and $x_0 = E\xi$, we find from (8) that
$$g(\xi) \ge g(E\xi) + (\xi - E\xi)\lambda(x_0).$$
Taking expectations on both sides of the above inequality, we get (9). Here $\lambda(x_0)$ can be considered as $g'(x_0)$.

4 Covariance and Correlation

4.1 Covariance and Correlation

Definition 4.1 Assume $\mu_X = EX$ and $\mu_Y = EY$ exist. If $E|(X - \mu_X)(Y - \mu_Y)| < \infty$, we call
$$\mathrm{cov}(X, Y) = E[(X - \mu_X)(Y - \mu_Y)]$$
the Covariance of the random variables $X, Y$. If $\mathrm{cov}(X, Y) = 0$, we say $X, Y$ are Uncorrelated.

Definition 4.2 If $0 < \sigma_X \sigma_Y < \infty$, then we call
$$\rho_{XY} = \frac{\mathrm{cov}(X, Y)}{\sqrt{\mathrm{var}(X)}\,\sqrt{\mathrm{var}(Y)}} = \frac{\sigma_{XY}}{\sigma_X \sigma_Y}$$
the Correlation of $X$ and $Y$.

Usually we use the equation
$$\mathrm{cov}(X, Y) = E[(X - \mu_X)(Y - \mu_Y)] = E(XY - \mu_X Y - \mu_Y X + \mu_X \mu_Y) = EXY - EXEY$$
to calculate the covariance of $X, Y$. We can also use the standardizations of $X, Y$ to calculate the correlation:
$$\rho_{XY} = E\left[ \left( \frac{X - \mu_X}{\sigma_X} \right) \left( \frac{Y - \mu_Y}{\sigma_Y} \right) \right],$$
where $\frac{X - \mu_X}{\sigma_X}$ is the standardization of $X$, with mean $0$ and variance $1$.
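The shortcut $\mathrm{cov}(X, Y) = EXY - EXEY$ and the standardized form of $\rho_{XY}$ can be checked on simulated data; a sketch in which the linear dependence $Y = 2X + \text{noise}$ is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 10**6
x = rng.normal(0.0, 1.0, n)
y = 2 * x + rng.normal(0.0, 1.0, n)    # correlated with x by construction

cov = (x * y).mean() - x.mean() * y.mean()   # EXY - EX * EY
rho = ((x - x.mean()) / x.std() * (y - y.mean()) / y.std()).mean()
print(cov, np.cov(x, y)[0, 1])               # both ~2
print(rho, np.corrcoef(x, y)[0, 1])          # both ~2/sqrt(5)
```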

Theorem 4.1 If $\rho_{XY}$ is the correlation of $X, Y$, then:
(1) $|\rho_{XY}| \le 1$;
(2) $|\rho_{XY}| = 1$ if and only if there exist real numbers $c, d$ such that $P(Y = c + dX) = 1$; we call $X, Y$ Linearly Correlated if $|\rho_{XY}| = 1$.
(3) $X, Y$ independent $\Rightarrow$ $X, Y$ uncorrelated. However, $X, Y$ uncorrelated $\nRightarrow$ $X, Y$ independent, except when $(X, Y) \sim N(\mu_1, \mu_2; \sigma_1^2, \sigma_2^2; \rho)$, in which case $X, Y$ independent $\Leftrightarrow$ $X, Y$ uncorrelated.

Proof: (1)
$$|\rho_{XY}| = \frac{|E(X - \mu_X)(Y - \mu_Y)|}{\sqrt{E(X - \mu_X)^2}\,\sqrt{E(Y - \mu_Y)^2}} \le 1$$
by Theorem 3.3.
(2) $|\rho_{XY}| = 1$ iff $P(a(X - \mu_X) + b(Y - \mu_Y) = 0) = 1$ for some $ab \ne 0$ by Theorem 3.3, which is equivalent to the existence of real numbers $c$ and $d$ such that $P(Y = c + dX) = 1$, with $d = -\frac{a}{b}$ and $c = \frac{b\mu_Y + a\mu_X}{b}$.
(3) The first implication follows from
$$EXY = \iint xy\,f(x, y)\,dx\,dy = \iint xy\,f_X(x) f_Y(y)\,dx\,dy = \int x f_X(x)\,dx \int y f_Y(y)\,dy = EXEY,$$
so that $\mathrm{cov}(X, Y) = EXY - EXEY = EXEY - EXEY = 0$. See Example 4.2 on Page 173 for the proof that uncorrelated implies independent when $(X, Y) \sim N(\mu_1, \mu_2; \sigma_1^2, \sigma_2^2; \rho)$.

Example 4.1 Assume $(X, Y)$ is uniformly distributed on $D = \{(x, y) : x^2 + y^2 \le 1\}$. Then the density of $X$ is
$$f_X(x) = \int f(x, y)\,dy = \frac{1}{\pi} \int I_{\{x^2 + y^2 \le 1\}}\,dy = \frac{1}{\pi} \int I_{\{|y| \le \sqrt{1 - x^2}\}}\,dy = \frac{2}{\pi}\sqrt{1 - x^2}, \qquad |x| \le 1,$$
and similarly $f_Y(y) = \frac{2}{\pi}\sqrt{1 - y^2}$. Since $f(x, y) \ne f_X(x) f_Y(y)$, $X$ and $Y$ are not independent. We know that $EX = 0$ and $EY = 0$, so
$$\mathrm{cov}(X, Y) = \iint_{\mathbb{R}^2} xy\,f(x, y)\,dx\,dy = \frac{1}{\pi} \int_{-1}^1 x \left( \int_{-\sqrt{1 - x^2}}^{\sqrt{1 - x^2}} y\,dy \right) dx = 0.$$
Therefore, $X$ and $Y$ are uncorrelated, but not independent.
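Example 4.1 can be reproduced by rejection sampling; a sketch showing that the sample covariance is near zero while the variables remain clearly dependent:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(-1, 1, 10**6)
y = rng.uniform(-1, 1, 10**6)
keep = x**2 + y**2 <= 1               # uniform on the unit disk by rejection
x, y = x[keep], y[keep]

print(np.cov(x, y)[0, 1])             # ~0: uncorrelated
# Dependence: when |X| is near 1, |Y| is forced to be small.
print(np.abs(y)[np.abs(x) > 0.9].mean(), np.abs(y).mean())
```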

4.2 Covariance Matrix

Assume $X = (X_1, X_2, \ldots, X_n)$ is a random vector, and for every $i$, $\mu_i = EX_i$ exists; then we say $EX$ exists and
$$\mu = EX = (EX_1, EX_2, \ldots, EX_n) = (\mu_1, \mu_2, \ldots, \mu_n).$$
If the expectation of the random variable $X_{ij}$ exists for every $1 \le i \le m$ and $1 \le j \le n$, then the expectation of
$$Y = \begin{pmatrix} X_{11} & X_{12} & \cdots & X_{1n} \\ X_{21} & X_{22} & \cdots & X_{2n} \\ \vdots & \vdots & & \vdots \\ X_{m1} & X_{m2} & \cdots & X_{mn} \end{pmatrix}$$
exists and is defined as
$$EY = \begin{pmatrix} EX_{11} & EX_{12} & \cdots & EX_{1n} \\ EX_{21} & EX_{22} & \cdots & EX_{2n} \\ \vdots & \vdots & & \vdots \\ EX_{m1} & EX_{m2} & \cdots & EX_{mn} \end{pmatrix}.$$
As in the one-dimensional case, we have the following identities for the random vector $X = (X_1, X_2, \ldots, X_n)$ and the random matrix $Y$ above. For any real vector $a = (a_1, a_2, \ldots, a_n)$, matrix $A$ of size $k \times m$ and matrix $B$ of size $n \times h$, we have
$$E(aX') = a\,EX'; \qquad (EY)' = E(Y'); \qquad E(AY) = A\,EY; \qquad E(YB) = (EY)B; \qquad E(AYB) = A\,(EY)\,B.$$
Proof:
$$E(aX') = E\left( \sum_{j=1}^n a_j X_j \right) = \sum_{j=1}^n a_j EX_j = a\,(EX)';$$
$$(EY)' = (EX_{ij})' = (EX_{ji}) = E(Y');$$
$$E(AY) = \left( E \sum_{l=1}^m a_{il} X_{lj} \right) = \left( \sum_{l=1}^m a_{il}\,EX_{lj} \right) = A\,EY.$$
Similarly, we have $E(YB) = E(Y)B$, so that $E(AYB) = A\,E(YB) = A\,EY\,B$.
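The identity $E(AYB) = A\,(EY)\,B$ is just linearity applied entrywise; a small numerical sketch (the shapes, the mean matrix, and $A$, $B$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 10**5
# Random 2x3 matrix Y with EY_ij = 0, 1, ..., 5.
Y = rng.normal(loc=np.arange(6.0).reshape(2, 3), size=(n, 2, 3))
A = np.array([[1.0, 2.0]])           # 1 x 2
B = np.array([[1.0], [0.0], [-1.0]]) # 3 x 1

EY = Y.mean(axis=0)
print((A @ Y @ B).mean(axis=0))      # Monte Carlo E(AYB)
print(A @ EY @ B)                    # A (EY) B -- should agree
```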

Definition 4.3 If the expectation $\mu = EX$ of the random vector $X = (X_1, X_2, \ldots, X_n)$ exists, and $\mathrm{var}(X_i) < \infty$ for every $1 \le i \le n$, then we call
$$\Sigma = E[(X - \mu)'(X - \mu)] = (\sigma_{ij})_{n \times n}, \qquad \sigma_{ij} = \mathrm{cov}(X_i, X_j),$$
the Covariance Matrix of $X$.

A covariance matrix has a very important property: it is non-negative definite. We have the following theorem.

Theorem 4.2 Assume $\Sigma$ is the covariance matrix of $X$. Then:
A. $\Sigma$ is non-negative definite and symmetric.
B. $\det(\Sigma) = 0$ if and only if there exist $a_1, a_2, \ldots, a_n$, at least one of which is not equal to $0$, such that
$$\sum_{i=1}^n a_i (X_i - \mu_i) = 0 \quad \text{a.s.},$$
with $\mu_i = EX_i$.

Proof: A. For any $n$-dimensional real vector $a = (a_1, a_2, \ldots, a_n)$, we have
$$a \Sigma a' = \sum_{i=1}^n \sum_{j=1}^n a_i a_j \sigma_{ij} = \sum_{i=1}^n \sum_{j=1}^n a_i a_j E[(X_i - \mu_i)(X_j - \mu_j)] = E\left[ \sum_{i=1}^n a_i (X_i - \mu_i) \right]^2 = \mathrm{var}\left( \sum_{i=1}^n a_i (X_i - \mu_i) \right) \ge 0.$$
We then prove B by (3) in Theorem 3.1.
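Theorem 4.2.A in practice: any sample covariance matrix is symmetric with non-negative eigenvalues. A sketch using linearly mixed noise (the mixing matrix is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(9)
z = rng.standard_normal((10**5, 4))
mix = rng.standard_normal((4, 4))
x = z @ mix                          # 4 correlated coordinates

sigma = np.cov(x, rowvar=False)
print(np.allclose(sigma, sigma.T))                 # symmetric
print(np.linalg.eigvalsh(sigma).min() >= -1e-12)   # non-negative definite
```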

5 Conditional Expectation

Definition 5.1 Assume $(X, Y)$ is a random vector. If $E(|X| \mid Y = y) = \int |x|\,f_{X|Y}(x|y)\,dx < \infty$, then
$$m(y) \stackrel{def}{=} E(X \mid Y = y) = \int x\,f_{X|Y}(x|y)\,dx, \qquad (16)$$
where $f_{X|Y}(x|y) = \frac{f(x, y)}{f_Y(y)}$, and we call the random variable $m(Y)$ the conditional expectation of $X$ given $Y$, denoted by $E(X \mid Y)$.

As when calculating $g(x) = P(A \mid X = x)$, we can first calculate $m(y)$ and then change $y$ to $Y$.

Example 5.1 Assume $(X, Y) \sim N(\mu_1, \mu_2; \sigma_1^2, \sigma_2^2; \rho)$. Find $E(X \mid Y)$ and $E(Y \mid X)$.
Answer: Since we know that, conditional on $X = x$,
$$Y \sim N\big(\mu_2 + (\rho\sigma_2/\sigma_1)(x - \mu_1),\ (1 - \rho^2)\sigma_2^2\big),$$
we have
$$E(Y \mid X = x) = \mu_2 + (\rho\sigma_2/\sigma_1)(x - \mu_1),$$
so that
$$E(Y \mid X) = \mu_2 + (\rho\sigma_2/\sigma_1)(X - \mu_1).$$
Similarly,
$$E(X \mid Y) = \mu_1 + (\rho\sigma_1/\sigma_2)(Y - \mu_2).$$
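Example 5.1 can be checked by simulating the bivariate normal and averaging $Y$ over a thin slice of $X$; a sketch with illustrative parameters (the slice width is a tuning choice):

```python
import numpy as np

rng = np.random.default_rng(10)
mu1, mu2, s1, s2, rho = 1.0, -2.0, 2.0, 3.0, 0.6
n = 10**6

x = rng.normal(mu1, s1, n)
# Y | X = x ~ N(mu2 + rho*(s2/s1)*(x - mu1), (1 - rho^2) * s2^2)
y = mu2 + rho * s2 / s1 * (x - mu1) + np.sqrt(1 - rho**2) * s2 * rng.standard_normal(n)

x0 = 2.0
slice_ = np.abs(x - x0) < 0.05       # condition on X close to x0
print(y[slice_].mean(), mu2 + rho * s2 / s1 * (x0 - mu1))   # both ~ -1.1
```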

Conditional expectation has properties similar to those of expectation. We have the following theorem.

Theorem 5.1 Assume $X, Y$ are random variables, $g(x), h(y)$ are real functions, and $E|g(X)| < \infty$. In addition, assume $E|X_i| < \infty$ for every $1 \le i \le n$. Then:
(1) $E(c_0 + \sum_{j=1}^n c_j X_j \mid Y) = c_0 + \sum_{j=1}^n c_j E(X_j \mid Y)$;
(2) $E[h(Y)g(X) \mid Y] = h(Y)\,E[g(X) \mid Y]$;
(3) (Important) $E(g(X) \mid Y) = Eg(X)$ if $X, Y$ are independent;
(4) (Important) $E[E(g(X) \mid Y)] = Eg(X)$.

Proof: (1) holds because $E(\cdot \mid Y = y)$ is an expectation and has the properties of an expectation.
(2) holds because
$$E[h(Y)g(X) \mid Y = y] = E[h(y)g(X) \mid Y = y] = h(y)\,E[g(X) \mid Y = y].$$
(3) holds because, by independence, $f_{X|Y}(x|y) = f_X(x)$, so
$$E(g(X) \mid Y = y) = \int g(x)\,f_{X|Y}(x|y)\,dx = \int g(x)\,f_X(x)\,dx = E(g(X)).$$
(4) We only prove the case when $(X, Y)$ has the joint density function $f(x, y)$. Assume $f_Y(y)$ is the marginal density of $Y$. We have
$$E(g(X) \mid Y = y) = \int g(x)\,f_{X|Y}(x|y)\,dx = \int g(x)\,\frac{f(x, y)}{f_Y(y)}\,dx,$$
which is a function of $y$. Therefore,
$$E[E(g(X) \mid Y)] = \int E(g(X) \mid Y = y)\,f_Y(y)\,dy = \iint g(x)\,\frac{f(x, y)}{f_Y(y)}\,dx\,f_Y(y)\,dy = \iint g(x) f(x, y)\,dx\,dy = \int g(x) \left\{ \int f(x, y)\,dy \right\} dx = Eg(X).$$

Now, we give several examples of calculating conditional expectations. The first one is about the indicator function.

Example 5.2 Prove that $E(P(A \mid X)) = P(A)$.
Proof: Since $P(A \mid X = x)$ is the probability that the event $A$ happens conditional on $X = x$, we have $P(A \mid X = x) = E(I_A \mid X = x)$, by the relation $P(A) = E(I_A)$. Therefore,
$$E(P(A \mid X)) = E(E(I_A \mid X)) = EI_A = P(A).$$

Example 5.3 Assume $X, Y$ are random variables and $h(y)$ is a real function. If $EX^2 < \infty$ and $Eh^2(Y) < \infty$, then
$$E[(X - E(X \mid Y))h(Y)] = 0. \qquad (17)$$
Proof: Since $EX^2 < \infty$ and $Eh^2(Y) < \infty$, we have $E|Xh(Y)| \le \sqrt{EX^2}\,\sqrt{Eh^2(Y)} < \infty$. Then, using property (2),
$$E[(X - E(X \mid Y))h(Y)] = E[Xh(Y)] - E[E(X \mid Y)h(Y)] = E[Xh(Y)] - E[E(Xh(Y) \mid Y)] = E[Xh(Y)] - E[Xh(Y)] = 0.$$
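Property (4) of Theorem 5.1, the tower rule, checked on a simple hierarchical model where the inner expectation has a closed form (the model and the choice $g(x) = x^2$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 10**6
y = rng.integers(0, 3, n)        # Y uniform on {0, 1, 2}
x = rng.normal(y, 1.0)           # X | Y = k ~ N(k, 1)

# With g(x) = x^2, E(g(X) | Y) = 1 + Y^2 in closed form.
inner = 1.0 + y**2
print(inner.mean(), (x**2).mean())   # E[E(g(X)|Y)] = E g(X) ~ 8/3
```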

Example 5.4 (Best Predictor) Assume $EX^2 < \infty$ and $m(Y) = E(X \mid Y)$. Then for any real function $g(y)$, we have
$$E[X - m(Y)]^2 \le E[X - g(Y)]^2. \qquad (18)$$
Equality holds if and only if $g(Y) = m(Y)$ a.s.

Proof: If $Eg^2(Y) = \infty$, then by
$$2(a - b)^2 + 2a^2 - b^2 = 4a^2 - 4ab + b^2 = (2a - b)^2 \ge 0,$$
i.e., $b^2 \le 2(a - b)^2 + 2a^2$, letting $a = X$ and $b = g(Y)$ and taking expectations, we have
$$Eg^2(Y) \le 2E[X - g(Y)]^2 + 2EX^2.$$
Therefore $E[X - g(Y)]^2 = \infty$, and (18) is trivially true.
Now we discuss the case $Eg^2(Y) < \infty$. We have $[E(X \mid Y = y)]^2 \le E(X^2 \mid Y = y)$ by applying $|EXY| \le \sqrt{EX^2}\,\sqrt{EY^2}$ under the conditional distribution given $Y = y$, taking the second variable to be the constant $1$. Therefore,
$$Em^2(Y) = E[E(X \mid Y)]^2 \le E[E(X^2 \mid Y)] = EX^2 < \infty.$$
We have
$$E[X - g(Y)]^2 = E[X - m(Y) + m(Y) - g(Y)]^2 = E[X - m(Y)]^2 + E[m(Y) - g(Y)]^2 + 2E[(X - m(Y))(m(Y) - g(Y))]$$
$$= E[X - m(Y)]^2 + E[m(Y) - g(Y)]^2 \ge E[X - m(Y)]^2,$$
where the cross term vanishes by (17) with $h(Y) = m(Y) - g(Y)$. Equality holds in the last inequality if and only if $E[m(Y) - g(Y)]^2 = 0$, which is equivalent to $g(Y) = m(Y)$ a.s. We call $m(Y)$ the Best Predictor.
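Example 5.4 in action: the conditional mean $m(Y)$ achieves the smallest mean squared error among all predictors $g(Y)$. A sketch (the model and the competing predictors are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(12)
n = 10**6
y = rng.uniform(0, 2, n)
x = rng.normal(y, 1.0)           # X | Y = y ~ N(y, 1), so m(Y) = E(X|Y) = Y

mse = lambda pred: ((x - pred)**2).mean()
print(mse(y))                    # best predictor: ~1.0
print(mse(0.5 * y + 0.2))        # a different linear g(Y): larger
print(mse(np.full(n, x.mean()))) # constant predictor EX: larger still
```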

Theorem 5.2 Assume $P(A) > 0$ and $X$ is a non-negative random variable. Then
$$E(X \mid A) = \frac{E(XI_A)}{P(A)}. \qquad (19)$$
Proof: For a non-negative random variable, integration by parts gives
$$EX = \int_0^{\infty} x\,dF(x) = \int_0^{\infty} (1 - F(x))\,dx = \int_0^{\infty} P(X > x)\,dx,$$
as $F(\infty) = 1$. In addition, because $E(X \mid A)$ is the expectation of $X$ under the probability distribution $P(\cdot \mid A)$, we have
$$E(X \mid A) = \int_0^{\infty} P(X > x \mid A)\,dx = \frac{1}{P(A)} \int_0^{\infty} P(\{X > x\} \cap A)\,dx = \frac{1}{P(A)} \int_0^{\infty} P(XI_A > x)\,dx = \frac{E(XI_A)}{P(A)},$$
where $\{X > x\} \cap A = \{XI_A > x\}$ for $x \ge 0$ because $XI_A = 0$ if $A$ does not happen.

Proposition 5.1 Assume $P(A) > 0$ and $E(X \mid A)$ exists. Then $E(X \mid A) = \frac{E(XI_A)}{P(A)}$.
Proof: To apply the above theorem, we write $X$ as $X = X^+ - X^-$, where
$$X^+ = \begin{cases} X, & X \ge 0; \\ 0, & X < 0; \end{cases} \qquad X^- = \begin{cases} -X, & X < 0; \\ 0, & X \ge 0. \end{cases}$$
We can see that $X^+$ and $X^-$ are non-negative. So we have
$$E(X \mid A) = E(X^+ - X^- \mid A) = E(X^+ \mid A) - E(X^- \mid A) = \frac{E(X^+ I_A)}{P(A)} - \frac{E(X^- I_A)}{P(A)} = \frac{E((X^+ - X^-)I_A)}{P(A)} = \frac{E(XI_A)}{P(A)}.$$

Example 5.5 Assume $X \sim \varepsilon(\lambda)$. Then for any $a > 0$, we have the no-memory property
$$E(X - a \mid X > a) = EX.$$
Proof: The density of $X$ is $f(x) = \lambda e^{-\lambda x}$, $x > 0$. By the above proposition, we have
$$E(X \mid X > a) = \frac{E(XI[X > a])}{P(X > a)} = \frac{1}{e^{-\lambda a}} \int_a^{\infty} x \lambda e^{-\lambda x}\,dx = \frac{1}{\lambda e^{-\lambda a}} \int_{\lambda a}^{\infty} t e^{-t}\,dt \qquad (t = \lambda x)$$
$$= \frac{1}{\lambda e^{-\lambda a}} \left[ -(t + 1)e^{-t} \right]_{\lambda a}^{\infty} = \frac{1}{\lambda e^{-\lambda a}}\,(\lambda a + 1)e^{-\lambda a} = \frac{1}{\lambda} + a.$$

Therefore,
$$E(X - a \mid X > a) = \frac{1}{\lambda} + a - a = \frac{1}{\lambda} = EX = E(X \mid X > 0).$$
Note: this result holds because the exponential distribution has the no-memory property $P(X > s + t \mid X > t) = P(X > s)$. For other distributions, the corresponding statement generally fails.
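The no-memory property of Example 5.5, and its failure for a non-exponential law, can both be seen directly; λ, a, and the uniform counterexample are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(13)
lam, a = 2.0, 1.5
x = rng.exponential(1 / lam, 10**7)

print((x[x > a] - a).mean(), 1 / lam)       # E(X - a | X > a) = EX = 1/lam

# Counterexample: U(0, 2) is not memoryless.
u = rng.uniform(0, 2, 10**7)
print((u[u > 1.0] - 1.0).mean(), u.mean())  # ~0.5 vs EX = 1.0
```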
