APPM/MATH 4/5520 Solutions to Exam I Review Problems


1. (a) The joint pdf here is $f_{X_1,X_2}(x_1,x_2) = 2e^{-x_1-x_2}\,I_{(0,x_2)}(x_1)\,I_{(0,\infty)}(x_2)$. The marginal pdf of $X_1$ is

$$f_{X_1}(x_1) = \int_{-\infty}^{\infty} f_{X_1,X_2}(x_1,x_2)\,dx_2 = \int_{x_1}^{\infty} 2e^{-x_1-x_2}\,dx_2 = 2e^{-2x_1}.$$

$x_1$ was below $x_2$, but when marginalizing out $x_2$ we ran $x_2$ over all values from $x_1$ to $\infty$, and so there is no upper bound left on $x_1$. The final answer for the marginal pdf of $X_1$ is

$$f_{X_1}(x) = 2e^{-2x}\,I_{(0,\infty)}(x).$$

That is, $X_1 \sim \exp(\text{rate}=2)$. Similarly,

$$f_{X_2}(x_2) = \int_{-\infty}^{\infty} f_{X_1,X_2}(x_1,x_2)\,dx_1 = \int_0^{x_2} 2e^{-x_1-x_2}\,dx_1 = 2e^{-x_2} - 2e^{-2x_2}.$$

$x_2$ was above $x_1$, but since there was no lower bound (other than $0$) on $x_1$ when marginalizing out $x_1$, $x_2$ ends up going all the way from $0$ to $\infty$. The final answer for the marginal pdf of $X_2$ is

$$f_{X_2}(x) = 2e^{-x}(1 - e^{-x})\,I_{(0,\infty)}(x).$$

This distribution does not have a name. (So sad!)

(b) No, $X_1$ and $X_2$ are not independent, because $f_{X_1,X_2}(x_1,x_2) \neq f_{X_1}(x_1)\,f_{X_2}(x_2)$. You can also tell that they are dependent from the triangular support region indicated by the indicators.

(c) We have $y_1 = g_1(x_1,x_2) = 2x_1$ and $y_2 = g_2(x_1,x_2) = x_2 - x_1$, which gives us $x_1 = g_1^{-1}(y_1,y_2) = y_1/2$ and $x_2 = g_2^{-1}(y_1,y_2) = y_1/2 + y_2$. The Jacobian of this transformation is

$$J = \det\begin{bmatrix} \partial x_1/\partial y_1 & \partial x_1/\partial y_2 \\ \partial x_2/\partial y_1 & \partial x_2/\partial y_2 \end{bmatrix} = \det\begin{bmatrix} 1/2 & 0 \\ 1/2 & 1 \end{bmatrix} = \frac12.$$

Now

$$f_{Y_1,Y_2}(y_1,y_2) = f_{X_1,X_2}\big(g_1^{-1}(y_1,y_2),\,g_2^{-1}(y_1,y_2)\big)\,|J| = 2e^{-y_1/2-(y_1/2+y_2)}\,I_{(0,\,y_1/2+y_2)}(y_1/2)\,I_{(0,\infty)}(y_1/2+y_2)\cdot\frac12 = e^{-y_1-y_2}\,I_{(0,\,y_1/2+y_2)}(y_1/2)\,I_{(0,\infty)}(y_1/2+y_2).$$

The first indicator:

$$I_{(0,\,y_1/2+y_2)}(y_1/2) = 1 \iff 0 < y_1/2 < y_1/2 + y_2 \iff 0 < y_1 \text{ and } 0 < y_2.$$

So we need at least $y_1 > 0$ and $y_2 > 0$. The second indicator:

$$I_{(0,\infty)}(y_1/2+y_2) = 1 \iff 0 < y_1/2 + y_2 < \infty.$$

While this does not, by itself, force $y_1 > 0$ and $y_2 > 0$, we already know that $y_1 > 0$ and $y_2 > 0$, and this second indicator does not constrain things any further. Therefore,

$$I_{(0,\,y_1/2+y_2)}(y_1/2)\,I_{(0,\infty)}(y_1/2+y_2) = I_{(0,\infty)}(y_1)\,I_{(0,\infty)}(y_2).$$

Hence, the joint pdf becomes

$$f_{Y_1,Y_2}(y_1,y_2) = e^{-y_1-y_2}\,I_{(0,\infty)}(y_1)\,I_{(0,\infty)}(y_2),$$

which factors into a $y_1$ part and a $y_2$ part. This shows that $Y_1$ and $Y_2$ are independent.

2. First note that $P(X = x) = (1-p)^x\,p\,I_{\{0,1,2,\dots\}}(x)$. So,

$$P(Y = y) = P(X + 1 = y) = P(X = y-1) = (1-p)^{y-1}\,p\,I_{\{0,1,2,\dots\}}(y-1) = (1-p)^{y-1}\,p\,I_{\{1,2,3,\dots\}}(y).$$

Therefore, $Y \sim \text{geom}_1(p)$.
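As a numerical sanity check of problem 1 (not part of the original solutions), the sketch below samples from the triangular joint density by composition: marginally $X_1 \sim \exp(\text{rate}=2)$, and dividing the joint pdf by that marginal shows that, given $X_1$, the increment $X_2 - X_1 \sim \exp(\text{rate}=1)$. The sample size and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Draw (X1, X2) from f(x1, x2) = 2 e^{-x1 - x2} on 0 < x1 < x2:
# X1 ~ exp(rate 2), then X2 = X1 + an exp(rate 1) increment.
x1 = rng.exponential(scale=1/2, size=n)
x2 = x1 + rng.exponential(scale=1.0, size=n)

# Part (c): Y1 = 2*X1 and Y2 = X2 - X1 should be independent exp(rate 1).
y1, y2 = 2 * x1, x2 - x1
print(y1.mean(), y2.mean())        # both approximately 1.0
print(np.corrcoef(y1, y2)[0, 1])   # approximately 0.0
```

(A near-zero correlation only hints at independence, of course; the factored joint pdf above is the actual proof.)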

3. $y = g(x) = e^x - 1 \implies x = g^{-1}(y) = \ln(y+1)$. Here $f_X(x) = \lambda e^{-\lambda x}\,I_{(0,\infty)}(x)$. So,

$$f_Y(y) = f_X\big(g^{-1}(y)\big)\left|\frac{d}{dy}g^{-1}(y)\right| = \lambda e^{-\lambda\ln(y+1)}\,I_{(0,\infty)}(\ln(y+1))\left|\frac{1}{y+1}\right| = \lambda(y+1)^{-\lambda}\cdot\frac{1}{y+1}\,I_{(0,\infty)}(y).$$

(We can drop the absolute value since the indicator tells us that the thing inside is positive.) So,

$$f_Y(y) = \frac{\lambda}{(y+1)^{\lambda+1}}\,I_{(0,\infty)}(y),$$

which implies that $Y \sim \text{Pareto}(\lambda)$.

4. (a) $f_X(x) = \frac{\gamma}{(1+x)^{\gamma+1}}\,I_{(0,\infty)}(x)$, and $y = g(x) = \ln(x+1) \implies x = g^{-1}(y) = e^y - 1$. The pdf for $Y$ is then

$$f_Y(y) = f_X\big(g^{-1}(y)\big)\left|\frac{d}{dy}g^{-1}(y)\right| = \frac{\gamma}{(1 + e^y - 1)^{\gamma+1}}\,I_{(0,\infty)}(e^y - 1)\,e^y.$$

For the indicator, $0 < e^y - 1 < \infty \iff 1 < e^y < \infty$. Taking $\ln$ all the way across gives $0 < y < \infty$. Simplifying everything gives

$$f_Y(y) = \gamma e^{-\gamma y}\,I_{(0,\infty)}(y),$$

which is the pdf of an exponential distribution with rate $\gamma$. Thus, $Y \sim \exp(\text{rate}=\gamma)$.
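Problems 3 and 4(a) are inverse maps of one another, which makes them easy to check together by simulation. A sketch (the rate $\lambda = 2.5$ and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n = 2.5, 200_000

x = rng.exponential(scale=1/lam, size=n)
y = np.exp(x) - 1   # problem 3: Y = e^X - 1 should be Pareto(lam)

# Empirical vs. theoretical survival P(Y > t) = (1 + t)^(-lam)
for t in [0.5, 1.0, 2.0]:
    print(t, (y > t).mean(), (1 + t) ** (-lam))

# Problem 4(a) undoes the map: log(1 + Y) should be exp(rate lam) again.
print(np.log1p(y).mean(), 1 / lam)   # both approximately the exponential mean
```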

(b) You could go through part (a) of this problem and try to figure out what you could have done at the start that would cancel the $\gamma$ out of that problem, and then think again about how you would have introduced the $\lambda$ in the right spot. Or, start from the end, where you concluded that $Y \sim \exp(\text{rate}=\gamma)$. If you think about the $g^{-1}$ procedure on this, it becomes clear that $\gamma Y$ will be an exponential random variable with rate $1$, and then that $Z = \gamma Y/\lambda$ will be an exponential random variable with rate $\lambda$. Since the original $Y$ was $Y = \ln(1+X)$, we will need

$$Z = (\gamma/\lambda)\,Y = (\gamma/\lambda)\ln(1+X)$$

in order to end up with an exponential distribution with rate $\lambda$. That is, $z = g(x) = (\gamma/\lambda)\ln(1+x)$.

5. So,

$$M_{\bar X}(t) = E[e^{t\bar X}] = E\Big[e^{\frac{t}{n}\sum_{i=1}^n X_i}\Big] \stackrel{iid}{=} \big[M_{X_1}(t/n)\big]^n = \left[\frac{\lambda}{\lambda - t/n}\right]^n = \left[\frac{n\lambda}{n\lambda - t}\right]^n \implies \bar X \sim \Gamma(n, n\lambda).$$

(Alternatively, you could say that $\sum_{i=1}^n X_i \sim \Gamma(n,\lambda)$ and then use the $g^{-1}$ method to find the pdf of $\frac1n$ times $Y$, where $Y \sim \Gamma(n,\lambda)$.)

6.

$$M_Y(t) = E[e^{tY}] = E[e^{t(X_1+X_2)}] = E[e^{tX_1}e^{tX_2}] \stackrel{indep}{=} E[e^{tX_1}]\,E[e^{tX_2}] = M_{X_1}(t)\,M_{X_2}(t).$$

Since $Y \sim \chi^2(n) \equiv \Gamma(n/2,\,1/2)$, we know that the mgf for $Y$ is

$$M_Y(t) = \left(\frac{1/2}{1/2 - t}\right)^{n/2}.$$

Similarly, since $X_1 \sim \chi^2(n_1) \equiv \Gamma(n_1/2,\,1/2)$, we know that the mgf for $X_1$ is

$$M_{X_1}(t) = \left(\frac{1/2}{1/2 - t}\right)^{n_1/2}.$$

Now $M_Y(t) = M_{X_1}(t)\,M_{X_2}(t)$ implies $M_{X_2}(t) = M_Y(t)/M_{X_1}(t)$:

$$M_{X_2}(t) = \frac{\left(\frac{1/2}{1/2-t}\right)^{n/2}}{\left(\frac{1/2}{1/2-t}\right)^{n_1/2}} = \left(\frac{1/2}{1/2-t}\right)^{(n-n_1)/2},$$

which we recognize as the mgf for a $\Gamma\big(\frac{n-n_1}{2},\,\frac12\big) \equiv \chi^2(n-n_1)$. Therefore, $X_2 \sim \chi^2(n-n_1)$.

7. Using moment generating functions,

$$M_Y(t) \stackrel{iid}{=} \big[M_{X_1}(t)\big]^n = \left[\frac{pe^t}{1-(1-p)e^t}\right]^n,$$

which is the moment generating function for a negative binomial distribution. (The second one on the table, the one that starts from $r$.)

8. (a) The Beta(1,1) distribution is the same thing as the unif(0,1) distribution. You know this even if you do not know what the Beta function is, because the pdf is some constant on the interval $(0,1)$. The only possibility for that constant is for it to be $1$!

(b) We first need to find the pdf of the maximum. First, note that the cdf for any one $X_1 \sim \text{unif}(0,1)$ is

$$F(x) = \begin{cases} 0, & x < 0 \\ x, & 0 \le x < 1 \\ 1, & x \ge 1. \end{cases}$$

Thus,

$$F_{X_{(n)}}(x) = P(X_{(n)} \le x) = P\big(\max(X_1,X_2,\dots,X_n) \le x\big) = P(X_1 \le x,\,X_2 \le x,\dots,X_n \le x) \stackrel{indep}{=} P(X_1 \le x)\,P(X_2 \le x)\cdots P(X_n \le x) \stackrel{ident}{=} \big[P(X_1 \le x)\big]^n \stackrel{unif}{=} x^n.$$

So, $f_{X_{(n)}}(x) = \frac{d}{dx}F_{X_{(n)}}(x) = nx^{n-1}$.

To complete the pdf, we include the domain of $X_{(n)}$:

$$f_{X_{(n)}}(x) = nx^{n-1}\,I_{(0,1)}(x).$$

If we didn't recognize this pdf, we would proceed to find the expected value and variance of $X_{(n)}$ by computing the appropriate integrals. However, this is the Beta pdf with parameters $a = n$ and $b = 1$. That is, $X_{(n)} \sim \text{Beta}(n,1)$. So we can just look up the mean and variance!

$$E[X_{(n)}] = \frac{n}{n+1} \qquad\text{and}\qquad \text{Var}[X_{(n)}] = \frac{n}{(n+1)^2(n+2)}$$

(c)

$$E[X_{(n)}^2] = \text{Var}[X_{(n)}] + \big(E[X_{(n)}]\big)^2 = \frac{n}{(n+1)^2(n+2)} + \left(\frac{n}{n+1}\right)^2$$

9. (a) Let's try the sample mean $\bar X = \frac1m\sum_{i=1}^m X_i$. Then $E[\bar X] = E[X_i] = np$. So, putting a $\frac1n$ on the front of $\bar X$ will do the trick. In other words, we have an unbiased estimator

$$\hat p_1 = \frac1n\,\bar X = \frac{1}{nm}\sum_{i=1}^m X_i.$$

(Note: There is more than one correct answer. For example, another unbiased estimator is $\hat p_2 = \frac1n X_1$.)

(b)

$$\text{MSE}[\hat p_1] \stackrel{unbiased}{=} \text{Var}\Big[\frac1n\,\bar X\Big] = \frac{1}{n^2}\,\text{Var}[\bar X] = \frac{1}{n^2}\cdot\frac{\text{Var}[X_1]}{m} = \frac{1}{n^2}\cdot\frac{np(1-p)}{m} = \frac{p(1-p)}{nm}.$$

10. There are many possible answers. For example, the mean of this distribution is $\theta/2$. Since $\bar X$ is always unbiased for the mean, we know that $E[\bar X] = \theta/2$. Therefore, $\hat\theta_1 := 2\bar X$ is an unbiased estimator of $\theta$.

Another estimator that makes sense to use here, since $\theta$ is the upper endpoint for possible values in the sample, is the maximum value in the sample. Is $X_{(n)}$ unbiased, though?

$$F_{X_{(n)}}(x) = P(X_{(n)} \le x) = P(X_1 \le x,\,X_2 \le x,\dots,X_n \le x) \stackrel{indep}{=} P(X_1 \le x)\,P(X_2 \le x)\cdots P(X_n \le x) \stackrel{ident}{=} \big[P(X_1 \le x)\big]^n = \left[\frac{x}{\theta}\right]^n.$$

So,

$$f_{X_{(n)}}(x) = \frac{d}{dx}F_{X_{(n)}}(x) = \frac{n}{\theta}\left(\frac{x}{\theta}\right)^{n-1} I_{(0,\theta)}(x).$$

Note that the indicator was just tacked on at the end: the domain was figured out by common sense. I.e., all the individual $x$'s had to be between $0$ and $\theta$, so the max of the $x$'s has to be between $0$ and $\theta$! Now for the expectation:

$$E[X_{(n)}] = \int_{-\infty}^{\infty} x\,f_{X_{(n)}}(x)\,dx = \int_0^\theta x\cdot\frac{nx^{n-1}}{\theta^n}\,dx = \frac{n}{\theta^n}\int_0^\theta x^n\,dx = \frac{n}{\theta^n}\cdot\frac{\theta^{n+1}}{n+1} = \frac{n}{n+1}\,\theta.$$

So, another unbiased estimator of $\theta$ is

$$\hat\theta_2 = \frac{n+1}{n}\,X_{(n)}.$$

11.

$$M_X(t) = E[e^{tX}] = \sum_{x=0}^{1} e^{tx}\,P(X=x) = e^{(0)(t)}(1-p) + e^{(1)(t)}p = 1 - p + pe^t.$$
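A quick simulation comparison of problem 10's two unbiased estimators (a sketch; $\theta = 3$, $n = 10$, and the number of replications are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 3.0, 10, 100_000

x = rng.uniform(0, theta, size=(reps, n))
est1 = 2 * x.mean(axis=1)            # theta-hat_1 = 2 * sample mean
est2 = (n + 1) / n * x.max(axis=1)   # theta-hat_2 = (n+1)/n * sample maximum

print(est1.mean(), est2.mean())   # both approximately theta = 3.0 (unbiased)
print(est1.var(), est2.var())     # the maximum-based estimator wins on variance
```

The variance gap is real: $\text{Var}[\hat\theta_1] = \theta^2/(3n)$ while $\text{Var}[\hat\theta_2] = \theta^2/(n(n+2))$, so $\hat\theta_2$ improves on $\hat\theta_1$ by roughly a factor of $n$. (These two variance formulas are a standard side computation, not derived in the solutions above.)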

12. Let $Y_1 = X_1/X_2$ and let $Y_2$ be anything. When dealing with a ratio, it is usually convenient to let $Y_2$ be the denominator, so I will do just that. Let $Y_2 = X_2$. So,

$$y_1 = g_1(x_1,x_2) = x_1/x_2, \qquad y_2 = g_2(x_1,x_2) = x_2.$$

Solving for $x_1$ and $x_2$, this is how you define the inverse functions:

$$x_1 = g_1^{-1}(y_1,y_2) = y_1y_2, \qquad x_2 = g_2^{-1}(y_1,y_2) = y_2.$$

The Jacobian of the transformation is

$$J = \det\begin{bmatrix} \partial x_1/\partial y_1 & \partial x_1/\partial y_2 \\ \partial x_2/\partial y_1 & \partial x_2/\partial y_2 \end{bmatrix} = \det\begin{bmatrix} y_2 & y_1 \\ 0 & 1 \end{bmatrix} = y_2.$$

So, the joint pdf for $Y_1$ and $Y_2$ is

$$f_{Y_1,Y_2}(y_1,y_2) = f_{X_1,X_2}\big(g_1^{-1}(y_1,y_2),\,g_2^{-1}(y_1,y_2)\big)\,|J| = 8(y_1y_2)\,y_2\,I_{(0,y_2)}(y_1y_2)\,I_{(0,1)}(y_2)\,|y_2| = 8y_1y_2^2\,|y_2|\,I_{(0,y_2)}(y_1y_2)\,I_{(0,1)}(y_2).$$

To simplify the indicators, write them out:

$$0 < y_1y_2 < y_2, \qquad 0 < y_2 < 1.$$

Since the second inequality tells us that $y_2 > 0$, we can divide everything in the first inequality through by $y_2$ and not have to worry about dividing by zero or flipping inequalities because of dividing by a negative. We now have

$$0 < y_1 < 1, \qquad 0 < y_2 < 1.$$

I'm going to replace the indicators in the pdf and, at the same time, drop the absolute value, since we now know that $y_2 > 0$. The final joint pdf is then

$$f_{Y_1,Y_2}(y_1,y_2) = 8y_1y_2^3\,I_{(0,1)}(y_1)\,I_{(0,1)}(y_2).$$

Finally, we only wanted the pdf of $Y_1 = X_1/X_2$, so we marginalize out $y_2$:

$$f_{Y_1}(y_1) = \int_{-\infty}^{\infty} f_{Y_1,Y_2}(y_1,y_2)\,dy_2 = \int_0^1 8y_1y_2^3\,I_{(0,1)}(y_1)\,dy_2 = 2y_1y_2^4\,I_{(0,1)}(y_1)\Big|_{y_2=0}^{y_2=1} = 2y_1\,I_{(0,1)}(y_1).$$

(Note: We are done, since I didn't ask for a name, but this is a Beta(2,1) pdf!)
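To check problem 12 numerically we first need draws from the joint pdf $8x_1x_2$ on $0 < x_1 < x_2 < 1$. One way to get them (my own decomposition, not part of the solutions): integrating out $x_1$ shows the marginal of $X_2$ is $4x_2^3$, i.e. Beta(4,1), and given $X_2 = x_2$ the conditional density of $X_1$ is proportional to $x_1$ on $(0,x_2)$, so we can take $X_1 = X_2\sqrt{U}$ with $U \sim \text{unif}(0,1)$. A sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

x2 = rng.beta(4, 1, size=n)              # marginal of X2
x1 = x2 * np.sqrt(rng.uniform(size=n))   # conditional draw of X1 given X2

y1 = x1 / x2

# Y1 should be Beta(2, 1): P(Y1 <= t) = t^2 on (0, 1).
for t in [0.25, 0.5, 0.9]:
    print(t, (y1 <= t).mean(), t**2)
```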

13. Let $Y_1 = X_1/X_2$ and let $Y_2 = X_2$. We will find the joint pdf of $Y_1$ and $Y_2$ and then integrate out the $y_2$. By independence of the $X$'s,

$$f_{X_1,X_2}(x_1,x_2) = f_{X_1}(x_1)\,f_{X_2}(x_2) = 2e^{-2x_1}\,I_{(0,\infty)}(x_1)\cdot 2e^{-2x_2}\,I_{(0,\infty)}(x_2) = 4e^{-2(x_1+x_2)}\,I_{(0,\infty)}(x_1)\,I_{(0,\infty)}(x_2).$$

We have

$$y_1 = g_1(x_1,x_2) = x_1/x_2 \qquad\text{and}\qquad y_2 = g_2(x_1,x_2) = x_2,$$

which implies that

$$x_1 = g_1^{-1}(y_1,y_2) = y_1y_2 \qquad\text{and}\qquad x_2 = g_2^{-1}(y_1,y_2) = y_2.$$

The Jacobian of the transformation is

$$J = \det\begin{bmatrix} \partial x_1/\partial y_1 & \partial x_1/\partial y_2 \\ \partial x_2/\partial y_1 & \partial x_2/\partial y_2 \end{bmatrix} = \det\begin{bmatrix} y_2 & y_1 \\ 0 & 1 \end{bmatrix} = y_2.$$

So,

$$f_{Y_1,Y_2}(y_1,y_2) = f_{X_1,X_2}\big(g_1^{-1}(y_1,y_2),\,g_2^{-1}(y_1,y_2)\big)\,|J| = 4e^{-2(y_1y_2+y_2)}\,I_{(0,\infty)}(y_1y_2)\,I_{(0,\infty)}(y_2)\,|y_2|.$$

The first indicator says that $0 < y_1y_2 < \infty$, which means that we either have both $y_1 > 0$ and $y_2 > 0$, or we have both $y_1 < 0$ and $y_2 < 0$. However, the second indicator says that $y_2 > 0$, thereby ruling out the second ("negative-negative") possibility. Therefore, the product of indicators is equivalent to the product $I_{(0,\infty)}(y_1)\,I_{(0,\infty)}(y_2)$. Since $y_2$ is positive, we can drop the absolute value on $y_2$ in the joint pdf for $Y_1$ and $Y_2$. So,

$$f_{Y_1,Y_2}(y_1,y_2) = 4y_2\,e^{-2(y_1y_2+y_2)}\,I_{(0,\infty)}(y_1)\,I_{(0,\infty)}(y_2).$$

Now

$$f_{Y_1}(y_1) = \int_{-\infty}^{\infty} f_{Y_1,Y_2}(y_1,y_2)\,dy_2 = \int_0^\infty 4y_2\,e^{-2(y_1+1)y_2}\,I_{(0,\infty)}(y_1)\,dy_2 = 4\,I_{(0,\infty)}(y_1)\int_0^\infty y_2\,e^{-2(y_1+1)y_2}\,dy_2.$$

The integral is almost that of the pdf of the $\Gamma(2,\,2(y_1+1))$ distribution, and we could put in (and adjust for) the correct constants in order to get an integral of $1$.

To change things up a bit, note also that the integral is almost like the mean of an exponential random variable with rate $2(y_1+1)$ (which would be $1/[2(y_1+1)]$). We can put in the rate:

$$f_{Y_1}(y_1) = \frac{4\,I_{(0,\infty)}(y_1)}{2(y_1+1)}\int_0^\infty y_2\cdot 2(y_1+1)\,e^{-2(y_1+1)y_2}\,dy_2 = \frac{4\,I_{(0,\infty)}(y_1)}{2(y_1+1)}\cdot\frac{1}{2(y_1+1)} = \frac{1}{(y_1+1)^2}\,I_{(0,\infty)}(y_1).$$

This is the pdf of the Pareto distribution with parameter $\gamma = 1$. That is, $Y_1 = X_1/X_2 \sim \text{Pareto}(1)$.

14. (a)

$$E[X] = \int_{-\infty}^{\infty} x\,f_X(x)\,dx = \int_0^\infty x\cdot\frac{1}{\Gamma(\alpha)}\beta^\alpha x^{\alpha-1}e^{-\beta x}\,dx = \int_0^\infty \frac{1}{\Gamma(\alpha)}\beta^\alpha x^{\alpha}e^{-\beta x}\,dx.$$

This is almost like a $\Gamma(\alpha+1,\beta)$ pdf. To get it just right, we need $\Gamma(\alpha+1)$ in place of $\Gamma(\alpha)$, and we need another $\beta$. To this end, let's write it as

$$E[X] = \frac{\Gamma(\alpha+1)}{\Gamma(\alpha)\,\beta}\int_0^\infty \frac{1}{\Gamma(\alpha+1)}\beta^{\alpha+1}x^{\alpha}e^{-\beta x}\,dx.$$

Now we are integrating the $\Gamma(\alpha+1,\beta)$ pdf over $0$ to $\infty$. This must be $1$. So, the answer is

$$E[X] = \frac{\Gamma(\alpha+1)}{\Gamma(\alpha)\,\beta} = \frac{\alpha\,\Gamma(\alpha)}{\Gamma(\alpha)\,\beta} = \frac{\alpha}{\beta}.$$

(b) The mgf is

$$M_X(t) = \left[\frac{\beta}{\beta-t}\right]^\alpha.$$

The derivative with respect to $t$ is

$$M_X'(t) = \alpha\left[\frac{\beta}{\beta-t}\right]^{\alpha-1}\cdot\frac{\beta}{(\beta-t)^2},$$

so

$$E[X] = M_X'(0) = \alpha\cdot\frac{\beta}{\beta^2} = \frac{\alpha}{\beta}.$$
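Both results on this page are easy to spot-check with a short simulation. A sketch (the values of $\alpha$, $\beta$, the seed, and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

# Problem 13: the ratio of two iid exp(rate 2) variables should be Pareto(1),
# i.e. P(Y1 > t) = 1/(1 + t).
x1 = rng.exponential(scale=1/2, size=n)
x2 = rng.exponential(scale=1/2, size=n)
y1 = x1 / x2
for t in [0.5, 1.0, 4.0]:
    print(t, (y1 > t).mean(), 1 / (1 + t))

# Problem 14: the mean of Gamma(alpha, beta) (rate parameterization) is alpha/beta.
alpha, beta = 3.0, 2.0
g = rng.gamma(shape=alpha, scale=1/beta, size=n)
print(g.mean(), alpha / beta)
```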

15. By independence of the $X$'s,

$$f_{X_1,X_2}(x_1,x_2) = f_{X_1}(x_1)\,f_{X_2}(x_2) = e^{-x_1}\,I_{(0,\infty)}(x_1)\cdot e^{-x_2}\,I_{(0,\infty)}(x_2) = e^{-(x_1+x_2)}\,I_{(0,\infty)}(x_1)\,I_{(0,\infty)}(x_2).$$

We have

$$y_1 = g_1(x_1,x_2) = x_1 - x_2 \qquad\text{and}\qquad y_2 = g_2(x_1,x_2) = x_1 + x_2,$$

which implies that

$$x_1 = g_1^{-1}(y_1,y_2) = \tfrac12(y_1+y_2) \qquad\text{and}\qquad x_2 = g_2^{-1}(y_1,y_2) = \tfrac12(y_2-y_1).$$

The Jacobian of the transformation is

$$J = \det\begin{bmatrix} \partial x_1/\partial y_1 & \partial x_1/\partial y_2 \\ \partial x_2/\partial y_1 & \partial x_2/\partial y_2 \end{bmatrix} = \det\begin{bmatrix} 1/2 & 1/2 \\ -1/2 & 1/2 \end{bmatrix} = \frac14 + \frac14 = \frac12.$$

So,

$$f_{Y_1,Y_2}(y_1,y_2) = f_{X_1,X_2}\big(g_1^{-1}(y_1,y_2),\,g_2^{-1}(y_1,y_2)\big)\,|J| = \tfrac12\,e^{-\left(\frac12(y_1+y_2)+\frac12(y_2-y_1)\right)}\,I_{(0,\infty)}\big(\tfrac12(y_1+y_2)\big)\,I_{(0,\infty)}\big(\tfrac12(y_2-y_1)\big).$$

That exponent simplifies to $-y_2$. As for the indicators, the first gives

$$0 < \tfrac12(y_1+y_2) < \infty \iff 0 < y_1+y_2 < \infty \iff y_1 > -y_2,$$

which is shown as the shaded region in Figure 1. The second indicator gives $0 < y_2 - y_1 < \infty$, which implies that $y_1 < y_2$. This region is indicated by the horizontal-line shading of Figure 2. The intersection, where both indicators are "on" at the same time, is the upper "V" shape in Figure 2. This may be represented as $I_{(0,\infty)}(y_2)\,I_{(-y_2,\,y_2)}(y_1)$. Putting this all together, the joint pdf for $Y_1$ and $Y_2$ is

$$f_{Y_1,Y_2}(y_1,y_2) = \tfrac12\,e^{-y_2}\,I_{(-y_2,\,y_2)}(y_1)\,I_{(0,\infty)}(y_2).$$
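A small simulation sketch of this transformation (not part of the original solutions) can confirm the "V"-shaped support directly. As a bonus check, integrating the joint pdf over $y_2$ gives $f_{Y_1}(y_1) = \frac12 e^{-|y_1|}$, a standard Laplace marginal; that side computation is mine, not something the problem asked for.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

x1 = rng.exponential(size=n)
x2 = rng.exponential(size=n)
y1, y2 = x1 - x2, x1 + x2

# Support check: every draw should land in the region |y1| < y2.
print(np.all(np.abs(y1) < y2))   # True

# Laplace marginal check: P(Y1 > t) = (1/2) e^{-t} for t >= 0.
for t in [0.0, 1.0, 2.0]:
    print(t, (y1 > t).mean(), 0.5 * np.exp(-t))
```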

Figure 1: the region $0 < y_1 + y_2 < \infty$.

Figure 2: the region $0 < y_2 - y_1 < \infty$ on top of $0 < y_1 + y_2 < \infty$.

16. (a) There are many answers that will work here. Since $\lambda$ is the mean of the Poisson distribution, one obvious estimator is the sample mean $\bar X$, since it is always unbiased for the mean of the distribution. So, $\hat\lambda_1 = \bar X$. Note that for the Poisson distribution, $\lambda$ is also the variance of the distribution. Since we know that the sample variance $S^2$ is unbiased for the distribution variance, we have another unbiased estimator for $\lambda$: $\hat\lambda_2 = S^2$. HOWEVER, looking ahead to parts (b) and (c), I don't want to have to find the variance of $S^2$! So, I'll come up with another unbiased estimator.

How about just $X_1$? Let $\hat\lambda_2 = X_1$.

(b) It's boring, but it works:

$$\text{Var}[\hat\lambda_1] = \text{Var}[\bar X] = \frac{\text{Var}(X_1)}{n} = \frac{\lambda}{n}, \qquad \text{Var}[\hat\lambda_2] = \text{Var}[X_1] = \lambda.$$

So, the first estimator is better in terms of variance (smaller variance).

(c) There is nothing to do here! If the estimators are unbiased, then the MSE is the same as the variance! So, the first estimator is better in terms of MSE (smaller MSE).

17. $\bar X$ is always unbiased for the true mean, which is in this case $\theta = 1/\lambda$. So, $\hat\theta = \bar X$.

18. Based on the previous problem, we should probably try the estimator $1/\bar X$. Note that $E[1/\bar X] \ne 1/E[\bar X]$. Instead,

$$E[1/\bar X] = E\Big[n\Big/\textstyle\sum_{i=1}^n X_i\Big] = n\,E[1/Y],$$

where $Y \sim \Gamma(n,\lambda)$. So,

$$E\Big[\frac1Y\Big] = \int_{-\infty}^{\infty}\frac1y\,f_Y(y)\,dy = \int_0^\infty \frac1y\cdot\frac{1}{\Gamma(n)}\lambda^n y^{n-1}e^{-\lambda y}\,dy = \int_0^\infty \frac{1}{\Gamma(n)}\lambda^n y^{n-2}e^{-\lambda y}\,dy = \frac{\Gamma(n-1)}{\Gamma(n)}\,\lambda\underbrace{\int_0^\infty \frac{1}{\Gamma(n-1)}\lambda^{n-1}y^{n-2}e^{-\lambda y}\,dy}_{\Gamma(n-1,\,\lambda)\text{ pdf, integrates to }1} = \frac{(n-2)!}{(n-1)!}\,\lambda = \frac{\lambda}{n-1}.$$

So,

$$E[1/\bar X] = n\,E[1/Y] = \frac{n}{n-1}\,\lambda.$$

To get an unbiased estimator, we use

$$\hat\lambda = \frac{n-1}{n}\cdot\frac{1}{\bar X}.$$
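Problem 18's bias factor $n/(n-1)$ shows up clearly in simulation. A sketch (the rate $\lambda = 2$, the sample size $n = 5$, and the replication count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)
lam, n, reps = 2.0, 5, 200_000

x = rng.exponential(scale=1/lam, size=(reps, n))
xbar = x.mean(axis=1)

print((1 / xbar).mean(), n / (n - 1) * lam)   # biased upward by n/(n-1)
print(((n - 1) / n / xbar).mean(), lam)       # corrected estimator: ~ lambda
```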

19. The real numbers $a_n$ converge to the real number $a$ if, for any $\varepsilon > 0$, there exists a natural number $N$ such that $|a_n - a| < \varepsilon$ for all $n \ge N$. Thus, for all $n \ge N$ we have

$$P(|a_n - a| < \varepsilon) = 1,$$

and therefore

$$\lim_{n\to\infty} P(|a_n - a| < \varepsilon) = 1.$$

So, $a_n \stackrel{P}{\to} a$, as desired.

20. We know that, for any distribution with finite variance, the sample mean $\bar X$ converges in probability to the mean $\mu$ of the distribution. In this case, $\mu = E[X_1] = \lambda$. So, $\bar X \stackrel{P}{\to} \lambda$.

21. (a) Um... this looks familiar. (See the previous problem.) We know that $\bar X \stackrel{P}{\to} \lambda$, since we know that, for any distribution with finite variance, the sample mean converges in probability to the mean of the distribution. Since $g(x) = x^2$ is a continuous function, we then have that

$$\bar X^2 = g(\bar X) \stackrel{P}{\to} g(\lambda) = \lambda^2.$$

(b) We will try That Theorem.

$$E[Y_n] = E\left[\frac{\sum_{i=1}^n iX_i}{n(n+1)}\right] = \frac{1}{n(n+1)}\sum_{i=1}^n i\,E[X_i] \stackrel{Poisson}{=} \frac{1}{n(n+1)}\sum_{i=1}^n i\lambda = \frac{1}{n(n+1)}\cdot\frac{n(n+1)}{2}\,\lambda = \frac{\lambda}{2}.$$

We wanted to see $\lambda$. Note, though, that $2Y_n$ is unbiased for $\lambda$. Also, note that

$$\text{Var}[2Y_n] = 4\,\text{Var}[Y_n] = \frac{4}{n^2(n+1)^2}\,\text{Var}\left[\sum_{i=1}^n iX_i\right] \stackrel{indep}{=} \frac{4}{n^2(n+1)^2}\sum_{i=1}^n i^2\,\text{Var}[X_i] \stackrel{Poisson}{=} \frac{4\lambda}{n^2(n+1)^2}\sum_{i=1}^n i^2 = \frac{4\lambda}{n^2(n+1)^2}\cdot\frac{n(n+1)(2n+1)}{6} = \frac{2(2n+1)}{3n(n+1)}\,\lambda.$$

Since this goes to $0$ as $n\to\infty$, we have, by That Theorem, that $2Y_n \stackrel{P}{\to} \lambda$. Thus, we get $Y_n \stackrel{P}{\to} \lambda/2$. We can see this either as

$$Y_n = \underbrace{\tfrac12}_{\to\,1/2}\cdot\underbrace{2Y_n}_{\stackrel{P}{\to}\,\lambda} \stackrel{P}{\to} \tfrac12\,\lambda$$

(since we know that $X_n \stackrel{P}{\to} a$ and $Y_n \stackrel{P}{\to} b$ implies that $X_nY_n \stackrel{P}{\to} ab$; combine this with problem 19), OR we can put $2Y_n \stackrel{P}{\to} \lambda$ through the continuous function $g(x) = x/2$.

22. The cdf for this distribution is $F(x) = 1 - e^{-(x-\theta)}$ for $x > \theta$. The cdf for the minimum is

$$F_{X_{(1)}}(x) = P(X_{(1)} \le x) = P\big(\min(X_1,X_2,\dots,X_n) \le x\big) = 1 - P\big(\min(X_1,X_2,\dots,X_n) > x\big) \stackrel{iid}{=} 1 - \big[P(X_1 > x)\big]^n = 1 - e^{-n(x-\theta)}.$$

Taking the derivative with respect to $x$, we see that $X_{(1)}$ has an exponential distribution with rate $n$ that has been shifted $\theta$ units to the right. Thus,

$$E[X_{(1)}] = \frac1n + \theta.$$

This is not an unbiased estimator, which is sad, because we'd like to use That Theorem to show convergence in probability, but That Theorem requires an unbiased estimator. Consider for a moment the estimator $\tilde\theta_2 = X_{(1)} - 1/n$. This is an unbiased estimator with variance

$$\text{Var}[\tilde\theta_2] = \text{Var}[X_{(1)} - 1/n] = \text{Var}[X_{(1)}] = \frac{1}{n^2}.$$

(Here I have twice used the fact that adding or subtracting a constant to a random variable or a distribution does not affect its variance!) Since $E[\tilde\theta_2] = \theta$ and $\text{Var}[\tilde\theta_2] \to 0$ as $n\to\infty$, we know, by That Theorem, that

$$\tilde\theta_2 = X_{(1)} - \frac1n \stackrel{P}{\to} \theta.$$

So, finally, we conclude that

$$X_{(1)} = \tilde\theta_2 + \frac1n \stackrel{P}{\to} \theta + 0 = \theta,$$

as desired. (Here we are using the theorem that said that $X_n \stackrel{P}{\to} a$ and $Y_n \stackrel{P}{\to} b$ implies $X_n + Y_n \stackrel{P}{\to} a + b$, and problem 19 on this review, which addresses convergence in probability for non-random things.)

23.

$$F_{Y_n}(y) = P(Y_n \le y) = P\big(n\ln(X_{(1)}+1) \le y\big) = P\big(X_{(1)} \le e^{y/n}-1\big).$$

Since

$$P(X_{(1)} \le x) = 1 - P(X_{(1)} > x) \stackrel{iid}{=} 1 - \big[P(X_1 > x)\big]^n \stackrel{Pareto}{=} 1 - \left[\frac{1}{(1+x)^\gamma}\right]^n = 1 - (1+x)^{-n\gamma},$$

we have that

$$F_{Y_n}(y) = P\big(X_{(1)} \le e^{y/n}-1\big) = 1 - \big(1 + e^{y/n} - 1\big)^{-n\gamma} = 1 - \big(e^{y/n}\big)^{-n\gamma} = 1 - e^{-\gamma y}.$$

Finally,

$$\lim_{n\to\infty} F_{Y_n}(y) = \lim_{n\to\infty}\big[1 - e^{-\gamma y}\big] = 1 - e^{-\gamma y},$$

which is the cdf of an exponential distribution with rate $\gamma$. So,

$$Y_n \stackrel{d}{\to} Y, \quad\text{where } Y \sim \exp(\text{rate}=\gamma).$$
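Since the solution shows that $F_{Y_n}(y) = 1 - e^{-\gamma y}$ exactly for every $n$ (not just in the limit), a simulation at any fixed $n$ should already match the exponential cdf. A sketch using inverse-cdf sampling for the Pareto($\gamma$), with arbitrary parameter choices:

```python
import numpy as np

rng = np.random.default_rng(7)
gamma, n, reps = 1.5, 50, 200_000

# Pareto(gamma) draws by inverse cdf: F(x) = 1 - (1 + x)^(-gamma).
u = rng.uniform(size=(reps, n))
x = u ** (-1 / gamma) - 1

y_n = n * np.log(x.min(axis=1) + 1)   # problem 23's Y_n

# Compare with exp(rate gamma): P(Y_n > t) should be e^(-gamma t).
for t in [0.5, 1.0, 2.0]:
    print(t, (y_n > t).mean(), np.exp(-gamma * t))
```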

24.

$$F_{Y_n}(y) = P(Y_n \le y) = P(nX_{(1)} \le y) = P(X_{(1)} \le y/n) = 1 - P(X_{(1)} > y/n) \stackrel{iid}{=} 1 - \big[P(X_1 > y/n)\big]^n = 1 - \big[e^{-\lambda y/n}\big]^n = 1 - e^{-\lambda y}.$$

So,

$$\lim_{n\to\infty} F_{Y_n}(y) = \lim_{n\to\infty}\big[1 - e^{-\lambda y}\big] = 1 - e^{-\lambda y},$$

which is the cdf of an exponential distribution with rate $\lambda$. Therefore,

$$Y_n \stackrel{d}{\to} Y, \quad\text{where } Y \sim \exp(\text{rate}=\lambda).$$

25. Using mgfs,

$$M_Y(t) = E[e^{tY}] = E\Big[e^{t\sum_i X_i}\Big] = E\Big[\prod_{i=1}^n e^{tX_i}\Big] \stackrel{indep}{=} \prod_{i=1}^n E[e^{tX_i}] = \prod_{i=1}^n M_{X_i}(t) = \prod_{i=1}^n \left(\frac{1/2}{1/2-t}\right)^{n_i/2} = \left(\frac{1/2}{1/2-t}\right)^{(\sum_{i=1}^n n_i)/2}.$$

This is the mgf of the $\chi^2\big(\sum_{i=1}^n n_i\big)$ distribution. Thus,

$$Y \sim \chi^2\Big(\sum_{i=1}^n n_i\Big).$$

26. (a)

$$S^2 = \frac{\sum_{i=1}^n (X_i - \bar X)^2}{n-1}$$

(b)

$$E[(X_i - \bar X)^2] = E[X_i^2 - 2X_i\bar X + \bar X^2] = E[X_i^2] - 2E[X_i\bar X] + E[\bar X^2].$$

We know that

$$E[X_i^2] = \text{Var}[X_i] + (E[X_i])^2 = \sigma^2 + \mu^2.$$

We also know that

$$E[\bar X^2] = \text{Var}[\bar X] + (E[\bar X])^2 = \frac{\sigma^2}{n} + \mu^2.$$

The other term is a bit tricky, since $X_i$ is not independent of $\bar X$. $X_i$ is, however, independent of most of the terms in $\bar X$:

$$E[X_i\bar X] = E\left[X_i\cdot\frac1n\sum_{j=1}^n X_j\right] = \frac1n\,E[X_1X_i + \cdots + X_{i-1}X_i + X_i^2 + X_iX_{i+1} + \cdots + X_nX_i].$$

We already know that $E[X_i^2] = \sigma^2 + \mu^2$. The other $n-1$ terms have the form $E[X_iX_j]$ where $j \ne i$. In this case,

$$E[X_iX_j] \stackrel{indep}{=} E[X_i]\,E[X_j] = \mu\cdot\mu = \mu^2.$$

Thus,

$$E[X_i\bar X] = \frac1n\big[(n-1)\mu^2 + (\sigma^2+\mu^2)\big] = \frac1n\big[n\mu^2 + \sigma^2\big] = \mu^2 + \sigma^2/n.$$

Putting it all together, we have

$$E[(X_i - \bar X)^2] = E[X_i^2] - 2E[X_i\bar X] + E[\bar X^2] = \sigma^2 + \mu^2 - 2[\mu^2 + \sigma^2/n] + \sigma^2/n + \mu^2 = \frac{(n-1)\sigma^2}{n}.$$

(c) Thus,

$$E[S^2] = E\left[\frac{\sum_{i=1}^n (X_i - \bar X)^2}{n-1}\right] = \frac{1}{n-1}\sum_{i=1}^n E[(X_i - \bar X)^2] = \frac{1}{n-1}\cdot n\cdot\frac{(n-1)\sigma^2}{n} = \sigma^2.$$
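Problem 26's conclusion, that dividing by $n-1$ is exactly what makes $S^2$ unbiased, is easy to see in a short simulation (the normal data and the particular $\mu$, $\sigma$, $n$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(8)
mu, sigma, n, reps = 1.0, 2.0, 8, 200_000

x = rng.normal(mu, sigma, size=(reps, n))

s2 = x.var(axis=1, ddof=1)   # divides by n - 1, as in problem 26(a)
print(s2.mean(), sigma**2)   # approximately sigma^2 = 4.0: unbiased

v = x.var(axis=1, ddof=0)    # divides by n instead
print(v.mean())              # approximately (n-1)/n * sigma^2, as in part (b)
```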
