Exercises and Answers to Chapter 1
1 The continuous type of random variable $X$ has the following density function:
$$ f(x) = \begin{cases} a - x, & \text{if } 0 < x < a, \\ 0, & \text{otherwise.} \end{cases} $$
Answer the following questions.
(1) Find $a$.
(2) Obtain mean and variance of $X$.
(3) When $Y = X^2$, derive the density function of $Y$.

[Answer]
(1) From the property of the density function, i.e., $\int_{-\infty}^{\infty} f(x)\,dx = 1$, we need to have:
$$ \int_{-\infty}^{\infty} f(x)\,dx = \int_0^a (a - x)\,dx = \Big[ ax - \frac{1}{2}x^2 \Big]_0^a = \frac{1}{2}a^2 = 1. $$
Therefore, $a = \sqrt{2}$ is obtained, taking into account $a > 0$.
(2) The definitions of mean and variance are given by $E(X) = \int_{-\infty}^{\infty} x f(x)\,dx$ and $V(X) = \int_{-\infty}^{\infty} (x - \mu)^2 f(x)\,dx$, where $\mu = E(X)$. Therefore, the mean of $X$ is:
$$ E(X) = \int_{-\infty}^{\infty} x f(x)\,dx = \int_0^a x(a - x)\,dx = \Big[ \frac{a}{2}x^2 - \frac{1}{3}x^3 \Big]_0^a = \frac{1}{6}a^3 = \frac{\sqrt{2}}{3}, $$
where $a = \sqrt{2}$ is substituted. The variance of $X$ is:
$$ V(X) = \int_{-\infty}^{\infty} (x - \mu)^2 f(x)\,dx = \int_{-\infty}^{\infty} x^2 f(x)\,dx - \mu^2 = \int_0^a x^2 (a - x)\,dx - \mu^2 = \Big[ \frac{a}{3}x^3 - \frac{1}{4}x^4 \Big]_0^a - \mu^2 = \frac{1}{12}a^4 - \mu^2 = \frac{1}{3} - \frac{2}{9} = \frac{1}{9}. $$
(3) Let $f(x)$ be the density function of $X$ and $F(x)$ be the distribution function of $X$. And let $g(y)$ be the density function of $Y$ and $G(y)$ be the distribution function of $Y$. Using $Y = X^2$, we obtain:
$$ G(y) = P(Y < y) = P(X^2 < y) = P(-\sqrt{y} < X < \sqrt{y}) = F(\sqrt{y}) - F(-\sqrt{y}) = F(\sqrt{y}), $$
because $F(-\sqrt{y}) = 0$ for $y > 0$. Moreover, from the relationship between the density and the distribution functions, we obtain the following:
$$ g(y) = \frac{dG(y)}{dy} = \frac{dF(\sqrt{y})}{dy} = \frac{dF(x)}{dx}\bigg|_{x = \sqrt{y}} \cdot \frac{d\sqrt{y}}{dy} = f(\sqrt{y})\,\frac{1}{2\sqrt{y}} = \frac{\sqrt{2} - \sqrt{y}}{2\sqrt{y}}, \qquad \text{for } 0 < y < 2. $$
The range of $y$ is obtained as: $0 < x < \sqrt{2} \iff 0 < y < 2$.
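As a quick numerical sanity check of (1) and (2), the following Python sketch (illustrative, not part of the original solution) draws from $f(x) = \sqrt{2} - x$ on $(0, \sqrt{2})$ by rejection sampling and compares sample moments with $E(X) = \sqrt{2}/3$, $V(X) = 1/9$ and $E(Y) = E(X^2) = 1/3$:

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.sqrt(2.0)  # the normalizing constant found in (1)

# Rejection sampling: propose X ~ U(0, a) and accept with probability
# f(x)/f(0), since f(x) = a - x attains its maximum f(0) = a.
x = rng.uniform(0.0, a, size=2_000_000)
u = rng.uniform(0.0, 1.0, size=x.size)
sample = x[u <= (a - x) / a]

print(sample.mean(), np.sqrt(2.0) / 3.0)   # E(X) = sqrt(2)/3 ~ 0.4714
print(sample.var(), 1.0 / 9.0)             # V(X) = 1/9 ~ 0.1111
print((sample**2).mean(), 1.0 / 3.0)       # E(Y) = V(X) + mu^2 = 1/3
```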
2 The continuous type of random variable $X$ has the following density function:
$$ f(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}x^2}. $$
Answer the following questions.
(1) Compute mean and variance of $X$.
(2) When $Y = X^2$, compute mean and variance of $Y$.
(3) When $Z = e^X$, obtain mean and variance of $Z$.

[Answer]
(1) The mean of $X$ is:
$$ E(X) = \int_{-\infty}^{\infty} x f(x)\,dx = \int_{-\infty}^{\infty} x\,\frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}x^2}\,dx = \frac{1}{\sqrt{2\pi}}\Big[ -e^{-\frac{1}{2}x^2} \Big]_{-\infty}^{\infty} = 0. $$
In the third equality, we utilize $\frac{d}{dx}e^{-\frac{1}{2}x^2} = -x e^{-\frac{1}{2}x^2}$. The variance of $X$ is:
$$ V(X) = \int x^2 f(x)\,dx - \mu^2 = \int x \cdot x\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}\,dx = \Big[ -x\,\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2} \Big]_{-\infty}^{\infty} + \int \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}\,dx = 0 + 1 = 1. $$
Here the integration-by-parts formula
$$ \int_a^b h'(x)g(x)\,dx = \big[h(x)g(x)\big]_a^b - \int_a^b h(x)g'(x)\,dx $$
is used, where $g(x) = x$ and $h'(x) = x\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}$ are set. In the first term we use $\lim_{x\to\pm\infty} x e^{-\frac{1}{2}x^2} = 0$, and in the second term we utilize the property that the integration of the density function is equal to one.
(2) When $Y = X^2$, the mean of $Y$ is $E(Y) = E(X^2) = V(X) + \mu_x^2 = 1$. From (1), note that $V(X) = 1$ and $\mu_x = E(X) = 0$. The variance of $Y$ is:
$$ V(Y) = E(Y^2) - \mu_y^2 = E(X^4) - \mu_y^2 = \int x^3 \cdot x\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}\,dx - \mu_y^2 = \Big[ -x^3\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2} \Big]_{-\infty}^{\infty} + \int 3x^2\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}\,dx - \mu_y^2 = 3E(X^2) - \mu_y^2 = 3 - 1 = 2, $$
where $\mu_y = E(Y) = 1$, integration by parts is applied with $g(x) = x^3$ and $h'(x) = x\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}$, and $\lim_{x\to\pm\infty} x^3 e^{-\frac{1}{2}x^2} = 0$ is used.
(3) For $Z = e^X$, the mean of $Z$ is:
$$ E(Z) = E(e^X) = \int \frac{1}{\sqrt{2\pi}} e^{x - \frac{1}{2}x^2}\,dx = \int \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}(x^2 - 2x)}\,dx = e^{\frac{1}{2}} \int \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}(x-1)^2}\,dx = e^{\frac{1}{2}} = \sqrt{e}. $$
Here $\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(x-1)^2}$ is the density of a normal distribution with mean one and variance one, and accordingly its integration is equal to one. The variance of $Z$ is:
$$ V(Z) = E(Z^2) - \mu_z^2 = E(e^{2X}) - e = \int \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}(x^2 - 4x)}\,dx - e = e^2 \int \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}(x-2)^2}\,dx - e = e^2 - e, $$
where $\mu_z = E(Z) = \sqrt{e}$, and $\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(x-2)^2}$ is the density of a normal distribution with mean two and variance one, so that its integration is equal to one.
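Both moments of $Z$ admit a quick simulation check; the sketch below (illustrative, with an arbitrary seed) compares Monte Carlo estimates against $\sqrt{e}$ and $e^2 - e$:

```python
import numpy as np

rng = np.random.default_rng(1)
z = np.exp(rng.normal(size=5_000_000))  # Z = e^X with X ~ N(0, 1)

print(z.mean(), np.sqrt(np.e))          # E(Z) = e^{1/2} ~ 1.6487
print(z.var(), np.e**2 - np.e)          # V(Z) = e^2 - e ~ 4.6708
```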
3 The continuous type of random variable $X$ has the following density function:
$$ f(x) = \begin{cases} \dfrac{1}{\lambda} e^{-\frac{x}{\lambda}}, & \text{if } 0 < x, \\ 0, & \text{otherwise.} \end{cases} $$
Answer the following questions.
(1) Compute mean and variance of $X$.
(2) Derive the moment-generating function of $X$.
(3) Let $X_1, X_2, \ldots, X_n$ be random variables which are mutually independently distributed and have the density function shown above. Prove that the density function of $Y = X_1 + X_2 + \cdots + X_n$ is given by the chi-square distribution with $2n$ degrees of freedom when $\lambda = 2$. Note that the chi-square distribution with $m$ degrees of freedom is given by:
$$ f(x) = \begin{cases} \dfrac{1}{2^{m/2}\Gamma(\frac{m}{2})} x^{\frac{m}{2}-1} e^{-\frac{x}{2}}, & \text{if } x > 0, \\ 0, & \text{otherwise.} \end{cases} $$

[Answer]
(1) The mean of $X$ is:
$$ E(X) = \int_0^\infty x\,\frac{1}{\lambda}e^{-\frac{x}{\lambda}}\,dx = \Big[ -x e^{-\frac{x}{\lambda}} \Big]_0^\infty + \int_0^\infty e^{-\frac{x}{\lambda}}\,dx = \Big[ -\lambda e^{-\frac{x}{\lambda}} \Big]_0^\infty = \lambda, $$
using integration by parts with $g(x) = x$ and $h'(x) = \frac{1}{\lambda}e^{-\frac{x}{\lambda}}$, together with $\lim_{x\to\infty} x e^{-\frac{x}{\lambda}} = 0$ and $\lim_{x\to\infty} e^{-\frac{x}{\lambda}} = 0$. The variance of $X$ is:
$$ V(X) = E(X^2) - \mu^2 = \int_0^\infty x^2\,\frac{1}{\lambda}e^{-\frac{x}{\lambda}}\,dx - \mu^2 = \Big[ -x^2 e^{-\frac{x}{\lambda}} \Big]_0^\infty + \int_0^\infty 2x e^{-\frac{x}{\lambda}}\,dx - \mu^2 = 2\lambda E(X) - \mu^2 = 2\lambda^2 - \lambda^2 = \lambda^2, $$
where $\mu = E(X) = \lambda$ and $\lim_{x\to\infty} x^2 e^{-\frac{x}{\lambda}} = 0$ are used.
(2) The moment-generating function of $X$ is:
$$ \phi(\theta) = E(e^{\theta X}) = \int_0^\infty e^{\theta x}\,\frac{1}{\lambda}e^{-\frac{x}{\lambda}}\,dx = \frac{1}{\lambda}\cdot\frac{1}{\frac{1}{\lambda}-\theta} \int_0^\infty \Big(\frac{1}{\lambda}-\theta\Big) e^{-(\frac{1}{\lambda}-\theta)x}\,dx = \frac{1}{1-\lambda\theta}, \qquad \theta < \frac{1}{\lambda}. $$
In the last equality, since $(\frac{1}{\lambda}-\theta)e^{-(\frac{1}{\lambda}-\theta)x}$ is a density function (the exponential density above with $\frac{1}{\lambda}$ replaced by $\frac{1}{\lambda}-\theta$), its integration is one.
(3) We want to show that the moment-generating function of $Y$ is equivalent to that of a chi-square distribution with $2n$ degrees of freedom. Because $X_1, X_2, \ldots, X_n$ are mutually independently distributed, the moment-generating function of each $X_i$ is, for $\lambda = 2$,
$$ \phi_i(\theta) = \frac{1}{1-2\theta}, $$
which corresponds to the case $\lambda = 2$ of (2). For $\lambda = 2$, the moment-generating function of $Y = X_1 + X_2 + \cdots + X_n$ is:
$$ \phi_y(\theta) = E(e^{\theta Y}) = E(e^{\theta(X_1 + \cdots + X_n)}) = E(e^{\theta X_1})\cdots E(e^{\theta X_n}) = \phi_1(\theta)\cdots\phi_n(\theta) = \big(\phi(\theta)\big)^n = \Big(\frac{1}{1-2\theta}\Big)^n. $$
A chi-square distribution with $m$ degrees of freedom is given by $f(x) = \frac{1}{2^{m/2}\Gamma(\frac{m}{2})} x^{\frac{m}{2}-1} e^{-\frac{x}{2}}$ for $x > 0$. The moment-generating function of this density, $\phi_\chi(\theta)$, is:
$$ \phi_\chi(\theta) = E(e^{\theta X}) = \int_0^\infty e^{\theta x}\,\frac{1}{2^{m/2}\Gamma(\frac{m}{2})} x^{\frac{m}{2}-1} e^{-\frac{x}{2}}\,dx = \int_0^\infty \frac{1}{2^{m/2}\Gamma(\frac{m}{2})} x^{\frac{m}{2}-1} e^{-\frac{(1-2\theta)x}{2}}\,dx. $$
Substituting $y = (1-2\theta)x$:
$$ \phi_\chi(\theta) = \int_0^\infty \frac{1}{2^{m/2}\Gamma(\frac{m}{2})} \Big(\frac{y}{1-2\theta}\Big)^{\frac{m}{2}-1} e^{-\frac{y}{2}}\,\frac{dy}{1-2\theta} = \Big(\frac{1}{1-2\theta}\Big)^{\frac{m}{2}} \int_0^\infty \frac{1}{2^{m/2}\Gamma(\frac{m}{2})} y^{\frac{m}{2}-1} e^{-\frac{y}{2}}\,dy = \Big(\frac{1}{1-2\theta}\Big)^{\frac{m}{2}}, $$
since the function in the last integration corresponds to the chi-square density with $m$ degrees of freedom, whose integration is one. Thus $\phi_y(\theta) = (1-2\theta)^{-n} = (1-2\theta)^{-2n/2}$ is equivalent to $\phi_\chi(\theta)$ for $m = 2n$. That is, $\phi_y(\theta)$ is the moment-generating function of a chi-square distribution with $2n$ degrees of freedom. Therefore, $Y \sim \chi^2(2n)$.
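The conclusion $Y \sim \chi^2(2n)$ is easy to verify empirically: sums of $n$ independent exponentials with $\lambda = 2$ should pass a goodness-of-fit test against $\chi^2(2n)$. A minimal illustrative sketch using numpy and scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 5
# Each X_i has density (1/2) e^{-x/2}, i.e. an exponential with scale 2.
y = rng.exponential(scale=2.0, size=(100_000, n)).sum(axis=1)

print(y.mean(), 2 * n)                         # chi2(2n) mean
print(y.var(), 4 * n)                          # chi2(2n) variance = 2*(2n)
print(stats.kstest(y, "chi2", args=(2 * n,)))  # KS test against chi2(2n)
```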
4 The continuous type of random variable $X$ has the following density function:
$$ f(x) = \begin{cases} 1, & \text{if } 0 < x < 1, \\ 0, & \text{otherwise.} \end{cases} $$
Answer the following questions.
(1) Compute mean and variance of $X$.
(2) When $Y = -2\log X$, derive the moment-generating function of $Y$. Note that the log represents the natural logarithm (i.e., $y = \log x$ is equivalent to $x = e^y$).
(3) Let $Y_1$ and $Y_2$ be random variables which have the density function obtained in (2). Suppose that $Y_1$ is independent of $Y_2$. When $Z = Y_1 + Y_2$, compute the density function of $Z$.

[Answer]
(1) The mean of $X$ is $E(X) = \int_0^1 x\,dx = [\frac{1}{2}x^2]_0^1 = \frac{1}{2}$. The variance of $X$ is $V(X) = \int_0^1 x^2\,dx - \mu^2 = [\frac{1}{3}x^3]_0^1 - (\frac{1}{2})^2 = \frac{1}{3} - \frac{1}{4} = \frac{1}{12}$.
(2) For $Y = -2\log X$, we obtain the moment-generating function of $Y$:
$$ \phi_y(\theta) = E(e^{\theta Y}) = E(e^{-2\theta\log X}) = E(X^{-2\theta}) = \int_0^1 x^{-2\theta}\,dx = \Big[\frac{x^{1-2\theta}}{1-2\theta}\Big]_0^1 = \frac{1}{1-2\theta}, \qquad \theta < \frac{1}{2}. $$
(3) For $Z = Y_1 + Y_2$ with $Y_1$ independent of $Y_2$, the moment-generating function of $Z$ is:
$$ \phi_z(\theta) = E(e^{\theta Z}) = E(e^{\theta(Y_1+Y_2)}) = E(e^{\theta Y_1})E(e^{\theta Y_2}) = \big(\phi_y(\theta)\big)^2 = \Big(\frac{1}{1-2\theta}\Big)^2 = \Big(\frac{1}{1-2\theta}\Big)^{4/2}, $$
which is equivalent to the moment-generating function of the chi-square distribution with 4 degrees of freedom. Therefore, $Z \sim \chi^2(4)$. Note that the chi-square density with $n$ degrees of freedom is $f(x) = \frac{1}{2^{n/2}\Gamma(\frac{n}{2})} x^{\frac{n}{2}-1} e^{-\frac{x}{2}}$ for $x > 0$ and $0$ otherwise, with moment-generating function $\phi(\theta) = (1-2\theta)^{-n/2}$.
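The fact that $-2\log U \sim \chi^2(2)$ for uniform $U$, and hence $Z \sim \chi^2(4)$, can be confirmed numerically; a minimal illustrative sketch:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
u = rng.uniform(size=(200_000, 2))
z = (-2.0 * np.log(u)).sum(axis=1)  # Z = Y1 + Y2 with Y_i = -2 log U_i

print(z.mean(), 4.0)                # chi2(4) mean = 4
print(z.var(), 8.0)                 # chi2(4) variance = 2 * 4
print(stats.kstest(z, "chi2", args=(4,)))
```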
5 The continuous type of random variable $X$ has the following density function:
$$ f(x) = \begin{cases} \dfrac{1}{2^{n/2}\Gamma(\frac{n}{2})} x^{\frac{n}{2}-1} e^{-\frac{x}{2}}, & \text{if } x > 0, \\ 0, & \text{otherwise.} \end{cases} $$
$\Gamma(a)$ is called the gamma function, defined as $\Gamma(a) = \int_0^\infty x^{a-1} e^{-x}\,dx$. Answer the following questions.
(1) What are mean and variance of $X$?
(2) Compute the moment-generating function of $X$.

[Answer]
(1) For the mean:
$$ E(X) = \int_0^\infty x f(x)\,dx = \int_0^\infty \frac{1}{2^{n/2}\Gamma(\frac{n}{2})} x^{\frac{n+2}{2}-1} e^{-\frac{x}{2}}\,dx = \frac{2^{(n+2)/2}\Gamma(\frac{n+2}{2})}{2^{n/2}\Gamma(\frac{n}{2})} \int_0^\infty \frac{1}{2^{(n+2)/2}\Gamma(\frac{n+2}{2})} x^{\frac{n+2}{2}-1} e^{-\frac{x}{2}}\,dx = 2\cdot\frac{n}{2} = n. $$
Note that $\Gamma(s+1) = s\Gamma(s)$, $\Gamma(1) = 1$, and $\Gamma(\frac{1}{2}) = \sqrt{\pi}$. Using $n \to n+2$, from the property of the density function we have $\int_0^\infty \frac{1}{2^{n/2}\Gamma(\frac{n}{2})} x^{\frac{n}{2}-1} e^{-\frac{x}{2}}\,dx = 1$, which is utilized in the last equality (with $n$ replaced by $n+2$).
For the variance, from $V(X) = E(X^2) - \mu^2$ we compute $E(X^2)$ in the same way:
$$ E(X^2) = \int_0^\infty x^2 f(x)\,dx = \frac{2^{(n+4)/2}\Gamma(\frac{n+4}{2})}{2^{n/2}\Gamma(\frac{n}{2})} = 4\cdot\frac{n+2}{2}\cdot\frac{n}{2} = n(n+2), $$
where $n \to n+4$ is set. Therefore, $V(X) = n(n+2) - n^2 = 2n$ is obtained.
(2) The moment-generating function of $X$ is:
$$ \phi(\theta) = E(e^{\theta X}) = \int_0^\infty e^{\theta x}\,\frac{1}{2^{n/2}\Gamma(\frac{n}{2})} x^{\frac{n}{2}-1} e^{-\frac{x}{2}}\,dx = \int_0^\infty \frac{1}{2^{n/2}\Gamma(\frac{n}{2})} x^{\frac{n}{2}-1} \exp\Big(-\frac{(1-2\theta)x}{2}\Big)\,dx. $$
Use $y = (1-2\theta)x$, so that $dy = (1-2\theta)\,dx$:
$$ \phi(\theta) = \Big(\frac{1}{1-2\theta}\Big)^{\frac{n}{2}} \int_0^\infty \frac{1}{2^{n/2}\Gamma(\frac{n}{2})} y^{\frac{n}{2}-1} \exp\Big(-\frac{y}{2}\Big)\,dy = \Big(\frac{1}{1-2\theta}\Big)^{\frac{n}{2}}, $$
where the last integration corresponds to the chi-square distribution with $n$ degrees of freedom and is therefore equal to one.
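The closed forms $E(X) = n$, $V(X) = 2n$ and $\phi(\theta) = (1-2\theta)^{-n/2}$ can be checked against Monte Carlo estimates; an illustrative sketch using numpy's chi-square sampler:

```python
import numpy as np

rng = np.random.default_rng(4)
n, theta = 6, 0.2                  # the MGF requires theta < 1/2
x = rng.chisquare(df=n, size=2_000_000)

print(x.mean(), n)                 # E(X) = n
print(x.var(), 2 * n)              # V(X) = 2n
print(np.exp(theta * x).mean(), (1 - 2 * theta) ** (-n / 2))  # MGF
```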
6 The continuous type of random variables $X$ and $Y$ are mutually independent and assumed to be $X \sim N(0,1)$ and $Y \sim N(0,1)$. Define $U = X/Y$. Answer the following questions. When $X \sim N(0,1)$, the density function of $X$ is represented as $f(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}x^2}$.
(1) Derive the density function of $U$.
(2) Prove that the first moment of $U$ does not exist.

[Answer]
(1) The densities of $X$ and $Y$ are:
$$ f(x) = \frac{1}{\sqrt{2\pi}}\exp\Big(-\frac{1}{2}x^2\Big), \quad -\infty < x < \infty, \qquad g(y) = \frac{1}{\sqrt{2\pi}}\exp\Big(-\frac{1}{2}y^2\Big), \quad -\infty < y < \infty. $$
Since $X$ is independent of $Y$, the joint density of $X$ and $Y$ is:
$$ h(x,y) = f(x)g(y) = \frac{1}{2\pi}\exp\Big(-\frac{1}{2}(x^2+y^2)\Big). $$
Using $u = x/y$ and $v = y$, the transformation of the variables is performed. For $x = uv$ and $y = v$, we have the Jacobian:
$$ J = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\ \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \end{vmatrix} = \begin{vmatrix} v & u \\ 0 & 1 \end{vmatrix} = v. $$
Using transformation of variables, the joint density of $U$ and $V$ is $s(u,v) = h(uv, v)\,|J| = \frac{1}{2\pi}\exp\big(-\frac{1}{2}v^2(1+u^2)\big)\,|v|$. The marginal density of $U$ is:
$$ p(u) = \int_{-\infty}^{\infty} s(u,v)\,dv = 2\int_0^\infty \frac{1}{2\pi}\,v\exp\Big(-\frac{1}{2}v^2(1+u^2)\Big)\,dv = \frac{1}{\pi}\Big[-\frac{\exp(-\frac{1}{2}v^2(1+u^2))}{1+u^2}\Big]_0^\infty = \frac{1}{\pi(1+u^2)}, $$
which corresponds to the Cauchy distribution.
(2) We prove that the first moment of $U$ is infinite:
$$ E(|U|) = \int_{-\infty}^{\infty} |u|\,p(u)\,du = 2\int_0^\infty \frac{u}{\pi(1+u^2)}\,du = \frac{1}{\pi}\int_1^\infty \frac{1}{x}\,dx = \frac{1}{\pi}\big[\log x\big]_1^\infty = \infty, $$
where $x = 1+u^2$ (so that $dx = 2u\,du$ and $\frac{d\log x}{dx} = \frac{1}{x}$) is used; for $0 < u < \infty$, the range of $x = 1+u^2$ is given by $1 < x < \infty$. Therefore, the first moment of $U$ does not exist.
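Because $U$ is Cauchy, sample means of $U$ never settle down as the sample size grows. The following illustrative sketch checks the Cauchy fit with a Kolmogorov-Smirnov test and shows the running mean wandering:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
u = rng.normal(size=1_000_000) / rng.normal(size=1_000_000)  # U = X/Y

print(stats.kstest(u, "cauchy"))         # consistent with standard Cauchy
for m in (10**3, 10**4, 10**5, 10**6):   # running means do not converge
    print(m, u[:m].mean())
```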
7 The continuous type of random variables $X$ and $Y$ have the following joint density function:
$$ f(x,y) = \begin{cases} x + y, & \text{if } 0 < x < 1 \text{ and } 0 < y < 1, \\ 0, & \text{otherwise.} \end{cases} $$
Answer the following questions.
(1) Compute the expectation of $XY$.
(2) Obtain the correlation coefficient between $X$ and $Y$.
(3) What is the marginal density function of $X$?

[Answer]
(1) The expectation of $XY$ is:
$$ E(XY) = \int_0^1\!\!\int_0^1 xy(x+y)\,dx\,dy = \int_0^1 \Big[\frac{y}{3}x^3 + \frac{y^2}{2}x^2\Big]_0^1 dy = \int_0^1 \Big(\frac{1}{3}y + \frac{1}{2}y^2\Big)\,dy = \Big[\frac{1}{6}y^2 + \frac{1}{6}y^3\Big]_0^1 = \frac{1}{3}. $$
(2) The correlation coefficient is represented as $\rho = \mathrm{Cov}(X,Y)/\sqrt{V(X)V(Y)}$. Therefore, $E(X)$, $E(Y)$, $V(X)$, $V(Y)$ and $\mathrm{Cov}(X,Y)$ have to be computed. $E(X)$ is:
$$ E(X) = \int_0^1\!\!\int_0^1 x(x+y)\,dx\,dy = \int_0^1 \Big[\frac{1}{3}x^3 + \frac{y}{2}x^2\Big]_0^1 dy = \int_0^1 \Big(\frac{1}{3} + \frac{1}{2}y\Big)\,dy = \Big[\frac{1}{3}y + \frac{1}{4}y^2\Big]_0^1 = \frac{7}{12}. $$
In the case where $x$ and $y$ are exchanged, the functional form of $f(x,y)$ is unchanged. Therefore, $E(Y) = E(X) = \frac{7}{12}$. For $V(X)$:
$$ E(X^2) = \int_0^1\!\!\int_0^1 x^2(x+y)\,dx\,dy = \int_0^1 \Big[\frac{1}{4}x^4 + \frac{y}{3}x^3\Big]_0^1 dy = \Big[\frac{1}{4}y + \frac{1}{6}y^2\Big]_0^1 = \frac{5}{12}, \qquad V(X) = \frac{5}{12} - \Big(\frac{7}{12}\Big)^2 = \frac{11}{144}. $$
Similarly, $V(Y) = V(X) = \frac{11}{144}$. For the covariance,
$$ \mathrm{Cov}(X,Y) = E(XY) - \mu_x\mu_y = \frac{1}{3} - \frac{49}{144} = \frac{48}{144} - \frac{49}{144} = -\frac{1}{144}, $$
where $\mu_x = E(X) = \frac{7}{12}$ and $\mu_y = E(Y) = \frac{7}{12}$. Therefore,
$$ \rho = \frac{\mathrm{Cov}(X,Y)}{\sqrt{V(X)V(Y)}} = \frac{-1/144}{\sqrt{(11/144)(11/144)}} = -\frac{1}{11}. $$
(3) The marginal density function of $X$ is $f_x(x) = \int_0^1 f(x,y)\,dy = \int_0^1 (x+y)\,dy = \big[xy + \frac{1}{2}y^2\big]_0^1 = x + \frac{1}{2}$, for $0 < x < 1$.
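All of these moments can be double-checked by numerical integration; an illustrative sketch with scipy (note that dblquad integrates a function of $(y, x)$, with the $x$ limits given first):

```python
from scipy.integrate import dblquad

# Joint density f(x, y) = x + y on the unit square.
density = lambda y, x: x + y
moment = lambda g: dblquad(lambda y, x: g(x, y) * density(y, x),
                           0, 1, lambda x: 0, lambda x: 1)[0]

e_xy = moment(lambda x, y: x * y)           # 1/3
e_x = moment(lambda x, y: x)                # 7/12
v_x = moment(lambda x, y: x * x) - e_x**2   # 11/144
cov = e_xy - e_x * e_x                      # -1/144
print(e_xy, e_x, v_x, cov / v_x)            # rho = cov/V(X) = -1/11 here
```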
8 The discrete type of random variable $X$ has the following probability function:
$$ f(x) = \frac{e^{-\lambda}\lambda^x}{x!}, \qquad x = 0, 1, 2, \ldots. $$
Answer the following questions.
(1) Prove $\sum_x f(x) = 1$.
(2) Compute the moment-generating function of $X$.
(3) From the moment-generating function, obtain mean and variance of $X$.

[Answer]
(1) We can show $\sum_x f(x) = 1$ as:
$$ \sum_{x=0}^{\infty} f(x) = \sum_{x=0}^{\infty} \frac{e^{-\lambda}\lambda^x}{x!} = e^{-\lambda}\sum_{x=0}^{\infty} \frac{\lambda^x}{x!} = e^{-\lambda}e^{\lambda} = 1. $$
Note that $e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}$, because we have $f^{(k)}(x) = e^x$ for $f(x) = e^x$. As shown in Appendix 1.3, the formula of the Taylor series expansion is $f(x) = \sum_{k=0}^{\infty} \frac{1}{k!} f^{(k)}(x_0)(x - x_0)^k$; the expansion around $x_0 = 0$ is $f(x) = \sum_{k=0}^{\infty} \frac{1}{k!} f^{(k)}(0)\,x^k = \sum_{k=0}^{\infty} \frac{x^k}{k!}$. Here, replace $x$ by $\lambda$ and $k$ by $x$.
(2) The moment-generating function of $X$ is:
$$ \phi(\theta) = E(e^{\theta X}) = \sum_{x=0}^{\infty} e^{\theta x}\,\frac{e^{-\lambda}\lambda^x}{x!} = e^{-\lambda}\sum_{x=0}^{\infty} \frac{(e^{\theta}\lambda)^x}{x!} = e^{-\lambda}\exp(e^{\theta}\lambda) = \exp\big(\lambda(e^{\theta}-1)\big). $$
Note that $\sum_{x=0}^{\infty} \frac{(e^{\theta}\lambda)^x}{x!} = \exp(e^{\theta}\lambda)$.
(3) Based on the moment-generating function, we obtain mean and variance of $X$. For the mean, because $\phi(\theta) = \exp(\lambda(e^{\theta}-1))$, $\phi'(\theta) = \lambda e^{\theta}\exp(\lambda(e^{\theta}-1))$ and $E(X) = \phi'(0)$, we obtain $E(X) = \phi'(0) = \lambda$. For the variance, from $V(X) = E(X^2) - (E(X))^2$, we need $E(X^2)$. Note that $E(X^2) = \phi''(0)$ and $\phi''(\theta) = (1+\lambda e^{\theta})\lambda e^{\theta}\exp(\lambda(e^{\theta}-1))$. Therefore,
$$ V(X) = \phi''(0) - (\phi'(0))^2 = (1+\lambda)\lambda - \lambda^2 = \lambda. $$
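A short illustrative check that a Poisson sample reproduces mean $=$ variance $= \lambda$ and the moment-generating function $\exp(\lambda(e^{\theta}-1))$:

```python
import numpy as np

rng = np.random.default_rng(6)
lam, theta = 3.0, 0.3
x = rng.poisson(lam, size=2_000_000)

print(x.mean(), x.var(), lam)   # sample mean and variance, both ~ lambda
print(np.exp(theta * x).mean(), np.exp(lam * (np.exp(theta) - 1)))  # MGF
```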
9 $X_1, X_2, \ldots, X_n$ are mutually independently and normally distributed with mean $\mu$ and variance $\sigma^2$, where the density function is given by:
$$ f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{1}{2\sigma^2}(x-\mu)^2}. $$
Then, answer the following questions.
(1) Obtain the maximum likelihood estimators of mean $\mu$ and variance $\sigma^2$.
(2) Check whether the maximum likelihood estimator of $\sigma^2$ is unbiased. If it is not unbiased, obtain an unbiased estimator of $\sigma^2$. (Hint: use the maximum likelihood estimator.)
(3) We want to test the null hypothesis $H_0: \mu = \mu_0$ by the likelihood ratio test. Obtain the test statistic and explain the testing procedure.

[Answer]
(1) The joint density (the likelihood function) is:
$$ l(\mu,\sigma^2) = \prod_{i=1}^n f(x_i; \mu,\sigma^2) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma^2}}\exp\Big(-\frac{1}{2\sigma^2}(x_i-\mu)^2\Big) = (2\pi\sigma^2)^{-n/2}\exp\Big(-\frac{1}{2\sigma^2}\sum_{i=1}^n (x_i-\mu)^2\Big). $$
Taking the logarithm, we have:
$$ \log l(\mu,\sigma^2) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n (x_i-\mu)^2. $$
The derivatives of the log-likelihood function with respect to $\mu$ and $\sigma^2$ are set to be zero:
$$ \frac{\partial\log l}{\partial\mu} = \frac{1}{\sigma^2}\sum_{i=1}^n (x_i-\mu) = 0, \qquad \frac{\partial\log l}{\partial\sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^n (x_i-\mu)^2 = 0. $$
Solving the two equations, we have the solution $(\hat\mu, \hat\sigma^2)$:
$$ \hat\mu = \frac{1}{n}\sum_{i=1}^n x_i = \bar{x}, \qquad \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (x_i-\bar{x})^2. $$
Therefore, the maximum likelihood estimators of $\mu$ and $\sigma^2$ are $\bar{X}$ and $S^2 = \frac{1}{n}\sum_{i=1}^n (X_i-\bar{X})^2$.
(2) Take the expectation to check whether $S^2$ is unbiased. Writing $X_i - \bar{X} = (X_i-\mu) - (\bar{X}-\mu)$ and using $\sum_i (X_i-\mu) = n(\bar{X}-\mu)$,
$$ E(S^2) = \frac{1}{n}E\Big(\sum_i (X_i-\bar{X})^2\Big) = \frac{1}{n}E\Big(\sum_i (X_i-\mu)^2 - 2n(\bar{X}-\mu)^2 + n(\bar{X}-\mu)^2\Big) = \frac{1}{n}E\Big(\sum_i (X_i-\mu)^2 - n(\bar{X}-\mu)^2\Big) $$
$$ = \frac{1}{n}\sum_i V(X_i) - E\big((\bar{X}-\mu)^2\big) = \frac{1}{n}\,n\sigma^2 - \frac{\sigma^2}{n} = \sigma^2 - \frac{\sigma^2}{n} = \frac{n-1}{n}\sigma^2. $$
Therefore, $S^2$ is not unbiased. Based on $S^2$, we obtain the unbiased estimator of $\sigma^2$: multiplying both sides of $E(S^2) = \sigma^2(n-1)/n$ by $n/(n-1)$, we obtain $\frac{n}{n-1}E(S^2) = \sigma^2$. Therefore, the unbiased estimator of $\sigma^2$ is:
$$ S^{*2} = \frac{n}{n-1}S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i-\bar{X})^2. $$
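The factor $(n-1)/n$ is visible in simulation; an illustrative sketch comparing the maximum likelihood estimator with the bias-corrected estimator:

```python
import numpy as np

rng = np.random.default_rng(7)
n, sigma2 = 5, 4.0
x = rng.normal(0.0, np.sqrt(sigma2), size=(200_000, n))

s2_mle = x.var(axis=1, ddof=0)   # (1/n) sum (x_i - xbar)^2
s2_unb = x.var(axis=1, ddof=1)   # (1/(n-1)) sum (x_i - xbar)^2
print(s2_mle.mean(), (n - 1) / n * sigma2)  # biased: ~ 3.2
print(s2_unb.mean(), sigma2)                # unbiased: ~ 4.0
```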
(3) The likelihood ratio is defined as:
$$ \lambda = \frac{\max_{\sigma^2} l(\mu_0,\sigma^2)}{\max_{\mu,\sigma^2} l(\mu,\sigma^2)} = \frac{l(\mu_0,\tilde\sigma^2)}{l(\hat\mu,\hat\sigma^2)}. $$
Since the number of restrictions is one, $-2\log\lambda \to \chi^2(1)$. On the numerator, under the restriction $\mu = \mu_0$, $\log l(\mu_0,\sigma^2)$ is maximized with respect to $\sigma^2$:
$$ \frac{\partial\log l(\mu_0,\sigma^2)}{\partial\sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^n (x_i-\mu_0)^2 = 0, $$
whose solution is $\tilde\sigma^2 = \frac{1}{n}\sum_i (x_i-\mu_0)^2$. Then,
$$ l(\mu_0,\tilde\sigma^2) = (2\pi\tilde\sigma^2)^{-n/2}\exp\Big(-\frac{1}{2\tilde\sigma^2}\sum_i (x_i-\mu_0)^2\Big) = (2\pi\tilde\sigma^2)^{-n/2}\exp\Big(-\frac{n}{2}\Big). $$
On the denominator, from question (1), $\hat\mu = \bar{x}$ and $\hat\sigma^2 = \frac{1}{n}\sum_i (x_i-\bar{x})^2$, so $l(\hat\mu,\hat\sigma^2) = (2\pi\hat\sigma^2)^{-n/2}\exp(-\frac{n}{2})$. The likelihood ratio is:
$$ \lambda = \frac{(2\pi\tilde\sigma^2)^{-n/2}\,e^{-n/2}}{(2\pi\hat\sigma^2)^{-n/2}\,e^{-n/2}} = \Big(\frac{\tilde\sigma^2}{\hat\sigma^2}\Big)^{-n/2}. $$
As $n$ goes to infinity, we obtain:
$$ -2\log\lambda = n(\log\tilde\sigma^2 - \log\hat\sigma^2) \to \chi^2(1). $$
When $-2\log\lambda > \chi^2_\alpha(1)$, the null hypothesis $H_0: \mu = \mu_0$ is rejected at the significance level $\alpha$, where $\chi^2_\alpha(1)$ denotes the $100\alpha$ percent point of the chi-square distribution with one degree of freedom.
10 Answer the following questions.
(1) The discrete type of random variable $X$ is assumed to be Bernoulli, where the Bernoulli distribution is given by $f(x) = p^x(1-p)^{1-x}$, $x = 0, 1$. Let $X_1, X_2, \ldots, X_n$ be random variables drawn from the Bernoulli trials. Compute the maximum likelihood estimator of $p$.
(2) Let $Y$ be a random variable from a binomial distribution, $f(y) = {}_n C_y\,p^y(1-p)^{n-y}$, $y = 0, 1, \ldots, n$. Then, prove that $Y/n$ goes to $p$ as $n$ is large.
(3) For the random variable $Y$ in question (2), let us define $Z_n = (Y - np)/\sqrt{np(1-p)}$. Show that $Z_n$ goes to a standard normal distribution as $n$ is large.
(4) The continuous type of random variable $X$ has the chi-square density $f(x) = \frac{1}{2^{n/2}\Gamma(\frac{n}{2})} x^{\frac{n}{2}-1} e^{-\frac{x}{2}}$ for $x > 0$ and $0$ otherwise, where $\Gamma(a)$ denotes the gamma function, i.e., $\Gamma(a) = \int_0^\infty x^{a-1}e^{-x}\,dx$. Then, show that $X/n$ approaches one when $n \to \infty$.

[Answer]
(1) When $X$ is a Bernoulli random variable, the probability function of $X$ is $f(x; p) = p^x(1-p)^{1-x}$, $x = 0, 1$. The joint probability function of $X_1, X_2, \ldots, X_n$ is:
$$ l(p) = \prod_{i=1}^n f(x_i; p) = \prod_{i=1}^n p^{x_i}(1-p)^{1-x_i} = p^{\sum_i x_i}(1-p)^{n-\sum_i x_i}. $$
Take the logarithm of $l(p)$:
$$ \log l(p) = \Big(\sum_i x_i\Big)\log p + \Big(n - \sum_i x_i\Big)\log(1-p). $$
The derivative of the log-likelihood function with respect to $p$ is set to be zero:
$$ \frac{d\log l(p)}{dp} = \frac{\sum_i x_i}{p} - \frac{n - \sum_i x_i}{1-p} = \frac{\sum_i x_i - np}{p(1-p)} = 0. $$
Solving the above equation, $p = \frac{1}{n}\sum_i x_i = \bar{x}$. Therefore, the maximum likelihood estimator of $p$ is $\hat{p} = \frac{1}{n}\sum_i X_i = \bar{X}$.
(2) Mean and variance of $Y$ are $E(Y) = np$ and $V(Y) = np(1-p)$. Therefore, $E(Y/n) = \frac{1}{n}E(Y) = p$ and $V(Y/n) = \frac{1}{n^2}V(Y) = \frac{p(1-p)}{n}$.
Chebyshev's inequality indicates that for a random variable $X$ and a nonnegative function $g(x)$ we have $P(g(X) \geq k) \leq E(g(X))/k$, where $k > 0$. When $g(X) = (X - E(X))^2$ and $k = \epsilon^2$ are set, it can be rewritten as $P(|X - E(X)| \geq \epsilon) \leq V(X)/\epsilon^2$, where $\epsilon > 0$.
Replacing $X$ by $Y/n$, we apply Chebyshev's inequality: $P(|Y/n - E(Y/n)| \geq \epsilon) \leq V(Y/n)/\epsilon^2$, that is,
$$ P\Big(\Big|\frac{Y}{n} - p\Big| \geq \epsilon\Big) \leq \frac{p(1-p)}{n\epsilon^2} \to 0 \quad \text{as } n \to \infty. $$
Therefore, we obtain $Y/n \to p$.
(3) Let $X_1, \ldots, X_n$ be Bernoulli random variables, where $P(X_i = x) = p^x(1-p)^{1-x}$ for $x = 0, 1$. Define $Y = X_1 + \cdots + X_n$. Because $Y$ has a binomial distribution, $Y/n$ is taken as the sample mean of $X_1, \ldots, X_n$, i.e., $Y/n = \frac{1}{n}\sum_i X_i$. Therefore, using $E(Y/n) = p$ and $V(Y/n) = p(1-p)/n$, by the central limit theorem, as $n \to \infty$,
$$ \frac{Y/n - p}{\sqrt{p(1-p)/n}} \to N(0, 1). $$
Moreover, $Z_n = \dfrac{Y - np}{\sqrt{np(1-p)}} = \dfrac{Y/n - p}{\sqrt{p(1-p)/n}}$. Therefore, $Z_n \to N(0, 1)$.
(4) When $X \sim \chi^2(n)$, we have $E(X) = n$ and $V(X) = 2n$. Therefore, $E(X/n) = 1$ and $V(X/n) = 2/n$. Applying Chebyshev's inequality, $P(|X/n - E(X/n)| \geq \epsilon) \leq V(X/n)/\epsilon^2$, where $\epsilon > 0$; that is,
$$ P\Big(\Big|\frac{X}{n} - 1\Big| \geq \epsilon\Big) \leq \frac{2}{n\epsilon^2} \to 0 \quad \text{as } n \to \infty. $$
Therefore, $X/n \to 1$.
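The limits in (2) and (3) can be visualized numerically; in the illustrative sketch below, the frequency of large deviations of $Y/n$ from $p$ shrinks (law of large numbers), and the Kolmogorov-Smirnov distance between $Z_n$ and $N(0,1)$ shrinks (central limit theorem):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
p = 0.3
for n in (10, 100, 10_000):
    y = rng.binomial(n, p, size=200_000)
    lln = np.mean(np.abs(y / n - p) >= 0.05)          # LLN: -> 0
    z = (y - n * p) / np.sqrt(n * p * (1 - p))        # Z_n of question (3)
    print(n, lln, stats.kstest(z, "norm").statistic)  # KS distance -> 0
```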
11 Consider $n$ random variables $X_1, X_2, \ldots, X_n$, which are mutually independently and exponentially distributed. Note that the exponential distribution is given by $f(x) = \lambda e^{-\lambda x}$, $x > 0$. Then, answer the following questions.
(1) Let $\hat\lambda$ be the maximum likelihood estimator of $\lambda$. Obtain $\hat\lambda$.
(2) When $n$ is large enough, obtain mean and variance of $\hat\lambda$.

[Answer]
(1) Since $X_1, \ldots, X_n$ are mutually independently and exponentially distributed, the likelihood function $l(\lambda)$ is written as:
$$ l(\lambda) = \prod_{i=1}^n f(x_i) = \prod_{i=1}^n \lambda e^{-\lambda x_i} = \lambda^n e^{-\lambda\sum_i x_i}. $$
The log-likelihood function is $\log l(\lambda) = n\log\lambda - \lambda\sum_i x_i$. Solving $\frac{d\log l(\lambda)}{d\lambda} = \frac{n}{\lambda} - \sum_i x_i = 0$ and replacing $x_i$ by $X_i$, the maximum likelihood estimator of $\lambda$ is:
$$ \hat\lambda = \frac{n}{\sum_i X_i} = \frac{1}{\bar{X}}. $$
(2) $X_1, \ldots, X_n$ are mutually independent. Let $f(x_i; \lambda)$ be the density of $X_i$. For the maximum likelihood estimator $\hat\lambda_n$, as $n \to \infty$ we have $\sqrt{n}(\hat\lambda_n - \lambda) \to N(0, \sigma^2(\lambda))$, where
$$ \sigma^2(\lambda) = \Big( E\Big[\Big(\frac{d\log f(X;\lambda)}{d\lambda}\Big)^2\Big] \Big)^{-1}. $$
With $\log f(X;\lambda) = \log\lambda - \lambda X$ and $\frac{d\log f}{d\lambda} = \frac{1}{\lambda} - X$, the expectation is
$$ E\Big[\Big(\frac{1}{\lambda} - X\Big)^2\Big] = \frac{1}{\lambda^2} - \frac{2}{\lambda}E(X) + E(X^2) = \frac{1}{\lambda^2} - \frac{2}{\lambda^2} + \frac{2}{\lambda^2} = \frac{1}{\lambda^2}, $$
where $E(X) = 1/\lambda$ and $E(X^2) = 2/\lambda^2$. Therefore, $\sigma^2(\lambda) = \lambda^2$. As $n$ is large, $\hat\lambda_n$ approximately has the distribution $\hat\lambda_n \sim N(\lambda, \lambda^2/n)$. Thus, as $n$ goes to infinity, mean and variance are given by $\lambda$ and $\lambda^2/n$.
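An illustrative check of the asymptotic approximation $\hat\lambda_n \sim N(\lambda, \lambda^2/n)$ (numpy's exponential sampler is parameterized by scale $= 1/\lambda$):

```python
import numpy as np

rng = np.random.default_rng(9)
lam, n = 2.0, 400
x = rng.exponential(scale=1.0 / lam, size=(100_000, n))
lam_hat = 1.0 / x.mean(axis=1)     # MLE for each of 100,000 replications

print(lam_hat.mean(), lam)         # approximately lambda
print(lam_hat.var(), lam**2 / n)   # approximately lambda^2/n = 0.01
```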
12 The $n$ random variables $X_1, X_2, \ldots, X_n$ are mutually independently distributed with mean $\mu$ and variance $\sigma^2$. Consider the following two estimators of $\mu$:
$$ \bar{X} = \frac{1}{n}\sum_{i=1}^n X_i, \qquad \tilde{X} = \frac{1}{2}(X_1 + X_n). $$
Then, answer the following questions.
(1) Is $\bar{X}$ unbiased? How about $\tilde{X}$?
(2) Which is more efficient, $\bar{X}$ or $\tilde{X}$?
(3) Examine whether $\bar{X}$ and $\tilde{X}$ are consistent.

[Answer]
(1) We check whether $\bar{X}$ and $\tilde{X}$ are unbiased:
$$ E(\bar{X}) = E\Big(\frac{1}{n}\sum_i X_i\Big) = \frac{1}{n}\sum_i E(X_i) = \frac{1}{n}\,n\mu = \mu, \qquad E(\tilde{X}) = \frac{1}{2}\big(E(X_1)+E(X_n)\big) = \frac{1}{2}(\mu+\mu) = \mu. $$
Thus, both are unbiased.
(2) We examine which is more efficient, $\bar{X}$ or $\tilde{X}$:
$$ V(\bar{X}) = \frac{1}{n^2}\sum_i V(X_i) = \frac{\sigma^2}{n}, \qquad V(\tilde{X}) = \frac{1}{4}\big(V(X_1)+V(X_n)\big) = \frac{1}{4}(\sigma^2+\sigma^2) = \frac{\sigma^2}{2}. $$
Therefore, because $V(\bar{X}) < V(\tilde{X})$ when $n > 2$, $\bar{X}$ is more efficient than $\tilde{X}$.
(3) We check if $\bar{X}$ and $\tilde{X}$ are consistent. Apply Chebyshev's inequality. For $\bar{X}$, $P(|\bar{X}-E(\bar{X})| \geq \epsilon) \leq V(\bar{X})/\epsilon^2$, where $\epsilon > 0$; that is, as $n \to \infty$,
$$ P(|\bar{X}-\mu| \geq \epsilon) \leq \frac{\sigma^2}{n\epsilon^2} \to 0. $$
Therefore, we obtain $\bar{X} \to \mu$. Next, for $\tilde{X}$, $P(|\tilde{X}-\mu| \geq \epsilon) \leq \frac{\sigma^2}{2\epsilon^2}$, which does not depend on $n$ and does not go to zero. $\bar{X}$ is a consistent estimator of $\mu$, but $\tilde{X}$ is not consistent.

13 Nine random samples are obtained from the normal population $N(\mu, \sigma^2)$. Then, answer the following questions.
(1) Obtain the unbiased estimates of $\mu$ and $\sigma^2$.
(2) Obtain both 90 and 95 percent confidence intervals for $\mu$.
(3) Test the null hypothesis $H_0: \mu = 14$ against the alternative hypothesis $H_1: \mu > 14$ by the significance level 0.10. How about 0.05?
[Answer]
(1) The unbiased estimators of $\mu$ and $\sigma^2$, denoted by $\bar{X}$ and $S^2$, are given by:
$$ \bar{X} = \frac{1}{n}\sum_{i=1}^n X_i, \qquad S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i-\bar{X})^2. $$
The unbiased estimates computed from the nine observations are:
$$ \bar{x} = \frac{1}{9}\sum_{i=1}^9 x_i = 17, \qquad s^2 = \frac{1}{8}\sum_{i=1}^9 (x_i-\bar{x})^2 = 27.5. $$
(2) We obtain the confidence intervals of $\mu$. The following sampling distribution is utilized:
$$ \frac{\bar{X}-\mu}{S/\sqrt{n}} \sim t(n-1). $$
Therefore, $P\big(|\bar{X}-\mu|/(S/\sqrt{n}) < t_{\alpha/2}(n-1)\big) = 1-\alpha$, where $t_{\alpha/2}(n-1)$ denotes the $100\cdot\alpha/2$ percent point of the $t$ distribution with $n-1$ degrees of freedom. Therefore, we have:
$$ P\Big(\bar{X} - t_{\alpha/2}(n-1)\frac{S}{\sqrt{n}} < \mu < \bar{X} + t_{\alpha/2}(n-1)\frac{S}{\sqrt{n}}\Big) = 1-\alpha. $$
Replacing $\bar{X}$ and $S$ by $\bar{x}$ and $s$, the $100(1-\alpha)$ percent confidence interval of $\mu$ is $\big(\bar{x} - t_{\alpha/2}(n-1)\,s/\sqrt{n},\ \bar{x} + t_{\alpha/2}(n-1)\,s/\sqrt{n}\big)$. Since $\bar{x} = 17$, $s^2 = 27.5$, $n = 9$, $t_{0.05}(8) = 1.860$ and $t_{0.025}(8) = 2.306$, the 90 percent confidence interval of $\mu$ is
$$ \Big(17 - 1.860\,\frac{\sqrt{27.5}}{3},\ 17 + 1.860\,\frac{\sqrt{27.5}}{3}\Big) = (13.75,\ 20.25), $$
and the 95 percent confidence interval of $\mu$ is
$$ \Big(17 - 2.306\,\frac{\sqrt{27.5}}{3},\ 17 + 2.306\,\frac{\sqrt{27.5}}{3}\Big) = (12.97,\ 21.03). $$
(3) We test the null hypothesis $H_0: \mu = 14$ against the alternative hypothesis $H_1: \mu > 14$ by the significance levels 0.10 and 0.05. The distribution of $\bar{X}$ is $(\bar{X}-\mu)/(S/\sqrt{n}) \sim t(n-1)$. Therefore, under the null hypothesis $H_0: \mu = \mu_0$, we obtain $(\bar{X}-\mu_0)/(S/\sqrt{n}) \sim t(n-1)$, where $\mu$ is replaced by $\mu_0$. For the alternative hypothesis $H_1: \mu > \mu_0$, since we have $P\big((\bar{X}-\mu_0)/(S/\sqrt{n}) > t_\alpha(n-1)\big) = \alpha$, we reject the null hypothesis $H_0: \mu = \mu_0$ by the significance level $\alpha$ when:
$$ \frac{\bar{x}-\mu_0}{s/\sqrt{n}} > t_\alpha(n-1). $$
Substitute $\bar{x} = 17$, $s^2 = 27.5$, $\mu_0 = 14$, $n = 9$, $t_{0.10}(8) = 1.397$ and $t_{0.05}(8) = 1.860$ into the above formula. Then, we obtain:
$$ \frac{\bar{x}-\mu_0}{s/\sqrt{n}} = \frac{17-14}{\sqrt{27.5/9}} = 1.72 > t_{0.10}(8) = 1.397. $$
Therefore, we reject the null hypothesis $H_0: \mu = 14$ by the significance level $\alpha = 0.10$. And we obtain:
$$ \frac{\bar{x}-\mu_0}{s/\sqrt{n}} = 1.72 < t_{0.05}(8) = 1.860. $$
Therefore, the null hypothesis $H_0: \mu = 14$ is accepted by the significance level $\alpha = 0.05$.
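The interval and test arithmetic above can be reproduced with scipy; an illustrative sketch using the summary statistics $\bar{x} = 17$, $s^2 = 27.5$, $n = 9$ as reconstructed above:

```python
import numpy as np
from scipy import stats

xbar, s2, n, mu0 = 17.0, 27.5, 9, 14.0
se = np.sqrt(s2 / n)

for alpha in (0.10, 0.05):                       # two-sided intervals
    half = stats.t.ppf(1 - alpha / 2, df=n - 1) * se
    print(f"{1 - alpha:.0%} CI: ({xbar - half:.2f}, {xbar + half:.2f})")

t_stat = (xbar - mu0) / se                       # ~ 1.72
for alpha in (0.10, 0.05):                       # one-sided test
    print(alpha, t_stat > stats.t.ppf(1 - alpha, df=n - 1))
```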
14 The 16 samples $X_1, X_2, \ldots, X_{16}$ are randomly drawn from the normal population with mean $\mu$ and known variance $\sigma^2 = 4$. The sample average is given by $\bar{x} = 36$. Then, answer the following questions.
(1) Obtain the 95 percent confidence interval for $\mu$.
(2) Test the null hypothesis $H_0: \mu = 35$ against the alternative hypothesis $H_1: \mu = 36.5$ by the significance level 0.05.
(3) Compute the power of the test in the above question (2).

[Answer]
(1) We obtain the 95 percent confidence interval of $\mu$. The distribution of $\bar{X}$ is:
$$ \frac{\bar{X}-\mu}{\sigma/\sqrt{n}} \sim N(0, 1). $$
Therefore, $P\big(|\bar{X}-\mu|/(\sigma/\sqrt{n}) < z_{\alpha/2}\big) = 1-\alpha$, where $z_{\alpha/2}$ denotes the $100\cdot\alpha/2$ percent point of the standard normal distribution. Therefore,
$$ P\Big(\bar{X} - z_{\alpha/2}\frac{\sigma}{\sqrt{n}} < \mu < \bar{X} + z_{\alpha/2}\frac{\sigma}{\sqrt{n}}\Big) = 1-\alpha. $$
Replacing $\bar{X}$ by $\bar{x}$, the $100(1-\alpha)$ percent confidence interval of $\mu$ is $\big(\bar{x} - z_{\alpha/2}\,\sigma/\sqrt{n},\ \bar{x} + z_{\alpha/2}\,\sigma/\sqrt{n}\big)$. Substituting $\bar{x} = 36$, $\sigma = 2$, $n = 16$ and $z_{0.025} = 1.96$, the 95 percent confidence interval of $\mu$ is
$$ \Big(36 - 1.96\cdot\frac{2}{4},\ 36 + 1.96\cdot\frac{2}{4}\Big) = (35.02,\ 36.98). $$
(2) We test the null hypothesis $H_0: \mu = 35$ against the alternative hypothesis $H_1: \mu = 36.5$ by the significance level 0.05. Under the null hypothesis $H_0: \mu = \mu_0$,
$$ \frac{\bar{X}-\mu_0}{\sigma/\sqrt{n}} \sim N(0, 1). $$
For the one-sided alternative $\mu > \mu_0$, we obtain $P\big((\bar{X}-\mu_0)/(\sigma/\sqrt{n}) > z_\alpha\big) = \alpha$. If we have:
$$ \frac{\bar{x}-\mu_0}{\sigma/\sqrt{n}} > z_\alpha, $$
the null hypothesis $H_0: \mu = \mu_0$ is rejected by the significance level $\alpha$. Substituting $\bar{x} = 36$, $\sigma = 2$, $n = 16$, $\mu_0 = 35$ and $z_{0.05} = 1.645$, we obtain:
$$ \frac{\bar{x}-\mu_0}{\sigma/\sqrt{n}} = \frac{36-35}{2/\sqrt{16}} = 2 > z_{0.05} = 1.645. $$
The null hypothesis $H_0: \mu = 35$ is rejected by the significance level $\alpha = 0.05$.
30 Substituting σ, n 6, µ 35, µ 36.5 and z α.645, we obtain: P ( X µ σ/ n Note that z and z > / ) P ( X µ σ/ n >.355) P ( X µ σ/ n >.355) X, X,, X n are assumed to be mutually independent and be distributed as a Poisson process, where the Poisson distribution is given by: Then, answer the following questions. P(X x) f (x; λ) λx e λ, x,,,. x! () Obtain the maximum likelihood estimator of λ, which is denoted by ˆλ. () Prove that ˆλ is an unbiased estimator. (3) Prove that ˆλ is an efficient estimator. (4) Prove that ˆλ is an consistent estimator. [Answer] () We obtain the maximum likelihood estimator of λ, denoted by ˆλ. The Poisson distribution is: The likelihood function is: P(X x) f (x; λ) λx e λ, x,,,. x! l(λ) n f (x i ; λ) The log-likelihood function is: n λ x i e λ x i! λ n x i e nλ n x i!. n log l(λ) log(λ) x i nλ log( x i!). 3
15 $X_1, X_2, \ldots, X_n$ are assumed to be mutually independent and distributed as Poisson random variables, where the Poisson distribution is given by:
$$ P(X = x) = f(x;\lambda) = \frac{\lambda^x e^{-\lambda}}{x!}, \qquad x = 0, 1, 2, \ldots. $$
Then, answer the following questions.
(1) Obtain the maximum likelihood estimator of $\lambda$, which is denoted by $\hat\lambda$.
(2) Prove that $\hat\lambda$ is an unbiased estimator.
(3) Prove that $\hat\lambda$ is an efficient estimator.
(4) Prove that $\hat\lambda$ is a consistent estimator.

[Answer]
(1) The likelihood function is:
$$ l(\lambda) = \prod_{i=1}^n f(x_i;\lambda) = \prod_{i=1}^n \frac{\lambda^{x_i} e^{-\lambda}}{x_i!} = \frac{\lambda^{\sum_i x_i}\,e^{-n\lambda}}{\prod_i x_i!}. $$
The log-likelihood function is:
$$ \log l(\lambda) = \Big(\sum_i x_i\Big)\log\lambda - n\lambda - \log\Big(\prod_i x_i!\Big). $$
The derivative of the log-likelihood function with respect to $\lambda$ is set to zero: $\frac{\partial\log l(\lambda)}{\partial\lambda} = \frac{1}{\lambda}\sum_i x_i - n = 0$. Solving the above equation, the maximum likelihood estimator is $\hat\lambda = \frac{1}{n}\sum_i X_i = \bar{X}$.
(2) We prove that $\hat\lambda$ is an unbiased estimator of $\lambda$:
$$ E(\hat\lambda) = E\Big(\frac{1}{n}\sum_i X_i\Big) = \frac{1}{n}\sum_i E(X_i) = \frac{1}{n}\,n\lambda = \lambda. $$
(3) We prove that $\hat\lambda$ is efficient, i.e., that the equality holds in the Cramér-Rao inequality. First, we obtain $V(\hat\lambda)$ as:
$$ V(\hat\lambda) = V\Big(\frac{1}{n}\sum_i X_i\Big) = \frac{1}{n^2}\sum_i V(X_i) = \frac{\lambda}{n}. $$
The Cramér-Rao lower bound is given by:
$$ \frac{1}{nE\Big[\Big(\dfrac{\partial\log f(X;\lambda)}{\partial\lambda}\Big)^2\Big]} = \frac{1}{nE\Big[\Big(\dfrac{\partial}{\partial\lambda}\big(X\log\lambda - \lambda - \log X!\big)\Big)^2\Big]} = \frac{1}{nE\Big[\Big(\dfrac{X}{\lambda}-1\Big)^2\Big]} = \frac{\lambda^2}{nE\big[(X-\lambda)^2\big]} = \frac{\lambda^2}{nV(X)} = \frac{\lambda^2}{n\lambda} = \frac{\lambda}{n}. $$
That is, $V(\hat\lambda)$ is equal to the lower bound of the Cramér-Rao inequality. Therefore, $\hat\lambda$ is efficient.
(4) We show that $\hat\lambda$ is a consistent estimator of $\lambda$. Note that $E(\hat\lambda) = \lambda$ and $V(\hat\lambda) = \lambda/n$. In Chebyshev's inequality $P(|\hat\lambda - E(\hat\lambda)| \geq \epsilon) \leq V(\hat\lambda)/\epsilon^2$, substituting $E(\hat\lambda)$ and $V(\hat\lambda)$, we have:
$$ P(|\hat\lambda - \lambda| \geq \epsilon) \leq \frac{\lambda}{n\epsilon^2} \to 0 \quad \text{as } n \to \infty, $$
which implies that $\hat\lambda$ is consistent.
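An illustrative check that $V(\hat\lambda)$ attains the Cramér-Rao bound $\lambda/n$:

```python
import numpy as np

rng = np.random.default_rng(10)
lam, n = 3.0, 50
lam_hat = rng.poisson(lam, size=(200_000, n)).mean(axis=1)

print(lam_hat.mean(), lam)       # unbiased: ~ 3.0
print(lam_hat.var(), lam / n)    # attains the Cramer-Rao bound 0.06
```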
16 $X_1, X_2, \ldots, X_n$ are mutually independently distributed as normal random variables, where the normal density is:
$$ f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{1}{2\sigma^2}(x-\mu)^2}. $$
Then, answer the following questions.
(1) Prove that the sample mean $\bar{X} = \frac{1}{n}\sum_i X_i$ is normally distributed with mean $\mu$ and variance $\sigma^2/n$.
(2) Define $Z = (\bar{X}-\mu)/(\sigma/\sqrt{n})$. Show that $Z$ is normally distributed with mean zero and variance one.
(3) Consider the sample unbiased variance $S^2 = \frac{1}{n-1}\sum_i (X_i-\bar{X})^2$. The distribution of $(n-1)S^2/\sigma^2$ is known as a chi-square distribution with $n-1$ degrees of freedom. Obtain mean and variance of $S^2$. Note that a chi-square distribution with $m$ degrees of freedom has density $f(x) = \frac{1}{2^{m/2}\Gamma(\frac{m}{2})} x^{\frac{m}{2}-1} e^{-\frac{x}{2}}$ for $x > 0$ and $0$ otherwise.
(4) Prove that $S^2$ is a consistent estimator of $\sigma^2$.

[Answer]
(1) The distribution of the sample mean $\bar{X} = \frac{1}{n}\sum_i X_i$ is derived using the moment-generating function. Note that for $X \sim N(\mu,\sigma^2)$,
$$ \phi(\theta) = E(e^{\theta X}) = \int \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{1}{2\sigma^2}(x-\mu)^2 + \theta x}\,dx = \int \frac{1}{\sqrt{2\pi\sigma^2}} \exp\Big(-\frac{1}{2\sigma^2}\big(x^2 - 2(\mu+\sigma^2\theta)x + \mu^2\big)\Big)\,dx $$
$$ = e^{\mu\theta + \frac{1}{2}\sigma^2\theta^2} \int \frac{1}{\sqrt{2\pi\sigma^2}} \exp\Big(-\frac{1}{2\sigma^2}\big(x - (\mu+\sigma^2\theta)\big)^2\Big)\,dx = \exp\Big(\mu\theta + \frac{1}{2}\sigma^2\theta^2\Big), $$
where the square is completed and, in the integration, the $N(\mu+\sigma^2\theta, \sigma^2)$ density is utilized, integrating to one. Therefore, $\phi_i(\theta) = \exp(\mu\theta + \frac{1}{2}\sigma^2\theta^2)$ for each $X_i$. Now consider the moment-generating function of $\bar{X}$, denoted by $\phi_{\bar{x}}(\theta)$:
$$ \phi_{\bar{x}}(\theta) = E(e^{\theta\bar{X}}) = E\Big(e^{\frac{\theta}{n}\sum_i X_i}\Big) = \prod_{i=1}^n E\big(e^{\frac{\theta}{n}X_i}\big) = \prod_{i=1}^n \phi_i\Big(\frac{\theta}{n}\Big) = \Big(\exp\Big(\mu\frac{\theta}{n} + \frac{1}{2}\sigma^2\frac{\theta^2}{n^2}\Big)\Big)^n = \exp\Big(\mu\theta + \frac{1}{2}\frac{\sigma^2}{n}\theta^2\Big), $$
which is equivalent to the moment-generating function of the normal distribution with mean $\mu$ and variance $\sigma^2/n$.
(2) We derive the distribution of $Z = (\bar{X}-\mu)/(\sigma/\sqrt{n})$. From question (1), $\phi_{\bar{x}}(\theta) = \exp(\mu\theta + \frac{\sigma^2}{2n}\theta^2)$. The moment-generating function of $Z$, denoted by $\phi_z(\theta)$, is:
$$ \phi_z(\theta) = E(e^{\theta Z}) = E\Big(\exp\Big(\theta\,\frac{\bar{X}-\mu}{\sigma/\sqrt{n}}\Big)\Big) = \exp\Big(-\frac{\mu\theta}{\sigma/\sqrt{n}}\Big)\,\phi_{\bar{x}}\Big(\frac{\theta}{\sigma/\sqrt{n}}\Big) = \exp\Big(-\frac{\mu\theta}{\sigma/\sqrt{n}}\Big)\exp\Big(\frac{\mu\theta}{\sigma/\sqrt{n}} + \frac{\sigma^2}{2n}\cdot\frac{n\theta^2}{\sigma^2}\Big) = \exp\Big(\frac{1}{2}\theta^2\Big), $$
which is the moment-generating function of $N(0,1)$.
(3) First, as preliminaries, we derive mean and variance of the chi-square distribution with $m$ degrees of freedom, whose density is $f(x) = \frac{1}{2^{m/2}\Gamma(\frac{m}{2})} x^{\frac{m}{2}-1} e^{-\frac{x}{2}}$ for $x > 0$. As in Exercise 5(2), substituting $y = (1-2\theta)x$ gives the moment-generating function:
$$ \phi_\chi(\theta) = E(e^{\theta X}) = \int_0^\infty \frac{1}{2^{m/2}\Gamma(\frac{m}{2})} x^{\frac{m}{2}-1} e^{-\frac{(1-2\theta)x}{2}}\,dx = (1-2\theta)^{-\frac{m}{2}}. $$
The first and second derivatives of the moment-generating function are:
$$ \phi_\chi'(\theta) = m(1-2\theta)^{-\frac{m}{2}-1}, \qquad \phi_\chi''(\theta) = m(m+2)(1-2\theta)^{-\frac{m}{2}-2}. $$
Therefore, we obtain $E(X) = \phi_\chi'(0) = m$ and $E(X^2) = \phi_\chi''(0) = m(m+2)$.
Thus, for the chi-square distribution with $m$ degrees of freedom, the mean is $m$ and the variance is $V(X) = E(X^2) - (E(X))^2 = m(m+2) - m^2 = 2m$. Therefore, using $(n-1)S^2/\sigma^2 \sim \chi^2(n-1)$, we have:
$$ E\Big(\frac{(n-1)S^2}{\sigma^2}\Big) = n-1, \qquad V\Big(\frac{(n-1)S^2}{\sigma^2}\Big) = 2(n-1), $$
which implies $\frac{n-1}{\sigma^2}E(S^2) = n-1$ and $\big(\frac{n-1}{\sigma^2}\big)^2 V(S^2) = 2(n-1)$. Finally, mean and variance of $S^2$ are:
$$ E(S^2) = \sigma^2, \qquad V(S^2) = \frac{2\sigma^4}{n-1}. $$
(4) We show that $S^2$ is a consistent estimator of $\sigma^2$. Chebyshev's inequality is utilized: $P(|S^2 - E(S^2)| \geq \epsilon) \leq V(S^2)/\epsilon^2$. Substituting $E(S^2)$ and $V(S^2)$, we obtain:
$$ P(|S^2 - \sigma^2| \geq \epsilon) \leq \frac{2\sigma^4}{(n-1)\epsilon^2} \to 0 \quad \text{as } n \to \infty. $$
Therefore, $S^2$ is consistent.
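A final illustrative simulation confirming $E(S^2) = \sigma^2$, $V(S^2) = 2\sigma^4/(n-1)$, and the $\chi^2(n-1)$ law of $(n-1)S^2/\sigma^2$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n, sigma2 = 8, 3.0
x = rng.normal(0.0, np.sqrt(sigma2), size=(200_000, n))
s2 = x.var(axis=1, ddof=1)                  # unbiased sample variance

print(s2.mean(), sigma2)                    # E(S^2) = sigma^2
print(s2.var(), 2 * sigma2**2 / (n - 1))    # V(S^2) = 2 sigma^4/(n-1)
print(stats.kstest((n - 1) * s2 / sigma2, "chi2", args=(n - 1,)))
```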