Exercises and Answers

1  The continuous type of random variable $X$ has the following density function:

$$f(x) = \begin{cases} a - x, & \text{if } 0 < x < a,\\ 0, & \text{otherwise.}\end{cases}$$

Answer the following questions.

(1) Find $a$.
(2) Obtain mean and variance of $X$.
(3) When $Y = X^2$, derive the density function of $Y$.

[Answer]

(1) From the property of the density function, i.e., $\int_{-\infty}^{\infty} f(x)\,dx = 1$, we need to have:

$$\int_{-\infty}^{\infty} f(x)\,dx = \int_0^a (a - x)\,dx = \Big[ax - \frac{1}{2}x^2\Big]_0^a = \frac{1}{2}a^2 = 1.$$

Therefore, $a = \sqrt{2}$ is obtained, taking into account $a > 0$.

(2) The definitions of mean and variance are given by $E(X) = \int x f(x)\,dx$ and $V(X) = \int (x-\mu)^2 f(x)\,dx$, where $\mu = E(X)$. Therefore, mean of $X$ is:

$$E(X) = \int x f(x)\,dx = \int_0^a x(a-x)\,dx = \Big[\frac{a}{2}x^2 - \frac{1}{3}x^3\Big]_0^a = \frac{1}{6}a^3 = \frac{\sqrt{2}}{3},$$

where $a = \sqrt{2}$ is substituted. Variance of $X$ is:

$$V(X) = \int (x-\mu)^2 f(x)\,dx = \int x^2 f(x)\,dx - \mu^2 = \int_0^a x^2(a-x)\,dx - \mu^2 = \Big[\frac{a}{3}x^3 - \frac{1}{4}x^4\Big]_0^a - \mu^2 = \frac{1}{12}a^4 - \mu^2 = \frac{1}{3} - \frac{2}{9} = \frac{1}{9}.$$
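The answers to (1) and (2) can be checked numerically; the sketch below (not part of the original text) integrates the density on a fine grid with the trapezoid rule:

```python
import numpy as np

# Numerical sanity check of Problem 1: with a = sqrt(2), the density
# f(x) = a - x on (0, a) integrates to one, with mean sqrt(2)/3 and variance 1/9.
a = np.sqrt(2.0)
x = np.linspace(0.0, a, 200_001)
f = a - x

def trap(g):
    # composite trapezoid rule on the grid x
    return float(np.sum((g[1:] + g[:-1]) / 2) * (x[1] - x[0]))

total = trap(f)                    # integral of the density, ~ 1
mean = trap(x * f)                 # E(X), ~ sqrt(2)/3
var = trap(x**2 * f) - mean**2     # V(X) = E(X^2) - mu^2, ~ 1/9
```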
(3) Let $f(x)$ be the density function of $X$ and $F(x)$ be the distribution function of $X$. And let $g(y)$ be the density function of $Y$ and $G(y)$ be the distribution function of $Y$. Using $Y = X^2$, we obtain, for $y > 0$:

$$G(y) = P(Y < y) = P(X^2 < y) = P(-\sqrt{y} < X < \sqrt{y}) = F(\sqrt{y}) - F(-\sqrt{y}) = F(\sqrt{y}),$$

where $F(-\sqrt{y}) = 0$ because $f(x) = 0$ for $x \le 0$. Moreover, from the relationship between the density and the distribution functions, we obtain the following:

$$g(y) = \frac{dG(y)}{dy} = \frac{dF(\sqrt{y})}{dy} = \frac{dF(x)}{dx}\bigg|_{x=\sqrt{y}}\,\frac{dx}{dy} = f(\sqrt{y})\,\frac{1}{2\sqrt{y}} = \frac{\sqrt{2}-\sqrt{y}}{2\sqrt{y}}, \qquad \text{for } 0 < y < 2.$$

The range of $y$ is obtained as: $0 < x < \sqrt{2} \;\Longrightarrow\; 0 < y < 2$.

2  The continuous type of random variable $X$ has the following density function:

$$f(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}x^2}.$$

Answer the following questions.

(1) Compute mean and variance of $X$.
(2) When $Y = X^2$, compute mean and variance of $Y$.
(3) When $Z = e^X$, obtain mean and variance of $Z$.

[Answer]

(1) The definitions of mean and variance are $E(X) = \int x f(x)\,dx$ and $V(X) = \int (x-\mu)^2 f(x)\,dx$, where $\mu = E(X)$. Therefore, mean of $X$ is:

$$E(X) = \int_{-\infty}^{\infty} x f(x)\,dx = \int_{-\infty}^{\infty} x\,\frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}x^2}\,dx = \frac{1}{\sqrt{2\pi}}\Big[-e^{-\frac{1}{2}x^2}\Big]_{-\infty}^{\infty} = 0.$$
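A Monte Carlo check of the transformed density in Problem 1 (3) is sketched below (an illustration, not from the original text): $X$ is sampled by inverting $F(x) = \sqrt{2}x - x^2/2$, and the first moment of $Y = X^2$ is compared with $E(X^2) = V(X) + \mu^2 = 1/9 + 2/9 = 1/3$:

```python
import numpy as np

# Sample X by inverse-CDF: solving sqrt(2)x - x^2/2 = u gives
# x = sqrt(2) - sqrt(2 - 2u) on (0, sqrt(2)).
rng = np.random.default_rng(0)
u = rng.random(1_000_000)
x = np.sqrt(2.0) - np.sqrt(2.0 - 2.0 * u)
y = x**2

mean_y = y.mean()   # should be close to E(X^2) = 1/3
```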
In the third equality, we utilize $\frac{d}{dx}e^{-\frac{1}{2}x^2} = -x\,e^{-\frac{1}{2}x^2}$. Variance of $X$ is:

$$V(X) = \int (x-\mu)^2 f(x)\,dx = \int x^2 f(x)\,dx - \mu^2 = \int_{-\infty}^{\infty} x\cdot x\,\frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}x^2}\,dx = \Big[-x\,\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}\Big]_{-\infty}^{\infty} + \int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}\,dx = 1,$$

using $\mu = 0$. In the fourth equality, the following formula is used:

$$\int_a^b h'(x)g(x)\,dx = \big[h(x)g(x)\big]_a^b - \int_a^b h(x)g'(x)\,dx,$$

where $g(x) = x$ and $h'(x) = x\,\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}$ are set, i.e., $h(x) = -\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}$. And in the first term of the fourth equality, we use $\lim_{x\to\pm\infty} x\,e^{-\frac{1}{2}x^2} = 0$. In the second term of the fourth equality, we utilize the property that the integration of the density function is equal to one.

(2) When $Y = X^2$, mean of $Y$ is:

$$E(Y) = E(X^2) = V(X) + \mu_x^2 = 1.$$

From (1), note that $V(X) = 1$ and $\mu_x = E(X) = 0$. Variance of $Y$ is $V(Y) = E\big((Y - \mu_y)^2\big) = E(Y^2) - \mu_y^2$, where $\mu_y = E(Y) = 1$:

$$E(Y^2) = E(X^4) = \int_{-\infty}^{\infty} x^3 \cdot x\,\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}\,dx = \Big[-x^3\,\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}\Big]_{-\infty}^{\infty} + 3\int_{-\infty}^{\infty} x^2\,\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}\,dx = 3E(X^2) = 3,$$

so that $V(Y) = E(Y^2) - \mu_y^2 = 3 - 1 = 2$.
In the sixth equality, the integration-by-parts formula above is utilized with $g(x) = x^3$ and $h'(x) = x\,\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}$, and in the first term of the sixth equality we use $\lim_{x\to\pm\infty} x^3 e^{-\frac{1}{2}x^2} = 0$.

(3) For $Z = e^X$, mean of $Z$ is:

$$E(Z) = E(e^X) = \int \frac{1}{\sqrt{2\pi}}\,e^{x} e^{-\frac{1}{2}x^2}\,dx = \int \frac{1}{\sqrt{2\pi}}\,e^{-\frac{1}{2}(x^2 - 2x)}\,dx = e^{\frac{1}{2}}\int \frac{1}{\sqrt{2\pi}}\,e^{-\frac{1}{2}(x-1)^2}\,dx = e^{\frac{1}{2}}.$$

In the last equality, $\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(x-1)^2}$ is a normal density with mean one and variance one, and accordingly its integration is equal to one. Variance of $Z$ is $V(Z) = E\big((Z-\mu_z)^2\big) = E(Z^2) - \mu_z^2$, where $\mu_z = E(Z) = e^{\frac{1}{2}}$:

$$E(Z^2) = E(e^{2X}) = \int \frac{1}{\sqrt{2\pi}}\,e^{-\frac{1}{2}(x^2 - 4x)}\,dx = e^2 \int \frac{1}{\sqrt{2\pi}}\,e^{-\frac{1}{2}(x-2)^2}\,dx = e^2,$$

which comes from the fact that $\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(x-2)^2}$ is a normal density with mean two and variance one, whose integration is equal to one. Therefore:

$$V(Z) = e^2 - e.$$

3  The continuous type of random variable $X$ has the following density function:

$$f(x) = \begin{cases} \dfrac{1}{\lambda}\, e^{-x/\lambda}, & \text{if } 0 < x,\\ 0, & \text{otherwise.}\end{cases}$$

Answer the following questions.
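The lognormal moments in (3) can be verified by simulation; a sketch (not from the original text):

```python
import numpy as np

# Monte Carlo check of Problem 2 (3): for X ~ N(0,1) and Z = e^X,
# E(Z) = e^{1/2} and V(Z) = e^2 - e.
rng = np.random.default_rng(1)
z = np.exp(rng.standard_normal(2_000_000))

mean_z = z.mean()   # ~ e^{1/2} ~ 1.6487
var_z = z.var()     # ~ e^2 - e ~ 4.6708
```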
(1) Compute mean and variance of $X$.
(2) Derive the moment-generating function of $X$.
(3) Let $X_1, X_2, \ldots, X_n$ be the random variables which are mutually independently distributed and have the density function shown above. Prove that the density function of $Y = X_1 + X_2 + \cdots + X_n$ is given by the chi-square distribution with $2n$ degrees of freedom when $\lambda = 2$. Note that the chi-square distribution with $m$ degrees of freedom is given by:

$$f(x) = \begin{cases} \dfrac{1}{2^{m/2}\Gamma(\frac{m}{2})}\, x^{\frac{m}{2}-1} e^{-\frac{x}{2}}, & \text{if } x > 0,\\ 0, & \text{otherwise.}\end{cases}$$

[Answer]

(1) Mean of $X$ is:

$$E(X) = \int x f(x)\,dx = \int_0^{\infty} x\,\frac{1}{\lambda}e^{-x/\lambda}\,dx = \Big[-x\,e^{-x/\lambda}\Big]_0^{\infty} + \int_0^{\infty} e^{-x/\lambda}\,dx = \Big[-\lambda\,e^{-x/\lambda}\Big]_0^{\infty} = \lambda.$$

In the third equality, the following formula is used:

$$\int_a^b h'(x)g(x)\,dx = \big[h(x)g(x)\big]_a^b - \int_a^b h(x)g'(x)\,dx,$$

where $g(x) = x$ and $h'(x) = \frac{1}{\lambda}e^{-x/\lambda}$ are set. And we utilize $\lim_{x\to\infty} x\,e^{-x/\lambda} = 0$ and $\lim_{x\to\infty} e^{-x/\lambda} = 0$. Variance of $X$ is $V(X) = E(X^2) - \mu^2$, where $\mu = E(X) = \lambda$:

$$E(X^2) = \int_0^{\infty} x^2\,\frac{1}{\lambda}e^{-x/\lambda}\,dx = \Big[-x^2 e^{-x/\lambda}\Big]_0^{\infty} + 2\int_0^{\infty} x\,e^{-x/\lambda}\,dx = 2\lambda\int_0^{\infty} x\,\frac{1}{\lambda}e^{-x/\lambda}\,dx = 2\lambda E(X) = 2\lambda^2.$$

Therefore:

$$V(X) = 2\lambda^2 - \lambda^2 = \lambda^2.$$
In the third equality above, the integration-by-parts formula is utilized with $g(x) = x^2$ and $h'(x) = \frac{1}{\lambda}e^{-x/\lambda}$, together with $\lim_{x\to\infty} x^2 e^{-x/\lambda} = 0$.

(2) The moment-generating function of $X$ is:

$$\phi(\theta) = E(e^{\theta X}) = \int_0^{\infty} e^{\theta x}\,\frac{1}{\lambda}e^{-x/\lambda}\,dx = \frac{1}{\lambda}\int_0^{\infty} e^{-(\frac{1}{\lambda}-\theta)x}\,dx = \frac{1/\lambda}{1/\lambda - \theta}\int_0^{\infty}\Big(\frac{1}{\lambda}-\theta\Big)e^{-(\frac{1}{\lambda}-\theta)x}\,dx = \frac{1}{1-\lambda\theta}.$$

In the last equality, since $(\frac{1}{\lambda}-\theta)e^{-(\frac{1}{\lambda}-\theta)x}$ is a density function ($\frac{1}{\lambda}$ in $f(x)$ is replaced by $\frac{1}{\lambda}-\theta$), its integration is one.

(3) We want to show that the moment-generating function of $Y$ is equivalent to that of a chi-square distribution with $2n$ degrees of freedom. Because $X_1, X_2, \ldots, X_n$ are mutually independently distributed, the moment-generating function of $X_i$, $\phi_i(\theta)$, is:

$$\phi_i(\theta) = \frac{1}{1-2\theta} = \phi(\theta),$$

which corresponds to the case $\lambda = 2$ of (2). For $\lambda = 2$, the moment-generating function of $Y = X_1 + X_2 + \cdots + X_n$, $\phi_y(\theta)$, is:

$$\phi_y(\theta) = E(e^{\theta Y}) = E(e^{\theta(X_1 + X_2 + \cdots + X_n)}) = E(e^{\theta X_1})E(e^{\theta X_2})\cdots E(e^{\theta X_n}) = \phi_1(\theta)\phi_2(\theta)\cdots\phi_n(\theta) = \big(\phi(\theta)\big)^n = \Big(\frac{1}{1-2\theta}\Big)^n.$$
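The claim that the sum behaves like $\chi^2(2n)$ can be spot-checked by simulation; a sketch (not from the original text), using $n = 5$ so the sum should have mean $2n = 10$ and variance $2\cdot 2n = 20$:

```python
import numpy as np

# Monte Carlo check of Problem 3 (3): the sum of n independent exponentials
# with lambda = 2 (i.e. mean 2) should behave like chi^2(2n).
rng = np.random.default_rng(2)
n = 5
y = rng.exponential(scale=2.0, size=(500_000, n)).sum(axis=1)

mean_y = y.mean()   # ~ 2n = 10
var_y = y.var()     # ~ 2 * 2n = 20
```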
A chi-square distribution with $m$ degrees of freedom is given by:

$$f(x) = \frac{1}{2^{m/2}\Gamma(\frac{m}{2})}\, x^{\frac{m}{2}-1} e^{-\frac{x}{2}}, \qquad \text{for } x > 0.$$

The moment-generating function of the above density function, $\phi_{\chi^2}(\theta)$, is:

$$\phi_{\chi^2}(\theta) = E(e^{\theta X}) = \int_0^{\infty} e^{\theta x}\,\frac{1}{2^{m/2}\Gamma(\frac{m}{2})}\, x^{\frac{m}{2}-1} e^{-\frac{x}{2}}\,dx = \int_0^{\infty} \frac{1}{2^{m/2}\Gamma(\frac{m}{2})}\, x^{\frac{m}{2}-1} e^{-\frac{1}{2}(1-2\theta)x}\,dx$$
$$= \int_0^{\infty} \frac{1}{2^{m/2}\Gamma(\frac{m}{2})}\Big(\frac{y}{1-2\theta}\Big)^{\frac{m}{2}-1} e^{-\frac{y}{2}}\,\frac{dy}{1-2\theta} = \Big(\frac{1}{1-2\theta}\Big)^{m/2}\int_0^{\infty} \frac{1}{2^{m/2}\Gamma(\frac{m}{2})}\, y^{\frac{m}{2}-1} e^{-\frac{y}{2}}\,dy = \Big(\frac{1}{1-2\theta}\Big)^{m/2}.$$

In the fourth equality, use $y = (1-2\theta)x$. In the sixth equality, since the function in the integration corresponds to the chi-square distribution with $m$ degrees of freedom, the integration is one. Thus, $\phi_y(\theta)$ is equivalent to $\phi_{\chi^2}(\theta)$ for $m = 2n$. That is, $\phi_y(\theta)$ is the moment-generating function of a chi-square distribution with $2n$ degrees of freedom. Therefore, $Y \sim \chi^2(2n)$.

4  The continuous type of random variable $X$ has the following density function:

$$f(x) = \begin{cases} 1, & \text{if } 0 < x < 1,\\ 0, & \text{otherwise.}\end{cases}$$

Answer the following questions.

(1) Compute mean and variance of $X$.
(2) When $Y = -2\log X$, derive the moment-generating function of $Y$. Note that the log represents the natural logarithm (i.e., $y = \log x$ is equivalent to $x = e^y$).
(3) Let $Y_1$ and $Y_2$ be the random variables which have the density function obtained in (2). Suppose that $Y_1$ is independent of $Y_2$. When $Z = Y_1 + Y_2$, compute the density function of $Z$.
[Answer]

(1) Mean of $X$ is:

$$E(X) = \int x f(x)\,dx = \int_0^1 x\,dx = \Big[\frac{1}{2}x^2\Big]_0^1 = \frac{1}{2}.$$

Variance of $X$ is:

$$V(X) = \int (x-\mu)^2 f(x)\,dx = \int_0^1 x^2\,dx - \mu^2 = \Big[\frac{1}{3}x^3\Big]_0^1 - \Big(\frac{1}{2}\Big)^2 = \frac{1}{3} - \frac{1}{4} = \frac{1}{12},$$

where $\mu = E(X) = \frac{1}{2}$.

(2) For $Y = -2\log X$, we obtain the moment-generating function of $Y$, $\phi_y(\theta)$:

$$\phi_y(\theta) = E(e^{\theta Y}) = E(e^{-2\theta\log X}) = E(X^{-2\theta}) = \int_0^1 x^{-2\theta}\,dx = \Big[\frac{x^{1-2\theta}}{1-2\theta}\Big]_0^1 = \frac{1}{1-2\theta}.$$

(3) Let $Y_1$ and $Y_2$ be the random variables which have the density function obtained from (2), and assume that $Y_1$ is independent of $Y_2$. For $Z = Y_1 + Y_2$, we want to have the density function of $Z$. The moment-generating function of $Z$, $\phi_z(\theta)$, is:

$$\phi_z(\theta) = E(e^{\theta Z}) = E(e^{\theta(Y_1+Y_2)}) = E(e^{\theta Y_1})E(e^{\theta Y_2}) = \big(\phi_y(\theta)\big)^2 = \Big(\frac{1}{1-2\theta}\Big)^2 = \Big(\frac{1}{1-2\theta}\Big)^{4/2},$$

which is equivalent to the moment-generating function of the chi-square distribution with 4 degrees of freedom. Therefore, $Z \sim \chi^2(4)$. Note that the chi-square density function with $n$ degrees of freedom is given by:

$$f(x) = \begin{cases} \dfrac{1}{2^{n/2}\Gamma(\frac{n}{2})}\, x^{\frac{n}{2}-1} e^{-\frac{x}{2}}, & \text{for } x > 0,\\ 0, & \text{otherwise,}\end{cases}$$

whose moment-generating function is $\phi(\theta) = \big(\frac{1}{1-2\theta}\big)^{n/2}$.
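The conclusion $Z \sim \chi^2(4)$ can be checked by simulation; a sketch (not from the original text):

```python
import numpy as np

# Monte Carlo check of Problem 4 (3): with U1, U2 uniform on (0,1),
# Z = -2 log U1 - 2 log U2 should behave like chi^2(4): mean 4, variance 8.
rng = np.random.default_rng(3)
u1 = rng.random(1_000_000)
u2 = rng.random(1_000_000)
z = -2.0 * np.log(u1) - 2.0 * np.log(u2)

mean_z = z.mean()   # ~ 4
var_z = z.var()     # ~ 8
```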
5  The continuous type of random variable $X$ has the following density function:

$$f(x) = \begin{cases} \dfrac{1}{2^{n/2}\Gamma(\frac{n}{2})}\, x^{\frac{n}{2}-1} e^{-\frac{x}{2}}, & \text{if } x > 0,\\ 0, & \text{otherwise.}\end{cases}$$

Answer the following questions. $\Gamma(a)$ is called the gamma function, defined as:

$$\Gamma(a) = \int_0^{\infty} x^{a-1} e^{-x}\,dx.$$

(1) What are mean and variance of $X$?
(2) Compute the moment-generating function of $X$.

[Answer]

(1) For mean:

$$E(X) = \int x f(x)\,dx = \int_0^{\infty} \frac{1}{2^{n/2}\Gamma(\frac{n}{2})}\, x^{\frac{n+2}{2}-1} e^{-\frac{x}{2}}\,dx = \frac{2^{\frac{n+2}{2}}\Gamma(\frac{n+2}{2})}{2^{n/2}\Gamma(\frac{n}{2})}\int_0^{\infty} \frac{1}{2^{\frac{n+2}{2}}\Gamma(\frac{n+2}{2})}\, x^{\frac{n+2}{2}-1} e^{-\frac{x}{2}}\,dx = \frac{2\,\frac{n}{2}\,\Gamma(\frac{n}{2})}{\Gamma(\frac{n}{2})} = n.$$

Note that $\Gamma(s+1) = s\Gamma(s)$, $\Gamma(1) = 1$, and $\Gamma(\frac{1}{2}) = \sqrt{\pi}$. Replacing $n$ by $n+2$ in the property of the density function, we have:

$$\int_0^{\infty} \frac{1}{2^{\frac{n+2}{2}}\Gamma(\frac{n+2}{2})}\, x^{\frac{n+2}{2}-1} e^{-\frac{x}{2}}\,dx = 1,$$

which is utilized in the last equality above. For variance, from $V(X) = E(X^2) - \mu^2$ we compute $E(X^2)$ as follows:

$$E(X^2) = \int x^2 f(x)\,dx = \int_0^{\infty} \frac{1}{2^{n/2}\Gamma(\frac{n}{2})}\, x^{\frac{n+4}{2}-1} e^{-\frac{x}{2}}\,dx = \frac{2^{\frac{n+4}{2}}\Gamma(\frac{n+4}{2})}{2^{n/2}\Gamma(\frac{n}{2})} = 4\,\frac{n+2}{2}\,\frac{n}{2} = n(n+2),$$
where $n$ is replaced by $n+4$ in the property of the density function. Therefore:

$$V(X) = n(n+2) - n^2 = 2n.$$

(2) The moment-generating function of $X$ is:

$$\phi(\theta) = E(e^{\theta X}) = \int_0^{\infty} e^{\theta x}\,\frac{1}{2^{n/2}\Gamma(\frac{n}{2})}\, x^{\frac{n}{2}-1}\exp\Big(-\frac{x}{2}\Big)\,dx = \int_0^{\infty} \frac{1}{2^{n/2}\Gamma(\frac{n}{2})}\, x^{\frac{n}{2}-1}\exp\Big(-\frac{1}{2}(1-2\theta)x\Big)\,dx$$
$$= \int_0^{\infty} \frac{1}{2^{n/2}\Gamma(\frac{n}{2})}\Big(\frac{y}{1-2\theta}\Big)^{\frac{n}{2}-1}\exp\Big(-\frac{y}{2}\Big)\,\frac{dy}{1-2\theta} = \Big(\frac{1}{1-2\theta}\Big)^{n/2}\int_0^{\infty} \frac{1}{2^{n/2}\Gamma(\frac{n}{2})}\, y^{\frac{n}{2}-1}\exp\Big(-\frac{y}{2}\Big)\,dy = \Big(\frac{1}{1-2\theta}\Big)^{n/2}.$$

Use $y = (1-2\theta)x$ in the fifth equality; note that $dy = (1-2\theta)\,dx$. In the seventh equality, the integration corresponds to the chi-square distribution with $n$ degrees of freedom, so it equals one.

6  The continuous type of random variables $X$ and $Y$ are mutually independent and assumed to be $X \sim N(0,1)$ and $Y \sim N(0,1)$. Define $U = X/Y$. Answer the following questions. When $X \sim N(0,1)$, the density function of $X$ is represented as:

$$f(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}x^2}.$$

(1) Derive the density function of $U$.
(2) Prove that the first moment of $U$ does not exist.

[Answer]

(1) The density of $U$ is obtained as follows. The densities of $X$ and $Y$ are:

$$f(x) = \frac{1}{\sqrt{2\pi}}\exp\Big(-\frac{1}{2}x^2\Big),\quad -\infty < x < \infty, \qquad g(y) = \frac{1}{\sqrt{2\pi}}\exp\Big(-\frac{1}{2}y^2\Big),\quad -\infty < y < \infty.$$
Since $X$ is independent of $Y$, the joint density of $X$ and $Y$ is:

$$h(x,y) = f(x)g(y) = \frac{1}{2\pi}\exp\Big(-\frac{1}{2}(x^2 + y^2)\Big).$$

Using $u = x/y$ and $v = y$, the transformation of the variables is performed. For $x = uv$ and $y = v$, we have the Jacobian:

$$J = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v}\\[6pt] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \end{vmatrix} = \begin{vmatrix} v & u\\ 0 & 1 \end{vmatrix} = v.$$

Using transformation of variables, the joint density of $U$ and $V$, $s(u,v)$, is given by:

$$s(u,v) = h(uv, v)\,|J| = \frac{1}{2\pi}\exp\Big(-\frac{1}{2}v^2(1+u^2)\Big)\,|v|.$$

The marginal density of $U$ is:

$$p(u) = \int_{-\infty}^{\infty} s(u,v)\,dv = 2\int_0^{\infty} \frac{1}{2\pi}\,v\exp\Big(-\frac{1}{2}v^2(1+u^2)\Big)\,dv = \frac{1}{\pi}\Big[-\frac{1}{1+u^2}\exp\Big(-\frac{1}{2}v^2(1+u^2)\Big)\Big]_0^{\infty} = \frac{1}{\pi(1+u^2)},$$

which corresponds to the Cauchy distribution.

(2) We prove that the first moment of $U$ is infinity:

$$E(|U|) = \int_{-\infty}^{\infty} |u|\,\frac{1}{\pi(1+u^2)}\,du = 2\int_0^{\infty} \frac{u}{\pi(1+u^2)}\,du = \frac{1}{\pi}\int_1^{\infty} \frac{1}{x}\,dx = \frac{1}{\pi}\Big[\log x\Big]_1^{\infty} = \infty,$$

where $x = 1 + u^2$ is used; note that $\frac{dx}{du} = 2u$. For $0 < u < \infty$, the range of $x = 1 + u^2$ is given by $1 < x < \infty$. Therefore, the first moment of $U$ does not exist.
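Because the first moment does not exist, a simulation check should use quantiles rather than the sample mean; a sketch (not from the original text) comparing the ratio's quartiles with those of the standard Cauchy distribution:

```python
import numpy as np

# Monte Carlo check of Problem 6: U = X/Y for independent standard normals
# is standard Cauchy, whose quartiles are -1, 0, 1.
rng = np.random.default_rng(4)
x = rng.standard_normal(1_000_000)
y = rng.standard_normal(1_000_000)
u = x / y

q1, med, q3 = np.quantile(u, [0.25, 0.5, 0.75])
```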
7  The continuous type of random variables $X$ and $Y$ has the following joint density function:

$$f(x,y) = \begin{cases} x + y, & \text{if } 0 < x < 1 \text{ and } 0 < y < 1,\\ 0, & \text{otherwise.}\end{cases}$$

Answer the following questions.

(1) Compute the expectation of $XY$.
(2) Obtain the correlation coefficient between $X$ and $Y$.
(3) What is the marginal density function of $X$?

[Answer]

(1) The expectation of $XY$ is:

$$E(XY) = \int\!\!\int xy\,f(x,y)\,dx\,dy = \int_0^1\!\!\int_0^1 xy(x+y)\,dx\,dy = \int_0^1 \Big[\frac{1}{3}yx^3 + \frac{1}{2}y^2x^2\Big]_0^1\,dy = \int_0^1 \Big(\frac{1}{3}y + \frac{1}{2}y^2\Big)dy = \Big[\frac{1}{6}y^2 + \frac{1}{6}y^3\Big]_0^1 = \frac{1}{3}.$$

(2) We want to obtain the correlation coefficient between $X$ and $Y$, which is represented as $\rho = \mathrm{Cov}(X,Y)/\sqrt{V(X)V(Y)}$. Therefore, $E(X)$, $E(Y)$, $V(X)$, $V(Y)$ and $\mathrm{Cov}(X,Y)$ have to be computed. $E(X)$ is:

$$E(X) = \int\!\!\int x\,f(x,y)\,dx\,dy = \int_0^1\!\!\int_0^1 x(x+y)\,dx\,dy = \int_0^1 \Big[\frac{1}{3}x^3 + \frac{1}{2}yx^2\Big]_0^1\,dy = \int_0^1 \Big(\frac{1}{3} + \frac{1}{2}y\Big)dy = \Big[\frac{1}{3}y + \frac{1}{4}y^2\Big]_0^1 = \frac{7}{12}.$$
In the case where $x$ and $y$ are exchanged, the functional form of $f(x,y)$ is unchanged. Therefore, $E(Y) = E(X) = \frac{7}{12}$. For $V(X)$:

$$V(X) = E\big((X-\mu_x)^2\big) = E(X^2) - \mu_x^2 = \int_0^1\!\!\int_0^1 x^2(x+y)\,dx\,dy - \mu_x^2 = \int_0^1 \Big[\frac{1}{4}x^4 + \frac{1}{3}yx^3\Big]_0^1\,dy - \mu_x^2 = \int_0^1 \Big(\frac{1}{4} + \frac{1}{3}y\Big)dy - \mu_x^2 = \frac{5}{12} - \Big(\frac{7}{12}\Big)^2 = \frac{11}{144},$$

where $\mu_x = E(X) = \frac{7}{12}$. For $V(Y)$, by the same symmetry, $V(Y) = V(X) = \frac{11}{144}$. For $\mathrm{Cov}(X,Y)$:

$$\mathrm{Cov}(X,Y) = E\big((X-\mu_x)(Y-\mu_y)\big) = E(XY) - \mu_x\mu_y = \frac{1}{3} - \frac{7}{12}\cdot\frac{7}{12} = -\frac{1}{144},$$

where $\mu_x = E(X) = \frac{7}{12}$ and $\mu_y = E(Y) = \frac{7}{12}$. Therefore, $\rho$ is:

$$\rho = \frac{\mathrm{Cov}(X,Y)}{\sqrt{V(X)V(Y)}} = \frac{-1/144}{\sqrt{(11/144)(11/144)}} = -\frac{1}{11}.$$

(3) The marginal density function of $X$, $f_x(x)$, is:

$$f_x(x) = \int f(x,y)\,dy = \int_0^1 (x+y)\,dy = \Big[xy + \frac{1}{2}y^2\Big]_0^1 = x + \frac{1}{2},$$

for $0 < x < 1$.
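The moments above can be checked by iterated numerical integration of the joint density; a sketch (not from the original text):

```python
import numpy as np

# Numerical check of Problem 7: integrating f(x,y) = x + y over the unit
# square with the trapezoid rule should reproduce rho = -1/11.
t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]
x, y = np.meshgrid(t, t, indexing="ij")
f = x + y

def trap0(g):
    # trapezoid rule along the leading axis
    return np.sum((g[1:] + g[:-1]) / 2, axis=0) * dt

def integ2(g):
    # integrate over both coordinates
    return float(trap0(trap0(g)))

exy = integ2(x * y * f)           # E(XY) = 1/3
ex = integ2(x * f)                # E(X) = 7/12 (= E(Y) by symmetry)
vx = integ2(x**2 * f) - ex**2     # V(X) = 11/144 (= V(Y))
cov = exy - ex * ex               # Cov(X,Y) = -1/144
rho = cov / vx                    # -1/11
```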
8  The discrete type of random variable $X$ has the following probability function:

$$f(x) = \frac{e^{-\lambda}\lambda^x}{x!}, \qquad x = 0, 1, 2, \ldots.$$

Answer the following questions.

(1) Prove $\sum_x f(x) = 1$.
(2) Compute the moment-generating function of $X$.
(3) From the moment-generating function, obtain mean and variance of $X$.

[Answer]

(1) We can show $\sum_x f(x) = 1$ as:

$$\sum_x f(x) = \sum_{x=0}^{\infty} \frac{e^{-\lambda}\lambda^x}{x!} = e^{-\lambda}\sum_{x=0}^{\infty} \frac{\lambda^x}{x!} = e^{-\lambda}e^{\lambda} = 1.$$

Note that $e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}$, because we have $f^{(k)}(x) = e^x$ for $f(x) = e^x$. As shown in the Appendix, the formula of the Taylor series expansion is:

$$f(x) = \sum_{k=0}^{\infty} \frac{1}{k!} f^{(k)}(x_0)(x - x_0)^k.$$

The Taylor series expansion around $x_0 = 0$ is:

$$f(x) = \sum_{k=0}^{\infty} \frac{1}{k!} f^{(k)}(0)\,x^k = \sum_{k=0}^{\infty} \frac{1}{k!}\,x^k.$$

Here, replace $x$ by $\lambda$ and $k$ by $x$.

(2) The moment-generating function of $X$ is:

$$\phi(\theta) = E(e^{\theta X}) = \sum_{x=0}^{\infty} e^{\theta x}\,\frac{e^{-\lambda}\lambda^x}{x!} = e^{-\lambda}\sum_{x=0}^{\infty} \frac{(e^{\theta}\lambda)^x}{x!} = e^{-\lambda}\exp(e^{\theta}\lambda) = \exp\big(\lambda(e^{\theta}-1)\big).$$

Note that $\sum_{x=0}^{\infty} \frac{(e^{\theta}\lambda)^x}{x!} = \exp(e^{\theta}\lambda)$.
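The moment-generating function in (2) can be checked against a simulated expectation; a sketch (not from the original text), with illustrative values of $\lambda$ and $\theta$:

```python
import numpy as np

# Monte Carlo check of Problem 8 (2): for X ~ Poisson(lambda),
# E(e^{theta X}) should match exp(lambda (e^theta - 1)).
rng = np.random.default_rng(5)
lam, theta = 3.0, 0.4
x = rng.poisson(lam, 1_000_000)

mgf_mc = np.exp(theta * x).mean()
mgf_formula = np.exp(lam * (np.exp(theta) - 1.0))
mean_x, var_x = x.mean(), x.var()   # both should be close to lambda
```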
(3) Based on the moment-generating function, we obtain mean and variance of $X$. For mean, because of $\phi(\theta) = \exp\big(\lambda(e^{\theta}-1)\big)$, $\phi'(\theta) = \lambda e^{\theta}\exp\big(\lambda(e^{\theta}-1)\big)$ and $E(X) = \phi'(0)$, we obtain:

$$E(X) = \phi'(0) = \lambda.$$

For variance, from $V(X) = E(X^2) - (E(X))^2$, we obtain $E(X^2)$. Note that $E(X^2) = \phi''(0)$ and:

$$\phi''(\theta) = (1 + \lambda e^{\theta})\,\lambda e^{\theta}\exp\big(\lambda(e^{\theta}-1)\big).$$

Therefore:

$$V(X) = E(X^2) - (E(X))^2 = \phi''(0) - (\phi'(0))^2 = (1+\lambda)\lambda - \lambda^2 = \lambda.$$

9  $X_1, X_2, \ldots, X_n$ are mutually independently and normally distributed with mean $\mu$ and variance $\sigma^2$, where the density function is given by:

$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{1}{2\sigma^2}(x-\mu)^2}.$$

Then, answer the following questions.

(1) Obtain the maximum likelihood estimators of mean $\mu$ and variance $\sigma^2$.
(2) Check whether the maximum likelihood estimator of $\sigma^2$ is unbiased. If it is not unbiased, obtain an unbiased estimator of $\sigma^2$. (Hint: use the maximum likelihood estimator.)
(3) We want to test the null hypothesis $H_0\colon \mu = \mu_0$ by the likelihood ratio test. Obtain the test statistic and explain the testing procedure.

[Answer]

(1) The joint density is:

$$f(x_1, x_2, \ldots, x_n; \mu, \sigma^2) = \prod_{i=1}^n f(x_i; \mu, \sigma^2) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma^2}}\exp\Big(-\frac{1}{2\sigma^2}(x_i-\mu)^2\Big) = (2\pi\sigma^2)^{-n/2}\exp\Big(-\frac{1}{2\sigma^2}\sum_{i=1}^n (x_i-\mu)^2\Big) \equiv l(\mu, \sigma^2).$$
Taking the logarithm, we have:

$$\log l(\mu,\sigma^2) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n (x_i-\mu)^2.$$

The derivatives of the log-likelihood function $\log l(\mu,\sigma^2)$ with respect to $\mu$ and $\sigma^2$ are set to be zero:

$$\frac{\partial \log l(\mu,\sigma^2)}{\partial\mu} = \frac{1}{\sigma^2}\sum_{i=1}^n (x_i-\mu) = 0, \qquad \frac{\partial \log l(\mu,\sigma^2)}{\partial\sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^n (x_i-\mu)^2 = 0.$$

Solving the two equations, we have the solution of $(\mu, \sigma^2)$, denoted by $(\hat\mu, \hat\sigma^2)$:

$$\hat\mu = \frac{1}{n}\sum_{i=1}^n x_i = \bar{x}, \qquad \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})^2.$$

Therefore, the maximum likelihood estimators of $\mu$ and $\sigma^2$, $(\hat\mu, \hat\sigma^2)$, are as follows:

$$\bar{X}, \qquad S^2 \equiv \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})^2.$$

(2) Take the expectation to check whether $S^2$ is unbiased:

$$E(S^2) = E\Big(\frac{1}{n}\sum_{i=1}^n (X_i-\bar{X})^2\Big) = \frac{1}{n}E\Big(\sum_{i=1}^n \big((X_i-\mu)-(\bar{X}-\mu)\big)^2\Big) = \frac{1}{n}E\Big(\sum_{i=1}^n \big((X_i-\mu)^2 - 2(X_i-\mu)(\bar{X}-\mu) + (\bar{X}-\mu)^2\big)\Big)$$
$$= \frac{1}{n}E\Big(\sum_{i=1}^n (X_i-\mu)^2 - 2(\bar{X}-\mu)\sum_{i=1}^n (X_i-\mu) + n(\bar{X}-\mu)^2\Big) = \frac{1}{n}E\Big(\sum_{i=1}^n (X_i-\mu)^2 - 2n(\bar{X}-\mu)^2 + n(\bar{X}-\mu)^2\Big)$$
$$= \frac{1}{n}E\Big(\sum_{i=1}^n (X_i-\mu)^2 - n(\bar{X}-\mu)^2\Big) = \frac{1}{n}\sum_{i=1}^n E\big((X_i-\mu)^2\big) - E\big((\bar{X}-\mu)^2\big) = \frac{1}{n}\sum_{i=1}^n V(X_i) - V(\bar{X}) = \sigma^2 - \frac{\sigma^2}{n} = \frac{n-1}{n}\sigma^2.$$

Therefore, $S^2$ is not unbiased. Based on $S^2$, we obtain the unbiased estimator of $\sigma^2$. Multiplying $n/(n-1)$ on both sides of $E(S^2) = \sigma^2(n-1)/n$, we obtain:

$$E\Big(\frac{n}{n-1}S^2\Big) = \sigma^2.$$

Therefore, the unbiased estimator of $\sigma^2$ is:

$$\frac{n}{n-1}S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X})^2.$$

(3) The likelihood ratio is defined as:

$$\lambda = \frac{\max_{\sigma^2} l(\mu_0, \sigma^2)}{\max_{\mu,\sigma^2} l(\mu, \sigma^2)} = \frac{l(\mu_0, \tilde\sigma^2)}{l(\hat\mu, \hat\sigma^2)}.$$

Since the number of restrictions is one, we have $-2\log\lambda \sim \chi^2(1)$. $l(\mu, \sigma^2)$ is given by:

$$l(\mu, \sigma^2) = (2\pi\sigma^2)^{-n/2}\exp\Big(-\frac{1}{2\sigma^2}\sum_{i=1}^n (x_i-\mu)^2\Big).$$

Taking the logarithm, $\log l(\mu, \sigma^2)$ is:

$$\log l(\mu, \sigma^2) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n (x_i-\mu)^2.$$
On the numerator, under the restriction $\mu = \mu_0$, $\log l(\mu_0, \sigma^2)$ is maximized with respect to $\sigma^2$ as follows:

$$\frac{\partial \log l(\mu_0,\sigma^2)}{\partial\sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^n (x_i-\mu_0)^2 = 0.$$

The solution of $\sigma^2$ is $\tilde\sigma^2$, which is represented as:

$$\tilde\sigma^2 = \frac{1}{n}\sum_{i=1}^n (x_i-\mu_0)^2.$$

Then, $l(\mu_0, \tilde\sigma^2)$ is:

$$l(\mu_0, \tilde\sigma^2) = (2\pi\tilde\sigma^2)^{-n/2}\exp\Big(-\frac{1}{2\tilde\sigma^2}\sum_{i=1}^n (x_i-\mu_0)^2\Big) = (2\pi\tilde\sigma^2)^{-n/2}\exp\Big(-\frac{n}{2}\Big).$$

On the denominator, from question (1), we have:

$$\hat\mu = \frac{1}{n}\sum_{i=1}^n x_i, \qquad \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (x_i-\hat\mu)^2.$$

Therefore, $l(\hat\mu, \hat\sigma^2)$ is:

$$l(\hat\mu, \hat\sigma^2) = (2\pi\hat\sigma^2)^{-n/2}\exp\Big(-\frac{1}{2\hat\sigma^2}\sum_{i=1}^n (x_i-\hat\mu)^2\Big) = (2\pi\hat\sigma^2)^{-n/2}\exp\Big(-\frac{n}{2}\Big).$$

The likelihood ratio is:

$$\lambda = \frac{l(\mu_0,\tilde\sigma^2)}{l(\hat\mu,\hat\sigma^2)} = \frac{(2\pi\tilde\sigma^2)^{-n/2}\exp(-n/2)}{(2\pi\hat\sigma^2)^{-n/2}\exp(-n/2)} = \Big(\frac{\tilde\sigma^2}{\hat\sigma^2}\Big)^{-n/2}.$$

As $n$ goes to infinity, we obtain:

$$-2\log\lambda = n\big(\log\tilde\sigma^2 - \log\hat\sigma^2\big) \sim \chi^2(1).$$

When $-2\log\lambda > \chi^2_{\alpha}(1)$, the null hypothesis $H_0\colon \mu = \mu_0$ is rejected at the significance level $\alpha$, where $\chi^2_{\alpha}(1)$ denotes the $100\alpha$ percent point of the chi-square distribution with one degree of freedom.

10  Answer the following questions.
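The likelihood-ratio statistic above is easy to compute on data; the sketch below (an illustration on simulated data, with parameter values that are assumptions, not from the original text) shows that the statistic is small when $H_0$ is true and exceeds $\chi^2_{0.05}(1) = 3.841$ when $H_0$ is badly false:

```python
import numpy as np

def lr_statistic(x, mu0):
    # -2 log lambda = n (log sigma_tilde^2 - log sigma_hat^2)
    n = len(x)
    s2_hat = np.mean((x - x.mean())**2)   # unrestricted MLE of sigma^2
    s2_til = np.mean((x - mu0)**2)        # restricted MLE under H0: mu = mu0
    return n * (np.log(s2_til) - np.log(s2_hat))

rng = np.random.default_rng(6)
x = rng.normal(loc=5.0, scale=2.0, size=200)

stat_true = lr_statistic(x, 5.0)    # H0 true: should be small
stat_false = lr_statistic(x, 7.0)   # H0 false: should exceed 3.841
```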
(1) The discrete type of random variable $X$ is assumed to be Bernoulli. The Bernoulli distribution is given by:

$$f(x) = p^x(1-p)^{1-x}, \qquad x = 0, 1.$$

Let $X_1, X_2, \ldots, X_n$ be random variables drawn from the Bernoulli trials. Compute the maximum likelihood estimator of $p$.

(2) Let $Y$ be a random variable from a binomial distribution, denoted by $f(y)$, which is represented as:

$$f(y) = {}_n C_y\, p^y (1-p)^{n-y}, \qquad y = 0, 1, 2, \ldots, n.$$

Then, prove that $Y/n$ goes to $p$ as $n$ is large.

(3) For the random variable $Y$ in question (2), let us define:

$$Z_n = \frac{Y - np}{\sqrt{np(1-p)}}.$$

Then, show that $Z_n$ goes to a standard normal distribution as $n$ is large.

(4) The continuous type of random variable $X$ has the following density function:

$$f(x) = \begin{cases} \dfrac{1}{2^{n/2}\Gamma(\frac{n}{2})}\, x^{\frac{n}{2}-1} e^{-\frac{x}{2}}, & \text{if } x > 0,\\ 0, & \text{otherwise,}\end{cases}$$

where $\Gamma(a)$ denotes the gamma function, i.e., $\Gamma(a) = \int_0^{\infty} x^{a-1}e^{-x}\,dx$. Then, show that $X/n$ approaches one when $n \to \infty$.

[Answer]

(1) When $X$ is a Bernoulli random variable, the probability function of $X$ is given by:

$$f(x; p) = p^x(1-p)^{1-x}, \qquad x = 0, 1.$$
The joint probability function of $X_1, X_2, \ldots, X_n$ is:

$$f(x_1, x_2, \ldots, x_n; p) = \prod_{i=1}^n f(x_i; p) = \prod_{i=1}^n p^{x_i}(1-p)^{1-x_i} = p^{\sum_i x_i}(1-p)^{n-\sum_i x_i} \equiv l(p).$$

Take the logarithm of $l(p)$:

$$\log l(p) = \Big(\sum_i x_i\Big)\log(p) + \Big(n - \sum_i x_i\Big)\log(1-p).$$

The derivative of the log-likelihood function $\log l(p)$ with respect to $p$ is set to be zero:

$$\frac{d\log l(p)}{dp} = \frac{\sum_i x_i}{p} - \frac{n-\sum_i x_i}{1-p} = \frac{\sum_i x_i - np}{p(1-p)} = 0.$$

Solving the above equation, we have:

$$p = \frac{1}{n}\sum_i x_i = \bar{x}.$$

Therefore, the maximum likelihood estimator of $p$ is:

$$\hat{p} = \frac{1}{n}\sum_i X_i = \bar{X}.$$

(2) Mean and variance of $Y$ are:

$$E(Y) = np, \qquad V(Y) = np(1-p).$$

Therefore, we have:

$$E\Big(\frac{Y}{n}\Big) = \frac{1}{n}E(Y) = p, \qquad V\Big(\frac{Y}{n}\Big) = \frac{1}{n^2}V(Y) = \frac{p(1-p)}{n}.$$

Chebyshev's inequality indicates that for a random variable $X$ and a nonnegative function $g(x)$ we have:

$$P(g(X) \ge k) \le \frac{E(g(X))}{k},$$

where $k > 0$.
Here, when $g(X) = (X - E(X))^2$ and $k = \epsilon^2$ are set, we can rewrite as:

$$P(|X - E(X)| \ge \epsilon) \le \frac{V(X)}{\epsilon^2},$$

where $\epsilon > 0$. Replacing $X$ by $\frac{Y}{n}$, we apply Chebyshev's inequality:

$$P\Big(\Big|\frac{Y}{n} - E\Big(\frac{Y}{n}\Big)\Big| \ge \epsilon\Big) \le \frac{V(Y/n)}{\epsilon^2}.$$

That is, as $n \to \infty$,

$$P\Big(\Big|\frac{Y}{n} - p\Big| \ge \epsilon\Big) \le \frac{p(1-p)}{n\epsilon^2} \longrightarrow 0.$$

Therefore, we obtain $\dfrac{Y}{n} \longrightarrow p$.

(3) Let $X_1, X_2, \ldots, X_n$ be Bernoulli random variables, where $P(X_i = x) = p^x(1-p)^{1-x}$ for $x = 0, 1$. Define $Y = X_1 + X_2 + \cdots + X_n$. Because $Y$ has a binomial distribution, $Y/n$ is taken as the sample mean from $X_1, X_2, \ldots, X_n$, i.e., $Y/n = (1/n)\sum_{i=1}^n X_i$. Therefore, using $E(Y/n) = p$ and $V(Y/n) = p(1-p)/n$, by the central limit theorem, as $n \to \infty$, we have:

$$\frac{Y/n - p}{\sqrt{p(1-p)/n}} \longrightarrow N(0, 1).$$

Moreover,

$$Z_n = \frac{Y - np}{\sqrt{np(1-p)}} = \frac{Y/n - p}{\sqrt{p(1-p)/n}}.$$

Therefore, $Z_n \longrightarrow N(0, 1)$.

(4) When $X \sim \chi^2(n)$, we have $E(X) = n$ and $V(X) = 2n$. Therefore, $E(X/n) = 1$ and $V(X/n) = 2/n$.
Apply Chebyshev's inequality. Then, we have:

$$P\Big(\Big|\frac{X}{n} - E\Big(\frac{X}{n}\Big)\Big| \ge \epsilon\Big) \le \frac{V(X/n)}{\epsilon^2},$$

where $\epsilon > 0$. That is, as $n \to \infty$, we have:

$$P\Big(\Big|\frac{X}{n} - 1\Big| \ge \epsilon\Big) \le \frac{2}{n\epsilon^2} \longrightarrow 0.$$

Therefore, $\dfrac{X}{n} \longrightarrow 1$.

11  Consider $n$ random variables $X_1, X_2, \ldots, X_n$, which are mutually independently and exponentially distributed. Note that the exponential distribution is given by:

$$f(x) = \lambda e^{-\lambda x}, \qquad x > 0.$$

Then, answer the following questions.

(1) Let $\hat\lambda$ be the maximum likelihood estimator of $\lambda$. Obtain $\hat\lambda$.
(2) When $n$ is large enough, obtain mean and variance of $\hat\lambda$.

[Answer]

(1) Since $X_1, \ldots, X_n$ are mutually independently and exponentially distributed, the likelihood function $l(\lambda)$ is written as:

$$l(\lambda) = \prod_{i=1}^n f(x_i) = \prod_{i=1}^n \lambda e^{-\lambda x_i} = \lambda^n e^{-\lambda\sum_i x_i}.$$

The log-likelihood function is:

$$\log l(\lambda) = n\log(\lambda) - \lambda\sum_{i=1}^n x_i.$$

We want the $\lambda$ which maximizes $\log l(\lambda)$, solving the following equation:

$$\frac{d\log l(\lambda)}{d\lambda} = \frac{n}{\lambda} - \sum_{i=1}^n x_i = 0,$$
and replacing $x_i$ by $X_i$, the maximum likelihood estimator of $\lambda$, denoted by $\hat\lambda$, is:

$$\hat\lambda = \frac{n}{\sum_{i=1}^n X_i} = \frac{1}{\bar{X}}.$$

(2) $X_1, X_2, \ldots, X_n$ are mutually independent. Let $f(x_i; \lambda)$ be the density function of $X_i$. For the maximum likelihood estimator of $\lambda$, i.e., $\hat\lambda_n$, as $n \to \infty$, we have the following property:

$$\sqrt{n}\,(\hat\lambda_n - \lambda) \longrightarrow N\big(0, \sigma^2(\lambda)\big), \qquad \text{where } \sigma^2(\lambda) = \frac{1}{E\Big(\big(\frac{d\log f(X;\lambda)}{d\lambda}\big)^2\Big)}.$$

Therefore, we obtain $\sigma^2(\lambda)$. The expectation in $\sigma^2(\lambda)$ is:

$$E\Big(\Big(\frac{d\log f(X;\lambda)}{d\lambda}\Big)^2\Big) = E\Big(\Big(\frac{1}{\lambda} - X\Big)^2\Big) = \frac{1}{\lambda^2} - \frac{2}{\lambda}E(X) + E(X^2) = \frac{1}{\lambda^2} - \frac{2}{\lambda^2} + \frac{2}{\lambda^2} = \frac{1}{\lambda^2},$$

where $E(X)$ and $E(X^2)$ are:

$$E(X) = \frac{1}{\lambda}, \qquad E(X^2) = \frac{2}{\lambda^2}.$$

Therefore, we have:

$$\sigma^2(\lambda) = \frac{1}{E\Big(\big(\frac{d\log f(X;\lambda)}{d\lambda}\big)^2\Big)} = \lambda^2.$$

As $n$ is large, $\hat\lambda_n$ approximately has the following distribution:

$$\hat\lambda_n \sim N\Big(\lambda, \frac{\lambda^2}{n}\Big).$$

Thus, as $n$ goes to infinity, mean and variance are given by $\lambda$ and $\lambda^2/n$.
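The asymptotic mean and variance can be checked by replicating the estimator; a sketch (not from the original text), with illustrative values of $\lambda$ and $n$:

```python
import numpy as np

# Monte Carlo check of Problem 11 (2): over many replications,
# lambda_hat = 1/Xbar has mean ~ lambda and variance ~ lambda^2/n.
rng = np.random.default_rng(7)
lam, n, reps = 2.0, 400, 20_000
x = rng.exponential(scale=1.0 / lam, size=(reps, n))
lam_hat = 1.0 / x.mean(axis=1)

mean_hat = lam_hat.mean()   # ~ lambda = 2 (exact mean is lambda n/(n-1))
var_hat = lam_hat.var()     # ~ lambda^2/n = 0.01
```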
12  The $n$ random variables $X_1, X_2, \ldots, X_n$ are mutually independently distributed with mean $\mu$ and variance $\sigma^2$. Consider the following two estimators of $\mu$:

$$\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i, \qquad \tilde{X} = \frac{1}{2}(X_1 + X_n).$$

Then, answer the following questions.

(1) Is $\bar{X}$ unbiased? How about $\tilde{X}$?
(2) Which is more efficient, $\bar{X}$ or $\tilde{X}$?
(3) Examine whether $\bar{X}$ and $\tilde{X}$ are consistent.

[Answer]

(1) We check whether $\bar{X}$ and $\tilde{X}$ are unbiased:

$$E(\bar{X}) = E\Big(\frac{1}{n}\sum_{i=1}^n X_i\Big) = \frac{1}{n}\sum_{i=1}^n E(X_i) = \frac{1}{n}\,n\mu = \mu,$$
$$E(\tilde{X}) = \frac{1}{2}\big(E(X_1) + E(X_n)\big) = \frac{1}{2}(\mu + \mu) = \mu.$$

Thus, both are unbiased.

(2) We examine which is more efficient, $\bar{X}$ or $\tilde{X}$:

$$V(\bar{X}) = V\Big(\frac{1}{n}\sum_{i=1}^n X_i\Big) = \frac{1}{n^2}\sum_{i=1}^n V(X_i) = \frac{1}{n^2}\,n\sigma^2 = \frac{\sigma^2}{n},$$
$$V(\tilde{X}) = \frac{1}{4}\big(V(X_1) + V(X_n)\big) = \frac{1}{4}(\sigma^2 + \sigma^2) = \frac{\sigma^2}{2}.$$

Therefore, because $V(\bar{X}) < V(\tilde{X})$ when $n > 2$, $\bar{X}$ is more efficient than $\tilde{X}$.

(3) We check if $\bar{X}$ and $\tilde{X}$ are consistent. Apply Chebyshev's inequality. For $\bar{X}$:

$$P(|\bar{X} - E(\bar{X})| \ge \epsilon) \le \frac{V(\bar{X})}{\epsilon^2},$$

where $\epsilon > 0$. That is, when $n \to \infty$, we have:

$$P(|\bar{X} - \mu| \ge \epsilon) \le \frac{\sigma^2}{n\epsilon^2} \longrightarrow 0.$$
Therefore, we obtain $\bar{X} \longrightarrow \mu$. Next, for $\tilde{X}$, we have:

$$P(|\tilde{X} - E(\tilde{X})| \ge \epsilon) \le \frac{V(\tilde{X})}{\epsilon^2},$$

where $\epsilon > 0$. That is, when $n \to \infty$, the following equation is obtained:

$$P(|\tilde{X} - \mu| \ge \epsilon) \le \frac{\sigma^2}{2\epsilon^2},$$

whose upper bound does not shrink with $n$. $\bar{X}$ is a consistent estimator of $\mu$, but $\tilde{X}$ is not consistent.

13  The 9 random samples:

$$21 \quad 23 \quad 32 \quad 20 \quad 36 \quad 27 \quad 26 \quad 28 \quad 30$$

are obtained from the normal population $N(\mu, \sigma^2)$. Then, answer the following questions.

(1) Obtain the unbiased estimates of $\mu$ and $\sigma^2$.
(2) Obtain both 90 and 95 percent confidence intervals for $\mu$.
(3) Test the null hypothesis $H_0\colon \mu = 24$ against the alternative hypothesis $H_1\colon \mu > 24$ at the significance level 0.10. How about 0.05?

[Answer]

(1) The unbiased estimators of $\mu$ and $\sigma^2$, denoted by $\bar{X}$ and $S^2$, are given by:

$$\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i, \qquad S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X})^2.$$

The unbiased estimates of $\mu$ and $\sigma^2$ are:

$$\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i, \qquad s^2 = \frac{1}{n-1}\sum_{i=1}^n (x_i - \bar{x})^2.$$
Therefore,

$$\bar{x} = \frac{1}{9}(21 + 23 + 32 + 20 + 36 + 27 + 26 + 28 + 30) = 27,$$
$$s^2 = \frac{1}{8}\big((21-27)^2 + (23-27)^2 + (32-27)^2 + (20-27)^2 + (36-27)^2 + (27-27)^2 + (26-27)^2 + (28-27)^2 + (30-27)^2\big)$$
$$= \frac{1}{8}(36 + 16 + 25 + 49 + 81 + 0 + 1 + 1 + 9) = 27.25.$$

(2) We obtain the confidence intervals of $\mu$. The following sample distribution is utilized:

$$\frac{\bar{X} - \mu}{S/\sqrt{n}} \sim t(n-1).$$

Therefore,

$$P\Big(\Big|\frac{\bar{X} - \mu}{S/\sqrt{n}}\Big| < t_{\alpha/2}(n-1)\Big) = 1 - \alpha,$$

where $t_{\alpha/2}(n-1)$ denotes the $100\cdot\alpha/2$ percent point of the $t$ distribution, which is obtained given probability $\alpha$ and $n-1$ degrees of freedom. Therefore, we have:

$$P\Big(\bar{X} - t_{\alpha/2}(n-1)\frac{S}{\sqrt{n}} < \mu < \bar{X} + t_{\alpha/2}(n-1)\frac{S}{\sqrt{n}}\Big) = 1 - \alpha.$$

Replacing $\bar{X}$ and $S$ by $\bar{x}$ and $s$, the $100(1-\alpha)$ percent confidence interval of $\mu$ is:

$$\Big(\bar{x} - t_{\alpha/2}(n-1)\frac{s}{\sqrt{n}},\ \bar{x} + t_{\alpha/2}(n-1)\frac{s}{\sqrt{n}}\Big).$$

Since $\bar{x} = 27$, $s^2 = 27.25$, $n = 9$, $t_{0.05}(8) = 1.860$ and $t_{0.025}(8) = 2.306$, the 90 percent confidence interval of $\mu$ is:

$$\Big(27 - 1.860\,\frac{\sqrt{27.25}}{\sqrt{9}},\ 27 + 1.860\,\frac{\sqrt{27.25}}{\sqrt{9}}\Big) = (23.76,\ 30.24),$$

and the 95 percent confidence interval of $\mu$ is:

$$\Big(27 - 2.306\,\frac{\sqrt{27.25}}{\sqrt{9}},\ 27 + 2.306\,\frac{\sqrt{27.25}}{\sqrt{9}}\Big) = (22.99,\ 31.01).$$
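The estimates and the 90 percent interval can be reproduced with a few lines; a sketch (the nine sample values and the critical value $t_{0.05}(8) = 1.860$ are those used in the text above):

```python
import math

data = [21, 23, 32, 20, 36, 27, 26, 28, 30]
n = len(data)
xbar = sum(data) / n                                # 27
s2 = sum((v - xbar)**2 for v in data) / (n - 1)     # 27.25
s = math.sqrt(s2)

t05 = 1.860   # t_{0.05}(8), for the 90 percent interval
lo90 = xbar - t05 * s / math.sqrt(n)    # ~ 23.76
hi90 = xbar + t05 * s / math.sqrt(n)    # ~ 30.24
```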
(3) We test the null hypothesis $H_0\colon \mu = 24$ against the alternative hypothesis $H_1\colon \mu > 24$ at the significance levels 0.10 and 0.05. The distribution of $\bar{X}$ is:

$$\frac{\bar{X} - \mu}{S/\sqrt{n}} \sim t(n-1).$$

Therefore, under the null hypothesis $H_0\colon \mu = \mu_0$, we obtain:

$$\frac{\bar{X} - \mu_0}{S/\sqrt{n}} \sim t(n-1).$$

Note that $\mu$ is replaced by $\mu_0$. For the alternative hypothesis $H_1\colon \mu > \mu_0$, since we have:

$$P\Big(\frac{\bar{X} - \mu_0}{S/\sqrt{n}} > t_{\alpha}(n-1)\Big) = \alpha,$$

we reject the null hypothesis $H_0\colon \mu = \mu_0$ at the significance level $\alpha$ when we have:

$$\frac{\bar{x} - \mu_0}{s/\sqrt{n}} > t_{\alpha}(n-1).$$

Substitute $\bar{x} = 27$, $s^2 = 27.25$, $\mu_0 = 24$, $n = 9$, $t_{0.10}(8) = 1.397$ and $t_{0.05}(8) = 1.860$ into the above formula. Then, we obtain:

$$\frac{\bar{x} - \mu_0}{s/\sqrt{n}} = \frac{27 - 24}{\sqrt{27.25}/\sqrt{9}} = 1.724 > t_{0.10}(8) = 1.397.$$

Therefore, we reject the null hypothesis $H_0\colon \mu = 24$ at the significance level $\alpha = 0.10$. And we obtain:

$$\frac{\bar{x} - \mu_0}{s/\sqrt{n}} = 1.724 < t_{0.05}(8) = 1.860.$$

Therefore, the null hypothesis $H_0\colon \mu = 24$ is accepted at the significance level $\alpha = 0.05$.

14  The 16 samples $X_1, X_2, \ldots, X_{16}$ are randomly drawn from the normal population with mean $\mu$ and known variance $\sigma^2 = 2^2$. The sample average is given by $\bar{x} = 36$. Then, answer the following questions.

(1) Obtain the 95 percent confidence interval for $\mu$.
(2) Test the null hypothesis $H_0\colon \mu = 35$ against the alternative hypothesis $H_1\colon \mu = 36.5$ at the significance level 0.05.
(3) Compute the power of the test in the above question (2).

[Answer]

(1) We obtain the 95 percent confidence interval of $\mu$. The distribution of $\bar{X}$ is:

$$\frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \sim N(0, 1).$$

Therefore,

$$P\Big(\Big|\frac{\bar{X} - \mu}{\sigma/\sqrt{n}}\Big| < z_{\alpha/2}\Big) = 1 - \alpha,$$

where $z_{\alpha/2}$ denotes the $100\cdot\alpha/2$ percent point of the standard normal distribution, which is obtained given probability $\alpha$. Therefore,

$$P\Big(\bar{X} - z_{\alpha/2}\frac{\sigma}{\sqrt{n}} < \mu < \bar{X} + z_{\alpha/2}\frac{\sigma}{\sqrt{n}}\Big) = 1 - \alpha.$$

Replacing $\bar{X}$ by $\bar{x}$, the $100(1-\alpha)$ percent confidence interval of $\mu$ is:

$$\Big(\bar{x} - z_{\alpha/2}\frac{\sigma}{\sqrt{n}},\ \bar{x} + z_{\alpha/2}\frac{\sigma}{\sqrt{n}}\Big).$$

Substituting $\bar{x} = 36$, $\sigma = 2$, $n = 16$ and $z_{0.025} = 1.96$, the 95 percent confidence interval of $\mu$ is:

$$\Big(36 - 1.96\,\frac{2}{\sqrt{16}},\ 36 + 1.96\,\frac{2}{\sqrt{16}}\Big) = (35.02,\ 36.98).$$

(2) We test the null hypothesis $H_0\colon \mu = 35$ against the alternative hypothesis $H_1\colon \mu = 36.5$ at the significance level 0.05. The distribution of $\bar{X}$ is:

$$\frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \sim N(0, 1).$$

Under the null hypothesis $H_0\colon \mu = \mu_0$,

$$\frac{\bar{X} - \mu_0}{\sigma/\sqrt{n}} \sim N(0, 1).$$
For the alternative hypothesis $H_1\colon \mu > \mu_0$, we obtain:

$$P\Big(\frac{\bar{X} - \mu_0}{\sigma/\sqrt{n}} > z_{\alpha}\Big) = \alpha.$$

If we have:

$$\frac{\bar{x} - \mu_0}{\sigma/\sqrt{n}} > z_{\alpha},$$

the null hypothesis $H_0\colon \mu = \mu_0$ is rejected at the significance level $\alpha$. Substituting $\bar{x} = 36$, $\sigma = 2$, $n = 16$ and $z_{0.05} = 1.645$, we obtain:

$$\frac{\bar{x} - \mu_0}{\sigma/\sqrt{n}} = \frac{36 - 35}{2/\sqrt{16}} = 2 > z_{0.05} = 1.645.$$

The null hypothesis $H_0\colon \mu = 35$ is rejected at the significance level $\alpha = 0.05$.

(3) We compute the power of the test in question (2). The power of the test is the probability of rejecting the null hypothesis under the alternative hypothesis. That is, under the null hypothesis $H_0\colon \mu = \mu_0$, the region which rejects the null hypothesis is:

$$\bar{X} > \mu_0 + z_{\alpha}\,\sigma/\sqrt{n},$$

because $P\big(\frac{\bar{X} - \mu_0}{\sigma/\sqrt{n}} > z_{\alpha}\big) = \alpha$. We compute the probability of rejecting the null hypothesis under the alternative hypothesis $H_1\colon \mu = \mu_1$. That is, under $H_1\colon \mu = \mu_1$, the following probability is known as the power of the test:

$$P\big(\bar{X} > \mu_0 + z_{\alpha}\,\sigma/\sqrt{n}\big).$$

Under the alternative hypothesis $H_1\colon \mu = \mu_1$, we have:

$$\frac{\bar{X} - \mu_1}{\sigma/\sqrt{n}} \sim N(0, 1).$$

Therefore, we want to compute the following probability:

$$P\Big(\frac{\bar{X} - \mu_1}{\sigma/\sqrt{n}} > \frac{\mu_0 - \mu_1}{\sigma/\sqrt{n}} + z_{\alpha}\Big).$$
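This probability is easy to evaluate numerically; a sketch (not from the original text), computing the standard normal cdf $\Phi$ from the error function:

```python
import math

# Power of the test in Problem 14: P(Z > (mu0 - mu1)/(sigma/sqrt(n)) + z_alpha)
# under H1, for Z ~ N(0,1).
def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu0, mu1, sigma, n, z_alpha = 35.0, 36.5, 2.0, 16, 1.645
shift = (mu0 - mu1) / (sigma / math.sqrt(n)) + z_alpha   # -1.355
power = 1.0 - Phi(shift)                                 # ~ 0.912
```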
Substituting $\sigma = 2$, $n = 16$, $\mu_0 = 35$, $\mu_1 = 36.5$ and $z_{\alpha} = 1.645$, we obtain:

$$P\Big(\frac{\bar{X} - \mu_1}{\sigma/\sqrt{n}} > \frac{35 - 36.5}{2/\sqrt{16}} + 1.645\Big) = P\Big(\frac{\bar{X} - \mu_1}{\sigma/\sqrt{n}} > -1.355\Big) = 1 - P\Big(\frac{\bar{X} - \mu_1}{\sigma/\sqrt{n}} > 1.355\Big) \approx 1 - 0.0877 = 0.9123.$$

Note that $P(Z > 1.35) = 0.0885$ and $P(Z > 1.36) = 0.0869$ for $Z \sim N(0,1)$, so that $P(Z > 1.355) \approx 0.0877$.

15  $X_1, X_2, \ldots, X_n$ are assumed to be mutually independent and distributed as Poisson random variables, where the Poisson distribution is given by:

$$P(X = x) = f(x; \lambda) = \frac{\lambda^x e^{-\lambda}}{x!}, \qquad x = 0, 1, 2, \ldots.$$

Then, answer the following questions.

(1) Obtain the maximum likelihood estimator of $\lambda$, which is denoted by $\hat\lambda$.
(2) Prove that $\hat\lambda$ is an unbiased estimator.
(3) Prove that $\hat\lambda$ is an efficient estimator.
(4) Prove that $\hat\lambda$ is a consistent estimator.

[Answer]

(1) We obtain the maximum likelihood estimator of $\lambda$, denoted by $\hat\lambda$. The likelihood function is:

$$l(\lambda) = \prod_{i=1}^n f(x_i; \lambda) = \prod_{i=1}^n \frac{\lambda^{x_i} e^{-\lambda}}{x_i!} = \frac{\lambda^{\sum_i x_i}\, e^{-n\lambda}}{\prod_{i=1}^n x_i!}.$$

The log-likelihood function is:

$$\log l(\lambda) = \Big(\sum_{i=1}^n x_i\Big)\log(\lambda) - n\lambda - \log\Big(\prod_{i=1}^n x_i!\Big).$$
The derivative of the log-likelihood function with respect to $\lambda$ is:

$$\frac{d\log l(\lambda)}{d\lambda} = \frac{1}{\lambda}\sum_{i=1}^n x_i - n = 0.$$

Solving the above equation, the maximum likelihood estimator $\hat\lambda$ is:

$$\hat\lambda = \frac{1}{n}\sum_{i=1}^n X_i = \bar{X}.$$

(2) We prove that $\hat\lambda$ is an unbiased estimator of $\lambda$:

$$E(\hat\lambda) = E\Big(\frac{1}{n}\sum_{i=1}^n X_i\Big) = \frac{1}{n}\sum_{i=1}^n E(X_i) = \frac{1}{n}\,n\lambda = \lambda.$$

(3) We prove that $\hat\lambda$ is an efficient estimator of $\lambda$, where we show that the equality holds in the Cramér–Rao inequality. First, we obtain $V(\hat\lambda)$ as:

$$V(\hat\lambda) = V\Big(\frac{1}{n}\sum_{i=1}^n X_i\Big) = \frac{1}{n^2}\sum_{i=1}^n V(X_i) = \frac{1}{n^2}\,n\lambda = \frac{\lambda}{n}.$$

The Cramér–Rao lower bound is given by:

$$\frac{1}{nE\Big(\big(\frac{\partial\log f(X;\lambda)}{\partial\lambda}\big)^2\Big)} = \frac{1}{nE\Big(\big(\frac{\partial}{\partial\lambda}(X\log\lambda - \lambda - \log X!)\big)^2\Big)} = \frac{1}{nE\Big(\big(\frac{X}{\lambda} - 1\big)^2\Big)} = \frac{\lambda^2}{nE\big((X-\lambda)^2\big)} = \frac{\lambda^2}{nV(X)} = \frac{\lambda^2}{n\lambda} = \frac{\lambda}{n}.$$

Therefore,

$$V(\hat\lambda) = \frac{1}{nE\Big(\big(\frac{\partial\log f(X;\lambda)}{\partial\lambda}\big)^2\Big)}.$$

That is, $V(\hat\lambda)$ is equal to the lower bound of the Cramér–Rao inequality. Therefore, $\hat\lambda$ is efficient.
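Unbiasedness and attainment of the Cramér–Rao bound can be checked by replication; a sketch (not from the original text), with illustrative $\lambda$ and $n$:

```python
import numpy as np

# Monte Carlo check of Problem 15 (2)-(3): lambda_hat = Xbar should have
# mean lambda and variance lambda/n, the Cramer-Rao lower bound.
rng = np.random.default_rng(8)
lam, n, reps = 3.0, 500, 20_000
lam_hat = rng.poisson(lam, size=(reps, n)).mean(axis=1)

mean_hat = lam_hat.mean()   # ~ lambda = 3
var_hat = lam_hat.var()     # ~ lambda/n = 0.006
```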
(4) We show that $\hat\lambda$ is a consistent estimator of $\lambda$. Note as follows:

$$E(\hat\lambda) = \lambda, \qquad V(\hat\lambda) = \frac{\lambda}{n}.$$

In Chebyshev's inequality:

$$P(|\hat\lambda - E(\hat\lambda)| \ge \epsilon) \le \frac{V(\hat\lambda)}{\epsilon^2},$$

$E(\hat\lambda)$ and $V(\hat\lambda)$ are substituted. Then, we have:

$$P(|\hat\lambda - \lambda| \ge \epsilon) \le \frac{\lambda}{n\epsilon^2} \longrightarrow 0 \quad (n \to \infty),$$

which implies that $\hat\lambda$ is consistent.

16  $X_1, X_2, \ldots, X_n$ are mutually independently distributed as normal random variables. Note that the normal density is:

$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{1}{2\sigma^2}(x-\mu)^2}.$$

Then, answer the following questions.

(1) Prove that the sample mean $\bar{X} = (1/n)\sum_{i=1}^n X_i$ is normally distributed with mean $\mu$ and variance $\sigma^2/n$.

(2) Define:

$$Z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}}.$$

Show that $Z$ is normally distributed with mean zero and variance one.

(3) Consider the sample unbiased variance:

$$S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X})^2.$$

The distribution of $(n-1)S^2/\sigma^2$ is known as a chi-square distribution with $n-1$ degrees of freedom. Obtain mean and variance of $S^2$. Note that a chi-square distribution with $m$ degrees of freedom is:

$$f(x) = \begin{cases} \dfrac{1}{2^{m/2}\Gamma(\frac{m}{2})}\, x^{\frac{m}{2}-1} e^{-\frac{x}{2}}, & \text{if } x > 0,\\ 0, & \text{otherwise.}\end{cases}$$
(4) Prove that $S^2$ is a consistent estimator of $\sigma^2$.

[Answer]

(1) The distribution of the sample mean $\bar{X} = (1/n)\sum_{i=1}^n X_i$ is derived using the moment-generating function. Note that for $X \sim N(\mu, \sigma^2)$ the moment-generating function $\phi(\theta)$ is:

$$\phi(\theta) = E(e^{\theta X}) = \int e^{\theta x} f(x)\,dx = \int \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{1}{2\sigma^2}(x-\mu)^2 + \theta x}\,dx = \int \frac{1}{\sqrt{2\pi\sigma^2}}\exp\Big(-\frac{1}{2\sigma^2}\big(x^2 - 2(\mu+\sigma^2\theta)x + \mu^2\big)\Big)\,dx$$
$$= \int \frac{1}{\sqrt{2\pi\sigma^2}}\exp\Big(-\frac{1}{2\sigma^2}\big(x - (\mu+\sigma^2\theta)\big)^2 + \mu\theta + \frac{1}{2}\sigma^2\theta^2\Big)\,dx = e^{\mu\theta + \frac{1}{2}\sigma^2\theta^2}\int \frac{1}{\sqrt{2\pi\sigma^2}}\exp\Big(-\frac{1}{2\sigma^2}\big(x - (\mu+\sigma^2\theta)\big)^2\Big)\,dx$$
$$= \exp\Big(\mu\theta + \frac{1}{2}\sigma^2\theta^2\Big).$$

In the integration above, the density of $N(\mu+\sigma^2\theta, \sigma^2)$ is utilized, whose integration is one. Therefore, we have, for each $X_i$:

$$\phi_i(\theta) = \exp\Big(\mu\theta + \frac{1}{2}\sigma^2\theta^2\Big).$$

Now, consider the moment-generating function of $\bar{X}$, denoted by $\phi_{\bar{x}}(\theta)$:

$$\phi_{\bar{x}}(\theta) = E(e^{\theta\bar{X}}) = E\Big(e^{\frac{\theta}{n}\sum_i X_i}\Big) = E\Big(\prod_{i=1}^n e^{\frac{\theta}{n}X_i}\Big) = \prod_{i=1}^n E\Big(e^{\frac{\theta}{n}X_i}\Big) = \prod_{i=1}^n \phi_i\Big(\frac{\theta}{n}\Big) = \Big(\exp\Big(\mu\frac{\theta}{n} + \frac{1}{2}\sigma^2\frac{\theta^2}{n^2}\Big)\Big)^n = \exp\Big(\mu\theta + \frac{\sigma^2}{2n}\theta^2\Big),$$

which is equivalent to the moment-generating function of the normal distribution with mean $\mu$ and variance $\sigma^2/n$.
(2) We derive the distribution of $Z = \dfrac{\bar{X} - \mu}{\sigma/\sqrt{n}}$. From question (1), the moment-generating function of $\bar{X}$ is $\phi_{\bar{x}}(\theta) = \exp\big(\mu\theta + \frac{\sigma^2}{2n}\theta^2\big)$. The moment-generating function of $Z$, denoted by $\phi_z(\theta)$, is:

$$\phi_z(\theta) = E(e^{\theta Z}) = E\Big(\exp\Big(\theta\,\frac{\bar{X}-\mu}{\sigma/\sqrt{n}}\Big)\Big) = \exp\Big(-\frac{\mu\theta}{\sigma/\sqrt{n}}\Big)\,E\Big(\exp\Big(\frac{\theta}{\sigma/\sqrt{n}}\bar{X}\Big)\Big) = \exp\Big(-\frac{\mu\theta}{\sigma/\sqrt{n}}\Big)\,\phi_{\bar{x}}\Big(\frac{\theta}{\sigma/\sqrt{n}}\Big)$$
$$= \exp\Big(-\frac{\mu\theta}{\sigma/\sqrt{n}}\Big)\exp\Big(\mu\,\frac{\theta}{\sigma/\sqrt{n}} + \frac{\sigma^2}{2n}\Big(\frac{\theta}{\sigma/\sqrt{n}}\Big)^2\Big) = \exp\Big(\frac{1}{2}\theta^2\Big),$$

which is the moment-generating function of $N(0, 1)$.

(3) First, as preliminaries, we derive mean and variance of the chi-square distribution with $m$ degrees of freedom. The chi-square distribution with $m$ degrees of freedom is:

$$f(x) = \frac{1}{2^{m/2}\Gamma(\frac{m}{2})}\, x^{\frac{m}{2}-1} e^{-\frac{x}{2}}, \qquad \text{if } x > 0.$$

Therefore, the moment-generating function $\phi_{\chi^2}(\theta)$ is:

$$\phi_{\chi^2}(\theta) = E(e^{\theta X}) = \int_0^{\infty} e^{\theta x}\,\frac{1}{2^{m/2}\Gamma(\frac{m}{2})}\, x^{\frac{m}{2}-1} e^{-\frac{x}{2}}\,dx = \int_0^{\infty} \frac{1}{2^{m/2}\Gamma(\frac{m}{2})}\, x^{\frac{m}{2}-1} e^{-\frac{1}{2}(1-2\theta)x}\,dx$$
$$= \int_0^{\infty} \frac{1}{2^{m/2}\Gamma(\frac{m}{2})}\Big(\frac{y}{1-2\theta}\Big)^{\frac{m}{2}-1} e^{-\frac{y}{2}}\,\frac{dy}{1-2\theta} = (1-2\theta)^{-m/2}.$$

In the fourth equality, use $y = (1-2\theta)x$. The first and second derivatives of the moment-generating function are:

$$\phi_{\chi^2}'(\theta) = m(1-2\theta)^{-\frac{m}{2}-1}, \qquad \phi_{\chi^2}''(\theta) = m(m+2)(1-2\theta)^{-\frac{m}{2}-2}.$$

Therefore, we obtain:

$$E(X) = \phi_{\chi^2}'(0) = m, \qquad E(X^2) = \phi_{\chi^2}''(0) = m(m+2).$$
Thus, for the chi-square distribution with $m$ degrees of freedom, mean is given by $m$ and variance is:

$$V(X) = E(X^2) - (E(X))^2 = m(m+2) - m^2 = 2m.$$

Therefore, using $(n-1)S^2/\sigma^2 \sim \chi^2(n-1)$, we have:

$$E\Big(\frac{(n-1)S^2}{\sigma^2}\Big) = n-1, \qquad V\Big(\frac{(n-1)S^2}{\sigma^2}\Big) = 2(n-1),$$

which implies:

$$\frac{n-1}{\sigma^2}E(S^2) = n-1, \qquad \Big(\frac{n-1}{\sigma^2}\Big)^2 V(S^2) = 2(n-1).$$

Finally, mean and variance of $S^2$ are:

$$E(S^2) = \sigma^2, \qquad V(S^2) = \frac{2\sigma^4}{n-1}.$$

(4) We show that $S^2$ is a consistent estimator of $\sigma^2$. Chebyshev's inequality is utilized, which is:

$$P(|S^2 - E(S^2)| \ge \epsilon) \le \frac{V(S^2)}{\epsilon^2}.$$

Substituting $E(S^2)$ and $V(S^2)$, we obtain:

$$P(|S^2 - \sigma^2| \ge \epsilon) \le \frac{2\sigma^4}{(n-1)\epsilon^2} \longrightarrow 0 \quad (n \to \infty).$$

Therefore, $S^2$ is consistent.
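The moments of $S^2$ derived in (3) can be checked by replication; a sketch (not from the original text), with illustrative $\sigma^2$ and $n$:

```python
import numpy as np

# Monte Carlo check of Problem 16 (3): for N(0, 4) samples of size n = 25,
# S^2 should have mean sigma^2 = 4 and variance 2 sigma^4/(n-1) = 32/24 = 4/3.
rng = np.random.default_rng(9)
n, reps = 25, 200_000
x = rng.normal(0.0, 2.0, size=(reps, n))
s2 = x.var(axis=1, ddof=1)   # unbiased sample variance per replication

mean_s2 = s2.mean()   # ~ 4
var_s2 = s2.var()     # ~ 4/3
```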