Mathematical Statistics Chapter Three. Point Estimation
3.4 Uniformly Minimum Variance Unbiased Estimator (UMVUE)

Criteria for Best Estimators: the MSE Criterion. Let $\mathcal{F} = \{p(x;\theta) : \theta \in \Theta\}$ be a parametric distribution family, $g(\theta)$ the parameter to be estimated, and $X = (X_1, X_2, \dots, X_n)$ a sample from $p(x;\theta)$.
Definition 3.4.1 Suppose $\hat g = \hat g(X)$ is an estimator for $g(\theta)$. Then $E_\theta[\hat g - g(\theta)]^2$ is called the mean squared error (MSE) of $\hat g$.
Suppose both $\hat g_1$ and $\hat g_2$ are estimators for $g(\theta)$, and
$$E_\theta[\hat g_1 - g(\theta)]^2 \le E_\theta[\hat g_2 - g(\theta)]^2, \quad \forall\,\theta \in \Theta.$$
Then $\hat g_2$ is said to be no better than $\hat g_1$ in terms of MSE. If, in addition, the inequality is strict for at least one $\theta \in \Theta$, then $\hat g_1$ is said to be better than $\hat g_2$.
Example. Let $X_1, X_2, \dots, X_n$ be a sample from a normal population $N(\mu, \sigma^2)$. Both
$$S_{*n}^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2 \quad\text{and}\quad S_n^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar X)^2$$
are estimators for $\sigma^2$, and $S_{*n}^2$ is an unbiased estimator. Compute their MSEs.
We know that $(n-1)S_{*n}^2/\sigma^2 \sim \chi^2(n-1)$. By the definition of $\chi^2(n)$, if $Y_1, \dots, Y_{n-1}$ are i.i.d. $N(0,1)$, then $\sum_{i=1}^{n-1} Y_i^2 \sim \chi^2(n-1)$.
Let $Y = AX$, where $A = (a_{ij})$ is an $n \times n$ orthogonal matrix whose first row is $(1/\sqrt n, \dots, 1/\sqrt n)$, so that $Y_1 = \sqrt n\,\bar X$. Then $(n-1)S_{*n}^2 = \sum_{i=2}^n Y_i^2$, where $Y_2, \dots, Y_n$ are i.i.d. $N(0, \sigma^2)$.
Hence,
$$\operatorname{Var}\{S_{*n}^2\} = \operatorname{Var}\Big\{\frac{\sigma^2}{n-1}\cdot\frac{(n-1)S_{*n}^2}{\sigma^2}\Big\} = \frac{\sigma^4}{(n-1)^2}\operatorname{Var}\Big\{\sum_{i=2}^n \frac{Y_i^2}{\sigma^2}\Big\} = \frac{\sigma^4}{n-1}\Big[E\Big(\frac{Y_2}{\sigma}\Big)^4 - \Big(E\Big(\frac{Y_2}{\sigma}\Big)^2\Big)^2\Big] = \frac{2\sigma^4}{n-1}.$$
The mean squared error of $S_{*n}^2$ is therefore
$$E\{S_{*n}^2 - \sigma^2\}^2 = \operatorname{Var}\{S_{*n}^2\} = \frac{2\sigma^4}{n-1}.$$
For $S_n^2$, since $S_n^2 = \frac{n-1}{n}S_{*n}^2$, we have $E S_n^2 = \frac{n-1}{n} E S_{*n}^2 = \frac{n-1}{n}\sigma^2$.
The MSE of $S_n^2$ is
$$E\{S_n^2 - \sigma^2\}^2 = \operatorname{Var}\{S_n^2\} + (E S_n^2 - \sigma^2)^2 = \Big(\frac{n-1}{n}\Big)^2\operatorname{Var}\{S_{*n}^2\} + \Big(\frac{1}{n}\sigma^2\Big)^2 = \Big(\frac{n-1}{n}\Big)^2\frac{2\sigma^4}{n-1} + \frac{\sigma^4}{n^2} = \frac{2n-1}{n^2}\sigma^4.$$
Hence $E\{S_n^2 - \sigma^2\}^2 < E\{S_{*n}^2 - \sigma^2\}^2$: in terms of MSE, $S_n^2$ is a better estimator for $\sigma^2$ than $S_{*n}^2$.
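As a sanity check, a short Monte Carlo sketch (not part of the original text; the sample size, true parameters, and NumPy implementation below are illustrative choices) confirms the ordering of the two mean squared errors.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, sigma2, reps = 10, 0.0, 4.0, 200_000

x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
xbar = x.mean(axis=1, keepdims=True)
ss = ((x - xbar) ** 2).sum(axis=1)

s2_unbiased = ss / (n - 1)   # S_{*n}^2, divisor n-1 (unbiased)
s2_biased = ss / n           # S_n^2, divisor n (moment estimator)

mse_unbiased = np.mean((s2_unbiased - sigma2) ** 2)
mse_biased = np.mean((s2_biased - sigma2) ** 2)

print(mse_unbiased, 2 * sigma2**2 / (n - 1))        # close to 2σ⁴/(n−1)
print(mse_biased, (2 * n - 1) * sigma2**2 / n**2)    # close to (2n−1)σ⁴/n², the smaller of the two
```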
Existence Question: Does there exist an estimator $\hat g$ for $g(\theta)$ that is as good as or better than every other estimator in terms of MSE? That is, is there a best estimator with the smallest MSE? The answer is NO.
In fact, if such a statistic $\hat g$ did exist, then for any given $\theta_0$ the constant $\hat g_2 \equiv g(\theta_0)$ is also an estimator for $g(\theta)$. Hence the following would have to hold:
$$E_\theta[\hat g - g(\theta)]^2 \le E_\theta[g(\theta_0) - g(\theta)]^2, \quad \forall\,\theta.$$
In particular, when $\theta = \theta_0$ the right-hand side vanishes, so $E_{\theta_0}[\hat g - g(\theta_0)]^2 = 0$ and therefore $P_{\theta_0}\{\hat g = g(\theta_0)\} = 1$. Since $\theta_0$ is arbitrary, $P_\theta\{\hat g = g(\theta)\} = 1$ for every $\theta$, that is, $\hat g = g(\theta)$ a.s.; this is impossible unless $g$ is constant, since the statistic $\hat g$ cannot depend on the unknown $\theta$.
Best Unbiased Estimator. As we noted, a comparison of estimators in terms of MSE may not yield a clear favorite; indeed, there is no single best MSE estimator. The reason is that the class of all estimators is too large. One way to make the problem of finding a best estimator tractable is to limit the class of estimators, and a popular way of restricting it is to consider only unbiased estimators.
Let
$$U_g = \{\hat g = \hat g(X) : E_\theta \hat g = g(\theta), \ \forall\,\theta\}$$
be the class of unbiased estimators for $g(\theta)$. The MSE of each statistic in $U_g$ is simply its variance. When $U_g \neq \emptyset$, we say that $g(\theta)$ is an estimable parameter.
One example of an estimable parameter is the population mean. However, not all parameters are estimable.

Example. Let $X = (X_1, X_2, \dots, X_n)$ be a sample of size $n$ from $b(1, p)$, $p$ unknown. Then $g(p) = 1/p$ is not estimable.
In fact, for any statistic $T(X)$, its expected value is
$$E_p[T(X)] = \sum_{x_i = 0,1;\, i=1,\dots,n} T(x)\, p^{\sum_i x_i}(1-p)^{n - \sum_i x_i}, \quad 0 < p < 1.$$
The right-hand side is a polynomial in $p$ of degree at most $n$, hence bounded on $(0,1)$, and it cannot equal the unbounded function $1/p$ on the interval $(0,1)$. Therefore, no unbiased estimator for $g(p) = 1/p$ exists.
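To make the boundedness argument concrete, the following sketch (an added illustration; the statistic $T$ and the values of $n$ and $p$ are arbitrary choices) evaluates $E_p[T(X)]$ by summing over all $2^n$ sample points and compares it with $1/p$ near $0$.

```python
import itertools
import numpy as np

n = 4
T = lambda x: (np.sum(x) + 1.0) ** 2   # an arbitrary statistic, chosen only for illustration

def expectation(p):
    # E_p[T(X)] = sum over all binary samples x of T(x) p^{sum x} (1-p)^{n - sum x}
    return sum(T(np.array(x)) * p ** sum(x) * (1 - p) ** (n - sum(x))
               for x in itertools.product([0, 1], repeat=n))

for p in [0.5, 0.1, 0.01, 0.001]:
    print(p, expectation(p), 1 / p)   # E_p[T] stays bounded on (0,1); 1/p does not
```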
Constructing unbiased estimators for a parameter $g(\theta)$ is not an easy task. Unbiased estimators may be found among existing estimators or obtained by modifying them. For instance, the unbiased estimator $S_{*n}^2$ for $\sigma^2$ is a slight modification of the moment estimator $S_n^2$.
Example. Let $X_1, \dots, X_n$ be a sample from a uniform distribution $U(0, \theta)$, $\theta > 0$. Find an unbiased estimator for $1/\theta$.

Solution: Consider an estimator $T(X_{(n)})$, a function of the sufficient statistic $X_{(n)} = \max_{1\le i\le n} X_i$. Its expected value is
$$E_\theta[T(X_{(n)})] = \int_0^\theta T(x)\,\frac{n x^{n-1}}{\theta^n}\,dx.$$
In order for $T(X_{(n)})$ to be an unbiased estimator for $1/\theta$, the following has to hold:
$$\frac{1}{\theta} = \int_0^\theta T(x)\,\frac{n x^{n-1}}{\theta^n}\,dx, \quad \forall\,\theta > 0.$$
That is,
$$\theta^{n-1} = \int_0^\theta T(x)\,n x^{n-1}\,dx, \quad \forall\,\theta > 0.$$
Differentiating both sides with respect to $\theta$ yields $(n-1)\theta^{n-2} = T(\theta)\,n\theta^{n-1}$, so $T(\theta) = \frac{n-1}{n\theta}$ and hence $T(x) = \frac{n-1}{nx}$. When $n \ge 2$, $T(X_{(n)}) = \frac{n-1}{nX_{(n)}}$ is an unbiased estimator for $1/\theta$.
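A quick simulation sketch (the values of $n$, $\theta$ and the replication count are assumed here for illustration) can confirm that $(n-1)/(nX_{(n)})$ has expectation $1/\theta$ when $n \ge 2$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta, reps = 5, 3.0, 200_000

x = rng.uniform(0.0, theta, size=(reps, n))
x_max = x.max(axis=1)             # sufficient statistic X_(n)
t_hat = (n - 1) / (n * x_max)     # T(X_(n)) = (n-1)/(n X_(n))

print(t_hat.mean(), 1 / theta)    # both should be close to 1/3
```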
When $n = 1$, this gives $T(X_{(n)}) \equiv 0$, and $E_\theta[T(X_{(n)})] = 0 \neq 1/\theta$. In this case there is no unbiased estimator for $1/\theta$.
Indeed, suppose $g(X)$ were an unbiased estimator for $1/\theta$. Then $T(X_{(n)}) = E[g(X) \mid X_{(n)}]$ would also be an unbiased estimator for $1/\theta$ (why? check it using the properties of conditional expectation), but the derivation above forces $T(X_{(n)}) \equiv 0$, a contradiction.
Definition. Let $\mathcal{F} = \{p(x;\theta) : \theta \in \Theta\}$ be a parametric distribution family, and $X$ a sample from $p(x;\theta)$. Suppose $g(\theta)$ is an estimable parameter and $U_g$ is the class of unbiased estimators for $g(\theta)$. If $\hat g^* = \hat g^*(X) \in U_g$ is an unbiased estimator such that, for all $\hat g = \hat g(X) \in U_g$,
$$\operatorname{Var}_\theta\{\hat g^*\} \le \operatorname{Var}_\theta\{\hat g\}, \quad \forall\,\theta \in \Theta,$$
then $\hat g^*$ is said to be a best unbiased estimator for $g(\theta)$, or a uniformly minimum variance unbiased estimator (UMVUE) for $g(\theta)$.
It is not easy to find or verify a UMVUE directly, but the Rao-Blackwell theorem provides an approach for finding one.
Theorem (Rao-Blackwell). Let $T = T(X)$ be a sufficient statistic for the parameter $\theta \in \Theta$, and let $\varphi(X)$ be an unbiased estimator for $g(\theta)$. Then $\hat g(T) = E\{\varphi(X) \mid T\}$ is also an unbiased estimator for $g(\theta)$, and
$$\operatorname{Var}_\theta\{\hat g(T)\} \le \operatorname{Var}_\theta\{\varphi(X)\}, \quad \forall\,\theta \in \Theta,$$
with equality if and only if $P_\theta\{\varphi(X) = \hat g(T)\} = 1$ for all $\theta \in \Theta$.
The theorem tells us: (1) taking the conditional expectation of an unbiased estimator given a sufficient statistic improves (does not increase) its variance; (2) a UMVUE must be a function of a sufficient statistic; (3) in our search for best unbiased estimators, we need consider only statistics that are functions of a sufficient statistic.
Proof: Since $T$ is a sufficient statistic, the conditional distribution of $X$ given $T$ does not depend on $\theta$. This implies that $\hat g(T) = E\{\varphi(X) \mid T\}$ does not depend on $\theta$, and hence it is a statistic. Moreover,
$$E_\theta\{\hat g(T)\} = E_\theta\big[E\{\varphi(X) \mid T\}\big] = E_\theta\{\varphi(X)\} = g(\theta).$$
Therefore, $\hat g(T)$ is an unbiased estimator for $g(\theta)$.
To prove the second part of the Rao-Blackwell theorem, write $\varphi = \varphi(X)$. Then
$$\begin{aligned}
\operatorname{Var}_\theta\{\varphi\} &= E_\theta\{\varphi - E_\theta(\varphi \mid T) + E_\theta(\varphi \mid T) - E_\theta(\varphi)\}^2 \\
&= E_\theta\{\varphi - E_\theta(\varphi \mid T)\}^2 + E_\theta\{E_\theta(\varphi \mid T) - E_\theta(\varphi)\}^2 + 2E_\theta\big\{[\varphi - E_\theta(\varphi \mid T)][E_\theta(\varphi \mid T) - E_\theta(\varphi)]\big\} \\
&= E_\theta\{\varphi - \hat g(T)\}^2 + \operatorname{Var}_\theta\{\hat g(T)\} + 2E_\theta\big\{[\varphi - E_\theta(\varphi \mid T)][E_\theta(\varphi \mid T) - E_\theta(\varphi)]\big\}.
\end{aligned}$$
Notice that
$$\begin{aligned}
E_\theta\big\{[\varphi - E_\theta(\varphi \mid T)][E_\theta(\varphi \mid T) - E_\theta(\varphi)]\big\}
&= E_\theta\Big(E_\theta\big\{[\varphi - E_\theta(\varphi \mid T)][E_\theta(\varphi \mid T) - E_\theta(\varphi)] \mid T\big\}\Big) \\
&= E_\theta\Big(E_\theta\big\{\varphi - E_\theta(\varphi \mid T) \mid T\big\}\,[E_\theta(\varphi \mid T) - E_\theta(\varphi)]\Big) \\
&= E_\theta\Big([E_\theta(\varphi \mid T) - E_\theta(\varphi \mid T)]\,[E_\theta(\varphi \mid T) - E_\theta(\varphi)]\Big) = 0.
\end{aligned}$$
We therefore have
$$\operatorname{Var}_\theta\{\varphi\} = E_\theta\{\varphi - \hat g(T)\}^2 + \operatorname{Var}_\theta\{\hat g(T)\} \ge \operatorname{Var}_\theta\{\hat g(T)\}, \quad \forall\,\theta \in \Theta.$$
Equality holds if and only if $E_\theta\{\varphi - \hat g(T)\}^2 = 0$, that is, $P_\theta\{\varphi = \hat g(T)\} = 1$.
Theorem. Suppose $T = T(X)$ is a sufficient statistic for the parameter $\theta \in \Theta$, and $\varphi(X) = (\varphi_1(X), \dots, \varphi_k(X))$ is an unbiased estimator for the $\mathbb{R}^k$-valued parameter $g(\theta)$. Then
(1) $\hat g(T) = E\{\varphi(X) \mid T\}$ is also an unbiased estimator for $g(\theta)$;
(2) Let $V(\theta) = \{\operatorname{Cov}_\theta[\hat g_i(T), \hat g_j(T)]\}_{1\le i,j\le k}$ and $U(\theta) = \{\operatorname{Cov}_\theta[\varphi_i(X), \varphi_j(X)]\}_{1\le i,j\le k}$ be the covariance matrices of $\hat g(T)$ and $\varphi(X)$, respectively. Then $U(\theta) - V(\theta)$ is non-negative definite for all $\theta \in \Theta$, and $U(\theta) - V(\theta) = O$ (the zero matrix) if and only if $P_\theta\{\varphi(X) = \hat g(T)\} = 1$ for all $\theta \in \Theta$.
Example. Let $X_1, X_2, \dots, X_n$ be a sample from a Bernoulli distribution $b(1,p)$. Discuss unbiased estimators for $p$.

We know that $EX_1 = p$, so $X_1$ is an unbiased estimator for $p$. Moreover, $T = X_1 + \cdots + X_n$ is a sufficient statistic. Now we use conditional expectation to improve the unbiased estimator. By symmetry,
$$E(X_1 \mid T) = E(X_2 \mid T) = \cdots = E(X_n \mid T).$$
Thus,
$$E(X_1 \mid T) = \frac{E(X_1 \mid T) + E(X_2 \mid T) + \cdots + E(X_n \mid T)}{n} = \frac{E(X_1 + X_2 + \cdots + X_n \mid T)}{n} = \frac{E(T \mid T)}{n} = \frac{T}{n} = \bar X.$$
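The variance reduction promised by the Rao-Blackwell theorem is easy to see numerically. The sketch below (parameter values chosen for illustration) compares the crude unbiased estimator $X_1$ with its Rao-Blackwellized version $\bar X = E(X_1 \mid T)$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, reps = 20, 0.3, 200_000

x = rng.binomial(1, p, size=(reps, n))
crude = x[:, 0].astype(float)      # X_1: unbiased but noisy
rao_blackwell = x.mean(axis=1)     # E(X_1 | T) = T/n = X̄

print(crude.mean(), rao_blackwell.mean(), p)               # both are unbiased for p
print(crude.var(), rao_blackwell.var(), p * (1 - p) / n)   # variance drops from p(1-p) to p(1-p)/n
```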
To see when an unbiased estimator is best unbiased, we might ask how we could improve upon a given unbiased estimator. Suppose that $W$ satisfies $E_\theta W = \tau(\theta)$, and we have another estimator $U$ such that $E_\theta U = 0$ for all $\theta$; that is, $U$ is an unbiased estimator of $0$. The estimator $V_a = W + aU$, where $a$ is a constant, satisfies $E_\theta V_a = \tau(\theta)$ and hence is also an unbiased estimator of $\tau(\theta)$.
Can $V_a$ be better than $W$? The variance of $V_a$ is
$$\operatorname{Var}_\theta V_a = \operatorname{Var}_\theta(W + aU) = \operatorname{Var}_\theta W + 2a\operatorname{Cov}_\theta(W, U) + a^2 \operatorname{Var}_\theta U.$$
If $\operatorname{Cov}_\theta(W, U) \neq 0$ for some $\theta$, then a suitable choice of $a$ makes $\operatorname{Var}_\theta V_a < \operatorname{Var}_\theta W$, so $W$ can be improved. Thus the relationship between $W$ and the unbiased estimators of $0$ is crucial in evaluating whether $W$ is best unbiased. This relationship, in fact, characterizes best unbiasedness.
Unbiased Estimator of Zero

Theorem 3.4.1 Suppose $\hat g = \hat g(X)$ is an unbiased estimator for $g(\theta)$, and $\operatorname{Var}_\theta(\hat g) < \infty$ for all $\theta \in \Theta$. Let $L = \{l = l(X) : E_\theta(l(X)) = 0, \ \forall\,\theta \in \Theta\}$ be the set of all unbiased estimators of $0$. If for every $l \in L$,
$$\operatorname{Cov}_\theta(\hat g, l) = E_\theta(\hat g\, l) = 0, \quad \forall\,\theta \in \Theta, \qquad (3.4.2)$$
then $\hat g$ is the UMVUE for $g(\theta)$.
Example. Let $X_1, X_2, \dots, X_n$ be a sample from a Bernoulli distribution $b(1,p)$. Show that $\bar X$ is the UMVUE for $p$.

Proof: Notice that $E_p(\bar X) = p$ and $\operatorname{Var}_p(\bar X) = p(1-p)/n < \infty$. By Theorem 3.4.1, to prove that $\bar X$ is the UMVUE for $p$ we only need to show that $\bar X$ satisfies (3.4.2).
Now let $l = l(X)$ be an unbiased estimator of $0$. Then
$$0 = E_p(l) = \sum_{x_i=0,1;\,i=1,\dots,n} l(x)\, p^{\sum_i x_i}(1-p)^{n-\sum_i x_i}, \quad 0 < p < 1.$$
Let $\varphi = \frac{p}{1-p}$. Dividing by $(1-p)^n$, we have
$$0 = \sum_{x_i=0,1;\,i=1,\dots,n} l(x)\, \varphi^{\sum_i x_i}, \quad \varphi > 0. \qquad (*)$$
Differentiating both sides of $(*)$ with respect to $\varphi$ yields
$$0 = \sum_{x_i=0,1;\,i=1,\dots,n} l(x)\Big(\sum_{i=1}^n x_i\Big)\varphi^{\sum_i x_i - 1}, \quad \varphi > 0.$$
Multiplying by $\varphi$,
$$0 = \sum_{x_i=0,1;\,i=1,\dots,n} l(x)\Big(\sum_{i=1}^n x_i\Big)\varphi^{\sum_i x_i}, \quad \varphi > 0.$$
This implies that $\operatorname{Cov}_p(\bar X, l(X)) = E_p(\bar X\, l(X)) = 0$, so (3.4.2) holds. We conclude that $\bar X$ is the UMVUE for $p$.
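For a concrete unbiased estimator of zero one can take, say, $l(X) = X_1 - X_2$; the sketch below (an added illustration with assumed parameter values) checks numerically that $\bar X$ is uncorrelated with it, as (3.4.2) requires.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, reps = 10, 0.4, 500_000

x = rng.binomial(1, p, size=(reps, n))
xbar = x.mean(axis=1)
l = x[:, 0] - x[:, 1]       # an unbiased estimator of 0

print(l.mean())              # close to 0: l is unbiased for 0
print(np.mean(xbar * l))     # close to 0: E_p(X̄ l) = 0, i.e. (3.4.2) holds
```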
Complete Sufficient Statistics

Definition. Let $\mathcal{F} = \{p(x;\theta) : \theta \in \Theta\}$ be a parametric distribution family and $T$ a statistic. If for every real-valued function $\varphi(t)$,
$$E_\theta \varphi(T) = 0, \ \forall\,\theta \in \Theta \quad\text{implies}\quad P_\theta\{\varphi(T) = 0\} = 1, \ \forall\,\theta \in \Theta,$$
that is, if the induced distribution family $\mathcal{F}^T = \{p^T(t;\theta) : \theta \in \Theta\}$ of $T$ is complete, then $T$ is said to be a complete statistic.
Remark: Completeness of $T$ means that, among functions of $T$, $0$ is the only unbiased estimator of $0$.
The Rao-Blackwell theorem shows that if $T$ is sufficient, then a UMVUE for $g(\theta)$ can be sought among functions of $T$. Completeness says that if $T$ is, in addition, complete, then among functions of $T$ only $0$ is an unbiased estimator of $0$, and $0$ is of course uncorrelated with any statistic of the form $\varphi(T)$. Therefore, if $T$ is sufficient and complete, then as long as $\varphi(X)$ is unbiased, $E[\varphi(X) \mid T]$ is the UMVUE.
Theorem 3.4.2 (Lehmann-Scheffé) Suppose $S = S(X)$ is a complete sufficient statistic for $\theta$. Then there is a unique UMVUE for the estimable parameter $g(\theta)$: if $\varphi(X)$ is an unbiased estimator for $g(\theta)$, then $E[\varphi(X) \mid S]$ is the unique UMVUE for $g(\theta)$.
The Lehmann-Scheffé Theorem represents a major achievement in mathematical statistics, tying together sufficiency, completeness, and uniqueness.
The theorem also provides two ways of finding a UMVUE.
1. Method A.
(1) Construct a sufficient and complete statistic $S$;
(2) Find an unbiased estimator $\varphi$ for the estimable parameter;
(3) Compute the conditional expectation $E[\varphi \mid S]$; then $E[\varphi \mid S]$ is the UMVUE.
2. Method B.
(1) First find a sufficient and complete statistic $S$;
(2) Next find a function $g(S)$ of $S$ such that $g(S)$ is an unbiased estimator;
(3) Then $g(S)$ is the UMVUE.
For example: Let $X = (X_1, X_2, \dots, X_n)$ be a sample from the Bernoulli family $\{b(1,p) : 0 < p < 1\}$. We showed that $T = \sum_{i=1}^n X_i$ is a sufficient and complete statistic for $p$. We want to derive a UMVUE for $p^2$.
Knowing that $X_1$ is an unbiased estimator for $p$, we have $E[X_1 \mid T] = T/n = \bar X$. Hence $\bar X$ is the UMVUE for $p$.
Next we want to find the UMVUE for $p^2$. Since
$$ET^2 = \sum_{i=1}^n EX_i^2 + \sum_{i \neq j} EX_i\, EX_j = ET + n(n-1)p^2,$$
solving for $p^2$ gives
$$p^2 = \frac{E[T^2 - T]}{n(n-1)}.$$
Hence
$$\frac{T^2 - T}{n(n-1)} = \frac{1}{n-1}\big[n\bar X^2 - \bar X\big]$$
is the UMVUE for $p^2$.
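A simulation sketch (parameter values assumed here for illustration) confirms that $(T^2 - T)/(n(n-1))$ is unbiased for $p^2$.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, reps = 15, 0.35, 500_000

t = rng.binomial(n, p, size=reps).astype(float)   # T = X_1 + ... + X_n ~ b(n, p)
umvue_p2 = (t ** 2 - t) / (n * (n - 1))

print(umvue_p2.mean(), p ** 2)   # both should be close to 0.1225
```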
Proof of the Lehmann-Scheffé theorem.
Uniqueness: Suppose both $W = W(X)$ and $Y = Y(X)$ are UMVUEs for $g(\theta)$. Then
$$E_\theta W = E_\theta Y = g(\theta), \qquad \operatorname{Var}_\theta(W) = \operatorname{Var}_\theta(Y), \quad \forall\,\theta.$$
Notice that
$$E_\theta(W - Y)^2 = \operatorname{Var}_\theta(W - Y) = \operatorname{Var}_\theta(W) + \operatorname{Var}_\theta(Y) - 2\operatorname{Cov}_\theta(W, Y) = 2\operatorname{Var}_\theta(W) - 2\operatorname{Cov}_\theta(W, Y).$$
If we can show that $\operatorname{Cov}_\theta(W, Y) = \operatorname{Var}_\theta(W) = \operatorname{Var}_\theta(Y)$, then $E_\theta(W - Y)^2 = 0$, which proves $P_\theta(W = Y) = 1$ for all $\theta$.
To show $\operatorname{Cov}_\theta(W, Y) = \operatorname{Var}_\theta(W)$, notice that $Z = (W + Y)/2$ is also an unbiased estimator for $g(\theta)$.
Since $W$ is a UMVUE, we deduce
$$\begin{aligned}
\operatorname{Var}_\theta(W) \le \operatorname{Var}_\theta(Z) &= \tfrac14\operatorname{Var}_\theta(W) + \tfrac14\operatorname{Var}_\theta(Y) + \tfrac12\operatorname{Cov}_\theta(W, Y) \\
&\le \tfrac14\operatorname{Var}_\theta(W) + \tfrac14\operatorname{Var}_\theta(Y) + \tfrac12[\operatorname{Var}_\theta(W)]^{1/2}[\operatorname{Var}_\theta(Y)]^{1/2} \quad\text{(Cauchy-Schwarz inequality)} \\
&= \operatorname{Var}_\theta(W).
\end{aligned}$$
Hence all the inequalities are equalities, and $\operatorname{Cov}_\theta\{W, Y\} = \operatorname{Var}_\theta\{W\} = \operatorname{Var}_\theta\{Y\}$.
Existence: Suppose $\varphi = \varphi(X)$ is an unbiased estimator for an estimable parameter $g(\theta)$, and set $\hat g = \hat g(X) = E[\varphi \mid S]$. By the Rao-Blackwell theorem, $\hat g$ is also an unbiased estimator for $g(\theta)$.
Now we show that $\hat g$ is the UMVUE. Let $f = f(X)$ be an arbitrary unbiased estimator for $g(\theta)$. The Rao-Blackwell theorem implies that $E[f \mid S]$ is an unbiased estimator for $g(\theta)$ as well, and moreover $\operatorname{Var}_\theta\{E[f \mid S]\} \le \operatorname{Var}_\theta\{f\}$.
Since both $\hat g$ and $E[f \mid S]$ are unbiased estimators for $g(\theta)$ and both are functions of $S$,
$$E_\theta\{\hat g - E[f \mid S]\} = 0, \quad \forall\,\theta.$$
Because $S$ is complete, $P_\theta\{\hat g = E[f \mid S]\} = 1$. Therefore,
$$\operatorname{Var}_\theta\{\hat g\} = \operatorname{Var}_\theta\{E[f \mid S]\} \le \operatorname{Var}_\theta\{f\},$$
which shows that $\hat g$ is the UMVUE for $g(\theta)$.
Example. Let $X_1, X_2, \dots, X_n$ be a sample from a Poisson distribution with parameter $\lambda$, $\lambda > 0$. Derive the UMVUE for the probability $P_\lambda(k) = \frac{\lambda^k}{k!}e^{-\lambda}$.
Solution: The joint pmf of the sample $X_1, X_2, \dots, X_n$ is
$$P(X_1 = x_1, X_2 = x_2, \dots, X_n = x_n) = e^{-n\lambda}\,\lambda^{x_1 + x_2 + \cdots + x_n}\,\frac{1}{x_1! x_2! \cdots x_n!}\, I\{x_i = 0, 1, \dots;\ i = 1, 2, \dots, n\}.$$
By the factorization theorem, $T_n = \sum_{i=1}^n X_i$ is a sufficient statistic.
Now we prove that it is complete. Since $T_n \sim P(n\lambda)$, if
$$E_\lambda f(T_n) = e^{-n\lambda}\sum_{t=0}^\infty f(t)\frac{(n\lambda)^t}{t!} = 0, \quad \forall\,\lambda > 0,$$
then, writing $x = n\lambda$,
$$\sum_{t=0}^\infty f(t)\frac{x^t}{t!} = 0, \quad \forall\,x > 0,$$
which implies that every coefficient of this power series must be zero, that is, $f(t)\frac{1}{t!} = 0$ for $t = 0, 1, 2, \dots$. Thus $f(t) = 0$ for all $t = 0, 1, 2, \dots$, and $P\{f(T_n) = 0\} = 1$. Therefore, $T_n$ is complete.
It is trivial that the statistic
$$\varphi_k(X) = \begin{cases} 1, & X_1 = k, \\ 0, & X_1 \neq k \end{cases}$$
is an unbiased estimator for $P_\lambda(k)$.
$$\begin{aligned}
E_\lambda[\varphi_k(X) \mid T_n = t] &= P(X_1 = k \mid T_n = t) = \frac{P(X_1 = k, T_n = t)}{P(T_n = t)} = \frac{P(X_1 = k)\,P(X_2 + \cdots + X_n = t - k)}{P(T_n = t)} \\
&= \frac{\frac{\lambda^k}{k!}e^{-\lambda}\cdot\frac{[(n-1)\lambda]^{t-k}}{(t-k)!}e^{-(n-1)\lambda}}{\frac{(n\lambda)^t}{t!}e^{-n\lambda}} = \binom{t}{k}\Big(\frac{1}{n}\Big)^k\Big(1 - \frac{1}{n}\Big)^{t-k}.
\end{aligned}$$
We conclude that
$$\hat P_\lambda(k) = \binom{T_n}{k}\Big(\frac{1}{n}\Big)^k\Big(1 - \frac{1}{n}\Big)^{T_n - k}$$
is the UMVUE for $P_\lambda(k)$. In particular, when $k = 0$, we have
$$\hat P_\lambda(0) = \Big(1 - \frac{1}{n}\Big)^{T_n}.$$
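The sketch below (illustrative values of $\lambda$, $n$ and $k$ are assumed) checks unbiasedness of this formula against the true Poisson probability.

```python
import numpy as np
from math import comb, exp, factorial

rng = np.random.default_rng(5)
n, lam, k, reps = 8, 2.0, 3, 300_000

t = rng.poisson(n * lam, size=reps)   # T_n = X_1 + ... + X_n ~ Poisson(nλ)
umvue = np.array([comb(tt, k) * (1 / n) ** k * (1 - 1 / n) ** (tt - k) for tt in t])

true_prob = lam ** k / factorial(k) * exp(-lam)
print(umvue.mean(), true_prob)        # the two values should agree closely
```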
Example. Let $X_1, X_2, \dots, X_n$ be a sample from a uniform distribution $U(0, \theta)$, $\theta > 0$. Find the UMVUE for $\theta$, and compare the efficiency of the UMVUE and the moment estimator.

Solution: First we need a sufficient and complete statistic. Knowing that $T = X_{(n)}$ is a sufficient statistic, we prove that it is also complete. The density function of $X_{(n)}$ is
$$p(t; \theta) = \frac{n}{\theta^n}t^{n-1}, \quad 0 < t < \theta.$$
If E θ ϕ(x (n) ) = then θ 0 θ 0 ϕ(t) n θ n tn 1 dt = 0, θ > 0, ϕ(t)t n 1 dt = 0, θ > 0. Taking derivative with respect to θ yields ϕ(θ)θ n 1 = 0, θ > 0, that is, ϕ(t) = 0, t > 0. Thus, P(ϕ(X (n) ) = 0) = 1 is complete.
Notice that
$$EX_{(n)} = \int_0^\theta t\,\frac{n}{\theta^n}t^{n-1}\,dt = \frac{n}{n+1}\theta.$$
Hence $\hat\theta = \big(1 + \frac{1}{n}\big)X_{(n)}$ is an unbiased estimator for $\theta$. Since it is a function of a sufficient and complete statistic, it is the UMVUE for $\theta$.
Next we compute the variance of $\hat\theta$:
$$EX_{(n)}^2 = \int_0^\theta t^2\,\frac{n}{\theta^n}t^{n-1}\,dt = \frac{n}{n+2}\theta^2,$$
$$\operatorname{Var}(\hat\theta) = E\hat\theta^2 - \theta^2 = \Big(1 + \frac{1}{n}\Big)^2\frac{n}{n+2}\theta^2 - \theta^2 = \frac{\theta^2}{n(n+2)}.$$
Since $E\bar X = \theta/2$, i.e. $\theta = 2E\bar X$, the moment estimator for $\theta$ is $\hat\theta_1 = 2\bar X$, with variance
$$\operatorname{Var}(\hat\theta_1) = 4\operatorname{Var}(\bar X) = 4\cdot\frac{\theta^2}{12n} = \frac{\theta^2}{3n}.$$
Clearly the UMVUE is better (more efficient) than the moment estimator: its variance is of order $1/n^2$ rather than $1/n$.
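A brief simulation (values chosen only for illustration) makes the efficiency gap visible: the variance of $(1 + 1/n)X_{(n)}$ decays like $1/n^2$, while that of $2\bar X$ decays only like $1/n$.

```python
import numpy as np

rng = np.random.default_rng(6)
n, theta, reps = 20, 5.0, 300_000

x = rng.uniform(0.0, theta, size=(reps, n))
umvue = (1 + 1 / n) * x.max(axis=1)   # (1 + 1/n) X_(n)
moment = 2 * x.mean(axis=1)           # 2 X̄

print(umvue.mean(), moment.mean(), theta)        # both estimators are unbiased
print(umvue.var(), theta**2 / (n * (n + 2)))     # close to θ²/(n(n+2))
print(moment.var(), theta**2 / (3 * n))          # close to θ²/(3n), much larger
```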
Recall: Theorem. Suppose the population distribution belongs to an exponential family, so that the joint pdf or pmf of a sample from this population is
$$p(x; \theta) = c(\theta)\exp\Big\{\sum_{j=1}^k Q_j(\theta)T_j(x)\Big\}h(x).$$
If the range of $Q = (Q_1(\theta), \dots, Q_k(\theta))$ has non-empty interior, then $T = (T_1, \dots, T_k)$ is a sufficient and complete statistic.
Example. Let $X_1, X_2, \dots, X_n$ be a sample from a normal population $N(\mu, \sigma^2)$. Find the UMVUE for $\mu$ and $\sigma^2$.
Solution: The joint density function of $X = (X_1, X_2, \dots, X_n)$ is
$$\begin{aligned}
p(x; \theta) &= \frac{1}{(2\pi\sigma^2)^{n/2}}\exp\Big\{-\frac{1}{2\sigma^2}\sum_i (x_i - \mu)^2\Big\} \\
&= \frac{1}{(2\pi\sigma^2)^{n/2}}\exp\Big\{-\frac{n\mu^2}{2\sigma^2}\Big\}\exp\Big\{\Big[\frac{\mu}{\sigma^2}\Big]\sum_i x_i + \Big[-\frac{1}{2\sigma^2}\Big]\sum_i x_i^2\Big\}.
\end{aligned}$$
It is clear that the interior of the range of $(\mu/\sigma^2, -1/(2\sigma^2))$ is non-empty. Hence $\big(\sum_{i=1}^n X_i, \sum_{i=1}^n X_i^2\big)$ is sufficient and complete, and so is any one-to-one transformation of it. Furthermore, $\bar X$ and $S_{*n}^2$ are unbiased estimators for $\mu$ and $\sigma^2$, respectively, and both are functions of $\big(\sum_{i=1}^n X_i, \sum_{i=1}^n X_i^2\big)$. Hence $\bar X$ and $S_{*n}^2$ are the UMVUEs for $\mu$ and $\sigma^2$, respectively.
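As a simple check (parameter values assumed here for illustration), the unbiasedness of $\bar X$ and $S_{*n}^2$ can be verified by simulation.

```python
import numpy as np

rng = np.random.default_rng(7)
n, mu, sigma2, reps = 12, 1.5, 2.0, 300_000

x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
xbar = x.mean(axis=1)
s2 = x.var(axis=1, ddof=1)   # divisor n-1, i.e. S_{*n}^2

print(xbar.mean(), mu)       # close to 1.5
print(s2.mean(), sigma2)     # close to 2.0
```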
Example. Consider the Gamma distribution $\operatorname{gamma}(\alpha, \lambda)$, where $\alpha > 0$ is known. Find the UMVUE for $\lambda$.
Solution: The joint density of a sample $X = (X_1, \dots, X_n)$ from the Gamma distribution is
$$p(x; \lambda) = \frac{\lambda^{n\alpha}}{(\Gamma(\alpha))^n}\prod_{i=1}^n x_i^{\alpha-1}\exp\Big\{-\lambda\sum_{i=1}^n x_i\Big\}, \quad x_i > 0,\ i = 1, \dots, n.$$
By the properties of exponential families, when $\alpha$ is known, $T = \sum_{i=1}^n X_i$ is a sufficient and complete statistic for this distribution family. By the additivity of Gamma distributions, $T \sim \operatorname{gamma}(n\alpha, \lambda)$, and $E_\lambda T = \frac{n\alpha}{\lambda}$, that is, $\lambda = \frac{n\alpha}{E_\lambda T}$.
$$E_\lambda\Big[\frac{1}{T}\Big] = \int_0^\infty \frac{1}{t}\,p(t;\lambda)\,dt = \frac{\lambda^{n\alpha}}{\Gamma(n\alpha)}\int_0^\infty t^{n\alpha-1-1}e^{-\lambda t}\,dt = \frac{\lambda^{n\alpha}\,\Gamma(n\alpha-1)}{\Gamma(n\alpha)\,\lambda^{n\alpha-1}} = \frac{\lambda}{n\alpha-1}.$$
Thus,
$$E_\lambda\Big[\frac{n\alpha-1}{T}\Big] = \lambda.$$
Applying Theorem 3.4.2,
$$\hat\lambda = \frac{n\alpha-1}{\sum_{i=1}^n X_i} = \frac{n\alpha-1}{n\bar X}$$
is the UMVUE for $\lambda$. When $\alpha = 1$, $(n-1)/(n\bar X)$ is the UMVUE for $\lambda$ in the exponential distribution $E(\lambda)$.
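A simulation sketch (illustrative values of $\alpha$, $\lambda$ and $n$ are assumed) checks that $(n\alpha - 1)/(n\bar X)$ indeed has expectation $\lambda$, unlike the naive plug-in $n\alpha/(n\bar X)$.

```python
import numpy as np

rng = np.random.default_rng(8)
n, alpha, lam, reps = 10, 2.0, 1.5, 300_000

# X_i ~ gamma(alpha, lam) with rate lam, so NumPy's scale parameter is 1/lam
x = rng.gamma(alpha, 1 / lam, size=(reps, n))
t = x.sum(axis=1)                 # T = ΣX_i ~ gamma(nα, λ)

umvue = (n * alpha - 1) / t       # unbiased for λ
plugin = (n * alpha) / t          # biased: expectation is nαλ/(nα−1)

print(umvue.mean(), lam)                                  # close to 1.5
print(plugin.mean(), n * alpha * lam / (n * alpha - 1))   # close to 1.579, showing the bias
```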
Alternative solution: We look for an unbiased estimator among the functions $g(T)$ of $T$. Let $g(T)$ be a function of $T$ with
$$E_\lambda g(T) = \frac{\lambda^{n\alpha}}{\Gamma(n\alpha)}\int_0^\infty g(t)\,t^{n\alpha-1}e^{-\lambda t}\,dt = \lambda, \quad \forall\,\lambda > 0,$$
that is,
$$\frac{1}{\Gamma(n\alpha)}\int_0^\infty g(t)\,t^{n\alpha-1}e^{-\lambda t}\,dt = \lambda^{-n\alpha+1}, \quad \forall\,\lambda > 0.$$
Notice that
$$\lambda^{-n\alpha+1} = \frac{1}{\Gamma(n\alpha-1)}\int_0^\infty t^{(n\alpha-1)-1}e^{-\lambda t}\,dt = \frac{n\alpha-1}{\Gamma(n\alpha)}\int_0^\infty t^{(n\alpha-1)-1}e^{-\lambda t}\,dt, \quad \forall\,\lambda > 0.$$
Hence,
$$\frac{1}{\Gamma(n\alpha)}\int_0^\infty \Big[g(t) - \frac{n\alpha-1}{t}\Big]t^{n\alpha-1}e^{-\lambda t}\,dt = 0, \quad \forall\,\lambda > 0.$$
The uniqueness of the Laplace transform yields $g(t) = \frac{n\alpha-1}{t}$. We conclude that
$$\hat\lambda = \frac{n\alpha-1}{\sum_{i=1}^n X_i} = \frac{n\alpha-1}{n\bar X}$$
is the UMVUE for $\lambda$.