Estimation of parametric functions in Downton's bivariate exponential distribution


George Iliopoulos
Department of Mathematics, University of the Aegean, Karlovasi, Samos, Greece
geh@aegean.gr

Abstract

This paper considers estimation of the ratio of means and the regression function in Downton's (1970) bivariate exponential distribution. Unbiased estimators are given and, by presenting improved estimators, they are shown to be inadmissible in terms of mean squared error. The results are derived by conditioning on an unobserved random sample from a geometric distribution which provides conditional independence for the statistics involved.

AMS 2000 subject classifications: 62F10, 62C99.

Key words and phrases: Downton's bivariate exponential distribution, unbiased estimation, ratio of means, regression function, mean squared error, inadmissibility.

1 Introduction

One of the most important bivariate distributions in reliability theory is the bivariate exponential. There are various bivariate exponential distributions in the literature; a recent review can be found in the book of Kotz, Balakrishnan and Johnson (2000). In this paper we are interested in Downton's bivariate exponential distribution with probability

density function (pdf)
$$f(x, y; \lambda_1, \lambda_2, \rho) = \frac{\lambda_1\lambda_2}{1-\rho}\,\exp\left\{-\frac{\lambda_1 x + \lambda_2 y}{1-\rho}\right\} I_0\!\left(\frac{2(\rho\lambda_1\lambda_2 xy)^{1/2}}{1-\rho}\right), \qquad (1.1)$$
where $x, y, \lambda_1, \lambda_2 > 0$, $0 \le \rho < 1$, and $I_0(z) = \sum_{k=0}^\infty (z/2)^{2k}/(k!)^2$ is the modified Bessel function of the first kind of order zero. The above density was initially derived in a different form by Moran (1967). The form (1.1) was derived by Downton (1970) in a reliability context and is a special case of Kibble's (1941) bivariate gamma distribution.

Let $(X, Y)$ be an observation from (1.1). The marginal distributions of $X$ and $Y$ are exponential with means (scale parameters) $1/\lambda_1$, $1/\lambda_2$ respectively. Since $I_0(0) = 1$, it is clear that $X$ and $Y$ are independent if and only if $\rho = 0$. Downton (1970) showed that $\rho$ is the correlation coefficient of the two variates. By expanding $I_0$ in a series, the pdf can be written in the form
$$f(x, y; \lambda_1, \lambda_2, \rho) = \sum_{k=0}^\infty \pi(k; \rho)\, g_{k+1}\!\left(x; \frac{1-\rho}{\lambda_1}\right) g_{k+1}\!\left(y; \frac{1-\rho}{\lambda_2}\right),$$
where $g_\alpha(\,\cdot\,; \beta)$ denotes the pdf of a Gamma$(\alpha, \beta)$ random variable and $\pi(k; \rho) = (1-\rho)\rho^k$, $k = 0, 1, 2, \ldots$, is the geometric probability mass function. Let $K$ be a random variable having the above geometric distribution. Then, conditionally on $K = k$, $X$ and $Y$ are independent gamma variates with shape parameter $k+1$ and scale parameters $(1-\rho)/\lambda_1$, $(1-\rho)/\lambda_2$ respectively. The most common algorithm for generating observations from (1.1) (see Downton, 1970, and Al-Saadi, Scrimshaw and Young, 1979), as well as the extension of the above distribution to more than two dimensions (see Al-Saadi and Young, 1982), are based on this well-known property.

Statistical inference for the parameters of (1.1) is restricted mainly to the correlation coefficient $\rho$. Nagao and Kadoya (1971), Al-Saadi and Young (1980), and Balakrishnan and Ng (2001) considered the estimation problem of $\rho$, and Al-Saadi, Scrimshaw and Young (1979) the problem of testing the hypothesis $\rho = 0$.
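The geometric/gamma mixture property just described is the basis of the standard sampling algorithm. The following is a minimal Python/NumPy sketch (the function name `rdownton` is illustrative, not from the paper):

```python
import numpy as np

def rdownton(size, lam1, lam2, rho, rng=None):
    """Sample `size` pairs from (1.1) via the geometric/gamma mixture:
    draw K with pmf pi(k; rho) = (1 - rho) * rho**k, then, given K = k,
    draw X ~ Gamma(k + 1, (1 - rho)/lam1) and Y ~ Gamma(k + 1, (1 - rho)/lam2)
    independently."""
    rng = np.random.default_rng(rng)
    # numpy's geometric counts trials until the first success (support 1, 2, ...),
    # so subtract 1 to get the number of failures, whose pmf is pi(k; rho)
    k = rng.geometric(1.0 - rho, size=size) - 1
    x = rng.gamma(shape=k + 1, scale=(1.0 - rho) / lam1)
    y = rng.gamma(shape=k + 1, scale=(1.0 - rho) / lam2)
    return x, y
```

The marginals are then exponential with means $1/\lambda_1$, $1/\lambda_2$, and the sample correlation approaches $\rho$, which can be checked empirically.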
However, another interesting problem is the estimation of $\lambda = \lambda_2/\lambda_1$, which represents the ratio of the means of the two components. For example, an estimated value greater than one indicates that on average the first component is more reliable than the second one. Note that $\lambda$ is also the ratio of the scale parameters of $X$ and $Y$. Estimation of $\lambda$ in general scale families, including among others the normal, exponential and inverse Gaussian, has been considered by many authors in the past. For a decision-theoretic approach, see Gelfand and Dey (1988), Madi and Tsui (1990), Kubokawa (1994), Madi (1995), Ghosh and Kundu (1996),

Kubokawa and Srivastava (1996) (who assume independence of the two components), and Iliopoulos (2001) (who considers the problem of estimation of the ratio of variances in the bivariate normal distribution).

Next, we outline the rest of the paper. In Section 2, an unbiased estimator $\hat\lambda_U$ of $\lambda$ is derived based on a random sample from (1.1). Then, a class of inadmissible estimators with respect to the mean squared error is constructed and it is shown that this class contains $\hat\lambda_U$. Furthermore, some alternative biased estimators dominating $\hat\lambda_U$ are presented. In Section 3, unbiased estimators of the regression of $X$ on $Y$, as well as of the conditional variance of $X$ given $Y = y$, are given. They are also shown to be inadmissible; improved estimators are presented as well. Finally, an Appendix contains useful expressions for expectations of geometric and negative binomial distributions as well as of the statistics involved in the derivation of the results.

2 Estimation of the ratio of means

Let $(X_1, Y_1), \ldots, (X_n, Y_n)$, $n \ge 2$, be a random sample from (1.1) and $\mathbf{K} = (K_1, \ldots, K_n)$ be the associated (unobserved) random sample from the geometric distribution $\pi(\,\cdot\,; \rho)$ such that, given $K_i = k_i$, $X_i$ is independent of $Y_i$, $i = 1, \ldots, n$. Since each $K_i$ is related only to $(X_i, Y_i)$, it is clear that, conditionally on $\mathbf{K} = \mathbf{k} = (k_1, \ldots, k_n)$, all $X$'s and $Y$'s are independent. Set $K = \sum K_i$, $k = \sum k_i$, and note that $K$ follows a negative binomial distribution. By considering the joint distribution of the data, it is easily seen that the sufficient statistic is $(X_1 Y_1, \ldots, X_n Y_n, \sum X_i, \sum Y_i)$. Setting $S_1 = \sum X_i$, $S_2 = \sum Y_i$, $\mathbf{U} = (U_1, \ldots, U_n) = (X_1 S_1^{-1}, \ldots, X_n S_1^{-1})$, and $\mathbf{V} = (V_1, \ldots, V_n) = (Y_1 S_2^{-1}, \ldots, Y_n S_2^{-1})$, we obtain the one-to-one transformation $(U_1 V_1, \ldots, U_n V_n, S_1, S_2)$, which is also sufficient. Conditionally on $\mathbf{K} = \mathbf{k}$, $S_1$ and $S_2$ are independent and $S_i \sim \mathrm{Gamma}(n + k, (1-\rho)\lambda_i^{-1})$, $i = 1, 2$.
Moreover, from a well-known characterization of the gamma distribution, $(S_1, S_2)$ is independent of $(\mathbf{U}, \mathbf{V})$, and $\mathbf{U}$, $\mathbf{V}$ are iid from an $(n-1)$-variate Dirichlet distribution with parameters $k_1 + 1, \ldots, k_{n-1} + 1, k_n + 1$.

Consider the estimation problem of $\lambda = \lambda_2/\lambda_1$. Nagao and Kadoya (1971) showed that the maximum likelihood estimators (mles) of $\lambda_1$ and $\lambda_2$ are $1/\bar{X}$ and $1/\bar{Y}$ respectively; thus the mle of $\lambda$ is $\hat\lambda_{mle} = S_1/S_2$. Using Lemma 4.1(vii) in the Appendix, we obtain the expectation of this estimator,
$$E[S_1/S_2] = E[E(S_1/S_2 \mid K)] = \lambda\,E\left[\frac{n+K}{n+K-1}\right] = \lambda\,\frac{n-\rho}{n-1}. \qquad (2.1)$$
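The bias factor $(n-\rho)/(n-1)$ in (2.1) is easy to confirm by simulation. A sketch using the geometric/gamma mixture (all variable names are illustrative):

```python
import numpy as np

# Monte Carlo check of (2.1): E[S1/S2] = lambda * (n - rho)/(n - 1),
# where lambda = lam2/lam1 and (X_i, Y_i) are drawn from (1.1).
rng = np.random.default_rng(0)
n, lam1, lam2, rho, reps = 5, 1.0, 2.0, 0.5, 200_000

k = rng.geometric(1.0 - rho, size=(reps, n)) - 1   # K_i ~ pi(.; rho), one row per replication
x = rng.gamma(k + 1, (1.0 - rho) / lam1)           # X_i | K_i
y = rng.gamma(k + 1, (1.0 - rho) / lam2)           # Y_i | K_i
ratio = x.sum(axis=1) / y.sum(axis=1)              # S1/S2 for each replication

exact = (lam2 / lam1) * (n - rho) / (n - 1)        # here: 2 * 4.5 / 4 = 2.25
print(ratio.mean(), exact)                          # the two values should be close
```

With these parameters the simulated mean of $S_1/S_2$ is close to 2.25 rather than to $\lambda = 2$, exhibiting the upward bias of the mle.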

Hence, $\hat\lambda_{mle}$ is biased. For deriving an unbiased estimator of $\lambda$ it is necessary to employ an estimator of the correlation coefficient $\rho$. There are two classes of estimators of $\rho$ in the literature: (i) estimators based on the statistic $T = \sum X_i Y_i / S_1 S_2 = \sum U_i V_i$ (such as the moment estimator), and (ii) estimators based on the sample correlation coefficient $R$; see Al-Saadi and Young (1980) and Balakrishnan and Ng (2001). However, $R$ is not a function of the sufficient statistic, whereas $T$ is. Therefore, $T$ has been chosen for our purposes. Note also that the problem of estimation of $\lambda$ remains invariant under the group of transformations $(X_i, Y_i) \to (c_1 X_i, c_2 Y_i)$, $i = 1, \ldots, n$, and equivariant estimators of $\lambda$ are of the form $\psi(U_1 V_1, \ldots, U_n V_n)\,S_1/S_2$. A particular choice for $\psi$ can be of the form $\psi(T)$, giving more justification to $T$.

The conditional expectation of $T$ given $\mathbf{K} = \mathbf{k}$ is
$$E[T \mid \mathbf{K} = \mathbf{k}] = \sum_{i=1}^n E[U_i V_i \mid \mathbf{K} = \mathbf{k}] = \sum_{i=1}^n E(U_i \mid \mathbf{K} = \mathbf{k})^2 = \sum_{i=1}^n \frac{(k_i+1)^2}{(k+n)^2}.$$
Since $T$ is a function of $\mathbf{U}$ and $\mathbf{V}$ solely, it follows that, conditionally on $\mathbf{K}$, it is also independent of $S_1$, $S_2$. Therefore,
$$E[T S_1/S_2] = E[E(T S_1/S_2 \mid \mathbf{K})] = E[E(T \mid \mathbf{K})\,E(S_1/S_2 \mid \mathbf{K})] = \lambda\,E\left[\sum_{i=1}^n \frac{(K_i+1)^2}{(n+K)(n+K-1)}\right]$$
$$= \lambda\,E\left\{n\,(n+K)^{-1}(n+K-1)^{-1}\,E[(K_1+1)^2 \mid K]\right\} = \lambda\,E\left[\frac{n+2K+1}{(n+1)(n+K-1)}\right] = \lambda\left(\frac{1}{n-1} + \frac{n-3}{n^2-1}\,\rho\right) \qquad (2.2)$$
(see Lemma 4.1). From (2.1) and (2.2) it can be seen that each of $E[S_1/S_2]$, $E[T S_1/S_2]$ equals $\lambda$ times a first-degree polynomial in $\rho$. The derivation of an unbiased estimator of $\lambda$ which is a function of $S_1$, $S_2$ and $T$ is equivalent to finding $c_0$, $c_1$ such that $E[c_0 S_1/S_2 + c_1 T S_1/S_2] = \lambda$. Solving the linear equations
$$\frac{n}{n-1}\,c_0 + \frac{1}{n-1}\,c_1 = 1, \qquad -\frac{1}{n-1}\,c_0 + \frac{n-3}{n^2-1}\,c_1 = 0,$$
we obtain $c_0 = (n-3)/(n-1)$, $c_1 = (n+1)/(n-1)$. Thus, we have proved the following proposition.

Proposition 2.1. The estimator
$$\hat\lambda_U = \frac{n-3+(n+1)T}{n-1}\,\frac{S_1}{S_2} \qquad (2.3)$$
is unbiased for $\lambda = \lambda_2/\lambda_1$.
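In code, the unbiased estimator (2.3) is a one-liner once $S_1$, $S_2$ and $T$ are computed. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def lambda_hat_U(x, y):
    """Unbiased estimator (2.3) of lambda = lam2/lam1:
    ((n - 3 + (n + 1)*T) / (n - 1)) * S1/S2, with S1 = sum(x),
    S2 = sum(y) and T = sum(x*y) / (S1*S2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    s1, s2 = x.sum(), y.sum()
    t = (x * y).sum() / (s1 * s2)
    return (n - 3 + (n + 1) * t) / (n - 1) * s1 / s2
```

For example, for the (hypothetical) data `x = [1, 2, 3, 4, 5]`, `y = [2, 1, 4, 3, 5]` one gets $T = 53/225$ and $\hat\lambda_U = 64/75 \approx 0.853$.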

For $n \ge 3$, the variance of $\hat\lambda_U$ is
$$\mathrm{Var}(\hat\lambda_U) = E\left[\left(\frac{n-3+(n+1)T}{n-1}\,\frac{S_1}{S_2}\right)^{\!2}\right] - \lambda^2 = \left(\frac{n-3}{n-1}\right)^{\!2} E\!\left[S_1^2/S_2^2\right] + \frac{2(n-3)(n+1)}{(n-1)^2}\,E\!\left[T S_1^2/S_2^2\right] + \left(\frac{n+1}{n-1}\right)^{\!2} E\!\left[T^2 S_1^2/S_2^2\right] - \lambda^2,$$
and substituting the expectations from Lemma 4.2 we get
$$\mathrm{Var}(\hat\lambda_U) = \left[\frac{2n^2-5n+5}{(n-2)(n-1)^2} - \frac{2(n^3-3n+10)}{(n^2-4)(n-1)^2}\,\rho + \frac{n^3+6n^2-5n+38}{(n+3)(n^2-4)(n-1)^2}\,\rho^2\right]\lambda^2.$$

Consider the class of estimators of $\lambda$,
$$\mathcal{C} = \{\hat\lambda_{a_1,a_2} = (a_1 + a_2 T)\,S_1/S_2,\ a_1, a_2 \in \mathbb{R}\}.$$
The unbiased estimator $\hat\lambda_U$ as well as the mle $\hat\lambda_{mle}$ are members of $\mathcal{C}$, for $a_1 = a_{1U} = (n-3)/(n-1)$, $a_2 = a_{2U} = (n+1)/(n-1)$ and $a_1 = 1$, $a_2 = 0$, respectively. We would like to characterize inadmissible estimators within $\mathcal{C}$ in terms of mean squared error (mse). By invariance, the (scaled) mse $\lambda^{-2} E_{\lambda_1,\lambda_2,\rho}(\hat\lambda_{a_1,a_2} - \lambda)^2$ does not depend on $\lambda_1, \lambda_2$. Thus, without loss of generality, we assume for the rest of the section that $\lambda_1 = \lambda_2 = 1$ and denote the mse of $\hat\lambda_{a_1,a_2}$ by $\mathrm{mse}(\mathbf{a}, \rho)$, where $\mathbf{a} = (a_1, a_2)$.

Fix $\rho \in [0, 1]$. Then, for $n \ge 3$, $\mathrm{mse}(\mathbf{a}, \rho)$ is strictly convex in $\mathbf{a}$ and there exists a minimizing point $\mathbf{a}_0(\rho) = (a_{10}(\rho), a_{20}(\rho))$ with
$$a_{10}(\rho) = (n-2)\,q_1(\rho)/q_2(\rho), \qquad a_{20}(\rho) = 3(n-2)(n+1)(n+2)(n+3)\,\rho(1-\rho)^2/q_2(\rho),$$
where
$$q_1(\rho) = (n+1)(n+2)(n+3) + 4(n-6)(n+1)(n+3)\rho - (n-5)(3n^2+29n+30)\rho^2 + 2(n^3-11n-46)\rho^3,$$
$$q_2(\rho) = (n+1)^2(n+2)(n+3) + 4(n-6)(n+1)^2(n+3)\rho - (3n^4+32n^3-77n^2-382n-312)\rho^2 + 2(n^4+4n^3+25n^2-126n-256)\rho^3 - 3(n^3+13n-94)\rho^4.$$
As expected, it holds that $\mathbf{a}_0(0) = ((n-2)/(n+1), 0)$, i.e., in the case of two independent exponential samples the best estimator within $\mathcal{C}$ coincides with the best equivariant estimator of $\lambda$. On the other hand, $\mathbf{a}_0(1) = (1, 0)$, that is, the best estimator in this case is the mle. Notice here that the mse of the mle tends to zero as $\rho \to 1$. To see that

without evaluating it, observe that, since the support of $X_i, Y_i$ is $(0, \infty)$, $\rho = 1$ implies that $\lambda_1 X_i = \lambda_2 Y_i$ with probability one.

Setting
$$B(\rho) = \begin{pmatrix} E_{\lambda_1=\lambda_2=1,\rho}(S_1^2/S_2^2) & E_{\lambda_1=\lambda_2=1,\rho}(T S_1^2/S_2^2) \\ E_{\lambda_1=\lambda_2=1,\rho}(T S_1^2/S_2^2) & E_{\lambda_1=\lambda_2=1,\rho}(T^2 S_1^2/S_2^2) \end{pmatrix},$$
the mse of $\hat\lambda_{a_1,a_2}$ can be expressed as
$$\mathrm{mse}(\mathbf{a}, \rho) = [\mathbf{a} - \mathbf{a}_0(\rho)]' B(\rho) [\mathbf{a} - \mathbf{a}_0(\rho)] + \mathrm{mse}(\mathbf{a}_0(\rho), \rho).$$
Let
$$\mathcal{E}(\mathbf{a}, \rho) = \left\{\mathbf{c} \in \mathbb{R}^2 : [\mathbf{c} - \mathbf{a}_0(\rho)]' B(\rho) [\mathbf{c} - \mathbf{a}_0(\rho)] < \mathrm{mse}(\mathbf{a}, \rho) - \mathrm{mse}(\mathbf{a}_0(\rho), \rho)\right\}$$
be the interior of the ellipse that consists of the points $\mathbf{c} = (c_1, c_2)$ such that $\hat\lambda_{c_1,c_2}$ has equal mse with $\hat\lambda_{a_1,a_2}$ for the particular $\rho$. Then, $\hat\lambda_{a_1,a_2}$ is admissible within $\mathcal{C}$ if and only if
$$\bigcap_{\rho \in [0,1)} \mathcal{E}(\mathbf{a}, \rho) = \varnothing.$$
This condition is clearly satisfied by $\hat\lambda_{a_{10}(\rho),a_{20}(\rho)}$, $\rho \in [0,1)$, implying that these estimators are admissible within $\mathcal{C}$. By the continuity of the mse, this holds also for the mle. However, the determination of the above intersection is in general a problem which does not seem to allow for an analytical solution. Instead of that, we can find a subclass of $\mathcal{C}$ containing inadmissible estimators $\hat\lambda_{a_1,a_2}$ by fixing $a_1$ or $a_2$ one at a time.

Fix first $a_1$. Then the mse of $\hat\lambda_{a_1,a_2}$ is quadratic in $a_2$ and uniquely minimized at $a_2 = a_2^*(a_1, \rho)$ given by
$$a_2^*(a_1, \rho) = \frac{(n+2)(n+3)\left\{(n-2)[n+1+(n-3)\rho] - [(n+1)^2 + (n^2-5n-12)\rho - 3(n-5)\rho^2]\,a_1\right\}}{(n+2)(n+3)^2 + 2(n+3)(n^2+n-26)\rho + (n^3-8n^2-27n+178)\rho^2}. \qquad (2.4)$$
Since the denominator in (2.4) is positive for every $\rho \in [0, 1]$ and $n \ge 3$, $a_2^*(a_1, \rho)$ is bounded. Let $\underline{a}_2(a_1) = \inf_{\rho\in[0,1)} a_2^*(a_1, \rho)$ and $\bar{a}_2(a_1) = \sup_{\rho\in[0,1)} a_2^*(a_1, \rho)$. Then we have the following.

Proposition 2.2. (i) If $a_2 \notin A_2(a_1) = [\underline{a}_2(a_1), \bar{a}_2(a_1)]$ then $\hat\lambda_{a_1,a_2}$ is inadmissible, being dominated by $\hat\lambda_{a_1,\underline{a}_2(a_1)}$ if $a_2 < \underline{a}_2(a_1)$ or by $\hat\lambda_{a_1,\bar{a}_2(a_1)}$ if $a_2 > \bar{a}_2(a_1)$.

(ii) In particular, if $a_1 \le a_{11} = (n-7)/(n-1)$ then
$$\underline{a}_2(a_1) = a_2^*(a_1, 1) = \frac{(n+2)(n+3)}{2(n+5)}\,(1 - a_1), \qquad (2.5)$$

$$\bar{a}_2(a_1) = a_2^*(a_1, 0) = \frac{(n+1)^2}{n+3}\left(\frac{n-2}{n+1} - a_1\right), \qquad (2.6)$$
whereas, if $a_1 \ge a_{12} = (n^3+2n^2-41n-34)/[(n-1)(n^2+9n+10)]$,
$$\underline{a}_2(a_1) = a_2^*(a_1, 0), \qquad \bar{a}_2(a_1) = a_2^*(a_1, 1),$$
where $a_2^*(a_1, 1)$, $a_2^*(a_1, 0)$ are as in (2.5), (2.6), respectively.

Proof. Part (i) is a consequence of the convexity of the mse in $a_2$. Part (ii) arises from the monotonicity of $a_2^*(a_1, \rho)$ with respect to $\rho$. Specifically, for $a_1 \le a_{11}$, $a_2^*(a_1, \rho)$ is strictly decreasing in $\rho$, whereas for $a_1 \ge a_{12}$ it is strictly increasing. This can be seen by examining the sign of the derivative of $a_2^*(a_1, \rho)$ with respect to $\rho$, which is proportional to the quadratic (in $\rho$)
$$(n+3)[(n-1)(n^2+9n+10)a_1 - (n^3+2n^2-41n-34)] + 2[(n-1)(n^3-35n-46)a_1 - (n+1)(n^3-8n^2-27n+178)]\rho + [(n-1)(n^3-4n^2-19n+102)a_1 - (n-3)(n^3-8n^2-27n+178)]\rho^2.$$
The rest of the proof is elementary (although messy) and therefore omitted.

Remark 2.1. When $a_2 < \underline{a}_2(a_1)$, by the convexity of the mean squared error, $\hat\lambda_{a_1,a_2}$ is dominated not only by $\hat\lambda_{a_1,\underline{a}_2(a_1)}$ but by any estimator $\hat\lambda_{a_1,a_2'}$ with $a_2' \in (a_2, \underline{a}_2(a_1)]$ (a similar argument applies when $a_2 > \bar{a}_2(a_1)$). Nevertheless, $\hat\lambda_{a_1,\underline{a}_2(a_1)}$ is the best among these estimators, and therefore is the only one mentioned in Proposition 2.2.

In a similar way, by fixing $a_2$ and letting $a_1$ vary, one can obtain an analogous result. In this case the mse is quadratic in $a_1$ and uniquely minimized at $a_1 = a_1^*(a_2, \rho)$ given by
$$a_1^*(a_2, \rho) = \frac{(n+1)(n-2)(n-\rho) - [(n+1)^2 + (n^2-5n-12)\rho - 3(n-5)\rho^2]\,a_2}{(n+1)[n(n+1) - 4(n+1)\rho + 6\rho^2]}.$$
The denominator is always positive, thus $a_1^*(a_2, \rho)$ is bounded for $\rho \in [0, 1]$. Setting $\underline{a}_1(a_2) = \inf_{\rho\in[0,1)} a_1^*(a_2, \rho)$ and $\bar{a}_1(a_2) = \sup_{\rho\in[0,1)} a_1^*(a_2, \rho)$, we derive the following.

Proposition 2.3. (i) If $a_1 \notin A_1(a_2) = [\underline{a}_1(a_2), \bar{a}_1(a_2)]$ then $\hat\lambda_{a_1,a_2}$ is inadmissible, being dominated by $\hat\lambda_{\underline{a}_1(a_2),a_2}$ if $a_1 < \underline{a}_1(a_2)$ or by $\hat\lambda_{\bar{a}_1(a_2),a_2}$ if $a_1 > \bar{a}_1(a_2)$.
(ii) In particular, if $a_2 \le a_{21} = 3n(n+1)/[(n-1)(n+2)]$ then
$$\underline{a}_1(a_2) = a_1^*(a_2, 0) = \frac{n-2}{n+1} - \frac{a_2}{n}, \qquad (2.7)$$

$$\bar{a}_1(a_2) = a_1^*(a_2, 1) = 1 - \frac{2a_2}{n+1}, \qquad (2.8)$$
whereas, if $a_2 \ge a_{22} = 3(n+1)/(n-1)$,
$$\underline{a}_1(a_2) = a_1^*(a_2, 1), \qquad \bar{a}_1(a_2) = a_1^*(a_2, 0),$$
where $a_1^*(a_2, 0)$, $a_1^*(a_2, 1)$ are as in (2.7), (2.8), respectively.

Propositions 2.2 and 2.3 provide necessary conditions for the admissibility of $\hat\lambda_{a_1,a_2}$ within $\mathcal{C}$, as stated in Corollary 2.1 below.

Corollary 2.1. Two necessary conditions for the admissibility of $\hat\lambda_{a_1,a_2}$ within $\mathcal{C}$ are $a_1 \in A_1(a_2)$ and $a_2 \in A_2(a_1)$.

Typically, unbiased estimators of scale parameters (as is $\lambda$ for the distribution of $S_1/S_2$) are inadmissible in terms of mean squared error. In our case, the inadmissibility of the unbiased estimator $\hat\lambda_U$ follows from Proposition 2.2, since $a_{1U} > a_{12}$ and $a_{2U} > \bar{a}_2(a_{1U}) = (n+2)(n+3)/[(n-1)(n+5)]$.

Corollary 2.2. The unbiased estimator $\hat\lambda_U$ is inadmissible in terms of mean squared error, being dominated by
$$\hat\lambda_U^* = \hat\lambda_{a_{1U},\bar{a}_2(a_{1U})} = \left(\frac{n-3}{n-1} + \frac{(n+2)(n+3)}{(n-1)(n+5)}\,T\right)\frac{S_1}{S_2}. \qquad (2.9)$$

Consider now the broader class of estimators $\mathcal{D} = \{\hat\lambda_\phi = \phi(T)\,S_1/S_2\}$, where $\phi(\cdot)$ is any function such that $\hat\lambda_\phi$ has finite mse. Using Stein's (1964) technique, originally presented for improving the best equivariant estimator of a normal variance when the mean is unknown, one concludes that $\hat\lambda_U^*$ in (2.9) as well as a large subset of $\mathcal{C}$ are inadmissible estimators. To be specific, consider the conditional mean squared error of $\hat\lambda_\phi$ given $T = t$, $\mathbf{K} = \mathbf{k}$, $E\{[\phi(t)S_1/S_2 - \lambda]^2 \mid T = t, \mathbf{K} = \mathbf{k}\}$, which is quadratic in $\phi(t)$ and uniquely minimized at
$$\phi_k^*(t) = \lambda\,\frac{E[S_1/S_2 \mid T = t, \mathbf{K} = \mathbf{k}]}{E[S_1^2/S_2^2 \mid T = t, \mathbf{K} = \mathbf{k}]} = \frac{n+k-2}{n+k+1} = \phi_k^*, \text{ say}.$$
Note that it does not depend on $t$, since conditionally on $\mathbf{K} = \mathbf{k}$, $S_1$, $S_2$ and $T$ are mutually independent. Moreover, $\phi_k^*$ is strictly increasing in $k$ with $\phi_0^* = (n-2)/(n+1)$

and $\lim_{k\to\infty} \phi_k^* = 1$. As a consequence, each estimator of the form $\hat\lambda_\phi$ with $P[\phi(T) \notin [(n-2)/(n+1), 1]] > 0$ is inadmissible, being dominated by the estimator $\phi^*(T)\,S_1/S_2$, where $\phi^*(T) = \max\{(n-2)/(n+1), \min[\phi(T), 1]\}$. Application of the above argument to the class $\mathcal{C}$ leads to the following proposition.

Proposition 2.4. (i) If $a_1 \notin [(n-2)/(n+1), 1]$ or $a_2 \notin [(n-2)/(n+1) - a_1, 1 - a_1]$, then the estimator $\hat\lambda_{a_1,a_2}$ is inadmissible, being dominated by $\max\{(n-2)/(n+1), \min[a_1 + a_2 T, 1]\}\,S_1/S_2$.

(ii) In particular, $\hat\lambda_U^*$ in (2.9) is dominated by $\hat\lambda_U^{**} = \min\{\hat\lambda_U^*, \hat\lambda_{mle}\}$.

The mse of $\hat\lambda_U^{**}$ cannot be derived in a closed form, therefore an analytical comparison with $\hat\lambda_{mle}$ is impossible. However, it is easy to compare the latter with $\hat\lambda_U^*$. Table 1 shows, for selected sample sizes, the corresponding values of the correlation coefficient for which both estimators have equal mean squared errors.

[Table 1. Values of $\rho$ for which $\hat\lambda_{mle}$ and $\hat\lambda_U^*$ have equal mean squared errors.]

When $\rho$ is less than the reported value, $\hat\lambda_U^*$ is superior to $\hat\lambda_{mle}$, and vice versa. Since $\hat\lambda_U^{**}$ dominates $\hat\lambda_U^*$, it follows that for $\rho$ less than the reported value, $\hat\lambda_U^{**}$ dominates $\hat\lambda_{mle}$ as well. (In fact, a Monte Carlo study showed that $\hat\lambda_U^{**}$ and $\hat\lambda_{mle}$ have equal mean squared errors when $\rho$ is approximately 0.05 higher than the values given in Table 1.) It can be concluded that $\hat\lambda_U^{**}$ should be preferred, unless almost perfect linear correlation is suspected.

3 Estimation of the regression and the conditional variance

Consider now estimation of the regression of $X$ on $Y$ based on a random sample from (1.1). Downton (1970) showed that the conditional expectation of $X$ given $Y = y$ is linear in $y$; specifically,
$$\eta(y) = E[X \mid Y = y] = \frac{1-\rho}{\lambda_1} + \rho\,\frac{\lambda_2}{\lambda_1}\,y.$$
Obviously, for deriving an unbiased estimator of $\eta(y)$ it suffices to derive unbiased estimators of $\eta_1 = (1-\rho)/\lambda_1$ and $\eta_2 = \rho\lambda_2/\lambda_1$.
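The unbiased estimators of $\eta_1$ and $\eta_2$ given in Proposition 3.1 below combine into the unbiased regression estimate of Corollary 3.1. As a minimal sketch (the function name is illustrative):

```python
import numpy as np

def eta_hat_U(x, y, y0):
    """Unbiased estimator of eta(y0) = E[X | Y = y0] (Corollary 3.1):
    eta1_hat = (2 - (n+1)*T)/(n-1) * S1          (unbiased for (1-rho)/lam1),
    eta2_hat = (n+1)/(n-1) * (n*T - 1) * S1/S2   (unbiased for rho*lam2/lam1),
    eta_hat_U(y0) = eta1_hat + eta2_hat * y0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    s1, s2 = x.sum(), y.sum()
    t = (x * y).sum() / (s1 * s2)
    eta1 = (2 - (n + 1) * t) / (n - 1) * s1
    eta2 = (n + 1) / (n - 1) * (n * t - 1) * s1 / s2
    return eta1 + eta2 * y0
```

Note that this unbiased estimate can be negative; the truncated versions derived later in this section repair exactly that defect.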

Proposition 3.1. (i) The estimator
$$\hat\eta_{1U} = \frac{2 - (n+1)T}{n-1}\,S_1$$
is unbiased for $\eta_1 = (1-\rho)/\lambda_1$.

(ii) The estimator
$$\hat\eta_{2U} = \frac{n+1}{n-1}\,(nT - 1)\,\frac{S_1}{S_2}$$
is unbiased for $\eta_2 = \rho\lambda_2/\lambda_1$.

Proof. (i) The problem is similar to that of the derivation of $\hat\lambda_U$ in (2.3). We have to find $c_0$, $c_1$ such that $E[(c_0 + c_1 T)S_1] = (1-\rho)\lambda_1^{-1}$. Using Lemma 4.2(i), (ii), it can be seen that it suffices to solve the equations
$$n c_0 + c_1 = 1, \qquad \frac{n-1}{n+1}\,c_1 = -1,$$
for $c_0$ and $c_1$. The solution is $c_0 = 2/(n-1)$ and $c_1 = -(n+1)/(n-1)$; hence $\hat\eta_{1U}$ is an unbiased estimator of $\eta_1 = (1-\rho)/\lambda_1$.

(ii) Similarly, we need to find $c_0$, $c_1$ such that $E[(c_0 + c_1 T)S_1/S_2] = \rho\lambda_2/\lambda_1$. Using (2.1) and (2.2), we get the equations
$$\frac{n}{n-1}\,c_0 + \frac{1}{n-1}\,c_1 = 0, \qquad -\frac{1}{n-1}\,c_0 + \frac{n-3}{n^2-1}\,c_1 = 1,$$
whose solution is $c_0 = -(n+1)/(n-1)$ and $c_1 = n(n+1)/(n-1)$, yielding $\hat\eta_{2U}$ as an unbiased estimator of $\eta_2 = \rho\lambda_2/\lambda_1$.

Corollary 3.1. The estimator
$$\hat\eta_U(y) = \frac{2 - (n+1)T}{n-1}\,S_1 + \frac{n+1}{n-1}\,(nT - 1)\,\frac{S_1}{S_2}\,y$$
is unbiased for $\eta(y)$.

The estimator $\hat\eta_U(y)$ is inadmissible for every $y$, since it assumes negative values with positive probability. A rather crude improved estimator is its positive part, $\hat\eta_U^+(y) = \max\{0, \hat\eta_U(y)\}$, which has smaller risk for any convex loss function. However, the same occurs for $\hat\eta_{1U}$ and $\hat\eta_{2U}$, and it seems rational to improve first on them and use their improvements to estimate the regression.

An estimator dominating $\hat\eta_{1U}$ in terms of mean squared error can be derived using Stein's (1964) technique. Consider the conditional mean squared error of estimators of

the form $\phi(T)S_1$ given $T = t$, $\mathbf{K} = \mathbf{k}$, $E\{[\phi(t)S_1 - (1-\rho)\lambda_1^{-1}]^2 \mid T = t, \mathbf{K} = \mathbf{k}\}$, which is quadratic in $\phi(t)$ and uniquely minimized at
$$\phi_k^*(t) = \lambda_1^{-1}(1-\rho)\,\frac{E[S_1 \mid T = t, \mathbf{K} = \mathbf{k}]}{E[S_1^2 \mid T = t, \mathbf{K} = \mathbf{k}]} = \frac{1}{n+k+1} = \phi_k^*, \text{ say}.$$
Now, $\phi_k^*$ is positive, attaining its maximum when $k = 0$, i.e., $0 < \phi_k^* \le \phi_0^* = (n+1)^{-1}$. As a consequence, each estimator of the form $\phi(T)S_1$ with $P[\phi(T) \notin [0, (n+1)^{-1}]] > 0$ is inadmissible, being dominated by the estimator $\phi^*(T)S_1$, where $\phi^*(T) = \max\{0, \min[\phi(T), (n+1)^{-1}]\}$. Since $P[(2-(n+1)T)/(n-1) \notin [0, (n+1)^{-1}]] > 0$, $\hat\eta_{1U}$ is dominated by the estimator
$$\hat\eta_1 = \begin{cases} S_1/(n+1), & T < (n+3)/(n+1)^2, \\ \hat\eta_{1U}, & (n+3)/(n+1)^2 \le T \le 2/(n+1), \\ 0, & T > 2/(n+1). \end{cases} \qquad (3.1)$$

In a similar fashion we can improve on $\hat\eta_{2U}$. Note that it contains the quantity $nT - 1$, which is the estimator of $\rho$ obtained by Nagao and Kadoya (1971) using the method of moments. Using the condition $0 \le \rho < 1$, Al-Saadi and Young (1980) modified this estimator to
$$\tilde\rho = \begin{cases} 0, & T < 1/n, \\ nT - 1, & 1/n \le T \le 2/n, \\ 1, & T > 2/n. \end{cases}$$
The replacement of $nT - 1$ in $\hat\eta_{2U}$ by $\max\{nT - 1, 0\}$ leads to its positive part, $\hat\eta_{2U}^+ = \max\{0, \hat\eta_{2U}\}$, which is an improved estimator of $\rho\lambda_2/\lambda_1$. Replacement of $nT - 1$ by $\tilde\rho$ seems also reasonable, leading to the estimator
$$\tilde\eta_2 = \begin{cases} 0, & T < 1/n, \\ \hat\eta_{2U}, & 1/n \le T \le 2/n, \\ \dfrac{n+1}{n-1}\,\dfrac{S_1}{S_2}, & T > 2/n. \end{cases} \qquad (3.2)$$
However, using Stein's (1964) technique we can find an estimator dominating all these estimators. Consider the class of estimators of $\rho\lambda_2/\lambda_1$ having the form $\psi(T)S_1/S_2$. The conditional mean squared error given $T = t$, $\mathbf{K} = \mathbf{k}$ of such an estimator is uniquely minimized with respect to $\psi(t)$ at
$$\psi_k^*(t) = \rho\lambda_2\lambda_1^{-1}\,\frac{E[S_1/S_2 \mid T = t, \mathbf{K} = \mathbf{k}]}{E[S_1^2/S_2^2 \mid T = t, \mathbf{K} = \mathbf{k}]} = \rho\,\frac{n+k-2}{n+k+1} = \psi_k^*(\rho),$$

say. Since $0 \le \psi_k^*(\rho) \le \rho < 1$, any estimator of the form $\psi(T)S_1/S_2$ satisfying $P[\psi(T) \notin [0, 1]] > 0$ is inadmissible. Indeed, it is dominated by $\psi^*(T)S_1/S_2$, where $\psi^*(T) = \max\{0, \min[\psi(T), 1]\}$. Thus, $\hat\eta_{2U}$ and $\hat\eta_{2U}^+$ are dominated by
$$\hat\eta_2 = \begin{cases} 0, & T < 1/n, \\ \hat\eta_{2U}, & 1/n \le T \le 2/(n+1), \\ S_1/S_2, & T > 2/(n+1). \end{cases} \qquad (3.3)$$
From (3.2) and (3.3), it is obvious that $\hat\eta_2$ also dominates $\tilde\eta_2$.

Remark. The estimators $\hat\eta_1$, $\hat\eta_2$ in (3.1), (3.3) respectively have the property of pretesting for $\rho$. For example, when $T$ is small (smaller than $(n+3)/(n+1)^2$), indicating $\rho = 0$, $\hat\eta_1$ equals the best equivariant estimator of $1/\lambda_1$ with respect to squared error loss, $S_1/(n+1)$. On the other hand, when $T$ is large (greater than $2/(n+1)$), indicating $\rho$ to be very close to one, $\hat\eta_1$ equals zero. Analogous comments hold for $\hat\eta_2$.

The percentage improvements in terms of mean squared error of the estimators $\hat\eta_1$, $\hat\eta_2$ over $\hat\eta_{1U}$, $\hat\eta_{2U}$ respectively have been evaluated by Monte Carlo sampling from (1.1), for sample sizes $n = 10, 20, 50$ and $\rho = 0(.1).9$. A fixed number of replications was taken for each pair $(n, \rho)$. The results are shown in Table 2.

[Table 2. Simulated percentage risk improvement in mean squared error of $\hat\eta_1$ in (3.1) and $\hat\eta_2$ in (3.3) over $\hat\eta_{1U}$, $\hat\eta_{2U}$ respectively.]

It can be seen that the improvements are remarkable even for $n = 50$. Generally, they are larger for extreme values of $\rho$. This can be explained by the nature of the improved estimators, as indicated in the above remark.

The conditional variance of $X$ given $Y = y$ is also linear in $y$. Specifically,
$$\theta(y) = \mathrm{Var}(X \mid Y = y) = \left(\frac{1-\rho}{\lambda_1}\right)^{\!2} + \frac{2\rho(1-\rho)\lambda_2}{\lambda_1^2}\,y.$$
Let $\theta_1 = (1-\rho)^2\lambda_1^{-2}$, $\theta_2 = 2\rho(1-\rho)\lambda_2\lambda_1^{-2}$. Then we have the following proposition.

Proposition 3.2. (i) The estimator $\hat\theta_{1U} = h_1(T)S_1^2$, where
$$h_1(T) = \frac{4(n+5) - 4(n+1)(n+5)T + (n+1)(n+2)(n+3)T^2}{(n-1)(n^2+5n+2)},$$
is unbiased for $\theta_1 = (1-\rho)^2\lambda_1^{-2}$.

(ii) The estimator $\hat\theta_{2U} = h_2(T)S_1^2/S_2$, where
$$h_2(T) = \frac{-4(n^2+7n+8) + 2(n+1)(3n^2+19n+18)T - 2(n+1)^2(n+2)(n+3)T^2}{(n-1)(n^2+5n+2)},$$
is unbiased for $\theta_2 = 2\rho(1-\rho)\lambda_2\lambda_1^{-2}$.

Proof. Similarly to the proof of Proposition 3.1, the problem reduces to finding $c_0, c_1, c_2$ and $d_0, d_1, d_2$ such that $E[(c_0 + c_1 T + c_2 T^2)S_1^2] = \theta_1$ and $E[(d_0 + d_1 T + d_2 T^2)S_1^2/S_2] = \theta_2$ for parts (i), (ii) respectively. Using Lemma 4.2 and equating the coefficients of the appropriate second-degree polynomials in $\rho$, we obtain $\hat\theta_{1U}$, $\hat\theta_{2U}$ as unbiased estimators of $\theta_1$, $\theta_2$.

Corollary 3.2. The estimator $\hat\theta_U(y) = h_1(T)S_1^2 + y\,h_2(T)S_1^2/S_2$ is unbiased for $\theta(y)$.

The estimators $\hat\theta_{1U}$, $\hat\theta_{2U}$, and hence $\hat\theta_U(y)$, assume negative values with positive probability. As in the estimation problem of $\eta(y)$, we can improve on them by truncating $h_1$, $h_2$ in suitable intervals. Omitting the details, an estimator of $\theta_1 = (1-\rho)^2\lambda_1^{-2}$ of the form $\phi(T)S_1^2$ satisfying $P[\phi(T) \notin [0, 1/(n+2)(n+3)]] > 0$ is dominated by $\phi^*(T)S_1^2$, where $\phi^*(T) = \max\{0, \min[\phi(T), 1/(n+2)(n+3)]\}$, whereas an estimator of $\theta_2 = 2\rho(1-\rho)\lambda_2\lambda_1^{-2}$ of the form $\psi(T)S_1^2/S_2$ with $P[\psi(T) \notin [0, 2(n-2)/(n+2)(n+3)]] > 0$ is dominated by $\psi^*(T)S_1^2/S_2$, where $\psi^*(T) = \max\{0, \min[\psi(T), 2(n-2)/(n+2)(n+3)]\}$, provided $n \ge 6$. The functions $h_1$, $h_2$ satisfy the above conditions for $n \ge 3$; thus $\hat\theta_{1U}$, $\hat\theta_{2U}$ are dominated by suitable estimators.

4 Appendix

Lemma 4.1. Let $K_1, K_2, \ldots, K_n$ be a random sample from a geometric distribution with probability mass function
$$\pi_1(k_1; \rho) = P(K_1 = k_1; \rho) = (1-\rho)\rho^{k_1}, \quad k_1 = 0, 1, 2, \ldots, \qquad (4.1)$$

and $K = \sum K_i$. Then,

(i) $P(K_1 = k_1 \mid K = k) = \binom{n-2+k-k_1}{k-k_1}\binom{n+k-1}{k}^{-1}$, $0 \le k_1 \le k$,

(ii) $P(K_1 = k_1, K_2 = k_2 \mid K = k) = \binom{n-3+k-k_1-k_2}{k-k_1-k_2}\binom{n+k-1}{k}^{-1}$, $0 \le k_1, k_2$, $k_1 + k_2 \le k$,

(iii) $E[(K_1+1)^2 \mid K = k] = 1 + \dfrac{3n+1}{n(n+1)}\,k + \dfrac{2}{n(n+1)}\,k^2$,

(iv) $E[(K_1+1)^2(K_1+2)^2 \mid K = k] = 4\left(1 + \dfrac{8n^2+13n+3}{n(n+1)(n+3)}\,k + \dfrac{19n^2+41n+18}{n(n+1)(n+2)(n+3)}\,k^2 + \dfrac{18}{n(n+2)(n+3)}\,k^3 + \dfrac{6}{n(n+1)(n+2)(n+3)}\,k^4\right)$,

(v) $E[(K_1+1)^2(K_2+1)^2 \mid K = k] = 1 + \dfrac{(2n+3)(3n+1)}{n(n+1)(n+3)}\,k + \dfrac{13n^2+29n+14}{n(n+1)(n+2)(n+3)}\,k^2 + \dfrac{12}{n(n+2)(n+3)}\,k^3 + \dfrac{4}{n(n+1)(n+2)(n+3)}\,k^4$,

(vi) $EK = \dfrac{n\rho}{1-\rho}, \qquad EK^2 = \dfrac{n\rho(1+n\rho)}{(1-\rho)^2}$,

(vii) $E\left[\dfrac{n+K}{n+K-1}\right] = \dfrac{n-\rho}{n-1}$.

Proof. Parts (i), (ii) are applications of Bayes' theorem, whereas parts (iii)–(vi) are straightforward. We will prove only part (vii). Since $K = \sum K_i$ follows a negative binomial distribution with probability mass function
$$\pi_n(k; \rho) = \binom{n+k-1}{k}\rho^k(1-\rho)^n, \quad k = 0, 1, 2, \ldots,$$
one has
$$E\left[\frac{n+K}{n+K-1}\right] = \sum_{k=0}^\infty \frac{n+k}{n+k-1}\binom{n+k-1}{k}\rho^k(1-\rho)^n = \sum_{k=0}^\infty \frac{(n+k-2)!\,(n+k)}{k!\,(n-1)!}\,\rho^k(1-\rho)^n$$
$$= \frac{n(1-\rho)}{n-1}\sum_{k=0}^\infty \binom{n-1+k-1}{k}\rho^k(1-\rho)^{n-1} + \rho\sum_{k=0}^\infty \binom{n+k-1}{k}\rho^k(1-\rho)^n = \frac{n(1-\rho)}{n-1} + \rho = \frac{n-\rho}{n-1}.$$

Lemma 4.2. Let $(X_1, Y_1), (X_2, Y_2), \ldots, (X_n, Y_n)$ be a random sample from (1.1), and $S_1 = \sum X_i$, $S_2 = \sum Y_i$, $T = \sum X_i Y_i/(S_1 S_2)$. Then,

(i) $E[S_1] = n\,\lambda_1^{-1}$,

(ii) $E[TS_1] = \left(1 + \dfrac{n-1}{n+1}\,\rho\right)\lambda_1^{-1}$,

(iii) $E[S_1^2] = n(n+1)\,\lambda_1^{-2}$,

(iv) $E[TS_1^2] = \left(n+1 + \dfrac{(n-1)(n+2)}{n+1}\,\rho - \dfrac{n-1}{n+1}\,\rho^2\right)\lambda_1^{-2}$,

(v) $E[T^2S_1^2] = \left(\dfrac{n+3}{n+1} + \dfrac{2(n-1)(n+6)}{(n+1)(n+2)}\,\rho + \dfrac{(n-1)(n^2+n-18)}{(n+1)(n+2)(n+3)}\,\rho^2\right)\lambda_1^{-2}$,

(vi) $E[S_1^2/S_2] = \left(\dfrac{n(n+1)}{n-1} - \dfrac{2(n+1)}{n-1}\,\rho + \dfrac{2}{n-1}\,\rho^2\right)\lambda_2\lambda_1^{-2}$,

(vii) $E[TS_1^2/S_2] = \left(\dfrac{n+1}{n-1} + \dfrac{n^2-2n-7}{n^2-1}\,\rho - \dfrac{2(n-3)}{n^2-1}\,\rho^2\right)\lambda_2\lambda_1^{-2}$,

(viii) $E[T^2S_1^2/S_2] = \left(\dfrac{n+3}{n^2-1} + \dfrac{2(n^2+3n-16)}{(n^2-1)(n+2)}\,\rho + \dfrac{n^3-4n^2-27n+78}{(n^2-1)(n+2)(n+3)}\,\rho^2\right)\lambda_2\lambda_1^{-2}$,

(ix) $E[S_1^2/S_2^2] = \left(\dfrac{n(n+1)}{(n-1)(n-2)} - \dfrac{4(n+1)}{(n-1)(n-2)}\,\rho + \dfrac{6}{(n-1)(n-2)}\,\rho^2\right)\lambda_2^2\lambda_1^{-2}$,

(x) $E[TS_1^2/S_2^2] = \left(\dfrac{n+1}{(n-1)(n-2)} + \dfrac{n^2-5n-12}{(n^2-1)(n-2)}\,\rho - \dfrac{3(n-5)}{(n^2-1)(n-2)}\,\rho^2\right)\lambda_2^2\lambda_1^{-2}$,

(xi) $E[T^2S_1^2/S_2^2] = \left(\dfrac{n+3}{(n^2-1)(n-2)} + \dfrac{2(n^2+n-26)}{(n^2-1)(n^2-4)}\,\rho + \dfrac{n^3-8n^2-27n+178}{(n^2-1)(n^2-4)(n+3)}\,\rho^2\right)\lambda_2^2\lambda_1^{-2}$.

Proof. The marginal distribution of $S_1$ is Gamma$(n, 1/\lambda_1)$; thus (i), (iii) are immediate. Of the rest, we will prove only (v), since the proofs of the other parts are similar. Let $\mathbf{K} = (K_1, \ldots, K_n)$ be a random sample from the geometric distribution (4.1) and set $U_i = X_i S_1^{-1}$, $V_i = Y_i S_2^{-1}$, $i = 1, \ldots, n$. Then
$$E[T^2 \mid \mathbf{K} = \mathbf{k}] = E\left[\Big(\sum_{i=1}^n U_i V_i\Big)^{\!2}\,\Big|\,\mathbf{K} = \mathbf{k}\right] = E\left[\sum_{i=1}^n U_i^2 V_i^2 + \sum_{i=1}^n \sum_{j \ne i} U_i V_i U_j V_j\,\Big|\,\mathbf{K} = \mathbf{k}\right]$$
$$= \frac{1}{(n+k)^2(n+k+1)^2}\left[\sum_{i=1}^n (k_i+1)^2(k_i+2)^2 + \sum_{i=1}^n \sum_{j \ne i} (k_i+1)^2(k_j+1)^2\right],$$
$$E[S_1^2 \mid \mathbf{K} = \mathbf{k}] = (n+k)(n+k+1)(1-\rho)^2\lambda_1^{-2},$$

yielding
$$E[T^2 S_1^2] = E[E(T^2 S_1^2 \mid \mathbf{K})] = E[E(T^2 \mid \mathbf{K})\,E(S_1^2 \mid \mathbf{K})] = E\left[E\left\{\frac{n(K_1+1)^2(K_1+2)^2 + n(n-1)(K_1+1)^2(K_2+1)^2}{(n+K)(n+K+1)}\,\Big|\,K\right\}\right]\left(\frac{1-\rho}{\lambda_1}\right)^{\!2}$$
$$= E\left[\frac{n+3}{n+1} + \frac{4(n+5)}{(n+1)(n+3)}\,K + \frac{4(n+5)}{(n+1)(n+2)(n+3)}\,K^2\right]\left(\frac{1-\rho}{\lambda_1}\right)^{\!2}.$$
Here the last equality follows from Lemma 4.1(iv), (v). Substituting the moments of $K$ from Lemma 4.1(vi) into the last expression, we obtain the desired result.

Acknowledgment

The author wishes to thank the referees for their suggestions which improved the results and the presentation of the paper.

References

Al-Saadi, S. D., Scrimshaw, D. G. and Young, D. H. (1979). Tests for independence of exponential variables. J. Statist. Comput. Simul., 9.

Al-Saadi, S. D. and Young, D. H. (1980). Estimators for the correlation coefficient in a bivariate exponential distribution. J. Statist. Comput. Simul., 11.

Al-Saadi, S. D. and Young, D. H. (1982). A test for independence in a multivariate exponential distribution with equal correlation coefficient. J. Statist. Comput. Simul., 14.

Balakrishnan, N. and Ng, H. K. T. (2001). Improved estimation of the correlation coefficient in a bivariate exponential distribution. J. Statist. Comput. Simul., 68.

Downton, F. (1970). Bivariate exponential distributions in reliability theory. J. Roy. Statist. Soc. B, 32.

Gelfand, A. E. and Dey, D. K. (1988). On the estimation of a variance ratio. J. Statist. Plann. Inference, 19.

Ghosh, M. and Kundu, S. (1996). Decision theoretic estimation of the variance ratio. Statist. Decisions, 14.

Iliopoulos, G. (2001). Decision theoretic estimation of the ratio of variances in a bivariate normal distribution. Ann. Inst. Statist. Math., 53.

Kibble, W. F. (1941). A two-variate gamma type distribution. Sankhyā, 5.

Kotz, S., Balakrishnan, N. and Johnson, N. L. (2000). Continuous Multivariate Distributions, 1. Second edition. New York, Wiley.

Kubokawa, T. (1994). Double shrinkage estimation of ratio of scale parameters. Ann. Inst. Statist. Math., 46.

Kubokawa, T. and Srivastava, M. S. (1996). Double shrinkage estimators of ratio of variances. In: Multidimensional Statistical Analysis and Theory of Random Matrices (eds. A. K. Gupta and V. L. Girko), VSP, Netherlands.

Madi, T. M. (1995). On the invariant estimation of a normal variance ratio. J. Statist. Plann. Inference, 44.

Madi, T. M. and Tsui, K. W. (1990). Estimation of the ratio of the scale parameters of two exponential distributions with unknown location parameters. Ann. Inst. Statist. Math., 42.

Moran, P. A. P. (1967). Testing for correlation between non-negative variates. Biometrika, 54.

Nagao, M. and Kadoya, M. (1971). Two-variate exponential distribution and its numerical table for engineering application. Bulletin of the Disaster Prevention Institute, Kyoto University, 20.

Stein, C. (1964). Inadmissibility of the usual estimator for the variance of a normal distribution with unknown mean. Ann. Inst. Statist. Math., 16.


More information

MAS223 Statistical Inference and Modelling Exercises

MAS223 Statistical Inference and Modelling Exercises MAS223 Statistical Inference and Modelling Exercises The exercises are grouped into sections, corresponding to chapters of the lecture notes Within each section exercises are divided into warm-up questions,

More information

Two hours. To be supplied by the Examinations Office: Mathematical Formula Tables THE UNIVERSITY OF MANCHESTER. 21 June :45 11:45

Two hours. To be supplied by the Examinations Office: Mathematical Formula Tables THE UNIVERSITY OF MANCHESTER. 21 June :45 11:45 Two hours MATH20802 To be supplied by the Examinations Office: Mathematical Formula Tables THE UNIVERSITY OF MANCHESTER STATISTICAL METHODS 21 June 2010 9:45 11:45 Answer any FOUR of the questions. University-approved

More information

Chapter 3: Maximum Likelihood Theory

Chapter 3: Maximum Likelihood Theory Chapter 3: Maximum Likelihood Theory Florian Pelgrin HEC September-December, 2010 Florian Pelgrin (HEC) Maximum Likelihood Theory September-December, 2010 1 / 40 1 Introduction Example 2 Maximum likelihood

More information

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A. 1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n

More information

LECTURE 5 NOTES. n t. t Γ(a)Γ(b) pt+a 1 (1 p) n t+b 1. The marginal density of t is. Γ(t + a)γ(n t + b) Γ(n + a + b)

LECTURE 5 NOTES. n t. t Γ(a)Γ(b) pt+a 1 (1 p) n t+b 1. The marginal density of t is. Γ(t + a)γ(n t + b) Γ(n + a + b) LECTURE 5 NOTES 1. Bayesian point estimators. In the conventional (frequentist) approach to statistical inference, the parameter θ Θ is considered a fixed quantity. In the Bayesian approach, it is considered

More information

Miscellaneous Errors in the Chapter 6 Solutions

Miscellaneous Errors in the Chapter 6 Solutions Miscellaneous Errors in the Chapter 6 Solutions 3.30(b In this problem, early printings of the second edition use the beta(a, b distribution, but later versions use the Poisson(λ distribution. If your

More information

Lecture 7 Introduction to Statistical Decision Theory

Lecture 7 Introduction to Statistical Decision Theory Lecture 7 Introduction to Statistical Decision Theory I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 20, 2016 1 / 55 I-Hsiang Wang IT Lecture 7

More information

Moments of the Reliability, R = P(Y<X), As a Random Variable

Moments of the Reliability, R = P(Y<X), As a Random Variable International Journal of Computational Engineering Research Vol, 03 Issue, 8 Moments of the Reliability, R = P(Y

More information

STAT 512 sp 2018 Summary Sheet

STAT 512 sp 2018 Summary Sheet STAT 5 sp 08 Summary Sheet Karl B. Gregory Spring 08. Transformations of a random variable Let X be a rv with support X and let g be a function mapping X to Y with inverse mapping g (A = {x X : g(x A}

More information

STATISTICS SYLLABUS UNIT I

STATISTICS SYLLABUS UNIT I STATISTICS SYLLABUS UNIT I (Probability Theory) Definition Classical and axiomatic approaches.laws of total and compound probability, conditional probability, Bayes Theorem. Random variable and its distribution

More information

On a simple construction of bivariate probability functions with fixed marginals 1

On a simple construction of bivariate probability functions with fixed marginals 1 On a simple construction of bivariate probability functions with fixed marginals 1 Djilali AIT AOUDIA a, Éric MARCHANDb,2 a Université du Québec à Montréal, Département de mathématiques, 201, Ave Président-Kennedy

More information

Statistics - Lecture One. Outline. Charlotte Wickham 1. Basic ideas about estimation

Statistics - Lecture One. Outline. Charlotte Wickham  1. Basic ideas about estimation Statistics - Lecture One Charlotte Wickham wickham@stat.berkeley.edu http://www.stat.berkeley.edu/~wickham/ Outline 1. Basic ideas about estimation 2. Method of Moments 3. Maximum Likelihood 4. Confidence

More information

Journal of Statistical Research 2007, Vol. 41, No. 1, pp Bangladesh

Journal of Statistical Research 2007, Vol. 41, No. 1, pp Bangladesh Journal of Statistical Research 007, Vol. 4, No., pp. 5 Bangladesh ISSN 056-4 X ESTIMATION OF AUTOREGRESSIVE COEFFICIENT IN AN ARMA(, ) MODEL WITH VAGUE INFORMATION ON THE MA COMPONENT M. Ould Haye School

More information

Bayes and Empirical Bayes Estimation of the Scale Parameter of the Gamma Distribution under Balanced Loss Functions

Bayes and Empirical Bayes Estimation of the Scale Parameter of the Gamma Distribution under Balanced Loss Functions The Korean Communications in Statistics Vol. 14 No. 1, 2007, pp. 71 80 Bayes and Empirical Bayes Estimation of the Scale Parameter of the Gamma Distribution under Balanced Loss Functions R. Rezaeian 1)

More information

Exact Inference for the Two-Parameter Exponential Distribution Under Type-II Hybrid Censoring

Exact Inference for the Two-Parameter Exponential Distribution Under Type-II Hybrid Censoring Exact Inference for the Two-Parameter Exponential Distribution Under Type-II Hybrid Censoring A. Ganguly, S. Mitra, D. Samanta, D. Kundu,2 Abstract Epstein [9] introduced the Type-I hybrid censoring scheme

More information

t x 1 e t dt, and simplify the answer when possible (for example, when r is a positive even number). In particular, confirm that EX 4 = 3.

t x 1 e t dt, and simplify the answer when possible (for example, when r is a positive even number). In particular, confirm that EX 4 = 3. Mathematical Statistics: Homewor problems General guideline. While woring outside the classroom, use any help you want, including people, computer algebra systems, Internet, and solution manuals, but mae

More information

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015 Part IA Probability Theorems Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.

More information

Chapter 3 sections. SKIP: 3.10 Markov Chains. SKIP: pages Chapter 3 - continued

Chapter 3 sections. SKIP: 3.10 Markov Chains. SKIP: pages Chapter 3 - continued Chapter 3 sections 3.1 Random Variables and Discrete Distributions 3.2 Continuous Distributions 3.3 The Cumulative Distribution Function 3.4 Bivariate Distributions 3.5 Marginal Distributions 3.6 Conditional

More information

Estimation Under Multivariate Inverse Weibull Distribution

Estimation Under Multivariate Inverse Weibull Distribution Global Journal of Pure and Applied Mathematics. ISSN 097-768 Volume, Number 8 (07), pp. 4-4 Research India Publications http://www.ripublication.com Estimation Under Multivariate Inverse Weibull Distribution

More information

Qualifying Exam in Probability and Statistics. https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf

Qualifying Exam in Probability and Statistics. https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf Part 1: Sample Problems for the Elementary Section of Qualifying Exam in Probability and Statistics https://www.soa.org/files/edu/edu-exam-p-sample-quest.pdf Part 2: Sample Problems for the Advanced Section

More information

3.0.1 Multivariate version and tensor product of experiments

3.0.1 Multivariate version and tensor product of experiments ECE598: Information-theoretic methods in high-dimensional statistics Spring 2016 Lecture 3: Minimax risk of GLM and four extensions Lecturer: Yihong Wu Scribe: Ashok Vardhan, Jan 28, 2016 [Ed. Mar 24]

More information

Statistics GIDP Ph.D. Qualifying Exam Theory Jan 11, 2016, 9:00am-1:00pm

Statistics GIDP Ph.D. Qualifying Exam Theory Jan 11, 2016, 9:00am-1:00pm Statistics GIDP Ph.D. Qualifying Exam Theory Jan, 06, 9:00am-:00pm Instructions: Provide answers on the supplied pads of paper; write on only one side of each sheet. Complete exactly 5 of the 6 problems.

More information

Testing Statistical Hypotheses

Testing Statistical Hypotheses E.L. Lehmann Joseph P. Romano, 02LEu1 ttd ~Lt~S Testing Statistical Hypotheses Third Edition With 6 Illustrations ~Springer 2 The Probability Background 28 2.1 Probability and Measure 28 2.2 Integration.........

More information

Spring 2012 Math 541A Exam 1. X i, S 2 = 1 n. n 1. X i I(X i < c), T n =

Spring 2012 Math 541A Exam 1. X i, S 2 = 1 n. n 1. X i I(X i < c), T n = Spring 2012 Math 541A Exam 1 1. (a) Let Z i be independent N(0, 1), i = 1, 2,, n. Are Z = 1 n n Z i and S 2 Z = 1 n 1 n (Z i Z) 2 independent? Prove your claim. (b) Let X 1, X 2,, X n be independent identically

More information

A PRACTICAL WAY FOR ESTIMATING TAIL DEPENDENCE FUNCTIONS

A PRACTICAL WAY FOR ESTIMATING TAIL DEPENDENCE FUNCTIONS Statistica Sinica 20 2010, 365-378 A PRACTICAL WAY FOR ESTIMATING TAIL DEPENDENCE FUNCTIONS Liang Peng Georgia Institute of Technology Abstract: Estimating tail dependence functions is important for applications

More information

A New Two Sample Type-II Progressive Censoring Scheme

A New Two Sample Type-II Progressive Censoring Scheme A New Two Sample Type-II Progressive Censoring Scheme arxiv:609.05805v [stat.me] 9 Sep 206 Shuvashree Mondal, Debasis Kundu Abstract Progressive censoring scheme has received considerable attention in

More information

arxiv: v1 [math.st] 26 Jun 2011

arxiv: v1 [math.st] 26 Jun 2011 The Shape of the Noncentral χ 2 Density arxiv:1106.5241v1 [math.st] 26 Jun 2011 Yaming Yu Department of Statistics University of California Irvine, CA 92697, USA yamingy@uci.edu Abstract A noncentral χ

More information

Review and continuation from last week Properties of MLEs

Review and continuation from last week Properties of MLEs Review and continuation from last week Properties of MLEs As we have mentioned, MLEs have a nice intuitive property, and as we have seen, they have a certain equivariance property. We will see later that

More information

Statistical Inference On the High-dimensional Gaussian Covarianc

Statistical Inference On the High-dimensional Gaussian Covarianc Statistical Inference On the High-dimensional Gaussian Covariance Matrix Department of Mathematical Sciences, Clemson University June 6, 2011 Outline Introduction Problem Setup Statistical Inference High-Dimensional

More information

Mathematics Qualifying Examination January 2015 STAT Mathematical Statistics

Mathematics Qualifying Examination January 2015 STAT Mathematical Statistics Mathematics Qualifying Examination January 2015 STAT 52800 - Mathematical Statistics NOTE: Answer all questions completely and justify your derivations and steps. A calculator and statistical tables (normal,

More information

Shrinkage Estimation in High Dimensions

Shrinkage Estimation in High Dimensions Shrinkage Estimation in High Dimensions Pavan Srinath and Ramji Venkataramanan University of Cambridge ITA 206 / 20 The Estimation Problem θ R n is a vector of parameters, to be estimated from an observation

More information

Statistics 3858 : Maximum Likelihood Estimators

Statistics 3858 : Maximum Likelihood Estimators Statistics 3858 : Maximum Likelihood Estimators 1 Method of Maximum Likelihood In this method we construct the so called likelihood function, that is L(θ) = L(θ; X 1, X 2,..., X n ) = f n (X 1, X 2,...,

More information

Three hours. To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER.

Three hours. To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER. Three hours To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER EXTREME VALUES AND FINANCIAL RISK Examiner: Answer QUESTION 1, QUESTION

More information

A Very Brief Summary of Bayesian Inference, and Examples

A Very Brief Summary of Bayesian Inference, and Examples A Very Brief Summary of Bayesian Inference, and Examples Trinity Term 009 Prof Gesine Reinert Our starting point are data x = x 1, x,, x n, which we view as realisations of random variables X 1, X,, X

More information

Part III. A Decision-Theoretic Approach and Bayesian testing

Part III. A Decision-Theoretic Approach and Bayesian testing Part III A Decision-Theoretic Approach and Bayesian testing 1 Chapter 10 Bayesian Inference as a Decision Problem The decision-theoretic framework starts with the following situation. We would like to

More information

A BIVARIATE MARSHALL AND OLKIN EXPONENTIAL MINIFICATION PROCESS

A BIVARIATE MARSHALL AND OLKIN EXPONENTIAL MINIFICATION PROCESS Faculty of Sciences Mathematics, University of Niš, Serbia Available at: http://wwwpmfniacyu/filomat Filomat 22: (2008), 69 77 A BIVARIATE MARSHALL AND OLKIN EXPONENTIAL MINIFICATION PROCESS Miroslav M

More information

Covariance function estimation in Gaussian process regression

Covariance function estimation in Gaussian process regression Covariance function estimation in Gaussian process regression François Bachoc Department of Statistics and Operations Research, University of Vienna WU Research Seminar - May 2015 François Bachoc Gaussian

More information

4 Invariant Statistical Decision Problems

4 Invariant Statistical Decision Problems 4 Invariant Statistical Decision Problems 4.1 Invariant decision problems Let G be a group of measurable transformations from the sample space X into itself. The group operation is composition. Note that

More information

Chapter 5 continued. Chapter 5 sections

Chapter 5 continued. Chapter 5 sections Chapter 5 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions

More information

A Very Brief Summary of Statistical Inference, and Examples

A Very Brief Summary of Statistical Inference, and Examples A Very Brief Summary of Statistical Inference, and Examples Trinity Term 2009 Prof. Gesine Reinert Our standard situation is that we have data x = x 1, x 2,..., x n, which we view as realisations of random

More information

Statistics Ph.D. Qualifying Exam: Part I October 18, 2003

Statistics Ph.D. Qualifying Exam: Part I October 18, 2003 Statistics Ph.D. Qualifying Exam: Part I October 18, 2003 Student Name: 1. Answer 8 out of 12 problems. Mark the problems you selected in the following table. 1 2 3 4 5 6 7 8 9 10 11 12 2. Write your answer

More information

A Skewed Look at Bivariate and Multivariate Order Statistics

A Skewed Look at Bivariate and Multivariate Order Statistics A Skewed Look at Bivariate and Multivariate Order Statistics Prof. N. Balakrishnan Dept. of Mathematics & Statistics McMaster University, Canada bala@mcmaster.ca p. 1/4 Presented with great pleasure as

More information

An Analysis of Record Statistics based on an Exponentiated Gumbel Model

An Analysis of Record Statistics based on an Exponentiated Gumbel Model Communications for Statistical Applications and Methods 2013, Vol. 20, No. 5, 405 416 DOI: http://dx.doi.org/10.5351/csam.2013.20.5.405 An Analysis of Record Statistics based on an Exponentiated Gumbel

More information

Joint work with Nottingham colleagues Simon Preston and Michail Tsagris.

Joint work with Nottingham colleagues Simon Preston and Michail Tsagris. /pgf/stepx/.initial=1cm, /pgf/stepy/.initial=1cm, /pgf/step/.code=1/pgf/stepx/.expanded=- 10.95415pt,/pgf/stepy/.expanded=- 10.95415pt, /pgf/step/.value required /pgf/images/width/.estore in= /pgf/images/height/.estore

More information

Probability and Measure

Probability and Measure Chapter 4 Probability and Measure 4.1 Introduction In this chapter we will examine probability theory from the measure theoretic perspective. The realisation that measure theory is the foundation of probability

More information

Time Series and Dynamic Models

Time Series and Dynamic Models Time Series and Dynamic Models Section 1 Intro to Bayesian Inference Carlos M. Carvalho The University of Texas at Austin 1 Outline 1 1. Foundations of Bayesian Statistics 2. Bayesian Estimation 3. The

More information

Considering our result for the sum and product of analytic functions, this means that for (a 0, a 1,..., a N ) C N+1, the polynomial.

Considering our result for the sum and product of analytic functions, this means that for (a 0, a 1,..., a N ) C N+1, the polynomial. Lecture 3 Usual complex functions MATH-GA 245.00 Complex Variables Polynomials. Construction f : z z is analytic on all of C since its real and imaginary parts satisfy the Cauchy-Riemann relations and

More information

Invariant HPD credible sets and MAP estimators

Invariant HPD credible sets and MAP estimators Bayesian Analysis (007), Number 4, pp. 681 69 Invariant HPD credible sets and MAP estimators Pierre Druilhet and Jean-Michel Marin Abstract. MAP estimators and HPD credible sets are often criticized in

More information

First Year Examination Department of Statistics, University of Florida

First Year Examination Department of Statistics, University of Florida First Year Examination Department of Statistics, University of Florida August 20, 2009, 8:00 am - 2:00 noon Instructions:. You have four hours to answer questions in this examination. 2. You must show

More information

ECE531 Lecture 10b: Maximum Likelihood Estimation

ECE531 Lecture 10b: Maximum Likelihood Estimation ECE531 Lecture 10b: Maximum Likelihood Estimation D. Richard Brown III Worcester Polytechnic Institute 05-Apr-2011 Worcester Polytechnic Institute D. Richard Brown III 05-Apr-2011 1 / 23 Introduction So

More information

Test Code: STA/STB (Short Answer Type) 2013 Junior Research Fellowship for Research Course in Statistics

Test Code: STA/STB (Short Answer Type) 2013 Junior Research Fellowship for Research Course in Statistics Test Code: STA/STB (Short Answer Type) 2013 Junior Research Fellowship for Research Course in Statistics The candidates for the research course in Statistics will have to take two shortanswer type tests

More information

Mathematics Ph.D. Qualifying Examination Stat Probability, January 2018

Mathematics Ph.D. Qualifying Examination Stat Probability, January 2018 Mathematics Ph.D. Qualifying Examination Stat 52800 Probability, January 2018 NOTE: Answers all questions completely. Justify every step. Time allowed: 3 hours. 1. Let X 1,..., X n be a random sample from

More information

A NOTE ON A DISTRIBUTION OF WEIGHTED SUMS OF I.I.D. RAYLEIGH RANDOM VARIABLES

A NOTE ON A DISTRIBUTION OF WEIGHTED SUMS OF I.I.D. RAYLEIGH RANDOM VARIABLES Sankhyā : The Indian Journal of Statistics 1998, Volume 6, Series A, Pt. 2, pp. 171-175 A NOTE ON A DISTRIBUTION OF WEIGHTED SUMS OF I.I.D. RAYLEIGH RANDOM VARIABLES By P. HITCZENKO North Carolina State

More information

The Bayesian Choice. Christian P. Robert. From Decision-Theoretic Foundations to Computational Implementation. Second Edition.

The Bayesian Choice. Christian P. Robert. From Decision-Theoretic Foundations to Computational Implementation. Second Edition. Christian P. Robert The Bayesian Choice From Decision-Theoretic Foundations to Computational Implementation Second Edition With 23 Illustrations ^Springer" Contents Preface to the Second Edition Preface

More information

Bayesian Inference. Chapter 9. Linear models and regression

Bayesian Inference. Chapter 9. Linear models and regression Bayesian Inference Chapter 9. Linear models and regression M. Concepcion Ausin Universidad Carlos III de Madrid Master in Business Administration and Quantitative Methods Master in Mathematical Engineering

More information

Discriminating Between the Bivariate Generalized Exponential and Bivariate Weibull Distributions

Discriminating Between the Bivariate Generalized Exponential and Bivariate Weibull Distributions Discriminating Between the Bivariate Generalized Exponential and Bivariate Weibull Distributions Arabin Kumar Dey & Debasis Kundu Abstract Recently Kundu and Gupta ( Bivariate generalized exponential distribution,

More information

40.530: Statistics. Professor Chen Zehua. Singapore University of Design and Technology

40.530: Statistics. Professor Chen Zehua. Singapore University of Design and Technology Singapore University of Design and Technology Lecture 9: Hypothesis testing, uniformly most powerful tests. The Neyman-Pearson framework Let P be the family of distributions of concern. The Neyman-Pearson

More information

Statistical Inference with Monotone Incomplete Multivariate Normal Data

Statistical Inference with Monotone Incomplete Multivariate Normal Data Statistical Inference with Monotone Incomplete Multivariate Normal Data p. 1/4 Statistical Inference with Monotone Incomplete Multivariate Normal Data This talk is based on joint work with my wonderful

More information

IEOR 165 Lecture 7 1 Bias-Variance Tradeoff

IEOR 165 Lecture 7 1 Bias-Variance Tradeoff IEOR 165 Lecture 7 Bias-Variance Tradeoff 1 Bias-Variance Tradeoff Consider the case of parametric regression with β R, and suppose we would like to analyze the error of the estimate ˆβ in comparison to

More information

MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES

MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES J. Korean Math. Soc. 47 1, No., pp. 63 75 DOI 1.4134/JKMS.1.47..63 MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES Ke-Ang Fu Li-Hua Hu Abstract. Let X n ; n 1 be a strictly stationary

More information

Multivariate Normal-Laplace Distribution and Processes

Multivariate Normal-Laplace Distribution and Processes CHAPTER 4 Multivariate Normal-Laplace Distribution and Processes The normal-laplace distribution, which results from the convolution of independent normal and Laplace random variables is introduced by

More information

460 HOLGER DETTE AND WILLIAM J STUDDEN order to examine how a given design behaves in the model g` with respect to the D-optimality criterion one uses

460 HOLGER DETTE AND WILLIAM J STUDDEN order to examine how a given design behaves in the model g` with respect to the D-optimality criterion one uses Statistica Sinica 5(1995), 459-473 OPTIMAL DESIGNS FOR POLYNOMIAL REGRESSION WHEN THE DEGREE IS NOT KNOWN Holger Dette and William J Studden Technische Universitat Dresden and Purdue University Abstract:

More information

Probability and Stochastic Processes

Probability and Stochastic Processes Probability and Stochastic Processes A Friendly Introduction Electrical and Computer Engineers Third Edition Roy D. Yates Rutgers, The State University of New Jersey David J. Goodman New York University

More information

Unbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others.

Unbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. Unbiased Estimation Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. To compare ˆθ and θ, two estimators of θ: Say ˆθ is better than θ if it

More information

MIT Spring 2016

MIT Spring 2016 MIT 18.655 Dr. Kempthorne Spring 2016 1 MIT 18.655 Outline 1 2 MIT 18.655 Decision Problem: Basic Components P = {P θ : θ Θ} : parametric model. Θ = {θ}: Parameter space. A{a} : Action space. L(θ, a) :

More information

Consistency of test based method for selection of variables in high dimensional two group discriminant analysis

Consistency of test based method for selection of variables in high dimensional two group discriminant analysis https://doi.org/10.1007/s42081-019-00032-4 ORIGINAL PAPER Consistency of test based method for selection of variables in high dimensional two group discriminant analysis Yasunori Fujikoshi 1 Tetsuro Sakurai

More information

Two Different Shrinkage Estimator Classes for the Shape Parameter of Classical Pareto Distribution

Two Different Shrinkage Estimator Classes for the Shape Parameter of Classical Pareto Distribution Two Different Shrinkage Estimator Classes for the Shape Parameter of Classical Pareto Distribution Meral EBEGIL and Senay OZDEMIR Abstract In this study, biased estimators for the shape parameter of a

More information

Spring 2012 Math 541B Exam 1

Spring 2012 Math 541B Exam 1 Spring 2012 Math 541B Exam 1 1. A sample of size n is drawn without replacement from an urn containing N balls, m of which are red and N m are black; the balls are otherwise indistinguishable. Let X denote

More information

Delta Method. Example : Method of Moments for Exponential Distribution. f(x; λ) = λe λx I(x > 0)

Delta Method. Example : Method of Moments for Exponential Distribution. f(x; λ) = λe λx I(x > 0) Delta Method Often estimators are functions of other random variables, for example in the method of moments. These functions of random variables can sometimes inherit a normal approximation from the underlying

More information

ST5215: Advanced Statistical Theory

ST5215: Advanced Statistical Theory Department of Statistics & Applied Probability Wednesday, October 5, 2011 Lecture 13: Basic elements and notions in decision theory Basic elements X : a sample from a population P P Decision: an action

More information

Principles of Statistics

Principles of Statistics Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 81 Paper 4, Section II 28K Let g : R R be an unknown function, twice continuously differentiable with g (x) M for

More information

Analysis of Type-II Progressively Hybrid Censored Data

Analysis of Type-II Progressively Hybrid Censored Data Analysis of Type-II Progressively Hybrid Censored Data Debasis Kundu & Avijit Joarder Abstract The mixture of Type-I and Type-II censoring schemes, called the hybrid censoring scheme is quite common in

More information

Chapter 5. Chapter 5 sections

Chapter 5. Chapter 5 sections 1 / 43 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions

More information

Asymptotic Statistics-III. Changliang Zou

Asymptotic Statistics-III. Changliang Zou Asymptotic Statistics-III Changliang Zou The multivariate central limit theorem Theorem (Multivariate CLT for iid case) Let X i be iid random p-vectors with mean µ and and covariance matrix Σ. Then n (

More information

Quick Tour of Basic Probability Theory and Linear Algebra

Quick Tour of Basic Probability Theory and Linear Algebra Quick Tour of and Linear Algebra Quick Tour of and Linear Algebra CS224w: Social and Information Network Analysis Fall 2011 Quick Tour of and Linear Algebra Quick Tour of and Linear Algebra Outline Definitions

More information

Ph.D. Qualifying Exam Friday Saturday, January 6 7, 2017

Ph.D. Qualifying Exam Friday Saturday, January 6 7, 2017 Ph.D. Qualifying Exam Friday Saturday, January 6 7, 2017 Put your solution to each problem on a separate sheet of paper. Problem 1. (5106) Let X 1, X 2,, X n be a sequence of i.i.d. observations from a

More information

Course: ESO-209 Home Work: 1 Instructor: Debasis Kundu
