Advances in Statistics, Article ID 974604, 9 pages, http://dx.doi.org/10.1155/2014/974604

Research Article

Efficient Estimators Using Auxiliary Variable under Second Order Approximation in Simple Random Sampling and Two-Phase Sampling

Rajesh Singh and Prayas Sharma
Department of Statistics, Banaras Hindu University, Varanasi 221005, India
Correspondence should be addressed to Prayas Sharma; prayassharma02@gmail.com

Received 2 July 2014; Accepted 9 August 2014; Published 3 September 2014
Academic Editor: Chin-Shang Li

Copyright 2014 R. Singh and P. Sharma. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper suggests some estimators for the population mean of the study variable in simple random sampling and two-phase sampling using information on an auxiliary variable under second order approximation. Bahl and Tuteja (1991) and Singh et al. (2008) proposed some efficient estimators and studied their properties to the first order of approximation. In this paper, we derive the second order biases and mean squared errors of these estimators using information on an auxiliary variable in simple random sampling and two-phase sampling. Finally, an empirical study is carried out to judge the merits of the estimators over one another under the first and second order of approximation.

1. Introduction

Let U = (U_1, U_2, U_3, ..., U_N) denote a finite population of N distinct and identifiable units. For the estimation of the population mean \bar{Y} of a study variable Y, let X be an auxiliary variable that is correlated with the study variable Y, the two variables taking corresponding values on the same units.
Let a sample of size n be drawn from this population by simple random sampling without replacement (SRSWOR), and let y_i, x_i (i = 1, 2, ..., n) be the values of the study variable and the auxiliary variable, respectively, for the ith unit of the sample.

In sampling theory the use of suitable auxiliary information results in a considerable reduction in the MSE of ratio estimators. Many authors, including Singh and Tailor [1], Kadilar and Cingi [2], Singh et al. [3], and Singh and Kumar [4], suggested estimators using known population parameters of an auxiliary variable in simple random sampling. These authors studied the properties of their estimators to the first order of approximation. Sometimes, however, it is important to know the behaviour of the estimators to the second order of approximation, because up to the first order of approximation many estimators behave almost identically, while their second order properties can differ drastically. Hossain et al. [5] and Sharma and Singh [6, 7] studied the properties of some estimators to the second order of approximation. Sharma et al. [8, 9] also studied the properties of some estimators under second order of approximation using information on auxiliary attributes. In this paper we study the properties of some exponential estimators under second order of approximation in simple random sampling and two-phase sampling using information on an auxiliary variable.

2. Some Estimators in Simple Random Sampling

For estimating the population mean \bar{Y} of Y, the exponential ratio estimator t_{1S} is given by

t_{1S} = \bar{y} exp[(\bar{X} - \bar{x})/(\bar{X} + \bar{x})],  (1)

where \bar{y} = (1/n) Σ_{i=1}^{n} y_i and \bar{x} = (1/n) Σ_{i=1}^{n} x_i (the subscript S denotes simple random sampling).
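A direct computational reading of (1) can be sketched as follows (the sample values and the known population mean X_bar are hypothetical, purely for illustration):

```python
import math

def exp_ratio_estimator(y, x, X_bar):
    """Exponential ratio estimator t_1S = ybar * exp((Xbar - xbar) / (Xbar + xbar))."""
    n = len(y)
    y_bar = sum(y) / n
    x_bar = sum(x) / n
    return y_bar * math.exp((X_bar - x_bar) / (X_bar + x_bar))

# Hypothetical SRSWOR sample; X_bar is the known population mean of x.
y_sample = [12.0, 15.0, 11.0, 14.0, 13.0]
x_sample = [110.0, 140.0, 100.0, 130.0, 120.0]
t1s = exp_ratio_estimator(y_sample, x_sample, X_bar=125.0)
```

When the sample mean of x falls below the known population mean (as here), the exponential factor exceeds 1 and pulls the estimate of \bar{Y} upward, which is the intended behaviour for positively correlated x and y.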
The classical exponential product type estimator is given by

t_{2S} = \bar{y} exp[(\bar{x} - \bar{X})/(\bar{X} + \bar{x})].  (2)

Following Srivastava [10], an estimator t_{3S} is defined as

t_{3S} = \bar{y} exp[α(\bar{X} - \bar{x})/(\bar{X} + \bar{x})],  (3)

where α is a constant suitably chosen by minimizing the MSE of t_{3S}. For α = 1, t_{3S} is the same as the conventional exponential ratio estimator, whereas for α = -1 it becomes the conventional exponential product type estimator.

Again, for estimating the population mean \bar{Y} of Y, Singh et al. [11] defined an estimator t_{4S} as

t_{4S} = \bar{y}[θ exp((\bar{X} - \bar{x})/(\bar{X} + \bar{x})) + (1 - θ) exp((\bar{x} - \bar{X})/(\bar{X} + \bar{x}))],  (4)

where θ is a constant suitably chosen by minimizing the mean squared error of the estimator t_{4S}.

3. Notations Used

Let us define e_0 = (\bar{y} - \bar{Y})/\bar{Y} and e_1 = (\bar{x} - \bar{X})/\bar{X}; then E(e_0) = E(e_1) = 0. Further, write

C_pq = μ_pq / (\bar{X}^p \bar{Y}^q), where μ_pq = (1/(N - 1)) Σ_{i=1}^{N} (X_i - \bar{X})^p (Y_i - \bar{Y})^q.

For obtaining the biases and MSEs the following lemmas will be used.

Lemma 1. Consider
(i) V(e_0) = E(e_0^2) = ((N - n)/((N - 1)n)) C_02 = L_1 C_02,
(ii) V(e_1) = E(e_1^2) = ((N - n)/((N - 1)n)) C_20 = L_1 C_20,
(iii) Cov(e_0, e_1) = E(e_0 e_1) = ((N - n)/((N - 1)n)) C_11 = L_1 C_11.  (5)

Lemma 2. Consider
(i) E(e_1^2 e_0) = ((N - n)(N - 2n)/((N - 1)(N - 2)n^2)) C_21 = L_2 C_21,
(ii) E(e_1^3) = ((N - n)(N - 2n)/((N - 1)(N - 2)n^2)) C_30 = L_2 C_30.  (6)

Lemma 3. Consider
(i) E(e_1^3 e_0) = L_3 C_31 + 3 L_4 C_20 C_11,
(ii) E(e_1^4) = L_3 C_40 + 3 L_4 C_20^2,
(iii) E(e_1^2 e_0^2) = L_3 C_22 + L_4 (C_20 C_02 + 2 C_11^2),  (7)

where

L_3 = (N - n)(N^2 + N - 6nN + 6n^2)/((N - 1)(N - 2)(N - 3)n^3),
L_4 = (N - n)(N - n - 1)(n - 1)/((N - 1)(N - 2)(N - 3)n^3).  (8)

For proofs of these lemmas see P. V. Sukhatme and B. V. Sukhatme [12].

4. Biases and Mean Squared Errors to the First Order of Approximation

The biases and MSEs of the estimators t_{1S}, t_{2S}, and t_{3S} are, respectively,

Bias(t_{1S}) = \bar{Y}[(3/8) L_1 C_20 - (1/2) L_1 C_11],  (9)

MSE(t_{1S}) = \bar{Y}^2[L_1 C_02 + (1/4) L_1 C_20 - L_1 C_11],  (10)

Bias(t_{2S}) = \bar{Y}[(1/2) L_1 C_11 - (1/8) L_1 C_20],  (11)

MSE(t_{2S}) = \bar{Y}^2[L_1 C_02 + (1/4) L_1 C_20 + L_1 C_11],  (12)

Bias(t_{3S}) = \bar{Y}[(α/4) L_1 C_20 + (α^2/8) L_1 C_20 - (α/2) L_1 C_11],  (13)

MSE(t_{3S}) = \bar{Y}^2[L_1 C_02 + (α^2/4) L_1 C_20 - α L_1 C_11].  (14)

By minimizing MSE(t_{3S}), the optimum value of α is obtained as α_o = 2C_11/C_20. Substituting this optimum value of α in (13) and (14), we get the minimum values of the bias and MSE of the estimator t_{3S}.

The bias and MSE of the estimator t_{4S} are given, respectively, as

Bias(t_{4S}) = \bar{Y}[θ{(3/8) L_1 C_20 - (1/2) L_1 C_11} + (1 - θ){(1/2) L_1 C_11 - (1/8) L_1 C_20}],

MSE(t_{4S}) = \bar{Y}^2[L_1 C_02 + (1/2 - θ)^2 L_1 C_20 + 2(1/2 - θ) L_1 C_11].  (15)

By minimizing MSE(t_{4S}), the optimum value of θ is obtained as θ_o = (C_11/C_20) + (1/2). Substituting this optimum value of θ in (15), we get the minimum values of the bias and MSE of the estimator t_{4S}. We find that at the optimum values the biases of the estimators t_{3S} and t_{4S} differ, but the MSE expressions of t_{3S} and t_{4S} are the same to the first
order of approximation. It is also seen that the MSEs of the estimators t_{3S} and t_{4S} are always less than the MSEs of the estimators t_{1S} and t_{2S}. This prompted us to study the estimators t_{3S} and t_{4S} under second order approximation.

5. Second Order Biases and Mean Squared Errors

Expressing the estimators t_{iS} (i = 1, 2, 3, 4) in terms of the e's, we get

t_{1S} = \bar{Y}(1 + e_0) exp[-e_1/(2 + e_1)]  (16)

or

t_{1S} - \bar{Y} = \bar{Y}{e_0 - (1/2)e_1 + (3/8)e_1^2 - (7/48)e_1^3 + (25/384)e_1^4 - (1/2)e_0 e_1 + (3/8)e_0 e_1^2 - (7/48)e_0 e_1^3}.  (17)

The bias of the estimator t_{1S} to the second order of approximation is

Bias_2(t_{1S}) = E(t_{1S} - \bar{Y})
= (\bar{Y}/2)[(3/4) L_1 C_20 - L_1 C_11 - (7/24) L_2 C_30 + (3/4) L_2 C_21 + (25/192)(L_3 C_40 + 3 L_4 C_20^2) - (7/24)(L_3 C_31 + 3 L_4 C_20 C_11)].  (18)

Using (17), we have

MSE_2(t_{1S}) = E[\bar{Y}(e_0 - (1/2)e_1 + (3/8)e_1^2 - (1/2)e_0 e_1 + (3/8)e_0 e_1^2 - (7/48)e_1^3)]^2  (19)

or

MSE_2(t_{1S}) = \bar{Y}^2 E[e_0^2 + (1/4)e_1^2 - e_0 e_1 - e_0^2 e_1 + (5/4)e_0 e_1^2 - (3/8)e_1^3 + e_0^2 e_1^2 - (25/24)e_0 e_1^3 + (55/192)e_1^4]  (20)

or

MSE_2(t_{1S}) = \bar{Y}^2[L_1 C_02 + (1/4) L_1 C_20 - L_1 C_11 - L_2 C_12 + (5/4) L_2 C_21 - (3/8) L_2 C_30 + (55/192)(L_3 C_40 + 3 L_4 C_20^2) + (L_3 C_22 + L_4 (C_20 C_02 + 2 C_11^2)) - (25/24)(L_3 C_31 + 3 L_4 C_20 C_11)].  (21)

The biases and MSEs of the estimators t_{2S}, t_{3S}, and t_{4S} to the second order of approximation are, respectively,

Bias_2(t_{2S}) = E(t_{2S} - \bar{Y})
= \bar{Y}[-(1/8) L_1 C_20 + (1/2) L_1 C_11 - (1/8) L_2 C_21 - (5/48) L_2 C_30 - (5/48)(L_3 C_31 + 3 L_4 C_20 C_11) + (1/384)(L_3 C_40 + 3 L_4 C_20^2)],  (22)

MSE_2(t_{2S}) = E(t_{2S} - \bar{Y})^2
= \bar{Y}^2[L_1 C_02 + (1/4) L_1 C_20 + L_1 C_11 + L_2 C_12 + (1/4) L_2 C_21 - (1/8) L_2 C_30 - (17/192)(L_3 C_40 + 3 L_4 C_20^2) - (11/24)(L_3 C_31 + 3 L_4 C_20 C_11)],  (23)

Bias_2(t_{3S}) = E(t_{3S} - \bar{Y})
= \bar{Y}[(α/4 + α^2/8) L_1 C_20 - (α/2) L_1 C_11 + (α/4 + α^2/8) L_2 C_21 - (α^2/8 + α^3/48) L_2 C_30 - (α^2/8 + α^3/48)(L_3 C_31 + 3 L_4 C_20 C_11) + (α^2/32 + α^3/32 + α^4/384)(L_3 C_40 + 3 L_4 C_20^2)],  (24)

MSE_2(t_{3S}) = E(t_{3S} - \bar{Y})^2
= \bar{Y}^2[L_1 C_02 + (α^2/4) L_1 C_20 - α L_1 C_11 - α L_2 C_12 + (α/2 + 3α^2/4) L_2 C_21 - (α^2/4 + α^3/8) L_2 C_30 + (α^2/16 + 3α^3/16 + 7α^4/192)(L_3 C_40 + 3 L_4 C_20^2) - (3α^2/4 + 7α^3/24)(L_3 C_31 + 3 L_4 C_20 C_11) + (α/2 + α^2/2)(L_3 C_22 + L_4 (C_20 C_02 + 2 C_11^2))],  (25)

Bias_2(t_{4S}) = E(t_{4S} - \bar{Y})
= \bar{Y}[(1/2 - θ) L_1 C_11 + (1/2)(θ - 1/4)(L_1 C_20 + L_2 C_21) - ((2θ + 5)/48){L_2 C_30 + (L_3 C_31 + 3 L_4 C_20 C_11)} + (1/16)(θ + 1/24)(L_3 C_40 + 3 L_4 C_20^2)],  (26)

MSE_2(t_{4S}) = E(t_{4S} - \bar{Y})^2
= \bar{Y}^2[L_1 C_02 + (1/2 - θ)^2 L_1 C_20 + 2(1/2 - θ) L_1 C_11 + 2(1/2 - θ) L_2 C_12 + {2(1/2 - θ)^2 + (4θ - 1)/4} L_2 C_21 + (1/4)(1/2 - θ)(4θ - 1) L_2 C_30 + {(1/2 - θ)^2 + (4θ - 1)/4}(L_3 C_22 + L_4 (C_20 C_02 + 2 C_11^2)) + {(1/64)(4θ - 1)^2 - (1/24)(1/2 - θ)(2θ + 5)}(L_3 C_40 + 3 L_4 C_20^2) + {(1/2)(1/2 - θ)(4θ - 1) - (1/24)(2θ + 5)}(L_3 C_31 + 3 L_4 C_20 C_11)].  (27)

The optimum value of α is obtained by minimizing MSE_2(t_{3S}). Theoretically the determination of this optimum value is very difficult, so we have calculated it by numerical techniques. Similarly, the optimum value of θ which minimizes the MSE of the estimator t_{4S} is obtained by numerical techniques.

6. Empirical Study

For a natural population data set, we calculate the biases and the mean squared errors of the estimators and compare them under the first and second order of approximation.

6.1. Data Set. The data are taken from the 1981 Uttar Pradesh District Census Handbook, Aligarh. The population consists of 340 villages under the Koil police station, with Y being the number of agricultural labourers in 1981 and X being the area of the village (in acres). The following values are obtained:

\bar{Y} = 73.76765, \bar{X} = 249.04, N = 340, n = 70, n' = 120,
C_02 = 0.764, C_11 = 0.2667, C_03 = 2.6942, C_12 = 0.0747, C_21 = 0.589, C_30 = 0.7877, C_13 = 0.32, C_31 = 0.885, C_04 = 7.4275, C_22 = 0.8424, C_40 = 1.305.  (28)

Table 1 exhibits the biases and MSEs of the estimators t_{1S}, t_{2S}, t_{3S}, and t_{4S} under the first and second order of approximation. The estimator t_{2S} is the exponential product estimator and is proposed for the case of negative correlation; therefore its bias and mean squared error are greater than those of the other estimators considered here. For the ratio-type estimators, it is observed that the biases and the mean squared errors increase for the second order. The estimators t_{3S} and t_{4S} have the same mean squared error to the first order, but the second order mean squared error of t_{3S} is less than that of t_{4S}. So, on the basis of the given data set, we conclude that the estimator t_{3S} is best, followed by the estimator t_{4S}, among the estimators considered here.

7. Two-Phase Sampling

When the population mean of the auxiliary character is not known in advance, we resort to two-phase (double) sampling. Two-phase sampling can be a powerful and cost-effective (economical) procedure: a large first-phase sample supplies a reliable estimate of the unknown parameters of the auxiliary character x, and the technique therefore plays an eminent role in survey sampling; see, for instance, Hidiroglou and Sarndal [13].
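The closed-form first order optimum α_o = 2C_11/C_20 of Section 4, and the kind of one-dimensional numerical search that Section 5 invokes for the second order case, can both be sketched as follows. Everything here is hypothetical illustration: the miniature population, the subsample size, and the use of the first order MSE as the objective (for the second order case one would substitute the full second order MSE expression):

```python
def c_pq(xs, ys, p, q):
    """Relative central moment C_pq = mu_pq / (Xbar^p * Ybar^q), mu_pq with divisor N-1."""
    N = len(xs)
    xb = sum(xs) / N
    yb = sum(ys) / N
    mu = sum((x - xb) ** p * (y - yb) ** q for x, y in zip(xs, ys)) / (N - 1)
    return mu / (xb ** p * yb ** q)

def mse1_t3(alpha, L1, C02, C20, C11):
    """First order MSE of t_3S (eq. (14)) divided by Ybar^2."""
    return L1 * (C02 + (alpha ** 2 / 4.0) * C20 - alpha * C11)

def grid_search_min(f, lo, hi, steps=2001):
    """Crude 1-D grid search, standing in for the paper's 'numerical techniques'."""
    best_x, best_v = lo, f(lo)
    for i in range(1, steps):
        x = lo + (hi - lo) * i / (steps - 1)
        v = f(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

# Hypothetical miniature population (N = 6) and subsample size n = 3.
X = [110.0, 140.0, 100.0, 130.0, 120.0, 150.0]
Y = [12.0, 15.0, 11.0, 14.5, 13.0, 16.0]
N, n = len(X), 3
L1 = (N - n) / ((N - 1) * n)
C20, C02, C11 = c_pq(X, Y, 2, 0), c_pq(X, Y, 0, 2), c_pq(X, Y, 1, 1)

alpha_opt = 2 * C11 / C20  # closed-form minimiser of (14)
alpha_num = grid_search_min(lambda a: mse1_t3(a, L1, C02, C20, C11), -3.0, 3.0)
```

Because (14) is quadratic in α, the grid search recovers the closed-form optimum up to the grid spacing; in practice a library routine such as SciPy's `minimize_scalar` would be the natural replacement for the hand-rolled search.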
Table 1: Bias and MSE of estimators.

Estimator   Bias (first order)   Bias (second order)   MSE (first order)   MSE (second order)
t_{1S}      0.06298              0.062546              39.2307             39.34897
t_{2S}      0.053623             0.052483              72.256693           73.329202
t_{3S}      0.057063             0.056678              39.28063            39.3656
t_{4S}      0.06253              0.062043              39.28073            39.334929

Considering an SRSWOR design in each phase, the two-phase sampling scheme is as follows:

(i) the first-phase sample s_{n'} (s_{n'} ⊂ U) of a fixed size n' is drawn to measure only x, in order to formulate a good estimate of the population mean \bar{X};
(ii) given s_{n'}, the second-phase sample s_n (s_n ⊂ s_{n'}) of a fixed size n is drawn to measure y only.

Let \bar{x} = (1/n) Σ_{i∈s_n} x_i, \bar{y} = (1/n) Σ_{i∈s_n} y_i, and \bar{x}' = (1/n') Σ_{i∈s_{n'}} x_i.

The estimators considered in Section 2 can be defined in two-phase sampling as follows. The exponential ratio estimator becomes

t_{1d} = \bar{y} exp[(\bar{x}' - \bar{x})/(\bar{x}' + \bar{x})].  (29)

The classical exponential product type estimator in two-phase sampling is given by

t_{2d} = \bar{y} exp[(\bar{x} - \bar{x}')/(\bar{x}' + \bar{x})].  (30)

The estimator t_{3S} in two-phase sampling is defined as

t_{3d} = \bar{y} exp[α_d(\bar{x}' - \bar{x})/(\bar{x}' + \bar{x})],  (31)

where α_d is a constant suitably chosen by minimizing the MSE of t_{3d}. For α_d = 1, t_{3d} is the same as the conventional exponential ratio estimator, whereas for α_d = -1 it becomes the conventional exponential product type estimator.

The estimator t_{4S} in two-phase sampling is defined as

t_{4d} = \bar{y}[θ_d exp((\bar{x}' - \bar{x})/(\bar{x}' + \bar{x})) + (1 - θ_d) exp((\bar{x} - \bar{x}')/(\bar{x}' + \bar{x}))],  (32)

where θ_d is a constant suitably chosen by minimizing the mean squared error of the estimator t_{4d}.

8. Notations under Two-Phase Sampling

The notations defined in Section 3 can be written in two-phase sampling for SRSWOR as follows. Let e_1' = (\bar{x}' - \bar{X})/\bar{X}, and let L_1', L_2', L_3', and L_4' denote L_1, L_2, L_3, and L_4 with n replaced by n'.

Lemma 4. Consider
(i) V(e_1') = E(e_1'^2) = ((N - n')/((N - 1)n')) C_20 = L_1' C_20,
(ii) Cov(e_0, e_1') = E(e_0 e_1') = ((N - n')/((N - 1)n')) C_11 = L_1' C_11,
(iii) Cov(e_1, e_1') = E(e_1 e_1') = ((N - n')/((N - 1)n')) C_20 = L_1' C_20.  (33)

Lemma 5. Consider
(i) E(e_1'^2 e_0) = ((N - n')(N - 2n')/((N - 1)(N - 2)n'^2)) C_21 = L_2' C_21,
(ii) E(e_1'^3) = ((N - n')(N - 2n')/((N - 1)(N - 2)n'^2)) C_30 = L_2' C_30.  (34)

Lemma 6. Consider
(i) E(e_1'^3 e_0) = L_3' C_31 + 3 L_4' C_20 C_11,
(ii) E(e_1'^4) = L_3' C_40 + 3 L_4' C_20^2,
(iii) E(e_1'^2 e_0^2) = L_3' C_22 + L_4' (C_20 C_02 + 2 C_11^2),  (35)

where

L_3' = (N - n')(N^2 + N - 6n'N + 6n'^2)/((N - 1)(N - 2)(N - 3)n'^3),
L_4' = (N - n')(N - n' - 1)(n' - 1)/((N - 1)(N - 2)(N - 3)n'^3).  (36)

The proofs of these lemmas are straightforward under SRSWOR (see P. V. Sukhatme and B. V. Sukhatme [12]).

9. First Order Biases and Mean Squared Errors in Two-Phase Sampling

The biases and MSEs of the estimators t_{1d}, t_{2d}, and t_{3d} in two-phase sampling, respectively, are
Bias(t_{1d}) = \bar{Y}[(3/8)(L_1 - L_1') C_20 - (1/2)(L_1 - L_1') C_11],  (37)

MSE(t_{1d}) = \bar{Y}^2[L_1 C_02 + (1/4)(L_1 - L_1') C_20 - (L_1 - L_1') C_11],  (38)

Bias(t_{2d}) = \bar{Y}[(1/2)(L_1 - L_1') C_11 - (1/8)(L_1 - L_1') C_20],  (39)

MSE(t_{2d}) = \bar{Y}^2[L_1 C_02 + (1/4)(L_1 - L_1') C_20 + (L_1 - L_1') C_11],  (40)

Bias(t_{3d}) = \bar{Y}[(α_d/4 + α_d^2/8)(L_1 - L_1') C_20 - (α_d/2)(L_1 - L_1') C_11],  (41)

MSE(t_{3d}) = \bar{Y}^2[L_1 C_02 + (α_d^2/4)(L_1 - L_1') C_20 - α_d(L_1 - L_1') C_11].  (42)

By minimizing MSE(t_{3d}), the optimum value of α_d is obtained as α_do = 2C_11/C_20. Substituting this optimum value of α_d in (41) and (42), we get the minimum values of the bias and MSE of the estimator t_{3d}.

The expressions for the bias and MSE of t_{4d} to the first order of approximation are given below:

Bias(t_{4d}) = \bar{Y}[(1/2)(θ_d - 1/4)(L_1 - L_1') C_20 + (1/2 - θ_d)(L_1 - L_1') C_11],

MSE(t_{4d}) = \bar{Y}^2[L_1 C_02 + (1/2 - θ_d)^2 (L_1 - L_1') C_20 + 2(1/2 - θ_d)(L_1 - L_1') C_11].  (43)

By minimizing MSE(t_{4d}), the optimum value of θ_d is obtained as θ_do = (C_11/C_20) + (1/2). Substituting this optimum value of θ_d in (43), we get the minimum values of the bias and MSE of the estimator t_{4d}. As in the one-phase case, the study should be extended to the second order of approximation.

10. Second Order Biases and Mean Squared Errors in Two-Phase Sampling

Expressing the estimators t_{id} (i = 1, 2, 3, 4) in terms of the e's, we get

t_{1d} = \bar{Y}(1 + e_0) exp[(e_1' - e_1)/(2 + e_1 + e_1')]  (44)

or

t_{1d} - \bar{Y} = \bar{Y}{e_0 + (1/2)(e_1' - e_1) + (1/2)e_0(e_1' - e_1) + (3/8)(e_1^2 - e_1'^2) + (3/8)e_0(e_1^2 - e_1'^2) - (7/48)(e_1^3 - e_1'^3) - (7/48)e_0(e_1^3 - e_1'^3) + (25/384)(e_1^4 - e_1'^4)}.  (45)

The bias of the estimator t_{1d} to the second order of approximation is

Bias_2(t_{1d}) = (\bar{Y}/2)[(3/4)(L_1 - L_1') C_20 - (L_1 - L_1') C_11 - (7/24)(L_2 - L_2') C_30 + (3/4)(L_2 - L_2') C_21 + (25/192)((L_3 - L_3') C_40 + 3(L_4 - L_4') C_20^2) - (7/24)((L_3 - L_3') C_31 + 3(L_4 - L_4') C_20 C_11)].  (46)

Using (45), we get the MSE of t_{1d} up to the second order of approximation as

MSE_2(t_{1d}) = E[\bar{Y}{e_0 + (1/2)(e_1' - e_1) + (1/2)e_0(e_1' - e_1) + (3/8)(e_1^2 - e_1'^2) + (3/8)e_0(e_1^2 - e_1'^2) - (7/48)(e_1^3 - e_1'^3)}]^2  (47)

or

MSE_2(t_{1d}) = \bar{Y}^2[L_1 C_02 + (1/4)(L_1 - L_1') C_20 - (L_1 - L_1') C_11 - (L_2 - L_2') C_12 + (5/4)(L_2 - L_2') C_21 - (3/8)(L_2 - L_2') C_30 + (55/192)((L_3 - L_3') C_40 + 3(L_4 - L_4') C_20^2) - (25/24)((L_3 - L_3') C_31 + 3(L_4 - L_4') C_20 C_11)].  (48)
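The two-phase estimator t_{1d} of (29) simply replaces the unknown population mean \bar{X} with the first-phase sample mean \bar{x}'. A minimal computational sketch (all data hypothetical; note that, as the scheme in Section 7 requires, the second-phase sample is a subsample of the first-phase sample, and y is observed only in the second phase):

```python
import math

def exp_ratio_two_phase(y_second, x_second, x_first):
    """t_1d = ybar * exp((xbar' - xbar) / (xbar' + xbar)), where xbar' comes from the
    larger first-phase sample and (ybar, xbar) from the second-phase subsample."""
    yb = sum(y_second) / len(y_second)
    xb = sum(x_second) / len(x_second)
    xb1 = sum(x_first) / len(x_first)
    return yb * math.exp((xb1 - xb) / (xb1 + xb))

# Hypothetical first-phase sample (x only) and second-phase subsample (x and y).
x_first = [110.0, 140.0, 100.0, 130.0, 120.0, 150.0, 90.0, 160.0]
x_second = [110.0, 140.0, 100.0, 130.0]
y_second = [12.0, 15.0, 11.0, 14.0]
t1d = exp_ratio_two_phase(y_second, x_second, x_first)
```

Compared with the one-phase estimator, the only extra cost is measuring x on the larger first-phase sample, which is what makes the procedure economical when x is cheap to observe and y is expensive.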
The biases and MSEs of the estimators t_{2d}, t_{3d}, and t_{4d} to the second order of approximation are, respectively, as follows:

Bias_2(t_{2d}) = (\bar{Y}/2)[-(1/4)(L_1 - L_1') C_20 + (L_1 - L_1') C_11 - (1/8)(L_2 - L_2') C_21 - (5/24)(L_2 - L_2') C_30 - (5/24)((L_3 - L_3') C_31 + 3(L_4 - L_4') C_20 C_11) + (1/192)((L_3 - L_3') C_40 + 3(L_4 - L_4') C_20^2)],  (49)

MSE_2(t_{2d}) = \bar{Y}^2[L_1 C_02 + (1/4)(L_1 - L_1') C_20 + (L_1 - L_1') C_11 + (L_2 - L_2') C_12 + (1/4)(L_2 - L_2') C_21 - (1/8)(L_2 - L_2') C_30 - (17/192)((L_3 - L_3') C_40 + 3(L_4 - L_4') C_20^2) - (11/24)((L_3 - L_3') C_31 + 3(L_4 - L_4') C_20 C_11)],  (50)

Bias_2(t_{3d}) = \bar{Y}[(α_d^2/8 + α_d/4)(L_1 - L_1') C_20 - (α_d/2)(L_1 - L_1') C_11 + (α_d^2/8 + α_d/4)(L_2 - L_2') C_21 - (α_d^2/8 + α_d^3/48)(L_2 - L_2') C_30 - (α_d^2/8 + α_d^3/48)((L_3 - L_3') C_31 + 3(L_4 - L_4') C_20 C_11) + (α_d^2/32 + α_d^3/32 + α_d^4/384)((L_3 - L_3') C_40 + 3(L_4 - L_4') C_20^2)],  (51)

MSE_2(t_{3d}) = \bar{Y}^2[L_1 C_02 + (α_d^2/4)(L_1 - L_1') C_20 - α_d(L_1 - L_1') C_11 - α_d(L_2 - L_2') C_12 + {-(α_d^2/2) + 2(α_d^2/8 + α_d/4)}(L_2 - L_2') C_21 - α_d(α_d^2/8 + α_d/4)(L_2 - L_2') C_30 + {-(α_d^2/4) + 2(α_d/2 + α_d^2/2)}(L_3 C_22 + L_4 (C_20 C_02 + 2 C_11^2)) - {2(α_d/4 + α_d^2/8) + 2(α_d^2/8 + α_d^3/48)}((L_3 - L_3') C_31 + 3(L_4 - L_4') C_20 C_11) + {α_d(α_d^2/8 + α_d^3/48) + (α_d/4 + α_d^2/8)^2}((L_3 - L_3') C_40 + 3(L_4 - L_4') C_20^2)],  (52)

Bias_2(t_{4d}) = \bar{Y}[(1/2)(θ_d - 1/4)(L_1 - L_1') C_20 + (1/2 - θ_d)(L_1 - L_1') C_11 + (1/2)(θ_d - 1/4)(L_2 - L_2') C_21 - (1/24)(θ_d + 5/2)(L_2 - L_2') C_30 - (1/24)(θ_d + 5/2)((L_3 - L_3') C_31 + 3(L_4 - L_4') C_20 C_11) + (1/16)(θ_d + 1/24)((L_3 - L_3') C_40 + 3(L_4 - L_4') C_20^2)],  (53)

MSE_2(t_{4d}) = \bar{Y}^2[L_1 C_02 + (θ_d - 1/2)^2 (L_1 - L_1') C_20 + 2(1/2 - θ_d)(L_1 - L_1') C_11 + 2(1/2 - θ_d)(L_2 - L_2') C_12 + {2(θ_d - 1/2)^2 + (θ_d - 1/4)}(L_2 - L_2') C_21 + (1/2 - θ_d)(θ_d - 1/4)(L_2 - L_2') C_30 + {(θ_d - 1/2)^2 + (θ_d - 1/4)}(L_3 C_22 + L_4 (C_20 C_02 + 2 C_11^2)) + {-(1/24)(2θ_d + 5) + 2(1/2 - θ_d)(θ_d - 1/4)}((L_3 - L_3') C_31 + 3(L_4 - L_4') C_20 C_11) + {(1/4)(θ_d - 1/4)^2 - (1/24)(1/2 - θ_d)(2θ_d + 5)}((L_3 - L_3') C_40 + 3(L_4 - L_4') C_20^2)].  (54)

We can obtain the optimum values of α_d and θ_d by minimizing MSE_2(t_{3d}) and MSE_2(t_{4d}), respectively. Theoretically the determination of the optimum values of α_d and θ_d is very difficult; therefore we have calculated them by numerical techniques.

11. Empirical Study

For the natural population data set considered in Section 6, we calculate the biases and the mean squared errors of the estimators and compare them under the first and second order of approximation.

Table 2 exhibits the biases and MSEs of the estimators t_{1d}, t_{2d}, t_{3d}, and t_{4d} under the first and second order of approximation for two-phase sampling. The estimator t_{2d} is the exponential product estimator and is considered in the case of negative correlation, so its bias and mean squared error are greater than those of the other estimators considered here. For the classical exponential ratio estimator in two-phase sampling, it is observed that the biases and the mean squared errors increase for the second order. The estimators t_{3d} and t_{4d} have the same mean squared error to the first order, but the second order mean squared error of t_{3d} is less than that of t_{4d}. So, on the basis of the given data set, we conclude that the estimator t_{3d} is best, followed by the estimator t_{4d}, in two-phase sampling, among the estimators considered here.

12. Conclusion

In this paper we have studied the Bahl and Tuteja [14] exponential ratio and exponential product type estimators and the Singh et al. [11] estimators under the first order and second

Table 2: Bias and MSE of estimators under two-phase sampling.
Estimator   Bias (first order)   Bias (second order)   MSE (first order)   MSE (second order)
t_{1d}      0.07544899           0.075324              58.5067369          58.90969
t_{2d}      0.04975632           0.047459              22.933664           222.5025
t_{3d}      0.0693               0.062035              58.320              58.3868
t_{4d}      0.0743               0.073954              58.320              58.99492

order of approximation in simple random sampling and two-phase sampling. It is observed that up to the first order of approximation both estimators are equally efficient in the sense of mean squared error, but when we consider the second order of approximation the estimator t_{3S} (t_{3d} in two-phase sampling) is best, followed by the estimator t_{4S} (t_{4d} in two-phase sampling). The theoretical results are also supported through a natural population data set.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors wish to thank the editor Chin-Shang Li and two anonymous referees for their helpful comments that aided in improving this paper.

References

[1] H. P. Singh and R. Tailor, "Use of known correlation coefficient in estimating the finite population mean," Statistics in Transition, vol. 6, pp. 555-560, 2003.
[2] C. Kadilar and H. Cingi, "Improvement in estimating the population mean in simple random sampling," Applied Mathematics Letters, vol. 19, no. 1, pp. 75-79, 2006.
[3] R. Singh, P. Chauhan, N. Sawan, and F. Smarandache, Auxiliary Information and a Priori Values in Construction of Improved Estimators, Renaissance High Press, 2007.
[4] R. Singh and M. Kumar, "A note on transformations on auxiliary variable in survey sampling," Model Assisted Statistics and Applications, vol. 6, no. 1, pp. 17-19, 2011.
[5] M. I. Hossain, M. I. Rahman, and M. Tareq, "Second order biases and mean squared errors of some estimators using auxiliary variable," Social Science Research Network, 2006.
[6] P. Sharma and R. Singh, "Improved estimators for simple random sampling and stratified random sampling under second order of approximation," Statistics in Transition, vol. 14, no. 3, pp. 379-390, 2013.
[7] P. Sharma and R. Singh, "Improved ratio type estimators using two auxiliary variables under second order approximation," Mathematical Journal of Interdisciplinary Sciences, vol. 2, no. 2, pp. 79-90, 2014.
[8] P. Sharma, H. K. Verma, A. Sanaullah, and R. Singh, "Some exponential ratio-product type estimators using information on auxiliary attributes under second order approximation," International Journal of Statistics and Economics, vol. 12, no. 3, pp. 58-66, 2013.
[9] P. Sharma, R. Singh, and J. Min-Kim, "Study of some improved ratio type estimators using information on auxiliary attributes under second order approximation," Journal of Scientific Research, vol. 57, pp. 38-46, 2013.
[10] S. K. Srivastava, "An estimator using auxiliary information in sample surveys," Calcutta Statistical Association Bulletin, vol. 5, pp. 27-34, 1967.
[11] R. Singh, P. Chauhan, and N. Sawan, "On linear combination of ratio and product type exponential estimator for estimating finite population mean," Statistics in Transition, vol. 9, no. 1, pp. 105-115, 2008.
[12] P. V. Sukhatme and B. V. Sukhatme, Sampling Theory of Surveys with Applications, Iowa State University Press, Ames, Iowa, USA, 1970.
[13] M. A. Hidiroglou and C. E. Sarndal, "Use of auxiliary information for two-phase sampling," Survey Methodology, vol. 24, no. 1, pp. 11-20, 1998.
[14] S. Bahl and R. K. Tuteja, "Ratio and product type exponential estimators," Journal of Information and Optimization Sciences, vol. 12, no. 1, pp. 159-164, 1991.