
Chapter 10 Variance Estimation

10.1 Introduction

Variance estimation is an important practical problem in survey sampling. Variance estimates are used for two purposes. One is the analytic purpose, such as constructing confidence intervals or performing hypothesis tests. The other is the descriptive purpose of evaluating the efficiency of survey designs or estimators when planning surveys. A desirable variance estimator should satisfy the following properties:

- It should be unbiased, or approximately unbiased.
- The variance of the variance estimator should be small. That is, the variance estimator is stable.
- It should not take negative values.
- The computation should be simple.

The HT variance estimator is unbiased, but it can take negative values. Also, computing the joint inclusion probabilities $\pi_{ij}$ can be cumbersome when the sample size is large.

Example 10.1. Consider a finite population of size $N = 3$ with $y_1 = 16$, $y_2 = 21$, and $y_3 = 18$, and consider the following sampling design.

Table 10.1: A sampling design

Sample $A$          $\Pr(A)$   HT estimator   HT variance estimator
$A_1 = \{1,2\}$     0.4        50             206
$A_2 = \{1,3\}$     0.3        50             200
$A_3 = \{2,3\}$     0.2        60             -90
$A_4 = \{1,2,3\}$   0.1        80             -394

The sampling variance of the HT estimator is 85. Note that the HT variance estimator has expectation $206 \times 0.4 + 200 \times 0.3 + (-90) \times 0.2 + (-394) \times 0.1 = 85$, but it can take negative values in some samples.

The variance estimator under PPS (with-replacement) sampling is always nonnegative and can be computed without computing the joint inclusion probabilities. In practice, the PPS sampling variance estimator is often applied as an alternative variance estimator even for without-replacement sampling. The resulting variance estimator can be written

$$\hat V_0 = \frac{1}{n}\frac{1}{n-1}\sum_{i \in A}\left(\frac{y_i}{p_i} - \hat Y_{HT}\right)^2 = \frac{n}{n-1}\sum_{i \in A}\left(\frac{y_i}{\pi_i} - \frac{1}{n}\hat Y_{HT}\right)^2, \qquad (10.1)$$

where $p_i = \pi_i/n$. The following theorem expresses the bias of the simplified variance estimator in (10.1) as an estimator of the variance of the HT estimator.

Theorem 10.1. The simplified variance estimator in (10.1) satisfies

$$E\left(\hat V_0\right) - \operatorname{Var}\left(\hat Y_{HT}\right) = \frac{n}{n-1}\left\{\operatorname{Var}\left(\hat Y_{PPS}\right) - \operatorname{Var}\left(\hat Y_{HT}\right)\right\}, \qquad (10.2)$$

where

$$\operatorname{Var}\left(\hat Y_{PPS}\right) = \frac{1}{n}\sum_{i=1}^{N} p_i \left(\frac{y_i}{p_i} - Y\right)^2$$
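The design in Example 10.1 can be checked numerically. The sketch below reconstructs the first- and second-order inclusion probabilities implied by Table 10.1, computes the HT estimator and the HT variance estimator for each sample, and confirms that the variance estimator is unbiased (expectation 85) yet negative for $A_3$ and $A_4$:

```python
# Verify Example 10.1: the HT variance estimator is unbiased but can be negative.
y = {1: 16, 2: 21, 3: 18}
design = {(1, 2): 0.4, (1, 3): 0.3, (2, 3): 0.2, (1, 2, 3): 0.1}

# Inclusion probabilities pi_i and pi_ij implied by the design
pi = {i: sum(p for s, p in design.items() if i in s) for i in y}
pi2 = {(i, j): sum(p for s, p in design.items() if i in s and j in s)
       for i in y for j in y}

def ht(sample):
    """Horvitz-Thompson estimator of the population total."""
    return sum(y[i] / pi[i] for i in sample)

def ht_var_est(sample):
    """HT variance estimator, which requires the joint probabilities pi_ij."""
    return sum((pi2[i, j] - pi[i] * pi[j]) / pi2[i, j]
               * (y[i] / pi[i]) * (y[j] / pi[j])
               for i in sample for j in sample)

Y = sum(y.values())                                            # 55
var_ht = sum(p * (ht(s) - Y) ** 2 for s, p in design.items())  # 85
e_vhat = sum(p * ht_var_est(s) for s, p in design.items())     # also 85
```

Running this reproduces every entry in the last two columns of Table 10.1, including the negative values for $A_3$ and $A_4$.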

and $p_i = \pi_i/n$.

Proof. Note that $\hat V_0$ satisfies

$$n(n-1)\,E\left(\hat V_0\right) = E\left\{\sum_{k \in A}\left(\frac{y_k}{p_k} - Y + Y - \hat Y_{HT}\right)^2\right\} = E\left\{\sum_{k \in A}\left(\frac{y_k}{p_k} - Y\right)^2\right\} + 2E\left\{\sum_{k \in A}\left(\frac{y_k}{p_k} - Y\right)\left(Y - \hat Y_{HT}\right)\right\} + E\left\{\sum_{k \in A}\left(Y - \hat Y_{HT}\right)^2\right\}.$$

The first term is

$$E\left\{\sum_{k \in A}\left(\frac{y_k}{p_k} - Y\right)^2\right\} = \sum_{k=1}^{N}\pi_k\left(\frac{y_k}{p_k} - Y\right)^2 = n^2 \operatorname{Var}\left(\hat Y_{PPS}\right)$$

and, using

$$\sum_{k \in A}\left(\frac{y_k}{p_k} - Y\right) = \sum_{k \in A}\frac{n\,y_k}{\pi_k} - nY = n\left(\hat Y_{HT} - Y\right),$$

the second term equals $-2n\operatorname{Var}(\hat Y_{HT})$ and the last term is equal to $n\operatorname{Var}(\hat Y_{HT})$, which proves the result.

In many cases, the bias term in (10.2) is positive, and the simplified variance estimator estimates the variance conservatively. Under SRS, the relative bias of the simplified variance estimator (10.1) is

$$\frac{E\left(\hat V_0\right) - \operatorname{Var}\left(\hat Y_{HT}\right)}{\operatorname{Var}\left(\hat Y_{HT}\right)} = \frac{n}{N-n} \qquad (10.3)$$

and the relative bias is negligible when $n/N$ is negligible.

The simplified variance estimator is directly applicable to multistage sampling designs. Under a multistage sampling design, the HT estimator of the total can be written $\hat Y_{HT} = \sum_{i \in A_I} \hat Y_i / \pi_{Ii}$, where $\hat Y_i$ is the estimated total for PSU $i$. The simplified variance estimator is then given by

$$\hat V_0 = \frac{n}{n-1}\sum_{i \in A_I}\left(\frac{\hat Y_i}{\pi_{Ii}} - \frac{1}{n}\hat Y_{HT}\right)^2.$$

Under stratified multistage cluster sampling, the simplified variance estimator can be written

$$\hat V_0 = \sum_{h=1}^{H}\frac{n_h}{n_h-1}\sum_{i=1}^{n_h}\left(w_{hi}\hat Y_{hi} - \frac{1}{n_h}\sum_{j=1}^{n_h} w_{hj}\hat Y_{hj}\right)^2 \qquad (10.4)$$

where $w_{hi}$ is the sampling weight for cluster $i$ in stratum $h$.

10.2 Linearization approach to variance estimation

When the point estimator is a nonlinear estimator, such as a ratio estimator or a regression estimator, exact variance estimation is very difficult. Instead, we often rely on linearization methods for estimating the sampling variance of such estimators. Roughly speaking, if $y$ is a $p$-dimensional vector and $\bar y = \bar Y_N + O_p(n^{-1/2})$ holds, the Taylor linearization of $g(\bar y)$ can be written as

$$g(\bar y) = g(\bar Y) + \sum_{j=1}^{p}\frac{\partial g(\bar Y)}{\partial y_j}\left(\bar y_j - \bar Y_j\right) + O_p(n^{-1}).$$

Thus, the variance of the linearized term of $g(\bar y)$ can be written

$$V\{g(\bar y)\} \doteq \sum_{i=1}^{p}\sum_{j=1}^{p}\frac{\partial g(\bar Y)}{\partial y_i}\frac{\partial g(\bar Y)}{\partial y_j}\operatorname{Cov}\left(\bar y_i, \bar y_j\right)$$

and it is estimated by

$$\hat V\{g(\bar y)\} \doteq \sum_{i=1}^{p}\sum_{j=1}^{p}\frac{\partial g(\bar y)}{\partial y_i}\frac{\partial g(\bar y)}{\partial y_j}\hat C\left(\bar y_i, \bar y_j\right). \qquad (10.5)$$

In summary, the linearization method for variance estimation can be described as follows:

1. Apply Taylor linearization and ignore the remainder terms.
2. Compute the variance and covariance terms of each component of $\bar y$ using the standard variance estimation formula.
3. Estimate the partial derivative terms $\partial g / \partial y$ from the sample observations.

Example 10.2. If the parameter of interest is $R = \bar Y / \bar X$ and we use $\hat R = \bar Y_{HT} / \bar X_{HT}$ to estimate $R$, we can apply Taylor linearization to get

$$\hat R = R + \bar X^{-1}\left(\bar Y_{HT} - R\,\bar X_{HT}\right) + O_p(n^{-1})$$

and the variance estimation formula in (10.5) reduces to

$$\hat V\left(\hat R\right) \doteq \bar X_{HT}^{-2}\,\hat V\left(\bar Y_{HT}\right) + \bar X_{HT}^{-2}\hat R^2\,\hat V\left(\bar X_{HT}\right) - 2\bar X_{HT}^{-2}\hat R\,\hat C\left(\bar X_{HT}, \bar Y_{HT}\right). \qquad (10.6)$$

If the variances and covariances of $\bar X_{HT}$ and $\bar Y_{HT}$ are estimated by the HT variance estimation formula, (10.6) can be estimated by

$$\hat V\left(\hat R\right) \doteq \frac{1}{\hat X_{HT}^2}\sum_{i \in A}\sum_{j \in A}\frac{\pi_{ij} - \pi_i \pi_j}{\pi_{ij}}\frac{e_i}{\pi_i}\frac{e_j}{\pi_j},$$

where $e_i = y_i - \hat R x_i$.

Using the result in Example 10.2, the variance of the ratio estimator $\hat Y_r = X \hat R$ is estimated by

$$\hat V\left(\hat Y_r\right) \doteq \left(\frac{X}{\hat X_{HT}}\right)^2\sum_{i \in A}\sum_{j \in A}\frac{\pi_{ij} - \pi_i \pi_j}{\pi_{ij}}\frac{e_i}{\pi_i}\frac{e_j}{\pi_j}, \qquad (10.7)$$

which is obtained by multiplying the variance formula in (8.13) by $\hat X_{HT}^{-2} X^2$. Recall that the linearized variance estimator of the ratio estimator is given by (8.13). The variance estimator in (10.7) is asymptotically equivalent to the linearization variance estimator in (8.13), but it gives a more adequate measure of the conditional variance of the ratio estimator, as advocated by Royall and Cumberland (1981). More generally, Särndal, Swensson, and Wretman (1989) proposed using

$$\hat V\left(\hat Y_{GREG}\right) = \sum_{i \in A}\sum_{j \in A}\frac{\pi_{ij} - \pi_i \pi_j}{\pi_{ij}}\frac{g_i e_i}{\pi_i}\frac{g_j e_j}{\pi_j} \qquad (10.8)$$

as the conditional variance estimator of the GREG estimator of the form $\hat Y_{GREG} = \sum_{i \in A}\pi_i^{-1} g_i y_i$, where

$$g_i = X'\left(\sum_{i \in A}\frac{1}{\pi_i}\frac{1}{c_i}x_i x_i'\right)^{-1}\frac{x_i}{c_i} \qquad (10.9)$$
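Because $e_i = y_i - \hat R x_i$ is linear in $(x_i, y_i)$, the residual form of the ratio variance estimator above is algebraically identical to the expanded form (10.6). A minimal sketch, using hypothetical data and SRS inclusion probabilities, checks this identity:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 40, 8                       # hypothetical population and sample sizes
x = rng.uniform(1, 3, size=n)
y = 2 * x + rng.normal(0, 0.3, size=n)

# SRS inclusion probabilities: pi_i = n/N, pi_ij = n(n-1)/(N(N-1)) for i != j
pi1 = np.full(n, n / N)
pi2 = np.full((n, n), n * (n - 1) / (N * (N - 1)))
np.fill_diagonal(pi2, n / N)

def ht_cov(a, b):
    """HT covariance estimator for totals of a and b (double-sum form)."""
    outer = np.outer(a / pi1, b / pi1)
    return ((pi2 - np.outer(pi1, pi1)) / pi2 * outer).sum()

Xht, Yht = (x / pi1).sum(), (y / pi1).sum()
R = Yht / Xht
e = y - R * x

# Residual form vs expanded form of (10.6): identical because e is linear
v_resid = ht_cov(e, e) / Xht**2
v_expand = (ht_cov(y, y) + R**2 * ht_cov(x, x) - 2 * R * ht_cov(x, y)) / Xht**2
```

The two quantities agree to machine precision, since `ht_cov` is bilinear in its arguments.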

and

$$e_i = y_i - x_i'\left(\sum_{i \in A}\frac{1}{\pi_i}\frac{1}{c_i}x_i x_i'\right)^{-1}\sum_{i \in A}\frac{1}{\pi_i}\frac{1}{c_i}x_i y_i.$$

The $g_i$ in (10.9) is the factor applied to the design weight so that the calibration constraint is satisfied.

Example 10.3. We now consider variance estimation for the poststratification estimator in (8.28). To estimate the variance, the unconditional variance estimator is given by

$$\hat V_u = N^2\frac{1}{n}\left(1 - \frac{n}{N}\right)\sum_{g=1}^{G}\frac{n_g}{n}s_g^2 \qquad (10.10)$$

where $s_g^2 = (n_g - 1)^{-1}\sum_{i \in A_g}\left(y_i - \bar y_g\right)^2$. On the other hand, the conditional variance estimator in (10.8) is given by

$$\hat V_c = \left(1 - \frac{n}{N}\right)\sum_{g=1}^{G}\frac{N_g^2}{n_g}\frac{n_g}{n_g - 1}s_g^2. \qquad (10.11)$$

Note that the conditional variance formula in (10.11) is very similar to the variance estimation formula under stratified sampling.

10.3 Replication approach to variance estimation

We now consider an alternative approach to variance estimation that is based on several replicates of the original sample estimator. The replication approach does not use Taylor linearization; instead, it generates several resamples and computes a replicate from each resample. The variability among the replicates is used to estimate the sampling variance of the point estimator. The replication approach often requires repeated computation of the replicates by computer. Replication methods include the random group method, the jackknife, balanced repeated replication, and the bootstrap. More details on the replication methods can be found in Wolter (2007).

10.3.1 Random group method

The random group method was first used in jute acreage surveys in Bengal (Mahalanobis, 1939). The random group method is useful for understanding the basic idea of the replication approach to variance estimation. In the random group method, $G$ random groups are constructed from the sample, the point estimate is computed from each random group sample, and these are then combined to get the final point estimate. Once the final point estimate is constructed, its variance estimate is computed using the variability of the $G$ random group estimates. There are two versions of the random group method: the independent random group method and the non-independent random group method. We first consider the independent random group method, which can be described as follows.

[Step 1] Using the given sampling design, select the first random group sample, denoted by $A_1$, and compute $\hat\theta_1$, an unbiased estimator of $\theta$, from the first random group sample.

[Step 2] Using the same sampling design, select the second random group sample independently from the first random group sample, and compute $\hat\theta_2$ from the second random group sample.

[Step 3] Repeating the same procedure, obtain $G$ independent estimates $\hat\theta_1, \ldots, \hat\theta_G$ from the $G$ random group samples.

[Step 4] The final estimator of $\theta$ is

$$\hat\theta_{RG} = \frac{1}{G}\sum_{k=1}^{G}\hat\theta_k \qquad (10.12)$$

and its unbiased variance estimator is

$$\hat V\left(\hat\theta_{RG}\right) = \frac{1}{G}\frac{1}{G-1}\sum_{k=1}^{G}\left(\hat\theta_k - \hat\theta_{RG}\right)^2. \qquad (10.13)$$
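The steps above can be sketched directly. In this hypothetical illustration, each random group sample is a with-replacement SRS of size $m$ from a synthetic population and $\theta$ is the population mean; note that (10.13) is simply the sample variance of the $G$ replicates divided by $G$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: G independent random group samples, each an SRS with
# replacement of size m from the same population; theta is the population mean.
pop = rng.gamma(2.0, 1.0, size=10_000)
G, m = 10, 50

# Steps 1-3: G independent replicates of the estimator
theta_g = np.array([rng.choice(pop, size=m, replace=True).mean()
                    for _ in range(G)])

# Step 4: final estimator (10.12) and its variance estimator (10.13)
theta_rg = theta_g.mean()
v_rg = ((theta_g - theta_rg) ** 2).sum() / (G * (G - 1))
```

Since the replicates are IID here, `v_rg` is an unbiased estimator of the variance of `theta_rg`, exactly as claimed below (10.13).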

Since $\hat\theta_1, \ldots, \hat\theta_G$ are independently and identically distributed, the variance estimator in (10.13) is unbiased for the variance of $\hat\theta_{RG}$ in (10.12). The independent random group method is very easy to understand and is applicable to complicated sampling designs for selecting the $A_g$, but the sample allows for duplication of sample elements, and the variance estimator may be unstable when $G$ is small.

We now consider the non-independent random group method, which does not allow duplication of the sample elements. In the non-independent random group method, the sample is partitioned into $G$ exhaustive and mutually exclusive groups, denoted by $A = \cup_{g=1}^{G} A_g$, and the estimation method of the independent random group method is then applied, treating the non-independent random group samples as if they were independent. The following theorem expresses the bias of the variance estimator in this case.

Theorem 10.2. Let $\hat\theta_g$ be an unbiased estimator of $\theta$, computed from the $g$-th random group sample $A_g$, for $g = 1, \ldots, G$. Then the random group variance estimator (10.13) satisfies

$$E\left\{\hat V\left(\hat\theta_{RG}\right)\right\} - V\left(\hat\theta_{RG}\right) = -\frac{1}{G(G-1)}\sum_{i \neq j}\operatorname{Cov}\left(\hat\theta_i, \hat\theta_j\right). \qquad (10.14)$$

Proof. By (10.12), we have

$$V\left(\hat\theta_{RG}\right) = \frac{1}{G^2}\left\{\sum_{i=1}^{G} V\left(\hat\theta_i\right) + \sum_{i \neq j}\operatorname{Cov}\left(\hat\theta_i, \hat\theta_j\right)\right\} \qquad (10.15)$$

and, since

$$\sum_{i=1}^{G}\left(\hat\theta_i - \hat\theta_{RG}\right)^2 = \sum_{i=1}^{G}\left(\hat\theta_i - \theta\right)^2 - G\left(\hat\theta_{RG} - \theta\right)^2$$

and using $E(\hat\theta_i) = \theta$,

$$E\left\{\sum_{i=1}^{G}\left(\hat\theta_i - \hat\theta_{RG}\right)^2\right\} = \sum_{i=1}^{G} V\left(\hat\theta_i\right) - G\,V\left(\hat\theta_{RG}\right) = \left(1 - \frac{1}{G}\right)\sum_{i=1}^{G} V\left(\hat\theta_i\right) - \frac{1}{G}\sum_{i \neq j}\operatorname{Cov}\left(\hat\theta_i, \hat\theta_j\right)$$

which implies

$$E\left\{\hat V\left(\hat\theta_{RG}\right)\right\} = \frac{1}{G^2}\sum_{i=1}^{G} V\left(\hat\theta_i\right) - \frac{1}{G^2(G-1)}\sum_{i \neq j}\operatorname{Cov}\left(\hat\theta_i, \hat\theta_j\right).$$

Thus, using (10.15), we have (10.14).

The right side of (10.14) is the bias of $\hat V(\hat\theta_{RG})$ as an estimator of the variance of $\hat\theta_{RG}$. The bias becomes zero if the sampling for the random groups is with-replacement sampling. For without-replacement sampling, the covariance between two different replicates is negative, and so the bias term becomes positive. The relative amount of the bias is often negligible. The following example computes the amount of the relative bias.

Example 10.4. Consider a sample of size $n$ obtained by simple random sampling from a finite population of size $N$. Let $b = n/G$ be an integer, the size of each $A_g$, such that $A = \cup_{g=1}^{G} A_g$. The sample mean of $y$ obtained from $A_g$ is denoted by $\bar y_g$, and the overall mean of $y$ is given by $\hat\theta = G^{-1}\sum_{g=1}^{G}\bar y_g$. In this case, $\bar y_1, \ldots, \bar y_G$ are not independently distributed, but they are identically distributed with the same mean. By (10.15), we have

$$\operatorname{Var}\left(\hat\theta\right) = \frac{1}{G} V\left(\bar y_1\right) + \left(1 - \frac{1}{G}\right)\operatorname{Cov}\left(\bar y_1, \bar y_2\right)$$

and since $V(\bar y_1) = b^{-1}\left(1 - b/N\right)S^2$, we have

$$\operatorname{Cov}\left(\bar y_1, \bar y_2\right) = -\frac{1}{N}S^2.$$

Thus, the bias in (10.14) reduces to

$$\operatorname{Bias}\left\{\hat V\left(\hat\theta_{RG}\right)\right\} = -\operatorname{Cov}\left(\bar y_1, \bar y_2\right) = \frac{1}{N}S^2$$
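The covariance in Example 10.4 can be verified exactly by enumeration. Taking group size $b = 1$, the $g$-th group mean is just the $g$-th draw, so $\operatorname{Cov}(\bar y_1, \bar y_2)$ is the covariance of two distinct draws under SRS without replacement (the small population below is hypothetical):

```python
from itertools import permutations
from statistics import mean

# Tiny hypothetical population; with b = 1 each random group is a single draw.
pop = [16, 21, 18, 25, 30]
N = len(pop)
Ybar = mean(pop)
S2 = sum((y - Ybar) ** 2 for y in pop) / (N - 1)

# Exact enumeration over all equally likely ordered (first, second) draws
pairs = list(permutations(pop, 2))
cov = mean((y1 - Ybar) * (y2 - Ybar) for y1, y2 in pairs)

# cov equals -S2 / N exactly, as derived in Example 10.4
```

The enumeration uses the fact that all $N(N-1)$ ordered pairs of distinct units are equally likely under SRS without replacement.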

which is often negligible. Thus, the random group variance estimator (10.13) can safely be used to estimate the variance of $\bar y$ under simple random sampling.

The random group method provides a useful computational tool for estimating the variance of point estimators. However, the random group method is applicable only when the sampling design for the $A_g$ has the same structure as that of the original sample $A$. A partition $A = \cup_{g=1}^{G} A_g$ that leads to unbiasedness of the $\hat\theta_g$ is not always possible for complex sampling designs.

10.4 Jackknife method

The jackknife was first introduced by Quenouille (1949) to reduce the bias of the ratio estimator, and Tukey (1958) then suggested its use for variance estimation. The jackknife is very popular in practice as a tool for variance estimation. To introduce the idea of Quenouille (1949), suppose that $n$ independent observations $(x_i, y_i)$ are available, generated from a distribution with mean $(\mu_x, \mu_y)$. If the parameter of interest is $\theta = \mu_y / \mu_x$, then the sample ratio $\hat\theta = \bar x^{-1}\bar y$ has bias of order $O(n^{-1})$. That is, we have

$$E\left(\hat\theta\right) = \theta + C n^{-1} + O\left(n^{-2}\right).$$

If we delete the $k$-th observation and recompute the ratio

$$\hat\theta^{(k)} = \left(\sum_{i \neq k} x_i\right)^{-1}\sum_{i \neq k} y_i,$$

we obtain $E(\hat\theta^{(k)}) = \theta + C(n-1)^{-1} + O(n^{-2})$. Thus, the jackknife pseudo-value defined by $\hat\theta_k = n\hat\theta - (n-1)\hat\theta^{(k)}$ can be used to compute

$$\hat\theta_{(\cdot)} = \frac{1}{n}\sum_{k=1}^{n}\hat\theta_k \qquad (10.16)$$

which has bias of order $O(n^{-2})$. Thus, the jackknife can be used to reduce the bias of nonlinear estimators.
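A small Monte Carlo sketch, under a hypothetical model with $x \sim \mathrm{Gamma}(2,1)$ and $y = x^2$ (so $\theta = \mu_y/\mu_x = 3$), illustrates Quenouille's bias reduction: the pseudo-value estimator (10.16) has noticeably smaller bias than the raw ratio:

```python
import numpy as np

rng = np.random.default_rng(2024)

# Hypothetical model: x ~ Gamma(2, 1), y = x^2, so theta = E(y)/E(x) = 6/2 = 3.
theta, n, reps = 3.0, 10, 40_000
raw = np.empty(reps)
jack = np.empty(reps)
for r in range(reps):
    x = rng.gamma(2.0, 1.0, size=n)
    y = x ** 2
    th = y.mean() / x.mean()                 # raw ratio, O(1/n) bias
    th_k = (y.sum() - y) / (x.sum() - x)     # delete-one ratios
    pseudo = n * th - (n - 1) * th_k         # jackknife pseudo-values
    raw[r] = th
    jack[r] = pseudo.mean()                  # Quenouille's estimator (10.16)

bias_raw = raw.mean() - theta
bias_jack = jack.mean() - theta              # markedly closer to zero
```

With these settings the simulated bias of the raw ratio is visibly larger in magnitude than that of the pseudo-value estimator, consistent with the $O(n^{-1})$ versus $O(n^{-2})$ orders.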

Note that if $\hat\theta = \bar y$ then $\hat\theta_k = y_k$. Tukey (1958) suggested using the $\hat\theta_k$ as approximately IID observations to get the following jackknife variance estimator:

$$\hat V_{JK}\left(\hat\theta\right) = \frac{1}{n}\frac{1}{n-1}\sum_{k=1}^{n}\left(\hat\theta_k - \hat\theta_{(\cdot)}\right)^2 = \frac{n-1}{n}\sum_{k=1}^{n}\left(\hat\theta^{(k)} - \bar\theta^{(\cdot)}\right)^2,$$

where $\bar\theta^{(\cdot)} = n^{-1}\sum_{k=1}^{n}\hat\theta^{(k)}$. For the special case of $\hat\theta = \bar y$, we obtain

$$\hat V_{JK} = \frac{1}{n}\frac{1}{n-1}\sum_{i=1}^{n}\left(y_i - \bar y\right)^2 = \frac{1}{n}s_y^2$$

and the jackknife variance estimator is algebraically equivalent to the usual variance estimator of the sample mean under simple random sampling, ignoring the finite population correction term.

If we are interested in estimating the variance of $\hat\theta = f(\bar x, \bar y)$, using $\hat\theta^{(k)} = f(\bar x^{(k)}, \bar y^{(k)})$, we can construct

$$\hat V_{JK}\left(\hat\theta\right) = \frac{n-1}{n}\sum_{k=1}^{n}\left(\hat\theta^{(k)} - \hat\theta\right)^2 \qquad (10.17)$$

as the jackknife variance estimator of $\hat\theta$. The jackknife replicate $\hat\theta^{(k)}$ is computed by the same formula as $\hat\theta$, using the jackknife weights $w_i^{(k)}$ instead of the original weights $w_i = 1/n$, where

$$w_i^{(k)} = \begin{cases}(n-1)^{-1} & \text{if } i \neq k \\ 0 & \text{if } i = k.\end{cases}$$

To discuss the asymptotic properties of the jackknife variance estimator in (10.17), we use the following Taylor expansion, which is often called the second-type Taylor expansion.

Lemma 10.1. Let $\{X_n, W_n\}$ be a sequence of random variables such that

$$X_n = W_n + O_p\left(r_n\right)$$
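The algebraic identity for $\hat\theta = \bar y$ is easy to check numerically: the delete-one jackknife (10.17) reproduces $s_y^2/n$ exactly (the data below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
y = rng.normal(10, 2, size=15)
n = len(y)

# Delete-one replicates of theta_hat = ybar
ybar_k = (y.sum() - y) / (n - 1)

# Jackknife variance estimator (10.17)
v_jk = (n - 1) / n * ((ybar_k - y.mean()) ** 2).sum()

# Algebraically identical to s_y^2 / n (SRS variance estimator, no fpc)
v_usual = y.var(ddof=1) / n
```

The agreement is exact (up to floating point), since $\bar y^{(k)} - \bar y = (\bar y - y_k)/(n-1)$.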

where $r_n \to 0$ as $n \to \infty$. If $g(x)$ is a function with continuous derivatives up to order $s$ on the line segment joining $X_n$ and $W_n$, and the $s$-th order derivatives are bounded, then

$$g\left(X_n\right) = g\left(W_n\right) + \sum_{k=1}^{s-1}\frac{1}{k!}\, g^{(k)}\left(W_n\right)\left(X_n - W_n\right)^k + O_p\left(r_n^s\right)$$

where $g^{(k)}(a)$ is the $k$-th derivative of $g(x)$ evaluated at $x = a$.

Now, since

$$\bar y^{(k)} - \bar y = \frac{1}{n-1}\left(\bar y - y_k\right),$$

we have $\bar y^{(k)} - \bar y = O_p(n^{-1})$. For the case of $\hat\theta = f(\bar x, \bar y)$, we can apply the above lemma to get

$$\hat\theta^{(k)} - \hat\theta = f_x\left(\bar x, \bar y\right)\left(\bar x^{(k)} - \bar x\right) + f_y\left(\bar x, \bar y\right)\left(\bar y^{(k)} - \bar y\right) + o_p\left(n^{-1}\right)$$

so that

$$\frac{n-1}{n}\sum_{k=1}^{n}\left(\hat\theta^{(k)} - \hat\theta\right)^2 = \left\{f_x\left(\bar x, \bar y\right)\right\}^2\frac{n-1}{n}\sum_{k=1}^{n}\left(\bar x^{(k)} - \bar x\right)^2 + 2 f_x\left(\bar x, \bar y\right) f_y\left(\bar x, \bar y\right)\frac{n-1}{n}\sum_{k=1}^{n}\left(\bar x^{(k)} - \bar x\right)\left(\bar y^{(k)} - \bar y\right) + \left\{f_y\left(\bar x, \bar y\right)\right\}^2\frac{n-1}{n}\sum_{k=1}^{n}\left(\bar y^{(k)} - \bar y\right)^2 + o_p\left(n^{-1}\right).$$

Thus, the jackknife variance estimator is asymptotically equivalent to the linearized variance estimator. That is, the second-type Taylor linearization leads to

$$\hat V_{JK}\left(\hat\theta\right) = \left\{f_x\left(\bar x, \bar y\right)\right\}^2\hat V\left(\bar x\right) + \left\{f_y\left(\bar x, \bar y\right)\right\}^2\hat V\left(\bar y\right) + 2 f_x\left(\bar x, \bar y\right) f_y\left(\bar x, \bar y\right)\hat C\left(\bar x, \bar y\right) + o_p\left(n^{-1}\right).$$

The above jackknife method is constructed under simple random sampling. For stratified multistage sampling, jackknife replicates can be constructed by deleting one cluster for each replicate. Let

$$\hat Y_{HT} = \sum_{h=1}^{H}\sum_{i=1}^{n_h} w_{hi}\hat Y_{hi}$$

be the HT estimator of $Y$ under stratified multistage cluster sampling. The jackknife weights are constructed by deleting one cluster at a time:

$$w_{hi}^{(gj)} = \begin{cases}0 & \text{if } h = g \text{ and } i = j \\ \dfrac{n_h}{n_h - 1}\, w_{hi} & \text{if } h = g \text{ and } i \neq j \\ w_{hi} & \text{otherwise}\end{cases}$$

and the jackknife variance estimator is computed by

$$\hat V_{JK}\left(\hat Y_{HT}\right) = \sum_{h=1}^{H}\frac{n_h - 1}{n_h}\sum_{j=1}^{n_h}\left(\hat Y_{HT}^{(hj)} - \hat Y_{HT}\right)^2 \qquad (10.18)$$

where

$$\hat Y_{HT}^{(gj)} = \sum_{h=1}^{H}\sum_{i=1}^{n_h} w_{hi}^{(gj)}\hat Y_{hi}.$$

The following theorem presents the algebraic property of the jackknife variance estimator in (10.18).

Theorem 10.3. The jackknife variance estimator in (10.18) is algebraically equivalent to the simplified variance estimator in (10.4).

If $\hat\theta = f(\hat X_{HT}, \hat Y_{HT})$, then the jackknife replicates are constructed as $\hat\theta^{(gj)} = f(\hat X_{HT}^{(gj)}, \hat Y_{HT}^{(gj)})$. In this case, the jackknife variance estimator is computed as

$$\hat V_{JK}\left(\hat\theta\right) = \sum_{h=1}^{H}\frac{n_h - 1}{n_h}\sum_{i=1}^{n_h}\left(\hat\theta^{(hi)} - \frac{1}{n_h}\sum_{j=1}^{n_h}\hat\theta^{(hj)}\right)^2 \qquad (10.19)$$

or, more simply,

$$\hat V_{JK}\left(\hat\theta\right) = \sum_{h=1}^{H}\frac{n_h - 1}{n_h}\sum_{i=1}^{n_h}\left(\hat\theta^{(hi)} - \hat\theta\right)^2. \qquad (10.20)$$

The asymptotic properties of the above jackknife variance estimator under stratified multistage sampling can be established; for references, see Krewski and Rao (1981) or Shao and Tu (1995).

Example 10.5. We now revisit Example 10.3 and estimate the variance of the poststratification estimator using the jackknife. For simplicity, assume SRS for the sample.
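Theorem 10.3 can be checked numerically. The sketch below builds a hypothetical stratified cluster sample, stores $z_{hi} = w_{hi}\hat Y_{hi}$ per cluster, and confirms that the delete-a-cluster jackknife (10.18) reproduces the simplified estimator (10.4) exactly:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stratified cluster sample: 3 strata with 4, 3, 5 clusters;
# z[h][i] holds the weighted cluster total w_hi * Yhat_hi.
z = [rng.uniform(10, 20, size=nh) for nh in (4, 3, 5)]

# Simplified variance estimator (10.4)
v0 = sum(len(zh) / (len(zh) - 1) * ((zh - zh.mean()) ** 2).sum() for zh in z)

# Delete-a-cluster jackknife (10.18): drop cluster j of stratum g and rescale
# the remaining weights in that stratum by n_g / (n_g - 1)
y_ht = sum(zh.sum() for zh in z)
v_jk = 0.0
for g, zg in enumerate(z):
    ng = len(zg)
    for j in range(ng):
        y_rep = (sum(zh.sum() for h, zh in enumerate(z) if h != g)
                 + ng / (ng - 1) * (zg.sum() - zg[j]))
        v_jk += (ng - 1) / ng * (y_rep - y_ht) ** 2
```

The two estimators coincide to machine precision, which is the algebraic equivalence stated in Theorem 10.3.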

The poststratification estimator is computed as $\hat Y_{post} = \sum_{g=1}^{G} N_g \bar y_g$, where $\bar y_g = n_g^{-1}\sum_{i \in A_g} y_i$. Now, for $k \in A_h$, the $k$-th replicate of $\hat Y_{post}$ is

$$\hat Y_{post}^{(k)} = \sum_{g \neq h} N_g \bar y_g + N_h \bar y_h^{(k)}$$

where $\bar y_h^{(k)} = (n_h - 1)^{-1}\left(n_h \bar y_h - y_k\right)$. Thus, using

$$\bar y_h^{(k)} - \bar y_h = \frac{1}{n_h - 1}\left(\bar y_h - y_k\right),$$

we have

$$\hat Y_{post}^{(k)} - \hat Y_{post} = N_h\left(\bar y_h^{(k)} - \bar y_h\right)$$

and

$$\hat V_{JK}\left(\hat Y_{post}\right) = \frac{n-1}{n}\sum_{k=1}^{n}\left(\hat Y_{post}^{(k)} - \hat Y_{post}\right)^2 = \frac{n-1}{n}\sum_{g=1}^{G} N_g^2 \frac{1}{n_g - 1}\, s_g^2,$$

which, ignoring the $n/N$ term, is asymptotically equivalent to the conditional variance estimator in (10.11).

10.5 Other methods

Skipped.

References

Krewski, D. and Rao, J. N. K. (1981). Inference from stratified samples: properties of the linearization, jackknife and balanced repeated replication methods. Annals of Statistics 9, 1010-1019.

Quenouille, M. H. (1956). Notes on bias in estimation. Biometrika 43, 353-360.
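The closed form in Example 10.5 can be confirmed by computing the delete-one replicates directly (the poststrata, sample values, and counts $N_g$ below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical poststrata: sampled y-values per group and known sizes N_g
groups = [rng.normal(m, 1.0, size=ng) for m, ng in ((10, 4), (15, 6), (12, 5))]
Ng = np.array([40.0, 55.0, 45.0])
n = sum(len(g) for g in groups)

y_post = sum(N * g.mean() for N, g in zip(Ng, groups))

# Delete-one jackknife over all n units
v_jk = 0.0
for h, gh in enumerate(groups):
    for k in range(len(gh)):
        ybar_h_k = (gh.sum() - gh[k]) / (len(gh) - 1)   # mean without unit k
        y_rep = y_post + Ng[h] * (ybar_h_k - gh.mean()) # k-th replicate
        v_jk += (y_rep - y_post) ** 2
v_jk *= (n - 1) / n

# Closed form from Example 10.5: ((n-1)/n) * sum_g Ng^2 s_g^2 / (n_g - 1)
v_closed = (n - 1) / n * sum(N ** 2 * g.var(ddof=1) / (len(g) - 1)
                             for N, g in zip(Ng, groups))
```

The replicate-based and closed-form values agree exactly, as the derivation in Example 10.5 shows.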

Royall, R. M. and Cumberland, W. G. (1981). An empirical study of the ratio estimator and estimators of its variance. Journal of the American Statistical Association 76, 66-77.

Särndal, C. E., Swensson, B., and Wretman, J. H. (1989). The weighted residual technique for estimating the variance of the general regression estimator of the finite population total. Biometrika 76, 527-537.

Tukey, J. W. (1958). Bias and confidence in not-quite large samples (abstract). Annals of Mathematical Statistics 29, 614.

Wolter, K. M. (2007). Introduction to Variance Estimation, Second Edition. New York: Springer-Verlag.