Chapter 9. Confidence Intervals

9.1 Introduction


Definition 9.1. Let the data $Y_1, \dots, Y_n$ have pdf or pmf $f(\boldsymbol{y}|\theta)$ with parameter space $\Theta$ and support $\mathcal{Y}$. Let $L(\boldsymbol{Y})$ and $U(\boldsymbol{Y})$ be statistics such that $L(\boldsymbol{y}) \le U(\boldsymbol{y})$ for all $\boldsymbol{y} \in \mathcal{Y}$. Then $(L(\boldsymbol{y}), U(\boldsymbol{y}))$ is a $100(1-\alpha)\%$ confidence interval (CI) for $\theta$ if
$$P_\theta\big(L(\boldsymbol{Y}) < \theta < U(\boldsymbol{Y})\big) = 1 - \alpha$$
for all $\theta \in \Theta$. The interval $(L(\boldsymbol{y}), U(\boldsymbol{y}))$ is a large sample $100(1-\alpha)\%$ CI for $\theta$ if
$$P_\theta\big(L(\boldsymbol{Y}) < \theta < U(\boldsymbol{Y})\big) \to 1 - \alpha$$
for all $\theta \in \Theta$ as $n \to \infty$.

Definition 9.2. Let the data $Y_1, \dots, Y_n$ have pdf or pmf $f(\boldsymbol{y}|\theta)$ with parameter space $\Theta$ and support $\mathcal{Y}$. The random variable $R(\boldsymbol{Y}|\theta)$ is a pivot or pivotal quantity if the distribution of $R(\boldsymbol{Y}|\theta)$ is independent of $\theta$. The quantity $R(\boldsymbol{Y}, \theta)$ is an asymptotic pivot if the limiting distribution of $R(\boldsymbol{Y}, \theta)$ is independent of $\theta$.

The first CI in Definition 9.1 is sometimes called an exact CI. In the following definition, the scaled asymptotic length is closely related to the asymptotic relative efficiency of an estimator and the high power of a test of hypotheses.

Definition 9.3. Let $(L_n, U_n)$ be a $100(1-\alpha)\%$ CI or large sample CI for $\theta$. If $n^\delta (U_n - L_n) \xrightarrow{P} A_\alpha$, then $A_\alpha$ is the scaled asymptotic length of the CI. Typically $\delta = 0.5$, but superefficient CIs have $\delta = 1$. For a given $\alpha$, a CI with smaller $A_\alpha$ is better than a CI with larger $A_\alpha$.

Example 9.1. Let $Y_1, \dots, Y_n$ be iid $N(\mu, \sigma^2)$ where $\sigma^2 > 0$. Then
$$R_n(\boldsymbol{Y}|\mu, \sigma^2) = \frac{\overline{Y} - \mu}{S/\sqrt{n}} \sim t_{n-1}$$
is a pivotal quantity. If $Y_1, \dots, Y_n$ are iid with $E(Y) = \mu$ and $\mathrm{VAR}(Y) = \sigma^2 > 0$, then, by the CLT and Slutsky's Theorem,
$$R_n(\boldsymbol{Y}|\mu, \sigma^2) = \frac{\overline{Y} - \mu}{S/\sqrt{n}} = \frac{\sigma}{S}\,\frac{\overline{Y} - \mu}{\sigma/\sqrt{n}} \xrightarrow{D} N(0,1)$$
is an asymptotic pivot.

Large sample theory can be used to find a CI from the asymptotic pivot. Suppose that $\boldsymbol{Y} = (Y_1, \dots, Y_n)$ and that $W_n \equiv W_n(\boldsymbol{Y})$ is an estimator of some parameter $\mu_W$ such that $\sqrt{n}(W_n - \mu_W) \xrightarrow{D} N(0, \sigma_W^2)$ where $\sigma_W^2/n$ is the asymptotic variance of the estimator $W_n$. The above notation means that if $n$ is large, then for probability calculations $W_n - \mu_W \approx N(0, \sigma_W^2/n)$. Suppose that $S_W^2$ is a consistent estimator of $\sigma_W^2$, so that the asymptotic standard error of $W_n$ is $SE(W_n) = S_W/\sqrt{n}$. Let $z_\alpha$ be the $\alpha$ percentile of the $N(0,1)$ distribution; hence $P(Z \le z_\alpha) = \alpha$ if $Z \sim N(0,1)$. Then
$$1 - \alpha \approx P\left(-z_{1-\alpha/2} \le \frac{W_n - \mu_W}{SE(W_n)} \le z_{1-\alpha/2}\right),$$
and an approximate or large sample $100(1-\alpha)\%$ CI for $\mu_W$ is given by
$$\left(W_n - z_{1-\alpha/2}\,SE(W_n),\; W_n + z_{1-\alpha/2}\,SE(W_n)\right). \tag{9.1}$$
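As a quick numerical check, the CI (9.1) can be computed directly. Below is a minimal Python sketch (the text's own simulation code is in R/Splus), with illustrative data and $W_n$ taken to be the sample mean, so $SE(W_n) = S/\sqrt{n}$:

```python
# Sketch of the large sample CI (9.1) with W_n = the sample mean and
# SE(W_n) = S / sqrt(n); the data below are illustrative.
from math import sqrt
from statistics import NormalDist, mean, stdev

def z_interval(y, alpha=0.05):
    """Return the large sample CI (9.1): W_n +/- z_{1-alpha/2} SE(W_n)."""
    n = len(y)
    w = mean(y)                       # W_n
    se = stdev(y) / sqrt(n)           # SE(W_n) = S_W / sqrt(n)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return (w - z * se, w + z * se)

lo, hi = z_interval([6, 9, 9, 7, 8, 9, 9, 7])
print(round(lo, 3), round(hi, 3))
```

The interval is symmetric about $W_n$, so its midpoint is the point estimate itself.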

Since $t_{p_n,1-\alpha/2} \ge z_{1-\alpha/2}$ and $t_{p_n,1-\alpha/2}/z_{1-\alpha/2} \to 1$ if $p_n \to \infty$ as $n \to \infty$, another large sample $100(1-\alpha)\%$ CI for $\mu_W$ is
$$\left(W_n - t_{p_n,1-\alpha/2}\,SE(W_n),\; W_n + t_{p_n,1-\alpha/2}\,SE(W_n)\right). \tag{9.2}$$
The CI (9.2) often performs better than the CI (9.1) in small samples. The quantity $t_{p_n,1-\alpha/2}/z_{1-\alpha/2}$ can be regarded as a small sample correction factor. The CI (9.2) is longer than the CI (9.1); hence the CI (9.2) is more conservative than the CI (9.1).

Suppose that there are two independent samples $Y_1, \dots, Y_n$ and $X_1, \dots, X_m$ and that
$$\begin{pmatrix}\sqrt{n}\,(W_n(\boldsymbol{Y}) - \mu_W(\boldsymbol{Y}))\\ \sqrt{m}\,(W_m(\boldsymbol{X}) - \mu_W(\boldsymbol{X}))\end{pmatrix} \xrightarrow{D} N_2\left(\begin{pmatrix}0\\0\end{pmatrix}, \begin{pmatrix}\sigma_W^2(\boldsymbol{Y}) & 0\\ 0 & \sigma_W^2(\boldsymbol{X})\end{pmatrix}\right).$$
Then
$$\begin{pmatrix}W_n(\boldsymbol{Y}) - \mu_W(\boldsymbol{Y})\\ W_m(\boldsymbol{X}) - \mu_W(\boldsymbol{X})\end{pmatrix} \approx N_2\left(\begin{pmatrix}0\\0\end{pmatrix}, \begin{pmatrix}\sigma_W^2(\boldsymbol{Y})/n & 0\\ 0 & \sigma_W^2(\boldsymbol{X})/m\end{pmatrix}\right),$$
and
$$W_n(\boldsymbol{Y}) - W_m(\boldsymbol{X}) - (\mu_W(\boldsymbol{Y}) - \mu_W(\boldsymbol{X})) \approx N\left(0,\; \frac{\sigma_W^2(\boldsymbol{Y})}{n} + \frac{\sigma_W^2(\boldsymbol{X})}{m}\right).$$
Hence
$$SE(W_n(\boldsymbol{Y}) - W_m(\boldsymbol{X})) = \sqrt{\frac{S_W^2(\boldsymbol{Y})}{n} + \frac{S_W^2(\boldsymbol{X})}{m}},$$
and the large sample $100(1-\alpha)\%$ CI for $\mu_W(\boldsymbol{Y}) - \mu_W(\boldsymbol{X})$ is given by
$$W_n(\boldsymbol{Y}) - W_m(\boldsymbol{X}) \pm z_{1-\alpha/2}\,SE(W_n(\boldsymbol{Y}) - W_m(\boldsymbol{X})). \tag{9.3}$$
If $p_n$ is the degrees of freedom used for a single sample procedure when the sample size is $n$, let $p = \min(p_n, p_m)$. Then another large sample $100(1-\alpha)\%$ CI for $\mu_W(\boldsymbol{Y}) - \mu_W(\boldsymbol{X})$ is given by
$$W_n(\boldsymbol{Y}) - W_m(\boldsymbol{X}) \pm t_{p,1-\alpha/2}\,SE(W_n(\boldsymbol{Y}) - W_m(\boldsymbol{X})). \tag{9.4}$$
These CIs are known as Welch intervals. See Welch (1937) and Yuen (1974).
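The two sample CI (9.3) is equally direct to compute. A minimal Python sketch, with illustrative data and $W_n$, $W_m$ taken to be sample means:

```python
# Sketch of the two sample large sample CI (9.3) for mu_W(Y) - mu_W(X),
# with W = sample mean; the data below are illustrative.
from math import sqrt
from statistics import NormalDist, mean, variance

def two_sample_z_interval(y, x, alpha=0.05):
    """CI (9.3): (Wn(Y) - Wm(X)) +/- z_{1-alpha/2} sqrt(S^2_Y/n + S^2_X/m)."""
    n, m = len(y), len(x)
    diff = mean(y) - mean(x)
    se = sqrt(variance(y) / n + variance(x) / m)   # SE(Wn(Y) - Wm(X))
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return (diff - z * se, diff + z * se)

print(two_sample_z_interval([5, 6, 7, 8, 9], [1, 2, 3, 4, 5]))
```

Replacing the $z$ cutoff with $t_{p,1-\alpha/2}$, $p = \min(n-1, m-1)$, gives the Welch-type interval (9.4).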

Example 9.2. Consider the single sample procedures where $W_n = \overline{Y}_n$. Then $\mu_W = E(Y)$, $\sigma_W^2 = \mathrm{VAR}(Y)$, $S_W = S_n$, and $p_n = n - 1$. Let $t_p$ denote a random variable with a $t$ distribution with $p$ degrees of freedom and let the $\alpha$ percentile $t_{p,\alpha}$ satisfy $P(t_p \le t_{p,\alpha}) = \alpha$. Then the classical $t$-interval for $\mu \equiv E(Y)$ is
$$\overline{Y}_n \pm t_{n-1,1-\alpha/2}\,\frac{S_n}{\sqrt{n}},$$
and the $t$-test statistic for $H_o: \mu = \mu_o$ is
$$t_o = \frac{\overline{Y}_n - \mu_o}{S_n/\sqrt{n}}.$$
The right tailed p-value is given by $P(t_{n-1} > t_o)$.

Now suppose that there are two samples where $W_n(\boldsymbol{Y}) = \overline{Y}_n$ and $W_m(\boldsymbol{X}) = \overline{X}_m$. Then $\mu_W(\boldsymbol{Y}) = E(Y) \equiv \mu_Y$, $\mu_W(\boldsymbol{X}) = E(X) \equiv \mu_X$, $\sigma_W^2(\boldsymbol{Y}) = \mathrm{VAR}(Y) \equiv \sigma_Y^2$, $\sigma_W^2(\boldsymbol{X}) = \mathrm{VAR}(X) \equiv \sigma_X^2$, and $p_n = n - 1$. Let $p = \min(n-1, m-1)$. Since
$$SE(W_n(\boldsymbol{Y}) - W_m(\boldsymbol{X})) = \sqrt{\frac{S_n^2(\boldsymbol{Y})}{n} + \frac{S_m^2(\boldsymbol{X})}{m}},$$
the two sample $t$-interval for $\mu_Y - \mu_X$ is
$$\overline{Y}_n - \overline{X}_m \pm t_{p,1-\alpha/2}\sqrt{\frac{S_n^2(\boldsymbol{Y})}{n} + \frac{S_m^2(\boldsymbol{X})}{m}}$$
and the two sample $t$-test statistic is
$$t_o = \frac{\overline{Y}_n - \overline{X}_m}{\sqrt{\dfrac{S_n^2(\boldsymbol{Y})}{n} + \dfrac{S_m^2(\boldsymbol{X})}{m}}}.$$
The right tailed p-value is given by $P(t_p > t_o)$. For sample means, values of the degrees of freedom that are more accurate than $p = \min(n-1, m-1)$ can be computed. See Moore (2007, p. 474).

The remainder of this section follows Olive (2007b, Section 2.4) closely. Let $\lfloor x \rfloor$ denote the greatest integer function (e.g., $\lfloor 7.7 \rfloor = 7$). Let $\lceil x \rceil$ denote the smallest integer greater than or equal to $x$ (e.g., $\lceil 7.7 \rceil = 8$).

Example 9.3: inference with the sample median. Let $U_n = n - L_n$ where $L_n = \lfloor n/2 \rfloor - \lceil \sqrt{n/4}\,\rceil$, and use
$$SE(\mathrm{MED}(n)) = 0.5\left(Y_{(U_n)} - Y_{(L_n+1)}\right).$$

Let $p = U_n - L_n - 1$. Then a $100(1-\alpha)\%$ confidence interval for the population median $\mathrm{MED}(Y)$ is
$$\mathrm{MED}(n) \pm t_{p,1-\alpha/2}\,SE(\mathrm{MED}(n)). \tag{9.5}$$

Example 9.4: inference with the trimmed mean. The symmetrically trimmed mean, or the $\delta$ trimmed mean, is
$$T_n = T_n(L_n, U_n) = \frac{1}{U_n - L_n}\sum_{i=L_n+1}^{U_n} Y_{(i)} \tag{9.6}$$
where $L_n = \lfloor n\delta \rfloor$ and $U_n = n - L_n$. If $\delta = 0.25$, say, then the $\delta$ trimmed mean is called the 25% trimmed mean.

The trimmed mean is estimating a truncated mean $\mu_T$. Assume that $Y$ has a probability density function $f_Y(y)$ that is continuous and positive on its support. Let $y_\delta$ be the number satisfying $P(Y \le y_\delta) = \delta$. Then
$$\mu_T = \frac{1}{1 - 2\delta}\int_{y_\delta}^{y_{1-\delta}} y f_Y(y)\,dy. \tag{9.7}$$
Notice that the 25% trimmed mean is estimating
$$\mu_T = \int_{y_{0.25}}^{y_{0.75}} 2y f_Y(y)\,dy.$$
To perform inference, find $d_1, \dots, d_n$ where
$$d_i = \begin{cases} Y_{(L_n+1)}, & i \le L_n,\\ Y_{(i)}, & L_n + 1 \le i \le U_n,\\ Y_{(U_n)}, & i \ge U_n + 1.\end{cases}$$
Then the Winsorized variance is the sample variance $S_n^2(d_1, \dots, d_n)$ of $d_1, \dots, d_n$, and the scaled Winsorized variance is
$$V_{SW}(L_n, U_n) = \frac{S_n^2(d_1, \dots, d_n)}{\left([U_n - L_n]/n\right)^2}. \tag{9.8}$$
The standard error of $T_n$ is $SE(T_n) = \sqrt{V_{SW}(L_n, U_n)/n}$. A large sample $100(1-\alpha)\%$ confidence interval (CI) for $\mu_T$ is
$$T_n \pm t_{p,1-\alpha/2}\,SE(T_n) \tag{9.9}$$

where $P(t_p \le t_{p,1-\alpha/2}) = 1 - \alpha/2$ if $t_p$ is from a $t$ distribution with $p = U_n - L_n - 1$ degrees of freedom. This interval is the classical $t$-interval when $\delta = 0$, but $\delta = 0.25$ gives a robust CI.

Example 9.5. Suppose the data below are from a symmetric distribution with mean $\mu$. Find a 95% CI for $\mu$.

6, 9, 9, 7, 8, 9, 9, 7

Solution. When computing small examples by hand, the steps are to sort the data from smallest to largest value, then find $n$, $L_n$, $U_n$, $Y_{(L_n)}$, $Y_{(U_n)}$, $p$, $\mathrm{MED}(n)$ and $SE(\mathrm{MED}(n))$. After finding $t_{p,1-\alpha/2}$, plug the relevant quantities into the formula for the CI. The sorted data are 6, 7, 7, 8, 9, 9, 9, 9. Thus $\mathrm{MED}(n) = (8 + 9)/2 = 8.5$. Since $n = 8$, $L_n = \lfloor 8/2 \rfloor - \lceil \sqrt{2}\,\rceil = 4 - \lceil 1.414 \rceil = 4 - 2 = 2$ and $U_n = n - L_n = 8 - 2 = 6$. Hence $SE(\mathrm{MED}(n)) = 0.5(Y_{(6)} - Y_{(3)}) = 0.5(9 - 7) = 1$. The degrees of freedom are $p = U_n - L_n - 1 = 6 - 2 - 1 = 3$. The cutoff is $t_{3,0.975} = 3.182$. Thus the 95% CI for $\mathrm{MED}(Y)$ is
$$\mathrm{MED}(n) \pm t_{3,0.975}\,SE(\mathrm{MED}(n)) = 8.5 \pm 3.182(1) = (5.318, 11.682).$$
The classical $t$-interval uses $\overline{Y} = (6 + 7 + 7 + 8 + 9 + 9 + 9 + 9)/8 = 8$ and $S_n^2 = (1/7)[\sum Y_i^2 - 8(8^2)] = (1/7)[522 - 512] = 10/7 \approx 1.4286$, with $t_{7,0.975} \approx 2.365$. Hence the 95% CI for $\mu$ is $8 \pm 2.365\sqrt{1.4286/8} = (7.001, 8.999)$. Notice that the $t$-cutoff 2.365 for the classical interval is less than the $t$-cutoff 3.182 for the median interval and that $SE(\overline{Y}) < SE(\mathrm{MED}(n))$.

Example 9.6. In the last example, what happens if the 6 becomes 66 and a 9 becomes 99?

Solution. Then the ordered data are 7, 7, 8, 9, 9, 9, 66, 99. Hence $\mathrm{MED}(n) = 9$. Since $L_n$ and $U_n$ only depend on the sample size, they take the same values as in the previous example, and $SE(\mathrm{MED}(n)) = 0.5(Y_{(6)} - Y_{(3)}) = 0.5(9 - 8) = 0.5$. Hence the 95% CI for $\mathrm{MED}(Y)$ is $\mathrm{MED}(n) \pm t_{3,0.975}\,SE(\mathrm{MED}(n)) = 9 \pm 3.182(0.5) = (7.409, 10.591)$. Notice that with discrete data, it is possible to drive $SE(\mathrm{MED}(n))$ to 0 with a few outliers if $n$ is small. The classical confidence interval $\overline{Y} \pm t_{7,0.975} S/\sqrt{n}$ blows up and is equal to $(-2.955, 56.455)$.

Example 9.7. The Buxton (1920) data contains 87 heights of men, but five of the men were recorded to be about 0.75 inches tall! The mean height is $\overline{Y} = 1598.862$ and the classical 95% CI is (1514.206, 1683.518). $\mathrm{MED}(n) = 1693.0$ and the resistant 95% CI based on the median is (1678.517, 1707.483). The 25% trimmed mean $T_n = 1689.689$ with 95% CI (1672.096, 1707.282).

The heights for the five men were recorded under their head lengths, so the outliers can be corrected. Then $\overline{Y} = 1692.356$ and the classical 95% CI is (1678.595, 1706.118). Now $\mathrm{MED}(n) = 1694.0$ and the 95% CI based on the median is (1678.403, 1709.597). The 25% trimmed mean $T_n = 1693.200$ with 95% CI (1676.259, 1710.141). Notice that when the outliers are corrected, the three intervals are very similar, although the classical interval length is slightly shorter. Also notice that the outliers roughly shifted the median confidence interval by about 1 mm, while the outliers greatly increased the length of the classical $t$-interval.

9.2 Some Examples

Example 9.8. Suppose that $Y_1, \dots, Y_n$ are iid from a one parameter exponential family with parameter $\tau$. Assume that $T_n = \sum t(Y_i)$ is a complete sufficient statistic. Then from Theorems 3.6 and 3.7, often $T_n \sim G(na, 2b\tau)$ where $a$ and $b$ are known positive constants. Then
$$\hat\tau = \frac{T_n}{2nab}$$
is the UMVUE and often the MLE of $\tau$. Since $T_n/(b\tau) \sim G(na, 2)$, a $100(1-\alpha)\%$ confidence interval for $\tau$ is
$$\left(\frac{T_n/b}{G(na, 2, 1-\alpha/2)},\; \frac{T_n/b}{G(na, 2, \alpha/2)}\right) \approx \left(\frac{T_n/b}{\chi^2_d(1-\alpha/2)},\; \frac{T_n/b}{\chi^2_d(\alpha/2)}\right) \tag{9.10}$$
where $d = \lfloor 2na \rfloor$, $\lfloor x \rfloor$ is the greatest integer function (e.g. $\lfloor 7.7 \rfloor = \lfloor 7 \rfloor = 7$), $P[G \le G(\nu, \lambda, \alpha)] = \alpha$ if $G \sim G(\nu, \lambda)$, and $P[X \le \chi^2_d(\alpha)] = \alpha$ if $X$ has a chi-square $\chi^2_d$ distribution with $d$ degrees of freedom.

This confidence interval can be inverted to perform two tail tests of hypotheses. By Theorem 7.3, the uniformly most powerful (UMP) test of $H_o: \tau \le \tau_o$ versus $H_A: \tau > \tau_o$ rejects $H_o$ if and only if $T_n > k$ where $P[G > k] = \alpha$ when $G \sim G(na, 2b\tau_o)$. Hence
$$k = G(na, 2b\tau_o, 1-\alpha). \tag{9.11}$$
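As a concrete instance of Example 9.8, if the $Y_i$ are iid exponential with mean $\lambda$ then $T_n = \sum Y_i \sim G(n, \lambda)$, so $a = 1$, $b = 1/2$ and $d = 2n$, and (9.10) becomes $(2T_n/\chi^2_{2n}(1-\alpha/2),\; 2T_n/\chi^2_{2n}(\alpha/2))$. A minimal Python sketch with illustrative data; the chi-square quantiles are hardcoded from a table to keep the sketch standard-library only:

```python
# Sketch of CI (9.10) for an exponential mean lambda: T_n = sum(Y_i) ~ G(n, lambda),
# so a = 1, b = 1/2, d = 2n, and 2*T_n/lambda ~ chi^2_{2n}.
# The data are illustrative; quantiles are hardcoded from a chi-square table.
y = [0.8, 1.3, 0.2, 2.1, 0.5, 1.7, 0.9, 3.0, 0.4, 1.1]   # n = 10
t_n = sum(y)
chi2_20_hi = 34.1696                  # chi^2_20(0.975)
chi2_20_lo = 9.5908                   # chi^2_20(0.025)
ci = (2 * t_n / chi2_20_hi, 2 * t_n / chi2_20_lo)        # 95% CI for lambda
print(ci)
```

With a chi-square quantile function available, the same two lines handle any $d = \lfloor 2na \rfloor$.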

A good approximation to this test rejects $H_o$ if and only if
$$T_n > b\tau_o\,\chi^2_d(1-\alpha)$$
where $d = \lfloor 2na \rfloor$.

Example 9.9. If $Y$ is half normal, $Y \sim HN(\mu, \sigma)$, then the pdf of $Y$ is
$$f(y) = \frac{2}{\sqrt{2\pi}\,\sigma}\exp\left(\frac{-(y-\mu)^2}{2\sigma^2}\right)$$
where $\sigma > 0$, $y > \mu$ and $\mu$ is real. Since
$$f(y) = \frac{2}{\sqrt{2\pi}\,\sigma}\,I[y > \mu]\,\exp\left[\left(\frac{-1}{2\sigma^2}\right)(y-\mu)^2\right],$$
$Y$ is a 1P-REF if $\mu$ is known. Since $T_n = \sum (Y_i - \mu)^2 \sim G(n/2, 2\sigma^2)$, in Example 9.8 take $a = 1/2$, $b = 1$, $d = n$ and $\tau = \sigma^2$. Then a $100(1-\alpha)\%$ confidence interval for $\sigma^2$ is
$$\left(\frac{T_n}{\chi^2_n(1-\alpha/2)},\; \frac{T_n}{\chi^2_n(\alpha/2)}\right). \tag{9.12}$$
The UMP test of $H_o: \sigma^2 \le \sigma_o^2$ versus $H_A: \sigma^2 > \sigma_o^2$ rejects $H_o$ if and only if $T_n/\sigma_o^2 > \chi^2_n(1-\alpha)$.

Now consider inference when both $\mu$ and $\sigma$ are unknown. Then the family is no longer an exponential family since the support depends on $\mu$. Let
$$D_n = \sum_{i=1}^n \left(Y_i - Y_{(1)}\right)^2. \tag{9.13}$$
Pewsey (2002) showed that $(\hat\mu, \hat\sigma^2) = \left(Y_{(1)}, \frac{1}{n}D_n\right)$ is the MLE of $(\mu, \sigma^2)$, and that
$$\frac{Y_{(1)} - \mu}{\sigma\,\Phi^{-1}\!\left(\frac{1}{2} + \frac{1}{2n}\right)} \xrightarrow{D} \mathrm{EXP}(1).$$
Since $\sqrt{\pi/2}/n$ is an approximation to $\Phi^{-1}\!\left(\frac{1}{2} + \frac{1}{2n}\right)$ based on a first order Taylor series expansion such that
$$\frac{\Phi^{-1}\!\left(\frac{1}{2} + \frac{1}{2n}\right)}{\sqrt{\pi/2}/n} \to 1,$$

it follows that
$$\frac{n\left(Y_{(1)} - \mu\right)}{\sigma\sqrt{\pi/2}} \xrightarrow{D} \mathrm{EXP}(1). \tag{9.14}$$
Using this fact, it can be shown that a large sample $100(1-\alpha)\%$ CI for $\mu$ is
$$\left(\hat\mu + \hat\sigma\,\log(\alpha)\,\Phi^{-1}\!\left(\frac{1}{2} + \frac{1}{2n}\right)\left(1 + \frac{13}{n^2}\right),\; \hat\mu\right) \tag{9.15}$$
where the term $(1 + 13/n^2)$ is a small sample correction factor. See Abuhassan and Olive (2008).

Note that
$$D_n = \sum_{i=1}^n \left(Y_i - Y_{(1)}\right)^2 = \sum_{i=1}^n \left(Y_i - \mu + \mu - Y_{(1)}\right)^2 = \sum_{i=1}^n (Y_i - \mu)^2 + n\left(\mu - Y_{(1)}\right)^2 + 2\left(\mu - Y_{(1)}\right)\sum_{i=1}^n (Y_i - \mu).$$
Hence
$$D_n = T_n + n\left[Y_{(1)} - \mu\right]^2 - 2\left[Y_{(1)} - \mu\right]\sum_{i=1}^n (Y_i - \mu),$$
or
$$\frac{D_n}{\sigma^2} = \frac{T_n}{\sigma^2} + \frac{1}{n}\left[\frac{n\left(Y_{(1)} - \mu\right)}{\sigma}\right]^2 - 2\left[\frac{n\left(Y_{(1)} - \mu\right)}{\sigma}\right]\frac{1}{n}\sum_{i=1}^n \frac{Y_i - \mu}{\sigma}. \tag{9.16}$$
Consider the three terms on the right hand side of (9.16). The middle term converges to 0 in distribution, while the third term converges in distribution to $-2\,\mathrm{EXP}(1)$, i.e. $-\chi^2_2$, since $\frac{1}{n}\sum (Y_i - \mu)/\sigma$ is the sample mean of $HN(0,1)$ random variables and $E(X) = \sqrt{2/\pi}$ when $X \sim HN(0,1)$.

Let $T_{n-p} = \sum_{i=1}^{n-p} (Y_i - \mu)^2$. Then
$$D_n = T_{n-p} + \sum_{i=n-p+1}^{n} (Y_i - \mu)^2 - V_n \tag{9.17}$$
where $V_n/\sigma^2 \xrightarrow{D} \chi^2_2$.

Hence $D_n/T_{n-p} \xrightarrow{D} 1$, and $D_n/\sigma^2$ is asymptotically equivalent to a $\chi^2_{n-p}$ random variable, where $p$ is an arbitrary nonnegative integer. Pewsey (2002) used $p = 1$. Thus when both $\mu$ and $\sigma^2$ are unknown, a large sample $100(1-\alpha)\%$ confidence interval for $\sigma^2$ is
$$\left(\frac{D_n}{\chi^2_{n-1}(1-\alpha/2)},\; \frac{D_n}{\chi^2_{n-1}(\alpha/2)}\right). \tag{9.18}$$
It can be shown that $\sqrt{n}$ (CI length) converges in probability to $\sigma^2\sqrt{2}\left(z_{1-\alpha/2} - z_{\alpha/2}\right)$ for CIs (9.12) and (9.18), while $n$ (CI length) converges to $-\sigma\log(\alpha)\sqrt{\pi/2}$ for CI (9.15).

When $\mu$ and $\sigma^2$ are unknown, an approximate $\alpha$ level test of $H_o: \sigma^2 \le \sigma_o^2$ versus $H_A: \sigma^2 > \sigma_o^2$ that rejects $H_o$ if and only if
$$D_n/\sigma_o^2 > \chi^2_{n-1}(1-\alpha) \tag{9.19}$$
has nearly as much power as the $\alpha$ level UMP test when $\mu$ is known, if $n$ is large.

Example 9.10. Following Mann, Schafer, and Singpurwalla (1974, p. 176), let $W_1, \dots, W_n$ be iid $\mathrm{EXP}(\theta, \lambda)$ random variables. Let $W_{(1)} = \min(W_1, \dots, W_n)$. Then the MLE is
$$(\hat\theta, \hat\lambda) = \left(W_{(1)},\; \frac{1}{n}\sum_{i=1}^n \left(W_i - W_{(1)}\right)\right) = \left(W_{(1)},\; \overline{W}_n - W_{(1)}\right).$$
Let $D_n = n\hat\lambda$. For $n > 1$, a $100(1-\alpha)\%$ confidence interval (CI) for $\theta$ is
$$\left(W_{(1)} - \hat\lambda\left[\alpha^{-1/(n-1)} - 1\right],\; W_{(1)}\right) \tag{9.20}$$
while a $100(1-\alpha)\%$ CI for $\lambda$ is
$$\left(\frac{2D_n}{\chi^2_{2(n-1)}(1-\alpha/2)},\; \frac{2D_n}{\chi^2_{2(n-1)}(\alpha/2)}\right). \tag{9.21}$$

Let $T_n = \sum (W_i - \theta) = n(\overline{W} - \theta)$. If $\theta$ is known, then
$$\hat\lambda_\theta = \frac{1}{n}\sum_{i=1}^n (W_i - \theta) = \overline{W} - \theta$$
is the UMVUE and MLE of $\lambda$, and a $100(1-\alpha)\%$ CI for $\lambda$ is
$$\left(\frac{2T_n}{\chi^2_{2n}(1-\alpha/2)},\; \frac{2T_n}{\chi^2_{2n}(\alpha/2)}\right). \tag{9.22}$$
Using the approximation $\chi^2_n(\alpha) \approx n + z_\alpha\sqrt{2n}$, it can be shown that $\sqrt{n}$ (CI length) converges in probability to $\lambda\left(z_{1-\alpha/2} - z_{\alpha/2}\right)$ for CIs (9.21) and (9.22). It can be shown that $n$ (length of CI (9.20)) converges to $-\lambda\log(\alpha)$.

When a random variable is a simple transformation of a distribution that has an easily computed CI, the transformed random variable will often have an easily computed CI. Similarly, the MLEs of the two distributions are often closely related. See the discussion above Example 5.10. The first 3 of the following 4 examples are from Abuhassan and Olive (2008).

Example 9.11. If $Y$ has a Pareto distribution, $Y \sim \mathrm{PAR}(\sigma, \lambda)$, then $W = \log(Y) \sim \mathrm{EXP}(\theta = \log(\sigma), \lambda)$. If $\theta = \log(\sigma)$ so $\sigma = e^\theta$, then a $100(1-\alpha)\%$ CI for $\theta$ is (9.20). A $100(1-\alpha)\%$ CI for $\sigma$ is obtained by exponentiating the endpoints of (9.20), and a $100(1-\alpha)\%$ CI for $\lambda$ is (9.21). The fact that the Pareto distribution is a log-location-scale family (and hence has simple inference) does not seem to be well known.

Example 9.12. If $Y$ has a power distribution, $Y \sim \mathrm{POW}(\lambda)$, then $W = -\log(Y)$ is $\mathrm{EXP}(0, \lambda)$. A $100(1-\alpha)\%$ CI for $\lambda$ is (9.22).

Example 9.13. If $Y$ has a truncated extreme value distribution, $Y \sim \mathrm{TEV}(\lambda)$, then $W = e^Y - 1$ is $\mathrm{EXP}(0, \lambda)$. A $100(1-\alpha)\%$ CI for $\lambda$ is (9.22).

Example 9.14. If $Y$ has a lognormal distribution, $Y \sim \mathrm{LN}(\mu, \sigma^2)$, then $W_i = \log(Y_i) \sim N(\mu, \sigma^2)$. Thus a $100(1-\alpha)\%$ CI for $\mu$ when $\sigma$ is unknown is
$$\left(\overline{W}_n - t_{n-1,1-\frac{\alpha}{2}}\frac{S_W}{\sqrt{n}},\; \overline{W}_n + t_{n-1,1-\frac{\alpha}{2}}\frac{S_W}{\sqrt{n}}\right)$$

where
$$S_W = \sqrt{\frac{n}{n-1}}\,\hat\sigma = \sqrt{\frac{1}{n-1}\sum_{i=1}^n \left(W_i - \overline{W}\right)^2},$$
and $P(t \le t_{n-1,1-\frac{\alpha}{2}}) = 1 - \alpha/2$ when $t \sim t_{n-1}$.

Example 9.15. Let $X_1, \dots, X_n$ be iid Poisson($\theta$) random variables. The classical large sample $100(1-\alpha)\%$ CI for $\theta$ is
$$\overline{X} \pm z_{1-\alpha/2}\sqrt{\overline{X}/n}$$
where $P(Z \le z_{1-\alpha/2}) = 1 - \alpha/2$ if $Z \sim N(0,1)$.

Following Byrne and Kabaila (2005), a modified large sample $100(1-\alpha)\%$ CI for $\theta$ is $(L_n, U_n)$ where
$$L_n = \frac{1}{n}\left(\sum_{i=1}^n X_i - 0.5 + 0.5 z_{1-\alpha/2}^2 - z_{1-\alpha/2}\sqrt{\sum_{i=1}^n X_i - 0.5 + 0.25 z_{1-\alpha/2}^2}\right)$$
and
$$U_n = \frac{1}{n}\left(\sum_{i=1}^n X_i + 0.5 + 0.5 z_{1-\alpha/2}^2 + z_{1-\alpha/2}\sqrt{\sum_{i=1}^n X_i + 0.5 + 0.25 z_{1-\alpha/2}^2}\right).$$

Following Grosh (1989, p. 59, 197–200), let $W = \sum X_i$ and suppose that $W = w$ is observed. Let $P(T < \chi^2_d(\alpha)) = \alpha$ if $T \sim \chi^2_d$. Then an exact $100(1-\alpha)\%$ CI for $\theta$ is
$$\left(\frac{\chi^2_{2w}\left(\frac{\alpha}{2}\right)}{2n},\; \frac{\chi^2_{2w+2}\left(1-\frac{\alpha}{2}\right)}{2n}\right)$$
for $w \ne 0$ and
$$\left(0,\; \frac{\chi^2_2(1-\alpha)}{2n}\right)$$
for $w = 0$.

The exact CI is conservative: the actual coverage $1 - \delta_n \ge 1 - \alpha =$ the nominal coverage. This interval performs well if $\theta$ is very close to 0. See Problem 9.3.
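A minimal Python sketch of the classical and the Byrne and Kabaila modified CIs of Example 9.15 (the exact CI needs chi-square quantiles and is omitted here); the data are illustrative:

```python
# Sketch of the classical and modified (Byrne and Kabaila 2005) large sample
# CIs of Example 9.15 for a Poisson mean theta; the data are illustrative.
from math import sqrt
from statistics import NormalDist

def poisson_cis(x, alpha=0.05):
    n, w = len(x), sum(x)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    xbar = w / n
    classical = (xbar - z * sqrt(xbar / n), xbar + z * sqrt(xbar / n))
    low = (w - 0.5 + 0.5 * z**2 - z * sqrt(w - 0.5 + 0.25 * z**2)) / n
    up  = (w + 0.5 + 0.5 * z**2 + z * sqrt(w + 0.5 + 0.25 * z**2)) / n
    return classical, (low, up)

print(poisson_cis([3, 5, 2, 4, 6, 3, 4, 5, 2, 6]))
```

Note that the modified interval is shifted slightly to the right of the classical one, which is part of why its coverage is better for small $\theta$.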

Example 9.16. Let $Y_1, \dots, Y_n$ be iid bin(1, $\rho$). Let $\hat\rho = \sum Y_i/n =$ (number of "successes")/$n$. The classical large sample $100(1-\alpha)\%$ CI for $\rho$ is
$$\hat\rho \pm z_{1-\alpha/2}\sqrt{\frac{\hat\rho(1-\hat\rho)}{n}}$$
where $P(Z \le z_{1-\alpha/2}) = 1 - \alpha/2$ if $Z \sim N(0,1)$.

The Agresti Coull CI takes $\tilde n = n + z_{1-\alpha/2}^2$ and
$$\tilde\rho = \frac{n\hat\rho + 0.5\,z_{1-\alpha/2}^2}{n + z_{1-\alpha/2}^2}.$$
The method adds $0.5\,z_{1-\alpha/2}^2$ "0's" and $0.5\,z_{1-\alpha/2}^2$ "1's" to the sample, so the sample size increases by $z_{1-\alpha/2}^2$. Then the large sample $100(1-\alpha)\%$ Agresti Coull CI for $\rho$ is
$$\tilde\rho \pm z_{1-\alpha/2}\sqrt{\frac{\tilde\rho(1-\tilde\rho)}{\tilde n}}.$$

Now let $Y_1, \dots, Y_n$ be independent bin($m_i$, $\rho$) random variables, let $W = \sum Y_i \sim \mathrm{bin}\left(\sum m_i, \rho\right)$ and let $n_w = \sum m_i$. Often $m_i \equiv 1$ and then $n_w = n$. Let $P(F_{d_1,d_2} \le F_{d_1,d_2}(\alpha)) = \alpha$ where $F_{d_1,d_2}$ has an F distribution with $d_1$ and $d_2$ degrees of freedom. Assume $W = w$ is observed. Then the Clopper Pearson exact $100(1-\alpha)\%$ CI for $\rho$ is
$$\left(0,\; \frac{1}{1 + n_w\,F_{2n_w,2}(\alpha)}\right) \text{ for } w = 0, \qquad \left(\frac{n_w}{n_w + F_{2,2n_w}(1-\alpha)},\; 1\right) \text{ for } w = n_w,$$
and $(\rho_L, \rho_U)$ for $0 < w < n_w$ with
$$\rho_L = \frac{w}{w + (n_w - w + 1)\,F_{2(n_w-w+1),\,2w}(1-\alpha/2)}$$
and
$$\rho_U = \frac{w+1}{w + 1 + (n_w - w)\,F_{2(n_w-w),\,2(w+1)}(\alpha/2)}.$$
The exact CI is conservative: the actual coverage $1 - \delta_n \ge 1 - \alpha =$ the nominal coverage. This interval performs well if $\rho$ is very close to 0 or 1.
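A minimal Python sketch of the Agresti Coull CI, clamping the endpoints to $[0, 1]$ as in the simulation of Problem 9.2; the values of $w$ and $n$ below are illustrative:

```python
# Sketch of the Agresti Coull CI of Example 9.16: w successes in n trials.
from math import sqrt
from statistics import NormalDist

def agresti_coull(w, n, alpha=0.05):
    z = NormalDist().inv_cdf(1 - alpha / 2)
    n_t = n + z**2                        # n-tilde: add z^2 pseudo-observations
    p_t = (w + 0.5 * z**2) / n_t          # rho-tilde: half "0's", half "1's"
    half = z * sqrt(p_t * (1 - p_t) / n_t)
    # Clamp to [0, 1] as in Problem 9.2's (max(0, L), min(1, U)) modification.
    return (max(0.0, p_t - half), min(1.0, p_t + half))

print(agresti_coull(2, 25))
```

For small $w$ the classical CI's lower endpoint can go negative, while the pseudo-observations pull the Agresti Coull interval toward $1/2$ and keep its coverage close to nominal.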

The classical interval should only be used if it agrees with the Agresti Coull interval. See Problem 9.2.

Example 9.17. Let $\hat\rho =$ (number of "successes")/$n$. Consider taking a simple random sample of size $n$ from a finite population of known size $N$. Then the classical finite population large sample $100(1-\alpha)\%$ CI for $\rho$ is
$$\hat\rho \pm z_{1-\alpha/2}\sqrt{\frac{\hat\rho(1-\hat\rho)}{n-1}\left(1 - \frac{n}{N}\right)} = \hat\rho \pm z_{1-\alpha/2}\,SE(\hat\rho) \tag{9.23}$$
where $P(Z \le z_{1-\alpha/2}) = 1 - \alpha/2$ if $Z \sim N(0,1)$. Let $\tilde n = n + z_{1-\alpha/2}^2$ and
$$\tilde\rho = \frac{n\hat\rho + 0.5\,z_{1-\alpha/2}^2}{n + z_{1-\alpha/2}^2}.$$
Heuristically, the method adds $0.5\,z_{1-\alpha/2}^2$ "0's" and $0.5\,z_{1-\alpha/2}^2$ "1's" to the sample, so the sample size increases by $z_{1-\alpha/2}^2$. Then a large sample $100(1-\alpha)\%$ Agresti Coull type finite population CI for $\rho$ is
$$\tilde\rho \pm z_{1-\alpha/2}\sqrt{\frac{\tilde\rho(1-\tilde\rho)}{\tilde n}\left(1 - \frac{\tilde n}{N}\right)} = \tilde\rho \pm z_{1-\alpha/2}\,SE(\tilde\rho). \tag{9.24}$$
Notice that a 95% CI uses $z_{1-\alpha/2} = 1.96 \approx 2$.

For data from a finite population, large sample theory gives useful approximations as $N$ and $n \to \infty$ with $n/N \to 0$. Hence theory suggests that the Agresti Coull CI should have better coverage than the classical CI if $\rho$ is near 0 or 1, if the sample size $n$ is moderate, and if $n$ is small compared to the population size $N$. The coverage of the classical and Agresti Coull CIs should be very similar if $n$ is large enough but small compared to $N$, which may only be possible if $N$ is enormous. As $n$ increases to $N$, $\hat\rho$ goes to $\rho$, $SE(\hat\rho)$ goes to 0, and the classical CI may perform well. $SE(\tilde\rho)$ also goes to 0, but $\tilde\rho$ is a biased estimator of $\rho$ and the Agresti Coull CI will not perform well if $n/N$ is too large. See Problem 9.4.

Example 9.18. If $Y_1, \dots, Y_n$ are iid Weibull($\phi$, $\lambda$), then the MLE $(\hat\phi, \hat\lambda)$ must be found before obtaining CIs. The likelihood is
$$L(\phi, \lambda) = \frac{\phi^n}{\lambda^n}\prod_{i=1}^n y_i^{\phi-1}\exp\left(-\frac{1}{\lambda}\sum_{i=1}^n y_i^\phi\right),$$

and the log likelihood is
$$\log(L(\phi, \lambda)) = n\log(\phi) - n\log(\lambda) + (\phi - 1)\sum_{i=1}^n \log(y_i) - \frac{1}{\lambda}\sum_{i=1}^n y_i^\phi.$$
Hence
$$\frac{\partial}{\partial\lambda}\log(L(\phi, \lambda)) = \frac{-n}{\lambda} + \frac{\sum_{i=1}^n y_i^\phi}{\lambda^2} \overset{\text{set}}{=} 0,$$
or $\sum y_i^\phi = n\lambda$, or
$$\hat\lambda = \frac{\sum_{i=1}^n y_i^{\hat\phi}}{n}.$$
Now
$$\frac{\partial}{\partial\phi}\log(L(\phi, \lambda)) = \frac{n}{\phi} + \sum_{i=1}^n \log(y_i) - \frac{1}{\lambda}\sum_{i=1}^n y_i^\phi \log(y_i) \overset{\text{set}}{=} 0,$$
so
$$\frac{n}{\phi} + \sum_{i=1}^n \log(y_i) - \frac{1}{\lambda}\sum_{i=1}^n y_i^\phi \log(y_i) = 0,$$
or
$$\hat\phi = \frac{n}{\dfrac{1}{\hat\lambda}\sum_{i=1}^n y_i^{\hat\phi}\log(y_i) - \sum_{i=1}^n \log(y_i)}.$$
One way to find the MLE is to use the iteration
$$\hat\lambda_k = \frac{\sum_{i=1}^n y_i^{\hat\phi_{k-1}}}{n} \quad\text{and}\quad \hat\phi_k = \frac{n}{\dfrac{1}{\hat\lambda_k}\sum_{i=1}^n y_i^{\hat\phi_{k-1}}\log(y_i) - \sum_{i=1}^n \log(y_i)}.$$
Since $W = \log(Y) \sim \mathrm{SEV}(\theta = \log(\lambda^{1/\phi}), \sigma = 1/\phi)$, let
$$\hat\sigma_R = \mathrm{MAD}(W_1, \dots, W_n)/0.767049$$
and
$$\hat\theta_R = \mathrm{MED}(W_1, \dots, W_n) - \log(\log(2))\,\hat\sigma_R.$$
Then $\hat\phi_0 = 1/\hat\sigma_R$ and $\hat\lambda_0 = \exp(\hat\theta_R/\hat\sigma_R)$. The iteration might be run until both $|\hat\phi_k - \hat\phi_{k-1}| < 10^{-6}$ and $|\hat\lambda_k - \hat\lambda_{k-1}| < 10^{-6}$, then take $(\hat\phi, \hat\lambda) = (\hat\phi_k, \hat\lambda_k)$.
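The fixed point iteration and the robust SEV-based starting values can be sketched in Python (the text's own simulations use R/Splus). The data below are illustrative; such fixed point iterations need not converge, so the tolerance and iteration cap here are assumptions:

```python
# Sketch of the fixed point iteration for the Weibull MLE of Example 9.18,
# with robust starting values based on W = log(Y); the data are illustrative.
from math import exp, log
from statistics import median

def weibull_mle(y, tol=1e-6, max_iter=10000):
    w = [log(yi) for yi in y]
    med = median(w)
    mad = median(abs(wi - med) for wi in w)        # MAD(W_1, ..., W_n)
    sig = mad / 0.767049                           # sigma-hat_R
    theta = med - log(log(2)) * sig                # theta-hat_R
    phi, lam = 1.0 / sig, exp(theta / sig)         # phi_0, lambda_0
    n, slog = len(y), sum(w)
    for _ in range(max_iter):
        lam_new = sum(yi**phi for yi in y) / n
        phi_new = n / (sum(yi**phi * log(yi) for yi in y) / lam_new - slog)
        if abs(phi_new - phi) < tol and abs(lam_new - lam) < tol:
            return phi_new, lam_new
        phi, lam = phi_new, lam_new
    return phi, lam                                # may not have converged

print(weibull_mle([0.5, 1.0, 1.5, 2.0, 0.8, 1.2]))
```

On this data the iterates oscillate but contract, so the loop terminates well inside the cap; in general, a returned $(\hat\phi, \hat\lambda)$ should still be checked against the likelihood equations.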

By Example 8.13,
$$\sqrt{n}\left(\begin{pmatrix}\hat\lambda\\ \hat\phi\end{pmatrix} - \begin{pmatrix}\lambda\\ \phi\end{pmatrix}\right) \xrightarrow{D} N_2(\boldsymbol{0}, \boldsymbol{\Sigma})$$
where
$$\boldsymbol{\Sigma} = \begin{pmatrix} 1.109\,\lambda^2\left(1 + 0.4635\log(\lambda) + 0.5482\,(\log(\lambda))^2\right) & 0.257\,\phi\lambda + 0.608\,\lambda\phi\log(\lambda)\\ 0.257\,\phi\lambda + 0.608\,\lambda\phi\log(\lambda) & 0.608\,\phi^2 \end{pmatrix}.$$
Thus
$$1 - \alpha \approx P\left(-z_{1-\alpha/2}\sqrt{0.608}\,\hat\phi < \sqrt{n}\,(\hat\phi - \phi) < z_{1-\alpha/2}\sqrt{0.608}\,\hat\phi\right),$$
and a large sample $100(1-\alpha)\%$ CI for $\phi$ is
$$\hat\phi \pm z_{1-\alpha/2}\,\hat\phi\sqrt{0.608/n}. \tag{9.25}$$
Similarly, a large sample $100(1-\alpha)\%$ CI for $\lambda$ is
$$\hat\lambda \pm \frac{z_{1-\alpha/2}}{\sqrt{n}}\sqrt{1.109\,\hat\lambda^2\left[1 + 0.4635\log(\hat\lambda) + 0.5482\,(\log(\hat\lambda))^2\right]}. \tag{9.26}$$
In simulations, for small $n$ the number of iterations for the MLE to converge could be in the thousands, and the coverage of the large sample CIs is poor for $n < 50$. See Problem 9.7.

Iterating the likelihood equations until "convergence" to a point $\hat{\boldsymbol\theta}$ is called a fixed point algorithm. Such algorithms may not converge, so check that $\hat{\boldsymbol\theta}$ satisfies the likelihood equations. Other methods such as Newton's method may perform better.

Newton's method is used to solve $\boldsymbol{g}(\boldsymbol\theta) = \boldsymbol{0}$ for $\boldsymbol\theta$, where the solution is called $\hat{\boldsymbol\theta}$, and uses
$$\boldsymbol\theta_{k+1} = \boldsymbol\theta_k - \left[\boldsymbol{D}_{\boldsymbol{g}(\boldsymbol\theta_k)}\right]^{-1}\boldsymbol{g}(\boldsymbol\theta_k) \tag{9.27}$$
where
$$\boldsymbol{D}_{\boldsymbol{g}(\boldsymbol\theta)} = \begin{pmatrix} \dfrac{\partial}{\partial\theta_1}g_1(\boldsymbol\theta) & \cdots & \dfrac{\partial}{\partial\theta_p}g_1(\boldsymbol\theta)\\ \vdots & & \vdots\\ \dfrac{\partial}{\partial\theta_1}g_p(\boldsymbol\theta) & \cdots & \dfrac{\partial}{\partial\theta_p}g_p(\boldsymbol\theta) \end{pmatrix}.$$
If the MLE is the solution of the likelihood equations, then use $\boldsymbol{g}(\boldsymbol\theta) = (g_1(\boldsymbol\theta), \dots, g_p(\boldsymbol\theta))^T$ where
$$g_i(\boldsymbol\theta) = \frac{\partial}{\partial\theta_i}\log(L(\boldsymbol\theta)).$$
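A minimal Python sketch of iteration (9.27) for $p = 2$, with the $2\times 2$ Jacobian inverted in closed form; the toy system below is illustrative, not a likelihood from the text (it has a root at $\boldsymbol\theta = (1, 2)$):

```python
# Sketch of Newton's method (9.27) for p = 2; the 2x2 matrix D_g(theta_k)
# is inverted directly via the adjugate formula.
def newton_2d(g, jac, theta, steps=50, tol=1e-12):
    for _ in range(steps):
        g1, g2 = g(theta)
        (a, b), (c, d) = jac(theta)          # D_g(theta_k)
        det = a * d - b * c
        # theta_{k+1} = theta_k - D^{-1} g(theta_k)
        theta = (theta[0] - ( d * g1 - b * g2) / det,
                 theta[1] - (-c * g1 + a * g2) / det)
        if abs(g1) < tol and abs(g2) < tol:
            break
    return theta

# Toy system (not a likelihood): g(theta) = 0 at theta = (1, 2).
g = lambda t: (t[0]**2 + t[1] - 3, t[0] + t[1]**2 - 5)
jac = lambda t: ((2 * t[0], 1.0), (1.0, 2 * t[1]))
print(newton_2d(g, jac, (2.0, 1.0)))
```

For a likelihood, $g$ would be the score vector and the Jacobian the matrix of second partials of $\log(L(\boldsymbol\theta))$, as in Example 9.19 below.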

Let $\boldsymbol\theta_0$ be an initial estimator, such as the method of moments estimator of $\boldsymbol\theta$. Let $\boldsymbol{D} \equiv \boldsymbol{D}_{\boldsymbol{g}(\boldsymbol\theta)}$. Then
$$D_{ij} = \frac{\partial}{\partial\theta_j}g_i(\boldsymbol\theta) = \frac{\partial^2}{\partial\theta_i\,\partial\theta_j}\log(L(\boldsymbol\theta)) = \sum_{k=1}^n \frac{\partial^2}{\partial\theta_i\,\partial\theta_j}\log(f(x_k|\boldsymbol\theta)),$$
and
$$\frac{1}{n}D_{ij} = \frac{1}{n}\sum_{k=1}^n \frac{\partial^2}{\partial\theta_i\,\partial\theta_j}\log(f(x_k|\boldsymbol\theta)) \xrightarrow{D} E\left[\frac{\partial^2}{\partial\theta_i\,\partial\theta_j}\log(f(X|\boldsymbol\theta))\right].$$
Newton's method converges if the initial estimator is sufficiently close to the solution, but may diverge otherwise. Hence $\sqrt{n}$ consistent initial estimators are recommended. Newton's method is also popular because, if the partial derivative and integration operations can be interchanged, then
$$\frac{-1}{n}\boldsymbol{D}_{\boldsymbol{g}(\boldsymbol\theta)} \xrightarrow{D} \boldsymbol{I}(\boldsymbol\theta). \tag{9.28}$$
For example, the regularity conditions hold for a kP-REF by Proposition 8.20. Then a $100(1-\alpha)\%$ large sample CI for $\theta_i$ is
$$\hat\theta_i \pm z_{1-\alpha/2}\sqrt{-D^{-1}_{ii}} \tag{9.29}$$
where
$$\boldsymbol{D}^{-1} = \left[\boldsymbol{D}_{\boldsymbol{g}(\hat{\boldsymbol\theta})}\right]^{-1}.$$
This result follows because $-D^{-1}_{ii} \approx [\boldsymbol{I}^{-1}(\hat{\boldsymbol\theta})]_{ii}/n$.

Example 9.19. Problem 9.8 simulates CIs for the Rayleigh($\mu$, $\sigma$) distribution of the form (9.29), although no check has been made on whether (9.28) holds for the Rayleigh distribution (which is not a 2P-REF).
$$L(\mu, \sigma) = \prod_{i=1}^n \frac{y_i - \mu}{\sigma^2}\,\exp\left[-\frac{1}{2\sigma^2}\sum_{i=1}^n (y_i - \mu)^2\right].$$
Notice that for fixed $\sigma$, $L(Y_{(1)}, \sigma) = 0$. Hence the MLE $\hat\mu < Y_{(1)}$. Now the log likelihood is
$$\log(L(\mu, \sigma)) = \sum_{i=1}^n \log(y_i - \mu) - 2n\log(\sigma) - \frac{1}{2}\sum_{i=1}^n \frac{(y_i - \mu)^2}{\sigma^2}.$$

Hence
$$g_1(\mu, \sigma) = \frac{\partial}{\partial\mu}\log(L(\mu, \sigma)) = -\sum_{i=1}^n \frac{1}{y_i - \mu} + \frac{1}{\sigma^2}\sum_{i=1}^n (y_i - \mu) \overset{\text{set}}{=} 0,$$
and
$$g_2(\mu, \sigma) = \frac{\partial}{\partial\sigma}\log(L(\mu, \sigma)) = \frac{-2n}{\sigma} + \frac{1}{\sigma^3}\sum_{i=1}^n (y_i - \mu)^2 \overset{\text{set}}{=} 0,$$
which has solution
$$\hat\sigma^2 = \frac{1}{2n}\sum_{i=1}^n (Y_i - \hat\mu)^2. \tag{9.30}$$
To obtain initial estimators, let $\hat\sigma_M = \sqrt{S^2/0.429204}$ and $\hat\mu_M = \overline{Y} - 1.253314\,\hat\sigma_M$. These would be the method of moments estimators if $S_M^2 = \frac{1}{n}\sum (Y_i - \overline{Y})^2$ was used instead of the sample variance $S^2$. Then use $\mu_0 = \min(\hat\mu_M, 2Y_{(1)} - \hat\mu_M)$ and $\sigma_0 = \sqrt{\sum (Y_i - \mu_0)^2/(2n)}$.

Now $\boldsymbol\theta = (\mu, \sigma)^T$ and
$$\boldsymbol{D} \equiv \boldsymbol{D}_{\boldsymbol{g}(\boldsymbol\theta)} = \begin{pmatrix} \dfrac{\partial}{\partial\mu}g_1(\boldsymbol\theta) & \dfrac{\partial}{\partial\sigma}g_1(\boldsymbol\theta)\\ \dfrac{\partial}{\partial\mu}g_2(\boldsymbol\theta) & \dfrac{\partial}{\partial\sigma}g_2(\boldsymbol\theta) \end{pmatrix} = \begin{pmatrix} -\displaystyle\sum_{i=1}^n \frac{1}{(y_i-\mu)^2} - \frac{n}{\sigma^2} & \dfrac{-2}{\sigma^3}\displaystyle\sum_{i=1}^n (y_i - \mu)\\ \dfrac{-2}{\sigma^3}\displaystyle\sum_{i=1}^n (y_i - \mu) & \dfrac{2n}{\sigma^2} - \dfrac{3}{\sigma^4}\displaystyle\sum_{i=1}^n (y_i - \mu)^2 \end{pmatrix}.$$
So
$$\boldsymbol\theta_{k+1} = \boldsymbol\theta_k - \begin{pmatrix} -\sum \dfrac{1}{(y_i-\mu_k)^2} - \dfrac{n}{\sigma_k^2} & \dfrac{-2}{\sigma_k^3}\sum (y_i - \mu_k)\\ \dfrac{-2}{\sigma_k^3}\sum (y_i - \mu_k) & \dfrac{2n}{\sigma_k^2} - \dfrac{3}{\sigma_k^4}\sum (y_i - \mu_k)^2 \end{pmatrix}^{-1} \boldsymbol{g}(\boldsymbol\theta_k)$$
where
$$\boldsymbol{g}(\boldsymbol\theta_k) = \begin{pmatrix} -\sum \dfrac{1}{y_i - \mu_k} + \dfrac{1}{\sigma_k^2}\sum (y_i - \mu_k)\\ \dfrac{-2n}{\sigma_k} + \dfrac{1}{\sigma_k^3}\sum (y_i - \mu_k)^2 \end{pmatrix}.$$

This formula could be iterated for 100 steps, resulting in $\boldsymbol\theta_{101} = (\mu_{101}, \sigma_{101})^T$. Then take $\hat\mu = \min(\mu_{101}, 2Y_{(1)} - \mu_{101})$ and
$$\hat\sigma = \sqrt{\frac{1}{2n}\sum_{i=1}^n (Y_i - \hat\mu)^2}.$$
Then $\hat{\boldsymbol\theta} = (\hat\mu, \hat\sigma)^T$, and compute $\boldsymbol{D} \equiv \boldsymbol{D}_{\boldsymbol{g}(\hat{\boldsymbol\theta})}$. Then (assuming (9.28) holds) a $100(1-\alpha)\%$ large sample CI for $\mu$ is
$$\hat\mu \pm z_{1-\alpha/2}\sqrt{-D^{-1}_{11}}$$
and a $100(1-\alpha)\%$ large sample CI for $\sigma$ is
$$\hat\sigma \pm z_{1-\alpha/2}\sqrt{-D^{-1}_{22}}.$$

Example 9.20. Assume that $Y_1, \dots, Y_n$ are iid discrete uniform(1, $\eta$) where $\eta$ is an integer. For example, each $Y_i$ could be drawn with replacement from a population of $\eta$ tanks with serial numbers 1, 2, ..., $\eta$. Then $Y_i$ would be the serial number observed, and the goal would be to estimate the population size $\eta =$ number of tanks. Then $P(Y_i = i) = 1/\eta$ for $i = 1, \dots, \eta$, and the CDF of $Y$ is
$$F(y) = \sum_{i=1}^{\lfloor y \rfloor} \frac{1}{\eta} = \frac{\lfloor y \rfloor}{\eta}$$
for $1 \le y \le \eta$. Here $\lfloor y \rfloor$ is the greatest integer function, e.g., $\lfloor 7.7 \rfloor = 7$.

Now let $Z_i = Y_i/\eta$, which has CDF
$$F_Z(t) = P(Z \le t) = P(Y \le t\eta) = \frac{\lfloor t\eta \rfloor}{\eta} \approx t$$
for $0 < t < 1$. Let $Z_{(n)} = Y_{(n)}/\eta = \max(Z_1, \dots, Z_n)$. Then
$$F_{Z_{(n)}}(t) = P\left(\frac{Y_{(n)}}{\eta} \le t\right) = \left(\frac{\lfloor t\eta \rfloor}{\eta}\right)^n$$
for $1/\eta < t < 1$.

Want $c_n$ so that
$$P\left(c_n \le \frac{Y_{(n)}}{\eta} \le 1\right) = 1 - \alpha$$

for $0 < \alpha < 1$. So
$$1 - F_{Z_{(n)}}(c_n) = 1 - \alpha \quad\text{or}\quad \left(\frac{\lfloor c_n\eta \rfloor}{\eta}\right)^n = \alpha \quad\text{or}\quad \frac{\lfloor c_n\eta \rfloor}{\eta} = \alpha^{1/n}.$$
The solution may not exist, but $c_n - 1/\eta \le \alpha^{1/n} \le c_n$. Take $c_n = \alpha^{1/n}$. Then
$$\left[\,Y_{(n)},\; \frac{Y_{(n)}}{\alpha^{1/n}}\right)$$
is a CI for $\eta$ that has coverage slightly less than $100(1-\alpha)\%$ for small $n$, but the coverage converges in probability to 1 as $n \to \infty$.

For small $n$ the midpoint of the 95% CI might be a better estimator of $\eta$ than $Y_{(n)}$. The left endpoint is closed since $Y_{(n)}$ is a consistent estimator of $\eta$; if the endpoint was open, coverage would go to 0 instead of 1. It can be shown that $n$ (CI length) converges to $-\eta\log(\alpha)$ in probability. Hence $n$ (length of the 95% CI) $\approx 3\eta$. Problem 9.9 provides simulations that suggest that the 95% CI coverage and length are close to the asymptotic values for $n \ge 10$.

Example 9.21. Assume that $Y_1, \dots, Y_n$ are iid uniform(0, $\theta$). Let $Z_i = Y_i/\theta \sim U(0,1)$, which has cdf $F_Z(t) = t$ for $0 < t < 1$. Let $Z_{(n)} = Y_{(n)}/\theta = \max(Z_1, \dots, Z_n)$. Then
$$F_{Z_{(n)}}(t) = P\left(\frac{Y_{(n)}}{\theta} \le t\right) = t^n$$
for $0 < t < 1$. Want $c_n$ so that
$$P\left(c_n \le \frac{Y_{(n)}}{\theta} \le 1\right) = 1 - \alpha$$
for $0 < \alpha < 1$. So
$$1 - F_{Z_{(n)}}(c_n) = 1 - \alpha \quad\text{or}\quad c_n^n = \alpha \quad\text{or}\quad c_n = \alpha^{1/n}.$$

Then $\left(Y_{(n)},\; Y_{(n)}\,\alpha^{-1/n}\right)$ is an exact $100(1-\alpha)\%$ CI for $\theta$. It can be shown that $n$ (CI length) converges to $-\theta\log(\alpha)$ in probability.

If $Y_1, \dots, Y_n$ are iid $U(\theta_1, \theta_2)$ where $\theta_1$ is known, then the $Y_i - \theta_1$ are iid $U(0, \theta_2 - \theta_1)$ and
$$\left(Y_{(n)} - \theta_1,\; (Y_{(n)} - \theta_1)\,\alpha^{-1/n}\right)$$
is a $100(1-\alpha)\%$ CI for $\theta_2 - \theta_1$. Thus if $\theta_1$ is known, then
$$\left(Y_{(n)},\; \theta_1\left(1 - \alpha^{-1/n}\right) + Y_{(n)}\,\alpha^{-1/n}\right)$$
is a $100(1-\alpha)\%$ CI for $\theta_2$. Notice that if $\theta_1$ is unknown, $Y_{(n)} > 0$ and $Y_{(1)} < 0$, then replacing $\theta_1(1 - \alpha^{-1/n})$ by 0 increases the coverage.

Example 9.22. Assume $Y_1, \dots, Y_n$ are iid with mean $\mu$ and variance $\sigma^2$. Bickel and Doksum (2007, p. 279) suggest that
$$W_n = \sqrt{n}\left[\frac{(n-1)S^2}{n\sigma^2} - 1\right]$$
can be used as an asymptotic pivot for $\sigma^2$ if $E(Y^4) < \infty$. Notice that
$$W_n = \sqrt{n}\left[\frac{\sum (Y_i - \overline{Y})^2}{n\sigma^2} - 1\right] = \sqrt{n}\left[\frac{1}{n}\sum \left(\frac{Y_i - \mu}{\sigma}\right)^2 - \frac{(\overline{Y} - \mu)^2}{\sigma^2} - 1\right] = X_n - Z_n$$
where
$$X_n = \sqrt{n}\left[\frac{1}{n}\sum \left(\frac{Y_i - \mu}{\sigma}\right)^2 - 1\right] \quad\text{and}\quad Z_n = \sqrt{n}\,\frac{(\overline{Y} - \mu)^2}{\sigma^2}.$$
Since $n(\overline{Y} - \mu)^2/\sigma^2 \xrightarrow{D} \chi^2_1$, the term $Z_n \xrightarrow{D} 0$. Now $X_n = \sqrt{n}\,(\overline{U} - 1) \xrightarrow{D} N(0, \tau)$ by the CLT, since $U_i = [(Y_i - \mu)/\sigma]^2$ has mean $E(U_i) = 1$ and variance
$$V(U_i) = \tau = E(U_i^2) - (E(U_i))^2 = \frac{E[(Y_i - \mu)^4]}{\sigma^4} - 1 = \kappa + 2$$
where $\kappa$ is the kurtosis of $Y_i$. Thus $W_n \xrightarrow{D} N(0, \tau)$.

Hence
$$1 - \alpha \approx P\left(-z_{1-\alpha/2} < \frac{W_n}{\sqrt{\tau}} < z_{1-\alpha/2}\right) = P\left(-z_{1-\alpha/2}\sqrt{\tau} < W_n < z_{1-\alpha/2}\sqrt{\tau}\right)$$
$$= P\left(-z_{1-\alpha/2}\sqrt{\tau} < \sqrt{n}\left[\frac{(n-1)S^2}{n\sigma^2} - 1\right] < z_{1-\alpha/2}\sqrt{\tau}\right)$$
$$= P\left(n - z_{1-\alpha/2}\sqrt{n\tau} < \frac{(n-1)S^2}{\sigma^2} < n + z_{1-\alpha/2}\sqrt{n\tau}\right).$$
Hence a large sample $100(1-\alpha)\%$ CI for $\sigma^2$ is
$$\left(\frac{(n-1)S^2}{n + z_{1-\alpha/2}\sqrt{n\hat\tau}},\; \frac{(n-1)S^2}{n - z_{1-\alpha/2}\sqrt{n\hat\tau}}\right)$$
where
$$\hat\tau = \frac{\frac{1}{n}\sum_{i=1}^n (Y_i - \overline{Y})^4}{S^4} - 1.$$
Notice that this CI needs $n > z_{1-\alpha/2}\sqrt{n\hat\tau}$ (equivalently $n > z_{1-\alpha/2}^2\,\hat\tau$) for the right endpoint to be positive. It can be shown that $\sqrt{n}$ (CI length) converges to $2\sigma^2 z_{1-\alpha/2}\sqrt{\tau}$ in probability.

Problem 9.10 uses an asymptotically equivalent $100(1-\alpha)\%$ CI of the form
$$\left(\frac{a\,S^2}{n + t_{n-1,1-\alpha/2}\sqrt{n\hat\tau}},\; \frac{b\,S^2}{n - t_{n-1,1-\alpha/2}\sqrt{n\hat\tau}}\right)$$
where $a$ and $b$ depend on $\hat\tau$. The goal was to make a 95% CI with good coverage for a wide variety of distributions (with 4th moments) for $n \ge 100$. The price is that the CI is too long for some of the distributions with small kurtosis.

The $N(\mu, \sigma^2)$ distribution has $\tau = 2$, while the $\mathrm{EXP}(\lambda)$ distribution has $\sigma^2 = \lambda^2$ and $\tau = 8$. The quantity $\tau$ is small for the uniform distribution but large for the lognormal LN(0,1) distribution. By the binomial theorem, if $E(Y^4)$ exists and $E(Y) = \mu$, then
$$E(Y - \mu)^4 = \sum_{j=0}^{4}\binom{4}{j}E[Y^j](-\mu)^{4-j} = \mu^4 - 4\mu^3 E(Y) + 6\mu^2\left(V(Y) + [E(Y)]^2\right) - 4\mu E(Y^3) + E(Y^4).$$
This fact can be useful for computing
$$\tau = \frac{E[(Y_i - \mu)^4]}{\sigma^4} - 1 = \kappa + 2.$$
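A minimal Python sketch of this large sample CI for $\sigma^2$, with $\hat\tau$ computed from the data; the data below are illustrative:

```python
# Sketch of the large sample CI for sigma^2 of Example 9.22, using the
# estimated scaled fourth moment tau-hat; the data are illustrative.
from math import sqrt
from statistics import NormalDist, mean, variance

def var_interval(y, alpha=0.05):
    n = len(y)
    s2 = variance(y)                 # S^2 with divisor n - 1
    ybar = mean(y)
    tau = sum((yi - ybar)**4 for yi in y) / (n * s2**2) - 1.0   # tau-hat
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half = z * sqrt(n * tau)         # needs n > half for a positive right end
    return ((n - 1) * s2 / (n + half), (n - 1) * s2 / (n - half))

print(var_interval(list(range(1, 31))))
```

For the integers 1 through 30 ($S^2 = 77.5$, small $\hat\tau$ since the data are uniform-like), both endpoints are finite and bracket $S^2$.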

9.3 Complements

Guenther (1969) is a useful reference for confidence intervals. Agresti and Coull (1998) and Brown, Cai and DasGupta (2001, 2002) discuss CIs for a binomial proportion. Agresti and Caffo (2000) discuss CIs for the difference of two binomial proportions $\rho_1 - \rho_2$ obtained from 2 independent samples. Barker (2002) and Byrne and Kabaila (2005) discuss CIs for Poisson($\theta$) data. Brown, Cai and DasGupta (2003) discuss CIs for several discrete exponential families. Abuhassan and Olive (2008) consider CIs for some transformed random variables.

A comparison of CIs with other intervals (such as prediction intervals) is given in Vardeman (1992). Newton's method is described, for example, in Peressini, Sullivan and Uhl (1988, p. 85).

9.4 Problems

PROBLEMS WITH AN ASTERISK * ARE ESPECIALLY USEFUL. Refer to Chapter 10 for the pdf or pmf of the distributions in the problems below.

9.1. (Aug. 2003 QUAL): Suppose that $X_1, \dots, X_n$ are iid with the Weibull distribution, that is, the common pdf is
$$f(x) = \begin{cases} \dfrac{b}{a}\,x^{b-1}e^{-\frac{x^b}{a}}, & 0 < x,\\ 0, & \text{elsewhere,}\end{cases}$$
where $a$ is the unknown parameter, but $b > 0$ is assumed known.

a) Find a minimal sufficient statistic for $a$.

b) Assume $n = 10$. Use the Chi-Square Table and the minimal sufficient statistic to find a 95% two sided confidence interval for $a$.

R/Splus Problems

Use the command source("A:/sipack.txt") to download the functions. See Section 11.1. Typing the name of the sipack function, e.g. accisimf, will display the code for the function. Use the args command, e.g. args(accisimf), to display the needed arguments for the function.

9.2. Let Y_1, ..., Y_n be iid binomial(1, ρ) random variables. From the website (www.math.siu.edu/olive/sipack.txt), enter the R/Splus function bcisim into R/Splus. This function simulates the 3 CIs (classical, modified and exact) from Example 9.16, but changes the CI (L, U) to (max(0, L), min(1, U)) to get shorter lengths. To run the function for n = 10 and ρ = p = 0.001, enter the R/Splus command bcisim(n=10,p=0.001). Make a table with header "n p ccov clen accov aclen ecov elen." Fill the table for n = 10 and p = 0.001, 0.01, 0.5, 0.99, 0.999 and then repeat for n = 100. The "cov" is the proportion of 500 runs where the CI contained p, and the nominal coverage is 0.95. A coverage between 0.92 and 0.98 gives little evidence that the true coverage differs from the nominal coverage of 0.95. A coverage greater than 0.98 suggests that the CI is conservative while a coverage less than 0.92 suggests that the CI is liberal. Typically we want the true coverage ≥ the nominal coverage, so conservative intervals are better than liberal CIs. The "len" is the average scaled length of the CI and for large n should be near 2(1.96)√(p(1 − p)). From your table, is the classical estimator or the Agresti Coull CI better? When is the exact interval good? Explain briefly.

9.3. Let X_1, ..., X_n be iid Poisson(θ) random variables. From the website (www.math.siu.edu/olive/sipack.txt), enter the R/Splus function poiscisim into R/Splus. This function simulates the 3 CIs (classical, modified and exact) from Example 9.15. To run the function for n = 100 and θ = 5, enter the R/Splus command poiscisim(theta=5). Make a table with header "theta ccov clen mcov mlen ecov elen." Fill the table for theta = 0.001, 0.1, 1.0, and 5. The "cov" is the proportion of 500 runs where the CI contained θ and the nominal coverage is 0.95. A coverage between 0.92 and 0.98 gives little evidence that the true coverage differs from the nominal coverage of 0.95. A coverage greater than 0.98 suggests that the CI is conservative while a coverage less than 0.92 suggests that the CI is liberal (too short).
Typically we want the true coverage ≥ the nominal coverage, so conservative intervals are better than liberal CIs. The "len" is the average scaled length of the CI and for large θ should be near 2(1.96)√θ for the classical and modified CIs. From your table, is the classical CI or the modified CI or the exact CI better? Explain briefly. Warning: in a 1999 version of R, there was a bug for the Poisson random number generator for θ ≥ 10. The 2007 version of R seems to work.
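Coverage simulations like those in Problems 9.2 and 9.3 can be previewed without sipack; the sketch below (plain Python, illustrative only; wald_ci and coverage are hypothetical names) runs the same kind of study for the classical binomial CI, truncated to [0, 1] as in Problem 9.2, with 500 runs.

```python
import math
import random

def wald_ci(k, n, z=1.959964):
    """Classical (Wald) binomial CI, truncated to [0, 1]."""
    phat = k / n
    half = z * math.sqrt(phat * (1.0 - phat) / n)
    return max(0.0, phat - half), min(1.0, phat + half)

def coverage(n, p, nruns=500, seed=1):
    """Proportion of runs whose CI contains p, plus the scaled average length."""
    random.seed(seed)
    hits, total_len = 0, 0.0
    for _ in range(nruns):
        k = sum(random.random() < p for _ in range(n))
        lo, hi = wald_ci(k, n)
        hits += lo <= p <= hi
        total_len += hi - lo
    # scaled length sqrt(n) * (average length) should be near 2(1.96)sqrt(p(1-p))
    return hits / nruns, math.sqrt(n) * total_len / nruns

cov_mid, slen_mid = coverage(n=100, p=0.5)   # coverage near the nominal 0.95
cov_small, _ = coverage(n=100, p=0.001)      # classical CI is liberal for p near 0
print(cov_mid, slen_mid, cov_small)
```

The second call illustrates why the problems ask about extreme p: for p = 0.001 and n = 100 the sample is usually all zeros, the truncated CI collapses to a point, and the simulated coverage falls far below 0.95.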

9.4. This problem simulates the CIs from Example 9.17.
a) Download the function accisimf into R/Splus.
b) The function will be used to compare the classical and Agresti Coull 95% CIs when the population size N = 500 and p is close to 0.01. The function generates such a population, then selects 5000 independent simple random samples from the population. The 5000 CIs are made for both types of intervals, and the number of times the true population p is in the ith CI is counted. The simulated coverage is this count divided by 5000 (the number of CIs). The nominal coverage is 0.95. To run the function for n = 50 and p ≈ 0.01, enter the command accisimf(n=50,p=0.01). Make a table with header "n p ccov accov." Fill the table for n = 50 and then repeat for n = 100, 150, 200, 250, 300, 350, 400 and 450. The "cov" is the proportion of 5000 runs where the CI contained p and the nominal coverage is 0.95. For 5000 runs, an observed coverage between 0.94 and 0.96 gives little evidence that the true coverage differs from the nominal coverage of 0.95. A coverage greater than 0.96 suggests that the CI is conservative while a coverage less than 0.94 suggests that the CI is liberal. Typically we want the true coverage ≥ the nominal coverage, so conservative intervals are better than liberal CIs. The ccov is for the classical CI while accov is for the Agresti Coull CI.
c) From your table, for what values of n is the Agresti Coull CI better, for what values of n are the 2 intervals about the same, and for what values of n is the classical CI better?

9.5. This problem simulates the CIs from Example 9.10.
a) Download the function expsim into R/Splus. The outputs from this function are the coverages scov, lcov and ccov of the CI for λ, θ and of λ if θ is known. The scaled average lengths of the CIs are also given. The lengths of the CIs for λ are multiplied by √n while the length of the CI for θ is multiplied by n.
b) The 5000 CIs are made for 3 intervals, and the number of times the true population parameter λ or θ is in the ith CI is counted.
The simulated coverage is this count divided by 5000 (the number of CIs). The nominal coverage is 0.95. To run the function for n = 5, θ = 0 and λ = 1, enter the command expsim(n=5). Make a table with header "CI for λ | CI for θ | CI for λ, θ unknown." Then a second header "cov slen cov slen cov slen." Fill the table for n = 5

and then repeat for n = 10, 20, 50, 100 and 1000. The "cov" is the proportion of 5000 runs where the CI contained λ or θ and the nominal coverage is 0.95. For 5000 runs, an observed coverage between 0.94 and 0.96 gives little evidence that the true coverage differs from the nominal coverage of 0.95. A coverage greater than 0.96 suggests that the CI is conservative while a coverage less than 0.94 suggests that the CI is liberal. As n gets large, the values of slen should get closer to 3.92, 2.9957 and 3.92.

9.6. This problem simulates the CIs from Example 9.9.
a) Download the function hnsim into R/Splus. The outputs from this function are the coverages scov, lcov and ccov of the CI for σ², µ and of σ² if µ is known. The scaled average lengths of the CIs are also given. The lengths of the CIs for σ² are multiplied by √n while the length of the CI for µ is multiplied by n.
b) The 5000 CIs are made for 3 intervals, and the number of times the true population parameter θ = µ or σ² is in the ith CI is counted. The simulated coverage is this count divided by 5000 (the number of CIs). The nominal coverage is 0.95. To run the function for n = 5, µ = 0 and σ² = 1, enter the command hnsim(n=5). Make a table with header "CI for σ² | CI for µ | CI for σ², µ unknown." Then a second header "cov slen cov slen cov slen." Fill the table for n = 5 and then repeat for n = 10, 20, 50, 100 and 1000. The "cov" is the proportion of 5000 runs where the CI contained θ and the nominal coverage is 0.95. For 5000 runs, an observed coverage between 0.94 and 0.96 gives little evidence that the true coverage differs from the nominal coverage of 0.95. A coverage greater than 0.96 suggests that the CI is conservative while a coverage less than 0.94 suggests that the CI is liberal. As n gets large, the values of slen should get closer to 5.5437, 3.7546 and 5.5437.

9.7. a) Download the function wcisim into R/Splus. The output from this function includes the coverages pcov and lcov of the CIs for φ and λ if the simulated data Y_1, ..., Y_n are iid Weibull (φ, λ). The scaled average lengths of the CIs are also given.
The convergence values pconv and lconv should be less than 10⁻⁵. If this is not the case, increase iter. 100 samples of size n = 100 are used to create the 95% large sample CIs for φ and λ given in Example 9.18. If the sample size n is large, then sdphihat, the sample standard deviation of the 100 values of the MLE φ̂, should be close to phiasd = φ√0.608. Similarly, sdlamhat should be close to the asymptotic standard

deviation lamasd = √(1.109λ²(1 + 0.4635 log(λ) + 0.5282 (log(λ))²)).
b) Type the command wcisim(n = 100, phi = 1, lam = 1, iter = 100) and record the coverages for the CIs for φ and λ.
c) Type the command wcisim(n = 100, phi = 20, lam = 20, iter = 100) and record the coverages for the CIs for φ and λ.

9.8. a) Download the function raysim into R/Splus.
b) Type the command raysim(n = 100, mu = 20, sigma = 20, iter = 100) and record the coverages for the CIs for µ and σ.

9.9. a) Download the function ducisim into R/Splus to simulate the CI of Example 9.20.
b) Type the command ducisim(n=10,nruns=1000,eta=1000). Repeat for n = 50, 100, 500 and make a table with header "n coverage 95% CI length." Fill in the table for n = 10, 50, 100 and 500.
c) Are the coverages close to or higher than 0.95 and is the scaled length close to 3η = 3000?

9.10. a) Download the function varcisim into R/Splus to simulate a modified version of the CI of Example 9.22.
b) Type the command varcisim(n = 100, nruns = 1000, type = 1) to simulate the 95% CI for the variance for iid N(0,1) data. Is the coverage vcov close to or higher than 0.95? Is the scaled length vlen = √n (CI length) ≈ 2(1.96)σ²√τ = 5.5437σ² close to 5.5437?
c) Type the command varcisim(n = 100, nruns = 1000, type = 2) to simulate the 95% CI for the variance for iid EXP(1) data. Is the coverage vcov close to or higher than 0.95? Is the scaled length vlen = √n (CI length) ≈ 2(1.96)σ²√τ = 2(1.96)λ²√8 = 11.087λ² close to 11.087?
d) Type the command varcisim(n = 100, nruns = 1000, type = 3) to simulate the 95% CI for the variance for iid LN(0,1) data. Is the coverage vcov close to or higher than 0.95? Is the scaled length vlen long?
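The behavior in Problem 9.10 b) to d) is driven by the size of τ = κ + 2; a quick check (plain Python, not part of sipack; tauhat is a hypothetical helper) estimates τ̂ for the three data types used by varcisim.

```python
import math
import random

def tauhat(y):
    """tauhat = [(1/n) sum (Y_i - Ybar)^4] / S^4 - 1, estimating tau = kappa + 2."""
    n = len(y)
    ybar = sum(y) / n
    s2 = sum((v - ybar) ** 2 for v in y) / (n - 1)   # sample variance S^2
    m4 = sum((v - ybar) ** 4 for v in y) / n         # fourth central sample moment
    return m4 / s2 ** 2 - 1

random.seed(1)
n = 100000
t_norm = tauhat([random.gauss(0.0, 1.0) for _ in range(n)])           # N(0,1): tau = 2
t_exp = tauhat([random.expovariate(1.0) for _ in range(n)])           # EXP(1): tau = 8
t_ln = tauhat([math.exp(random.gauss(0.0, 1.0)) for _ in range(n)])   # LN(0,1): tau huge
print(t_norm, t_exp, t_ln)
```

The very large τ for LN(0,1) (roughly 113) is why vlen is long in part d); even with n = 100000 the lognormal estimate is noisy and typically well below the true value, since τ̂ involves fourth sample moments of a heavy tailed distribution.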