Econ 325 Notes on Point Estimator and Confidence Interval¹ by Hiro Kasahara


Point Estimator

Parameter, Estimator, and Estimate

The normal probability density function is fully characterized by two constants: the population mean $\mu$ and the population variance $\sigma^2$. The probability mass function of a Bernoulli random variable is fully defined by the population fraction of successes, $p$. These constants are called parameters, and we generally use the Greek letter $\theta$ to denote them. We are often interested in knowing population parameters such as the population mean and the population variance. To guess the population values of the mean and the variance, we use their sample analogues, i.e., the sample mean and the sample variance.

A point estimator of $\theta$ is a function of the random sample, denoted by $\hat\theta$:
$$\hat\theta = \hat\theta(X_1, X_2, \ldots, X_n).$$
Here, the right hand side of the equation provides a mapping from the sample $\{X_1, X_2, \ldots, X_n\}$ to a real value. Namely, $\hat\theta(X_1, X_2, \ldots, X_n)$ is a formula to compute the sample analog of the corresponding population parameter; e.g., for the sample mean $\bar X$, we have $\hat\theta(X_1, X_2, \ldots, X_n) = \frac{1}{n}\sum_{i=1}^n X_i$. The estimator $\hat\theta$ is a random variable because the sample $\{X_1, X_2, \ldots, X_n\}$ is randomly drawn. When we evaluate $\hat\theta(X_1, X_2, \ldots, X_n)$ at the realized sample, then $\hat\theta$ is called an estimate. The evaluated value of the function $\hat\theta(X_1, X_2, \ldots, X_n)$ at the realized sample is not a random variable any more; rather, it is a constant.

Unbiasedness

An estimator $\hat\theta$ is said to be an unbiased estimator of the parameter $\theta$ if $E[\hat\theta] = \theta$. The bias of an estimator $\hat\theta$ is defined as
$$\text{Bias} = E[\hat\theta] - \theta.$$
The bias of an unbiased estimator is zero by definition.

Example 1. The sample mean $\bar X = \frac{1}{n}\sum_{i=1}^n X_i$ is an unbiased estimator of the population mean $\mu$ because $E[\bar X] = \frac{1}{n}\sum_{i=1}^n E[X_i] = \frac{1}{n}\sum_{i=1}^n \mu = \mu$. The sample variance $s^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2$ is an unbiased estimator of the population variance $\sigma^2$. However, the estimator $\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar X)^2$ is not an unbiased estimator of $\sigma^2$ because
$$E[\hat\sigma^2] = E\left[\frac{n-1}{n}\cdot\frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2\right] = \frac{n-1}{n}\,E\left[\frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2\right] = \frac{n-1}{n}\,\sigma^2 < \sigma^2.$$

¹ © Hiroyuki Kasahara. Not to be copied, used, revised, or distributed without explicit permission of the copyright owner.
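The bias calculation in Example 1 is easy to check numerically. The following is a minimal simulation sketch (not part of the original notes); the population $N(0, 4)$, the sample size $n = 10$, and the number of replications are arbitrary illustrative choices.

```python
# Simulation sketch: compare the unbiased estimator s^2 (divide by n-1) with the
# biased estimator sigma_hat^2 (divide by n). Population N(0, 4) and n = 10 are
# arbitrary illustrative choices, not values taken from the notes.
import numpy as np

rng = np.random.default_rng(0)
sigma2, n, reps = 4.0, 10, 200_000

x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))  # reps independent samples of size n
s2 = x.var(axis=1, ddof=1)      # s^2: divides by n - 1 (unbiased)
sig2 = x.var(axis=1, ddof=0)    # sigma_hat^2: divides by n (biased downward)

print("average of s^2        :", round(s2.mean(), 3))    # close to sigma^2 = 4.0
print("average of sigma_hat^2:", round(sig2.mean(), 3))  # close to (n-1)/n * 4 = 3.6
```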

Example 2. Given a random sample of size $n$, $\{X_1, X_2, \ldots, X_n\}$, consider the estimator $\hat X = X_1$, which only uses the first observation while ignoring all other $n-1$ observations. This estimator is an unbiased estimator of $\mu$ because $E[X_1] = \mu$. We can also consider an estimator defined by a weighted average of the $X_i$'s as $\tilde X = \sum_{i=1}^n w_i X_i$, where $\{w_i\}_{i=1}^n$ is a sequence of numbers such that $\sum_{i=1}^n w_i = 1$. Then, this estimator is an unbiased estimator of $\mu$ because $E[\sum_{i=1}^n w_i X_i] = \sum_{i=1}^n w_i \mu = \mu$.

Example 3. The sample fraction $\hat p$ is an unbiased estimator of the population fraction $p$. This is because the sample fraction can be viewed as the sample average of independent Bernoulli random variables, i.e., $\hat p = \bar X = \frac{1}{n}\sum_{i=1}^n X_i$, where $X_i = 0$ with probability $1-p$ and $X_i = 1$ with probability $p$. Taking the expectation, $E[\hat p] = \frac{1}{n}\sum_{i=1}^n E[X_i] = \frac{1}{n}\sum_{i=1}^n p = p$.

Efficiency

Consider the case of $n = 2$ and let $X_1$ and $X_2$ be randomly sampled from a population distribution with mean $\mu$ and variance $\sigma^2$. Consider the following two estimators for $\mu$: $\bar X = \frac{1}{2}(X_1 + X_2)$ and $\tilde X = \frac{1}{3}X_1 + \frac{2}{3}X_2$. Both $\bar X$ and $\tilde X$ are unbiased estimators because $E[\bar X] = \frac{1}{2}\mu + \frac{1}{2}\mu = \mu$ and $E[\tilde X] = \frac{1}{3}\mu + \frac{2}{3}\mu = \mu$. In fact, we may consider an estimator of the form $aX_1 + (1-a)X_2$ for any fixed value of $a$, and we can verify that $aX_1 + (1-a)X_2$ is an unbiased estimator because $E[aX_1 + (1-a)X_2] = \mu$.

While unbiasedness is a desirable property of estimators, there are multiple unbiased estimators. Which estimator do we want to choose among all unbiased estimators? The answer is: the estimator that has the smallest variance. Intuitively, the smaller the variance is, the closer the realized value of the estimator is to the population mean on average. In fact, if the variance of an unbiased estimator is zero, we obtain the population mean for every realized value of the estimator. Let $\hat\theta_1$ and $\hat\theta_2$ be two unbiased estimators. Then $\hat\theta_1$ is said to be more efficient than $\hat\theta_2$ if $Var(\hat\theta_1) < Var(\hat\theta_2)$. If $\hat\theta_1$ is an unbiased estimator that has the smallest variance among all unbiased estimators, then $\hat\theta_1$ is said to be the most efficient, or the minimum variance unbiased estimator.

Example 4. Consider the case of $n = 2$ where $X_1$ and $X_2$ are randomly sampled from a population distribution with mean $\mu$ and variance $\sigma^2$. What is the most efficient unbiased estimator? To answer this, we consider the class of unbiased estimators of the form $aX_1 + (1-a)X_2$ for any fixed value $a$. The variance of $aX_1 + (1-a)X_2$ is given by $Var(aX_1 + (1-a)X_2) = \{a^2 + (1-a)^2\}\sigma^2$, where $Cov(X_1, X_2) = 0$ by random sampling. Therefore, we may find the most efficient unbiased estimator by minimizing $g(a) = a^2 + (1-a)^2 = 2a^2 - 2a + 1$ with respect to $a$. The first order condition is given by $g'(a) = 4a - 2 = 0$, so that $g(a)$ is minimized at $a = 1/2$. Therefore, $\frac{1}{2}X_1 + \frac{1}{2}X_2 = \bar X$ is the most efficient unbiased estimator for $\mu$.
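The variance minimization in Example 4 can also be seen by brute force. The short sketch below (again, not from the notes) evaluates $g(a) = a^2 + (1-a)^2$ on a grid of weights; $\sigma^2$ scales every variance equally, so it drops out of the comparison.

```python
# Sketch: Var(a*X1 + (1-a)*X2) = {a^2 + (1-a)^2} * sigma^2 under random sampling,
# so minimizing g(a) = a^2 + (1-a)^2 over a finds the most efficient weighting.
import numpy as np

a = np.linspace(0.0, 1.0, 1001)
g = a**2 + (1.0 - a)**2

print("variance-minimizing weight a:", a[np.argmin(g)])     # 0.5 -> the sample mean
print("g(1/2) =", 0.5**2 + 0.5**2)                          # 0.5
print("g(1/3) =", round((1/3)**2 + (2/3)**2, 4))            # 5/9, so (1/3, 2/3) is less efficient
```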

Consistency

A point estimator $\hat\theta$ is said to be consistent if $\hat\theta$ converges in probability to $\theta$, i.e., for every $\epsilon > 0$, $\lim_{n\to\infty} P(|\hat\theta - \theta| < \epsilon) = 1$ (see the Law of Large Numbers).

Example 5. Suppose that $X_1, X_2, \ldots, X_n$ are randomly sampled from a population with mean $\mu$ and variance $\sigma^2$. Is $\bar X = \frac{1}{n}\sum_{i=1}^n X_i$ a consistent estimator of $\mu$? How about $\frac{1}{n-1}\sum_{i=1}^n X_i$ and $\frac{1}{n-1}\sum_{i=1}^{n-1} X_i$? Are these two estimators consistent? The sample variance $s^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2$ is a consistent estimator of $\sigma^2$. Is the estimator $\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar X)^2$ a consistent estimator of $\sigma^2$?

Example 6. Suppose that $X_1, X_2, \ldots, X_n$ are independent Bernoulli random variables, where $X_i = 0$ with probability $1-p$ and $X_i = 1$ with probability $p$. The parameter $p$ is the population fraction of individuals with $X_i = 1$. The sample fraction is defined as $\hat p = \bar X = \frac{1}{n}\sum_{i=1}^n X_i$. By the Law of Large Numbers, the sample fraction $\hat p$ is a consistent estimator of the population fraction $p$. The variance of $\hat p$ is given by
$$Var(\hat p) = Var\left(\frac{1}{n}\sum_{i=1}^n X_i\right) = \frac{1}{n^2}Var(X_1 + X_2 + \cdots + X_n) = \frac{\sum_{i=1}^n Var(X_i)}{n^2} = \frac{Var(X_i)}{n} = \frac{p(1-p)}{n},$$
where the last equality follows from $Var(X_i) = E[(X_i - p)^2] = (0-p)^2(1-p) + (1-p)^2 p = p(1-p)$. Because $Var(\hat p) = p(1-p)/n$ involves the unknown population parameter $p$, we do not know the value of $Var(\hat p) = p(1-p)/n$. We can construct an estimator for $Var(\hat p) = p(1-p)/n$ by replacing $p$ with $\hat p$, where the latter can be computed from the sample: $\widehat{Var}(\hat p) = \hat p(1-\hat p)/n$. This $\widehat{Var}(\hat p) = \hat p(1-\hat p)/n$ is a consistent estimator of $Var(\hat p)$.
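The consistency of $\hat p$ in Example 6 can be visualized by letting $n$ grow. The sketch below (not from the notes) uses an arbitrary true value $p = 0.3$ and reports $\hat p$ and the plug-in variance estimate $\hat p(1-\hat p)/n$ for increasing sample sizes.

```python
# Sketch of the Law of Large Numbers for the sample fraction: as n grows,
# p_hat = (1/n) * sum(X_i) settles near the true p. p = 0.3 is an arbitrary choice.
import numpy as np

rng = np.random.default_rng(1)
p = 0.3

for n in (10, 100, 1_000, 10_000, 100_000):
    x = rng.binomial(1, p, size=n)       # independent Bernoulli(p) draws
    p_hat = x.mean()
    var_hat = p_hat * (1 - p_hat) / n    # consistent estimator of Var(p_hat)
    print(f"n={n:>6}  p_hat={p_hat:.4f}  estimated Var(p_hat)={var_hat:.2e}")
```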

Confidence Interval

We may estimate an interval rather than a point. The idea of interval estimation is to construct a random interval such that the constructed interval contains the true parameter $\theta$ with a pre-specified probability, $1 - \alpha$. Such an interval is called a $100(1-\alpha)$ percent confidence interval, where $1 - \alpha$ is called the confidence level. The confidence interval is characterized by the lower limit $L$ and the upper limit $U$, both of which are functions of the random sample $X_1, \ldots, X_n$, so that
$$P\big(L(X_1, \ldots, X_n) \le \theta \le U(X_1, \ldots, X_n)\big) = 1 - \alpha.$$
Note that both $L(X_1, \ldots, X_n)$ and $U(X_1, \ldots, X_n)$ are random variables.

The case that $n$ is large, or $\hat\theta \sim N(\theta, Var(\hat\theta))$ with $Var(\hat\theta)$ known

Suppose that a point estimator $\hat\theta$ is approximately normally distributed with mean $\theta$ and variance $Var(\hat\theta)$, i.e., $\hat\theta \sim N(\theta, Var(\hat\theta))$. Two representative cases are:

1. $X_1, X_2, \ldots, X_n$ are randomly sampled from some distribution that is different from the normal distribution, but the sample size $n$ is large. In this case, $\hat\theta$ is defined as the average of random variables, i.e., $\hat\theta = \bar X = \frac{1}{n}\sum_{i=1}^n X_i$, so that we may apply the Central Limit Theorem to obtain $\hat\theta \approx N(E[X_i], Var(X_i)/n)$; for example, the sample mean satisfies $\bar X \approx N(\mu, \sigma^2/n)$. When $n$ is large, we may essentially treat $Var(X_i)$ as if it were known and given by the sample variance.

2. $X_1, X_2, \ldots, X_n$ are randomly sampled from the normal distribution $N(\mu, \sigma^2)$ with known variance $\sigma^2$. In this case, the sample average is normally distributed with mean $\mu$ and variance $\sigma^2/n$.

In these cases, we may construct the 95 percent confidence interval with
$$[L, U] = \left[\hat\theta - 1.96\sqrt{Var(\hat\theta)},\ \hat\theta + 1.96\sqrt{Var(\hat\theta)}\right],$$
so that $P\big(\hat\theta - 1.96\sqrt{Var(\hat\theta)} \le \theta \le \hat\theta + 1.96\sqrt{Var(\hat\theta)}\big) = 0.95$. In general, the confidence interval with confidence level $1 - \alpha$ is constructed as
$$P\left(\hat\theta - z_{\alpha/2}\sqrt{Var(\hat\theta)} \le \theta \le \hat\theta + z_{\alpha/2}\sqrt{Var(\hat\theta)}\right) = 1 - \alpha, \qquad (1)$$
where $z_{\alpha/2}$ is determined such that $P(Z \ge z_{\alpha/2}) = \alpha/2$ when $Z \sim N(0, 1)$. Here, $z_{\alpha/2}\sqrt{Var(\hat\theta)}$ is called the margin of error. We may confirm (1) by reformulating the inequality on the left hand side of (1) in terms of the standardized random variable $\frac{\hat\theta - \theta}{\sqrt{Var(\hat\theta)}}$ as follows:
$$\begin{aligned}
&P\left(\hat\theta - z_{\alpha/2}\sqrt{Var(\hat\theta)} \le \theta \le \hat\theta + z_{\alpha/2}\sqrt{Var(\hat\theta)}\right)\\
&= P\left(\left\{\hat\theta - \theta \le z_{\alpha/2}\sqrt{Var(\hat\theta)}\right\} \text{ and } \left\{-z_{\alpha/2}\sqrt{Var(\hat\theta)} \le \hat\theta - \theta\right\}\right)\\
&= P\left(\frac{\hat\theta - \theta}{\sqrt{Var(\hat\theta)}} \le z_{\alpha/2} \text{ and } -z_{\alpha/2} \le \frac{\hat\theta - \theta}{\sqrt{Var(\hat\theta)}}\right)\\
&= P\left(-z_{\alpha/2} \le \frac{\hat\theta - \theta}{\sqrt{Var(\hat\theta)}} \le z_{\alpha/2}\right), \quad \text{where } Z = \frac{\hat\theta - \theta}{\sqrt{Var(\hat\theta)}} \sim N(0, 1) \qquad (2)\\
&= P(-z_{\alpha/2} \le Z \le z_{\alpha/2}) = 1 - \alpha.
\end{aligned}$$

Example 7 (Confidence Interval for the population mean $\mu$). Given a random sample $X_1, \ldots, X_n$ drawn from $N(\mu, \sigma^2)$ where $\sigma^2$ is known, the sample average $\bar X = \frac{1}{n}\sum_{i=1}^n X_i$ is an estimator of $\mu$ with $Var(\bar X) = \frac{\sigma^2}{n}$. Therefore, the 95 percent confidence interval for $\mu$ is given by
$$[L, U] = \left[\bar X - 1.96\frac{\sigma}{\sqrt{n}},\ \bar X + 1.96\frac{\sigma}{\sqrt{n}}\right].$$
If we would like to construct a 90 percent confidence interval with $\alpha = 0.1$, we use $z_{0.05} = 1.65$ (i.e., $P(Z \ge 1.65) = 0.05$ for $Z \sim N(0, 1)$). Therefore, the 90 percent confidence interval for $\mu$ is given by
$$[L, U] = \left[\bar X - 1.65\frac{\sigma}{\sqrt{n}},\ \bar X + 1.65\frac{\sigma}{\sqrt{n}}\right].$$
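As a numerical illustration of Example 7, the sketch below (not from the notes) computes the z-based interval for $\mu$ from simulated data; the population mean, the known $\sigma$, and the sample size are arbitrary choices.

```python
# Sketch: z-based confidence interval for mu when sigma is known (Example 7 setup).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
mu_true, sigma, n = 5.0, 2.0, 50          # arbitrary illustrative values
x = rng.normal(mu_true, sigma, size=n)

alpha = 0.05
z = norm.ppf(1 - alpha / 2)               # 1.96 for a 95 percent interval
xbar = x.mean()
half = z * sigma / np.sqrt(n)             # margin of error

print(f"95% CI for mu: [{xbar - half:.3f}, {xbar + half:.3f}]")
```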

Example 8 (Survey on the U.S. presidential election in Florida). The survey was conducted between Oct. 20 and 24, 2016, in Florida, after the third and final presidential debate. The survey result shows that, among 1166 likely registered voters who support either Clinton or Trump, there are 602 Clinton voters and 564 Trump voters. What is the 95 percent confidence interval for the population fraction of Clinton voters?

Let $p$ be the population fraction of Clinton voters. Each voter's preference is a Bernoulli random variable $X_i$ with $P(X_i = 1) = p$ and $P(X_i = 0) = 1 - p$, where $X_i = 1$ means a Clinton voter while $X_i = 0$ means a Trump voter. The sample average is given by $\hat p = 0.516$. The standard deviation of $\hat p$ is $\sqrt{p(1-p)/n}$, which can be estimated as $\sqrt{\hat p(1-\hat p)/n} = \sqrt{0.516(1 - 0.516)/1166} = 0.01463$. The margin of error is, therefore, $1.96 \times 0.01463 = 0.0287$. Then, we may construct the 95 percent confidence interval as
$$[L, U] = \left[\hat p - 1.96\sqrt{\hat p(1-\hat p)/n},\ \hat p + 1.96\sqrt{\hat p(1-\hat p)/n}\right] = [0.488, 0.545].$$
Therefore, the population fraction of Clinton voters is between 0.488 and 0.545 with probability 95 percent. In this Florida poll, Clinton's lead is within the margin of error.

Example 9 (Survey on the U.S. presidential election in North Carolina). The survey was conducted between November 3 and 6, 2016, in North Carolina. The survey result shows that, among 791 likely registered voters who support either Clinton or Trump, there are 400 Clinton voters and 391 Trump voters. What is the 95 percent confidence interval for the population fraction of Clinton voters?

Knowledge of $Var(\hat\theta)$ is required for constructing the confidence interval, as shown in (1). Typically, $Var(\hat\theta)$ depends on a population parameter that is unknown (e.g., $Var(\bar X) = \sigma^2/n$ and $Var(\hat p) = p(1-p)/n$), but we can estimate $Var(\hat\theta)$. The estimators of $Var(\bar X)$ and $Var(\hat p)$ are given by $\widehat{Var}(\bar X) = s^2/n$ and $\widehat{Var}(\hat p) = \hat p(1-\hat p)/n$. When $Var(\hat\theta)$ is not known, we replace $Var(\hat\theta)$ with its estimator $\widehat{Var}(\hat\theta)$ in constructing the confidence interval. In the above example of the U.S. presidential election (Example 8), this is what we did: we replaced $Var(\hat p) = p(1-p)/n$ with its estimator $\hat p(1-\hat p)/n$. This is fine as long as the sample size $n$ is large, because the estimator of $Var(\hat p)$ converges in probability to $Var(\hat p)$, and we may essentially treat $Var(\hat p)$ as known in constructing the confidence interval. When $n$ is small, however, this is not the case any more. The randomness of the estimator of $Var(\hat p)$ does not go away when $n$ is small, and the confidence interval constructed in (1) by replacing $Var(\hat\theta)$ with its estimator does not contain $\theta$ with probability $1 - \alpha$ any more.
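The plug-in step just described, replacing $p$ with $\hat p$ in $Var(\hat p)$, is easy to reproduce for the Florida numbers in Example 8; the same sketch (not from the notes) applies to the North Carolina poll in Example 9 after changing the counts.

```python
# Sketch: large-sample confidence interval for a population proportion,
# using the Florida poll counts from Example 8 (602 Clinton out of 1166).
import numpy as np
from scipy.stats import norm

clinton, n = 602, 1166
p_hat = clinton / n                              # 0.516
se = np.sqrt(p_hat * (1 - p_hat) / n)            # about 0.0146
z = norm.ppf(0.975)                              # 1.96

print(f"p_hat = {p_hat:.3f}, margin of error = {z * se:.4f}")
print(f"95% CI: [{p_hat - z*se:.3f}, {p_hat + z*se:.3f}]")   # about [0.488, 0.545]
```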

The case that $n$ is small

When $n$ is small, it is generally difficult to construct a confidence interval for two reasons. First, we may not use the Central Limit Theorem to claim that $\hat\theta$ is normally distributed. Second, replacing $Var(\hat\theta)$ with its estimator $\widehat{Var}(\hat\theta)$ introduces an additional source of randomness.

In both cases, the standardized random variable using the estimator of $Var(\hat\theta)$,
$$\frac{\hat\theta - \theta}{\sqrt{\widehat{Var}(\hat\theta)}},$$
is not a standard normal random variable, and therefore the confidence interval (1) is not valid any more, because (1) is constructed under the assumption that $\frac{\hat\theta - \theta}{\sqrt{Var(\hat\theta)}} \sim N(0, 1)$ (see (2)).

While it is generally difficult to construct a confidence interval, there is one exceptional case in which we may construct a confidence interval using Student's $t$-distribution. Suppose that we have a random sample $X_1, X_2, \ldots, X_n$ from $N(\mu, \sigma^2)$. In this case, we have
$$\frac{\bar X - \mu}{s/\sqrt{n}} \sim \text{Student's } t \text{ distribution with } n-1 \text{ degrees of freedom}.$$
Therefore, the confidence interval for $\mu$ with confidence level $1 - \alpha$ is constructed as
$$P\left(\bar X - t_{n-1,\alpha/2}\frac{s}{\sqrt{n}} \le \mu \le \bar X + t_{n-1,\alpha/2}\frac{s}{\sqrt{n}}\right) = 1 - \alpha, \qquad (3)$$
where $t_{n-1,\alpha/2}$ is determined such that $P(T \ge t_{n-1,\alpha/2}) = \alpha/2$ when $T \sim$ Student's $t$ distribution with $n-1$ degrees of freedom. We may confirm (3) by reformulating the inequality on the left hand side of (3) in terms of the standardized random variable $\frac{\bar X - \mu}{s/\sqrt{n}}$ as follows:
$$\begin{aligned}
&P\left(\bar X - t_{n-1,\alpha/2}\frac{s}{\sqrt{n}} \le \mu \le \bar X + t_{n-1,\alpha/2}\frac{s}{\sqrt{n}}\right)\\
&= P\left(\left\{\frac{\bar X - \mu}{s/\sqrt{n}} \le t_{n-1,\alpha/2}\right\} \text{ and } \left\{-t_{n-1,\alpha/2} \le \frac{\bar X - \mu}{s/\sqrt{n}}\right\}\right)\\
&= P\left(-t_{n-1,\alpha/2} \le \frac{\bar X - \mu}{s/\sqrt{n}} \le t_{n-1,\alpha/2}\right)\\
&= P(-t_{n-1,\alpha/2} \le T \le t_{n-1,\alpha/2}) = 1 - \alpha, \qquad (4)
\end{aligned}$$
where $T = \frac{\bar X - \mu}{s/\sqrt{n}}$ follows the Student's $t$ distribution with $n-1$ degrees of freedom.

A few comments. First, we need the assumption that $X_1, X_2, \ldots, X_n$ are drawn from the normal distribution. If $X_i$ is a Bernoulli random variable, then $\frac{\bar X - \mu}{s/\sqrt{n}}$ does not follow the $t$-distribution. Second, as $n \to \infty$, $s^2 \to_p \sigma^2$, so that the Student's $t$ distribution converges to the standard normal distribution as $n \to \infty$. In fact, at $n = 31$, the critical value for the 95 percent confidence interval using the $t$-distribution is 2.042, which is close to 1.96.
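For a small normal sample, the interval in (3) only changes the critical value from 1.96 to $t_{n-1,\alpha/2}$. Below is a minimal sketch (not from the notes) with an arbitrary simulated sample of size 12.

```python
# Sketch: t-based confidence interval for mu when sigma^2 is unknown and n is small,
# assuming the data are drawn from a normal distribution (the key assumption in (3)).
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(3)
x = rng.normal(10.0, 3.0, size=12)        # arbitrary small normal sample, n = 12
n = len(x)

alpha = 0.05
tcrit = t.ppf(1 - alpha / 2, df=n - 1)    # t_{n-1, alpha/2}
xbar, s = x.mean(), x.std(ddof=1)
half = tcrit * s / np.sqrt(n)

print(f"t critical value (df={n-1}): {tcrit:.3f}")   # 2.201 for df = 11
print(f"95% CI for mu: [{xbar - half:.3f}, {xbar + half:.3f}]")
```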

Confidence interval for the population variance

Suppose that $\{X_1, X_2, \ldots, X_n\}$ is a random sample from a normal distribution with $E[X_i] = \mu$ and $Var[X_i] = \sigma^2$. Then, the random variable $\frac{(n-1)s^2}{\sigma^2}$ has a distribution known as the chi-square distribution with $n-1$ degrees of freedom, which we denote by $\chi^2_{n-1}$, i.e.,
$$\frac{(n-1)s^2}{\sigma^2} \sim \chi^2_{n-1}. \qquad (5)$$
Let $\chi^2_{n-1,\alpha/2}$ and $\chi^2_{n-1,1-\alpha/2}$ be the values such that $P(\chi^2_{n-1} > \chi^2_{n-1,\alpha/2}) = \alpha/2$ and $P(\chi^2_{n-1} > \chi^2_{n-1,1-\alpha/2}) = 1 - \alpha/2$, so that
$$P\left(\chi^2_{n-1,1-\alpha/2} < \frac{(n-1)s^2}{\sigma^2} < \chi^2_{n-1,\alpha/2}\right) = 1 - \alpha.$$
Then, we may construct the confidence interval for $\sigma^2$ from the sample variance $s^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2$ as follows:
$$\begin{aligned}
1 - \alpha &= P\left(\chi^2_{n-1,1-\alpha/2} < \frac{(n-1)s^2}{\sigma^2} < \chi^2_{n-1,\alpha/2}\right)\\
&= P\left(\frac{1}{\chi^2_{n-1,\alpha/2}} < \frac{\sigma^2}{(n-1)s^2} < \frac{1}{\chi^2_{n-1,1-\alpha/2}}\right)\\
&= P\left(\frac{(n-1)s^2}{\chi^2_{n-1,\alpha/2}} < \sigma^2 < \frac{(n-1)s^2}{\chi^2_{n-1,1-\alpha/2}}\right).
\end{aligned}$$
Therefore, $P(L < \sigma^2 < U) = 1 - \alpha$ with
$$L = \frac{(n-1)s^2}{\chi^2_{n-1,\alpha/2}} \quad \text{and} \quad U = \frac{(n-1)s^2}{\chi^2_{n-1,1-\alpha/2}}.$$

Example 10 (Confidence Interval and Hypothesis Testing for the Sample Variance). Suppose that you are a plant manager for producing electrical devices operated by a thermostatic control. According to the engineering specifications, the standard deviation of the temperature at which these controls actually operate should not exceed 2.0 degrees Fahrenheit. As a plant manager, you would like to know how large the population standard deviation $\sigma$ is. We assume that the temperature is normally distributed. Suppose that you randomly sampled 25 of these controls, and the sample variance of operating temperatures was $s^2 = 2.36$ (degrees Fahrenheit squared). (i) Compute the 95 percent confidence interval for the population standard deviation $\sigma$. (ii) Test the null hypothesis $H_0: \sigma = 2$ against the alternative hypothesis $H_1: \sigma > 2$ at the significance level $\alpha = 0.05$.

The distribution of $\frac{(n-1)s^2}{\sigma^2}$ is the chi-square distribution with $n-1$ degrees of freedom. Let $\chi^2_{n-1}$ be a random variable distributed according to the chi-square distribution with $n-1$ degrees of freedom, and let $\chi^2_{n-1,\alpha}$ be the value such that $\Pr(\chi^2_{n-1} > \chi^2_{n-1,\alpha}) = \alpha$. Then, $\Pr\left(\chi^2_{n-1,1-\alpha/2} \le \frac{(n-1)s^2}{\sigma^2} \le \chi^2_{n-1,\alpha/2}\right) = 1 - \alpha$ and, therefore,
$$\Pr\left(\frac{(n-1)s^2}{\chi^2_{n-1,\alpha/2}} \le \sigma^2 \le \frac{(n-1)s^2}{\chi^2_{n-1,1-\alpha/2}}\right) = 1 - \alpha.$$
Now, for $\alpha/2 = 0.025$, the chi-square table gives $\chi^2_{24,0.025} = 39.364$ and $\chi^2_{24,0.975} = 12.401$, so that the lower limit of the 95 percent CI is $\frac{(n-1)s^2}{\chi^2_{24,0.025}} = \frac{24 \times 2.36}{39.364} = 1.439$ and the upper limit is $\frac{(n-1)s^2}{\chi^2_{24,0.975}} = \frac{24 \times 2.36}{12.401} = 4.567$. Therefore, the 95 percent CI for the population variance is $[1.439, 4.567]$ and, because of the monotonic relationship between the variance and the standard deviation, the 95 percent CI for the population standard deviation is given by $[\sqrt{1.439}, \sqrt{4.567}] = [1.200, 2.137]$.
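Part (i) of Example 10 can be checked without a chi-square table. The sketch below (not from the notes) uses scipy; note that scipy's ppf takes a lower-tail probability, so the upper-tail quantile $\chi^2_{24,0.025}$ of the notes corresponds to chi2.ppf(0.975, 24).

```python
# Sketch: chi-square confidence interval for the population variance and standard
# deviation, using the Example 10 numbers (n = 25, s^2 = 2.36, alpha = 0.05).
import numpy as np
from scipy.stats import chi2

n, s2, alpha = 25, 2.36, 0.05
df = n - 1

upper_q = chi2.ppf(1 - alpha / 2, df)     # chi^2_{24, 0.025} = 39.364 (upper-tail notation)
lower_q = chi2.ppf(alpha / 2, df)         # chi^2_{24, 0.975} = 12.401

var_lo = df * s2 / upper_q                # about 1.439
var_hi = df * s2 / lower_q                # about 4.567
print(f"95% CI for sigma^2: [{var_lo:.3f}, {var_hi:.3f}]")
print(f"95% CI for sigma  : [{np.sqrt(var_lo):.3f}, {np.sqrt(var_hi):.3f}]")
```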

To test the null hypothesis $H_0$: (a) find the distribution of the standardized random variable $\frac{(n-1)s^2}{\sigma^2}$ when $H_0$ is true, i.e., with $\sigma^2 = 2^2 = 4$; (b) find the rejection region, which is the region into which the random variable $\frac{(n-1)s^2}{\sigma^2}$ is unlikely (i.e., with probability less than 5 percent) to fall if $H_0$ is true; (c) look at the realized value of $\frac{(n-1)s^2}{\sigma^2}$ and ask whether it is an unlikely value to happen if $H_0$ is true by checking whether $\frac{(n-1)s^2}{\sigma^2}$ falls into the rejection region.

For (a), when $H_0$ is true, $\frac{(n-1)s^2}{\sigma^2} = \frac{24 s^2}{4}$ is distributed according to the chi-square distribution with degrees of freedom equal to $n - 1 = 24$. For (b), because $H_1: \sigma > 2$, we consider a one-sided test; namely, a very high value of $\frac{(n-1)s^2}{\sigma^2}$ is considered to be evidence against $H_0$, but a low value of $\frac{(n-1)s^2}{\sigma^2}$ is not. Under $H_0$,
$$\Pr\left(\frac{(n-1)s^2}{\sigma^2} \le \chi^2_{n-1,\alpha}\right) = \Pr\left(\frac{24 s^2}{4} \le \chi^2_{24,\alpha}\right) = 1 - \alpha$$
for $\alpha = 0.05$, where $\chi^2_{24,0.05} = 36.415$. Therefore, the rejection region for $\frac{24 s^2}{4}$ is given by $(36.415, \infty)$, i.e., we reject $H_0$ if $\frac{24 s^2}{4} > 36.415$, or equivalently, $s^2 > 36.415/6 \approx 6.07$, because such a value of $s^2$ is unlikely to happen if $H_0$ is true. For (c), the realized value of $s^2$ is 2.36, which does not fall in the rejection region (i.e., 2.36 belongs to the region that is not unlikely to happen if $H_0$ is true), and hence there is not sufficient evidence against $H_0$. We do not reject $H_0$.
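The one-sided test in part (ii) can be run the same way. The sketch below (not from the notes) computes the test statistic, the critical value, and, as an extra, the p-value.

```python
# Sketch: one-sided chi-square test for H0: sigma = 2 versus H1: sigma > 2,
# using the Example 10 numbers (n = 25, s^2 = 2.36, alpha = 0.05).
from scipy.stats import chi2

n, s2, sigma0_sq, alpha = 25, 2.36, 4.0, 0.05
df = n - 1

stat = df * s2 / sigma0_sq                # (n-1)s^2 / sigma_0^2 = 24 * 2.36 / 4 = 14.16
crit = chi2.ppf(1 - alpha, df)            # chi^2_{24, 0.05} = 36.415
p_value = chi2.sf(stat, df)               # upper-tail probability of the statistic

print(f"test statistic = {stat:.2f}, critical value = {crit:.3f}, p-value = {p_value:.3f}")
print("reject H0" if stat > crit else "do not reject H0")   # do not reject H0
```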