Chapter 7. Confidence Sets Lecture 30: Pivotal quantities and confidence sets

Confidence sets

$X$: a sample from a population $P \in \mathcal{P}$.
$\theta = \theta(P)$: a functional from $\mathcal{P}$ to $\Theta \subset \mathbb{R}^k$ for a fixed integer $k$.
$C(X)$: a confidence set for $\theta$, a set in $\mathcal{B}_\Theta$ (the class of Borel sets on $\Theta$) depending only on $X$.
$\inf_{P \in \mathcal{P}} P(\theta \in C(X))$: the confidence coefficient of $C(X)$.

If the confidence coefficient of $C(X)$ is $1-\alpha$ for a fixed $\alpha \in (0,1)$, then we say that $C(X)$ has confidence level $1-\alpha$, or that $C(X)$ is a level $1-\alpha$ confidence set.

We focus on:
- various methods of constructing confidence sets;
- properties of confidence sets.

Construction of Confidence Sets: Pivotal quantities

The most popular method of constructing confidence sets is the use of pivotal quantities, defined as follows.

Definition 7.1
A known Borel function $R$ of $(X,\theta)$ is called a pivotal quantity if and only if the distribution of $R(X,\theta)$ does not depend on $P$.

Remarks
- A pivotal quantity depends on $P$ through $\theta = \theta(P)$.
- A pivotal quantity is usually not a statistic, although its distribution is known.
- With a pivotal quantity $R(X,\theta)$, a level $1-\alpha$ confidence set can be obtained for any given $\alpha \in (0,1)$.
- If $R(X,\theta)$ has a continuous c.d.f., then we can obtain a confidence set $C(X)$ that has confidence coefficient $1-\alpha$.

Construction

First, find two constants $c_1$ and $c_2$ such that
$$P\big(c_1 \le R(X,\theta) \le c_2\big) \ge 1-\alpha.$$
Next, define
$$C(X) = \{\theta \in \Theta : c_1 \le R(X,\theta) \le c_2\}.$$
Then $C(X)$ is a level $1-\alpha$ confidence set, since
$$\inf_{P \in \mathcal{P}} P\big(\theta \in C(X)\big) = \inf_{P \in \mathcal{P}} P\big(c_1 \le R(X,\theta) \le c_2\big) = P\big(c_1 \le R(X,\theta) \le c_2\big) \ge 1-\alpha.$$
The confidence coefficient of $C(X)$ may not be $1-\alpha$.
If $R(X,\theta)$ has a continuous c.d.f., then we can choose the $c_i$'s so that equality holds in the last expression, and the confidence set $C(X)$ then has confidence coefficient $1-\alpha$.
In a given problem there may not exist any pivotal quantity, or there may be many different pivotal quantities; one then has to choose among them based on principles or criteria, which are discussed in §7.2.
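
To make the recipe concrete, here is a minimal sketch in Python (an illustration added here, not part of the original slides), using the classical pivot $\sqrt{n}(\bar{X}-\mu)/S \sim t_{n-1}$ for i.i.d. normal data; the data, seed, and function name are assumptions of the example.

```python
import numpy as np
from scipy import stats

def t_interval(x, alpha=0.05):
    """Invert the pivot sqrt(n)*(Xbar - mu)/S ~ t_{n-1} into a CI for mu."""
    n = len(x)
    xbar = np.mean(x)
    s = np.std(x, ddof=1)
    # Equal-tail choice: c_1 = -c, c_2 = c with c the (1 - alpha/2) t-quantile,
    # so P(c_1 <= R(X, mu) <= c_2) = 1 - alpha exactly (continuous c.d.f.).
    c = stats.t.ppf(1 - alpha / 2, df=n - 1)
    half = c * s / np.sqrt(n)
    # {mu : -c <= sqrt(n)(xbar - mu)/s <= c} = [xbar - half, xbar + half]
    return xbar - half, xbar + half

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=25)
print(t_interval(x))  # a confidence interval for mu with coefficient 0.95
```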

Computation

Once $R(X,\theta)$ and the $c_i$'s are chosen, we need to compute the confidence set
$$C(X) = \{\theta : c_1 \le R(X,\theta) \le c_2\}.$$
This can be done by inverting the inequalities $c_1 \le R(X,\theta) \le c_2$.
For example, if $\theta$ is real-valued and $R(X,\theta)$ is monotone in $\theta$ when $X$ is fixed, then $C(X) = \{\theta : \underline{\theta}(X) \le \theta \le \bar{\theta}(X)\}$ for some $\underline{\theta}(X) < \bar{\theta}(X)$, i.e., $C(X)$ is an interval (finite or infinite).
If $R(X,\theta)$ is not monotone, then $C(X)$ may be a union of several intervals.
For real-valued $\theta$, a confidence interval is generally preferred to a more complex set such as a union of several intervals, since it is simple and easy to interpret.
When $\theta$ is multivariate, inverting $c_1 \le R(X,\theta) \le c_2$ may be complicated.
In most cases where no explicit form of $C(X)$ exists, $C(X)$ can still be obtained numerically, as sketched below.
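
A hedged sketch of such a numerical inversion (the helper `invert_pivot`, the pivot, the constants, and the grid are all assumptions made for this illustration): scan a grid of candidate $\theta$ values and keep those satisfying $c_1 \le R(X,\theta) \le c_2$.

```python
import numpy as np

def invert_pivot(R, x, c1, c2, grid):
    """Numerically invert c1 <= R(x, theta) <= c2 over a grid of theta values.

    R    : callable (data, theta) -> value of the pivotal quantity
    grid : candidate theta values; the kept values approximate C(X),
           which may be one interval or a union of intervals.
    """
    return np.array([th for th in grid if c1 <= R(x, th) <= c2])

# Illustration with the pivot sqrt(n)*(Xbar - theta) for N(theta, 1) data,
# whose c.d.f. is standard normal, so c1 = -1.96, c2 = 1.96 give level 0.95:
rng = np.random.default_rng(1)
x = rng.normal(0.7, 1.0, size=50)
R = lambda data, th: np.sqrt(len(data)) * (np.mean(data) - th)
C = invert_pivot(R, x, -1.96, 1.96, np.linspace(-2.0, 3.0, 5001))
print(C.min(), C.max())  # endpoints of the (here, single) interval
```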

Example 7.2

Let $X_1, \ldots, X_n$ be i.i.d. random variables from the uniform distribution $U(0,\theta)$. Consider the problem of finding a confidence set for $\theta$.
The family $\mathcal{P}$ in this case is a scale family, so the results in Example 7.1 can be used.
But a better confidence interval can be obtained based on the sufficient and complete statistic $X_{(n)}$, for which $X_{(n)}/\theta$ is a pivotal quantity (Example 7.13).
Note that $X_{(n)}/\theta$ has the Lebesgue p.d.f. $n x^{n-1} I_{(0,1)}(x)$.
Hence the $c_i$'s should satisfy $c_2^n - c_1^n = 1-\alpha$.
The resulting confidence interval for $\theta$ is $[c_2^{-1} X_{(n)},\, c_1^{-1} X_{(n)}]$.
Choices of the $c_i$'s are discussed in Example 7.13.
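
A sketch of this interval in Python, using the equal-tail choice $c_1 = (\alpha/2)^{1/n}$, $c_2 = (1-\alpha/2)^{1/n}$, which satisfies $c_2^n - c_1^n = 1-\alpha$; this is only one admissible choice (the slide defers better choices to Example 7.13), and the Monte Carlo check is our illustration.

```python
import numpy as np

def uniform_ci(x, alpha=0.05):
    """CI for theta in U(0, theta) from the pivot X_(n)/theta.

    Equal-tail choice: c1 = (alpha/2)^(1/n), c2 = (1 - alpha/2)^(1/n),
    so that c2^n - c1^n = 1 - alpha as required.
    """
    n = len(x)
    c1 = (alpha / 2) ** (1 / n)
    c2 = (1 - alpha / 2) ** (1 / n)
    xmax = np.max(x)
    return xmax / c2, xmax / c1

# Monte Carlo check that the coverage equals the confidence coefficient:
rng = np.random.default_rng(0)
theta, hits = 5.0, 0
for _ in range(10_000):
    lo, hi = uniform_ci(rng.uniform(0.0, theta, size=20))
    hits += lo <= theta <= hi
print(hits / 10_000)  # approximately 0.95
```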

Example 7.3 (Fieller's interval)

Let $(X_{i1}, X_{i2})$, $i = 1, \ldots, n$, be i.i.d. bivariate normal with unknown $\mu_j = E(X_{1j})$, $\sigma_j^2 = \mathrm{Var}(X_{1j})$, $j = 1, 2$, and $\sigma_{12} = \mathrm{Cov}(X_{11}, X_{12})$.
Let $\theta = \mu_2/\mu_1$ be the parameter of interest ($\mu_1 \neq 0$).
Define $Y_i(\theta) = X_{i2} - \theta X_{i1}$. Then $Y_1(\theta), \ldots, Y_n(\theta)$ are i.i.d. from $N(0,\, \sigma_2^2 - 2\theta\sigma_{12} + \theta^2\sigma_1^2)$. Let
$$S^2(\theta) = \frac{1}{n-1}\sum_{i=1}^n \big[Y_i(\theta) - \bar{Y}(\theta)\big]^2 = S_2^2 - 2\theta S_{12} + \theta^2 S_1^2,$$
where $\bar{Y}(\theta)$ is the average of the $Y_i(\theta)$'s and $S_j^2$ and $S_{12}$ are the sample variances and covariance based on the $X_{ij}$'s.
It follows from Examples 1.16 and 2.18 that $\sqrt{n}\,\bar{Y}(\theta)/S(\theta)$ has the t-distribution $t_{n-1}$ and, therefore, is a pivotal quantity.
Let $t_{n-1,\alpha}$ be the $(1-\alpha)$th quantile of the t-distribution $t_{n-1}$. Then
$$C(X) = \big\{\theta : n[\bar{Y}(\theta)]^2 / S^2(\theta) \le t_{n-1,\alpha/2}^2\big\}$$
is a confidence set for $\theta$ with confidence coefficient $1-\alpha$.
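
Since the defining inequality is a quadratic in $\theta$, the set can be computed from the roots of that quadratic. Below is an illustrative sketch (the function name and the explicit case analysis are ours; the next slide describes the same three possible shapes).

```python
import numpy as np
from scipy import stats

def fieller_set(x1, x2, alpha=0.05):
    """Fieller confidence set {theta : n*Ybar(theta)^2 <= t^2 * S^2(theta)}.

    Expanding gives a*theta^2 + b*theta + c <= 0; the set is a finite
    interval, the complement of one, or the whole real line.
    Assumes a != 0, which holds almost surely for continuous data.
    """
    n = len(x1)
    m1, m2 = np.mean(x1), np.mean(x2)
    s11, s22 = np.var(x1, ddof=1), np.var(x2, ddof=1)
    s12 = np.cov(x1, x2)[0, 1]
    t2 = stats.t.ppf(1 - alpha / 2, df=n - 1) ** 2
    a = n * m1**2 - t2 * s11
    b = -2.0 * (n * m1 * m2 - t2 * s12)
    c = n * m2**2 - t2 * s22
    disc = b**2 - 4.0 * a * c
    if disc <= 0:  # no real roots: quadratic has one sign everywhere
        return ("whole real line",) if a < 0 else ("empty",)
    r1, r2 = sorted(np.roots([a, b, c]).real)
    shape = "interval" if a > 0 else "complement of interval"
    return (shape, r1, r2)
```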

Example 7.3 (continued)

Note that $n[\bar{Y}(\theta)]^2 = t_{n-1,\alpha/2}^2 S^2(\theta)$ defines a parabola in $\theta$.
Depending on the roots of the parabola, $C(X)$ can be a finite interval, the complement of a finite interval, or the whole real line (exercise).

Proposition 7.1 (Existence of pivotal quantities in parametric problems)
Let $T(X) = (T_1(X), \ldots, T_s(X))$, where $T_1, \ldots, T_s$ are independent statistics. Suppose that each $T_i$ has a continuous c.d.f. $F_{T_i,\theta}$ indexed by $\theta$. Then
$$R(X,\theta) = \prod_{i=1}^s F_{T_i,\theta}\big(T_i(X)\big)$$
is a pivotal quantity.

Proof
The result follows from the fact that the $F_{T_i,\theta}(T_i)$'s are i.i.d. from the uniform distribution $U(0,1)$.

When $\theta$ and $T$ in Proposition 7.1 are real-valued, we can use the following result to construct confidence intervals for $\theta$ even when the c.d.f. of $T$ is not continuous.
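
A quick simulation sketch of the fact used in the proof (the gamma distribution, parameters, and seed are illustrative assumptions): if $T$ has continuous c.d.f. $F_{T,\theta}$, then $F_{T,\theta}(T) \sim U(0,1)$ regardless of $\theta$, so the product over independent coordinates has a distribution free of $\theta$.

```python
import numpy as np
from scipy import stats

# Probability integral transform: if T has continuous c.d.f. F, F(T) ~ U(0,1).
rng = np.random.default_rng(0)
theta = 2.0                                          # illustrative scale
T = rng.gamma(shape=3.0, scale=theta, size=100_000)  # one coordinate T_i
U = stats.gamma.cdf(T, a=3.0, scale=theta)           # F_{T_i,theta}(T_i)
print(U.mean(), U.var())                             # ~0.5 and ~1/12 = 0.0833

# Hence R(X, theta) = prod_i F_{T_i,theta}(T_i) is a product of s independent
# U(0,1) variables; e.g., -2*log(R) ~ chi-square with 2s degrees of freedom,
# whatever theta is, so R is pivotal.
```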

Theorem 7.1

Suppose that $P$ is in a parametric family indexed by a real-valued $\theta$. Let $T(X)$ be a real-valued statistic with c.d.f. $F_{T,\theta}(t)$, and let $\alpha_1$ and $\alpha_2$ be fixed positive constants such that $\alpha_1 + \alpha_2 = \alpha < \frac{1}{2}$.
(i) Suppose that $F_{T,\theta}(t)$ and $F_{T,\theta}(t-)$ are nonincreasing in $\theta$ for each fixed $t$. Define
$$\bar{\theta} = \sup\{\theta : F_{T,\theta}(T) \ge \alpha_1\} \quad \text{and} \quad \underline{\theta} = \inf\{\theta : F_{T,\theta}(T-) \le 1-\alpha_2\}.$$
Then $[\underline{\theta}(T), \bar{\theta}(T)]$ is a level $1-\alpha$ confidence interval for $\theta$.
(ii) If $F_{T,\theta}(t)$ and $F_{T,\theta}(t-)$ are nondecreasing in $\theta$ for each $t$, then the same result holds with
$$\underline{\theta} = \inf\{\theta : F_{T,\theta}(T) \ge \alpha_1\} \quad \text{and} \quad \bar{\theta} = \sup\{\theta : F_{T,\theta}(T-) \le 1-\alpha_2\}.$$
(iii) If $F_{T,\theta}$ is a continuous c.d.f. for every $\theta$, then $F_{T,\theta}(T)$ is a pivotal quantity, and the confidence interval in (i) or (ii) has confidence coefficient $1-\alpha$.

Proof
We only need to prove (i). Under the given condition, $\theta > \bar{\theta}$ implies $F_{T,\theta}(T) < \alpha_1$, and $\theta < \underline{\theta}$ implies $F_{T,\theta}(T-) > 1-\alpha_2$. Hence,
$$P\big(\underline{\theta} \le \theta \le \bar{\theta}\big) \ge 1 - P\big(F_{T,\theta}(T) < \alpha_1\big) - P\big(F_{T,\theta}(T-) > 1-\alpha_2\big).$$
The result follows from $P\big(F_{T,\theta}(T) < \alpha_1\big) \le \alpha_1$ and $P\big(F_{T,\theta}(T-) > 1-\alpha_2\big) \le \alpha_2$; the proof of these two inequalities is left as an exercise.

Discussion
When the parametric family in Theorem 7.1 has monotone likelihood ratio in $T(X)$, it follows from Lemma 6.3 that the condition in Theorem 7.1(i) holds; in fact, it follows from Exercise 2 in §6.6 that $F_{T,\theta}(t)$ is strictly decreasing in $\theta$ at any $t$ for which $0 < F_{T,\theta}(t) < 1$.
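
When $F_{T,\theta}(t)$ is continuous and strictly decreasing in $\theta$ (the situation of the next slide's discussion), the $\sup$ and $\inf$ in Theorem 7.1(i) reduce to unique roots of $F_{T,\theta}(T) = \alpha_1$ and $F_{T,\theta}(T-) = 1-\alpha_2$, which can be found numerically. A minimal sketch, assuming user-supplied c.d.f. callables and a bracketing interval known to contain both roots:

```python
from scipy.optimize import brentq

def theorem_7_1_bounds(F, F_minus, T, alpha1, alpha2, th_lo, th_hi):
    """Bounds of Theorem 7.1(i) when F(theta, t) is continuous and strictly
    decreasing in theta, so the sup/inf become unique roots.

    F, F_minus : callables (theta, t) -> F_{T,theta}(t) and F_{T,theta}(t-)
    th_lo, th_hi : assumed to bracket both roots
    """
    upper = brentq(lambda th: F(th, T) - alpha1, th_lo, th_hi)
    lower = brentq(lambda th: F_minus(th, T) - (1.0 - alpha2), th_lo, th_hi)
    return lower, upper
```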

Discussion
If $F_{T,\theta}(t)$ is also continuous in $\theta$, $\lim_{\theta \to \theta_-} F_{T,\theta}(t) > \alpha_1$, and $\lim_{\theta \to \theta_+} F_{T,\theta}(t) < \alpha_1$, where $\theta_-$ and $\theta_+$ are the two ends of the parameter space, then $\bar{\theta}$ is the unique solution of $F_{T,\theta}(t) = \alpha_1$ (see the figure on the last slide). A similar conclusion can be drawn for $\underline{\theta}$.
Theorem 7.1 can be applied to obtain the confidence interval for $\theta$ in Example 7.2 (exercise).
The following example concerns a discrete $F_{T,\theta}$.

Example 7.5
Let $X_1, \ldots, X_n$ be i.i.d. random variables from the Poisson distribution $P(\theta)$ with an unknown $\theta > 0$, and let $T(X) = \sum_{i=1}^n X_i$. Note that $T$ is sufficient and complete for $\theta$ and has the Poisson distribution $P(n\theta)$. Thus,
$$F_{T,\theta}(t) = \sum_{j=0}^{t} \frac{e^{-n\theta}(n\theta)^j}{j!}, \qquad t = 0, 1, 2, \ldots$$

Example 7.5 (continued)
Since the Poisson family has monotone likelihood ratio in $T$ and $0 < F_{T,\theta}(t) < 1$ for any $t$, $F_{T,\theta}(t)$ is strictly decreasing in $\theta$. Also, $F_{T,\theta}(t)$ is continuous in $\theta$ and tends to 1 and 0 as $\theta$ tends to 0 and $\infty$, respectively. Thus, Theorem 7.1 applies, and $\bar{\theta}$ is the unique solution of $F_{T,\theta}(T) = \alpha_1$.
Since $F_{T,\theta}(t-) = F_{T,\theta}(t-1)$ for $t > 0$, $\underline{\theta}$ is the unique solution of $F_{T,\theta}(t-1) = 1-\alpha_2$ when $T = t > 0$, and $\underline{\theta} = 0$ when $T = 0$.
In fact, in this case explicit forms of $\underline{\theta}$ and $\bar{\theta}$ can be obtained from the identity
$$\frac{1}{\Gamma(t)} \int_{\lambda}^{\infty} x^{t-1} e^{-x}\,dx = \sum_{j=0}^{t-1} \frac{e^{-\lambda}\lambda^j}{j!}, \qquad t = 1, 2, \ldots$$
Using this equality, it can be shown (exercise) that
$$\bar{\theta} = \frac{1}{2n}\,\chi^2_{2(T+1),\alpha_1} \quad \text{and} \quad \underline{\theta} = \frac{1}{2n}\,\chi^2_{2T,1-\alpha_2},$$
where $\chi^2_{r,\alpha}$ is the $(1-\alpha)$th quantile of the chi-square distribution $\chi^2_r$ and $\chi^2_{0,a}$ is defined to be 0.
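
The closed form above translates directly into code. A sketch assuming the equal-tail split $\alpha_1 = \alpha_2 = \alpha/2$ (the function name and the simulated data are ours); note that $\chi^2_{r,\alpha}$ being the $(1-\alpha)$th quantile means $\chi^2_{2(T+1),\alpha_1}$ is `chi2.ppf(1 - alpha1, ...)` and $\chi^2_{2T,1-\alpha_2}$ is `chi2.ppf(alpha2, ...)`.

```python
import numpy as np
from scipy import stats

def poisson_ci(x, alpha=0.05):
    """Exact CI for theta from Example 7.5 with alpha1 = alpha2 = alpha/2.

    Upper bound: chi2_{2(T+1), alpha/2} / (2n)
    Lower bound: chi2_{2T, 1-alpha/2} / (2n), defined as 0 when T = 0.
    """
    n, T = len(x), int(np.sum(x))
    upper = stats.chi2.ppf(1 - alpha / 2, df=2 * (T + 1)) / (2 * n)
    lower = stats.chi2.ppf(alpha / 2, df=2 * T) / (2 * n) if T > 0 else 0.0
    return lower, upper

rng = np.random.default_rng(0)
x = rng.poisson(lam=1.5, size=30)
print(poisson_ci(x))  # a level-0.95 interval for theta = 1.5
```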

[Figure: A confidence interval obtained by using $F_{T,\theta}(t)$. The curves $F_\theta(t)$ and $F_\theta(t-1)$ are plotted against $\theta$ for $n = 20$ and $t = 4$, with the interval endpoints $\theta_L$ and $\theta_U$ marked.]

So far we have considered examples for parametric problems. In a nonparametric problem, a pivotal quantity may not exist, and we have to consider approximate pivotal quantities (§7.3 and §7.4).