Lecture 28: Asymptotic confidence sets

$1-\alpha$ asymptotic confidence sets

Similar to testing hypotheses, in many situations it is difficult to find a confidence set with a given confidence coefficient or level $1-\alpha$. A common approach is to find a confidence set whose confidence coefficient or level is approximately $1-\alpha$ when the sample size $n$ is large.

Based on a sample $X$, a confidence set $C(X)$ for $\vartheta$ has asymptotic level $1-\alpha$ if
$$\lim_{n\to\infty} P_\theta\big(\vartheta \in C(X)\big) \ge 1-\alpha.$$
Often, we can replace $\ge$ by $=$ in the previous expression. In any case, we call $C(X)$ a $1-\alpha$ asymptotic confidence set.

Asymptotic pivotal quantities

A known function $q_n(X,\vartheta)$ is said to be asymptotically pivotal if and only if the limiting distribution of $q_n(X,\vartheta)$ does not depend on any unknown quantity.
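As a standard illustration of the definition (not part of the slide): if $X_1,\dots,X_n$ are iid with mean $\mu$ and finite variance $\sigma^2 > 0$, then by the CLT
$$\sqrt{n}\,(\bar X - \mu)/\sigma \to_d N(0,1),$$
so $q_n(X,\mu) = \sqrt{n}(\bar X - \mu)/\sigma$ is asymptotically pivotal for $\mu$ when $\sigma$ is known, even though its exact finite-sample distribution depends on the underlying population.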

Confidence sets based on asymptotic pivots

Like a pivotal quantity in constructing confidence sets with a given confidence coefficient or level, an asymptotically pivotal quantity can be used in constructing $1-\alpha$ asymptotic confidence sets. Because of the wide applications of the CLT, Slutsky's theorem, and the delta-method, many asymptotic pivots can be constructed with the form
$$\widehat V_n^{-1/2}(\widehat\vartheta_n - \vartheta),$$
where $\widehat\vartheta_n$ is an estimator of $\vartheta$ that is asymptotically normal, i.e., $V_n^{-1/2}(\widehat\vartheta_n - \vartheta)$ converges in distribution to $N(0, I_k)$, and $\widehat V_n$ is a consistent estimator of $V_n$ (called the asymptotic covariance matrix of $\widehat\vartheta_n$) in the sense that if $\widehat v_{ij}$ and $v_{ij}$ are the $(i,j)$th elements of $\widehat V_n$ and $V_n$, respectively, then $n(\widehat v_{ij} - v_{ij})$ converges in probability to 0.

The resulting $1-\alpha$ asymptotically correct confidence sets are of the form
$$C(X) = \big\{\vartheta : \|\widehat V_n^{-1/2}(\widehat\vartheta_n - \vartheta)\|^2 \le \chi^2_{k,\alpha}\big\},$$

where $\chi^2_{k,\alpha}$ is the $100(1-\alpha)$th percentile of the chi-square distribution with $k$ degrees of freedom and $\|b\|^2 = b^\top b$ for any vector $b$. If $\vartheta$ is real-valued, then this $C(X)$ is a $1-\alpha$ asymptotic confidence interval; if $\vartheta$ is multivariate, $C(X)$ is an ellipsoid.

Example. Let $X_1,\dots,X_n$ be iid random variables with $E(X_1) = \mu$ and $\mathrm{Var}(X_1) = \sigma^2$, and let $\vartheta = \mu$. Then $\sqrt{n}(\bar X - \mu)/S$ is an asymptotic pivot, since by the CLT and Slutsky's theorem it converges in distribution to $N(0,1)$. Based on this asymptotic pivot, we obtain the $1-\alpha$ asymptotic confidence interval
$$\big[\bar X - z_{\alpha/2} S/\sqrt{n},\ \bar X + z_{\alpha/2} S/\sqrt{n}\,\big].$$
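A minimal numerical sketch of this interval and a Monte Carlo check that its coverage approaches $1-\alpha$ (this is my own illustration, not part of the lecture; it assumes numpy and scipy are available and uses an exponential population as a stand-in for an arbitrary iid sample with finite variance):

```python
import numpy as np
from scipy import stats

def mean_ci(x, alpha=0.05):
    """Wald-type interval  xbar +/- z_{alpha/2} * S / sqrt(n)  for the population mean."""
    x = np.asarray(x, dtype=float)
    n = x.size
    z = stats.norm.ppf(1 - alpha / 2)          # z_{alpha/2}
    half = z * x.std(ddof=1) / np.sqrt(n)      # S = usual sample standard deviation
    return x.mean() - half, x.mean() + half

# Monte Carlo check that the coverage approaches 1 - alpha as n grows.
rng = np.random.default_rng(0)
mu = 1.0                                        # true mean of the Exponential(1) population
for n in (10, 50, 500):
    hits = 0
    for _ in range(2000):
        lo, hi = mean_ci(rng.exponential(scale=mu, size=n))
        hits += (lo <= mu <= hi)
    print(n, hits / 2000)                       # empirical coverage, moving toward 0.95
```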

If we also know that $X_i$ has the Poisson($\lambda$) distribution with unknown $\lambda > 0$, then $\mu = \lambda$ and $\sigma^2 = \lambda$. Hence, we have two other asymptotic pivots:
$$\frac{\sqrt{n}(\bar X - \lambda)}{\sqrt{\bar X}} \quad\text{and}\quad \frac{\sqrt{n}(\bar X - \lambda)}{\sqrt{\lambda}}.$$
Based on the first one we obtain the $1-\alpha$ asymptotic confidence interval
$$\big[\bar X - z_{\alpha/2}\sqrt{\bar X/n},\ \bar X + z_{\alpha/2}\sqrt{\bar X/n}\,\big].$$
For the second asymptotic pivot, we can construct a $1-\alpha$ asymptotic confidence set
$$C(X) = \Big\{\lambda : \frac{n(\bar X - \lambda)^2}{\lambda} \le z_{\alpha/2}^2\Big\} = \Big\{\lambda : \lambda^2 - \big(2\bar X + n^{-1} z_{\alpha/2}^2\big)\lambda + \bar X^2 \le 0\Big\}.$$
Note that $Q(\lambda) = \lambda^2 - (2\bar X + n^{-1} z_{\alpha/2}^2)\lambda + \bar X^2$ is a convex quadratic function. Thus, $C(X)$ is also a confidence interval $[L(X), U(X)]$ whose two limits are the two solutions to $Q(\lambda) = 0$:
$$L(X) = \bar X + \frac{z_{\alpha/2}^2}{2n} - \sqrt{\frac{\bar X z_{\alpha/2}^2}{n} + \frac{z_{\alpha/2}^4}{4n^2}}, \qquad U(X) = \bar X + \frac{z_{\alpha/2}^2}{2n} + \sqrt{\frac{\bar X z_{\alpha/2}^2}{n} + \frac{z_{\alpha/2}^4}{4n^2}}.$$
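A small sketch (my own illustration, assuming numpy/scipy) computing both Poisson intervals: the first by plugging $\bar X$ into the variance, the second by solving the quadratic $Q(\lambda) = 0$ above:

```python
import numpy as np
from scipy import stats

def poisson_wald_ci(x, alpha=0.05):
    """First interval: use the pivot sqrt(n)(xbar - lam)/sqrt(xbar), i.e. xbar +/- z*sqrt(xbar/n)."""
    x = np.asarray(x, dtype=float)
    n, xbar = x.size, x.mean()
    z = stats.norm.ppf(1 - alpha / 2)
    half = z * np.sqrt(xbar / n)
    return xbar - half, xbar + half

def poisson_score_ci(x, alpha=0.05):
    """Second interval: invert sqrt(n)(xbar - lam)/sqrt(lam) by solving
    Q(lam) = lam^2 - (2*xbar + z^2/n)*lam + xbar^2 = 0."""
    x = np.asarray(x, dtype=float)
    n, xbar = x.size, x.mean()
    z2 = stats.norm.ppf(1 - alpha / 2) ** 2
    center = xbar + z2 / (2 * n)
    half = np.sqrt(xbar * z2 / n + z2**2 / (4 * n**2))
    return center - half, center + half

rng = np.random.default_rng(1)
x = rng.poisson(lam=3.0, size=40)
print(poisson_wald_ci(x))
print(poisson_score_ci(x))   # the two intervals are close for moderate n
```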

Taking the difference of the lower limits of the two confidence intervals, we get
$$\frac{z_{\alpha/2}^2}{2n} - \sqrt{\frac{z_{\alpha/2}^2 \bar X}{n} + \frac{z_{\alpha/2}^4}{4n^2}} + z_{\alpha/2}\sqrt{\frac{\bar X}{n}} \;=\; \frac{z_{\alpha/2}^2}{2n} - \frac{z_{\alpha/2}^4/(4n^2)}{\sqrt{\dfrac{z_{\alpha/2}^2 \bar X}{n} + \dfrac{z_{\alpha/2}^4}{4n^2}} + z_{\alpha/2}\sqrt{\dfrac{\bar X}{n}}},$$
which converges to 0 at the rate $n^{-1}$; hence, the two intervals are asymptotically equivalent.

From the previous example we find that, if $\vartheta = \theta$ is the only unknown parameter in the population and thus $V_n = V_n(\theta)$, then we may not need to estimate $V_n$ in order to obtain a confidence interval. We can directly solve $(\widehat\vartheta_n - \vartheta)^2 / V_n(\vartheta) \le \chi^2_{1,\alpha}$ and obtain the confidence set
$$C(X) = \big\{\vartheta : (\widehat\vartheta_n - \vartheta)^2 \le \chi^2_{1,\alpha}\, V_n(\vartheta)\big\}.$$
Often, $C(X)$ is a confidence interval.
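When solving $(\widehat\vartheta_n - \vartheta)^2 \le \chi^2_{1,\alpha} V_n(\vartheta)$ does not reduce to a simple quadratic, the set can be found numerically. A hedged sketch of that idea (my illustration, assuming scipy; the function `g` and the search brackets are ad hoc choices), applied here to the Poisson case where $V_n(\lambda) = \lambda/n$:

```python
import numpy as np
from scipy import stats, optimize

def invert_ci(x, alpha=0.05):
    """Numerically invert n*(xbar - lam)^2 <= chi2_{1,alpha} * lam (Poisson, V_n(lam) = lam/n)."""
    x = np.asarray(x, dtype=float)
    n, xbar = x.size, x.mean()
    c = stats.chi2.ppf(1 - alpha, df=1)              # chi^2_{1,alpha} = z_{alpha/2}^2

    def g(lam):
        return n * (xbar - lam) ** 2 - c * lam       # C(X) = {lam : g(lam) <= 0}

    # g(xbar) = -c*xbar < 0 and g -> +infinity on both sides, so bracket one root per side.
    lo = optimize.brentq(g, 1e-12, xbar) if xbar > 0 else 0.0
    hi = optimize.brentq(g, xbar, xbar + 10 * (np.sqrt(c * xbar / n) + c / n) + 1.0)
    return lo, hi

rng = np.random.default_rng(2)
x = rng.poisson(lam=3.0, size=40)
print(invert_ci(x))   # matches the closed-form [L(X), U(X)] given above
```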

A sufficient condition for $C(X)$ to be an interval is that, as a function of $\vartheta$, $\chi^2_{1,\alpha} V_n(\vartheta) - (\widehat\vartheta_n - \vartheta)^2$ is unimodal. This is actually the case for the second confidence interval in the previous example for the Poisson case.

We may replace $V_n(\vartheta)$ by $V_n(\widehat\vartheta_n)$, which is consistent if $V_n(\vartheta)$ is a continuous function of $\vartheta$. This ensures that we obtain a confidence interval; the first interval in the previous example for the Poisson case is of this kind.

Suppose now that $\vartheta$ is not the only parameter, i.e., $\theta = (\vartheta, \varphi)$, and we want a confidence set or interval for $\vartheta$. If $V_n(\theta)$ does not depend on $\varphi$, then the procedures in the previous discussion can still be applied. If $V_n(\theta) = V_n(\vartheta, \varphi)$ depends on $\varphi$, then we first find an estimator $\widehat\varphi_n$ of $\varphi$, and then use $V_n(\vartheta, \widehat\varphi_n)$ or $V_n(\widehat\vartheta_n, \widehat\varphi_n)$ and apply the previously discussed procedure. We may also find an estimator of $V_n(\theta)$ in a different way.

Example. Let $X_1,\dots,X_n$ be iid from the double exponential distribution $DE(\mu,\sigma)$ with unknown $\mu \in \mathbb{R}$ and $\sigma^2 > 0$. Consider setting a confidence interval for $\vartheta = \mu$ based on the asymptotic distribution of $\bar X$ (not the MLE of $\mu$). By the CLT and $\mathrm{Var}(X_1) = 2\sigma^2$,
$$\frac{\sqrt{n}(\bar X - \mu)}{\sqrt{2}\,\sigma} \ \text{converges in distribution to } N(0,1).$$
If we estimate $2\sigma^2$ by the sample variance $S^2$, then we obtain the following $1-\alpha$ asymptotic confidence interval (noting that $z_{\alpha/2}^2 = \chi^2_{1,\alpha}$):
$$C(X) = \big\{\mu : n(\bar X - \mu)^2 \le \chi^2_{1,\alpha} S^2\big\} = \big[\bar X - z_{\alpha/2} S/\sqrt{n},\ \bar X + z_{\alpha/2} S/\sqrt{n}\,\big].$$
Suppose now that we know $\sigma^2 = \mu^2$. If we estimate $2\sigma^2$ by $2\bar X^2$ and invert $n(\bar X - \mu)^2 \le \chi^2_{1,\alpha}(2\bar X^2)$, the resulting confidence interval is the same as $C(X)$ with $S$ replaced by $\sqrt{2}\,|\bar X|$.

If instead we invert $n(\bar X - \mu)^2 \le \chi^2_{1,\alpha}(2\mu^2)$, the resulting confidence interval is $[L(X), U(X)]$ with
$$L(X) = \frac{\bar X - \sqrt{2}\, z_{\alpha/2} |\bar X|/\sqrt{n}}{1 - 2 z_{\alpha/2}^2/n}, \qquad U(X) = \frac{\bar X + \sqrt{2}\, z_{\alpha/2} |\bar X|/\sqrt{n}}{1 - 2 z_{\alpha/2}^2/n}.$$
If we estimate $2\sigma^2 = 2\mu^2$ by $S^2$, then we can still use the interval $C(X)$.
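A small sketch (my own illustration, assuming numpy/scipy) computing the three $\bar X$-based intervals for the double exponential mean: the $S^2$ plug-in, the $2\bar X^2$ plug-in, and the exact inversion of $n(\bar X - \mu)^2 \le \chi^2_{1,\alpha}(2\mu^2)$ giving $[L(X),U(X)]$ above:

```python
import numpy as np
from scipy import stats

def de_mean_cis(x, alpha=0.05):
    """Three 1-alpha asymptotic intervals for mu in the double exponential model."""
    x = np.asarray(x, dtype=float)
    n, xbar, s = x.size, x.mean(), x.std(ddof=1)
    z = stats.norm.ppf(1 - alpha / 2)
    # (a) estimate 2*sigma^2 by S^2
    ci_s = (xbar - z * s / np.sqrt(n), xbar + z * s / np.sqrt(n))
    # (b) use sigma^2 = mu^2 and estimate 2*sigma^2 by 2*xbar^2
    ci_plug = (xbar - z * np.sqrt(2) * abs(xbar) / np.sqrt(n),
               xbar + z * np.sqrt(2) * abs(xbar) / np.sqrt(n))
    # (c) invert n*(xbar - mu)^2 <= z^2 * 2*mu^2 exactly (requires 2*z^2 < n)
    denom = 1 - 2 * z**2 / n
    ci_inv = ((xbar - np.sqrt(2) * z * abs(xbar) / np.sqrt(n)) / denom,
              (xbar + np.sqrt(2) * z * abs(xbar) / np.sqrt(n)) / denom)
    return ci_s, ci_plug, ci_inv

rng = np.random.default_rng(3)
mu, sigma = 2.0, 2.0                        # data generated with sigma = |mu|, matching sigma^2 = mu^2
x = rng.laplace(loc=mu, scale=sigma, size=100)
for ci in de_mean_cis(x):
    print(ci)
```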

Typically, in a given problem there exist many different asymptotic pivots that lead to different $1-\alpha$ asymptotic confidence sets for $\vartheta$. The following result indicates that we can compare the volumes of confidence sets constructed using two asymptotic pivots.

Theorem. Let $C_j(X)$, $j = 1,2$, be the confidence sets for $\vartheta$ based on the asymptotic pivots $\widehat V_{jn}^{-1/2}(\widehat\vartheta_{jn} - \vartheta)$, $j = 1,2$, respectively. If $|V_{1n}| < |V_{2n}|$ for sufficiently large $n$, where $|A|$ is the determinant of the matrix $A$, then
$$\lim_{n\to\infty} P\big(\mathrm{vol}(C_1(X)) < \mathrm{vol}(C_2(X))\big) = 1.$$

Proof. The result follows from the consistency of $\widehat V_{jn}$ and the fact that the volume of the ellipsoid $C(X)$ is equal to
$$\mathrm{vol}(C(X)) = \frac{\pi^{k/2} (\chi^2_{k,\alpha})^{k/2} |\widehat V_n|^{1/2}}{\Gamma(1 + k/2)}.$$

If $\widehat\vartheta_{1n}$ is asymptotically more efficient than $\widehat\vartheta_{2n}$, then $|V_{1n}| \le |V_{2n}|$ and $\widehat\vartheta_{1n}$ results in a better confidence set in terms of volume. If $\widehat\vartheta_n$ is asymptotically efficient (such as the MLE), then the $C(X)$ obtained by using the pivot based on $\widehat\vartheta_n$ is asymptotically optimal in terms of volume.

In the example for the Poisson distribution, $\bar X$ is the MLE and hence the first confidence interval is asymptotically optimal in terms of length. Although the theorem is not directly applicable to the second confidence interval in the Poisson example, we showed at the end of that example that the second confidence interval is also asymptotically optimal in terms of length, because the difference between the lengths of the two intervals is of order $n^{-1}$, whereas the length itself is of order $n^{-1/2}$.
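The volume formula in the proof is easy to evaluate directly; a minimal sketch (my own illustration, assuming numpy/scipy) for a $k$-dimensional confidence ellipsoid with estimated asymptotic covariance matrix $\widehat V_n$:

```python
import numpy as np
from scipy import stats
from scipy.special import gamma

def ellipsoid_volume(V_hat, alpha=0.05):
    """Volume of C(X) = {theta : ||V_hat^{-1/2}(theta_hat - theta)||^2 <= chi2_{k,alpha}},
    equal to pi^{k/2} * (chi2_{k,alpha})^{k/2} * |V_hat|^{1/2} / Gamma(1 + k/2)."""
    V_hat = np.asarray(V_hat, dtype=float)
    k = V_hat.shape[0]
    c = stats.chi2.ppf(1 - alpha, df=k)
    return np.pi ** (k / 2) * c ** (k / 2) * np.sqrt(np.linalg.det(V_hat)) / gamma(1 + k / 2)

# Sanity check: for k = 1 and V_hat = S^2/n the "volume" is the interval length 2*z*S/sqrt(n).
print(ellipsoid_volume(np.array([[4.0 / 100]])))          # S^2/n = 0.04
print(2 * stats.norm.ppf(0.975) * np.sqrt(4.0 / 100))     # same number
```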

In the double exponential example, the MLE of $\mu$ is the sample median $\widehat Q_{0.5}$, whose asymptotic distribution was derived previously:
$$\sqrt{n}(\widehat Q_{0.5} - \mu) \ \text{converges in distribution to}\ N\!\left(0,\ \frac{1}{4\,[1/(2\sigma)]^2}\right) = N(0, \sigma^2).$$
By inverting $n(\widehat Q_{0.5} - \mu)^2 \le \chi^2_{1,\alpha}\, S^2/2$ ($\sigma^2$ should be estimated by $S^2/2$, not $S^2$), we obtain the $1-\alpha$ asymptotic confidence interval
$$C_1(X) = \big[\widehat Q_{0.5} - z_{\alpha/2} S/\sqrt{2n},\ \widehat Q_{0.5} + z_{\alpha/2} S/\sqrt{2n}\,\big],$$
which is shorter than the $C(X)$ derived previously by a factor of $\sqrt{2}$.

We may replace the estimator $S^2/2$ of $\sigma^2$ by the MLE of $\sigma^2$, which is equal to (verify)
$$\widehat\sigma^2 = \left(\frac{1}{n}\sum_{i=1}^n |X_i - \widehat Q_{0.5}|\right)^2.$$
However, replacing $S^2/2$ by $\widehat\sigma^2$ does not reduce the length.

Finally, there are situations where an MLE does not exist or is very hard to calculate and, thus, we have to use other asymptotic pivots.
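To close, a small sketch (my own illustration, assuming numpy/scipy) of the median-based interval $C_1(X)$ and of the double exponential MLE of $\sigma$ discussed above:

```python
import numpy as np
from scipy import stats

def de_median_ci(x, alpha=0.05):
    """C_1(X): invert n*(Qhat - mu)^2 <= chi2_{1,alpha} * S^2/2, i.e. Qhat +/- z*S/sqrt(2n)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    qhat = np.median(x)                      # sample median = MLE of mu
    s = x.std(ddof=1)                        # S^2 estimates 2*sigma^2, so S^2/2 estimates sigma^2
    z = stats.norm.ppf(1 - alpha / 2)
    half = z * s / np.sqrt(2 * n)
    return qhat - half, qhat + half

def de_sigma_mle(x):
    """MLE of sigma: mean absolute deviation about the sample median."""
    x = np.asarray(x, dtype=float)
    return np.mean(np.abs(x - np.median(x)))

rng = np.random.default_rng(4)
x = rng.laplace(loc=0.0, scale=1.5, size=100)
print(de_median_ci(x))       # roughly sqrt(2) times shorter than the xbar-based interval
print(de_sigma_mle(x))       # compare with S/sqrt(2) = np.std(x, ddof=1)/np.sqrt(2)
```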
