SOLUTION FOR HOMEWORK 4, STAT 4352

Welcome to your fourth homework. Here we begin the study of confidence intervals, errors, etc. Recall that X^n := (X_1, ..., X_n) denotes the vector of n observations. Try to find mistakes (and get extra points) in my solutions. Typically they are silly arithmetic mistakes (not methodological ones). They allow me to check that you did your HW on your own. Please do not email me about your findings; just mention them on the first page of your solution and count the extra points. Now let us look at your problems.

1. Problem. Let X be distributed according to the exponential distribution Expon(θ). Please note that θ is the scale parameter, so X/θ =: Y ∼ Expon(1). As a result, Y is a pivot, and its distribution, which does not depend on θ, will be used to define the confidence interval. We need to find k such that

P_θ(0 < θ < kX) = 1 − α.

Recall the pivot Y = X/θ and write

P_θ(0 < θ < kX) = P_θ(0 < 1 < kX/θ) = P(Y > k^{−1}).

Please note that in the last probability I dropped the subscript θ because the distribution of Y does not depend on θ. Now we can find k:

1 − α = P(Y > k^{−1}) = ∫_{k^{−1}}^{∞} e^{−y} dy = e^{−k^{−1}},

which implies the answer k = −1/ln(1 − α).

2. Problem 11.2(a). This problem is a good review of the probability technique for working with two RVs. Let X_1 and X_2 be two independent RVs uniformly distributed on (0, θ). We need to find k such that

P_θ(0 < θ < k(X_1 + X_2)) = 1 − α.

Again, note that the pivot here is (X_1 + X_2)/θ = Y_1 + Y_2, where Y_1 and Y_2 are independent and uniformly distributed on (0, 1). [In other words, θ is the scale parameter.] Then we can write

P_θ(0 < θ < k(X_1 + X_2)) = P(Y_1 + Y_2 > k^{−1}) = 1 − α.

Now the problem is converted into finding the αth quantile of the distribution of the sum Z = Y_1 + Y_2 (we did this problem in the probability class, by the way). Let us solve it directly (denote a := 1/k):

α = P(Y_1 + Y_2 ≤ a) = ∫∫_{y_1 + y_2 ≤ a} I(0 < y_1 < 1) I(0 < y_2 < 1) dy_1 dy_2.

This integral is equal to the area in the square [0, 1]^2 under the straight line y_2 = a − y_1 [please draw a diagram of the square and the line to see this set and its area; pay attention to the fact that the shape of the set changes depending on whether a < 1 or a > 1]. For the considered case α < 1/2 the set is a lower triangle with area a^2/2. Then α = (1/k)^2/2, and this yields the wished k = [1/(2α)]^{1/2}.

Remark 1: It is possible that I (or you) made a mistake, and it is difficult to check numbers, but the following verification may be helpful. If α decreases (the confidence coefficient increases), then the confidence interval must become wider (you wish a larger probability of coverage, so the interval must grow!). In our example this means that k must increase when α decreases. Our answer passes this test. [At the same time, to check the exact formulae you need to repeat the solution and check all steps.]

Remark 2: Think about the answer for the case α > 1/2, where a different set defines the quantile.
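As a quick numerical sanity check of Problem 1 (not part of the required solution), the following Python sketch simulates the coverage of the interval (0, kX) with k = −1/ln(1 − α); the level α = 0.05, the scale θ = 2, and the simulation size are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
alpha, theta, n_sim = 0.05, 2.0, 100_000       # illustrative values
k = -1.0 / np.log(1.0 - alpha)                 # the answer k = -1/ln(1 - alpha)
x = rng.exponential(scale=theta, size=n_sim)   # X ~ Expon(theta), theta = scale parameter
coverage = np.mean(theta < k * x)              # P(0 < theta < kX); theta > 0 holds automatically
print(f"empirical coverage {coverage:.4f} vs nominal {1 - alpha}")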

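In the same illustrative spirit, here is a minimal Monte Carlo check of the Problem 2 answer k = [1/(2α)]^{1/2} (valid for α < 1/2); the particular α, θ and simulation size are made up.

import numpy as np

rng = np.random.default_rng(1)
alpha, theta, n_sim = 0.05, 3.0, 100_000     # illustrative values, alpha < 1/2
k = (1.0 / (2.0 * alpha)) ** 0.5             # the answer k = [1/(2*alpha)]^{1/2}
x1 = rng.uniform(0.0, theta, size=n_sim)     # X1 ~ Uniform(0, theta)
x2 = rng.uniform(0.0, theta, size=n_sim)     # X2 ~ Uniform(0, theta), independent of X1
coverage = np.mean(theta < k * (x1 + x2))    # P(0 < theta < k(X1 + X2))
print(f"empirical coverage {coverage:.4f} vs nominal {1 - alpha}")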
3. Problem. Here we consider the classical normal case with z-scoring as the pivot. We need to satisfy

P_µ(X̄ − z_{kα} σ n^{−1/2} < µ < X̄ + z_{(1−k)α} σ n^{−1/2}) = 1 − α.

Using z-scoring, with Z ∼ N(0, 1), this is equivalent to

P(−z_{kα} < Z < z_{(1−k)α}) = 1 − α,

which holds because kα + (1 − k)α = α. We conclude that any k ∈ [0, 1] yields a (1 − α) confidence interval. But which k yields the minimal (most accurate) confidence interval? The answer is k = 1/2. To prove it, draw the bell-shaped normal density and look at the case k = 1/2, which gives the so-called equal-tails confidence interval. Then shift the left end of the interval and look at the length the new interval needs in order to keep the area 1 − α under the normal density. Taking the bell shape of the density into account, you see that the interval becomes longer.

4. Problem. Let us assume that (X̄ − µ)/(σ/n^{1/2}) = Z ∼ N(0, 1). [This is the case if X is normal or n > 30.] Then we need to find the minimal sample size n_0 such that, for the given confidence error E,

P_µ(|X̄ − µ| ≤ E) = 1 − α.

Using z-scoring we find that

P_µ(|X̄ − µ| ≤ E) = P_µ(|X̄ − µ|/(σ/n^{1/2}) ≤ E/(σ/n^{1/2})) = P(|Z| ≤ E/(σ/n^{1/2})) = 1 − α.

Solving the last equality, we find that n_0 is [z_{α/2} σ/E]^2 rounded up. Note that n_0 increases in σ and in the confidence level 1 − α, and decreases in E.
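To see numerically why k = 1/2 in Problem 3 gives the shortest interval, the sketch below compares the interval length (z_{kα} + z_{(1−k)α}) σ n^{−1/2} for several values of k; it relies on scipy, and the values of σ and n are illustrative.

import numpy as np
from scipy.stats import norm

alpha, sigma, n = 0.05, 1.0, 25              # illustrative values
for k in (0.1, 0.3, 0.5, 0.7, 0.9):
    # norm.isf(p) is the upper-p critical value z_p: P(Z > z_p) = p
    length = (norm.isf(k * alpha) + norm.isf((1 - k) * alpha)) * sigma / np.sqrt(n)
    print(f"k = {k:.1f}  interval length = {length:.4f}")
# the printed lengths are smallest at k = 0.5, the equal-tails interval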

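Problem 4's sample-size rule n_0 = ceiling of (z_{α/2} σ/E)^2 can be wrapped in a small helper; this is only a sketch and the example inputs are made up.

import math
from scipy.stats import norm

def sample_size(sigma, E, alpha):
    """Minimal n with P(|Xbar - mu| <= E) = 1 - alpha for normal data with known sigma."""
    z = norm.isf(alpha / 2)                  # z_{alpha/2}
    return math.ceil((z * sigma / E) ** 2)   # round up to the next integer

print(sample_size(sigma=10.0, E=2.0, alpha=0.05))   # about 97 for these illustrative inputs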
5. Problem. Let X̄_1 ∼ N(µ_1, σ_1^2/n_1) and X̄_2 ∼ N(µ_2, σ_2^2/n_2). The parameter of interest is µ = µ_1 − µ_2. Please note that the sufficient statistic here is Y = X̄_1 − X̄_2 (check this using the Factorization Theorem). Then we get

Y ∼ N(µ, σ^2), where σ^2 := σ_1^2/n_1 + σ_2^2/n_2.

As a result,

P_µ(|Y − µ|/σ ≤ z_{α/2}) = 1 − α.

This yields a formula for the error:

E_α = z_{α/2} [σ_1^2/n_1 + σ_2^2/n_2]^{1/2}.

6. Problem. Suppose that we have X̄_1 ∼ N(θ_1, σ^2/n_1) and independent X̄_2 ∼ N(θ_2, σ^2/n_2), calculated from the samples X_{11}, X_{12}, ..., X_{1n_1} and X_{21}, X_{22}, ..., X_{2n_2}, respectively. Then a direct calculation yields

X̄_1 − X̄_2 ∼ N(θ_1 − θ_2, σ^2(1/n_1 + 1/n_2)).

If σ^2 is known, this allows us to construct a confidence interval using the Z-pivot

[(X̄_1 − X̄_2) − (θ_1 − θ_2)]/[σ^2(1/n_1 + 1/n_2)]^{1/2}.

Note that this pivot has the standard normal distribution, so we can use the usual z_{α/2} in our formulae. However, if σ^2 is unknown, we need to estimate it. Because σ^2 is the same in both samples, we can pool the samples together and calculate

V̂ := Σ_{l=1}^{n_1} (X_{1l} − X̄_1)^2 + Σ_{l=1}^{n_2} (X_{2l} − X̄_2)^2.

Now the next step is important. Consider a new random variable

W := V̂/σ^2 = Σ_{l=1}^{n_1} (X_{1l} − X̄_1)^2/σ^2 + Σ_{l=1}^{n_2} (X_{2l} − X̄_2)^2/σ^2.

As we know from Section 8.4 (and our first homework), the first sum in the last equation has the Chisq(n_1 − 1) distribution and the second has the Chisq(n_2 − 1) distribution. Because these sums are independent, their total W has the Chisq(n_1 + n_2 − 2) distribution. Further, we can introduce another random variable:

T̂ := {[(X̄_1 − X̄_2) − (θ_1 − θ_2)]/[σ^2(n_1^{−1} + n_2^{−1})]^{1/2}} / [V̂/(σ^2(n_1 + n_2 − 2))]^{1/2} =: ξ_0/[χ^2_{n_1+n_2−2}/(n_1 + n_2 − 2)]^{1/2},

where ξ_0 ∼ N(0, 1) is independent of χ^2_{n_1+n_2−2}, which has the Chisq(n_1 + n_2 − 2) distribution. The independence holds because X̄_1 is independent of Σ_{l=1}^{n_1} (X_{1l} − X̄_1)^2, a similar assertion holds for the second sample, and the two samples are independent. Now recall that in this case T̂ has the t-distribution with n_1 + n_2 − 2 degrees of freedom, which is what we wished to show.
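Problem 5's error formula E_α = z_{α/2}[σ_1^2/n_1 + σ_2^2/n_2]^{1/2} translates directly into code; the inputs below are made up for illustration.

from math import sqrt
from scipy.stats import norm

def error_two_means(sigma1, sigma2, n1, n2, alpha):
    """Half-width of the (1 - alpha) CI for mu1 - mu2 when both variances are known."""
    return norm.isf(alpha / 2) * sqrt(sigma1**2 / n1 + sigma2**2 / n2)

print(error_two_means(sigma1=3.0, sigma2=4.0, n1=30, n2=40, alpha=0.05))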

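The pooled two-sample t interval derived in Problem 6 can be sketched as follows; the data are simulated only to exercise the formula, and equal variances in the two samples are assumed, as in the problem.

import numpy as np
from scipy.stats import t

rng = np.random.default_rng(2)
x1 = rng.normal(loc=5.0, scale=2.0, size=12)     # sample 1 (same sigma in both samples)
x2 = rng.normal(loc=4.0, scale=2.0, size=15)     # sample 2
n1, n2, alpha = len(x1), len(x2), 0.05
v_hat = np.sum((x1 - x1.mean())**2) + np.sum((x2 - x2.mean())**2)   # pooled sum of squares V-hat
s2_pooled = v_hat / (n1 + n2 - 2)                                   # pooled variance estimate
half = t.isf(alpha / 2, df=n1 + n2 - 2) * np.sqrt(s2_pooled * (1/n1 + 1/n2))
diff = x1.mean() - x2.mean()
print(f"{1 - alpha:.0%} CI for theta1 - theta2: ({diff - half:.3f}, {diff + half:.3f})")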
7. Problem. Let X ∼ Binom(θ, n) and set X̄ := X/n. Then for sufficiently large n, approximately,

X̄ − θ ∼ N(0, θ(1 − θ)/n).

This yields (according to our previous work) that

P_θ(|X̄ − θ| ≤ z_{α/2}(θ(1 − θ)/n)^{1/2}) = 1 − α.

As a result,

E_α = z_{α/2}[θ(1 − θ)/n]^{1/2}

is the error that guarantees the (1 − α) confidence. Then the sample size needed to guarantee an error E is

n ≥ z^2_{α/2} θ(1 − θ)/E^2.

Now we note that max_{θ∈(0,1)} θ(1 − θ) = 1/4, and this yields the conservative bound

n ≥ z^2_{α/2}/(4E^2).

Remark: Typically the variance θ(1 − θ) will be estimated by X̄(1 − X̄).

8. Problem. Here we have two independent binomial RVs, X_1 ∼ Binom(θ_1, n_1) and X_2 ∼ Binom(θ_2, n_2). Both θ_1 and θ_2 are unknown, and we are interested only in estimation of the parameter θ = θ_1 − θ_2 (the difference between the populations' probabilities of success). Denote θ̂_1 = X_1/n_1 and θ̂_2 = X_2/n_2. Also denote

σ̂ := [θ̂_1(1 − θ̂_1)/n_1 + θ̂_2(1 − θ̂_2)/n_2]^{1/2}.

If both samples are large, then we can use the normal approximation and get, approximately,

[θ̂_1 − θ̂_2 − (θ_1 − θ_2)]/σ̂ ∼ N(0, 1).

This implies that the (1 − α) confidence maximal error (half the length of the confidence interval) is

E_α = z_{α/2} σ̂ = z_{α/2}[θ̂_1(1 − θ̂_1)/n_1 + θ̂_2(1 − θ̂_2)/n_2]^{1/2}.

Remark: Please note that if the error E is given, then the last equality allows us to find the lower bound for the corresponding sample sizes.
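Problem 7's conservative bound n ≥ z^2_{α/2}/(4E^2) in code (a sketch; the margin E and level α below are illustrative):

import math
from scipy.stats import norm

def conservative_n(E, alpha):
    """Sample size guaranteeing error E for a proportion, using max theta(1-theta) = 1/4."""
    z = norm.isf(alpha / 2)
    return math.ceil(z**2 / (4 * E**2))

print(conservative_n(E=0.03, alpha=0.05))    # about 1068 for a 3% margin at 95% confidence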

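Problem 8's maximal error E_α = z_{α/2}[θ̂_1(1 − θ̂_1)/n_1 + θ̂_2(1 − θ̂_2)/n_2]^{1/2}, sketched with made-up counts:

from math import sqrt
from scipy.stats import norm

def error_two_props(x1, n1, x2, n2, alpha):
    """Half-width of the large-sample (1 - alpha) CI for theta1 - theta2."""
    p1, p2 = x1 / n1, x2 / n2                # theta-hat_1 and theta-hat_2
    return norm.isf(alpha / 2) * sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

print(error_two_props(x1=45, n1=100, x2=30, n2=120, alpha=0.05))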
9. Problem. Here n = 120, σ = 10.5, and 1 − α = .99. Then (using our formula and the Normal Table in your text)

E = z_{α/2} σ/n^{1/2} = z_{.005}(10.5)/(120)^{1/2} = (2.575)(10.5)/(120)^{1/2} ≈ 2.47.

10. Problem. It is given that n = 40, X̄ = 24.05, and S = 2.68. We can assert for θ = E(X) that (assuming n > 30)

P_θ(|X̄ − θ| ≤ t_{α/2,n−1} S/n^{1/2}) = 1 − α.

In your Table IV (page 575) no t-distribution with 39 degrees of freedom can be found, because beyond 29 degrees of freedom the t-distribution is practically standard normal. Thus we can use the approximation t_{α/2,39} ≈ z_{α/2}. Here α/2 = .025, so z_{.025} = 1.96. Then

E = (1.96)(2.68)/(40)^{1/2} ≈ 0.83.
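Problem 9's arithmetic can be reproduced with scipy; its exact z_{.005} ≈ 2.576 gives essentially the same answer as the tabled 2.575.

from math import sqrt
from scipy.stats import norm

n, sigma, alpha = 120, 10.5, 0.01
E = norm.isf(alpha / 2) * sigma / sqrt(n)    # z_{.005} * sigma / sqrt(n)
print(f"E = {E:.3f}")                        # about 2.47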

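For Problem 10, the sketch below compares the normal approximation used in the solution with the exact t critical value for 39 degrees of freedom; the difference is small, as claimed.

from math import sqrt
from scipy.stats import norm, t

n, s, alpha = 40, 2.68, 0.05                       # data summary from the problem
E_normal = norm.isf(alpha / 2) * s / sqrt(n)       # uses z_{.025} = 1.96
E_exact = t.isf(alpha / 2, df=n - 1) * s / sqrt(n) # uses t_{.025,39}
print(f"E (normal approx) = {E_normal:.3f}, E (exact t) = {E_exact:.3f}")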