Math 181B Homework 1 Solution

1. Write down the likelihood:
$$L(\lambda) = \prod_{i=1}^n \frac{\lambda^{X_i} e^{-\lambda}}{X_i!} = \frac{\lambda^{\sum_i X_i}\, e^{-n\lambda}}{\prod_i X_i!}$$

(a) One-sided test: $H_0: \lambda = 1$ vs $H_1: \lambda = 0.1$. The likelihood ratio is
$$LR = \frac{L(1)}{L(0.1)} = \frac{1^{\sum_i X_i}\, e^{-n}}{0.1^{\sum_i X_i}\, e^{-0.1 n}} = e^{-0.9 n}\, 10^{\sum_i X_i} = g\Big(\textstyle\sum_i X_i\Big),$$
where
$$g(t) = e^{-0.9 n}\, 10^{t}, \qquad g'(t) = e^{-0.9 n}\, 10^{t} \log 10 > 0 \quad \text{for all } t \ge 0.$$
So $g(t)$ is strictly increasing and invertible, $g^{-1}(g(t)) = t$. Therefore we can simplify the rejection region of the likelihood-ratio test, $LR = g(\sum_i X_i) \le k$, to $\sum_{i=1}^n X_i \le C = g^{-1}(k)$.

Remark: You may state without proof that $LR$ is a monotone increasing function of $\sum_i X_i$; the $g(t)$ argument only serves to convince you. Often you first come up with the conclusion, and then figure out a way to prove it (not necessary here).

Regular solution: Apply the Central Limit Theorem under $H_0$:
$$Z = n^{-1/2} \sum_{i=1}^n (X_i - 1) \xrightarrow{H_0} N(0, 1).$$
Obviously $\sum_i X_i = h(Z) = \sqrt{n}\, Z + n$, and $h(Z)$ is strictly increasing. The rejection region is then equivalent to $Z \le C' = h^{-1}(C) = h^{-1}(g^{-1}(k))$. To achieve level $\alpha$, we want $P_{H_0}(Z \le C') = \alpha$. This is the definition of $C'$ being the $\alpha$ quantile of $N(0, 1)$, usually written as $-z_{1-\alpha} = z_\alpha$.

Extra credit: $\sum_i X_i \sim_{H_0} \mathrm{Poisson}(n)$. To achieve level $\alpha$, we want $P_{H_0}(\sum_i X_i \le C) = \alpha$. This is the definition of $C$ being the $\alpha$ quantile of $\mathrm{Poisson}(n)$.
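As a sanity check (not part of the graded solution), both cutoffs in part (a) can be computed numerically. This is a minimal sketch assuming illustrative values $n = 30$ and $\alpha = 0.05$, which are not given in the problem; note that since the Poisson distribution is discrete, the exact level $\alpha$ is generally not attainable and the code steps back to stay conservative.

```python
# Exact Poisson cutoff vs. the normal approximation for part (a).
# n and alpha are illustrative choices, not from the problem.
from scipy import stats

n, alpha = 30, 0.05

# Exact: reject when sum(X_i) <= C, with sum(X_i) ~ Poisson(n) under H0.
# Take the largest C keeping P_{H0}(sum X_i <= C) <= alpha.
C = int(stats.poisson.ppf(alpha, mu=n))     # smallest c with CDF >= alpha
if stats.poisson.cdf(C, mu=n) > alpha:
    C -= 1                                  # step back to stay conservative
print("exact cutoff:", C, "attained level:", stats.poisson.cdf(C, mu=n))

# Normal approximation: reject when Z = (sum X_i - n)/sqrt(n) <= z_alpha,
# i.e. sum X_i <= n + z_alpha * sqrt(n).
z_alpha = stats.norm.ppf(alpha)
print("approximate cutoff:", n + z_alpha * n**0.5)
```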

(b) Two-sided test: $H_0: \lambda = 1$ vs $H_1: \lambda \neq 1$. MLE: $\hat\lambda = \bar X$. The likelihood ratio:
$$LR = \frac{L(1)}{L(\bar X)} = \frac{1^{\sum_i X_i}\, e^{-n}}{\bar X^{\sum_i X_i}\, e^{-n\bar X}} = e^{n(\bar X - 1)}\, \bar X^{-n\bar X} = e^{g(\bar X)}, \qquad g(t) = n(t-1) - n t \log t.$$

Regular solution: Apply the asymptotic theory for the generalized likelihood-ratio test:
$$-2\log LR = 2 n \bar X \log \bar X - 2n(\bar X - 1) \xrightarrow{H_0} \chi^2_1.$$
The rejection region is $-2\log LR \ge C$. To achieve level $\alpha$, we want $P_{H_0}(-2\log LR \ge C) = \alpha$. This is the definition of $C$ being the $1-\alpha$ quantile of $\chi^2_1$, usually written as $\chi^2_{1,\alpha}$.

Extra credit:
$$g'(t) = -n\log t\ \begin{cases} > 0, & t < 1 \\ < 0, & t > 1 \end{cases}$$
so $LR$ is first increasing then decreasing with respect to $\bar X$. Therefore the rejection region $\{LR \le k\}$ is equivalent to $\{\bar X \le a\} \cup \{\bar X \ge b\}$ for some pair $a < 1 < b$ satisfying $g(a) = g(b)$. The distribution of $\bar X$ under $H_0$ was studied in part (a).

2. Write down the likelihood:
$$L(p) = p^X (1-p)^{n-X}.$$
MLE: $\hat p = X/n$. The likelihood ratio:
$$LR = \frac{L(0.5)}{L(X/n)} = \frac{0.5^n}{(X/n)^X (1 - X/n)^{n-X}} = e^{\,n\log 0.5 - X\log X - (n-X)\log(n-X) + n\log n} = e^{g(X)},$$
where
$$g'(x) = -\log x + \log(n-x)\ \begin{cases} > 0, & x < n/2 \\ < 0, & x > n/2 \end{cases}$$
so $LR$ is first increasing then decreasing with respect to $X$. Therefore the rejection region $\{LR \le k\}$ is equivalent to $\{X \le a\} \cup \{X \ge b\}$ for some pair $a < n/2 < b$ satisfying $g(a) = g(b)$.

Extra credit: Notice $g(n-x) = g(x)$, so we may set $b = n - a$. The rejection region is then $\{X \le a\} \cup \{X \ge n-a\} = \{|X - n/2| \ge C = n/2 - a\}$. Finally, figure out $a$ from the distribution of $X$ under $H_0$.

Regular solution: Normal approximation
$$Z = \frac{X - n/2}{\sqrt{n/4}} \xrightarrow{H_0} N(0, 1).$$
The rejection region is equivalent to $|Z| \ge C$. To achieve level $\alpha$, we want $P_{H_0}(|Z| \ge C) = \alpha$. Since $N(0,1)$ is symmetric, $P_{H_0}(Z \ge C) = \alpha/2$. This is the definition of $C$ being the $1-\alpha/2$ quantile of $N(0, 1)$, usually written as $z_{\alpha/2}$.

Extra credit: $X \sim_{H_0} \mathrm{Binom}(n, 0.5)$. To achieve level $\alpha$, we want $P_{H_0}(|X - n/2| \ge n/2 - a) = \alpha$. $\mathrm{Binom}(n, 0.5)$ is also symmetric, so $P_{H_0}(X \le a) = \alpha/2$. This is the definition of $a$ being the $\alpha/2$ quantile of $\mathrm{Binom}(n, 0.5)$.
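The same kind of numerical check works for the two-sided binomial test just derived. The sketch below assumes illustrative values $n = 40$ and $\alpha = 0.05$ (not from the problem) and compares the exact symmetric cutoff with the normal approximation:

```python
# Exact two-sided binomial cutoff vs. the normal approximation (problem 2).
# n and alpha are illustrative choices, not from the problem.
from scipy import stats

n, alpha = 40, 0.05

# Exact: reject when X <= a or X >= n - a, with X ~ Binom(n, 0.5) under H0.
# By symmetry we need the largest a with P(X <= a) <= alpha/2.
a = int(stats.binom.ppf(alpha / 2, n, 0.5))
if stats.binom.cdf(a, n, 0.5) > alpha / 2:
    a -= 1                                  # stay conservative (discrete dist.)
level = 2 * stats.binom.cdf(a, n, 0.5)      # attained two-sided level
print("reject when X <=", a, "or X >=", n - a, "; level ~", level)

# Normal approximation: reject when |X - n/2| / sqrt(n/4) >= z_{alpha/2}.
z = stats.norm.ppf(1 - alpha / 2)
print("approximate cutoff on |X - n/2|:", z * (n / 4) ** 0.5)
```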

3. Write down the likelihood:
$$L(\mu, \sigma^2) = \prod_{i=1}^n \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(X_i-\mu)^2}{2\sigma^2}} = (2\pi\sigma^2)^{-n/2}\, e^{-\frac{1}{2\sigma^2}\sum_i (X_i-\mu)^2}$$
Log-likelihood:
$$\ell(\mu, \sigma^2) = -\frac{n}{2}\log(\sigma^2) - \frac{n}{2}\log(2\pi) - \frac{1}{2\sigma^2}\sum_i (X_i - \mu)^2$$

(a) Regular solution: First simplify the null as $H_0: \sigma^2 = \sigma_0^2$. The full parameter space is now $\sigma^2 \ge \sigma_0^2$.

MLE: First take the partial derivative with respect to the unconstrained parameter $\mu$:
$$\frac{\partial \ell}{\partial \mu} = \frac{n}{\sigma^2}(\bar X - \mu)$$
The solution of the equation is always $\hat\mu = \bar X$ for any $\sigma^2 > 0$. Therefore $\hat\mu = \bar X$ is the MLE in both $\Theta_0$ and $\Theta_0 \cup \Theta_1$. The MLE for $\sigma^2$ is then the maximizer of the partial likelihood
$$\ell^*(\sigma^2) = \ell(\bar X, \sigma^2) = -\frac{n}{2}\log(\sigma^2) - \frac{n}{2}\log(2\pi) - \frac{1}{2\sigma^2}\sum_i (X_i - \bar X)^2$$
Under $H_0$, $\sigma^2 = \sigma_0^2$. Under $H_0 + H_1$, compute the derivative of $\ell^*$:
$$\frac{d\ell^*}{d\sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_i (X_i - \bar X)^2 = \frac{n}{2\sigma^4}\Big\{ n^{-1}\sum_i (X_i - \bar X)^2 - \sigma^2 \Big\}$$
Denote the unconstrained MLE
$$\hat\sigma^2 = n^{-1}\sum_i (X_i - \bar X)^2$$
$\hat\sigma^2$ depends on the observed data, so it is random whether $\hat\sigma^2$ belongs to the full parameter space $[\sigma_0^2, +\infty)$. We have to discuss the MLE in two cases.

Case 1: If $\hat\sigma^2 \ge \sigma_0^2$, then the MLE is the unconstrained MLE $\hat\sigma^2$. The likelihood ratio is
$$LR = \frac{L(\bar X, \sigma_0^2)}{L(\bar X, \hat\sigma^2)} = \Big(\frac{\sigma_0^2}{\hat\sigma^2}\Big)^{-n/2} \exp\Big( -\frac{1}{2\sigma_0^2}\sum_i (X_i - \bar X)^2 + \frac{1}{2\hat\sigma^2}\sum_i (X_i - \bar X)^2 \Big) = \Big(\frac{\hat\sigma^2}{\sigma_0^2}\Big)^{n/2} \exp\Big( -\frac{n\hat\sigma^2}{2\sigma_0^2} + \frac{n}{2} \Big) = e^{g(\hat\sigma^2/\sigma_0^2)}$$
where
$$g(t) = \frac{n}{2}\log t - \frac{n}{2} t + \frac{n}{2}, \quad t \ge 1; \qquad g'(t) = \frac{n}{2t} - \frac{n}{2} \le 0, \quad t \ge 1.$$
The rejection region $\{LR \le k\}$ in Case 1 is thus equivalent to $\{\hat\sigma^2 \ge \sigma_0^2\} \cap \{n\hat\sigma^2/\sigma_0^2 \ge C\}$ for some constant $C$.

Case 2: If $\hat\sigma^2 < \sigma_0^2$, then by the derivative of the partial log-likelihood,
$$\frac{d\ell^*}{d\sigma^2} < 0, \qquad \sigma^2 > \sigma_0^2 > \hat\sigma^2,$$
$\ell^*(\sigma^2)$ is strictly decreasing over the parameter space $[\sigma_0^2, +\infty)$. The maximizer is then the left boundary $\sigma_0^2$. The likelihood ratio is
$$LR = \frac{L(\bar X, \sigma_0^2)}{L(\bar X, \sigma_0^2)} = 1$$
If $\{LR = 1\}$ belongs to the rejection region defined by $\{LR \le k\}$, we must have $k \ge 1$. However, $LR$ is always no greater than 1, so the type I error is $P_{H_0}(LR \le k) \ge P_{H_0}(LR \le 1) = 1 > \alpha$. Therefore $H_0$ is always accepted in Case 2 to achieve any level $\alpha < 1$.

In summary, rejection can only happen in Case 1, and $\hat\sigma^2 \ge \sigma_0^2$ means $n\hat\sigma^2/\sigma_0^2 \ge n$, so the rejection region is $\{ n\hat\sigma^2/\sigma_0^2 \ge \max(n, C) \}$. Under $H_0$,
$$\frac{n\hat\sigma^2}{\sigma_0^2} = \frac{1}{\sigma_0^2}\sum_i (X_i - \bar X)^2 \sim \chi^2_{n-1}$$
To achieve a level-$\alpha$ test, we want
$$P_{H_0}\Big( \frac{n\hat\sigma^2}{\sigma_0^2} \ge \max(n, C) \Big) = \alpha$$
This is the definition of $\max(n, C)$ being the $1-\alpha$ quantile of $\chi^2_{n-1}$, usually written as $\chi^2_{n-1,\alpha}$. Note the mean of $\chi^2_{n-1}$ is $n-1$; thus its $1-\alpha$ quantile is greater than $n$ when $\alpha$ is small.
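For concreteness, the resulting test is easy to package as a function. This is a minimal sketch; the data, $\sigma_0^2$, and $\alpha$ below are illustrative assumptions, not values from the problem:

```python
# One-sided variance test from problem 3(a):
# reject H0 when n * sigma_hat^2 / sigma0^2 >= max(n, chi2_{n-1} quantile).
import numpy as np
from scipy import stats

def reject_h0(x, sigma0_sq, alpha=0.05):
    n = len(x)
    sigma_hat_sq = np.mean((x - np.mean(x)) ** 2)   # unconstrained MLE
    stat = n * sigma_hat_sq / sigma0_sq             # ~ chi2_{n-1} under H0
    cutoff = max(n, stats.chi2.ppf(1 - alpha, df=n - 1))
    return stat >= cutoff

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=50)         # simulated sample
print(reject_h0(x, sigma0_sq=1.0))                  # likely True: true var 2.25 > 1
```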

(b) Extra credit solution: Do not do the simplification. MLE: Exactly as in part (a), $\hat\mu = \bar X$ for any $\sigma^2 > 0$, so $\hat\mu = \bar X$ is the MLE in both $\Theta_0$ and $\Theta_0 \cup \Theta_1$, and the MLE for $\sigma^2$ again maximizes the partial likelihood $\ell^*(\sigma^2) = \ell(\bar X, \sigma^2)$. Under $H_0 + H_1$, $\hat\sigma^2 = n^{-1}\sum_i (X_i - \bar X)^2$ is the unconstrained MLE. Under $H_0$, compute the derivative of $\ell^*$:
$$\frac{d\ell^*}{d\sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_i (X_i - \bar X)^2 = \frac{n}{2\sigma^4}\Big\{ n^{-1}\sum_i (X_i - \bar X)^2 - \sigma^2 \Big\}$$
$\hat\sigma^2$ depends on the observed data, so it is random whether $\hat\sigma^2$ belongs to the null parameter space $(0, \sigma_0^2]$. We have to discuss the MLE in two cases.

Case 1: If $\hat\sigma^2 \ge \sigma_0^2$, then by the derivative of the partial log-likelihood,
$$\frac{d\ell^*}{d\sigma^2} > 0, \qquad \sigma^2 \le \sigma_0^2 \le \hat\sigma^2,$$
$\ell^*(\sigma^2)$ is strictly increasing over the null parameter space $(0, \sigma_0^2]$. The maximizer is then the right boundary $\sigma_0^2$. The likelihood ratio is
$$LR = \frac{L(\bar X, \sigma_0^2)}{L(\bar X, \hat\sigma^2)} = \Big(\frac{\hat\sigma^2}{\sigma_0^2}\Big)^{n/2} \exp\Big( -\frac{n\hat\sigma^2}{2\sigma_0^2} + \frac{n}{2} \Big) = e^{g(\hat\sigma^2/\sigma_0^2)}$$
with the same
$$g(t) = \frac{n}{2}\log t - \frac{n}{2} t + \frac{n}{2}, \quad t \ge 1; \qquad g'(t) = \frac{n}{2t} - \frac{n}{2} \le 0, \quad t \ge 1.$$
The rejection region $\{LR \le k\}$ in Case 1 is thus equivalent to $\{\hat\sigma^2 \ge \sigma_0^2\} \cap \{n\hat\sigma^2/\sigma_0^2 \ge C\}$.

Case 2: If $\hat\sigma^2 < \sigma_0^2$, then the MLE under $H_0$ is the unconstrained MLE $\hat\sigma^2$. The likelihood ratio is
$$LR = \frac{L(\bar X, \hat\sigma^2)}{L(\bar X, \hat\sigma^2)} = 1$$
If $\{LR = 1\}$ belongs to the rejection region defined by $\{LR \le k\}$, we must have $k \ge 1$. However, $LR$ is always no greater than 1, so the type I error is $P_{H_0}(LR \le k) \ge P_{H_0}(LR \le 1) = 1 > \alpha$. Therefore $H_0$ is always accepted in Case 2 to achieve any level $\alpha < 1$.

In summary, the rejection region is $\{ n\hat\sigma^2/\sigma_0^2 \ge \max(n, C) \}$. Under $H_0$ the true variance may now be any $\sigma^2 \le \sigma_0^2$, and
$$\frac{n\hat\sigma^2}{\sigma^2} = \frac{1}{\sigma^2}\sum_i (X_i - \bar X)^2 \sim \chi^2_{n-1}$$
To achieve a level-$\alpha$ test, we want
$$\max_{\sigma^2 \le \sigma_0^2} P_{\sigma^2}\Big( \frac{n\hat\sigma^2}{\sigma_0^2} \ge \max(n, C) \Big) = \max_{\sigma^2 \le \sigma_0^2} P_{\sigma^2}\Big( \frac{n\hat\sigma^2}{\sigma^2} \ge \frac{\sigma_0^2}{\sigma^2}\max(n, C) \Big) = \alpha$$
For any $\sigma^2$, the statistic $n\hat\sigma^2/\sigma^2$ has the same distribution $\chi^2_{n-1}$. The smallest cutoff $\frac{\sigma_0^2}{\sigma^2}\max(n, C)$ is achieved at $\sigma^2 = \sigma_0^2$, which corresponds to the biggest type I error:
$$\max_{\sigma^2 \le \sigma_0^2} P_{\sigma^2}\Big( \frac{n\hat\sigma^2}{\sigma_0^2} \ge \max(n, C) \Big) = P_{\sigma_0^2}\Big( \frac{n\hat\sigma^2}{\sigma_0^2} \ge \max(n, C) \Big) = \alpha$$
This is the definition of $\max(n, C)$ being the $1-\alpha$ quantile of $\chi^2_{n-1}$, usually written as $\chi^2_{n-1,\alpha}$, so the test coincides with the one in part (a). Note the mean of $\chi^2_{n-1}$ is $n-1$; thus its $1-\alpha$ quantile is greater than $n$ when $\alpha$ is small.
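A quick Monte Carlo check illustrates the worst-case argument above: the type I error grows toward $\alpha$ as $\sigma^2$ approaches the boundary $\sigma_0^2$. The values of $n$, $\alpha$, $\sigma_0^2$, and the grid of null variances are illustrative assumptions:

```python
# Monte Carlo sketch of 3(b): the type I error of the region
# {n * sigma_hat^2 / sigma0^2 >= max(n, chi2_{n-1,alpha})} is largest
# at the boundary sigma^2 = sigma0^2. All values are illustrative.
import numpy as np
from scipy import stats

n, alpha, sigma0_sq, reps = 50, 0.05, 1.0, 20000
cutoff = max(n, stats.chi2.ppf(1 - alpha, df=n - 1))
rng = np.random.default_rng(1)

for sigma_sq in [0.25, 0.5, 0.75, 1.0]:             # values inside the null
    x = rng.normal(0.0, sigma_sq ** 0.5, size=(reps, n))
    sigma_hat_sq = x.var(axis=1)                    # MLE (ddof=0 by default)
    err = np.mean(n * sigma_hat_sq / sigma0_sq >= cutoff)
    print(f"sigma^2 = {sigma_sq}: type I error ~ {err:.4f}")
```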

4. Extra credit: Do the transformation $X_i - Y_i = Z_i \sim N(\mu_X - \mu_Y,\; 2(1-\rho)\sigma^2)$. An affine (linear) transformation of a multivariate normal distribution is normal, and its parameters are given by its mean and variance:
$$EZ = EX - EY = \mu_X - \mu_Y, \qquad \mathrm{Var}\,Z = \mathrm{Var}\,X + \mathrm{Var}\,Y - 2\,\mathrm{Cov}(X, Y) = 2(1-\rho)\sigma^2$$
Write $\nu = \mu_X - \mu_Y$, $\tau^2 = 2(1-\rho)\sigma^2$. Assume $\rho \in (-1, 1)$; then $\tau^2 \in (0, 4\sigma^2)$. Write down the likelihood:
$$L(\nu, \tau^2) = \prod_{i=1}^n \frac{1}{\tau\sqrt{2\pi}}\, e^{-\frac{(Z_i-\nu)^2}{2\tau^2}} = (2\pi\tau^2)^{-n/2}\, e^{-\frac{1}{2\tau^2}\sum_i (Z_i-\nu)^2}$$
Log-likelihood:
$$\ell(\nu, \tau^2) = -\frac{n}{2}\log(\tau^2) - \frac{n}{2}\log(2\pi) - \frac{1}{2\tau^2}\sum_i (Z_i - \nu)^2$$

(a) MLE: Under $H_0$, $\nu = 0$ and
$$\frac{d}{d\tau^2}\,\ell(0, \tau^2) = -\frac{n}{2\tau^2} + \frac{1}{2\tau^4}\sum_i Z_i^2 = \frac{n}{2\tau^4}\Big\{ n^{-1}\sum_i Z_i^2 - \tau^2 \Big\}$$
Remember there is the constraint $\tau^2 < 4\sigma^2$, so we have to discuss the MLE for $\tau^2$ in two cases. Denote the unconstrained MLE
$$\hat\tau_0^2 = n^{-1}\sum_i Z_i^2$$
$\hat\tau_0^2$ depends on the observed data, so it is random whether $\hat\tau_0^2$ belongs to the parameter space $(0, 4\sigma^2)$. Following the same case-wise argument as in the previous problem, you will find the MLE under $H_0$ is $\min(4\sigma^2, \hat\tau_0^2)$. Under $H_0 + H_1$, first solve $\hat\nu = \bar Z$, and denote the unconstrained MLE
$$\hat\tau^2 = n^{-1}\sum_i (Z_i - \bar Z)^2$$
Similarly, the constrained MLE for $\tau^2$ is $\min(4\sigma^2, \hat\tau^2)$. Therefore the likelihood-ratio statistic is
$$\Lambda = \frac{L\big(0,\; \min(4\sigma^2, \hat\tau_0^2)\big)}{L\big(\bar Z,\; \min(4\sigma^2, \hat\tau^2)\big)}$$
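The statistic $\Lambda$ can be computed directly from paired data. The sketch below is illustrative only: the known marginal variance `sigma_sq` and the simulated correlated pairs are assumptions, not values from the problem.

```python
# Computing the likelihood-ratio statistic Lambda from 4(a) on paired data.
import numpy as np

def log_likelihood(z, nu, tau_sq):
    # Normal log-likelihood of differences z with mean nu, variance tau_sq.
    n = len(z)
    return -0.5 * n * np.log(2 * np.pi * tau_sq) - np.sum((z - nu) ** 2) / (2 * tau_sq)

def lambda_stat(x, y, sigma_sq):
    z = x - y
    tau0_sq = np.mean(z ** 2)                 # unconstrained MLE under H0
    tau_sq = np.mean((z - z.mean()) ** 2)     # unconstrained MLE under H0 + H1
    num = log_likelihood(z, 0.0, min(4 * sigma_sq, tau0_sq))
    den = log_likelihood(z, z.mean(), min(4 * sigma_sq, tau_sq))
    return np.exp(num - den)                  # Lambda <= 1 by construction

rng = np.random.default_rng(2)
x = rng.normal(1.0, 1.0, size=30)
y = 0.5 * x + rng.normal(0.0, 0.8, size=30)   # correlated pair (illustrative)
print(lambda_stat(x, y, sigma_sq=1.0))
```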

(b) You may use $-2\log LR \to \chi^2_1$, or follow the large-sample argument: under $H_0$, both unconstrained MLEs $\hat\tau_0^2$ and $\hat\tau^2$ converge in probability to the true $\tau^2 \in (0, 4\sigma^2)$ by the Law of Large Numbers. When the sample size is large, we only need to consider the case $\hat\tau^2 < \hat\tau_0^2 < 4\sigma^2$, in which
$$\Lambda = \frac{L(0, \hat\tau_0^2)}{L(\bar Z, \hat\tau^2)} = \Big(\frac{\hat\tau_0^2}{\hat\tau^2}\Big)^{-n/2}$$
It is a monotone decreasing function of $\hat\tau_0^2/\hat\tau^2$. Under $H_0$,
$$F = \Big(\frac{\hat\tau_0^2}{\hat\tau^2} - 1\Big)(n-1) = \frac{n\bar Z^2}{\sum_i (Z_i - \bar Z)^2/(n-1)} \sim F_{1,n-1}$$
To achieve level $\alpha$, the rejection region is
$$\frac{n\bar Z^2}{\sum_i (Z_i - \bar Z)^2/(n-1)} \ge F_{1-\alpha,1,n-1}$$
where $F_{1-\alpha,1,n-1}$ is the $1-\alpha$ quantile of $F_{1,n-1}$.

(c) It is equivalent to the paired t-test
$$|T| = \frac{|\bar Z|}{\sqrt{s^2/n}} \ge t_{\alpha/2,n-1}, \qquad s^2 = \frac{1}{n-1}\sum_i (Z_i - \bar Z)^2,$$
since $T^2 = F$ and $t^2_{\alpha/2,n-1} = F_{1-\alpha,1,n-1}$. Read Larsen and Marx section 1..

5. Extra credit: Read Larsen and Marx Appendix 9.A.1, page 488.
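Returning to 4(c), both identities used there ($F = T^2$ and $t^2_{\alpha/2,n-1} = F_{1-\alpha,1,n-1}$) are easy to confirm numerically. The values $n = 25$, $\alpha = 0.05$, and the simulated differences are illustrative assumptions:

```python
# Numerical check of the t/F equivalence in 4(c).
import numpy as np
from scipy import stats

n, alpha = 25, 0.05
rng = np.random.default_rng(3)
z = rng.normal(0.3, 1.0, size=n)              # differences Z_i = X_i - Y_i

s_sq = z.var(ddof=1)                          # sample variance, n-1 denominator
T = z.mean() / np.sqrt(s_sq / n)
F = n * z.mean() ** 2 / s_sq
print(np.isclose(T ** 2, F))                  # True: F = T^2

t_cut = stats.t.ppf(1 - alpha / 2, df=n - 1)
F_cut = stats.f.ppf(1 - alpha, dfn=1, dfd=n - 1)
print(np.isclose(t_cut ** 2, F_cut))          # True: the quantiles match
```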