MATH 472 / SPRING 2013 ASSIGNMENT 2: DUE FEBRUARY 4 FINALIZED

Please include a cover sheet that provides a complete sentence answer to each of the following three questions:

(a) In your opinion, what were the main ideas covered in the assignment?

(b) Were there any topics that you believe should have been covered, but were not?

(c) What problems did you have with the assignment, if any?

Write your solutions carefully and neatly, preferably in complete and grammatically correct sentences. Justify all of your arguments.

1. Question 5.7.2, page 333.
2. Question 5.4.17, page 319.
3. Question 5.4.18, page 320.
4. Question 5.5.2, page 323.
5. Question 5.6.6, page 330.
6. Question 5.6.7, page 330.
7. Question 5.3.1, page 309.
8. Question 5.3.7, page 310.

Solutions to Selected Problems

Solution (5.3.7). The true mean either does or does not lie in an interval of the given form (involving an estimate $\bar{y}$). But if we consider a random interval of the form
$$I(\bar{Y}) = \left( \bar{Y} - 0.96\,\frac{\sigma}{\sqrt{n}},\ \bar{Y} + 1.06\,\frac{\sigma}{\sqrt{n}} \right),$$
then it is a valid question to ask, "What is the probability that $\mu$ lies in this interval?" Using the fact that $Z = \dfrac{\bar{Y} - \mu}{\sigma/\sqrt{n}}$ is a standard normal random variable, we have
$$P\left( \mu \in \left( \bar{Y} - 0.96\,\frac{\sigma}{\sqrt{n}},\ \bar{Y} + 1.06\,\frac{\sigma}{\sqrt{n}} \right) \right) = P\left( -1.06 < \frac{\bar{Y} - \mu}{\sigma/\sqrt{n}} < 0.96 \right) = P(-1.06 < Z < 0.96) = 0.8315 - 0.1446 = 0.6869.$$
So the probability that $\mu$ lies in the interval $I(\bar{Y})$ is about $p = 0.687$. We now view each of our samples $\bar{Y}$ as a Bernoulli trial, where $\mu$ lies in $I(\bar{Y})$ with success probability $p = 0.6869$. Let $X$ be the number of successes out of $N = 5$ experiments. Then $X \sim B(N, p)$ is binomially distributed, and we have
$$P(\mu \text{ lies in at least 4 intervals } I(\bar{y})) = P(X \ge 4) = P(X = 4) + P(X = 5) = \binom{5}{4} p^4 (1-p)^1 + \binom{5}{5} p^5 (1-p)^0 = 0.5014.$$

Solution (5.4.18). We want to compare the variances of $\hat{\theta}_1 = \frac{6}{5} Y_{\max}$ and $\hat{\theta}_2 = 6 Y_{\min}$, the unbiased estimators of $\theta$ built from a sample of size $n = 5$ from the uniform distribution on $[0, \theta]$. Using Theorem 3.10.1, we show that
$$f_{Y_{\max}}(y) = \frac{5y^4}{\theta^5}, \qquad 0 \le y \le \theta.$$
We have
$$E(Y_{\max}) = \int_0^\theta y \cdot \frac{5y^4}{\theta^5}\, dy = \frac{5}{6}\theta \qquad \text{and} \qquad E(Y_{\max}^2) = \int_0^\theta y^2 \cdot \frac{5y^4}{\theta^5}\, dy = \frac{5}{7}\theta^2,$$
so that
$$\operatorname{Var}(Y_{\max}) = E(Y_{\max}^2) - E(Y_{\max})^2 = \left( \frac{5}{7} - \frac{25}{36} \right) \theta^2 = \frac{5}{252}\theta^2.$$
Hence
$$\operatorname{Var}(\hat{\theta}_1) = \frac{36}{25}\operatorname{Var}(Y_{\max}) = \frac{36}{25} \cdot \frac{5}{252}\theta^2 = \frac{1}{35}\theta^2.$$
By the symmetry of the pdf of $Y$ about its mean, we must have $\operatorname{Var}(Y_{\min}) = \operatorname{Var}(Y_{\max})$. Hence
$$\operatorname{Var}(\hat{\theta}_2) = 36\operatorname{Var}(Y_{\min}) = 36 \cdot \frac{5}{252}\theta^2 = \frac{5}{7}\theta^2 > \operatorname{Var}(\hat{\theta}_1).$$
Hence $\hat{\theta}_1$ is a more efficient estimator for the parameter $\theta$.
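Both computations are easy to sanity-check numerically. The following Python sketch (an illustrative check, not part of the assigned solution; the values $\theta = 1$ and the trial count are arbitrary choices for the demo) recomputes $p$ and $P(X \ge 4)$ from the normal CDF and estimates the two variances by Monte Carlo:

```python
import math
import random

# --- Solution 5.3.7: recompute p and P(X >= 4) exactly ---

def phi(z: float) -> float:
    """Standard normal CDF, written in terms of the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p = phi(0.96) - phi(-1.06)  # P(-1.06 < Z < 0.96)
prob = sum(math.comb(5, k) * p**k * (1 - p) ** (5 - k) for k in (4, 5))
print(f"p = {p:.4f}, P(X >= 4) = {prob:.4f}")  # ~0.6869 and ~0.501

# --- Solution 5.4.18: Monte Carlo variances of the two estimators ---
# Sample n = 5 points from Uniform[0, theta]; compare theta1_hat = (6/5)*Y_max
# with theta2_hat = 6*Y_min against the derived variances.
random.seed(0)
theta, n, trials = 1.0, 5, 200_000
est1, est2 = [], []
for _ in range(trials):
    ys = [random.uniform(0, theta) for _ in range(n)]
    est1.append(6 / 5 * max(ys))
    est2.append(6 * min(ys))

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(variance(est1), theta**2 / 35)     # both ~0.0286
print(variance(est2), 5 * theta**2 / 7)  # both ~0.7143
```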

Solution (5.5.2). To show that the estimator $\hat{\lambda} = \frac{1}{n}\sum X_i$ is EFFICIENT, we need to prove that it is unbiased and that its variance agrees with the Cramér-Rao lower bound. It is clearly an unbiased estimator for $\lambda$:
$$E(\hat{\lambda}) = \frac{1}{n}\sum E(X_i) = \lambda.$$
Its variance is
$$\operatorname{Var}(\hat{\lambda}) = \frac{1}{n^2}\sum \operatorname{Var}(X_i) = \frac{\lambda}{n}.$$
For the Cramér-Rao lower bound, we have (writing $f_X(k;\lambda) = e^{-\lambda}\lambda^k/k!$ for the Poisson pmf)
$$\frac{\partial^2}{\partial\lambda^2}\ln f_X(k;\lambda) = \frac{\partial^2}{\partial\lambda^2}\left( -\lambda + k\ln\lambda - \ln k! \right) = -\frac{k}{\lambda^2},$$
so
$$E\left( \frac{X}{\lambda^2} \right) = \frac{1}{\lambda^2}E(X) = \frac{1}{\lambda},$$
and
$$\text{Cramér-Rao:} \qquad \left\{ -nE\left[ \frac{\partial^2 \ln f_X(k;\lambda)}{\partial\lambda^2} \right] \right\}^{-1} = \left\{ n \cdot \frac{1}{\lambda} \right\}^{-1} = \frac{\lambda}{n}.$$
As this bound agrees with the variance of $\hat{\lambda}$, we conclude that $\hat{\lambda}$ is an efficient estimator.

Solution (5.6.6). To see if $W = \prod Y_i$ is a sufficient statistic, we apply the factorization theorem to show that the likelihood function may be written as $L(\theta) = g(w;\theta)\,h(y_1, \ldots, y_n)$, where $h$ does not depend on $\theta$ at all. Note that $g$ is only allowed to depend on $\theta$ and on the estimator $W$. The likelihood function is
$$L(\theta) = \prod_{i=1}^n f_Y(y_i;\theta) = \prod_{i=1}^n \theta y_i^{\theta - 1} = \theta^n \Big( \prod y_i \Big)^{\theta - 1} = \left[ \theta^n w^\theta \right] \prod y_i^{-1} = g(w;\theta)\,h(y_1, \ldots, y_n), \qquad \text{with } w = \prod y_i,$$
where $g = \theta^n w^\theta$ and $h = \prod y_i^{-1}$ have the desired properties. This shows that $W$ is a sufficient estimator.

Now let's compute the maximum likelihood estimator. In this case, the pdf is supported on the interval $[0, 1]$, which DOES NOT depend on the parameter $\theta$. So calculus will most likely find the estimator we want. We already computed the likelihood function above, and
$$\ln L(\theta) = n\ln\theta + \theta\ln w - \ln w, \qquad \frac{\partial}{\partial\theta}\ln L(\theta) = \frac{n}{\theta} + \ln w.$$
Critical point: $\theta = -\dfrac{n}{\ln w}$. The second derivative satisfies $\dfrac{\partial^2}{\partial\theta^2}\ln L(\theta) = -\dfrac{n}{\theta^2} < 0$, so that any critical point is automatically a local max. Hence $\hat{\theta} = -\dfrac{n}{\ln W}$ is the maximum likelihood estimator, and it is a function of the sufficient estimator $W$.
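Both claims lend themselves to a quick simulation. The sketch below (illustrative only; the parameter values $\lambda = 3$, $\theta = 2.5$ and the sample sizes are arbitrary demo choices) checks that the empirical variance of $\hat{\lambda}$ matches $\lambda/n$, and that $\hat{\theta} = -n/\ln W$ recovers $\theta$ when the $Y_i$ are drawn from $f_Y(y;\theta) = \theta y^{\theta-1}$ by inverse transform:

```python
import math
import random

random.seed(0)

# --- Solution 5.5.2: Var(lambda_hat) should match the bound lambda/n ---

def poisson_sample(lam: float) -> int:
    """Knuth's method: count uniform draws until their product falls below e^{-lam}."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

lam, n, trials = 3.0, 40, 20_000
ests = [sum(poisson_sample(lam) for _ in range(n)) / n for _ in range(trials)]
mean = sum(ests) / trials
var = sum((e - mean) ** 2 for e in ests) / trials
print(mean, var, lam / n)  # mean ~3.0; var ~0.075 = lambda/n

# --- Solution 5.6.6: the MLE theta_hat = -n / ln(W) recovers theta ---
# The CDF of f_Y(y; theta) = theta * y^(theta-1) on [0, 1] is F(y) = y^theta,
# so Y = U^(1/theta) is an inverse-transform sample.
theta, n = 2.5, 1_000
ys = [(1.0 - random.random()) ** (1 / theta) for _ in range(n)]
log_w = sum(math.log(y) for y in ys)  # ln W = sum of ln y_i
print(-n / log_w)                     # should be close to 2.5
```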

Solution (5.6.7). (a) As in the last problem, we will show that $\hat{\theta} = Y_{\min}$ is sufficient for $\theta$ by factoring its likelihood function. Since the pdf is nonzero on the interval $[\theta, \infty)$, which depends on $\theta$, an indicator function will help us to understand the finer properties of the likelihood function. To that end, observe that
$$\prod_{i=1}^n I_{[\theta,\infty)}(y_i) = I_{[\theta,\infty)}(y_{\min}),$$
since $y_1, \ldots, y_n \ge \theta$ if and only if the smallest of the $y_i$'s is at least $\theta$. Hence
$$L(\theta) = \prod_{i=1}^n f_Y(y_i;\theta) = \prod_{i=1}^n e^{-(y_i - \theta)} I_{[\theta,\infty)}(y_i) = I_{[\theta,\infty)}(y_{\min}) \exp\Big( n\theta - \sum y_i \Big) = \left[ e^{n\theta} I_{[\theta,\infty)}(y_{\min}) \right] \exp\Big( -\sum y_i \Big) = g(y_{\min};\theta)\,h(y_1, \ldots, y_n),$$
where $g = e^{n\theta} I_{[\theta,\infty)}(y_{\min})$ and $h = \exp\left( -\sum y_i \right)$ have the desired properties. This factorization shows that $Y_{\min}$ is a sufficient estimator.

(b) To show that $Y_{\max}$ is NOT a sufficient estimator, we cannot use the factorization theorem directly. Indeed, we cannot present ALL possible factorizations of $L(\theta)$, so we cannot know if there is one lurking about that shows $Y_{\max}$ is sufficient. Instead, suppose that $Y_{\max}$ IS a sufficient estimator. Then the factorization theorem would say that $L(\theta) = g(y_{\max};\theta)\,h(y_1, \ldots, y_n)$, so once we FIX the value of $y_{\max}$, the way $L$ depends on $\theta$ should be completely determined. But we already computed $L(\theta)$:
$$L(\theta) = \left[ e^{n\theta} I_{[\theta,\infty)}(y_{\min}) \right] \exp\Big( -\sum y_i \Big).$$
If we fix $y_{\max} > \theta$, then $L(\theta) = 0$ if and only if $\theta > y_{\min}$, a condition that clearly depends on $y_{\min}$, which can vary even among samples sharing the same $y_{\max}$. So $Y_{\max}$ cannot be a sufficient estimator.
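A concrete way to see both parts (a small illustration with made-up data, not part of the assigned solution): samples with the same minimum produce likelihoods with identical $\theta$-dependence, while samples with the same maximum need not.

```python
import math

def likelihood(theta: float, ys: list) -> float:
    """L(theta) = exp(n*theta - sum(ys)) when theta <= min(ys), else 0."""
    if theta > min(ys):
        return 0.0
    return math.exp(len(ys) * theta - sum(ys))

# Part (a): two samples with the SAME minimum but different other values.
# Normalizing at theta = 0 strips off h(y_1, ..., y_n); what remains depends
# on the data only through y_min, so the two columns printed are identical.
a = [1.0, 2.0, 5.0]
b = [1.0, 3.5, 4.0]
for t in (0.25, 0.5, 0.9):
    print(likelihood(t, a) / likelihood(0.0, a),
          likelihood(t, b) / likelihood(0.0, b))

# Part (b): two samples with the SAME maximum but different minima.
# One likelihood vanishes at theta = 0.9 while the other does not, so the
# theta-dependence cannot be a function of y_max alone.
c = [0.5, 2.0, 5.0]
print(likelihood(0.9, a) > 0.0, likelihood(0.9, c) > 0.0)  # True False
```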

Solution (5.7.2). To show that $S_n^2 = \frac{1}{n}\sum_{i=1}^n Y_i^2$ is a consistent sequence of estimators for $\sigma^2 = \operatorname{Var}(Y)$, we fix $\varepsilon > 0$ and show that $P\left( |S_n^2 - \sigma^2| \ge \varepsilon \right) \to 0$ as $n \to \infty$.

We will apply Chebyshev's inequality to show this. Set $W = S_n^2$. Since $E(Y) = 0$ in this problem,
$$E(W) = E(Y^2) = E(Y^2) - E(Y)^2 = \sigma^2,$$
and
$$\operatorname{Var}(W) = \operatorname{Var}\Big( \frac{1}{n}\sum Y_i^2 \Big) = \frac{1}{n^2}\operatorname{Var}\Big( \sum Y_i^2 \Big) = \frac{\operatorname{Var}(Y^2)}{n},$$
where we have used the independence of the $Y_i$ to conclude that $\operatorname{Var}\left( \sum Y_i^2 \right) = \sum \operatorname{Var}(Y_i^2)$. Since $Y$ is normally distributed, the variance of $Y^2$ is finite. Applying Chebyshev's inequality gives
$$P\left( |S_n^2 - \sigma^2| \ge \varepsilon \right) = P\left( |W - E(W)| \ge \varepsilon \right) \le \frac{\operatorname{Var}(W)}{\varepsilon^2} = \frac{\operatorname{Var}(Y^2)}{n\varepsilon^2} \to 0 \quad \text{as } n \to \infty.$$
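Consistency is also visible in simulation. The sketch below (illustrative only; $\sigma = 2$, $\varepsilon = 1$, and the trial counts are arbitrary demo choices under the mean-zero normal setup above) tracks how often $S_n^2$ misses $\sigma^2$ by at least $\varepsilon$ as $n$ grows:

```python
import random

# Empirical check: the proportion of trials with |S_n^2 - sigma^2| >= eps
# should shrink toward 0 as n increases, as Chebyshev's bound predicts.
random.seed(0)
sigma, eps, trials = 2.0, 1.0, 2_000

for n in (10, 100, 1_000):
    misses = 0
    for _ in range(trials):
        s2 = sum(random.gauss(0.0, sigma) ** 2 for _ in range(n)) / n
        if abs(s2 - sigma**2) >= eps:
            misses += 1
    print(n, misses / trials)  # roughly 0.57, 0.08, 0.0
```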