MATHEMATICAL STATISTICS
Take-home final examination, February 1st - February 8th, 2019

Instructions: You do not need to edit the solutions; just make sure the handwriting is legible. The final solutions should be your own work. The deadline for completion is February 8th, 2019, by 4pm. Turn in your solutions to Petra Vranješ or Lidija Urek, or send me a scanned version of the solutions. For any questions contact me by e-mail or call me at +386 41 75 497.

Statement: With my signature I confirm that the solutions are the product of my own work.

Name: ____________________ Signature: ____________________

1. (25) In a population of size N there are three types of units: A, B and C. We would like to estimate the proportions a, b and c of these units. When a unit is chosen it does not necessarily respond truthfully but chooses one of the three types at random: if a unit is of type X ∈ {A, B, C}, it will respond that it is of type Y ∈ {A, B, C} with probability p_{XY}. Assume that the probabilities p_{XY} are known. We choose a simple random sample of size n. Assume that the units choose their responses independently of each other and independently of the sampling procedure.

a. Let N_X be the number of units in the sample of type X, and M_X the number of units in the sample who respond X, for X ∈ {A, B, C}. Compute E(M_X | N_A, N_B, N_C).

b. Suggest unbiased estimates for a, b and c. When is it possible to estimate the proportions?

c. Compute cov(M_X, M_Y | N_A, N_B, N_C) for X, Y ∈ {A, B, C}.

d. Give standard errors for the unbiased estimates of a, b and c.
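The estimation idea in Problem 1 can be checked with a small simulation. The sketch below is illustrative only: the misclassification matrix P and the true proportions are hypothetical values, and it samples types iid (ignoring the finite-population aspect of the exam problem). Since E(M/n) = Pᵀ(a, b, c)ᵀ, an unbiased estimate is obtained by inverting Pᵀ, which requires P to be invertible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical misclassification matrix: P[X, Y] = P(respond Y | true type X).
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
true_props = np.array([0.5, 0.3, 0.2])  # a, b, c (assumed for the check)

n = 50_000
types = rng.choice(3, size=n, p=true_props)      # sampled true types
cum = P[types].cumsum(axis=1)                    # per-unit response CDF
responses = (rng.random(n)[:, None] > cum).sum(axis=1)
M = np.bincount(responses, minlength=3) / n      # observed response frequencies

# E(M) = P^T (a, b, c)^T, so invert P^T to recover the proportions.
est = np.linalg.solve(P.T, M)
print(est)  # close to (0.5, 0.3, 0.2)
```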

2. (25) Suppose {p(x, θ) : θ ∈ Θ ⊂ R^k} is a (regular) family of distributions. Define the vector-valued score function s as the column vector

$$s(x, \theta) = \frac{\partial}{\partial\theta} \log p(x, \theta) = \mathrm{grad}(\log p(x, \theta))$$

and the Fisher information matrix as I(θ) = var(s). Remark: if p(x, θ) = 0, define log(p(x, θ)) = 0.

a. Let t(x) be an unbiased estimator of θ, i.e.

$$E_\theta(t(x)) = \theta.$$

Prove that

$$E(s) = 0 \quad\text{and}\quad E(s\, t^T) = I.$$

Deduce that cov(s, t) = I. Remark: make liberal assumptions about interchanging integration and differentiation.

b. Let a, c be two arbitrary k-dimensional vectors. Prove that

$$\mathrm{corr}(a^T t, c^T s) = \frac{a^T c}{\sqrt{a^T\, \mathrm{var}(t)\, a \cdot c^T I(\theta)\, c}}.$$

The squared correlation coefficient is always less than or equal to 1. Maximize the expression for the correlation coefficient over c and deduce the Rao-Cramér inequality.
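The identities in Problem 2 can be sanity-checked by Monte Carlo for a one-parameter family. The sketch below uses the Poisson(θ) family, where the sample score is s = Σᵢ(xᵢ/θ − 1), the sample Fisher information is n/θ, and the sample mean is unbiased for θ and attains the Rao-Cramér bound; the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 4.0, 50, 20_000

x = rng.poisson(theta, size=(reps, n))
t = x.mean(axis=1)                       # unbiased estimator of theta
score = (x / theta - 1.0).sum(axis=1)    # sample score: sum_i (x_i/theta - 1)

print(score.mean())                      # ~ 0   (E s = 0)
print(np.cov(score, t)[0, 1])            # ~ 1   (cov(s, t) = 1, the scalar identity)
print(t.var() * n / theta)               # ~ 1   (var(t) attains the bound 1/I(theta))
```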

3. (25) Assume the data pairs (x_1, y_1), ..., (x_n, y_n) are an iid sample from the bivariate normal distribution with parameters

$$\mu = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix} \quad\text{and}\quad \Sigma = \begin{pmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{21} & \sigma_{22} \end{pmatrix}.$$

Assume the matrix Σ is invertible. We would like to test the hypothesis

H_0: Σ has eigenvalues λ and 2λ for some λ > 0

versus

H_1: for the eigenvalues λ and µ of Σ we have λ/µ ∉ {2, 1/2}.

a. Find the maximum likelihood estimators of the parameters in the unrestricted case.

b. Show that every symmetric matrix with eigenvalues λ and 2λ for λ > 0 is of the form

$$\begin{pmatrix} \lambda(1+a^2) & \lambda ab \\ \lambda ab & \lambda(1+b^2) \end{pmatrix} = \begin{pmatrix} a & b \\ b & -a \end{pmatrix} \begin{pmatrix} 2\lambda & 0 \\ 0 & \lambda \end{pmatrix} \begin{pmatrix} a & b \\ b & -a \end{pmatrix}$$

for a, b such that a² + b² = 1.

c. Find explicitly the likelihood ratio test for the above testing problem. Hint: under the assumption α, β > 0 and αγ − β² > 0, the minimum of the function f(x, y) = αx² + 2βxy + γy² subject to the side condition x² + y² = 1 is

$$\frac{\alpha + \gamma - \sqrt{(\alpha-\gamma)^2 + 4\beta^2}}{2}.$$

d. What can you say about the approximate distribution of the test statistic?
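The parametrization claimed in part (b) of Problem 3 can be verified numerically. This sketch picks a, b on the unit circle, builds the matrix with entries λ(1+a²), λab, λ(1+b²), and checks that its eigenvalues are λ and 2λ and that it equals Q diag(2λ, λ) Qᵀ for the symmetric orthogonal matrix Q with rows (a, b) and (b, −a); the value of λ is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 1.7
phi = rng.uniform(0, 2 * np.pi)
a, b = np.cos(phi), np.sin(phi)          # a^2 + b^2 = 1

S = lam * np.array([[1 + a**2, a * b],
                    [a * b, 1 + b**2]])
Q = np.array([[a, b], [b, -a]])          # orthogonal and symmetric: Q = Q^T = Q^{-1}
D = np.diag([2 * lam, lam])

print(np.linalg.eigvalsh(S))             # ascending: lam, 2*lam
print(np.allclose(S, Q @ D @ Q))         # True: S = Q diag(2*lam, lam) Q^T
```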

4. (25) Assume the correct regression model is Y = Xβ + ε with E(ε) = 0 and var(ε) = σ²I. Assume the matrix X, of dimensions n × m with m < n, has full rank. Denote by β̂ the ordinary least squares estimator of β. Assume as known that for the inverse of the partitioned matrix

$$\begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}$$

the upper left corner is

$$\Sigma^{11} = \Sigma_{11}^{-1} + \Sigma_{11}^{-1}\Sigma_{12}\left(\Sigma_{22} - \Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12}\right)^{-1}\Sigma_{21}\Sigma_{11}^{-1}$$

and the lower right corner is

$$\Sigma^{22} = \left(\Sigma_{22} - \Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12}\right)^{-1}.$$

a. Assume that we forget some independent variables and fit the regression model Y = X₁β₁ + ε*, where X = [X₁, X₂], E(ε*) = 0 and var(ε*) = σ²I. Write

$$\beta = \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}.$$

Assuming the wrong model, we estimate β₁ by

$$\tilde\beta_1 = \left(X_1^T X_1\right)^{-1} X_1^T Y.$$

Let β̂₁ be the best linear unbiased estimator of β₁ in the correct model. Show that

$$\mathrm{var}(\hat\beta_1) - \mathrm{var}(\tilde\beta_1) = \sigma^2 A B^{-1} A^T,$$

where A = (X₁ᵀX₁)⁻¹X₁ᵀX₂ and B = X₂ᵀX₂ − X₂ᵀX₁A.

b. Show that the matrix AB⁻¹Aᵀ is positive semi-definite. This means that β̃₁ has smaller variance than β̂₁. Why is this not in contradiction with the Gauss-Markov theorem? Explain your answer. Hint: all principal submatrices of a positive semi-definite matrix are positive semi-definite.
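The variance identity in Problem 4 can be checked on simulated design matrices. The sketch below is a numerical check under arbitrary, assumed dimensions: the variance of the BLUE of β₁ in the correct model is σ² times the upper-left block of (XᵀX)⁻¹, the variance of the reduced-model estimator is σ²(X₁ᵀX₁)⁻¹, and their difference should equal σ²AB⁻¹Aᵀ and be positive semi-definite.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m1, m2 = 40, 3, 2                     # assumed dimensions for the check
X1 = rng.normal(size=(n, m1))
X2 = rng.normal(size=(n, m2))
X = np.hstack([X1, X2])
sigma2 = 1.0

# var of the BLUE in the correct model: sigma^2 * upper-left block of (X^T X)^{-1}.
V_full = sigma2 * np.linalg.inv(X.T @ X)[:m1, :m1]
# var of the estimator from the reduced model Y = X1 beta1 + eps.
V_red = sigma2 * np.linalg.inv(X1.T @ X1)

A = np.linalg.inv(X1.T @ X1) @ X1.T @ X2
B = X2.T @ X2 - X2.T @ X1 @ A            # Schur complement of X1^T X1 in X^T X

diff = V_full - V_red
print(np.allclose(diff, sigma2 * A @ np.linalg.inv(B) @ A.T))  # True
print(np.all(np.linalg.eigvalsh(diff) >= -1e-8))               # PSD: True
```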