6. MAXIMUM LIKELIHOOD ESTIMATION

[1] Maximum Likelihood Estimator

(1) Cases in which θ (unknown parameter) is scalar

Notational Clarification:
From now on, we denote the true value of θ as θo. Then, view θ as a variable.

Definition: (Likelihood function)
Let {x1, ..., xT} be a sample from a population. It does not have to be a random sample. xt is a scalar.
Let f(x1, x2, ..., xT, θo) be the joint density function of x1, ..., xT. The functional form of f is known, but not θo.
Then, LT(θ) ≡ f(x1, ..., xT, θ) is called the likelihood function. LT(θ) is a function of θ given the data points x1, ..., xT.

Definition: (log-likelihood function)
lT(θ) = ln[f(x1, ..., xT, θ)].

Example: Let {x1, ..., xT} be a random sample from a population distributed with f(x, θo). Then:

   f(x1, ..., xT, θo) = ∏t=1,...,T f(xt, θo);
   LT(θ) = f(x1, ..., xT, θ) = ∏t=1,...,T f(xt, θ);
   lT(θ) = ln( ∏t=1,...,T f(xt, θ) ) = Σt ln f(xt, θ).

Definition: (Maximum Likelihood Estimator (MLE))
The MLE θ̂MLE maximizes lT(θ) given the data points x1, ..., xT.

Example: {x1, ..., xT} is a random sample from a population following a Poisson distribution [i.e., f(x, θ) = e^(-θ) θ^x / x!, suppressing the subscript o from θ]. Note that E(x) = var(x) = θo for the Poisson distribution.

   lT(θ) = Σt ln[f(xt, θ)] = -Tθ + ln(θ) Σt xt - Σt ln(xt!).

FOC of maximization: ∂lT(θ)/∂θ = -T + (1/θ) Σt xt = 0.

Solving this, θ̂MLE = (1/T) Σt xt = x̄.
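A quick numerical check of this closed-form result (an added illustration, not part of the original notes; it assumes numpy and scipy are available and uses a hypothetical true value θo = 2.5):

    # Minimal numerical check of the Poisson MLE (illustrative sketch).
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.special import gammaln

    rng = np.random.default_rng(0)
    theta_o = 2.5                        # hypothetical true parameter
    x = rng.poisson(theta_o, size=500)   # simulated sample of size T = 500
    T = x.size

    def neg_loglik(theta):
        # -l_T(theta) = -( -T*theta + ln(theta)*sum(x) - sum(ln(x!)) )
        return -(-T * theta + np.log(theta) * x.sum() - gammaln(x + 1).sum())

    res = minimize_scalar(neg_loglik, bounds=(1e-6, 20), method="bounded")
    print("analytical MLE (sample mean):", x.mean())
    print("numerical MLE:               ", res.x)   # should agree closely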

(2) Extension to the Cases with Multiple Parameters

Definition:
θ = [θ1, θ2, ..., θp]′.
LT(θ) = f(x1, ..., xT, θ) = f(x1, ..., xT, θ1, ..., θp).
lT(θ) = ln[f(x1, ..., xT, θ)] = ln[f(x1, ..., xT, θ1, ..., θp)].
xt could be a vector.

If {x1, ..., xT} is a random sample from a population with f(x, θo),

   lT(θ) = ln( ∏t=1,...,T f(xt, θ) ) = Σt ln f(xt, θ).

Definition: (MLE)
The MLE θ̂MLE maximizes lT(θ) given the data (vector) points x1, ..., xT. That is, θ̂MLE solves

   ∂lT(θ)/∂θ = [ ∂lT(θ)/∂θ1 ; ∂lT(θ)/∂θ2 ; ... ; ∂lT(θ)/∂θp ] = 0 (p×1).

Example: Let {x1, ..., xT} be a random sample from N(μ, σ²) [suppressing the subscript o]. Since {x1, ..., xT} is a random sample, E(xt) = μo and var(xt) = σo². Let θ = (μ, v)′, where v = σ².

   f(xt, θ) = (1/√(2πv)) exp( -(xt - μ)²/(2v) )
            = (2π)^(-1/2) v^(-1/2) exp( -(xt - μ)²/(2v) );

   ln[f(xt, θ)] = -(1/2)ln(2π) - (1/2)ln(v) - (xt - μ)²/(2v);

   lT(θ) = -(T/2)ln(2π) - (T/2)ln(v) - Σt(xt - μ)²/(2v).

The MLE solves the FOC:

   (1) ∂lT(θ)/∂μ = -(1/(2v)) Σt 2(xt - μ)(-1) = Σt(xt - μ)/v = 0;
   (2) ∂lT(θ)/∂v = -T/(2v) + Σt(xt - μ)²/(2v²) = 0.

From (1):

   (3) Σt(xt - μ) = 0  ⇒  Σt xt - Tμ = 0  ⇒  μ̂MLE = (1/T) Σt xt = x̄.

Substituting (3) into (2):

   (4) -Tv + Σt(xt - μ̂MLE)² = 0.

Thus, v̂MLE = (1/T) Σt(xt - x̄)², and

   θ̂MLE = [ μ̂MLE ; v̂MLE ] = [ x̄ ; (1/T) Σt(xt - x̄)² ].
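The closed-form normal MLE can likewise be verified against a direct numerical maximizer; the sketch below is an added illustration only, with simulated data and arbitrary parameter choices:

    # Illustrative check that the closed-form normal MLE matches a numerical maximizer.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    x = rng.normal(loc=1.0, scale=2.0, size=400)   # simulated data: mu_o = 1, sigma_o^2 = 4
    T = x.size

    def neg_loglik(theta):
        mu, v = theta
        if v <= 0:
            return np.inf                          # keep the variance in the valid region
        return 0.5*T*np.log(2*np.pi) + 0.5*T*np.log(v) + ((x - mu)**2).sum()/(2*v)

    res = minimize(neg_loglik, x0=[0.0, 1.0], method="Nelder-Mead")
    mu_hat, v_hat = x.mean(), ((x - x.mean())**2).mean()   # closed-form MLE
    print("closed form :", mu_hat, v_hat)
    print("numerical   :", res.x)                          # should agree closely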

[2] Large Sample Properties of the ML Estimator

Definition:
1) Let g(θ) = g(θ1, ..., θp) be a scalar function of θ, and let gj = ∂g/∂θj. Then,

      ∂g/∂θ = [ g1 ; g2 ; ... ; gp ]   (p×1).

2) Let w(θ) = (w1(θ), ..., wm(θ))′ be an m×1 vector of functions of θ, and let wij = ∂wi(θ)/∂θj. Then,

      ∂w(θ)/∂θ′ = [ w11  w12  ...  w1p
                    w21  w22  ...  w2p
                     :    :         :
                    wm1  wm2  ...  wmp ]   (m×p).

3) Let g(θ) be a scalar function of θ, and let gij = ∂²g(θ)/∂θi∂θj. Then,

      ∂²g(θ)/∂θ∂θ′ = [ g11  g12  ...  g1p
                        g21  g22  ...  g2p
                         :    :         :
                        gp1  gp2  ...  gpp ]   (p×p).

   This is called the Hessian matrix of g(θ).

Example 1: Let g(θ) = θ1² + θ2² + θ1θ2. Find ∂g(θ)/∂θ.

   ∂g(θ)/∂θ = [ 2θ1 + θ2 ; 2θ2 + θ1 ].

Example 2: Let w(θ) = [ θ1² + θ2 ; θ1 + θ2² ]. Then,

   ∂w(θ)/∂θ′ = [ 2θ1   1
                  1    2θ2 ].

Example 3: Let g(θ) = θ1² + θ2² + θ1θ2. Find the Hessian matrix of g(θ).

   ∂²g(θ)/∂θ∂θ′ = [ 2  1
                     1  2 ].

Some useful results:
1) c: 1×p, θ: p×1 (cθ is a scalar): ∂(cθ)/∂θ = c′; ∂(cθ)/∂θ′ = c.
2) R: m×p, θ: p×1 (Rθ is m×1): ∂(Rθ)/∂θ′ = R.
3) A: p×p symmetric, θ: p×1 (θ′Aθ is a scalar): ∂(θ′Aθ)/∂θ = 2Aθ; ∂(θ′Aθ)/∂θ′ = 2θ′A; ∂²(θ′Aθ)/∂θ∂θ′ = 2A.
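These matrix-derivative rules are easy to spot-check with finite differences. The following sketch is an added illustration (the evaluation point and the matrix A are arbitrary); it checks the gradient of g(θ) = θ1² + θ2² + θ1θ2 and the rule ∂(θ′Aθ)/∂θ = 2Aθ:

    # Finite-difference check of the matrix-calculus rules above (illustrative sketch).
    import numpy as np

    def num_grad(f, theta, h=1e-6):
        theta = np.asarray(theta, dtype=float)
        g = np.zeros_like(theta)
        for j in range(theta.size):
            e = np.zeros_like(theta); e[j] = h
            g[j] = (f(theta + e) - f(theta - e)) / (2 * h)   # central difference
        return g

    g = lambda th: th[0]**2 + th[1]**2 + th[0]*th[1]
    theta = np.array([0.7, -1.3])                  # arbitrary evaluation point
    print(num_grad(g, theta))                      # ~ [2*0.7 + (-1.3), 2*(-1.3) + 0.7]

    A = np.array([[2.0, 0.5], [0.5, 3.0]])         # arbitrary symmetric A
    q = lambda th: th @ A @ th
    print(num_grad(q, theta), 2 * A @ theta)       # the two vectors should match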

Definition: (Hessian matrix of the log-likelihood function)

   HT(θ) ≡ ∂²lT(θ)/∂θ∂θ′ = [ ∂²lT(θ)/∂θi∂θj ]   (p×p).

Theorem: Let θ̂ be the MLE. Then, under suitable regularity conditions, θ̂ is consistent, and

   √T (θ̂ - θo) →d N( 0 (p×1), [ -plim (1/T) HT(θo) ]⁻¹ ).

Further, θ̂ is asymptotically efficient.

Implication: θ̂ ≈ N( θo, [-HT(θo)]⁻¹ ) ≈ N( θo, [-HT(θ̂)]⁻¹ ).
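A small Monte Carlo experiment illustrates the theorem for the Poisson example, where -plim(1/T)HT(θo) = 1/θo, so √T(θ̂ - θo) should be approximately N(0, θo). The sketch below is an added illustration; the replication count, sample size, and θo are arbitrary choices:

    # Monte Carlo illustration of sqrt(T)(theta_hat - theta_o) ~ N(0, theta_o) for the Poisson MLE.
    import numpy as np

    rng = np.random.default_rng(2)
    theta_o, T, reps = 2.5, 200, 5000
    z = np.empty(reps)
    for r in range(reps):
        x = rng.poisson(theta_o, size=T)
        theta_hat = x.mean()                               # MLE from the earlier example
        z[r] = np.sqrt(T) * (theta_hat - theta_o) / np.sqrt(theta_o)
    print("mean ~ 0:", z.mean(), " std ~ 1:", z.std())     # close to N(0,1) moments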

Example: {x1, ..., xT} is a random sample from N(μo, σo²). Let θ = [μ, v]′ and v = σ².

   lT(θ) = -(T/2)ln(2π) - (T/2)ln(v) - (1/(2v)) Σt(xt - μ)².

The first derivatives:

   ∂lT(θ)/∂μ = Σt(xt - μ)/v;    ∂lT(θ)/∂v = -T/(2v) + (1/(2v²)) Σt(xt - μ)².

The second derivatives:

   ∂²lT(θ)/∂μ∂μ = (1/v) Σt(-1) = -T/v;
   ∂²lT(θ)/∂μ∂v = -Σt(xt - μ)/v²;
   ∂²lT(θ)/∂v∂v = T/(2v²) - (1/v³) Σt(xt - μ)².

Therefore,

   HT(θ) = [ -T/v              -Σt(xt - μ)/v²
             -Σt(xt - μ)/v²    T/(2v²) - Σt(xt - μ)²/v³ ].

Hence,

   -HT(θ̂ML) = [ T/v̂ML     0
                 0         T/(2v̂ML²) ],

and therefore

   θ̂ = [ μ̂ML ; v̂ML ]  ≈  N( [ μo ; vo ],  [ v̂ML/T    0
                                              0         2v̂ML²/T ] ).
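In applications, the implied standard errors come from inverting -HT at the estimates. The sketch below (an added illustration with simulated data) builds the analytic Hessian at (μ̂, v̂) and confirms that [-HT(θ̂)]⁻¹ matches diag(v̂/T, 2v̂²/T):

    # Invert the analytic Hessian at the normal MLE and compare with the asymptotic covariance.
    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.normal(1.0, 2.0, size=300)              # simulated data
    T = x.size
    mu_hat, v_hat = x.mean(), ((x - x.mean())**2).mean()

    H = np.array([
        [-T / v_hat,                        -((x - mu_hat).sum()) / v_hat**2],
        [-((x - mu_hat).sum()) / v_hat**2,   T/(2*v_hat**2) - ((x - mu_hat)**2).sum() / v_hat**3],
    ])
    print(np.linalg.inv(-H))                        # estimated Cov(theta_hat)
    print(np.diag([v_hat / T, 2 * v_hat**2 / T]))   # should match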

[3] Testing Hypotheses Based on MLE

General form of hypotheses:
Let w(θ) = [w1(θ), w2(θ), ..., wm(θ)]′, where wj(θ) = wj(θ1, θ2, ..., θp) is a function of θ1, ..., θp.
Ho: The true θ (θo) satisfies the m restrictions, w(θ) = 0 (m×1), with m ≤ p.

Definition: (Restricted MLE)
Let θ̄ be the restricted ML estimator, which maximizes lT(θ) s.t. w(θ) = 0.

Wald Test: With W(θ) ≡ ∂w(θ)/∂θ′,

   WT = w(θ̂)′ [ W(θ̂) Cov(θ̂) W(θ̂)′ ]⁻¹ w(θ̂).

If θ̂ is an (unrestricted) ML estimator,

   WT = w(θ̂)′ [ W(θ̂) {-HT(θ̂)}⁻¹ W(θ̂)′ ]⁻¹ w(θ̂).

Note: WT can be computed with any consistent estimator θ̂ and its estimated Cov(θ̂).

Likelihood Ratio (LR) Test:

   LRT = 2 [ lT(θ̂) - lT(θ̄) ].

Lagrangean Multiplier (LM) Test: Define sT(θ) = ∂lT(θ)/∂θ. Then,

   LMT = sT(θ̄)′ [-HT(θ̄)]⁻¹ sT(θ̄).
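The three statistics can be coded once as functions of the log-likelihood, score, Hessian, and restriction, and then specialized to each example below. The layout that follows is one possible sketch, not part of the notes; the helper names (loglik, score, H, w, W) and their signatures are assumptions:

    # Generic Wald / LR / LM statistics given user-supplied callables (illustrative sketch;
    # the helper names and signatures are hypothetical, not from the notes).
    import numpy as np

    def wald(theta_hat, w, W, H):
        # W_T = w' [ W (-H)^{-1} W' ]^{-1} w, all evaluated at the unrestricted MLE
        wv, Wm, Hm = w(theta_hat), W(theta_hat), H(theta_hat)
        mid = Wm @ np.linalg.inv(-Hm) @ Wm.T
        return float(wv @ np.linalg.solve(mid, wv))

    def lr(theta_hat, theta_bar, loglik):
        # LR_T = 2 [ l_T(theta_hat) - l_T(theta_bar) ]
        return 2.0 * (loglik(theta_hat) - loglik(theta_bar))

    def lm(theta_bar, score, H):
        # LM_T = s' [-H]^{-1} s, all evaluated at the restricted MLE
        s, Hm = score(theta_bar), H(theta_bar)
        return float(s @ np.linalg.solve(-Hm, s))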

Theorem: Under Ho: w(θ) = 0,

   WT, LRT, LMT →d χ²(m).

Implication: Given a significance level α, find a critical value c from the χ²(m) table. Usually, α = 0.05 or α = 0.01. If WT > c, reject Ho. Otherwise, do not reject Ho.

Comments:
1) Wald needs only θ̂; LR needs both θ̂ and θ̄; and LM needs θ̄ only.
2) In general, WT ≥ LRT ≥ LMT.
3) WT is not invariant to how the restrictions are written. That is, WT for Ho: θ1 = θ2 may not be equal to WT for Ho: θ1/θ2 = 1.
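For reference, the critical value can be obtained from the χ² quantile function; a minimal sketch (assuming scipy; the statistic value shown is hypothetical):

    # Decision rule sketch: compare a test statistic with the chi-square critical value.
    from scipy.stats import chi2

    alpha, m = 0.05, 1                 # significance level and number of restrictions
    c = chi2.ppf(1 - alpha, df=m)      # critical value, about 3.84 for m = 1
    stat = 5.2                         # hypothetical value of W_T, LR_T, or LM_T
    print("reject Ho" if stat > c else "do not reject Ho")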

Example: (1) {x1, ..., xT}: a random sample from N(μo, vo) with vo known. So θ = μ.

Ho: μ = 0, so w(μ) = μ.

   lT(μ) = -(T/2)ln(2π) - (T/2)ln(vo) - {1/(2vo)} Σt(xt - μ)²;
   sT(μ) = (1/vo) Σt(xt - μ);
   HT(μ) = -T/vo.

[Wald Test]
Unrestricted MLE. FOC: ∂lT(μ)/∂μ = (1/vo) Σt(xt - μ) = 0  ⇒  μ̂ = x̄.
W(μ) = 1, so W(μ̂) = 1;  -HT(μ̂) = T/vo.

[LR Test]
Restricted MLE: μ̄ = 0.

   lT(μ̂) = -(T/2)ln(2π) - (T/2)ln(vo) - {1/(2vo)} Σt(xt - x̄)²;
   lT(μ̄) = -(T/2)ln(2π) - (T/2)ln(vo) - {1/(2vo)} Σt xt².

[LM Test]

   sT(μ̄) = (1/vo) Σt xt = (T/vo) x̄;  -HT(μ̄) = T/vo.

With this information, one can show that

   WT = LRT = LMT = T x̄² / vo.
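A numerical check of this equality (an added illustration; simulated data with a hypothetical known vo):

    # Numerical check that W_T = LR_T = LM_T = T * xbar^2 / v_o when v_o is known (sketch).
    import numpy as np

    rng = np.random.default_rng(4)
    v_o, T = 4.0, 250
    x = rng.normal(0.3, np.sqrt(v_o), size=T)      # data generated away from Ho: mu = 0
    xbar = x.mean()

    def ll(mu):
        return -T/2*np.log(2*np.pi) - T/2*np.log(v_o) - ((x - mu)**2).sum()/(2*v_o)

    W  = xbar * (T / v_o) * xbar                   # w' [W (-H)^{-1} W']^{-1} w with W = 1
    LR = 2 * (ll(xbar) - ll(0.0))
    LM = (x.sum()/v_o) * (v_o/T) * (x.sum()/v_o)   # s(0)' [-H]^{-1} s(0)
    print(W, LR, LM, T * xbar**2 / v_o)            # all four should coincide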

(2) Both μ and v unknown: θ = (μ, v)′.

Ho: μ = 0, so w(θ) = μ, and W(θ) = ∂w(θ)/∂θ′ = [ ∂μ/∂μ, ∂μ/∂v ] = [1, 0].

   lT(θ) = -(T/2)ln(2π) - (T/2)ln(v) - {1/(2v)} Σt(xt - μ)²;

   sT(θ) = [ (1/v) Σt(xt - μ) ; -T/(2v) + (1/(2v²)) Σt(xt - μ)² ];

   HT(θ) = [ -T/v              -Σt(xt - μ)/v²
             -Σt(xt - μ)/v²    T/(2v²) - Σt(xt - μ)²/v³ ].

Unrestricted MLE: μ̂ = x̄ and v̂ = (1/T) Σt(xt - x̄)².

Restricted MLE: μ̄ = 0, but we still need to compute v̄.

   lT(μ̄, v) = -(T/2)ln(2π) - (T/2)ln(v) - {1/(2v)} Σt(xt - μ̄)²;
   lT(0, v) = -(T/2)ln(2π) - (T/2)ln(v) - {1/(2v)} Σt xt².

   FOC: ∂lT(0, v)/∂v = -T/(2v) + (1/(2v²)) Σt xt² = 0  ⇒  v̄ = (1/T) Σt xt².

[Wald Test]

   w(θ̂) = μ̂ = x̄;  W(θ̂) = (1, 0);  -HT(θ̂) = [ T/v̂     0
                                                  0       T/(2v̂²) ].

   WT = w(θ̂)′ [ W(θ̂) {-HT(θ̂)}⁻¹ W(θ̂)′ ]⁻¹ w(θ̂) = T x̄² / v̂.

[LR Test]

   lT(θ̂) = -(T/2)ln(2π) - (T/2)ln(v̂) - {1/(2v̂)} Σt(xt - x̄)²;
   lT(θ̄) = -(T/2)ln(2π) - (T/2)ln(v̄) - {1/(2v̄)} Σt xt².

[LM Test]

   sT(θ̄) = [ (1/v̄) Σt xt ; -T/(2v̄) + (1/(2v̄²)) Σt xt² ] = [ T x̄ / v̄ ; 0 ];

   HT(θ̄) = [ -T/v̄           -Σt xt / v̄²
              -Σt xt / v̄²    T/(2v̄²) - Σt xt² / v̄³ ].

   LMT = sT(θ̄)′ [-HT(θ̄)]⁻¹ sT(θ̄) = T x̄² / v̄.
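The closed forms for this example can be compared directly. The sketch below (an added illustration with simulated data) evaluates WT = Tx̄²/v̂, LRT = 2[lT(θ̂) - lT(θ̄)], and LMT = Tx̄²/v̄, and checks the ordering WT ≥ LRT ≥ LMT from Comment 2:

    # Compare W, LR, LM for Ho: mu = 0 with v unknown, using the closed forms above (sketch).
    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.normal(0.4, 2.0, size=200)       # simulated data; Ho is false here
    T, xbar = x.size, x.mean()
    v_hat = ((x - xbar)**2).mean()           # unrestricted MLE of v
    v_bar = (x**2).mean()                    # restricted MLE of v (mu = 0)

    def ll(mu, v):
        return -T/2*np.log(2*np.pi) - T/2*np.log(v) - ((x - mu)**2).sum()/(2*v)

    W  = T * xbar**2 / v_hat
    LR = 2 * (ll(xbar, v_hat) - ll(0.0, v_bar))
    LM = T * xbar**2 / v_bar
    print(W, LR, LM)                         # expect W >= LR >= LM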

[4] Efficiency of the OLS Estimator under Ideal Conditions

Assume that yt is iid N(xt′β, v) conditional on xt:

   f(yt | xt, β, v) = (1/√(2πv)) exp( -(yt - xt′β)²/(2v) ).

Therefore, we have the following log-likelihood function of y:

   lT(β, v) = Σt ln f(yt | β, v, xt)
            = -(T/2)ln(2π) - (T/2)ln(v) - (1/(2v)) Σt(yt - xt′β)²
            = -(T/2)ln(2π) - (T/2)ln(v) - (1/(2v)) (y - Xβ)′(y - Xβ).

FOC:

   (i)  ∂lT(β, v)/∂β = -(1/(2v)) [ -2X′y + 2X′Xβ ] = 0 (k×1);
   (ii) ∂lT(β, v)/∂v = -T/(2v) + (1/(2v²)) (y - Xβ)′(y - Xβ) = 0.

From (i), X′y - X′Xβ = 0 (k×1), so β̂MLE = (X′X)⁻¹X′y = β̂ (the OLS estimator).
From (ii), v̂MLE = SSE/T, where SSE = (y - Xβ̂)′(y - Xβ̂).

Thus, we can conclude that β̂ and s² = SSE/(T - k) are asymptotically efficient.
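A short numerical confirmation (an added illustration; simulated X and y with arbitrary coefficients) that maximizing lT(β, v) reproduces the OLS coefficients and SSE/T:

    # Check numerically that the normal-likelihood MLE of beta equals OLS (illustrative sketch).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(6)
    T, k = 200, 3
    X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
    beta_o = np.array([1.0, -0.5, 2.0])
    y = X @ beta_o + rng.normal(scale=1.5, size=T)

    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
    sse = ((y - X @ beta_ols)**2).sum()

    def neg_loglik(params):
        beta, v = params[:k], params[k]
        if v <= 0:
            return np.inf
        e = y - X @ beta
        return T/2*np.log(2*np.pi) + T/2*np.log(v) + (e @ e)/(2*v)

    res = minimize(neg_loglik, x0=np.r_[np.zeros(k), 1.0], method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 20000})
    print("OLS:", beta_ols, sse / T)
    print("MLE:", res.x[:k], res.x[k])       # should agree closely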
