Graduate Econometrics I: Unbiased Estimation
1 Graduate Econometrics I: Unbiased Estimation
Yves Dominicy
Université libre de Bruxelles, Solvay Brussels School of Economics and Management, ECARES
2 Outline
1. Elements of Decision Theory
2. The Cramér-Rao Bound
4 Decision Theory
An estimator of a function g(θ) of the parameters is a mapping δ(Y) from the sample space $\mathcal{Y}$ to the range of g(θ). The comparison of different estimators is based on risk functions associated with loss functions. When estimating a function of the parameter g(θ), one typically uses (but not exclusively) the quadratic loss function
$L(\delta(Y), \theta) = (\delta(Y) - g(\theta))^2$, for $g(\theta) \in \mathbb{R}$,
or
$L(\delta(Y), \theta) = (\delta(Y) - g(\theta))'(\delta(Y) - g(\theta))$, for $g(\theta) \in \mathbb{R}^q$.
5 Decision Theory
Definition. An estimator δ*(Y) weakly dominates another estimator δ(Y) if and only if, for all θ ∈ Θ:
$R(\delta^*(Y), \theta) = E_\theta\!\left[(\delta^*(Y) - g(\theta))(\delta^*(Y) - g(\theta))'\right] \leq R(\delta(Y), \theta) = E_\theta\!\left[(\delta(Y) - g(\theta))(\delta(Y) - g(\theta))'\right]$
6 Decision Theory
The risk function may be decomposed into two parts:
Property. $R(\delta, \theta) = \underbrace{V_\theta[\delta(Y)]}_{\text{variance}} + \underbrace{(E_\theta[\delta(Y)] - g(\theta))(E_\theta[\delta(Y)] - g(\theta))'}_{\text{squared bias}}$
The best estimator is the one with the smallest risk: no bias and minimum variance.
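The decomposition is easy to verify by simulation. Below is a minimal Monte Carlo sketch (not from the slides; it assumes normal data and uses the biased 1/n variance estimator as an illustrative δ) checking that the quadratic risk equals variance plus squared bias:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, sigma2 = 10, 200_000, 4.0

# Illustrative delta(Y): the (biased) 1/n sample variance of a normal sample
samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
delta = samples.var(axis=1)  # ddof=0 -> divides by n

mse      = np.mean((delta - sigma2) ** 2)  # risk under quadratic loss
variance = delta.var()
bias2    = (delta.mean() - sigma2) ** 2

# Decomposition: R(delta, theta) = V_theta[delta(Y)] + squared bias
assert abs(mse - (variance + bias2)) < 1e-6
```

The identity holds exactly in the simulation (up to floating-point error), since it is an algebraic property of the empirical moments, not an asymptotic one.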
7 Decision Theory
Definition. An estimator δ(Y) is an unbiased estimator of g(θ) if and only if $E_\theta[\delta(Y)] = g(\theta)$ for all θ ∈ Θ.
Thus, if δ(Y) is an unbiased estimator, $R(\delta, \theta) = V_\theta[\delta(Y)]$.
From now on, we focus on unbiased estimators.
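As a concrete sketch (assuming an N(μ, 1) sample and the sample mean as δ(Y), both illustrative choices), the unbiasedness condition and the resulting equality of risk and variance can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, n, reps = 2.5, 20, 200_000

# delta(Y) = sample mean of an assumed N(mu, 1) sample
xbar = rng.normal(mu, 1.0, size=(reps, n)).mean(axis=1)

# Unbiasedness: E_theta[delta(Y)] = g(theta) = mu
assert abs(xbar.mean() - mu) < 0.01

# For an unbiased estimator, the quadratic risk equals the variance
risk = np.mean((xbar - mu) ** 2)
assert abs(risk - xbar.var()) < 1e-4
```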
8 Decision Theory
Property. If δ₁(Y) and δ₂(Y) are two unbiased estimators, then δ₁(Y) dominates δ₂(Y) if and only if $V_\theta[\delta_2(Y)] \geq V_\theta[\delta_1(Y)]$ for all θ ∈ Θ, i.e. if and only if $V_\theta[\delta_2(Y)] - V_\theta[\delta_1(Y)]$ is a positive semi-definite matrix for every possible value of the parameter.
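The positive semi-definiteness criterion can be illustrated with two hypothetical unbiased estimators of a bivariate mean (choices made up for this sketch): the full-sample mean, and the mean of only the first half of the sample. The difference of their variance-covariance matrices should be positive semi-definite:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 40, 50_000
mu = np.array([1.0, -2.0])

# Two unbiased estimators of a bivariate mean (illustrative choices)
x = rng.normal(mu, 1.0, size=(reps, n, 2))
delta1 = x.mean(axis=1)               # uses all n observations
delta2 = x[:, : n // 2].mean(axis=1)  # wastes half of the sample

# delta1 dominates delta2 iff V[delta2] - V[delta1] is positive semi-definite
diff = np.cov(delta2.T) - np.cov(delta1.T)
assert np.all(np.linalg.eigvalsh(diff) > 0)
```

Here the true difference is $\Sigma/n$ with $\Sigma = I_2$, so both eigenvalues of the estimated difference come out strictly positive.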
9 Decision Theory
The use of efficiency (i.e. variance) as a measure of the goodness of an estimator suffers from two drawbacks:
1. It can only be used to compare unbiased estimators, since the variance of an estimator is the mean of the squared deviations from the expected value of the estimator. This can be regarded as a measure of the scatter of the estimator about the parameter g(θ) it estimates only if $E_\theta[\delta(Y)] = g(\theta)$. But there are biased estimators that may be acceptable, for instance when the sampling distribution is skewed.
2. The variance is not the only measure of the scatter of the distribution that could be used. The mean absolute deviation of one estimator can be smaller than that of another estimator, and vice versa for the mean squared deviation.
In large samples, however, almost all estimators become normally distributed, so that neither of these two objections remains valid.
10 Decision Theory
Exercise: A simple random sample X₁, X₂, …, Xₙ is i.i.d. with unknown finite variance σ². Consider the sample variance
$s^2 = \frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2$,
where X̄ is the sample mean, and denote by μ the distribution mean. Is s² an unbiased estimator of σ²?
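A quick Monte Carlo sketch of the exercise (the normal data and parameter values are arbitrary choices): the standard result is $E[s^2] = \frac{n-1}{n}\sigma^2$, so the 1/n estimator is biased downward, while dividing by n − 1 removes the bias:

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps, sigma2 = 5, 400_000, 9.0

x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
s2 = x.var(axis=1, ddof=0)  # the 1/n estimator from the exercise

# E[s^2] = ((n - 1) / n) * sigma^2 < sigma^2: s^2 is biased downward
assert abs(s2.mean() - (n - 1) / n * sigma2) < 0.05

# Dividing by n - 1 instead (ddof=1) removes the bias
assert abs(x.var(axis=1, ddof=1).mean() - sigma2) < 0.05
```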
11 Outline
1. Elements of Decision Theory
2. The Cramér-Rao Bound
12 The Cramér-Rao bound, which is based on the variance of the score function, known as the Fisher information, gives a lower bound for the variance of an unbiased estimator. Among unbiased estimators, one important goal is to find an estimator that has as small a variance as possible. A more precise aim is to find an unbiased estimator with uniformly minimum variance. If the variance of θ̂ attains the minimum variance of the Cramér-Rao inequality, we say that θ̂ is a minimum variance unbiased estimator (MVUE) of θ. If θ̂₁ and θ̂₂ are both unbiased estimators of a parameter θ, we say that θ̂₁ is relatively more efficient if $\mathrm{var}(\hat\theta_1) < \mathrm{var}(\hat\theta_2)$. We use the ratio $\mathrm{var}(\hat\theta_1)/\mathrm{var}(\hat\theta_2)$ to measure relative efficiency.
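As an illustration of relative efficiency (using the sample mean and the sample median of a normal sample, a standard textbook comparison rather than one taken from these slides; by symmetry both are unbiased for the center):

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 101, 50_000

x = rng.normal(0.0, 1.0, size=(reps, n))
mean_hat   = x.mean(axis=1)
median_hat = np.median(x, axis=1)  # also unbiased for the center, by symmetry

# Relative efficiency: var(theta1_hat) / var(theta2_hat)
ratio = mean_hat.var() / median_hat.var()
assert ratio < 1.0  # the sample mean is relatively more efficient
```

Under normality the ratio approaches the asymptotic value $2/\pi \approx 0.64$, so the mean is the more efficient of the two.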
13 It provides a lower bound on the variance-covariance matrices of unbiased estimators of g(θ). This inequality holds under suitable regularity conditions:
Definition. A parametric model with likelihood l(y; θ), θ ∈ Θ, is said to be regular if:
i) Θ is an open subset of $\mathbb{R}^p$;
ii) l(y; θ) is differentiable with respect to θ;
iii) $\int_{\mathcal{Y}} l(y;\theta)\,dy$, as a function of θ, is differentiable and $\frac{\partial}{\partial\theta}\int_{\mathcal{Y}} l(y;\theta)\,dy = \int_{\mathcal{Y}} \frac{\partial l(y;\theta)}{\partial\theta}\,dy$;
iv) the Fisher information matrix $I(\theta) = E_\theta\!\left[\frac{\partial \log l(Y;\theta)}{\partial\theta}\,\frac{\partial \log l(Y;\theta)}{\partial\theta'}\right]$ exists and is nonsingular (i.e. positive definite) for all θ ∈ Θ.
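Condition (iv) defines I(θ) as the variance of the score. A numeric sketch with an assumed Bernoulli(p) sample (an illustrative model, chosen because the score and $I(p) = n/(p(1-p))$ are available in closed form):

```python
import numpy as np

rng = np.random.default_rng(5)
p, n, reps = 0.3, 50, 200_000

# Number of successes in each of `reps` Bernoulli(p) samples of size n
k = rng.binomial(n, p, size=reps)

# Score of a Bernoulli sample: d/dp log l(y; p) = k/p - (n - k)/(1 - p)
score = k / p - (n - k) / (1 - p)

# The score has mean zero, and its variance is I(p) = n / (p (1 - p))
fisher = n / (p * (1 - p))
assert abs(score.mean()) < 0.2
assert abs(score.var() / fisher - 1) < 0.02
```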
14 We also need some regularity conditions on unbiased estimators:
Definition. An unbiased estimator δ(Y) is said to be regular if:
i) it is square integrable: $E_\theta[\|\delta(Y)\|^2] < +\infty$;
ii) $\int_{\mathcal{Y}} \delta(y)\,l(y;\theta)\,dy$ is differentiable and $\frac{\partial}{\partial\theta}\int_{\mathcal{Y}} \delta(y)\,l(y;\theta)\,dy = \int_{\mathcal{Y}} \delta(y)\,\frac{\partial l(y;\theta)}{\partial\theta}\,dy$.
15 The best possible estimator is one whose distribution is concentrated as closely as possible about the parameter it estimates. In the limit it will be completely concentrated on the parameter (if it is unbiased). But we have seen that we can stabilize the distribution. What, then, is the smallest possible variance?
Theorem. Given a regular parametric model, every estimator δ(Y) that is regular and unbiased for $g(\theta) \in \mathbb{R}^q$ has a variance-covariance matrix satisfying
$V_\theta[\delta(Y)] \geq \frac{\partial g(\theta)}{\partial\theta'}\, I(\theta)^{-1}\, \frac{\partial g(\theta)'}{\partial\theta}$ for all θ ∈ Θ.
In particular, if g(θ) = θ, then $V_\theta[\delta(Y)] \geq I(\theta)^{-1}$. This is the Cramér-Rao lower bound.
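A numeric sketch of the theorem for the scalar case g(θ) = θ, assuming an N(μ, σ²) sample with σ² known (an illustrative model): here $I(\mu) = n/\sigma^2$, so the bound is $\sigma^2/n$, which the sample mean attains:

```python
import numpy as np

rng = np.random.default_rng(6)
mu, sigma2, n, reps = 1.0, 2.0, 25, 200_000

# Sample mean of an N(mu, sigma^2) sample with sigma^2 known
xbar = rng.normal(mu, np.sqrt(sigma2), size=(reps, n)).mean(axis=1)

# I(mu) = n / sigma^2, so the Cramer-Rao bound for mu is sigma^2 / n
cr_bound = sigma2 / n
assert abs(xbar.var() - cr_bound) < 0.02 * cr_bound  # the mean attains the bound
```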
16 PROOF. Differentiating the unbiasedness condition $E_\theta[\delta(Y)] = \int_{\mathcal{Y}} \delta(y)\,l(y;\theta)\,dy = g(\theta)$ with respect to θ:
$\frac{\partial g(\theta)}{\partial\theta'} = \int_{\mathcal{Y}} \delta(y)\,\frac{\partial l(y;\theta)}{\partial\theta'}\,dy = \int_{\mathcal{Y}} \delta(y)\,\frac{\partial \log l(y;\theta)}{\partial\theta'}\,l(y;\theta)\,dy = E_\theta\!\left[\delta(Y)\,\frac{\partial \log l(Y;\theta)}{\partial\theta'}\right] = \mathrm{Cov}_\theta\!\left(\delta(Y),\, \frac{\partial \log l(Y;\theta)}{\partial\theta}\right)$
The last equality follows because the score has zero mean.
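The zero-mean property of the score invoked in the last step follows from regularity condition (iii), since the likelihood integrates to one:

```latex
E_\theta\!\left[\frac{\partial \log l(Y;\theta)}{\partial \theta}\right]
= \int_{\mathcal{Y}} \frac{\partial \log l(y;\theta)}{\partial \theta}\, l(y;\theta)\, dy
= \int_{\mathcal{Y}} \frac{\partial l(y;\theta)}{\partial \theta}\, dy
= \frac{\partial}{\partial \theta} \int_{\mathcal{Y}} l(y;\theta)\, dy
= \frac{\partial}{\partial \theta}\, 1 = 0
```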
17 We need the (matrix) Schwarz inequality: $V(Y) - \mathrm{Cov}(Y, X)\,V(X)^{-1}\,\mathrm{Cov}(X, Y) \geq 0$. Using the Schwarz inequality,
$V_\theta[\delta(Y)] - \mathrm{Cov}_\theta\!\left(\delta(Y),\, \frac{\partial \log l(Y;\theta)}{\partial\theta}\right) V_\theta\!\left(\frac{\partial \log l(Y;\theta)}{\partial\theta}\right)^{-1} \mathrm{Cov}_\theta\!\left(\frac{\partial \log l(Y;\theta)}{\partial\theta},\, \delta(Y)\right) \geq 0$,
i.e. this matrix is symmetric positive semi-definite.
18 Since $V_\theta\!\left(\frac{\partial \log l(Y;\theta)}{\partial\theta}\right) = I(\theta)$, we obtain
$V_\theta[\delta(Y)] \geq \frac{\partial g(\theta)}{\partial\theta'}\, I(\theta)^{-1}\, \frac{\partial g(\theta)'}{\partial\theta}$.
QED
19 Given a family of unbiased estimators, we may ask which is the best among all.
Definition (Efficient estimator). Given a regular parametric model, a regular unbiased estimator of g(θ) is efficient if its variance-covariance matrix is equal to the Cramér-Rao lower bound, i.e. if
$V_\theta[\delta(Y)] = \frac{\partial g(\theta)}{\partial\theta'}\, I(\theta)^{-1}\, \frac{\partial g(\theta)'}{\partial\theta}$ for all θ ∈ Θ.
In particular, an efficient estimator of θ is an estimator whose variance-covariance matrix is equal to the inverse of the Fisher information matrix.
20 Property. If δ(Y) is a q × 1 efficient estimator of its mean g(θ) and if A and B are two constant matrices, the estimator Aδ(Y) + B is also an efficient estimator of Ag(θ) + B:
$V_\theta(A\delta(Y) + B) = \frac{\partial (Ag(\theta)+B)}{\partial\theta'}\, I(\theta)^{-1}\, \frac{\partial (Ag(\theta)+B)'}{\partial\theta}$
21 Why is the variance of an efficient estimator the inverse of the Fisher information matrix? The calculations say so, but what does this mean?
1. We have seen that the Fisher information matrix is the variance-covariance matrix of the score:
$V_\theta\!\left(\frac{\partial \log l(Y;\theta)}{\partial\theta}\right) = E_\theta\!\left[\frac{\partial \log l(Y;\theta)}{\partial\theta}\,\frac{\partial \log l(Y;\theta)}{\partial\theta'}\right]$
2. In the score, θ (or g(θ)) is fixed.
3. We evaluate the score at every observation. Some observations will produce a positive score and others a negative score.
4. Some will be close to the average while others will be far.
22 4. The mean of all these fluctuations around the mean (i.e. the square root of the variance) gives us a measure of the information content of the observations.
5. We now have an unbiased estimator δ(Y) of g(θ) ($E_\theta[\delta(Y)] = g(\theta)$) that is also a function of the observations.
6. There is therefore a relation between δ(Y) and the score:
$V_\theta\!\left(\frac{\partial \log l(Y;\theta)}{\partial\theta}\right) = I(\theta)$, $\quad E_\theta\!\left[\frac{\partial \log l(Y;\theta)}{\partial\theta}\right] = 0$, $\quad E_\theta[\delta(Y)] = g(\theta)$, $\quad V_\theta[\delta(Y)] = ?$
7. Since the Fisher information matrix is a measure of the information content in the observations, δ(Y) will also exploit it.
23 8. The better this information is exploited, the better δ(Y) is. In other words, the better I(θ) is used, the smaller the variance of δ(Y).
9. In the best case, I(θ) is efficiently exploited: there is a kind of perfect fit between δ(Y) and the score.
10. This means that if we regress the estimator on the score,
$\delta(y_i) = \beta\,\frac{\partial \log l(y_i;\theta)}{\partial\theta} + \varepsilon_i$,
the estimated coefficient is
$\hat\beta = \mathrm{Cov}_\theta\!\left(\delta(Y),\, \frac{\partial \log l(Y;\theta)}{\partial\theta}\right) V_\theta\!\left(\frac{\partial \log l(Y;\theta)}{\partial\theta}\right)^{-1}$
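Point 10 can be sketched numerically. For an N(μ, 1) sample (an assumed illustrative model) the sample mean is efficient and is an exact affine function of the score, $\bar{X} - \mu = \frac{1}{n}\,\text{score}$, so the regression fits perfectly:

```python
import numpy as np

rng = np.random.default_rng(7)
mu, n, reps = 0.5, 10, 50_000

# Efficient estimator of mu for an N(mu, 1) sample: the sample mean
xbar = rng.normal(mu, 1.0, size=(reps, n)).mean(axis=1)
score = n * (xbar - mu)  # score of the N(mu, 1) sample, evaluated at mu

# Regress the (centered) estimator on the score
C = np.cov(xbar, score)
beta_hat = C[0, 1] / C[1, 1]
resid = xbar - mu - beta_hat * score

# Perfect fit: xbar - mu = (1/n) * score exactly, so residuals vanish
assert abs(beta_hat - 1 / n) < 1e-8
assert np.allclose(resid, 0.0, atol=1e-8)
```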
24 If there is a perfect fit, $\hat\varepsilon_i = 0$ for $i = 1, \ldots, n$.
[Figure: scatter of δ(y) against the score ∂ log l(y; θ)/∂θ, with fitted line of slope β.]
25 And a perfect fit also means $\mathrm{Var}(\hat\varepsilon) = 0$. Therefore:
$V_\theta[\delta(Y)] = \hat\beta^2\, V_\theta\!\left(\frac{\partial \log l(Y;\theta)}{\partial\theta}\right) = \frac{\mathrm{Cov}_\theta\!\left(\delta(Y),\, \frac{\partial \log l(Y;\theta)}{\partial\theta}\right)^2}{V_\theta\!\left(\frac{\partial \log l(Y;\theta)}{\partial\theta}\right)}$
Or, in matrix form:
$V_\theta[\delta(Y)] - \mathrm{Cov}_\theta\!\left(\delta(Y),\, \frac{\partial \log l(Y;\theta)}{\partial\theta}\right) V_\theta\!\left(\frac{\partial \log l(Y;\theta)}{\partial\theta}\right)^{-1} \mathrm{Cov}_\theta\!\left(\frac{\partial \log l(Y;\theta)}{\partial\theta},\, \delta(Y)\right) = 0$,
i.e. $V_\theta[\delta(Y)] - \frac{\partial g(\theta)}{\partial\theta'}\, I(\theta)^{-1}\, \frac{\partial g(\theta)'}{\partial\theta} = 0$.
26 If δ(y ) does not use efficiently all the information, some residuals are not zero which means that V (ˆε) > 0 and ( ) log l(y; θ) V θ (δ(y )) = ˆβ 2 V θ + V (ˆε), which means that V θ (δ(y )) g(θ) I(θ) 1 g(θ) = V (ˆε) > 0, which means that δ(y ) is not efficient Yves Dominicy Graduate Econometrics I: Unbiased Estimation 26/26
Model 1 2 Ordinary Least Squares 3 4 Non-linearities 5 of the coefficients and their to the model We saw that econometrics studies E (Y x). More generally, we shall study regression analysis. : The regression
More informationMaximum Likelihood Tests and Quasi-Maximum-Likelihood
Maximum Likelihood Tests and Quasi-Maximum-Likelihood Wendelin Schnedler Department of Economics University of Heidelberg 10. Dezember 2007 Wendelin Schnedler (AWI) Maximum Likelihood Tests and Quasi-Maximum-Likelihood10.
More informationMathematical Statistics
Mathematical Statistics Chapter Three. Point Estimation 3.4 Uniformly Minimum Variance Unbiased Estimator(UMVUE) Criteria for Best Estimators MSE Criterion Let F = {p(x; θ) : θ Θ} be a parametric distribution
More informationECON Introductory Econometrics. Lecture 6: OLS with Multiple Regressors
ECON4150 - Introductory Econometrics Lecture 6: OLS with Multiple Regressors Monique de Haan (moniqued@econ.uio.no) Stock and Watson Chapter 6 Lecture outline 2 Violation of first Least Squares assumption
More informationMS&E 226: Small Data
MS&E 226: Small Data Lecture 12: Frequentist properties of estimators (v4) Ramesh Johari ramesh.johari@stanford.edu 1 / 39 Frequentist inference 2 / 39 Thinking like a frequentist Suppose that for some
More informationSTAT 100C: Linear models
STAT 100C: Linear models Arash A. Amini June 9, 2018 1 / 56 Table of Contents Multiple linear regression Linear model setup Estimation of β Geometric interpretation Estimation of σ 2 Hat matrix Gram matrix
More informationLecture 4: Heteroskedasticity
Lecture 4: Heteroskedasticity Econometric Methods Warsaw School of Economics (4) Heteroskedasticity 1 / 24 Outline 1 What is heteroskedasticity? 2 Testing for heteroskedasticity White Goldfeld-Quandt Breusch-Pagan
More informationLinear models. Linear models are computationally convenient and remain widely used in. applied econometric research
Linear models Linear models are computationally convenient and remain widely used in applied econometric research Our main focus in these lectures will be on single equation linear models of the form y
More informationMaking sense of Econometrics: Basics
Making sense of Econometrics: Basics Lecture 2: Simple Regression Egypt Scholars Economic Society Happy Eid Eid present! enter classroom at http://b.socrative.com/login/student/ room name c28efb78 Outline
More informationLinear Models and Estimation by Least Squares
Linear Models and Estimation by Least Squares Jin-Lung Lin 1 Introduction Causal relation investigation lies in the heart of economics. Effect (Dependent variable) cause (Independent variable) Example:
More informationSensitivity of GLS Estimators in Random Effects Models
Sensitivity of GLS Estimators in Random Effects Models Andrey L. Vasnev January 29 Faculty of Economics and Business, University of Sydney, NSW 26, Australia. E-mail: a.vasnev@econ.usyd.edu.au Summary
More informationReliability of inference (1 of 2 lectures)
Reliability of inference (1 of 2 lectures) Ragnar Nymoen University of Oslo 5 March 2013 1 / 19 This lecture (#13 and 14): I The optimality of the OLS estimators and tests depend on the assumptions of
More informationEconometrics I KS. Module 1: Bivariate Linear Regression. Alexander Ahammer. This version: March 12, 2018
Econometrics I KS Module 1: Bivariate Linear Regression Alexander Ahammer Department of Economics Johannes Kepler University of Linz This version: March 12, 2018 Alexander Ahammer (JKU) Module 1: Bivariate
More informationLECTURE 2 LINEAR REGRESSION MODEL AND OLS
SEPTEMBER 29, 2014 LECTURE 2 LINEAR REGRESSION MODEL AND OLS Definitions A common question in econometrics is to study the effect of one group of variables X i, usually called the regressors, on another
More informationEconomics 583: Econometric Theory I A Primer on Asymptotics
Economics 583: Econometric Theory I A Primer on Asymptotics Eric Zivot January 14, 2013 The two main concepts in asymptotic theory that we will use are Consistency Asymptotic Normality Intuition consistency:
More information