Time Series Analysis Fall 2008

MIT OpenCourseWare
http://ocw.mit.edu

14.384 Time Series Analysis
Fall 2008

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

Indirect Inference

14.384 Time Series Analysis, Fall 2007
Professor Anna Mikusheva
Paul Schrimpf, scribe
November 3, 2007

Lecture 9. Simulated MM and Indirect Inference

Indirect Inference

Suppose we are interested in a parameter $\beta$ with true value $\beta_0$. We observe data $\{x_t\}_{t=1}^T$, and we have a model that we can simulate to generate $\{y_j(\beta)\}_{j=1}^S$. We don't necessarily believe that our model is the true DGP, but we do think it can explain some features of the data. These features can be written as a function $\theta(\{x_t\})$, which we assume is an extremum estimator:
$$\hat\theta_T = \theta(\{x_t\}) = \arg\max_\theta Q_T(\{x_t\},\theta)$$
$$\hat\theta_S(\beta) = \theta(\{y_j(\beta)\}) = \arg\max_\theta Q_S(\{y_j(\beta)\},\theta)$$
For example, $\theta$ could be simple sample means or moments, or regression coefficients, or more generally the parameters of some sort of auxiliary model. We estimate $\beta$ by matching $\hat\theta_T$ to $\hat\theta_S(\beta)$:
$$\hat\beta = \arg\min_\beta \big(\hat\theta_T - \hat\theta_S(\beta)\big)' W_T \big(\hat\theta_T - \hat\theta_S(\beta)\big)$$
This estimator is discussed by Smith (1993) for the case where $Q_T(\cdot,\theta)$ is a pseudo-loglikelihood; Gourieroux, Monfort, and Renault (1993) consider a more general setup. We will go through Smith's setup, in which $Q_T(\cdot,\theta)$ is a pseudo-loglikelihood:
$$Q_T(\{x_t\},\theta) = \frac{1}{T}\sum_{t=p}^{T} \log f(x_t,\ldots,x_{t-p};\theta)$$
We call this a pseudo-loglikelihood because we allow $f$ to be misspecified, i.e. $f$ need not be the true density of $x_t$.

Assume:
1. $x_t$ and $y_t(\beta)$ are stationary and ergodic
2. there exists a unique $\beta_0$ such that $(x_t,\ldots,x_{t+p})$ and $(y_s(\beta_0),\ldots,y_{s+p}(\beta_0))$ have the same distribution, so that $\theta(\{x_t\}) = \theta(\{y_s(\beta_0)\}) = \theta_0$
3. $f$ is well-behaved (has several continuous, well-bounded derivatives)
4. $\arg\max_\theta E\log f(y_s(\beta),\ldots,y_{s-p}(\beta);\theta) = \theta^*(\beta) = h(\beta)$, and $\theta^*(\beta)$ is the unique maximizer for each $\beta$

Under these assumptions, $\hat\theta_S(\beta) \xrightarrow{p} \theta^*(\beta) = h(\beta)$ and $\hat\theta_T \xrightarrow{p} \theta_0 = h(\beta_0)$.
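To make the matching step concrete, here is a minimal Python sketch; the MA(1) model, the function names, and all tuning constants are illustrative assumptions, not from the lecture. The structural parameter $\beta$ is the MA coefficient, and the auxiliary estimator $\theta$ is an OLS AR(1) coefficient, so the problem is just-identified.

```python
# Minimal indirect-inference sketch (all names and choices are illustrative).
# Structural model: MA(1), y_t = e_t + beta * e_{t-1}.
# Auxiliary estimator theta: OLS AR(1) coefficient.
import numpy as np
from scipy.optimize import minimize_scalar

def simulate_model(beta, n, seed):
    """Simulate n observations from the structural MA(1) model."""
    e = np.random.default_rng(seed).standard_normal(n + 1)
    return e[1:] + beta * e[:-1]

def theta_hat(y):
    """AR(1) coefficient by OLS: sum y_t y_{t-1} / sum y_{t-1}^2."""
    return (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])

x = simulate_model(0.5, 500, seed=0)   # stand-in for the observed data {x_t}
theta_T = theta_hat(x)

def Q(beta, S=5000):
    # Common random numbers: reusing one seed for every beta keeps Q smooth.
    d = theta_T - theta_hat(simulate_model(beta, S, seed=1))
    return d * d                       # W_T = 1 in this scalar case

beta_hat = minimize_scalar(Q, bounds=(-0.99, 0.99), method="bounded").x
print(f"theta_T = {theta_T:.3f}, beta_hat = {beta_hat:.3f}")
```

For the MA(1), the binding function is $h(\beta) = \beta/(1+\beta^2)$, which is strictly increasing on $(-1,1)$, so the unique-maximizer condition of assumption 4 holds and the single auxiliary coefficient identifies $\beta$.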

For the asymptotic distribution, we need some notation. Let
$$A(\theta) = -E\left[\frac{\partial^2}{\partial\theta\,\partial\theta'}\log f(\{y(\beta)\};\theta)\right]$$
$$B(\theta) = \Gamma_0(\theta) + \sum_{k=1}^{\infty}\big(\Gamma_k(\theta) + \Gamma_k(\theta)'\big)$$
where $\Gamma_k(\theta) = \mathrm{cov}\big(\tfrac{\partial}{\partial\theta}\log f(y_s(\beta);\theta),\ \tfrac{\partial}{\partial\theta}\log f(y_{s-k}(\beta);\theta)\big)$. Assume a CLT:
$$\sqrt{S}\,\frac{\partial Q_S}{\partial\theta}(\{y(\beta)\},\theta^*) \Rightarrow N\big(0, B(\theta^*)\big)$$
Then
$$\sqrt{S}\big(\hat\theta_S - \theta^*\big) \Rightarrow N\big(0, A^{-1}BA^{-1}\big)$$
$$\sqrt{T}\big(\hat\theta_T - \theta_0\big) \Rightarrow N\big(0, A(\theta_0)^{-1}B(\theta_0)A(\theta_0)^{-1}\big)$$
We get this sandwich form for the variance because we have not assumed that the likelihood is correctly specified. If the likelihood $f$ were correctly specified, the information equality would hold and $A = B$. These results follow from the general theory of extremum estimators.

Extremum Estimators

Extremum estimators are very common in econometrics:
$$\hat\theta = \arg\min_\theta Q_T(\{x_t\},\theta)$$
Most estimators, such as least squares, GMM, and MLE, fit into this framework. Under some regularity conditions (including that $Q_T$ is differentiable and $\sqrt{T}\,\partial Q_T/\partial\theta$ satisfies a CLT), if $Q_T(\{x_t\},\theta) \xrightarrow{a.s.} Q_\infty(\theta)$ uniformly and $\theta_0 = \arg\min Q_\infty$, then $\hat\theta \xrightarrow{p} \theta_0$ and $\hat\theta$ is asymptotically normal. As with GMM, we can show this by taking a Taylor expansion of the first-order condition:
$$0 = \frac{\partial Q_T}{\partial\theta}(\{x_t\},\hat\theta) = \frac{\partial Q_T}{\partial\theta}(\{x_t\},\theta_0) + \frac{\partial^2 Q_T}{\partial\theta\,\partial\theta'}(\{x_t\},\theta_0)\big(\hat\theta - \theta_0\big) + o_p(\|\hat\theta - \theta_0\|)$$
$$\sqrt{T}\big(\hat\theta - \theta_0\big) = -\left[\frac{\partial^2 Q_T}{\partial\theta\,\partial\theta'}(\{x_t\},\theta_0)\right]^{-1}\sqrt{T}\,\frac{\partial Q_T}{\partial\theta}(\{x_t\},\theta_0) \Rightarrow N\big(0, A^{-1}BA^{-1}\big)$$

More Indirect Inference

So far, we have shown that $\hat\theta_T$ and $\hat\theta_S$ are consistent and asymptotically normal. Now we want to show that $\hat\beta$ is consistent and asymptotically normal. Write $h_S(\beta) = \hat\theta_S(\beta)$ for the auxiliary estimate as a function of $\beta$. We need some additional assumptions:

1. $h_S(\beta)$ and $h(\beta)$ are continuous and several times differentiable. Let $J(\beta) = \frac{\partial h}{\partial\beta}(\beta)$.
2. $h_S(\beta) \xrightarrow{p} h(\beta)$ uniformly in $\beta$
3. $S = \tau T$ for some fixed $\tau > 0$
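As a numerical illustration of why the sandwich form matters, consider the following sketch (the data-generating process and all names are my choices, not the lecture's): a $N(\theta,1)$ pseudo-likelihood fit to iid exponential data with mean 2. The per-observation score is $x_t - \theta$, so $A = 1$ while $B = \mathrm{var}(x_t) = 4$; the naive variance that assumes the information equality understates the truth by a factor of four.

```python
# Sandwich variance A^{-1} B A^{-1} for a misspecified pseudo-likelihood
# (illustrative setup: iid Exponential(mean 2) data, N(theta, 1) pseudo-likelihood).
import numpy as np

rng = np.random.default_rng(0)
T = 10_000
x = rng.exponential(scale=2.0, size=T)

theta_hat = x.mean()              # pseudo-MLE: maximizes -(1/2T) sum (x_t - theta)^2
scores = x - theta_hat            # per-observation scores at theta_hat
A = 1.0                           # minus the expected Hessian of log f (known here)
B = scores.var()                  # iid data, so no HAC terms are needed in B
se_naive = np.sqrt(1.0 / (A * T))         # valid only if f were correctly specified
se_sandwich = np.sqrt(B / (A * A * T))    # correct under misspecification
print(f"theta_hat = {theta_hat:.3f}, naive se = {se_naive:.4f}, "
      f"sandwich se = {se_sandwich:.4f}")
```

With a correctly specified likelihood the two standard errors would agree asymptotically; here the sandwich standard error is about twice the naive one, matching $B/A = 4$.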

Then $\hat\beta \xrightarrow{p} \beta_0$ and
$$\sqrt{T}\big(\hat\beta - \beta_0\big) \Rightarrow N\Big(0, \big(1 + \tfrac{1}{\tau}\big)\Sigma(\beta_0)\Big)$$
where $\Sigma(\beta_0) = K J' W \Omega(\theta_0) W J K$, $\Omega(\theta_0) = A(\theta_0)^{-1}B(\theta_0)A(\theta_0)^{-1}$, and $K(\beta_0) = \big(J(\beta_0)'WJ(\beta_0)\big)^{-1}$.

Proof. As usual, this is just a sketch and not fully rigorous. Consider the FOC for
$$\hat\beta = \arg\min_\beta \big(\hat\theta_T - \hat\theta_S(\beta)\big)' W_T \big(\hat\theta_T - \hat\theta_S(\beta)\big) = \arg\min_\beta \big(\hat\theta_T - h_S(\beta)\big)' W_T \big(\hat\theta_T - h_S(\beta)\big)$$
We have:
$$0 = \frac{\partial h_S}{\partial\beta}(\hat\beta)' W_T \big(\hat\theta_T - h_S(\hat\beta)\big)$$
Taking a Taylor expansion $h_S(\hat\beta) \approx h_S(\beta_0) + \frac{\partial h_S}{\partial\beta}(\beta_0)\big(\hat\beta - \beta_0\big)$, taking the limit, and multiplying by $\sqrt{T}$:
$$0 = J'W\sqrt{T}\big(\hat\theta_T - h_S(\beta_0)\big) - J'WJ\sqrt{T}\big(\hat\beta - \beta_0\big)$$
$$J'WJ\sqrt{T}\big(\hat\beta - \beta_0\big) = J'W\sqrt{T}\big(\hat\theta_T - \theta_0 + \theta_0 - h_S(\beta_0)\big) \Rightarrow J'W\Big(N(0,\Omega) + \tfrac{1}{\sqrt{\tau}}N(0,\Omega)\Big)$$
The two independent normal terms come from the data and from the simulation, respectively; since the simulated path has length $S = \tau T$, the simulation term contributes variance $\Omega/\tau$. Also, as usual, the efficient choice of weighting matrix is $W = \Omega^{-1}$, implemented with a feasible $W_T = \hat\Omega^{-1}$, where $\hat\Omega$ is formed from some consistent estimate of $\theta_0$.

Test of Overidentifying Restrictions

If $\dim(\theta) = n > \dim(\beta) = k$, then we can test the overidentifying restrictions with
$$Z_T = \frac{T}{1 + 1/\tau}\big(\hat\theta_T - h_S(\hat\beta)\big)' W_T \big(\hat\theta_T - h_S(\hat\beta)\big) = \frac{T}{1 + 1/\tau}\big(\hat\theta_T - \hat\theta_S(\hat\beta)\big)' W_T \big(\hat\theta_T - \hat\theta_S(\hat\beta)\big) \Rightarrow \chi^2_{n-k}$$
using the efficient weighting matrix $W_T \xrightarrow{p} \Omega^{-1}$.

Example

Suppose we have a DSGE model where someone is maximizing something. For example,
$$\max E_0 \sum_{t=0}^{\infty} \omega^t \frac{c_t^{1-\gamma}}{1-\gamma}$$
$$\text{s.t. } c_t + i_t = A\lambda_t k_t^\alpha$$
$$k_{t+1} = (1-\delta)k_t + i_t$$
$$\lambda_t = \rho\lambda_{t-1} + \epsilon_t, \qquad \epsilon_t \sim N(0,\sigma^2)$$
This model has many parameters ($\omega, \gamma, A, \alpha, \delta, \rho, \sigma^2$), and it would be very difficult to write down its likelihood or moment functions.
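Solving this model in general requires numerical methods, but simulating it is easy once a solution (or a special case with a known solution) is in hand. The sketch below uses one deliberately special case, and every name in it is mine: with $\delta = 1$ and log utility ($\gamma \to 1$), the planner saves the constant fraction $\alpha\omega$ of output, giving the closed-form policy $k_{t+1} = \alpha\omega A\lambda_t k_t^\alpha$ (the Brock-Mirman result); I also put the AR(1) on $\log\lambda_t$ so that productivity stays positive.

```python
# Sketch of the "simulate the model, fit the auxiliary model" step for the
# RBC example, specialized to delta = 1 and log utility so that the policy
# k' = alpha*omega*A*lambda*k^alpha is exact (illustrative assumptions).
import numpy as np

def simulate_rbc(omega, A, alpha, rho, sigma, S, rng, k0=1.0):
    """Simulate log output; log lambda follows an AR(1)."""
    logk, loglam = np.log(k0), 0.0
    logy = np.empty(S)
    for t in range(S):
        loglam = rho * loglam + sigma * rng.standard_normal()
        logy[t] = np.log(A) + loglam + alpha * logk
        logk = np.log(alpha * omega) + logy[t]   # k' = alpha*omega*y
    return logy

def ar_coeffs(y, p=2):
    """Auxiliary model: OLS AR(p) coefficients of log output."""
    X = np.column_stack([y[p - i - 1 : len(y) - i - 1] for i in range(p)])
    return np.linalg.lstsq(X, y[p:], rcond=None)[0]

rng = np.random.default_rng(0)
logy = simulate_rbc(omega=0.95, A=1.0, alpha=0.3, rho=0.9,
                    sigma=0.01, S=2000, rng=rng)
print("auxiliary AR(2) coefficients:", ar_coeffs(logy))
```

Running the same auxiliary regression on actual data gives $\hat\theta_T$, and the coefficients printed here play the role of $\hat\theta_S(\beta)$ in the matching step.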

Moreover, we don't really believe that this model is the true DGP, and we don't want to use it to explain all aspects of the data. Instead, we just want the model to explain some features of the data, say the dynamics as captured by VAR coefficients. Also, although it is hard to write down the likelihood function for this model, it is fairly easy to simulate it (as in the sketch above). Then we can use indirect inference as follows:

1. Estimate a (possibly misspecified) VAR from the data
2. Given $\beta$, simulate the model and estimate the same VAR from the simulations; repeat, searching over $\beta$ until the objective function is minimized

Calibration

This was the method for evaluating models proposed by Kydland and Prescott, and it has been very widely used. The steps are:

1. Ask a question: either an assessment of a theoretical implication of a policy, or a test of the ability of a model to mimic features of actual data
2. Pick a class of models
3. Write down the equations
4. Your model should have implications for some questions with known answers (e.g., the labor share is 70%). You calibrate your parameters to match these answers. Kydland and Prescott describe this step as follows:

"Some economic questions have known answers; the model should give approximately correct answers to them... Data are used to calibrate the model economy... to mimic the world as closely as possible along a limited number of dimensions. Calibration is not an attempt to assess the size of something; it is not estimation... The parameter values selected are not the ones that provide the best fit in some statistical sense." (Kydland and Prescott)

5. Run an experiment to see if your model matches other aspects of the data.

This method was meant to get away from the way that statisticians put too much structure on the data. People criticize the method as not being rigorous and as producing results that are difficult to communicate. There are no formal criteria for whether the model matches well in step 5; the judgement of good and bad fit is subjective. Hansen and Heckman argued that calibration can be thought of as an informal version of estimation, so we might as well use the formal theory and get standard errors and formal tests. In fact, when all of the calibration targets are moments of the data, calibration is the same as just-identified simulated GMM. If the calibration involves estimates taken from other studies, we can still use the simulated GMM framework, but we need to take into account the standard errors of the parameters taken from those studies.
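To see the "calibration is just-identified simulated GMM" point in the simplest possible case, here is a sketch under assumed numbers (the 0.285 target and the reuse of the $\delta = 1$ special case above are my illustrative choices): with full depreciation, the model's investment share of output is exactly $\alpha\omega$, so calibrating $\omega$ to a target investment share is one parameter matched to one moment.

```python
# Calibration as just-identified simulated GMM: choose one parameter so one
# simulated moment hits its data target (all numbers are illustrative).
from scipy.optimize import brentq

alpha = 0.3
target_investment_share = 0.285   # hypothetical data target for i/y

def moment_gap(omega):
    # In the delta = 1 special case the simulated moment has the closed form
    # alpha*omega; in general one would simulate the model and average, as in
    # the indirect-inference sketch above.
    return alpha * omega - target_investment_share

omega_star = brentq(moment_gap, 0.5, 0.999)    # sign change brackets the root
print(f"calibrated omega = {omega_star:.3f}")  # 0.285 / 0.3 = 0.95
```

Because the system is just-identified, the weighting matrix is irrelevant and the moment is matched exactly; standard errors for the calibrated parameter would come from the simulated GMM asymptotics above.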