Bayesian Indirect Inference and the ABC of GMM


1 Bayesian Indirect Inference and the ABC of GMM Michael Creel, Jiti Gao, Han Hong, Dennis Kristensen Universitat Autònoma de Barcelona, Barcelona Graduate School of Economics, and MOVE Monash University Stanford University University College London, CEMMAP, and CREATES. April 18, / 44

2 Introduction The Generalized Method of Moments is the building block of many econometric models: Large Sample Properties of Generalized Method of Moments Estimators, Hansen, 1982, Econometrica. Computation may not be easy. This paper studies a method of implementing quasi-Bayesian estimators for nonlinear and possibly nonseparable GMM models, motivated by Indirect Likelihood Inference (Creel and Kristensen 2011) and MCMC (Chernozhukov and Hong 2003). It combines simulation with nonparametric regression in the computation of GMM models. New results show that, in this particular setting, local linear or polynomial methods have a theoretical advantage over possibly higher order kernel methods. Finite sample simulations are consistent with the theoretical results. More work is needed to handle large sample sizes and high-dimensional models. 2 / 44

3 Motivation and Literature Estimation of nonlinear models often involves difficult numerical optimization that might also be used in conjunction with simulation methods. Examples include maximum simulated likelihood, the simulated method of moments, and the efficient method of moments (Gallant and Tauchen 1996; Gourieroux, Monfort, and Renault 1993). Despite extensive efforts, see for example Andrews 1997, the problem of extremum computation remains a formidable impediment in these applications. A recent insightful contribution by Creel and Kristensen 2011, Indirect Likelihood Inference, combines simulation and nonparametric kernel regression. 3 / 44

4 Setup and Estimators The current paper builds on and is closely related to Creel and Kristensen 2011: Indirect Likelihood Inference; and Chernozhukov and Hong 2003: MCMC alternative to M-estimation. Gentzkow and Shapiro 2013 also regress parameters on moments; we use nonparametric instead of linear regression, and focus on the fitted value instead of the regression coefficients. Recent work by Forneron and Ng 2015 analyzes the higher order asymptotic bias of ABC and indirect inference estimators. This paper extends in several directions: it shows that local linear or polynomial methods have theoretical advantages over kernel methods; demonstrates the validity of inference based on simulated posterior quantiles; generalizes to nonlinear, nonseparable, and possibly nonsmooth moment conditions; and provides precise conditions relating the simulation noise to the sampling variance. 4 / 44

5 Setup and Background Creel and Kristensen 2011: a log likelihood Q̂(θ) based on a vector of summary statistics: Q̂(θ) = log f(T_n | θ). Example: the efficient method of moments (Gallant and Tauchen 1996) defines T_n to be the score vector of an auxiliary model. A limited information Bayesian posterior distribution: f_n(θ | T_n) = f_n(T_n, θ) / f_n(T_n) = f_n(T_n | θ) π(θ) / ∫_Θ f_n(T_n | θ) π(θ) dθ. (1) Information from the Bayesian posterior can be used to conduct valid frequentist statistical inference. For example, the posterior mean is consistent and asymptotically normal: θ̄ = ∫_Θ θ f_n(θ | T_n) dθ ≡ E_n(θ | T_n). (2) 5 / 44

6 Posterior quantiles can also be used to form valid confidence intervals under correct model specification. Define the posterior τth quantile of the jth parameter as θ̄_j(τ): ∫_{θ_j ≤ θ̄_j(τ)} f_{nj}(θ_j | T_n) dθ_j = τ, where f_{nj}(θ_j | T_n) = ∫ f_n(θ | T_n) dθ_{−j} is the marginal posterior of θ_j. A 1 − τ level confidence interval for θ_j: (θ̄_j(τ/2), θ̄_j(1 − τ/2)). More generally, let η(θ) be a known scalar function of the parameters. A point estimate of η_0 ≡ η(θ_0) can be computed using the posterior mean: η̄ = ∫_Θ η(θ) f_n(θ | T_n) dθ ≡ E_n(η(θ) | T_n). (3) Define η̄(τ), the posterior τth quantile of η given T_n, through ∫ 1(η(θ) ≤ η̄(τ)) f_n(θ | T_n) dθ = τ. (4) A level 1 − τ interval for η_0: (η̄(τ/2), η̄(1 − τ/2)). 6 / 44

7 Draw θ^s, s = 1, …, S, independently from π(θ). Compute η^s = η(θ^s) for s = 1, …, S. For each draw, generate a sample from the model at the parameter value θ^s, then compute the corresponding statistic T_n^s = T_n(θ^s), s = 1, …, S. For a kernel function κ(·) and a bandwidth sequence h, define η̂ = â, where (â, b̂) = arg min_{a,b} Σ_{s=1}^S (η^s − a − b'(T_n^s − T_n))² κ((T_n^s − T_n)/h). Similarly, define a feasible version of η̄(τ) as η̂(τ) = â, the intercept term in a local linear quantile regression, i.e. a weighted quantile regression with weights κ((T_n^s − T_n)/h): (â, b̂) = arg min_{a,b} Σ_{s=1}^S ρ_τ(η^s − a − b'(T_n^s − T_n)) κ((T_n^s − T_n)/h). In the above, ρ_τ(x) = (τ − 1(x ≤ 0)) x is the check function of Koenker and Bassett 1978. 7 / 44
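As a concrete illustration, the steps above can be sketched in Python under an assumed toy model (data N(θ_0, 1) with the sample mean as summary statistic; the uniform prior, S, and h are illustrative choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (assumed): z_i ~ N(theta0, 1); summary statistic T_n = sample mean.
n, S, h = 500, 10_000, 0.05
theta0 = 1.0
T_n = rng.normal(theta0, 1.0, n).mean()        # observed statistic

# Draw theta^s from the prior and simulate T_n^s at each draw.
theta_s = rng.uniform(-2.0, 4.0, S)            # pi(theta): uniform prior
T_s = rng.normal(theta_s[:, None], 1.0, (S, n)).mean(axis=1)

# Local linear regression of eta^s = theta^s on T_n^s - T_n with kernel
# weights kappa((T_n^s - T_n)/h); the fitted intercept a-hat is eta-hat.
u = T_s - T_n
w = np.exp(-0.5 * (u / h) ** 2)                # Gaussian kernel
X = np.column_stack([np.ones(S), u])
a_hat, b_hat = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * theta_s))
print(a_hat)                                   # posterior-mean estimate of eta0
```

The quantile version η̂(τ) replaces the squared loss with the check function ρ_τ, i.e. a kernel-weighted quantile regression with the same regressors.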

8 Generalized to local polynomial least squares and quantile regressions using the notation in Chaudhuri 1991. For u = (u_1, …, u_d), a d-dimensional vector of nonnegative integers, let [u] = u_1 + … + u_d. Let A be the set of all d-dimensional vectors u such that [u] ≤ p, and set s(A) = #A. Let β = (β_u, u ∈ A) be a vector of coefficients of dimension s(A). Let y_s = T_n^s − T_n, y_s^u = y_{s,1}^{u_1} ⋯ y_{s,d}^{u_d} for u ∈ A, and y_s^A = (y_s^u, u ∈ A). Define the pth order polynomial P_n(β, y_s) = Σ_{u∈A} β_u y_s^u = β' y_s^A. Then replace the last two steps by: Define η̂ = β̂_[0], the 0th element of β̂, for β̂ = (Σ_{s=1}^S y_s^A y_s^{A'} κ(y_s/h))^{−1} Σ_{s=1}^S y_s^A η^s κ(y_s/h). Define η̂(τ) = β̂_[0], the 0th element of β̂, for β̂ = arg min_β Σ_{s=1}^S ρ_τ(η^s − β' y_s^A) κ(y_s/h). 8 / 44

9 Local linear regression is a special case of local polynomial regression with p = 1. Under suitable regularity conditions, η̂ and η̂(τ) are consistent if h → 0 and S → ∞ when n → ∞. For η̂ to be first order equivalent to the limited information MLE and for (7) to hold, lim_{n→∞} P(η_0 ∈ (η̂(τ/2), η̂(1 − τ/2))) = 1 − τ, (7) we require √n h^{1+p} → 0 and S h^k → ∞, which entails S / n^{k/(2(1+p))} → ∞, namely that S is much larger than n^{k/(2(1+p))}. The bias in θ̂ is of order O(h^{1+p}). The variance is of order O(1/(n S h^k)), which is much smaller than in usual nonparametric regression models. In a local linear regression with p = 1, this requires S to be larger than n^{k/4}, where k = dim(θ). This condition holds regardless of whether d = k or d > k. 9 / 44

10 The ABC of GMM M-estimators: θ̌ = arg max Q̂_n(θ). MCMC alternative (Chernozhukov and Hong 2003): θ̄ = ∫ θ π(θ) exp(m Q̂_n(θ)) dθ / ∫ π(θ) exp(m Q̂_n(θ)) dθ, where m can possibly differ from n; but take m = n. For GMM, Q̂_n(θ) = −(1/2) ĝ(θ)' Ŵ(θ) ĝ(θ), where Ŵ(θ) is a possibly data and parameter dependent weighting matrix. Redefine (2), (3) and (4) by replacing f_n(θ | T_n) with f_n(θ | GMM): θ̄ = ∫_Θ θ f_n(θ | GMM) dθ, η̄ = ∫_Θ η(θ) f_n(θ | GMM) dθ, (8) and ∫ 1(η(θ) ≤ η̄(τ)) f_n(θ | GMM) dθ = τ. 10 / 44

11 GMM and simulation based estimators Moment conditions: ĝ(θ) = (1/n) Σ_{i=1}^n g(z_i, θ). Q̂_n(θ) = −(1/2) ĝ(θ)' Ŵ(θ) ĝ(θ). Consider the following statistical experiment: a random vector Y_m which, given θ and the data, has a normal distribution N(ĝ(θ), (1/m) Ŵ(θ)^{−1}). Define θ̄(y) = E(θ | Y_m = y), for f(θ | Y_m = y) ∝ π(θ) det(Ŵ(θ))^{1/2} exp(−(m/2) (ĝ(θ) − y)' Ŵ(θ) (ĝ(θ) − y)). Then θ̄ = θ̄(y)|_{y=0} = θ̄(0). 11 / 44

12 GMM and simulation based estimators This interpretation suggests the following simulation based estimator. 1 Draw θ^s, s = 1, …, S, from π(θ). For each θ^s, compute ĝ(θ^s). 2 Draw y_n^s from Y_n ∼ N(ĝ(θ^s), (1/n) Ŵ(θ^s)^{−1}); for ξ ∼ N(0, I_d): y_n^s = ĝ(θ^s) + (1/√n) Ŵ(θ^s)^{−1/2} ξ. 3 Define η̂ = â in the following local-to-zero linear least squares regression: (â, b̂) = arg min_{a,b} Σ_{s=1}^S (η^s − a − b' y_n^s)² κ(y_n^s / h). 4 Define η̂(τ) = â in the following local-to-zero linear quantile regression: (â, b̂) = arg min_{a,b} Σ_{s=1}^S ρ_τ(η^s − a − b' y_n^s) κ(y_n^s / h). 5 Local polynomials are implemented exactly as before. 12 / 44
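Steps 1–3 can be sketched in Python, assuming a just-identified toy moment g(z_i, θ) = z_i − θ with W(θ) = 1 (the prior, S, and h are again illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy moment: g(z_i, theta) = z_i - theta, so ghat(theta) = zbar - theta.
n, S, h = 500, 20_000, 0.05
theta0 = 1.0
z = rng.normal(theta0, 1.0, n)
g_hat = lambda th: z.mean() - th

# Steps 1-2: prior draws and y_n^s = ghat(theta^s) + W^{-1/2} xi / sqrt(n).
theta_s = rng.uniform(-2.0, 4.0, S)
y_s = g_hat(theta_s) + rng.normal(0.0, 1.0, S) / np.sqrt(n)

# Step 3: local-to-zero linear least squares; the intercept a-hat is eta-hat.
w = np.exp(-0.5 * (y_s / h) ** 2)              # kappa(y_n^s / h)
X = np.column_stack([np.ones(S), y_s])
a_hat, b_hat = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * theta_s))
print(a_hat)
```

Step 4 is the same regression with the squared loss replaced by the check function ρ_τ.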

13 Comments The weighting matrix W(θ): Can be chosen to provide an initial √n consistent, but not necessarily efficient, estimator; for example, W(θ) = I. Posterior quantile inference is generally invalid in this case. Can be estimated in two steps: in the first step W(θ) = I; in the second step, Ŵ is the estimated variance of ĝ(θ_0) evaluated at the first-step θ̂. This yields efficient and valid posterior quantile inference, similar to optimal two step GMM. Or use continuously updating GMM, and choose Ŵ(θ) as the estimated variance of ĝ(θ) for each given θ. Indirect inference implicitly uses a continuously updated Ŵ(θ). 13 / 44
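The two step choice of Ŵ can be sketched as follows (hypothetical overidentified moments for z ~ N(θ, 1); a grid search stands in for the posterior computation):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical moments g(z_i, theta) = (z_i - theta, z_i^2 - theta^2 - 1), z ~ N(theta0, 1).
n, theta0 = 500, 1.0
z = rng.normal(theta0, 1.0, n)
g = lambda th: np.column_stack([z - th, z**2 - th**2 - 1.0])

# Step 1: W = I gives an initial consistent but inefficient estimate.
grid = np.linspace(-1.0, 3.0, 2001)
Q1 = [-0.5 * g(t).mean(0) @ g(t).mean(0) for t in grid]
theta1 = grid[int(np.argmax(Q1))]

# Step 2: W-hat = inverse of the estimated variance of g(z_i, theta) at theta1.
W = np.linalg.inv(np.cov(g(theta1).T))
Q2 = [-0.5 * g(t).mean(0) @ W @ g(t).mean(0) for t in grid]
theta2 = grid[int(np.argmax(Q2))]
print(theta1, theta2)
```

The continuously updating variant instead recomputes Ŵ(θ) from g(z_i, θ) at every θ in the objective.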

14 Relation between BIL (Creel and Kristensen 2011) and ABC-GMM BIL corresponds to separable moment conditions where ĝ(θ) = T_n − t(θ), and t(θ) is the probability limit of T_n under θ. When t(θ^s) is unknown, BIL replaces it with a simulated version T_n^s drawn at θ^s and uses Y_n^s = T_n − T_n^s. This is analogous to drawing Y_n^s from ĝ(θ^s) + (1/√n) W(θ^s)^{−1/2} ξ = (T_n − t(θ^s)) − (T_n^s − t(θ^s)), where ξ is approximately normal. Appealing for parametric models with a complex likelihood that are nevertheless feasible to simulate. 14 / 44

15 The simulation sample size can also differ from the observed sample size: T_n^s can be replaced by T_m^s, where possibly m ≠ n. In step 2 of ABC-GMM, y_n^s can be replaced by y_m^s = ĝ(θ^s) + (1/√m) W(θ^s)^{−1/2} ξ ∼ N(ĝ(θ^s), (1/m) W(θ^s)^{−1}). When m → ∞ along with n, η̂ − η_0 = O_P(1/√n + 1/√m). An asymptotically valid, though conservative when m < n, 1 − τ confidence interval for η_0 recenters at the posterior median and rescales the quantile deviations: (η̂(1/2) + c_{m,n}(η̂(τ/2) − η̂(1/2)), η̂(1/2) + c_{m,n}(η̂(1 − τ/2) − η̂(1/2))), for a scale factor c_{m,n} depending on m and n with c_{n,n} = 1. Only when m = n does this confidence interval specialize to (η̂(τ/2), η̂(1 − τ/2)). Focus on m = n, since m < n does not seem to bring computational efficiency unless the cost of simulation increases with the simulation sample size, and m > n does not increase first order asymptotic efficiency. 15 / 44

16 Heuristics: generalizing grid search with local polynomial extrapolation. Take m = ∞, so that Y^s = ĝ(θ^s). Regress θ^s on Y^s local to zero, for example θ̂ = Σ_{s=1}^S θ^s κ(ĝ(θ^s)/h) / Σ_{s=1}^S κ(ĝ(θ^s)/h). This is analogous to a nonparametric regression in which there is no error term, ε ≡ 0: y = g(x). Variance is solely due to the variation in the design points, and does not include the conditional variance of y given x. Fine for exactly identified models. But in overidentified models, while kernel methods are straightforward, local linear or polynomial methods involve possible multicollinearity among the regressors ĝ(θ^s) that is not present when m < ∞. 16 / 44

17 Furthermore, when the model is overidentified with d > k, conditional on a realization of θ^s, the event ĝ(θ^s) = t is impossible for almost all values of t (a set of t of Lebesgue measure 1). In this case, the conditional distribution of θ given ĝ(θ) = t is not defined for almost all t, including t = 0 for almost all realizations of ĝ(θ). On the other hand, for m < ∞, regardless of how large, the conditional distribution of θ given Y_m = t, where Y_m = ĝ(θ) + ξ/√m, is always well defined for all t, as long as ξ has full support. 17 / 44

18 Asymptotic Distribution Theory Provide conditions on the order of magnitude of the number of simulations, in relation to the sample size, for √n consistency and asymptotic normality. Assumptions related to the infeasible estimators and intervals θ̄, η̄, η̄(τ), and (η̄(τ/2), η̄(1 − τ/2)) mirror the general results in Chernozhukov and Hong 2003 and Creel and Kristensen 2011. Additional assumptions relate to the feasible simulation based estimators and intervals θ̂, η̂, η̂(τ), and (η̂(τ/2), η̂(1 − τ/2)). Assumption 1 The true parameter θ_0 belongs to the interior of a compact convex subset Θ of Euclidean space R^k. The weighting function π : Θ → R_+ is a continuous, uniformly positive density function. 18 / 44

19 Assumption 2 (1) g(θ) = 0 if and only if θ = θ_0; (2) W(θ) is uniformly positive definite and finite on θ ∈ Θ; (3) sup_{θ∈Θ} |Ŵ(θ) − W(θ)| = o_P(1); (4) sup_{θ∈Θ} |ĝ(θ) − g(θ)| = o_P(1); (5) {√n (ĝ(θ) − g(θ)); θ ∈ Θ} ⇒ G_g, a mean zero Gaussian process with marginal variance Σ(θ); (6) g(θ) and W(θ) are both p + 1 times boundedly and continuously differentiable; (7) for any ε > 0, there is δ > 0 such that lim sup_n P( sup_{|θ−θ'|≤δ} √n |ĝ(θ) − ĝ(θ') − (g(θ) − g(θ'))| / (1 + √n |θ − θ'|) > ε ) < ε. 19 / 44

20 Assumption 3 (exactly identified models) The model is exactly identified: d = k. Assumption 4 (smooth overidentified models) There exist random functions Ĝ(θ_y), Ĥ(θ_y) such that for any δ_n → 0, sup_{|θ−θ_y|≤δ_n} sup_{y∈Y} √n |ĝ(θ) − ĝ(θ_y) − (g(θ) − g(θ_y)) − (Ĝ(θ_y) − G(θ_y))(θ − θ_y)| / |θ − θ_y| = o_P(1), and sup_{|θ−θ_y|≤δ_n} sup_{y∈Y} √n |Ŵ(θ) − Ŵ(θ_y) − (W(θ) − W(θ_y)) − (Ĥ(θ_y) − H(θ_y))(θ − θ_y)| / |θ − θ_y| = o_P(1), such that √n (ĝ(θ_y) − g(θ_y), Ĝ(θ_y) − G(θ_y), Ĥ(θ_y) − H(θ_y)) ⇒ (G_g, G_G, G_H). 20 / 44

21 Assumption 5 (nonsmooth overidentified models) sup_{y∈Y} |y| = o(n^{−1/4}). For any δ_n → 0, sup_{|θ−θ_y|≤δ_n} sup_{y∈Y} √n |ĝ(θ) − ĝ(θ_y) − (g(θ) − g(θ_y))| / |θ − θ_y| = O_P(1). Furthermore, Ŵ(θ) ≡ Ŵ, W(θ) ≡ W, and |Ŵ − W| = O_P(1/√n). Assumption 6 The kernel function satisfies (1) κ(x) = h̄(|x|), where h̄(·) decreases monotonically on (0, ∞); (2) ∫ κ(x) dx = 1; (3) ∫ x κ(x) dx = 0; (4) ∫ |x|² κ(x) dx < ∞. 21 / 44

22 Theorem Under Assumptions 1, 2, 6, and one of 3, 4, or 5, in the local linear regression, both √n (η̂ − η̄) = o_P(1) and √n (η̂(τ) − η̄(τ)) = o_P(1) when S h^k → ∞, nh → ∞ and √n h² = o(1), so that η̂ and η̂(τ) are first order asymptotically equivalent to η̄ and η̄(τ), and posterior inference based on η̂(τ) is valid whenever it is valid for the infeasible η̄(τ). Since the posterior distribution shrinks at the 1/√n rate, whenever S h^k → ∞, aside from the bias term, θ̂ is automatically √n consistent for E[θ̂]. The interaction between n and h is limited to the bias term. Theorem 1 holds regardless of exact identification (d = k) or overidentification (d > k). The lower bound on S is S ≫ n^{k/4}, in the sense that S / n^{k/4} → ∞. 22 / 44

23 Theorem Under Assumptions 1, 2 and 6, and one of 3 or 4, for η̂ and η̂(τ) defined in the local polynomial regressions, if √n h^{p+1} → 0, nh → ∞, and S h^k → ∞, then θ̂ − θ̄ = o_P(1/√n), η̂ − η̄ = o_P(1/√n), and η̂(τ) − η̄(τ) = o_P(1/√n), so that posterior inference based on η̂(τ) is valid whenever it is valid for the infeasible η̄(τ). The lower bound on S implied by Theorem 2 is S ≫ n^{k/(2(p+1))}, which can be made much smaller than S ≫ n^{k/4} by using a larger p. This also holds regardless of exact or overidentification. In the proof, we impose the additional assumption that nh → ∞ to allow for a smaller S. 23 / 44

24 Differences from conventional nonparametric regression: the marginal density of Y_m, the conditional variance of θ given Y_n, and the conditional density in the quantile regression case are all sample size dependent. Only for exactly identified models is f_{Y_n}(0) = O_p(1). The behavior of the marginal density of Y_n is more complex in overidentified models: f_{Y_n}(0) = O_p(n^{(d−k)/2}). For local linear estimators with √n h² = o(1), whenever S h^k → ∞, θ̂ is automatically √n consistent for E[θ̂]. Both higher order polynomials and higher order kernels reduce bias. However, higher order polynomials also improve on the variance. 24 / 44

25 Locally constant, and possibly higher order, kernel mean and quantile estimates of η are: η̂ = Σ_{s=1}^S η^s κ(y_n^s / h) / Σ_{s=1}^S κ(y_n^s / h), (10) and η̂(τ) = arg min_a Σ_{s=1}^S ρ_τ(η^s − a) κ(y_n^s / h). (11) However, the conditions required for √n-consistency and asymptotic normality are substantially more stringent for (10) and (11), and imply a larger lower bound on S: S ≫ n^{k/4} √n. Theorem Under Assumptions 1, 2, 6, and one of 3, 4, or 5, for η̂ and η̂(τ) defined in (10) and (11), η̂ − η̄ = o_P(1/√n) and η̂(τ) − η̄(τ) = o_P(1/√n) if S h^k min(1, (n h²)^{−1}) → ∞ and √n h² → 0 when d = k. The same conclusion holds when d > k under the additional condition that nh → ∞. 25 / 44
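For comparison, the locally constant estimates (10) and (11) amount to a kernel-weighted mean and a kernel-weighted quantile of the draws. A Python sketch in the same assumed toy setting as before (prior, S, h illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def weighted_quantile(x, w, tau):
    """tau-th quantile of x under weights w (minimizer of the weighted check loss)."""
    order = np.argsort(x)
    cw = np.cumsum(w[order])
    return x[order][np.searchsorted(cw, tau * cw[-1])]

# Assumed toy setting: eta^s = theta^s, y_n^s local to zero.
n, S, h = 500, 50_000, 0.05
theta0 = 1.0
z = rng.normal(theta0, 1.0, n)
theta_s = rng.uniform(-2.0, 4.0, S)
y_s = (z.mean() - theta_s) + rng.normal(0.0, 1.0, S) / np.sqrt(n)
w = np.exp(-0.5 * (y_s / h) ** 2)

eta_lc = np.sum(w * theta_s) / np.sum(w)       # (10): local constant mean
ci = [weighted_quantile(theta_s, w, t) for t in (0.05, 0.95)]   # (11): 90% interval
print(eta_lc, ci)
```

The contrast with the local linear sketch is only in the regression step: here no slope term absorbs the local variation in y_n^s, which is what drives the larger required S.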

26 Illustrating example. Consider a vector of sample means: ĝ(µ) = µ − X̄. Let π(µ) = N(µ_0, Σ_0). Y_m = µ − X̄ + (1/√m) Σ^{1/2} ξ, so that given µ, Y_m ∼ N(µ − X̄, (1/m) Σ). Then the posterior mean and variance are given by E(µ | Y_m = t) = (Σ/m)(Σ_0 + Σ/m)^{−1} µ_0 + Σ_0 (Σ_0 + Σ/m)^{−1} (X̄ + t), and Var(µ | Y_m = t) = Σ_0 − Σ_0 (Σ_0 + Σ/m)^{−1} Σ_0 = O(1/m). Given X̄, the marginal density of Y_m is N(µ_0 − X̄, Σ/m + Σ_0) = O(1) whenever Σ_0 is nonsingular, as in the exact identification case d ≡ dim(Y_m) = k ≡ dim(µ). 26 / 44
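These conjugate-normal formulas can be checked numerically (the specific values of Σ_0, µ_0, X̄, and m below are arbitrary toy numbers):

```python
import numpy as np

# Conjugate-normal posterior check for Y_m = mu - Xbar + Sigma^{1/2} xi / sqrt(m).
d, m = 2, 50
Sigma = np.eye(d)
Sigma0 = np.array([[1.0, 0.3], [0.3, 1.0]])
mu0 = np.array([0.5, -0.5])
Xbar = np.array([1.0, 0.8])
t = np.zeros(d)

A = np.linalg.inv(Sigma0 + Sigma / m)
post_mean = (Sigma / m) @ A @ mu0 + Sigma0 @ A @ (Xbar + t)
post_var = Sigma0 - Sigma0 @ A @ Sigma0        # O(1/m): shrinks as m grows
print(post_mean, np.diag(post_var))
```

As m grows, the prior-weight term (Σ/m)(Σ_0 + Σ/m)^{−1} vanishes and the posterior mean approaches X̄ + t, with posterior variance of order 1/m.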

27 Suppose now d > k = 1. For a scalar u_0 and σ_0², and for l a d × 1 vector of ones, write µ_0 = u_0 l and Σ_0 = σ_0² l l'. The previous calculation cannot be used when m, n → ∞. Instead, (Σ/m + σ_0² l l')^{−1} = m Σ^{−1} − σ_0² m² Σ^{−1} l l' Σ^{−1} / (1 + σ_0² m l' Σ^{−1} l). In this case, E(µ | Y_m = t) = (I − σ_0² m l l' Σ^{−1} / (1 + σ_0² m l' Σ^{−1} l)) u_0 l + (σ_0² m l l' Σ^{−1} / (1 + σ_0² m l' Σ^{−1} l)) (X̄ + t). As m → ∞, E(µ | Y_m = t) → (l' Σ^{−1} l)^{−1} l' Σ^{−1} (X̄ + t), the GLS estimator. Furthermore, now interpreting µ as a scalar, Var(µ | Y_m = t) = σ_0² − σ_0⁴ l' (Σ_0 + Σ/m)^{−1} l = σ_0² / (1 + σ_0² m l' Σ^{−1} l). 27 / 44

28 The marginal density of Y_m at t = 0, that of N(µ_0 − X̄, Σ/m + σ_0² l l'), becomes singular when m, n → ∞. Let X̄ − µ_0 = (X̄_1 − u_0) l + (0, ∆'/√n)' for ∆ = O_p(1), so that ∆ = Ω^{1/2} Z for some Ω and Z ∼ N(0, I_{d−k}). The exponent of this density becomes −(1/2) (X̄ − µ_0)' (Σ/m + σ_0² l l')^{−1} (X̄ − µ_0) = −(1/2) [ (X̄_1 − u_0)² m l' Σ^{−1} l / (1 + σ_0² m l' Σ^{−1} l) + (0, ∆'/√n) (m Σ^{−1} − σ_0² m² Σ^{−1} l l' Σ^{−1} / (1 + σ_0² m l' Σ^{−1} l)) (0, ∆'/√n)' ], which is O_p(1) if m ≍ n, and diverges if m/n → ∞. Also, using the relation det(I + u v') = 1 + u' v, m^{d−1} det(Σ/m + σ_0² l l') → C > 0. 28 / 44

29 Change of variables and the singularity of f_Y. Partition Y = (Y_1, Y_2) for scalar Y_1. Simplify by letting Σ = I. Then Y_1 = µ − X̄_1 + ξ_1/√m and Y_2 = µ − X̄_2 + ξ_2/√m, so that Y_2 = Y_1 + ∆/√n + w_2/√m, where ∆ = √n (X̄_1 − X̄_2) = O_p(1) and w_2 = ξ_2 − ξ_1 = O_p(1). Implication for the kernel function: κ(Y_1/h, Y_2/h) = κ(Y_1/h, Y_1/h + ∆/(√n h) + w_2/(√m h)). If √n h → ∞ and √m h → ∞, then ∆/(√n h) = o_p(1) and w_2/(√m h) = o_p(1), and this is essentially κ evaluated along Y_1/h alone. 29 / 44

30 Behavior of f_{Y_n} in the general case. For each v, f_{√n(θ−θ̄)}(v | Y_n = t) = n^{−k/2} f_θ(θ̄ + v/√n | Y_n = t), and f_θ(θ̄ + v/√n | Y_n = t) = π(θ̄ + v/√n) f_{Y_n}(t | θ = θ̄ + v/√n) / f_{Y_n}(t). Rewriting this relation, f_{Y_n}(t) = n^{−k/2} π(θ̄ + v/√n) f_{Y_n}(t | θ = θ̄ + v/√n) / f_{√n(θ−θ̄)}(v | Y_n = t). Since both f_{√n(θ−θ̄)}(v | Y_n = t) = O(1) and π(θ̄ + v/√n) = O(1), f_{Y_n}(t) ≍ C n^{−k/2} f_{Y_n}(t | θ = θ̄ + v/√n). Consider for example t = v = 0; then f_{Y_n}(0 | θ = θ̄) ∝ n^{d/2} det(W)^{1/2} exp(−(n/2) ĝ(θ̄)' W ĝ(θ̄)) = O_p(n^{d/2}). Hence f_{Y_n}(0) = O_p(n^{(d−k)/2}). 30 / 44

31 Implications for nonparametric regression. In the local constant kernel method, the variance is of order (1/(S h^k))(1/n + h²). In local linear regression, the regressors can be asymptotically collinear along a k dimensional manifold, with up to 1/√n local variation surrounding this manifold. In this case, the intercept term converges at the fast rate 1/√(S h^k n), up to the bias adjustment. However, the local linear slope coefficients converge at the slower rate 1/√(S h^k), and do not benefit from the 1/√n term, although certain linear combinations of the coefficients converge at the faster rate (1/√(S h^k))(h/√n). We illustrate this point in the normal analytic example, for k = 1, d = 2, with diffuse prior σ_0 = ∞. 31 / 44

32 In the normal example when m = n: µ = β_0 + β_1 Y_1 + β_2 Y_2 + ε, where β_0 = (l' Σ^{−1} l)^{−1} l' Σ^{−1} X̄, (β_1, β_2) = (l' Σ^{−1} l)^{−1} l' Σ^{−1}, and ε ∼ N(0, (1/n)(l' Σ^{−1} l)^{−1}). This can be written as µ = β_0 + Y_1 (β_1 + β_2) + (Y_2 − Y_1) β_2 + ε = β_0 + (X̄_1 − X̄_2) β_2 + Y_1 (β_1 + β_2) + ((ξ_2 − ξ_1)/√n) β_2 + ε. Then for θ* = (β_0 + (X̄_1 − X̄_2) β_2, β_1 + β_2, β_2/√n) and its corresponding least squares estimate θ̂* based on the dependent variable µ^s, s = 1, …, S, and regressors Y_1 = µ − X̄_1 + ξ_1/√n and √n (Y_2 − Y_1), where the effective simulation size S* is of order S h^k, √S* (θ̂* − θ*) has a nondegenerate distribution. 32 / 44

33 As S* → ∞, √S* (θ̂* − θ*) ⇒ N(0, σ² Σ_n^{−1}), where Σ_n is the limiting second moment matrix of the regressors, with entries involving σ_u², σ_1², σ_2², and σ_12 (some scaled by 1/√n), and zeros elsewhere. Asymptotically, β̂_1 + β̂_2 and β̂_2 are independent. These results will be demonstrated in the general case. If m/n → ∞, then ε ∼ N(0, (1/m)(l' Σ^{−1} l)^{−1}) and Y_2 − Y_1 = X̄_1 − X̄_2 + (ξ_2 − ξ_1)/√m. In this case, one needs to scale β_2 by √m instead. 33 / 44

34 If m = ∞, Y_2 is a constant shift of Y_1 by X̄_1 − X̄_2, and the regressors become collinear. In this case, it appears sufficient to regress µ^s on either Y_{1s} or Y_{2s}. However, the intercept term β_0 is then not estimable within scale O_p(1/√n), in the sense that β̂_0 = F β̂ for F = (1, 0, 0) and β̂ = (β̂_0, β̂_1, β̂_2) is not uniquely determined: β̂ can have multiple solutions to the normal equations due to the collinearity of the regressors. To see this, note that one can write either of the following two cases: µ = (β_0 + (X̄_1 − X̄_2) β_2) + Y_1 (β_1 + β_2) or µ = (β_0 − (X̄_1 − X̄_2) β_1) + Y_2 (β_1 + β_2). 34 / 44

35 Or use Rao 1973 to argue that there is no A such that F = X' A. In fact, let A = (a_1, …, a_n)'. Then to satisfy (1, 0, 0)' = Σ_i a_i (1, Y_{1i}, Y_{1i} + (X̄_1 − X̄_2))', it is necessary that Σ_i a_i = 1 and (X̄_1 − X̄_2) Σ_i a_i = 0, which forms a contradiction when X̄_1 ≠ X̄_2. The exactly collinear case is perhaps unlikely in nonlinear models. Nevertheless this suggests that in overidentified models, the limit of taking m → ∞, even at a very fast rate, is different from setting m = ∞. It is possible to regress only on Y_1 or Y_2, but in this case X̄_1 and X̄_2 might not be combined optimally. 35 / 44

36 Monte Carlo Simulation: DSGE Model A single good can be consumed or used for investment, and a single competitive firm maximizes profits. The variables: y output; c consumption; k capital; i investment; n labor; w real wages; r return to capital. The household maximizes expected discounted utility E_t Σ_{s=0}^∞ β^s [c_{t+s}^{1−γ}/(1−γ) + (1 − n_{t+s}) η_{t+s} ψ] subject to the budget constraint c_t + i_t = r_t k_t + w_t n_t and the accumulation of capital k_{t+1} = i_t + (1 − δ) k_t. Preference shock evolution: ln η_t = ρ_η ln η_{t−1} + σ_η ε_t. 36 / 44

37 Monte Carlo Simulation: DSGE Model Firm production technology: y_t = k_t^α n_t^{1−α} z_t. Technology shock, a log AR(1) process: ln z_t = ρ_z ln z_{t−1} + σ_z u_t, with ε_t and u_t mutually independent i.i.d. N(0, 1) variables. Output allocation: y_t = c_t + i_t. Capital and labor are paid at the rates r_t and w_t. Estimate steady state hours, n, along with the other parameters, excepting ψ. 37 / 44

38 Table : DSGE models, support of uniform priors. Parameter Lower bound Upper bound True value Prior bias Prior RMSE α β δ γ ρ z σ z ρ η σ η n 6/24 9/24 1/ / 44

39 Table : Selected statistics, DSGE model. For statistics 11-20, σ xy indicates the sample covariance of the residuals of the AR1 models for the respective variables x and y. Statistic Description Statistic Description 1 log ψ 12 σ qq 2 γ 13 σ qn 3 ρ η, residuals 14 σ qr 4 sample mean c 15 σ qw 5 sample mean n 16 σ cc 6 sample std. dev. q 17 σ cn 7 sample std. dev. c 18 σ cr 8 sample std. dev. n 19 σ cw 9 sample std. dev. r 20 σ nn 10 sample std. dev. w 21 σ ww 11 estimated AR1 coef., r 22 c/n 39 / 44

40 Table : DSGE model. Monte Carlo results 1000 replications. Bandwidths tuned using prior. LC=local constant, LL=local linear, LQ=local quadratic. 90% CI gives the proportion of times that the true value is in the 90% confidence interval. Bias RMSE 90% CI Parameter LC LL LQ LC LL LQ LC α β δ γ ρ z σ z ρ η σ η n / 44

41 Table : DSGE model. Monte Carlo results 1000 replications. Bandwidths tuned locally. LC=local constant, LL=local linear, LQ=local quadratic. 90% CI gives the proportion of times that the true value is in the 90% confidence interval. Bias RMSE 90% CI Parameter LC LL LQ LC LL LQ LC α β δ γ ρ z σ z ρ η σ η n / 44

42 Monte Carlo Simulation Setup: the quantile instrumental variable model of Chernozhukov and Hansen 2005: y_i = x_i' β + ε_i, where Q_τ(ε_i | z_i) = 0. Data generating process: ε_i = exp(z_i' α / 2) v_i, where v_i is such that Q_τ(v_i | z_i) = 0: v_i ∼ N(0, 1). We choose x_i = (1, x̃_i), where x̃_i = ξ_{1i} + ξ_{2i}, and z_i = (1, z̃_{i1}, z̃_{i2}), where z̃_{i1} = ξ_{2i} + ξ_{3i} and z̃_{i2} = ξ_{1i} + ξ_{4i}. In the above, ξ_{ij}, j = 1, …, 4, are i.i.d. N(0, 1). α = (1/5, 1/5, 1/5), β = (1, 1), n = 200. Quantile moment condition: ĝ(β) = (1/n) Σ_{i=1}^n z_i (τ − 1(y_i ≤ x_i' β)). Weighting matrix: W = ((1/n) Σ_{i=1}^n z_i z_i')^{−1}. 42 / 44
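The design can be sketched in Python; τ = 0.5 and the heteroskedastic scale form exp(z'α/2) are assumptions made for illustration (the original scale expression is ambiguous):

```python
import numpy as np

rng = np.random.default_rng(3)

# Quantile IV design; tau = 0.5 and the scale exp(z'alpha/2) are assumed.
n, tau = 200, 0.5
alpha = np.array([0.2, 0.2, 0.2])
beta0 = np.array([1.0, 1.0])

xi = rng.normal(size=(n, 4))
x = np.column_stack([np.ones(n), xi[:, 0] + xi[:, 1]])
z = np.column_stack([np.ones(n), xi[:, 1] + xi[:, 2], xi[:, 0] + xi[:, 3]])
v = rng.normal(size=n)                          # Q_tau(v | z) = 0 at tau = 0.5
y = x @ beta0 + np.exp(z @ alpha / 2.0) * v

# Quantile moment condition and weighting matrix.
def g_hat(b):
    return z.T @ (tau - (y <= x @ b)) / n       # ghat(beta), d = 3 > k = 2
W = np.linalg.inv(z.T @ z / n)                  # W = ((1/n) sum z_i z_i')^{-1}
Q = lambda b: -0.5 * g_hat(b) @ W @ g_hat(b)    # nonsmooth GMM objective
g0 = g_hat(beta0)
print(g0)
```

The indicator makes ĝ(β) a step function of β, which is exactly the nonsmooth case covered by Assumption 5.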

43 Table : Quantile IV model. Monte Carlo results 1000 replications. Bandwidths tuned using prior. LC=local constant, LL=local linear. 90% CI gives the proportion of times that the true value is in the 90% confidence interval. β 1 β 2 Prior Bias IV LC LL Prior RMSE IV LC LL % CI LC / 44

44 Table : Quantile IV model. Monte Carlo results 1000 replications. Bandwidths tuned locally. LC=local constant, LL=local linear. 90% CI gives the proportion of times that the true value is in the 90% confidence interval. β 1 β 2 Bias LC LL RMSE LC LL % CI LC / 44

45 References Andrews, D. (1997): "A stopping rule for the computation of generalized method of moments estimators," Econometrica, 65(4). Chaudhuri, P. (1991): "Nonparametric estimates of regression quantiles and their local Bahadur representation," The Annals of Statistics, 19(2). Chernozhukov, V., and C. Hansen (2005): "An IV Model of Quantile Treatment Effects," Econometrica, 73(1). Chernozhukov, V., and H. Hong (2003): "An MCMC Approach to Classical Estimation," Journal of Econometrics, 115(2). Creel, M. D., and D. Kristensen (2011): "Indirect likelihood inference," Dynare Working Papers no. 8. Gallant, A. R., and G. Tauchen (1996): "Which Moments to Match?," Econometric Theory, 12. Gentzkow, M., and J. Shapiro (2013): "Measuring the sensitivity of parameter estimates to sample statistics," unpublished manuscript. Gourieroux, C., A. Monfort, and E. Renault (1993): "Indirect Inference," Journal of Applied Econometrics, pp. S85–S118.

46 Koenker, R., and G. S. Bassett (1978): "Regression Quantiles," Econometrica, 46(1), 33–50.


More information

Specification Test for Instrumental Variables Regression with Many Instruments

Specification Test for Instrumental Variables Regression with Many Instruments Specification Test for Instrumental Variables Regression with Many Instruments Yoonseok Lee and Ryo Okui April 009 Preliminary; comments are welcome Abstract This paper considers specification testing

More information

CCP Estimation. Robert A. Miller. March Dynamic Discrete Choice. Miller (Dynamic Discrete Choice) cemmap 6 March / 27

CCP Estimation. Robert A. Miller. March Dynamic Discrete Choice. Miller (Dynamic Discrete Choice) cemmap 6 March / 27 CCP Estimation Robert A. Miller Dynamic Discrete Choice March 2018 Miller Dynamic Discrete Choice) cemmap 6 March 2018 1 / 27 Criteria for Evaluating Estimators General principles to apply when assessing

More information

STATS 200: Introduction to Statistical Inference. Lecture 29: Course review

STATS 200: Introduction to Statistical Inference. Lecture 29: Course review STATS 200: Introduction to Statistical Inference Lecture 29: Course review Course review We started in Lecture 1 with a fundamental assumption: Data is a realization of a random process. The goal throughout

More information

Inference For High Dimensional M-estimates: Fixed Design Results

Inference For High Dimensional M-estimates: Fixed Design Results Inference For High Dimensional M-estimates: Fixed Design Results Lihua Lei, Peter Bickel and Noureddine El Karoui Department of Statistics, UC Berkeley Berkeley-Stanford Econometrics Jamboree, 2017 1/49

More information

Inference For High Dimensional M-estimates. Fixed Design Results

Inference For High Dimensional M-estimates. Fixed Design Results : Fixed Design Results Lihua Lei Advisors: Peter J. Bickel, Michael I. Jordan joint work with Peter J. Bickel and Noureddine El Karoui Dec. 8, 2016 1/57 Table of Contents 1 Background 2 Main Results and

More information

Multiscale Adaptive Inference on Conditional Moment Inequalities

Multiscale Adaptive Inference on Conditional Moment Inequalities Multiscale Adaptive Inference on Conditional Moment Inequalities Timothy B. Armstrong 1 Hock Peng Chan 2 1 Yale University 2 National University of Singapore June 2013 Conditional moment inequality models

More information

Inference for Identifiable Parameters in Partially Identified Econometric Models

Inference for Identifiable Parameters in Partially Identified Econometric Models Inference for Identifiable Parameters in Partially Identified Econometric Models Joseph P. Romano Department of Statistics Stanford University romano@stat.stanford.edu Azeem M. Shaikh Department of Economics

More information

Regression, Ridge Regression, Lasso

Regression, Ridge Regression, Lasso Regression, Ridge Regression, Lasso Fabio G. Cozman - fgcozman@usp.br October 2, 2018 A general definition Regression studies the relationship between a response variable Y and covariates X 1,..., X n.

More information

Graduate Econometrics I: Maximum Likelihood II

Graduate Econometrics I: Maximum Likelihood II Graduate Econometrics I: Maximum Likelihood II Yves Dominicy Université libre de Bruxelles Solvay Brussels School of Economics and Management ECARES Yves Dominicy Graduate Econometrics I: Maximum Likelihood

More information

The Numerical Delta Method and Bootstrap

The Numerical Delta Method and Bootstrap The Numerical Delta Method and Bootstrap Han Hong and Jessie Li Stanford University and UCSC 1 / 41 Motivation Recent developments in econometrics have given empirical researchers access to estimators

More information

A Resampling Method on Pivotal Estimating Functions

A Resampling Method on Pivotal Estimating Functions A Resampling Method on Pivotal Estimating Functions Kun Nie Biostat 277,Winter 2004 March 17, 2004 Outline Introduction A General Resampling Method Examples - Quantile Regression -Rank Regression -Simulation

More information

Econometrics I, Estimation

Econometrics I, Estimation Econometrics I, Estimation Department of Economics Stanford University September, 2008 Part I Parameter, Estimator, Estimate A parametric is a feature of the population. An estimator is a function of the

More information

Econ 583 Final Exam Fall 2008

Econ 583 Final Exam Fall 2008 Econ 583 Final Exam Fall 2008 Eric Zivot December 11, 2008 Exam is due at 9:00 am in my office on Friday, December 12. 1 Maximum Likelihood Estimation and Asymptotic Theory Let X 1,...,X n be iid random

More information

Additional Material for Estimating the Technology of Cognitive and Noncognitive Skill Formation (Cuttings from the Web Appendix)

Additional Material for Estimating the Technology of Cognitive and Noncognitive Skill Formation (Cuttings from the Web Appendix) Additional Material for Estimating the Technology of Cognitive and Noncognitive Skill Formation (Cuttings from the Web Appendix Flavio Cunha The University of Pennsylvania James Heckman The University

More information

Lecture 9: Quantile Methods 2

Lecture 9: Quantile Methods 2 Lecture 9: Quantile Methods 2 1. Equivariance. 2. GMM for Quantiles. 3. Endogenous Models 4. Empirical Examples 1 1. Equivariance to Monotone Transformations. Theorem (Equivariance of Quantiles under Monotone

More information

Chapter 1: A Brief Review of Maximum Likelihood, GMM, and Numerical Tools. Joan Llull. Microeconometrics IDEA PhD Program

Chapter 1: A Brief Review of Maximum Likelihood, GMM, and Numerical Tools. Joan Llull. Microeconometrics IDEA PhD Program Chapter 1: A Brief Review of Maximum Likelihood, GMM, and Numerical Tools Joan Llull Microeconometrics IDEA PhD Program Maximum Likelihood Chapter 1. A Brief Review of Maximum Likelihood, GMM, and Numerical

More information

11. Bootstrap Methods

11. Bootstrap Methods 11. Bootstrap Methods c A. Colin Cameron & Pravin K. Trivedi 2006 These transparencies were prepared in 20043. They can be used as an adjunct to Chapter 11 of our subsequent book Microeconometrics: Methods

More information

Nonlinear GMM. Eric Zivot. Winter, 2013

Nonlinear GMM. Eric Zivot. Winter, 2013 Nonlinear GMM Eric Zivot Winter, 2013 Nonlinear GMM estimation occurs when the GMM moment conditions g(w θ) arenonlinearfunctionsofthe model parameters θ The moment conditions g(w θ) may be nonlinear functions

More information

ON ILL-POSEDNESS OF NONPARAMETRIC INSTRUMENTAL VARIABLE REGRESSION WITH CONVEXITY CONSTRAINTS

ON ILL-POSEDNESS OF NONPARAMETRIC INSTRUMENTAL VARIABLE REGRESSION WITH CONVEXITY CONSTRAINTS ON ILL-POSEDNESS OF NONPARAMETRIC INSTRUMENTAL VARIABLE REGRESSION WITH CONVEXITY CONSTRAINTS Olivier Scaillet a * This draft: July 2016. Abstract This note shows that adding monotonicity or convexity

More information

Instrumental Variables Estimation and Weak-Identification-Robust. Inference Based on a Conditional Quantile Restriction

Instrumental Variables Estimation and Weak-Identification-Robust. Inference Based on a Conditional Quantile Restriction Instrumental Variables Estimation and Weak-Identification-Robust Inference Based on a Conditional Quantile Restriction Vadim Marmer Department of Economics University of British Columbia vadim.marmer@gmail.com

More information

Ultra High Dimensional Variable Selection with Endogenous Variables

Ultra High Dimensional Variable Selection with Endogenous Variables 1 / 39 Ultra High Dimensional Variable Selection with Endogenous Variables Yuan Liao Princeton University Joint work with Jianqing Fan Job Market Talk January, 2012 2 / 39 Outline 1 Examples of Ultra High

More information

Graduate Econometrics I: Maximum Likelihood I

Graduate Econometrics I: Maximum Likelihood I Graduate Econometrics I: Maximum Likelihood I Yves Dominicy Université libre de Bruxelles Solvay Brussels School of Economics and Management ECARES Yves Dominicy Graduate Econometrics I: Maximum Likelihood

More information

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A. 1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n

More information

Statistics: Learning models from data

Statistics: Learning models from data DS-GA 1002 Lecture notes 5 October 19, 2015 Statistics: Learning models from data Learning models from data that are assumed to be generated probabilistically from a certain unknown distribution is a crucial

More information

Nonparametric Identification of a Binary Random Factor in Cross Section Data - Supplemental Appendix

Nonparametric Identification of a Binary Random Factor in Cross Section Data - Supplemental Appendix Nonparametric Identification of a Binary Random Factor in Cross Section Data - Supplemental Appendix Yingying Dong and Arthur Lewbel California State University Fullerton and Boston College July 2010 Abstract

More information

Preliminaries The bootstrap Bias reduction Hypothesis tests Regression Confidence intervals Time series Final remark. Bootstrap inference

Preliminaries The bootstrap Bias reduction Hypothesis tests Regression Confidence intervals Time series Final remark. Bootstrap inference 1 / 171 Bootstrap inference Francisco Cribari-Neto Departamento de Estatística Universidade Federal de Pernambuco Recife / PE, Brazil email: cribari@gmail.com October 2013 2 / 171 Unpaid advertisement

More information

Supplement to Quantile-Based Nonparametric Inference for First-Price Auctions

Supplement to Quantile-Based Nonparametric Inference for First-Price Auctions Supplement to Quantile-Based Nonparametric Inference for First-Price Auctions Vadim Marmer University of British Columbia Artyom Shneyerov CIRANO, CIREQ, and Concordia University August 30, 2010 Abstract

More information

The generalized method of moments

The generalized method of moments Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna February 2008 Based on the book Generalized Method of Moments by Alastair R. Hall (2005), Oxford

More information

Stochastic Proximal Gradient Algorithm

Stochastic Proximal Gradient Algorithm Stochastic Institut Mines-Télécom / Telecom ParisTech / Laboratoire Traitement et Communication de l Information Joint work with: Y. Atchade, Ann Arbor, USA, G. Fort LTCI/Télécom Paristech and the kind

More information

Practical Bayesian Quantile Regression. Keming Yu University of Plymouth, UK

Practical Bayesian Quantile Regression. Keming Yu University of Plymouth, UK Practical Bayesian Quantile Regression Keming Yu University of Plymouth, UK (kyu@plymouth.ac.uk) A brief summary of some recent work of us (Keming Yu, Rana Moyeed and Julian Stander). Summary We develops

More information

Measuring the Sensitivity of Parameter Estimates to Estimation Moments

Measuring the Sensitivity of Parameter Estimates to Estimation Moments Measuring the Sensitivity of Parameter Estimates to Estimation Moments Isaiah Andrews MIT and NBER Matthew Gentzkow Stanford and NBER Jesse M. Shapiro Brown and NBER March 2017 Online Appendix Contents

More information

Fractional Hot Deck Imputation for Robust Inference Under Item Nonresponse in Survey Sampling

Fractional Hot Deck Imputation for Robust Inference Under Item Nonresponse in Survey Sampling Fractional Hot Deck Imputation for Robust Inference Under Item Nonresponse in Survey Sampling Jae-Kwang Kim 1 Iowa State University June 26, 2013 1 Joint work with Shu Yang Introduction 1 Introduction

More information

STAT 518 Intro Student Presentation

STAT 518 Intro Student Presentation STAT 518 Intro Student Presentation Wen Wei Loh April 11, 2013 Title of paper Radford M. Neal [1999] Bayesian Statistics, 6: 475-501, 1999 What the paper is about Regression and Classification Flexible

More information

Sequential Monte Carlo Methods

Sequential Monte Carlo Methods University of Pennsylvania Bradley Visitor Lectures October 23, 2017 Introduction Unfortunately, standard MCMC can be inaccurate, especially in medium and large-scale DSGE models: disentangling importance

More information

Closest Moment Estimation under General Conditions

Closest Moment Estimation under General Conditions Closest Moment Estimation under General Conditions Chirok Han and Robert de Jong January 28, 2002 Abstract This paper considers Closest Moment (CM) estimation with a general distance function, and avoids

More information

Closest Moment Estimation under General Conditions

Closest Moment Estimation under General Conditions Closest Moment Estimation under General Conditions Chirok Han Victoria University of Wellington New Zealand Robert de Jong Ohio State University U.S.A October, 2003 Abstract This paper considers Closest

More information

INDIRECT LIKELIHOOD INFERENCE

INDIRECT LIKELIHOOD INFERENCE INDIRECT LIKELIHOOD INFERENCE MICHAEL CREEL AND DENNIS KRISTENSEN ABSTRACT. Standard indirect Inference II) estimators take a given finite-dimensional statistic, Z n, and then estimate the parameters by

More information

Bayesian Estimation of DSGE Models 1 Chapter 3: A Crash Course in Bayesian Inference

Bayesian Estimation of DSGE Models 1 Chapter 3: A Crash Course in Bayesian Inference 1 The views expressed in this paper are those of the authors and do not necessarily reflect the views of the Federal Reserve Board of Governors or the Federal Reserve System. Bayesian Estimation of DSGE

More information

σ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) =

σ(a) = a N (x; 0, 1 2 ) dx. σ(a) = Φ(a) = Until now we have always worked with likelihoods and prior distributions that were conjugate to each other, allowing the computation of the posterior distribution to be done in closed form. Unfortunately,

More information

Bayesian Inference for DSGE Models. Lawrence J. Christiano

Bayesian Inference for DSGE Models. Lawrence J. Christiano Bayesian Inference for DSGE Models Lawrence J. Christiano Outline State space-observer form. convenient for model estimation and many other things. Bayesian inference Bayes rule. Monte Carlo integation.

More information

Lecture 8 Inequality Testing and Moment Inequality Models

Lecture 8 Inequality Testing and Moment Inequality Models Lecture 8 Inequality Testing and Moment Inequality Models Inequality Testing In the previous lecture, we discussed how to test the nonlinear hypothesis H 0 : h(θ 0 ) 0 when the sample information comes

More information

GMM Estimation and Testing

GMM Estimation and Testing GMM Estimation and Testing Whitney Newey July 2007 Idea: Estimate parameters by setting sample moments to be close to population counterpart. Definitions: β : p 1 parameter vector, with true value β 0.

More information

Economics 583: Econometric Theory I A Primer on Asymptotics

Economics 583: Econometric Theory I A Primer on Asymptotics Economics 583: Econometric Theory I A Primer on Asymptotics Eric Zivot January 14, 2013 The two main concepts in asymptotic theory that we will use are Consistency Asymptotic Normality Intuition consistency:

More information

Asymptotic distribution of GMM Estimator

Asymptotic distribution of GMM Estimator Asymptotic distribution of GMM Estimator Eduardo Rossi University of Pavia Econometria finanziaria 2010 Rossi (2010) GMM 2010 1 / 45 Outline 1 Asymptotic Normality of the GMM Estimator 2 Long Run Covariance

More information

INDIRECT INFERENCE BASED ON THE SCORE

INDIRECT INFERENCE BASED ON THE SCORE INDIRECT INFERENCE BASED ON THE SCORE Peter Fuleky and Eric Zivot June 22, 2010 Abstract The Efficient Method of Moments (EMM) estimator popularized by Gallant and Tauchen (1996) is an indirect inference

More information

Estimation, Inference, and Hypothesis Testing

Estimation, Inference, and Hypothesis Testing Chapter 2 Estimation, Inference, and Hypothesis Testing Note: The primary reference for these notes is Ch. 7 and 8 of Casella & Berger 2. This text may be challenging if new to this topic and Ch. 7 of

More information

CALCULATION METHOD FOR NONLINEAR DYNAMIC LEAST-ABSOLUTE DEVIATIONS ESTIMATOR

CALCULATION METHOD FOR NONLINEAR DYNAMIC LEAST-ABSOLUTE DEVIATIONS ESTIMATOR J. Japan Statist. Soc. Vol. 3 No. 200 39 5 CALCULAION MEHOD FOR NONLINEAR DYNAMIC LEAS-ABSOLUE DEVIAIONS ESIMAOR Kohtaro Hitomi * and Masato Kagihara ** In a nonlinear dynamic model, the consistency and

More information

Machine learning, shrinkage estimation, and economic theory

Machine learning, shrinkage estimation, and economic theory Machine learning, shrinkage estimation, and economic theory Maximilian Kasy December 14, 2018 1 / 43 Introduction Recent years saw a boom of machine learning methods. Impressive advances in domains such

More information

Location Multiplicative Error Model. Asymptotic Inference and Empirical Analysis

Location Multiplicative Error Model. Asymptotic Inference and Empirical Analysis : Asymptotic Inference and Empirical Analysis Qian Li Department of Mathematics and Statistics University of Missouri-Kansas City ql35d@mail.umkc.edu October 29, 2015 Outline of Topics Introduction GARCH

More information

Statistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach

Statistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach Statistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach Jae-Kwang Kim Department of Statistics, Iowa State University Outline 1 Introduction 2 Observed likelihood 3 Mean Score

More information

Advanced Econometrics

Advanced Econometrics Advanced Econometrics Dr. Andrea Beccarini Center for Quantitative Economics Winter 2013/2014 Andrea Beccarini (CQE) Econometrics Winter 2013/2014 1 / 156 General information Aims and prerequisites Objective:

More information

Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics. Jiti Gao

Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics. Jiti Gao Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics Jiti Gao Department of Statistics School of Mathematics and Statistics The University of Western Australia Crawley

More information

Inference in Nonparametric Series Estimation with Data-Dependent Number of Series Terms

Inference in Nonparametric Series Estimation with Data-Dependent Number of Series Terms Inference in Nonparametric Series Estimation with Data-Dependent Number of Series Terms Byunghoon ang Department of Economics, University of Wisconsin-Madison First version December 9, 204; Revised November

More information

Econ 582 Nonparametric Regression

Econ 582 Nonparametric Regression Econ 582 Nonparametric Regression Eric Zivot May 28, 2013 Nonparametric Regression Sofarwehaveonlyconsideredlinearregressionmodels = x 0 β + [ x ]=0 [ x = x] =x 0 β = [ x = x] [ x = x] x = β The assume

More information

Econometrics II - EXAM Answer each question in separate sheets in three hours

Econometrics II - EXAM Answer each question in separate sheets in three hours Econometrics II - EXAM Answer each question in separate sheets in three hours. Let u and u be jointly Gaussian and independent of z in all the equations. a Investigate the identification of the following

More information

Density Estimation. Seungjin Choi

Density Estimation. Seungjin Choi Density Estimation Seungjin Choi Department of Computer Science and Engineering Pohang University of Science and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjin@postech.ac.kr http://mlg.postech.ac.kr/

More information

This chapter reviews properties of regression estimators and test statistics based on

This chapter reviews properties of regression estimators and test statistics based on Chapter 12 COINTEGRATING AND SPURIOUS REGRESSIONS This chapter reviews properties of regression estimators and test statistics based on the estimators when the regressors and regressant are difference

More information

Econometrics I KS. Module 2: Multivariate Linear Regression. Alexander Ahammer. This version: April 16, 2018

Econometrics I KS. Module 2: Multivariate Linear Regression. Alexander Ahammer. This version: April 16, 2018 Econometrics I KS Module 2: Multivariate Linear Regression Alexander Ahammer Department of Economics Johannes Kepler University of Linz This version: April 16, 2018 Alexander Ahammer (JKU) Module 2: Multivariate

More information

Supplemental Material for KERNEL-BASED INFERENCE IN TIME-VARYING COEFFICIENT COINTEGRATING REGRESSION. September 2017

Supplemental Material for KERNEL-BASED INFERENCE IN TIME-VARYING COEFFICIENT COINTEGRATING REGRESSION. September 2017 Supplemental Material for KERNEL-BASED INFERENCE IN TIME-VARYING COEFFICIENT COINTEGRATING REGRESSION By Degui Li, Peter C. B. Phillips, and Jiti Gao September 017 COWLES FOUNDATION DISCUSSION PAPER NO.

More information

Lecture 2: Consistency of M-estimators

Lecture 2: Consistency of M-estimators Lecture 2: Instructor: Deartment of Economics Stanford University Preared by Wenbo Zhou, Renmin University References Takeshi Amemiya, 1985, Advanced Econometrics, Harvard University Press Newey and McFadden,

More information

13 Endogeneity and Nonparametric IV

13 Endogeneity and Nonparametric IV 13 Endogeneity and Nonparametric IV 13.1 Nonparametric Endogeneity A nonparametric IV equation is Y i = g (X i ) + e i (1) E (e i j i ) = 0 In this model, some elements of X i are potentially endogenous,

More information

STA414/2104 Statistical Methods for Machine Learning II

STA414/2104 Statistical Methods for Machine Learning II STA414/2104 Statistical Methods for Machine Learning II Murat A. Erdogdu & David Duvenaud Department of Computer Science Department of Statistical Sciences Lecture 3 Slide credits: Russ Salakhutdinov Announcements

More information

FREQUENTIST BEHAVIOR OF FORMAL BAYESIAN INFERENCE

FREQUENTIST BEHAVIOR OF FORMAL BAYESIAN INFERENCE FREQUENTIST BEHAVIOR OF FORMAL BAYESIAN INFERENCE Donald A. Pierce Oregon State Univ (Emeritus), RERF Hiroshima (Retired), Oregon Health Sciences Univ (Adjunct) Ruggero Bellio Univ of Udine For Perugia

More information

Preliminaries The bootstrap Bias reduction Hypothesis tests Regression Confidence intervals Time series Final remark. Bootstrap inference

Preliminaries The bootstrap Bias reduction Hypothesis tests Regression Confidence intervals Time series Final remark. Bootstrap inference 1 / 172 Bootstrap inference Francisco Cribari-Neto Departamento de Estatística Universidade Federal de Pernambuco Recife / PE, Brazil email: cribari@gmail.com October 2014 2 / 172 Unpaid advertisement

More information

GARCH Models Estimation and Inference. Eduardo Rossi University of Pavia

GARCH Models Estimation and Inference. Eduardo Rossi University of Pavia GARCH Models Estimation and Inference Eduardo Rossi University of Pavia Likelihood function The procedure most often used in estimating θ 0 in ARCH models involves the maximization of a likelihood function

More information

Penalized Indirect Inference

Penalized Indirect Inference Penalized Indirect Inference Francisco Blasques a,b) and Artem Duplinskiy a,c) a) VU University Amsterdam b) Tinbergen Institute c) Coders Co. January 21, 218 Abstract Parameter estimates of structural

More information

Modelling Czech and Slovak labour markets: A DSGE model with labour frictions

Modelling Czech and Slovak labour markets: A DSGE model with labour frictions Modelling Czech and Slovak labour markets: A DSGE model with labour frictions Daniel Němec Faculty of Economics and Administrations Masaryk University Brno, Czech Republic nemecd@econ.muni.cz ESF MU (Brno)

More information

Introduction to Bayesian Inference

Introduction to Bayesian Inference University of Pennsylvania EABCN Training School May 10, 2016 Bayesian Inference Ingredients of Bayesian Analysis: Likelihood function p(y φ) Prior density p(φ) Marginal data density p(y ) = p(y φ)p(φ)dφ

More information

Confidence Intervals in Ridge Regression using Jackknife and Bootstrap Methods

Confidence Intervals in Ridge Regression using Jackknife and Bootstrap Methods Chapter 4 Confidence Intervals in Ridge Regression using Jackknife and Bootstrap Methods 4.1 Introduction It is now explicable that ridge regression estimator (here we take ordinary ridge estimator (ORE)

More information

What s New in Econometrics. Lecture 13

What s New in Econometrics. Lecture 13 What s New in Econometrics Lecture 13 Weak Instruments and Many Instruments Guido Imbens NBER Summer Institute, 2007 Outline 1. Introduction 2. Motivation 3. Weak Instruments 4. Many Weak) Instruments

More information

The Bootstrap: Theory and Applications. Biing-Shen Kuo National Chengchi University

The Bootstrap: Theory and Applications. Biing-Shen Kuo National Chengchi University The Bootstrap: Theory and Applications Biing-Shen Kuo National Chengchi University Motivation: Poor Asymptotic Approximation Most of statistical inference relies on asymptotic theory. Motivation: Poor

More information

An Encompassing Test for Non-Nested Quantile Regression Models

An Encompassing Test for Non-Nested Quantile Regression Models An Encompassing Test for Non-Nested Quantile Regression Models Chung-Ming Kuan Department of Finance National Taiwan University Hsin-Yi Lin Department of Economics National Chengchi University Abstract

More information

Time Series and Forecasting Lecture 4 NonLinear Time Series

Time Series and Forecasting Lecture 4 NonLinear Time Series Time Series and Forecasting Lecture 4 NonLinear Time Series Bruce E. Hansen Summer School in Economics and Econometrics University of Crete July 23-27, 2012 Bruce Hansen (University of Wisconsin) Foundations

More information