CCP Estimation. Robert A. Miller. Dynamic Discrete Choice. cemmap, March 2018.

1 CCP Estimation
Robert A. Miller
Dynamic Discrete Choice
cemmap, March 2018

2 Criteria for Evaluating Estimators: General principles to apply when assessing an estimator

Three criteria for evaluating different estimators are:
1 Large sample properties: Is the estimator consistent? If so, what is the rate of convergence? What is its asymptotic distribution?
2 Finite sample properties: At what sample size do the finite sample properties accurately reflect the asymptotic distribution? For a given sample size, what are the standard deviation and mean squared error of the estimator?
3 Implementation: Is the estimator defined by an algorithm, or, more weakly, only by a set of conditions it must satisfy? Are numerical approximations involved? Does the estimator require tuning parameters or instruments?

3 Maximum Likelihood Estimation: Data and outside knowledge

Suppose the data come from a long panel (either stationary, or complete panel histories for finite-lived agents). Also assume we know:
1 the discount factor β;
2 the distribution of the disturbances G_t(ε|x);
3 u_1t(x), or more generally one of the payoffs for each state and time;
4 u_1t(x) = 0 (for notational convenience).

Since the panel is long, p_t(x), and hence ψ_jt(x), are identified. There are, of course, alternative assumptions that deliver identification, and the methods described below are generic.

4 Maximum Likelihood Estimation: The likelihood

To simplify the notation, consider a sample of N independently drawn observations on the whole history t ∈ {1, ..., T} of individuals n ∈ {1, ..., N}, with state variables denoted by x_nt and decisions denoted by d_njt. The joint probability distribution of the decisions and outcomes is:

\prod_{n=1}^{N} \prod_{t=1}^{T} \prod_{j=1}^{J} \left[ p_{jt}(x_{nt}) \prod_{x'=1}^{X} f_{jt}(x'|x_{nt})^{I\{x_{n,t+1}=x'\}} \right]^{d_{njt}}

Taking logs yields:

\sum_{n=1}^{N} \sum_{t=1}^{T} \sum_{j=1}^{J} d_{njt} \left\{ \log[p_{jt}(x_{nt})] + \sum_{x'=1}^{X} I\{x_{n,t+1}=x'\} \log[f_{jt}(x'|x_{nt})] \right\}
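Because the likelihood factors into a choice term and a transition term, it is cheap to evaluate once the panel is coded as integer arrays. Below is a minimal Python sketch of this log-likelihood for a single period; the array layout (p[j, x] for the CCPs, f[j, x, x'] for the transitions) is an illustrative convention, not notation from the lecture.

    import numpy as np

    def panel_loglikelihood(x, d, x_next, p, f):
        """Joint log-likelihood of choices and transitions for one period,
        evaluated at candidate CCPs p[j, x] and transitions f[j, x, x'].
        x, d, x_next are integer-coded arrays of length N."""
        choice_part = np.log(p[d, x]).sum()              # sum of d_njt * log p_j(x_n)
        transition_part = np.log(f[d, x, x_next]).sum()  # sum of log f_j(x'_n | x_n)
        return choice_part + transition_part

    # Toy call with two states and two choices (numbers are arbitrary):
    p = np.full((2, 2), 0.5)
    f = np.full((2, 2, 2), 0.5)
    x = np.array([0, 1]); d = np.array([1, 0]); x_next = np.array([0, 0])
    ll = panel_loglikelihood(x, d, x_next, p, f)

The two parts illustrate the additive separability exploited on the next slide.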

5 Maximum Likelihood Estimation: The reduced form

Note the choice probabilities are additively separable from the transition probabilities in the formula for the joint distribution of decisions and outcomes. Hence the estimation of the joint likelihood splits into one piece dealing with the choice probabilities conditional on the state, and another dealing with the transitions conditional on the choice and the state. Maximizing each additive piece separately with respect to f_jt(x'|x) and p_jt(x), we obtain the unrestricted ML estimators:

\hat{f}_{jt}(x'|x) = \frac{\sum_{n=1}^{N} I\{x_{nt}=x,\, d_{njt}=1,\, x_{n,t+1}=x'\}}{\sum_{n=1}^{N} I\{x_{nt}=x,\, d_{njt}=1\}}

and:

\hat{p}_{jt}(x) = \frac{\sum_{n=1}^{N} I\{x_{nt}=x,\, d_{njt}=1\}}{\sum_{n=1}^{N} I\{x_{nt}=x\}}
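A sketch of these frequency (cell) estimators in Python, for one period, with states coded 0, ..., X−1 and choices coded 0, ..., J−1; the simulated data at the end are there only so the snippet runs.

    import numpy as np

    def cell_estimators(x, d, x_next, X, J):
        """Unrestricted ML (frequency) estimators of the CCPs and transition
        probabilities for one period, from N observations."""
        p_hat = np.zeros((J, X))        # p_hat[j, x]     = Pr(choose j | state x)
        f_hat = np.zeros((J, X, X))     # f_hat[j, x, x'] = Pr(x' next | x, choice j)
        for xval in range(X):
            in_x = (x == xval)
            n_x = in_x.sum()
            for j in range(J):
                chose_j = in_x & (d == j)
                n_xj = chose_j.sum()
                if n_x > 0:
                    p_hat[j, xval] = n_xj / n_x
                for xnext in range(X):
                    if n_xj > 0:
                        f_hat[j, xval, xnext] = (chose_j & (x_next == xnext)).sum() / n_xj
        return p_hat, f_hat

    # Illustrative use with arbitrary simulated data:
    rng = np.random.default_rng(0)
    N, X, J = 5000, 3, 2
    x = rng.integers(0, X, size=N)
    d = rng.integers(0, J, size=N)
    x_next = rng.integers(0, X, size=N)
    p_hat, f_hat = cell_estimators(x, d, x_next, X, J)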

6 Maximum Likelihood Estimation: Estimating an intermediate probability distribution

Following the notation of Lecture 3, let κ_jtτ(x_{t+τ+1}|x_t) denote the probability of reaching x_{t+τ+1} at t+τ+1 from x_t by taking action j at t and then always choosing the first action:

\kappa_{jt\tau}(x_{t+\tau+1}|x_t) = \begin{cases} f_{jt}(x_{t+1}|x_t) & \tau = 0 \\ \sum_{x=1}^{X} f_{1,t+\tau}(x_{t+\tau+1}|x)\, \kappa_{jt,\tau-1}(x|x_t) & \tau = 1, 2, \ldots \end{cases}

Thus we can recursively estimate κ_jtτ(x_{t+τ+1}|x_t) with:

\hat{\kappa}_{jt\tau}(x_{t+\tau+1}|x_t) = \begin{cases} \hat{f}_{jt}(x_{t+1}|x_t) & \tau = 0 \\ \sum_{x=1}^{X} \hat{f}_{1,t+\tau}(x_{t+\tau+1}|x)\, \hat{\kappa}_{jt,\tau-1}(x|x_t) & \tau = 1, 2, \ldots \end{cases}

Similarly we estimate ψ_jt(x_t) with ψ̂_jt(x_t), using the estimates p̂_jt(x) of the CCPs.
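In code the recursion is repeated multiplication by the estimated transition matrix of the first action. A minimal sketch, assuming stationary transitions stored as f_hat[j, x, x'] (an assumed layout, as in the previous block) with the first action coded 0:

    import numpy as np

    def kappa_hat(f_hat, j, tau_max):
        """kappa[tau][x_t, x'] estimates the probability of reaching x' at
        t + tau + 1 from x_t, choosing j at t and action 0 thereafter.
        Stationary transitions are assumed only to keep the sketch short."""
        kappa = [f_hat[j]]                            # tau = 0: one step with choice j
        for tau in range(1, tau_max + 1):
            kappa.append(kappa[tau - 1] @ f_hat[0])   # then always action 0
        return kappa

    # Illustrative call with a made-up transition array (J = 2, X = 3):
    f_hat = np.full((2, 3, 3), 1.0 / 3.0)
    k = kappa_hat(f_hat, j=1, tau_max=4)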

7 Maximum Likelihood Estimation: Unrestricted estimates of the primitives

From previous lectures:

u_{jt}(x_t) = \psi_{1t}(x_t) - \psi_{jt}(x_t) + \sum_{\tau=1}^{T-t} \sum_{x=1}^{X} \beta^{\tau} \psi_{1,t+\tau}(x) \left[ \kappa_{1t,\tau-1}(x|x_t) - \kappa_{jt,\tau-1}(x|x_t) \right]

Substituting κ̂_jt,τ−1(x|x_t) for κ_jt,τ−1(x|x_t) and ψ̂_jt(x_t) for ψ_jt(x_t) then yields:

\hat{u}_{jt}(x_t) \equiv \hat{\psi}_{1t}(x_t) - \hat{\psi}_{jt}(x_t) + \sum_{\tau=1}^{T-t} \sum_{x=1}^{X} \beta^{\tau} \hat{\psi}_{1,t+\tau}(x) \left[ \hat{\kappa}_{1t,\tau-1}(x|x_t) - \hat{\kappa}_{jt,\tau-1}(x|x_t) \right]

The stationary case is similar (and has the matrix representation we discussed in previous lectures).
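A sketch of this plug-in payoff estimator in the stationary case. The lecture leaves G general; the type 1 extreme value case is assumed here purely for concreteness, so that ψ_j(x) = γ − ln p_j(x), with the normalised first action coded 0 and a finite horizon passed in explicitly:

    import numpy as np

    GAMMA = 0.5772156649015329   # Euler's constant

    def u_hat(p_hat, f_hat, beta, horizon):
        """Unrestricted estimates of the flow payoffs u_j(x) from the CCP
        representation, in a stationary model with T1EV disturbances, where
        psi_j(x) = gamma - ln p_j(x) and u_0(x) is normalised to zero."""
        J, X = p_hat.shape
        psi = GAMMA - np.log(p_hat)                      # psi[j, x]
        u = np.zeros((J, X))
        for j in range(1, J):
            kappa_j, kappa_0 = [f_hat[j]], [f_hat[0]]    # tau = 0 terms
            for tau in range(1, horizon):
                kappa_j.append(kappa_j[-1] @ f_hat[0])
                kappa_0.append(kappa_0[-1] @ f_hat[0])
            u[j] = psi[0] - psi[j]
            for tau in range(1, horizon + 1):
                u[j] += beta**tau * (kappa_0[tau - 1] - kappa_j[tau - 1]) @ psi[0]
        return u

    # Illustrative call (uniform CCPs and transitions, purely for shape checking):
    p_hat = np.full((2, 3), 0.5)
    f_hat = np.full((2, 3, 3), 1.0 / 3.0)
    u = u_hat(p_hat, f_hat, beta=0.9, horizon=20)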

8 Large Sample or Asymptotic Properties: Asymptotic efficiency of the unrestricted ML estimator

By the Law of Large Numbers f̂_jt(x'|x) converges to f_jt(x'|x) and p̂_jt(x) converges to p_jt(x), both almost surely. By the Central Limit Theorem both estimators converge at rate √N and have asymptotic normal distributions.

Both f̂_jt(x'|x) and p̂_jt(x) are ML estimators for f_jt(x'|x) and p_jt(x) and attain the Cramér-Rao lower bound asymptotically. Since u_jt(x) is exactly identified, it follows by the invariance principle that û_jt(x) is consistent and asymptotically efficient for u_jt(x), also attaining its Cramér-Rao lower bound. The same properties apply to the stationary model.

Note that greater efficiency can only be obtained by making functional form assumptions about u_jt(x_t) and f_jt(x'|x). False restrictions, such as adopting convenient functional forms for the payoffs, typically create misspecification.

9 Maximum Likelihood Estimation: Restricted ML estimates of the primitives

In practice applications further restrict the parameter space. For example, assume θ ≡ (θ^(1), θ^(2)) ∈ Θ, a closed convex subset of Euclidean space, and:

u_{jt}(x) \equiv u_j(x, \theta^{(1)}), \qquad f_{jt}(x'|x_{nt}) \equiv f_{jt}(x'|x_{nt}, \theta^{(2)})

We now define the model by (T, β, θ, G). Assume the DGP comes from (T, β, θ_0, G), where θ_0 ∈ interior(Θ). The ML estimator, denoted by θ̂_ML, maximizes:

\sum_{n=1}^{N} \sum_{t=1}^{T} \sum_{j=1}^{J} d_{njt} \left\{ \ln[p_{jt}(x_{nt}, \theta)] + \sum_{x'=1}^{X} I\{x_{n,t+1}=x'\} \ln[f_{jt}(x'|x_{nt}, \theta^{(2)})] \right\}

over θ ∈ Θ, where p_{jt}(x, θ) are the CCPs for (T, β, θ, G).

10 Maximum Likelihood Estimation: A common variation on the ML estimator

A common variation on the ML estimator is:
1 estimate f_jt(x'|x_nt, θ^(2)) from the state transitions alone;
2 that is, obtain a limited information ML estimator θ̂^(2)_LIML;
3 estimate θ^(1) by searching over p_jt(x, θ^(1), θ̂^(2)_LIML).

More precisely, we define:

\hat{\theta}^{(2)}_{LIML} \equiv \arg\max_{\theta^{(2)}} \sum_{n=1}^{N} \sum_{t=1}^{T} \sum_{j=1}^{J} d_{njt} \sum_{x'=1}^{X} I\{x_{n,t+1}=x'\} \log[f_{jt}(x'|x_{nt}, \theta^{(2)})]

\tilde{\theta}^{(1)}_{ML} \equiv \arg\max_{\theta^{(1)}} \sum_{n=1}^{N} \sum_{t=1}^{T} \sum_{j=1}^{J} d_{njt} \log[p_{jt}(x_{nt}, \theta^{(1)}, \hat{\theta}^{(2)}_{LIML})]

Note that:
when θ^(2)_0, that is f_jt(x'|x_nt), is known, θ̃^(1)_ML = θ̂^(1)_ML;
otherwise θ̃^(1)_ML is less efficient but computationally simpler than θ̂^(1)_ML;
nevertheless both estimators solve for the optimal decision rule many times.

11 Quasi Maximum Likelihood Estimation: The steps (Hotz and Miller, 1993)

The essential difference between this estimator and ML is that it substitutes an estimator of the continuation value into the likelihood rather than computing it from the optimal policy function:
1 Estimate the reduced form p̂ and f̂ (or θ̂^(2)_LIML) as above.
2 Apply the Representation Theorem to obtain expressions for v_jt(x_t) − v_kt(x_t).
3 Substitute the reduced form estimates into these differences to obtain v̂_jt(x, θ^(1)) − v̂_kt(x, θ^(1)) for any given θ^(1).
4 Replace v_jt(x_t) with v̂_jt(x, θ^(1)) in the random utility model (RUM) to obtain an estimate p̂_jt(x, θ^(1)) for any given θ^(1).
5 Maximize the quasi-likelihood with respect to θ^(1).

In effect we estimate a static RUM in which differences in current utilities u_j(x, θ^(1)) − u_k(x, θ^(1)) are augmented by a dynamic correction factor estimated in the first stage off the reduced form.

12 Quasi Maximum Likelihood Estimation: Notes on QML estimation

In the second step, appealing to the Representation Theorem and the slides above:

\hat{v}_{jt}(x_t, \theta^{(1)}) - \hat{v}_{kt}(x_t, \theta^{(1)}) = u_j(x_t, \theta^{(1)}) - u_k(x_t, \theta^{(1)}) + \sum_{\tau=1}^{T-t} \sum_{x=1}^{X} \beta^{\tau} \hat{\psi}_{1,t+\tau}(x) \left[ \hat{\kappa}_{jt,\tau-1}(x|x_t) - \hat{\kappa}_{kt,\tau-1}(x|x_t) \right]

In the last two steps we define:

\hat{p}_{jt}(x_t, \theta^{(1)}) \equiv \int \prod_{k=1}^{J} I\left\{ \epsilon_{kt} - \epsilon_{jt} \le \hat{v}_{jt}(x_t, \theta^{(1)}) - \hat{v}_{kt}(x_t, \theta^{(1)}) \right\} dG_t(\epsilon_t | x_t)

and:

\hat{\theta}^{(1)}_{QML} \equiv \arg\max_{\theta^{(1)}} \sum_{n=1}^{N} \sum_{t=1}^{T} \sum_{j=1}^{J} d_{njt} \log[\hat{p}_{jt}(x_{nt}, \theta^{(1)}, \hat{\theta}^{(2)}_{LIML})]
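Under the type 1 extreme value assumption the integral defining p̂_jt(x, θ^(1)) has a closed logit form, so the quasi-likelihood is easy to code once the first-stage dynamic correction has been computed. A sketch assuming a stationary model with utility linear in the parameters; design, D_hat and the data arrays are hypothetical names, with choice 0 the normalised alternative:

    import numpy as np
    from scipy.optimize import minimize

    def qml_objective(theta, x, d, D_hat, design):
        """Negative quasi-log-likelihood for the CCP (Hotz-Miller style)
        estimator with T1EV disturbances.  design[j, x, :] holds the
        regressors of u_j(x, theta) (a linear-in-parameters specification,
        assumed for illustration); D_hat[j, x] is the first-stage dynamic
        correction, with D_hat[0] = 0 and design[0] = 0."""
        v = design @ theta + D_hat        # v[j, x] = u_j(x, theta) + D_hat[j, x]
        logp = v - np.log(np.exp(v).sum(axis=0, keepdims=True))   # logit CCPs
        return -logp[d, x].sum()          # minus the quasi-log-likelihood

    # Hypothetical call (data and D_hat come from the first stage):
    # theta_qml = minimize(qml_objective, np.zeros(K), args=(x, d, D_hat, design)).x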

13 Quasi Maximum Likelihood Estimation: Exploiting finite dependence in QML estimation

Supposing there is ρ-period dependence, we could form:

\tilde{v}_{jt}(x_t, \theta^{(1)}) - \tilde{v}_{it}(x_t, \theta^{(1)}) = u_j(x_t, \theta^{(1)}) - u_i(x_t, \theta^{(1)}) + \sum_{\tau=1}^{\rho} \sum_{(k, x_{t+\tau})}^{(J,X)} \beta^{\tau} \left[ u_k(x_{t+\tau}, \theta^{(1)}) + \hat{\psi}_{k,t+\tau}(x_{t+\tau}) \right] \left[ \omega_{jkt\tau}(x_t, x_{t+\tau})\, \hat{\kappa}_{jt,\tau-1}(x_{t+\tau}|x_t) - \omega_{ikt\tau}(x_t, x_{t+\tau})\, \hat{\kappa}_{it,\tau-1}(x_{t+\tau}|x_t) \right]

Then p̃_jt(x, θ^(1)) is formed in an analogous manner to p̂_jt(x, θ^(1)), and the last maximization step is defined in a similar way.

Note this estimator does not have exactly the same interpretation as θ̂^(1)_QML, because the dynamic selection correction involves the current utility parameters.

14 Quasi Maximum Likelihood Estimation: Adjusting the asymptotic covariance for pre-estimation (Hotz and Miller, 1993)

Form P(θ^(1), p, f), a mapping from Θ^(1) × P × F to P, with:

\kappa_{jt\tau}(x_{t+\tau+1}|x_t) = \begin{cases} f_{jt}(x_{t+1}|x_t) & \tau = 0 \\ \sum_{x=1}^{X} f_{1,t+\tau}(x_{t+\tau+1}|x)\, \kappa_{jt,\tau-1}(x|x_t) & \tau = 1, 2, \ldots \end{cases}

v_{jt}(x_t, \theta^{(1)}) - v_{kt}(x_t, \theta^{(1)}) = u_j(x_t, \theta^{(1)}) - u_k(x_t, \theta^{(1)}) + \sum_{\tau=1}^{T-t} \sum_{x=1}^{X} \beta^{\tau} \psi_{1,t+\tau}(x) \left[ \kappa_{jt,\tau-1}(x|x_t) - \kappa_{kt,\tau-1}(x|x_t) \right]

p_{jt}(x_t, \theta^{(1)}) \equiv \int \prod_{k=1}^{J} I\left\{ \epsilon_{kt} - \epsilon_{jt} \le v_{jt}(x_t, \theta^{(1)}) - v_{kt}(x_t, \theta^{(1)}) \right\} g_t(\epsilon_t|x_t)\, d\epsilon_t

15 Quasi Maximum Likelihood Estimation: Adjusting the asymptotic covariance for pre-estimation

Let:

\pi_{1n}(\theta^{(1)}, p, f) = W_N z_n \left[ d_n - P(\theta^{(1)}, p, f) \right]

where W_N is a weighting matrix and z_n are instruments. Define the CCP estimator θ̂^(1)_CCP by solving:

\sum_{n=1}^{N} \pi_{1n}(\theta^{(1)}, \hat{p}, \hat{f}) = 0

16 Large Sample or Asymptotic Properties: The asymptotic covariance matrix (Newey, 1984)

Write the cell estimators as the solution to:

0 = \sum_{n=1}^{N} \pi_{2n}(p, f) = \sum_{n=1}^{N} \begin{pmatrix} I^{d}_{n}(d_n - p) \\ I^{f}_{n}(f_n - f) \end{pmatrix}

where:
I^d_n is a (J−1)XT dimensional row-vector indicator function, matching the state variables of n to the relevant CCP components in p;
I^f_n is a (J−1)X²T dimensional row-vector indicator function, matching the state variables and decisions of n to the components of f;
f_n is the outcome from n making a choice given her state variables.

For k ∈ {1, 2} and k' ∈ {1, 2} define:

\Omega_{kk'} \equiv E\left[ \pi_{kn} \pi'_{k'n} \right], \qquad \Gamma_{11} \equiv E\left[ \partial \pi_{1n} / \partial \theta^{(1)\prime} \right], \qquad \Gamma_{12} \equiv E\left[ \partial \pi_{1n} / \partial (p, f)' \right]

Then the asymptotic covariance matrix for θ̂^(1)_CCP, denoted by Σ_1, is:

\Sigma_1 = \Gamma_{11}^{-1} \left[ \Omega_{11} + \Gamma_{12} \Omega_{22} \Gamma_{12}' - \Gamma_{12} \Omega_{21} - \Omega_{12} \Gamma_{12}' \right] \left( \Gamma_{11}^{-1} \right)'
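A direct translation of this covariance formula into Python, taking the moment contributions and derivative matrices as inputs; the shapes are illustrative and no attempt is made to construct Γ_11 and Γ_12, which depend on the model:

    import numpy as np

    def ccp_sandwich_covariance(pi1, pi2, Gamma11, Gamma12):
        """Asymptotic covariance of the CCP estimator, adjusted for the
        first-stage (cell) estimation of (p, f) in the spirit of Newey (1984).
        pi1: N x K1 second-stage moment contributions pi_1n
        pi2: N x K2 first-stage moment contributions pi_2n
        Gamma11: K1 x K1 derivative of E[pi_1n] w.r.t. theta^(1)
        Gamma12: K1 x K2 derivative of E[pi_1n] w.r.t. (p, f)"""
        N = pi1.shape[0]
        Omega11 = pi1.T @ pi1 / N
        Omega22 = pi2.T @ pi2 / N
        Omega12 = pi1.T @ pi2 / N
        Omega21 = Omega12.T
        G11inv = np.linalg.inv(Gamma11)
        middle = (Omega11
                  + Gamma12 @ Omega22 @ Gamma12.T
                  - Gamma12 @ Omega21
                  - Omega12 @ Gamma12.T)
        return G11inv @ middle @ G11inv.T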

17 Minimum Distance Estimators: Minimizing the difference between unrestricted and restricted current payoffs

Another approach is to match the parametrization of u_jt(x_t), denoted by u_jt(x_t, θ^(1)), to its CCP representation as closely as possible:
1 Form the vector function Ψ(p, f) by stacking:

\Psi_{jt}(x_t, p, f) \equiv \psi_{1t}(x_t) - \psi_{jt}(x_t) + \sum_{\tau=1}^{T-t} \sum_{x=1}^{X} \beta^{\tau} \psi_{1,t+\tau}(x) \left[ \kappa_{1t,\tau-1}(x|x_t) - \kappa_{jt,\tau-1}(x|x_t) \right]

2 Estimate the reduced form p̂ and f̂.
3 Minimize the quadratic form to obtain:

\hat{\theta}^{(1)}_{MD} = \arg\min_{\theta^{(1)} \in \Theta^{(1)}} \left[ u(x, \theta^{(1)}) - \Psi(\hat{p}, \hat{f}) \right]' W \left[ u(x, \theta^{(1)}) - \Psi(\hat{p}, \hat{f}) \right]
                        = \arg\min_{\theta^{(1)} \in \Theta^{(1)}} \left\{ u(x, \theta^{(1)})' W u(x, \theta^{(1)}) - 2 \Psi(\hat{p}, \hat{f})' W u(x, \theta^{(1)}) \right\}

where W is a square (J−1)TX weighting matrix.
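When the stacked payoff vector is linear in θ^(1), say u = Zθ^(1) for some design matrix Z, the minimiser has the usual GLS-like closed form. A sketch in which Z, Psi_hat and W are assumed inputs:

    import numpy as np

    def md_estimator_linear(Z, Psi_hat, W=None):
        """Closed-form minimum distance estimator when the stacked payoffs are
        linear in the parameters, u = Z @ theta.  Z is M x K with
        M = (J-1)*T*X stacked (j, t, x) cells, Psi_hat is the stacked CCP
        representation from the first stage, and W is an M x M weighting
        matrix (identity if omitted)."""
        if W is None:
            W = np.eye(Z.shape[0])
        return np.linalg.solve(Z.T @ W @ Z, Z.T @ W @ Psi_hat)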

18 Minimum Distance Estimators: Notes on minimizing the difference between unrestricted and restricted payoffs

From the Representation Theorem, u_jt(x_t, θ^(1)_0) = Ψ_jt(x_t, p, f_0) if p are the CCPs for (T, β, θ_0, G). Furthermore, u_jt(x) is exactly identified from Ψ_jt(x, p, f_0) without imposing any additional restrictions. Therefore parameterizing u with θ^(1) imposes overidentifying restrictions, so θ̂^(1)_MD is consistent if the restrictions are true.

Note θ̂^(1)_MD has a closed form if u(x, θ^(1)) is linear in θ^(1).

19 Minimum Distance Estimators: A minimum distance estimator that exploits finite dependence (Altug and Miller, 1998)

We can adapt this estimator by exploiting finite dependence. Suppose utility is stable over time, with u_jt(x, θ^(1)) = u_j(x, θ^(1)), and that the weights ω_iktτ(x_t, x_{t+τ}) achieve ρ-period dependence for all K choices at each x and t:
1 Form the K(T−ρ)X utility vector Λ(θ^(1), p̂, f̂) from:

\Lambda_{ijt}(x_t, \theta^{(1)}, \hat{p}, \hat{f}) = u_j(x_t, \theta^{(1)}) - u_i(x_t, \theta^{(1)}) + \hat{\psi}_{jt}(x_t) - \hat{\psi}_{it}(x_t) + \sum_{\tau=1}^{\rho} \sum_{(k, x_{t+\tau})}^{(J,X)} \beta^{\tau} \left[ u_k(x_{t+\tau}, \theta^{(1)}) + \hat{\psi}_{k,t+\tau}(x_{t+\tau}) \right] \left[ \omega_{jkt\tau}(x_t, x_{t+\tau})\, \hat{\kappa}_{jt,\tau-1}(x_{t+\tau}|x_t) - \omega_{ikt\tau}(x_t, x_{t+\tau})\, \hat{\kappa}_{it,\tau-1}(x_{t+\tau}|x_t) \right]

2 Given a square K(T−ρ)X weighting matrix W, minimize:

\Lambda(\theta^{(1)}, \hat{f}, \hat{p})'\, W\, \Lambda(\theta^{(1)}, \hat{f}, \hat{p})

20 Simulated Moments Estimators: A simulated moments estimator (Hotz, Miller, Sanders and Smith, 1994)

We could form a Method of Simulated Moments (MSM) estimator as follows:
1 Simulate a lifetime path from x_{nt_n} onwards for each j, using f̂ and p̂.
2 Obtain estimates of Ê[ε_jt | d_jt = 1, x_t].
3 Stitch together a simulated lifetime utility outcome from the j-th choice at t_n onwards for n, denoted v̂_nj ≡ v̂_{jt_n}(x_{nt_n}; θ^(1), f̂, p̂).
4 Form the (J−1)-dimensional vector h_n(x_{nt_n}; θ^(1), f̂, p̂) from:

h_{nj}(x_{nt_n}; \theta^{(1)}, \hat{f}, \hat{p}) \equiv \hat{v}_{jt_n}(x_{nt_n}, \theta^{(1)}, \hat{f}, \hat{p}) - \hat{v}_{Jt_n}(x_{nt_n}, \theta^{(1)}, \hat{f}, \hat{p}) + \hat{\psi}_{jt_n}(x_{nt_n}) - \hat{\psi}_{Jt_n}(x_{nt_n})

5 Given a weighting matrix W_S and an instrument vector z_n, minimize:

\left[ N^{-1} \sum_{n=1}^{N} z_n h_n(x_{nt_n}; \theta^{(1)}, \hat{f}, \hat{p}) \right]' W_S \left[ N^{-1} \sum_{n=1}^{N} z_n h_n(x_{nt_n}; \theta^{(1)}, \hat{f}, \hat{p}) \right]
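A sketch of the resulting criterion in Python. The slide leaves the stacking of z_n and h_n implicit; interacting every instrument with every element of h_n, as below, is one natural choice and is an assumption of this sketch:

    import numpy as np

    def msm_objective(theta, z, h_fn, W_S):
        """Simulated method of moments criterion: a quadratic form in the
        sample average of the instrumented simulated moments.
        z is N x L; h_fn(theta) must return an N x (J-1) array of simulated
        moments; W_S is an L*(J-1) square weighting matrix."""
        h = h_fn(theta)
        g = (z[:, :, None] * h[:, None, :]).reshape(len(z), -1).mean(axis=0)
        return g @ W_S @ g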

21 Simulated Moments Estimators: Notes on this MSM estimator

In the first step, given the state, simulate a choice using p̂ and then simulate the next state using f̂. In this way generate x̂_ns and d̂_ns ≡ (d̂_n1s, ..., d̂_nJs) for all s ∈ {t_n + 1, ..., T}. Generating this path does not exploit knowledge of G, only the CCPs.

In the second step:

\hat{E}[\epsilon_{jt} \mid d_{jt} = 1, x_t] \equiv \hat{p}_{jt}(x_t)^{-1} \int \epsilon_{jt} \prod_{k=1}^{J} I\left\{ \hat{\psi}_{jt}(x_t) - \hat{\psi}_{kt}(x_t) \le \epsilon_{jt} - \epsilon_{kt} \right\} g(\epsilon_t)\, d\epsilon_t

In Step 4, v̂_jt(x_{nt_n}, θ^(1), f̂, p̂) is stitched together as:

u_{jt}(x_{nt_n}, \theta^{(1)}) + \sum_{s=t+1}^{T} \sum_{j'=1}^{J} \beta^{s-t}\, 1\{\hat{d}_{nj's} = 1\} \left\{ u_{j's}(\hat{x}_{ns}, \theta^{(1)}) + \hat{E}[\epsilon_{j's} \mid \hat{x}_{ns}, \hat{d}_{nj's} = 1] \right\}

The solution has a closed form if u_jt(x, θ^(1)) is linear in θ^(1).
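A sketch of the forward simulation that produces one stitched value. A stationary model is assumed only to keep the snippet short, and u_theta and e_hat (the flow payoffs at a trial θ^(1) and the estimates Ê[ε_j | d_j = 1, x]) are assumed inputs:

    import numpy as np

    def simulate_lifetime_value(x0, j0, t0, T, beta, p_hat, f_hat, u_theta, e_hat, rng):
        """One simulated discounted payoff from choosing j0 in state x0 at t0
        and then following the estimated CCPs p_hat, with states drawn from
        the estimated transitions f_hat[j, x, x']."""
        value = u_theta[j0, x0]
        x = rng.choice(f_hat.shape[2], p=f_hat[j0, x0])      # draw the next state
        for s in range(t0 + 1, T + 1):
            j = rng.choice(p_hat.shape[0], p=p_hat[:, x])    # draw a choice from the CCPs
            value += beta ** (s - t0) * (u_theta[j, x] + e_hat[j, x])
            x = rng.choice(f_hat.shape[2], p=f_hat[j, x])    # draw the next state
        return value

    # Illustrative call with uniform CCPs and transitions:
    rng = np.random.default_rng(0)
    p_hat = np.full((2, 3), 0.5)
    f_hat = np.full((2, 3, 3), 1.0 / 3.0)
    u_theta = np.zeros((2, 3)); e_hat = np.zeros((2, 3))
    v_sim = simulate_lifetime_value(0, 1, 1, 10, 0.9, p_hat, f_hat, u_theta, e_hat, rng)

Averaging such draws over many simulated paths per observation approximates the stitched value in Step 3.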

22 Simulated Moments Estimators: Another MSM estimator

Indeed, ε_t could be simulated as well:
1 Draw a realization ε̂_ns from G(ε) for each s ∈ {t_n, ..., T} and each n.
2 Set:

\hat{d}_{njs} = \prod_{k=1}^{J} I\left\{ \hat{\psi}_{js}(\hat{x}_{ns}) - \hat{\psi}_{ks}(\hat{x}_{ns}) \le \hat{\epsilon}_{njs} - \hat{\epsilon}_{nks} \right\}

and stitch together:

u_{jt}(x_{nt_n}, \theta^{(1)}) + \sum_{s=t+1}^{T} \sum_{j'=1}^{J} \beta^{s-t}\, 1\{\hat{d}_{nj's} = 1\} \left\{ u_{j's}(\hat{x}_{ns}, \theta^{(1)}) + \hat{\epsilon}_{nj's} \right\}

3 Minimize an analogous quadratic form to obtain θ̃^(1).

Bajari, Benkard and Levin (2007) estimate an approximate reduced form of the policy function without exploiting the CCPs, but acknowledge: "Our method requires that one be able to consistently estimate each firm's policy function, so this may limit our ability to estimate certain models" (page 1345).

23 Large Sample or Asymptotic Properties: Adjusting the asymptotic covariance for simulation as well (Pakes and Pollard, 1989)

Simulation adds an additional, independent source of variation to the sample moments, and hence to the estimated asymptotic standard errors. Following the definition given in Lecture 7, suppose θ̂^(1) minimizes:

\left[ N^{-1} \sum_{n=1}^{N} z_n h_n(x_{nt_n}, \theta^{(1)}, \hat{f}, \hat{p}) \right]' W_S \left[ N^{-1} \sum_{n=1}^{N} z_n h_n(x_{nt_n}, \theta^{(1)}, \hat{f}, \hat{p}) \right]

Then the additional component of the covariance matrix for θ̂^(1) is:

\Sigma^{S}_{1} \equiv S^{-1} \left( \Upsilon' W_S \Upsilon \right)^{-1} \Upsilon' W_S\, E\left[ z_n h_n h_n' z_n' \right] W_S \Upsilon \left( \Upsilon' W_S \Upsilon \right)^{-1}

where S is the number of simulations per observation and:

\Upsilon \equiv E\left[ z_n\, \partial h_n(x_{nt_n}, \theta^{(1)}_0, f_0, p_0) / \partial \theta^{(1)\prime} \right]

Note that Σ^S_1 → 0 as S → ∞.
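A sketch translating the correction term into code, with the simulated moment contributions z_n h_n already stacked into one N x M array; the argument names are illustrative:

    import numpy as np

    def simulation_variance_component(Upsilon, W_S, zh, S):
        """Additional covariance term from using S simulation draws per
        observation, following the formula on the slide.  zh is an N x M
        array of the simulated moment contributions z_n h_n, Upsilon is the
        M x K derivative matrix, and W_S the M x M weighting matrix."""
        N = zh.shape[0]
        meat = zh.T @ zh / N                        # estimates E[z_n h_n h_n' z_n']
        bread = np.linalg.inv(Upsilon.T @ W_S @ Upsilon) @ Upsilon.T @ W_S
        return (bread @ meat @ bread.T) / S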

24 Asymptotic Efficiency: The Newton-Raphson algorithm

Recall that for any extremum estimator for a problem satisfying standard regularity conditions:

\theta^{(i+1)} = \theta^{(i)} - \left[ \partial^2 Q_N(\theta^{(i)}) / \partial\theta\, \partial\theta' \right]^{-1} \left[ \partial Q_N(\theta^{(i)}) / \partial\theta \right]

where N indexes the sample, θ is the parameter value, and Q_N(θ) is the criterion function associated with the extremum estimator.

This algorithm converges to the maximizer if the criterion function is strictly concave, and/or if θ^(i) is close enough to the maximum. The algorithm is based on the quadratic approximation:

Q_N(\theta) \approx Q_N(\theta^{(i)}) + \left[ \partial Q_N(\theta^{(i)}) / \partial\theta \right]' (\theta - \theta^{(i)}) + \frac{1}{2} (\theta - \theta^{(i)})' \left[ \partial^2 Q_N(\theta^{(i)}) / \partial\theta\, \partial\theta' \right] (\theta - \theta^{(i)})
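The update itself is a one-liner; the toy quadratic criterion at the end only illustrates that a single step lands exactly on the maximiser when the criterion is quadratic:

    import numpy as np

    def newton_raphson_step(theta, grad, hess):
        """One Newton-Raphson update for an extremum estimator:
        theta_next = theta - H(theta)^{-1} g(theta), where g and H are the
        gradient and Hessian of the sample criterion Q_N."""
        return theta - np.linalg.solve(hess(theta), grad(theta))

    # Toy criterion Q(theta) = -(theta - 1)^2, maximised at theta = 1:
    grad = lambda th: np.array([-2.0 * (th[0] - 1.0)])
    hess = lambda th: np.array([[-2.0]])
    theta_next = newton_raphson_step(np.array([0.0]), grad, hess)   # equals [1.0]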

25 Asymptotic Efficiency: Iterating one step (Amemiya, 1985)

Suppose θ̂^(1) is a √N-consistent estimator of the (interior) true value θ_0 ∈ Θ, the parameter space, and let θ̂^(2) denote the estimator obtained by taking one Newton-Raphson step from θ̂^(1). Then it is well known that θ̂^(2) has the same asymptotic properties as θ̂^(∞), the limit of the sequence, namely:

\sqrt{N}\left( \hat{\theta}^{(2)} - \theta_0 \right) \sim_{a.d.} -\left[ \text{plim}\; N^{-1}\, \partial^2 Q_N(\hat{\theta}^{(1)}) / \partial\theta\, \partial\theta' \right]^{-1} N^{-1/2}\, \partial Q_N(\hat{\theta}^{(1)}) / \partial\theta

Specializing to Q_N(θ) = log L_N(θ), the log likelihood, θ̂^(2) is asymptotically efficient, and √N(θ̂^(2) − θ_0) is asymptotically normal with mean zero and covariance:

-\left\{ \lim E\left[ N^{-1}\, \partial^2 \log L_N(\theta_0) / \partial\theta\, \partial\theta' \right] \right\}^{-1}

26 Asymptotic Efficiency: Achieving asymptotic efficiency from a CCP estimator (Aguirregabiria and Mira, 2002)

This general principle can be applied to dynamic discrete choice models:
1 Estimate the (unrestricted) CCPs.
2 Use a CCP estimator to obtain θ̂^(1), the parameters characterizing the primitives.
3 Solve for the CCPs as a mapping of θ̂^(1) and the state variables.
4 Take one Newton-Raphson step on the likelihood to obtain θ̂^(2).

Note that this procedure asymptotically guarantees the global optimum is selected. Starting out with a trial guess θ^(0) and then updating, the traditional way of implementing ML, reaches a local (not necessarily global) optimum at best ("overshooting" cannot be ruled out).

27 Summary: Factors to consider when selecting an estimator

There is a trade-off between efficiency and computational ease:
1 Asymptotic efficiency: ML and the two-step CCP/Newton estimators are the most efficient.
2 Small sample properties: Simulation induces additional variation that may increase the number of observations required to approximate the asymptotic distribution.
3 Computational ease: MD with utility linear in the parameters has a closed form; ML is the most burdensome and typically requires numerical approximations.

Finally, there is an open question about how well these estimators perform when the model is misspecified.
