Econometrics III: Problem Set # 2 Single Agent Dynamic Discrete Choice

Holger Sieg
Carnegie Mellon University

INSTRUCTIONS: This problem set was originally created by Ron Goettler. The objective of this problem set is to estimate a machine replacement problem (similar to Rust 1987) using Rust's nested fixed-point algorithm and using Hotz and Miller's conditional choice probability method. Please answer the following questions and hand in the solution by Monday, Feb 25, 2008.

THE MODEL: Firms each use one machine to produce output in each period. The machines age, becoming more likely to break down, and in each period the firms have the option of replacing them. Let a_t be the age of the machine at time t and let the expected current-period payoff from using a machine of age a_t be given by

    Π(a_t, i_t, ε_0t, ε_1t) = θ_1 a_t + ε_0t   if i_t = 0,
                              R + ε_1t          if i_t = 1,

where i_t = 1 if the firm decides to replace the machine at t, R is the net cost of a new machine, and the ε_t are time-specific shocks to the payoffs from replacing and not replacing. Assume the ε_t are i.i.d. logit errors (i.e., type 1 extreme value). This payoff function may be interpreted as a normalized profit function or as a cost function. Assume a simple state evolution equation for age:

    a_{t+1} = min{5, a_t + 1}   if i_t = 0,
              1                  if i_t = 1.

That is, if not replaced, the machine ages by one year, up to a maximum of 5 years; after 5 years the machine's age is fixed at 5 until replacement. If replaced in the current year, the machine's age next year is 1. Hence, a_t takes one of five possible values: 1, 2, 3, 4, or 5.
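The short Python sketch below restates the primitives just described (the deterministic part of the payoff and the age transition). It is only meant to fix notation; the function names are illustrative and not part of the problem set.

```python
def flow_payoff(a, i, theta1, R):
    """Deterministic part of the period payoff: theta1 * a_t if the machine is kept (i = 0),
    R if it is replaced (i = 1); the logit shocks eps_0t, eps_1t are added on top."""
    return theta1 * a if i == 0 else R

def next_age(a, i):
    """Age transition: a' = min(5, a + 1) if kept, a' = 1 if replaced."""
    return min(5, a + 1) if i == 0 else 1
```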

Part A: Rust's Nested Fixed Point Algorithm

1. Write down the sequence problem for a firm maximizing the expected discounted value (EDV) of future payoffs (assume an infinite horizon).

2. Write down Bellman's equation (the functional equation) for the value function of the firm. Use the alternative-specific value functions method from lecture, i.e., define V_0(a_t) and V_1(a_t), where V_0(a_t) is the firm's value at time t if it chooses not to replace the machine and V_1(a_t) is the firm's value at time t if it replaces the machine. The functional equation should be a mapping from this pair of equations to itself. (For the parts that follow, remember from class that there are at least two ways to write the functional equation, and that one of them will be easier to use than the other.)

3. On the computer, write a procedure that solves this dynamic programming problem given values of the parameters (θ_1, R). Assume β = 0.9. Your procedure should iterate the contraction mapping on the two alternative-specific value functions. Each of these functions is represented by a five-element vector, since there are five values of a. Iterate until the V functions do not change very much (say, by less than 10^{-6}). Remember that given the logit error assumption there is an analytic solution to the expectation of the max in these equations, and that Euler's constant is approximately 0.5772. (You could also iterate on the smooth value function V(a_t) that is not alternative-specific, and then derive V_0(a_t) and V_1(a_t) from V(a_t) and Π.) A sketch of such a solver, together with the likelihood routine of part 5, appears at the end of Part A.

4. Solve the model for the parameters (θ_1 = 1, R = 3). Suppose a_t = 2. Will the firm replace its machine in period t? For what value of ε_0t − ε_1t is the firm indifferent between replacing its machine or not? What is the probability (to an econometrician who doesn't observe the ε's) that this firm will replace its machine? What is the EDV of future payoffs for a firm at state {a_t = 4, ε_0t = 1, ε_1t = 1.5}? (Remember that the constant term has been normalized out, so this EDV could be negative.)

5. Download the data for this problem set from the course web page. The dataset has two columns: a_t and i_t. Consider this to be cross-sectional data (i.e., there is only one data point per firm). Estimate (θ_1, R) using maximum likelihood. The best way to do this is to write a function that returns ln(L), where L is the likelihood function,

and then use a package (e.g., Gauss or Matlab) to maximize this function (or minimize the negative log-likelihood). Also compute (asymptotic) standard errors. The function itself should follow these steps:

(a) Start with arbitrary (θ_1, R). That is, these should be the arguments of the function.

(b) Solve the dynamic programming problem given these parameters. That is, compute the functions V_0(a_t) and V_1(a_t) using your procedure from part 3.

(c) Using V_0(a_t) and V_1(a_t), compute the probability of replacement for each possible a_t.

(d) Compute the likelihood of each observation using the above probabilities. Form the log-likelihood function by summing the logs of the likelihoods across observations (i.e., assume that each data point is an independent observation).

(e) Return ln(L) for a maximization routine, or −ln(L) for a minimization routine.

6. Describe (you do NOT have to do this on the computer) how you would need to change your model (either the dynamic programming problem, the estimation procedure, or both) to accommodate the following perturbations:

(a) Consider an alternative empirical model. Suppose firms differ in their value of θ_1. Proportion α of firms have θ_1 = θ_1A, and proportion (1 − α) have θ_1 = θ_1B. How would you change both the dynamic programming problem and the likelihood function?

(b) What if you had the model in (a) but you have panel data (i.e., multiple observations on each firm)? Write down the likelihood function.

(c) What if, instead of firms differing in θ_1, machines differ in θ_1 (i.e., when a firm replaces an old machine, the new machine may have θ_1 = θ_1A with probability α, or θ_1 = θ_1B with probability 1 − α)? Now how would you need to change your program (go back to the original data with one observation per firm)?
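As a point of reference for parts 3 and 5, here is a minimal Python sketch of the nested fixed-point computation under the stated assumptions (β = 0.9, logit errors, five age states). It is only an illustrative sketch, not the required solution; the data file name and the starting values in the usage comment are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

AGES = np.arange(1, 6)   # machine ages 1..5
BETA = 0.9               # discount factor given in the problem
GAMMA = 0.5772           # Euler's constant

def solve_dp(theta1, R, tol=1e-6):
    """Iterate the contraction on the alternative-specific value functions V0 (keep), V1 (replace)."""
    u0 = theta1 * AGES           # flow payoff if the machine is kept
    u1 = R * np.ones(5)          # flow payoff if the machine is replaced
    V0, V1 = np.zeros(5), np.zeros(5)
    while True:
        # with logit errors, E[max(V0 + e0, V1 + e1)] = gamma + ln(exp(V0) + exp(V1))
        emax = GAMMA + np.log(np.exp(V0) + np.exp(V1))
        V0_new = u0 + BETA * emax[np.minimum(np.arange(5) + 1, 4)]  # keep: age -> min(a + 1, 5)
        V1_new = u1 + BETA * emax[0]                                # replace: age -> 1
        if max(np.abs(V0_new - V0).max(), np.abs(V1_new - V1).max()) < tol:
            return V0_new, V1_new
        V0, V1 = V0_new, V1_new

def neg_log_likelihood(params, a_obs, i_obs):
    """-ln(L) of the cross-sectional (a_t, i_t) data, for a minimization routine."""
    theta1, R = params
    V0, V1 = solve_dp(theta1, R)
    p_replace = np.exp(V1) / (np.exp(V0) + np.exp(V1))    # Pr(i_t = 1 | a_t) for a_t = 1..5
    p_obs = np.where(i_obs == 1, p_replace[a_obs - 1], 1.0 - p_replace[a_obs - 1])
    return -np.sum(np.log(p_obs))

# Hypothetical usage once the data are downloaded (file name and starting values are placeholders):
# data = np.loadtxt("ps2data.txt"); a_obs, i_obs = data[:, 0].astype(int), data[:, 1].astype(int)
# fit = minimize(neg_log_likelihood, x0=[1.0, 3.0], args=(a_obs, i_obs), method="Nelder-Mead")
```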

Part B: Hotz-Miller Algorithm

1. Using the data, calculate the machine replacement probabilities at each state a_t using the average replacement rate in the sample ("cell averages"). These estimates are nonparametric estimates of the replacement probabilities. Also calculate the standard errors of your estimates (using the Bernoulli formula from first-year statistics).

2. Define the normalized conditional value function as Ψ_1(a_t) = V_1(a_t) − V_0(a_t). Write down the Hotz and Miller inversion formula for this model. That is, write down the formula for Ψ_1(a_t) as a function of the conditional choice probabilities. Calculate the normalized conditional value functions using the choice probabilities you estimated above.

3. Write down the current-period utility (i.e., payoff) for each choice in terms of the parameters and the current state. Call these u_0 and u_1. Define w_kt(a_t) = E[ε_kt | a_t, d_kt = 1] for k ∈ {0, 1}. In the special case of the logit, w_kt = γ − ln(P_kt), where γ ≈ 0.5772 is Euler's constant. (You can work this out for yourself mathematically, but it is tedious.) What is the formula for expected utility at time t, conditional on reaching state a_t, as a function of the current-period utilities (u_0, u_1), the conditional choice probabilities Pr(i_t = 1 | a_t), and w_0t and w_1t? Call the vector of expected utilities (a vector across states a_t) u.

4. Construct the two 5x5 conditional transition matrices F_0 and F_1, which give the transition probabilities conditional on the firm's {0, 1} replacement choice. Write them down.

5. Using the estimated conditional choice probabilities and the F_0 and F_1 matrices above, calculate the 5x5 unconditional transition matrix of states (which takes into account the probability of replacement as well as the transition probabilities conditional on replacement). Call this matrix T and write it down.

6. Write down the formula for the conditional value functions, V_0(a_t) and V_1(a_t), in terms of the current-period utilities (u_0, u_1), the expected utility (u), the unconditional transition

probabilities (T), the discount factor (β = 0.9), and the conditional transition probabilities F_0 and F_1. On the computer, write a procedure that takes a parameter vector (θ_1, R) as input and returns the conditional value functions using this formula. (A sketch of this calculation appears at the end of the problem set.)

7. Estimate (θ_1, R) using maximum likelihood. Again, write a function that returns −ln(L) if using a minimization routine, or ln(L) if using a maximization routine. Also compute standard errors. Do not bother adding a correction for the fact that you are using a two-step procedure (the conditional choice probabilities were estimated nonparametrically in the first stage). The function itself should follow these steps:

(a) Start with arbitrary (θ_1, R). That is, these should be the arguments of the function.

(b) Use the procedure that you wrote above to calculate V_0(a_t) and V_1(a_t).

(c) Using V_0(a_t) and V_1(a_t), compute the probability of replacement for each possible a_t.

(d) Compute the likelihood of each observation using the above probabilities. Form the log-likelihood function by summing the logs of the likelihoods across observations (i.e., assume that each data point is an independent observation).

(e) Return ln(L) for a maximization routine, or −ln(L) for a minimization routine.

8. Compare your results here in Part B with those of Part A.
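For orientation on steps 4 through 6 of Part B, here is a minimal Python sketch of the Hotz-Miller objects under the stated assumptions. It uses the logit expression E[ε_k | k chosen] = γ − ln(P_k) from step 3 and writes the conditional value functions as V_k = u_k + β F_k (I − βT)^{-1} u, which is one way to express the formula the problem asks for; the first-stage CCP vector p1 is assumed to come from the cell averages in step 1, and the function name is illustrative.

```python
import numpy as np

BETA, GAMMA = 0.9, 0.5772

# Conditional transition matrices over ages 1..5 (row = current age, column = next age).
F0 = np.zeros((5, 5))                     # keep: age increases by one, capped at 5
for a in range(5):
    F0[a, min(a + 1, 4)] = 1.0
F1 = np.zeros((5, 5))
F1[:, 0] = 1.0                            # replace: age resets to 1

def hotz_miller_values(theta1, R, p1):
    """Conditional value functions implied by first-stage CCPs p1 = Pr(replace | age), a sketch."""
    p0 = 1.0 - p1
    u0 = theta1 * np.arange(1, 6)         # flow payoff if kept
    u1 = R * np.ones(5)                   # flow payoff if replaced
    # expected flow utility given the CCPs, using E[eps_k | k chosen] = gamma - ln(P_k)
    u_bar = p0 * (u0 + GAMMA - np.log(p0)) + p1 * (u1 + GAMMA - np.log(p1))
    # unconditional transition matrix and ex-ante value function W = (I - beta*T)^(-1) u_bar
    T = p0[:, None] * F0 + p1[:, None] * F1
    W = np.linalg.solve(np.eye(5) - BETA * T, u_bar)
    V0 = u0 + BETA * F0 @ W
    V1 = u1 + BETA * F1 @ W
    return V0, V1

# Usage: estimate p1 by the cell averages of step 1, then form the logit replacement
# probabilities exp(V1) / (exp(V0) + exp(V1)) and maximize the same likelihood as in Part A.
```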