Lecture Notes: Estimation of dynamic discrete choice models
Lecture Notes: Estimation of dynamic discrete choice models
Jean-François Houde, Wharton School, University of Pennsylvania
March 16, 2015

These lecture notes incorporate material from Victor Aguirregabiria's graduate IO slides at the University of Toronto (http://individual.utoronto.ca/vaguirre/courses/eco2901/teaching_io_toronto.html).
Introduction

The second part of the course deals with dynamic structural models in IO. We start with single-agent models of dynamic decisions:
- Machine replacement and investment decisions: Rust (1987)
- Renewal or exit decisions: Pakes (1986)
- Inventory control: Erdem, Imai, and Keane (2003), Hendel and Nevo (2006a)
- Experience goods and Bayesian learning: Erdem and Keane (1996), Ackerberg (2003), Crawford and Shum (2005)
- Demand for durable goods: Gordon (2010), Gowrisankaran and Rysman (2012), Lee (2013)

This lecture focuses on econometric methods; the next lecture will discuss mostly applications. Next, we will turn to questions related to the dynamics of industries:
- Macro models of industry dynamics
- Markov-perfect dynamic games
- Empirical models of dynamic games
Machine replacement and investment decisions

Consider a firm producing a good at N plants (indexed by i) that operate independently. Each plant has a machine. Examples:
- Rust (1987): Each plant is a Madison, WI bus, and Harold Zurcher is the plant operator.
- Das (1992): Cement plants, where the machines are cement kilns.
- Rust and Rothwell (1995): Study the maintenance of nuclear power plants.

Related applications: export decisions (Das et al. (2007)), replacement of durable goods (Adda and Cooper (2000), Gowrisankaran and Rysman (2012)).
Profit function at time t:
$$\pi_t = \sum_{i=1}^N (y_{it} - rc_{it})$$
where $y_{it}$ is plant i's variable profit, and $rc_{it}$ is the replacement cost of the machine.

Replacement and depreciation:
- Replacement cost: $rc_{it} = a_{it} RC(x_{it})$, where $\partial RC(x)/\partial x \geq 0$ and $a_{it} = 1$ if the machine is replaced. In the application, $RC(x_{it}) = \theta_{R0} + \theta_{R1} x_{it}$.
- State variables: machine age $x_{it}$ and choice-specific profit shocks $\{\epsilon_{it}(0), \epsilon_{it}(1)\}$.
- Variable profits are decreasing in the age $x_{it}$ of the machine, and increasing in the profit shock $\epsilon_{it}(a_{it})$:
$$y_{it} = Y((1-a_{it})x_{it}, \epsilon_{it}(a_{it})), \qquad \partial Y/\partial x < 0.$$
This transforms the variable profit into a step function:
$$\pi_{it} = \begin{cases} Y(0, \epsilon_{it}(1)) - RC(x_{it}) & \text{if } a_{it} = 1 \\ Y(x_{it}, \epsilon_{it}(0)) & \text{otherwise.} \end{cases}$$
- Aging/depreciation process:
  - Deterministic: $x_{it+1} = (1-a_{it})x_{it} + 1$
  - Stochastic: $x_{it+1} = (1-a_{it})x_{it} + \xi_{t+1}$
- In Rust (1987), $x_{it}$ is bus mileage. It follows a random-walk process with a log-normal distribution.
Assumptions:
- Additively separable (AS) profit shock: $Y((1-a)x, \epsilon(a)) = \theta_{Y0} + \theta_{Y1}(1-a)x + \epsilon(a)$
- Conditional independence (CI): $f(\epsilon_{t+1} \mid \epsilon_t, x_t) = f(\epsilon_{t+1})$
- Aging follows a discrete random-walk process: $x_{it} \in \{0, 1, \ldots, M\}$, and the matrix $F(x' \mid x, a)$ characterizes its controlled Markov transition process.

Dynamic optimization: Harold Zurcher maximizes expected future profits:
$$V(a_{it} \mid x_{it}, \epsilon_{it}) = E\left( \sum_{\tau=0}^{\infty} \beta^{\tau} \pi_{it+\tau} \,\Big|\, x_{it}, \epsilon_{it}, a_{it} \right)$$

Recursive formulation (Bellman equation):
$$V(a \mid x, \epsilon) = Y((1-a)x) - a \cdot RC(x) + \epsilon(a) + \beta \sum_{x'} E_{\epsilon'}\big(V(x', \epsilon')\big) F(x' \mid x, a) = v(a, x) + \epsilon(a)$$
where $V(x, \epsilon) \equiv \max_{a \in \{0,1\}} V(a \mid x, \epsilon)$.

Optimal replacement decision:
$$a = \begin{cases} 1 & \text{if } v(1,x) - v(0,x) = \tilde{v}(x) > \epsilon(0) - \epsilon(1) = \tilde{\epsilon} \\ 0 & \text{otherwise.} \end{cases}$$

If $\{\epsilon(0), \epsilon(1)\}$ are distributed according to a type-1 EV distribution with unit variance:
$$\Pr(a_{it} = 1 \mid x_{it}) = \frac{\exp(\tilde{v}(x_{it}))}{1 + \exp(\tilde{v}(x_{it}))}, \qquad \bar{V}(x_{it}) = E\left( \max_{a_{it}} v(a_{it}, x_{it}) + \epsilon_{it}(a_{it}) \right) = \ln \sum_{a=0,1} \exp(v(a, x_{it}))$$
Solution to the dynamic-programming (DP) problem:

Because of the additive-separability and conditional-independence assumptions, we only need to find numerically a fixed point of the Emax function:
$$\bar{V}(x) = E_{\epsilon}\Big( \max_a v(a,x) + \epsilon(a) \Big) = E_{\epsilon}\Big( \max_a \Pi(a,x) + \beta \sum_{x'} \bar{V}(x')F(x' \mid x, a) + \epsilon(a) \Big) = \Gamma(x \mid \bar{V})$$
where $\Pi(a,x) = Y((1-a)x) - a \cdot RC(x)$, and $\Gamma(x \mid \bar{V})$ is a contraction mapping.

Matrix representation using the EV distribution assumption:
$$\bar{V} = \ln\big( \exp(\Pi(0) + \beta F(0)\bar{V}) + \exp(\Pi(1) + \beta F(1)\bar{V}) \big) = \Gamma(\bar{V})$$
where $F(0)$ and $F(1)$ are two $M \times M$ conditional transition probability matrices, and $\bar{V}$ is the $M \times 1$ vector of expected value functions.

Algorithm 1: Value-function iteration.
- it = 0: Guess an initial value for $\bar{V}(x)$. Example: $\ln(\exp(\Pi(0)) + \exp(\Pi(1)))$.
- it = k: Update the value function:
$$\bar{V}^k = \ln\big( \exp(\Pi(0) + \beta F(0)\bar{V}^{k-1}) + \exp(\Pi(1) + \beta F(1)\bar{V}^{k-1}) \big)$$
- Stop if $\|\bar{V}^k - \bar{V}^{k-1}\| < \eta$. Otherwise, update to k+1.
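Algorithm 1 can be sketched in a few lines. The example below is a minimal illustration with a small state space, deterministic aging, and made-up parameter values ($M = 10$, $\beta = 0.95$, $\theta_{Y1} = -0.3$, $RC = 4$); these are assumptions for the sketch, not Rust's estimates.

```python
import math

# Value-function iteration for the two-action replacement model.
# All parameter values are illustrative assumptions.
M = 10           # age/mileage states x = 0, ..., M-1
beta = 0.95      # discount factor
theta_Y1 = -0.3  # variable profit slope: Y(x) = theta_Y1 * x
RC = 4.0         # replacement cost (theta_R0, with theta_R1 = 0)

def flow(a, x):
    # Pi(a, x) = Y((1 - a) x) - a * RC
    return -RC if a == 1 else theta_Y1 * x

def next_x(a, x):
    # deterministic aging x' = (1 - a) x + 1, capped at the last state
    return min((0 if a == 1 else x) + 1, M - 1)

# Iterate the Emax (log-sum-exp) operator to its fixed point V = Gamma(V)
V = [0.0] * M
for it in range(10000):
    V_new = [math.log(math.exp(flow(0, x) + beta * V[next_x(0, x)]) +
                      math.exp(flow(1, x) + beta * V[next_x(1, x)]))
             for x in range(M)]
    if max(abs(u - w) for u, w in zip(V_new, V)) < 1e-10:
        V = V_new
        break
    V = V_new

def p_replace(x):
    # logit CCP implied by the type-1 EV assumption
    v0 = flow(0, x) + beta * V[next_x(0, x)]
    v1 = flow(1, x) + beta * V[next_x(1, x)]
    return 1.0 / (1.0 + math.exp(v0 - v1))
```

As expected from the model, the implied replacement hazard `p_replace(x)` is increasing in the machine's age.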
Algorithm 2: Policy-function iteration.

Define the conditional choice-probability (CCP) function:
$$P(x) = \Pr_\epsilon\Big( \Pi(1,x) + \beta \sum_{x'} \bar V(x')F(x' \mid x, 1) + \epsilon(1) > \Pi(0,x) + \beta \sum_{x'} \bar V(x')F(x' \mid x, 0) + \epsilon(0) \Big) = \frac{\exp(\tilde v(x))}{1 + \exp(\tilde v(x))} = (1 + \exp(-\tilde v(x)))^{-1} \quad (1)$$

At the optimal CCP, we can write the Emax function as follows:
$$\bar V^P(x) = (1 - P(x))\Big[ \Pi(0,x) + e(0,x) + \beta \sum_{x'} \bar V^P(x')F(x' \mid x, 0) \Big] + P(x)\Big[ \Pi(1,x) + e(1,x) + \beta \sum_{x'} \bar V^P(x')F(x' \mid x, 1) \Big]$$
where $e(a,x) = E(\epsilon(a) \mid a, x)$ is the conditional expectation of the profit shock. If $\epsilon(a)$ is EV distributed, this expectation has an analytical form: $e(a,x) = \gamma - \ln P(a \mid x)$.

This implicitly defines the value function in terms of the CCP vector:
$$\bar V^P = (I - \beta F^P)^{-1}\big[ (1-P) * (\Pi(0) + e(0)) + P * (\Pi(1) + e(1)) \big] \quad (2)$$
where $F^P = (1-P) * F(0) + P * F(1)$.

Equations 1 and 2 define a fixed point in P: $P = \Psi(P)$. Moreover, $\Psi(\cdot)$ is also a contraction mapping.
Algorithm steps:
- it = 0: Guess an initial value for the CCP. Example: $P(x) = (1 + \exp(-(\Pi(1) - \Pi(0))))^{-1}$.
- it = k: Calculate the expected value function:
$$\bar V^{k-1} = (I - \beta F^{k-1})^{-1}\big[ (1 - P^{k-1}) * (\Pi(0) + e^{k-1}(0)) + P^{k-1} * (\Pi(1) + e^{k-1}(1)) \big]$$
Update the CCP:
$$P^k = \Psi(P^{k-1}) = (1 + \exp(-\tilde v^{k-1}))^{-1}$$
where $\tilde v^{k-1} = (\Pi(1) + \beta F(1)\bar V^{k-1}) - (\Pi(0) + \beta F(0)\bar V^{k-1})$.
- Stop if $\|P^k - P^{k-1}\| < \eta$. Otherwise, update to k+1.

Notes:
- Both algorithms are guaranteed to converge if $\beta \in (0,1)$ (more slowly as $\beta \to 1$).
- Policy-function iteration converges in fewer steps than value-function iteration. However, each step of the policy-function algorithm is slower due to the matrix inversion: M is typically very large (in the millions).
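The steps above can be sketched as follows, reusing the illustrative parameters from before. With only two actions and deterministic transitions, the "matrix inversion" step is just a small linear solve; a plain Gauss-Jordan routine stands in for whatever linear-algebra library one would actually use.

```python
import math

GAMMA = 0.5772156649015329  # Euler's constant

# Policy-function (CCP) iteration for the replacement model; parameters are
# illustrative assumptions, not estimates.
M, beta, theta_Y1, RC = 10, 0.95, -0.3, 4.0
Pi = [[theta_Y1 * x for x in range(M)], [-RC] * M]      # Pi[a][x]
nxt = [[min(x + 1, M - 1) for x in range(M)], [1] * M]  # deterministic x' under a = 0, 1

def solve(A, b):
    """Solve A v = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    T = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(T[r][c]))
        T[c], T[p] = T[p], T[c]
        for r in range(n):
            if r != c and T[r][c] != 0.0:
                f = T[r][c] / T[c][c]
                for k in range(c, n + 1):
                    T[r][k] -= f * T[c][k]
    return [T[i][n] / T[i][i] for i in range(n)]

P = [0.5] * M  # crude initial CCP guess
converged = False
for k in range(1000):
    # V^P = (I - beta F^P)^{-1} [(1-P)(Pi(0)+e(0)) + P(Pi(1)+e(1))],
    # with e(a, x) = gamma - ln P(a|x) under type-1 EV shocks (equation 2)
    A = [[1.0 if i == j else 0.0 for j in range(M)] for i in range(M)]
    b = [0.0] * M
    for x in range(M):
        A[x][nxt[0][x]] -= beta * (1.0 - P[x])
        A[x][nxt[1][x]] -= beta * P[x]
        b[x] = ((1.0 - P[x]) * (Pi[0][x] + GAMMA - math.log(1.0 - P[x])) +
                P[x] * (Pi[1][x] + GAMMA - math.log(P[x])))
    V = solve(A, b)
    # CCP update P = Psi(P): logit in the choice-specific value difference (equation 1)
    P_new = [1.0 / (1.0 + math.exp((Pi[0][x] + beta * V[nxt[0][x]]) -
                                   (Pi[1][x] + beta * V[nxt[1][x]])))
             for x in range(M)]
    if max(abs(p - q) for p, q in zip(P_new, P)) < 1e-10:
        P, converged = P_new, True
        break
    P = P_new
```

In line with the notes, this converges in far fewer iterations than value-function iteration, but each iteration pays for a linear solve.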
Estimation: Nested fixed-point maximum likelihood (NFXP)

Data: Panel of choices $a_{it}$ and observed states $x_{it}$.

Parameters: Technology parameters $\theta = \{\theta_{Y0}, \theta_{Y1}, \theta_{R0}, \theta_{R1}\}$, discount factor $\beta$, and distribution of mileage shocks $f_x(\xi_{it})$.

Likelihood contribution of bus i:
$$L_i(A_i, X_i \mid \theta, \beta, f_x) = \prod_{t=1}^{T} \Pr(a_{it} \mid x_{it}) f_x(x_{it} \mid a_{it-1}, x_{it-1})$$
where $\Pr(a_{it} \mid x_{it}) = \Psi(a_{it} \mid x_{it})$ is the solution to the DP problem.

If the panel is long enough, we can efficiently estimate $f_x(\xi)$ from the data, and estimate the remaining parameters in a second stage (standard errors should be adjusted, however).

Maximum likelihood problem:
$$\max_{\theta, \beta} \sum_i \sum_t a_{it} \ln P(x_{it}) + (1 - a_{it}) \ln(1 - P(x_{it})) \qquad \text{s.t. } P(x_{it}) = \Psi(x_{it}) \ \forall x_{it}$$

In practice, you want to solve the MLE problem by embedding a fixed-point routine that solves the value-function or CCP contraction mapping inside a standard optimization package (e.g. simplex or BFGS). Using the matrix representation greatly helps if you use a matrix language like Matlab.
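A minimal NFXP sketch: the outer loop searches over the structural parameter (here only the replacement cost RC, on a coarse grid standing in for a proper BFGS/simplex search), and the inner loop re-solves the DP at each candidate value. The panel is simulated from an assumed "true" RC of 4.0; all numbers are made up for the illustration.

```python
import math, random

# Miniature nested fixed-point MLE; parameters, grid, and panel are assumptions.
M, beta, theta_Y1 = 10, 0.95, -0.3

def ccp(RC):
    """Inner fixed point: solve the Emax function by iteration, return P(replace|x)."""
    nxt0 = [min(x + 1, M - 1) for x in range(M)]
    V = [0.0] * M
    for _ in range(5000):
        V_new = [math.log(math.exp(theta_Y1 * x + beta * V[nxt0[x]]) +
                          math.exp(-RC + beta * V[1])) for x in range(M)]
        if max(abs(u - w) for u, w in zip(V_new, V)) < 1e-12:
            V = V_new
            break
        V = V_new
    return [1.0 / (1.0 + math.exp((theta_Y1 * x + beta * V[nxt0[x]]) -
                                  (-RC + beta * V[1]))) for x in range(M)]

# Simulate a long panel from the "true" RC = 4.0
random.seed(0)
P_true = ccp(4.0)
data, x = [], 0
for t in range(20000):
    a = 1 if random.random() < P_true[x] else 0
    data.append((x, a))
    x = 1 if a else min(x + 1, M - 1)   # x' = (1 - a) x + 1

def loglik(RC):
    # outer objective: sum of a ln P + (1 - a) ln(1 - P), given the inner solution
    P = ccp(RC)
    return sum(math.log(P[xx] if aa else 1.0 - P[xx]) for xx, aa in data)

grid = [3.0 + 0.25 * i for i in range(9)]   # candidate RC values 3.0, ..., 5.0
RC_hat = max(grid, key=loglik)
```

With 20,000 observations the grid-search MLE lands on (or next to) the true value, illustrating why each likelihood evaluation is expensive: every candidate θ triggers a full DP solve.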
Adding unobserved heterogeneity:
- Assume that buses belong to one of K discrete types. Example: heterogeneous replacement costs $\theta^k_{R0}$.
- This increases the number of parameters by $2(K-1)$: $\{\theta^1_{R0}, \ldots, \theta^K_{R0}\}$ plus $\{\omega_1, \ldots, \omega_{K-1}\}$ (probability weights).
- This changes the MLE problem:
$$\max_{\theta, \beta, \omega} \sum_i \ln\Big[ \sum_k \omega_k \prod_t P^k(x_{it})^{a_{it}} (1 - P^k(x_{it}))^{1 - a_{it}} \Big] \qquad \text{s.t. } P^k(x_{it}) = \Psi^k(x_{it}) \ \forall x_{it} \text{ and } \forall k$$
- This complicates the MLE significantly: we need to solve the DP problem separately for each type.
Identification:
- Standard normalization: $\sigma_\epsilon = 1$. This means that we cannot identify the dollar value of replacement costs, only their value relative to variable profits. When profit or output data are available, we can relax this normalization and estimate $\sigma_\epsilon$ (e.g. investment and production data).
- Discount factor? Assume we know $F_\epsilon$. $\beta$ is not identified unless we impose parametric assumptions on Y and RC.
- The data correspond to an empirical hazard function: $h(x) = \Pr(\text{replacement}_t \mid \text{miles}_t = x)$. This corresponds to the reduced form of the model:
$$P(x) = F_\epsilon(\tilde v(x)) = F_\epsilon\Big( \Pi(1,x) - \Pi(0,x) + \beta \sum_{x'} \bar V(x') \big(F(x' \mid x, 1) - F(x' \mid x, 0)\big) \Big)$$
- If $\Pi(x)$ is linear in x, then any non-linearity in the observed hazard function identifies $\beta$. If $\Pi(x)$ is allowed to be any non-linear function, we cannot distinguish between a non-linear myopic model ($\beta = 0$) and a forward-looking model ($\beta > 0$).
- $\beta$ would be identified if the model included a state variable z that enters only the Markov transition function (i.e. $F(x' \mid x, z, a)$), and not the static payoff function.
Hazard function: [figure]
Identification of β and search for the right specification: [figures]
Main estimation results: [table]
Patents as options: Pakes (1986)

This paper studies the value of patent protection: (i) what is the stochastic process determining the value of innovations? (ii) how do patent protection laws affect the decision to renew patents and the distribution of returns to innovation?

The model is an example of an optimal stopping problem. The model is set up with a finite horizon, but it does not have to be. Other examples: retirement, firm exit decisions, technology adoption, etc.

Contributions:
- Illustrates how we can infer the implicit option value of patents (or any other dynamic investment decision) from dynamic discrete choices (i.e. the principle of revealed preference). This is done without actually observing profits or revenues from patents: only the dynamic structure of renewal costs is needed.
- More technically, the paper is one of the first applications of simulation methods in econometrics (very influential).

Data:
- Three countries: France, Germany, and the UK.
- Renewal dates for all patents: $n_{m,t}(a)$ = number of surviving patents at age a in country m from cohort t.
- Regulatory environment by country/cohort:
  - f: number of automatic renewal years
  - L: expiration date on patents
  - $c = \{c_1, \ldots, c_L\}$: deterministic renewal costs
Country differences in survival probabilities: [figure]
Country differences in renewal fee schedules: [figure]
Model setup:

Consider the renewal problem for patent i. Stochastic sequence of returns from the patent: $r_i = \{r_{i1}, \ldots, r_{iL}\}$. The evolution of returns depends on: (i) the initial quality level, (ii) the arrival of substitute innovations that depreciate the value of the patent, and (iii) the arrival of complementary innovations that increase its value.

Markov process for returns:
$$r_{it+1} = \tau_{it+1} \max\{\delta r_{it}, \xi_{it+1}\}$$
where
$$\Pr(\tau_{it+1} = 0 \mid r_{it}, t) = \exp(-\lambda r_{it}), \qquad p(\xi_{it+1} \mid r_{it}, t) = \frac{1}{\phi^t \sigma} \exp\Big( -\frac{\gamma + \xi_{it+1}}{\phi^t \sigma} \Big), \qquad r_{i0} \sim LN(\mu_0, \sigma_0^2)$$
or, more compactly, for t > 0:
$$f(r_{it+1} \mid r_{it}, t) = \begin{cases} \exp(-\lambda r_{it}) & \text{if } r_{it+1} = 0 \\ \Pr(\xi_{it+1} < \delta r_{it} \mid r_{it}, t) & \text{if } r_{it+1} = \delta r_{it} \\ \frac{1}{\phi^t \sigma} \exp\Big( -\frac{\gamma + r_{it+1}}{\phi^t \sigma} \Big) & \text{if } r_{it+1} > \delta r_{it} \end{cases} \quad (3)$$

Model structural parameters (per country):
- δ measures the normal obsolescence rate
- φ and σ determine the arrival rate and magnitude of complementary innovations
- λ determines the arrival rate of substitute innovations
- $\mu_0$ and $\sigma_0$ determine the initial quality pool of innovations
- The discount factor β is fixed.
Optimal stopping problem:

In the last year, the value of renewing the patent depends only on $c_L$ and $r_{iL}$:
$$V(L, r_{iL}) = \max\{0, r_{iL} - c_L\}$$
and therefore the patent is renewed if $r_{iL} > r^*_L = c_L$.

At year L-1, the value is defined recursively:
$$V(L-1, r_{iL-1}) = \max\Big\{0,\; r_{iL-1} - c_{L-1} + \beta \int_{r^*_L} V(L, r_{iL}) f(r_{iL} \mid r_{iL-1}, L-1)\, dr_{iL} \Big\}$$
This value function is strictly increasing in $r_{iL-1}$ (see proposition 1). Therefore, there exists a unique threshold such that the patent is renewed if
$$r_{iL-1} > r^*_{L-1} = c_{L-1} - \beta \int_{r^*_L} V(L, r_{iL}) f(r_{iL} \mid r^*_{L-1}, L-1)\, dr_{iL}$$

Similarly, for any year t > 0 the value function is defined recursively as
$$V(t, r_{it}) = \max\Big\{0,\; r_{it} - c_t + \beta \int_{r^*_{t+1}} V(t+1, r_{it+1}) f(r_{it+1} \mid r_{it}, t)\, dr_{it+1} \Big\}$$
which leads to a series of optimal stopping rules:
$$r_{it} > r^*_t = c_t - \beta \int_{r^*_{t+1}} V(t+1, r_{it+1}) f(r_{it+1} \mid r^*_t, t)\, dr_{it+1}$$

Given the functional form assumptions on $f(r' \mid r, t)$, the thresholds can be solved analytically by backward induction.

When the terminal period is stochastic, the value function becomes stationary. For instance, optimal stopping problems arise when studying retirement or exit decisions:
$$V(s_t) = \max\Big\{0,\; \pi(s_t) + \beta \int (1 - \delta(s_t)) V(s_{t+1}) f(s_{t+1} \mid s_t)\, ds_{t+1} \Big\}$$
Estimation:

Likelihood of the observed renewal sequence $N_m$ conditional on the regulatory environment $Z_m = \{L_m, f_m, c_m\}$ in country m:
$$\max_\theta L(N_m \mid Z_m, \theta) = \sum_{t=1}^{L} n_m(t) \ln \Pr(\tilde t = t \mid Z_m, \theta)$$
where
$$\Pr(\tilde t = t \mid \theta, Z_m) = \int_{r^*_1} \int_{r^*_2} \cdots \int_0^{r^*_t} dF(r_{i1}, \ldots, r_{it-1}, r_{it})\, dF_0(r_{i0})$$

Monte Carlo integration approximation:
0. Sample $r^s_{i0} \sim LN(\mu_0, \sigma_0^2)$.
1. Period 1:
   (a) Sample $\tau^s_1$ from a Bernoulli with $\Pr(\tau^s_1 = 0) = \exp(-\lambda r^s_0)$.
   (b) If $\tau^s_1 = 1$, sample $\xi^s_1$ from the exponential distribution. Otherwise, do not renew the patent: $a^s_1 = 0$.
   (c) Calculate $r^s_1$.
   (d) Evaluate the decision: $a^s_1 = 1$ if $r^s_1 > r^*_1$.
2. Repeat the sampling for period t if the patent was renewed at t-1.

After collecting the simulated sequences of actions, we can evaluate the simulated choice probability at period t:
$$\bar P_S(t \mid \theta, Z_m) = \frac{1}{S} \sum_s 1(a^s_1 = 1, a^s_2 = 1, \ldots, a^s_{t-1} = 1, a^s_t = 0)$$

Numerical problem: $\bar P_S(t \mid \theta, Z_m)$ is not a smooth function of the parameters θ, and it is equal to zero for some t unless $S \to \infty$. Smooth alternative approximation:
$$\hat P_S(t \mid \theta, Z_m) = \frac{\exp(\bar P_S(t \mid \theta, Z_m)/\eta)}{1 + \sum_{t'} \exp(\bar P_S(t' \mid \theta, Z_m)/\eta)}$$
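The forward simulation can be sketched as below. All numbers (fee schedule, process parameters) are made up, and the renewal rule is a myopic stand-in (renew iff $r_t > c_t$) for the true optimal-stopping thresholds $r^*_t$, which would come from backward induction; the point is only the mechanics of simulating survival sequences.

```python
import math, random

# Illustrative forward simulation of a Pakes-style renewal model.
random.seed(1)
L = 20                                        # statutory patent life
delta, lam = 0.90, 0.05                       # depreciation; obsolescence hazard scale
mu0, sigma0 = 1.0, 1.0                        # r_i0 ~ LN(mu0, sigma0^2)
costs = [1.0 * 1.2 ** t for t in range(L)]    # rising renewal-fee schedule (assumed)
S = 5000                                      # simulated patents

surviving = [0] * (L + 1)   # surviving[t] = patents still in force at age t
for s in range(S):
    r = random.lognormvariate(mu0, sigma0)
    for t in range(L):
        if r <= costs[t]:                     # stopping decision: let the patent lapse
            break
        surviving[t + 1] += 1
        # Markov returns: r' = tau * max(delta r, xi), with Pr(tau = 0) = exp(-lam r)
        if random.random() < math.exp(-lam * r):
            r = 0.0
        else:
            r = max(delta * r, random.expovariate(0.5))  # stand-in draw for xi
```

Dividing `surviving` by S gives the simulated analogue of the country survival profiles $n_m(t)$, which the likelihood above matches to the data.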
Note: All the structural parameters are identified in this model. The implicit normalization is the coefficient on the renewal costs $c_t$ (set to one): all the parameters are expressed in dollars.

Results: Main differences across countries: (i) patent regulation rules, (ii) initial distribution of patent returns. Germany has a more selective screening system for granting new patents: higher mean and smaller variance of initial returns $r_{i0}$. [table]
Learning about complementary innovations: $\phi \approx 0.5$ implies very fast learning/growth in returns. This has important policy implications: the regulator wants to keep initial renewal fees low, and increase them quickly to extract rents from high-value patents (low distortions after learning is over).
The distribution of realized patent values is highly skewed. Implied rates of return on R&D: France = 15.56%, UK = %, Germany = 13.83%.
Key references: Sequential estimators of DDC models
- Hotz and Miller (1993)
- Hotz, Miller, Sanders, and Smith (1994)
- Aguirregabiria and Mira (2002)
- Identification issues: Magnac and Thesmar (2002), Kasahara and Shimotsu (2009)

Consider the following dynamic discrete choice model with additively separable (AS) and conditionally independent (CI) errors:
- A discrete actions.
- Payoff function: $u(x, a)$.
- State space: $(x, \epsilon)$, where x is a discrete state vector and $\epsilon$ is an A-dimensional continuous vector.
- Distribution functions: $\Pr(x_{t+1} = x' \mid x_t, a) = f(x' \mid x, a)$; $g(\epsilon)$ is a type-1 EV density with unit variance.

Bellman operator:
$$\bar V(x) = \int \max_{a \in A} \Big\{ u(x, a) + \epsilon(a) + \beta \sum_{x'} \bar V(x') f(x' \mid x, a) \Big\} g(\epsilon)\, d\epsilon = \int \max_{a \in A} \big\{ v(a \mid x) + \epsilon(a) \big\} g(\epsilon)\, d\epsilon = \ln\Big( \sum_a \exp(v(a \mid x)) \Big) = \Gamma(\bar V(x))$$
CCP operator: Express $\bar V(x)$ as a function of $P(a \mid x)$.
$$\bar V(x) = \sum_a P(a \mid x) \Big[ u(x, a) + E(\epsilon(a) \mid x, a) + \beta \sum_{x'} \bar V(x') f(x' \mid x, a) \Big]$$
where
$$E(\epsilon(a) \mid x, a) = \frac{1}{P(a \mid x)} \int \epsilon(a)\, 1\big( v(a \mid x) + \epsilon(a) > v(a' \mid x) + \epsilon(a'),\ \forall a' \neq a \big) g(\epsilon)\, d\epsilon \equiv e(a, P(a \mid x)) = \gamma - \ln P(a \mid x)$$

In matrix form:
$$\bar V = \sum_a P(a) * \big[ u(a) + e(a, P) + \beta F(a)\bar V \big]$$
$$\Big[ I - \beta \sum_a P(a) * F(a) \Big] \bar V = \sum_a P(a) * \big[ u(a) + e(a, P) \big]$$
$$\bar V(P) = \Big[ I - \beta \sum_a P(a) * F(a) \Big]^{-1} \Big[ \sum_a P(a) * \big( u(a) + e(a, P) \big) \Big]$$
where $F(a)$ is $X \times X$ and $\bar V$ is $X \times 1$.

The CCP contraction mapping is:
$$P(a \mid x) = \Pr\big( v(a \mid x, P) + \epsilon(a) > v(a' \mid x, P) + \epsilon(a'),\ \forall a' \neq a \big) = \frac{\exp(\tilde v(a \mid x, P))}{1 + \sum_{a' > 1} \exp(\tilde v(a' \mid x, P))} = \Psi(a \mid x, P)$$
where $\tilde v(a \mid x, P) = v(a \mid x, P) - v(1 \mid x, P)$.

Special case 1: If $u(x, a, \theta) = x(a)\theta$, the value function is also linear in θ:
$$\bar V(P) = Z(P)\theta + \lambda(P)$$
where
$$Z(P) = \Big[ I - \beta \sum_a P(a) * F(a) \Big]^{-1} \Big[ \sum_a P(a) * X(a) \Big], \qquad \lambda(P) = \Big[ I - \beta \sum_a P(a) * F(a) \Big]^{-1} \Big[ \sum_a P(a) * e(a, P) \Big]$$
Special case 2: An absorbing state such that $v(0 \mid x) = 0$ (e.g. exit or retirement). This changes the value function:
$$V(x, \epsilon) = \max\Big\{ u(x) + \epsilon(1) + \beta \sum_{x'} \underbrace{E_{\epsilon'}[V(x', \epsilon')]}_{= \bar V(x')} F(x' \mid x),\ \epsilon(0) \Big\}$$
As before, the expected continuation value is:
$$\bar V(x) = \log\Big( \exp(0) + \exp\big( u(x) + \beta \textstyle\sum_{x'} \bar V(x') F(x' \mid x) \big) \Big) + \gamma = \log(1 + \exp(v(x))) + \gamma$$
where $v(x) = u(x) + \beta \sum_{x'} \bar V(x') F(x' \mid x)$. The choice probability is given by:
$$\Pr(a = 1 \mid x) = P(x) = \frac{\exp(v(x))}{1 + \exp(v(x))}$$
Note that the log odds-ratio is equal to the choice-specific value function:
$$\log\Big( \frac{P(x)}{1 - P(x)} \Big) = v(x)$$
Therefore, the expected continuation value can be expressed as a function of P(x):
$$\bar V^P(x) = \log(1 + \exp(v(x))) + \gamma = \log\Big( 1 + \frac{P(x)}{1 - P(x)} \Big) + \gamma = -\log(1 - P(x)) + \gamma$$
Therefore, with an absorbing state, we don't need to invert $[I - \beta \sum_a P(a) * F(a)]$ to apply the CCP mapping.
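The identity $\bar V^P(x) = \gamma - \log(1 - P(x))$ is easy to verify numerically: simulate the Emax $E[\max\{\epsilon(0),\, v + \epsilon(1)\}]$ directly with type-1 EV draws and compare. The value $v = 0.8$ below is an arbitrary illustration.

```python
import math, random

# Numerical check of Emax = gamma - ln(1 - P) for the absorbing-state case.
random.seed(2)
gamma = 0.5772156649015329
v = 0.8                              # choice-specific value of staying (illustrative)
P = 1.0 / (1.0 + math.exp(-v))       # logit CCP of the non-absorbing action

closed_form = gamma - math.log(1.0 - P)   # continuation value via the CCP identity

# Monte Carlo Emax with standard Gumbel draws eps = -ln(-ln U)
draws = 400000
total = 0.0
for _ in range(draws):
    e0 = -math.log(-math.log(random.random()))
    e1 = -math.log(-math.log(random.random()))
    total += max(e0, v + e1)
mc = total / draws
```

The two numbers agree up to simulation noise, confirming that no matrix inversion is needed in the absorbing-state case.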
Two-step estimator:

The objective is to estimate the structural parameters θ without repeatedly solving the DP problem.

Initial step: Reduced form of the model
- Markov transition process: $\hat f(x' \mid x, a)$
- Policy function: $\hat P(a \mid x)$
- Constraint: we need to estimate both functions at EVERY state point x.
- How? Ideally $\hat P(a \mid x)$ is estimated non-parametrically, to avoid imposing a particular functional form on the policy function (i.e. no theory is involved at this stage). This corresponds to a frequency estimator:
$$\hat P(a \mid x) = \frac{1}{n(x)} \sum_{i: x_i = x} 1(a_i = a)$$
- In practice, with finite samples, we need to smooth the policy function and interpolate between states that are not visited (or visited infrequently). Kernel or local-polynomial techniques can be used.

Second step: Structural parameters conditional on $(\hat P, \hat f)$.
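The frequency estimator is one line of counting. The toy panel below is made-up data, chosen so that one state is visited only once, which illustrates exactly the finite-sample problem the notes flag: $\hat P$ hits the boundary (here 1.0) and needs smoothing before it can be fed into log-odds formulas.

```python
from collections import Counter

# Frequency estimator of the CCP P(a = 1 | x) on a toy (x, a) panel.
data = [(0, 0), (0, 0), (0, 1), (1, 1), (1, 0), (1, 1), (2, 1)]  # made-up observations
n_x = Counter(x for x, a in data)                  # visits to each state
n_xa = Counter(data)                               # (state, action) counts
P_hat = {x: n_xa[(x, 1)] / n_x[x] for x in n_x}    # P_hat(1 | x) = n(x, 1) / n(x)
```

Here `P_hat[2] == 1.0` because state 2 is visited once: a kernel or local-polynomial smoother would pull such estimates off the boundary.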
Example: Assume that $u(x, a, \theta) = x(a)\theta$ and fix β.

1. Data preparation: Use $(\hat P, \hat F)$ to calculate:
$$Z(\hat P, \hat F) = \Big[ I - \beta \sum_a \hat P(a) * \hat F(a) \Big]^{-1} \Big[ \sum_a \hat P(a) * X(a) \Big], \qquad \lambda(\hat P, \hat F) = \Big[ I - \beta \sum_a \hat P(a) * \hat F(a) \Big]^{-1} \Big[ \sum_a \hat P(a) * e(a, \hat P) \Big]$$

2. GMM: Let $W_{it}$ denote a vector of predetermined instruments (e.g. state variables and their interactions). We can construct a set of moment conditions:
$$E\Big[ W_{it} \big( a_{it} - \Psi(a_{it} \mid x_{it}, \hat P, \hat F) \big) \Big] = 0$$
where
$$\Psi(a_{it} \mid x_{it}, \hat P, \hat F) = \frac{\exp(v(x_{it} \mid a_{it}, \hat P, \hat F))}{\sum_{a'} \exp(v(x_{it} \mid a', \hat P, \hat F))}$$
$$v(x \mid a, \hat P, \hat F) = x(a)\theta + \beta \sum_{x'} \underbrace{\bar V(x' \mid \hat P, \hat F)}_{= Z(x', \hat P, \hat F)\theta + \lambda(x', \hat P, \hat F)} \hat f(x' \mid x, a) = \Big( x(a) + \beta \sum_{x'} Z(x' \mid \hat P, \hat F)\hat f(x' \mid x, a) \Big)\theta + \beta \sum_{x'} \lambda(x' \mid \hat P, \hat F)\hat f(x' \mid x, a)$$

Therefore, the second-stage problem is equivalent to a linear IV problem.
Pseudo-likelihood estimators (PML): Use the CCP mapping Ψ(P) to construct a pseudo-likelihood estimator (Aguirregabiria and Mira (2002)).

Data: Panel of n individuals over T periods: $(A, X) = \{a_{it}, x_{it}\}_{i=1,\ldots,n;\, t=1,\ldots,T}$

2-step estimator:
1. Obtain a flexible estimator of the CCPs: $\hat P_1(a \mid x)$.
2. Feasible PML estimator:
$$Q_{2S}(A, X) = \max_\theta \sum_i \sum_t \ln \Psi(a_{it} \mid x_{it}, \hat P_1, \hat F, \theta)$$
If $\bar V(P)$ is linear, the second step is a linear probit/logit model.

NPL estimator: The NPL repeats the PML and policy-function iteration steps sequentially (i.e. swapping the fixed-point algorithm):
0. Obtain a flexible estimator of the CCPs: $\hat P_1(a \mid x)$.
1. Feasible PML step:
$$Q_{k+1}(A, X) = \max_\theta \sum_i \sum_t \ln \Psi(a_{it} \mid x_{it}, \hat P_k, \hat F, \theta)$$
2. Policy-function iteration step:
$$\hat P_{k+1}(a \mid x) = \Psi(a \mid x, \hat P_k, \hat F, \hat\theta_{k+1})$$
3. Stop if $\|\hat P_{k+1} - \hat P_k\| < \delta$; else repeat steps 1 and 2.

In the single-agent case, the NPL is guaranteed to converge to the MLE (i.e. NFXP) estimator. In practice, Aguirregabiria and Mira (2002) showed that 2 or 3 steps are sufficient to eliminate the small-sample bias of the 2-step estimator, and the NPL is computationally easier to implement than the NFXP.
Simulation-based CCP estimator: Hotz, Miller, Sanders, and Smith (1994)

Starting point: The H&M GMM estimator suffers from a curse of dimensionality in X, since we must invert an $X \times X$ matrix to evaluate the continuation value (not true for optimal-stopping models). This is less severe for NFXP estimators, since we can use the value-function mapping to solve the policy functions.

Solution:
- First insight: We only need to know the relative choice-specific value functions $\tilde v(a \mid x) = v(a \mid x) - v(1 \mid x)$ to predict behavior. With $\tilde\epsilon(a) = \epsilon(a) - \epsilon(1)$:
$$a_{it} = \begin{cases} 1 & \text{if } \tilde v(a \mid x) + \tilde\epsilon(a) < 0 \text{ for all } a \neq 1 \\ a & \text{if } \max_{a' \neq a}\{0,\, \tilde v(a' \mid x) + \tilde\epsilon(a')\} < \tilde v(a \mid x) + \tilde\epsilon(a) \end{cases}$$
- Second insight: There exists a one-to-one mapping between $\tilde v(a \mid x)$ and $P(a \mid x)$. Logit example:
$$P(a \mid x) = \frac{\exp(v(a \mid x))}{\sum_{a'} \exp(v(a' \mid x))} = \frac{\exp(\tilde v(a \mid x))}{1 + \sum_{a' > 1} \exp(\tilde v(a' \mid x))} \implies \tilde v(a \mid x, P) = \ln P(a \mid x) - \ln P(1 \mid x)$$
- Third insight: We can approximate the model-predicted value function at any state x by simulating actions according to a policy function $P(a \mid x)$:
$$\hat V_S(x \mid P) = \frac{1}{S} \sum_s \sum_{\tau=0}^{T} \beta^\tau \big\{ u(x^s_{t+\tau}, a^s_{t+\tau}) + e(a^s_{t+\tau} \mid P(a^s_{t+\tau} \mid x^s_{t+\tau})) \big\}$$
where $(x^s, a^s)$ is a simulated sequence of choices and states sampled from $P(a \mid x)$ and $f(x' \mid x, a)$, and $e(a \mid P(a \mid x)) = E(\epsilon(a) \mid a_i = a, x, P)$. Importantly, $\lim_{S \to \infty} \hat V_S(x \mid P) = V(x \mid P)$.
Estimation procedure:

Step 1: Estimate $\hat P(a \mid x)$ and $\hat f(x' \mid x, a)$, and compute the "dependent variable":
$$\tilde v_n(a_{it} \mid x_{it}, \hat P) = \ln \hat P(a_{it} \mid x_{it}) - \ln \hat P(1 \mid x_{it})$$

Step 2a: Simulate the value function at each observed state and choice $(x_{it}, a_{it})$. Each simulated sequence calculates the value of future choices:
1. Calculate the static value of $(x_{it}, a_{it})$: $u(x_{it}, a_{it} \mid \theta) + e(a_{it} \mid \hat P, x_{it})$.
2. Sample a new state for period t+1: $x_{it+1} \sim \hat f(x' \mid x_{it}, a_{it})$.
3. Sample a new choice for period t+1: $a_{it+1} \sim \hat P(a \mid x_{it+1})$.
Repeat steps 1-3 for T periods. This gives us the net present value of one simulated sequence:
$$v^s(a_{it} \mid x_{it}, \hat P, \theta) = u(x_{it}, a_{it} \mid \theta) + e(a_{it} \mid \hat P, x_{it}) + \sum_{\tau=1}^{T} \beta^\tau \big[ u(x^s_{it+\tau}, a^s_{it+\tau} \mid \theta) + e(a^s_{it+\tau} \mid \hat P, x^s_{it+\tau}) \big]$$
Repeat this process S times. This gives us the simulated value of choosing $a_{it}$ in state $x_{it}$:
$$v_S(a_{it} \mid x_{it}, \hat P, \theta) = \frac{1}{S} \sum_s v^s(a_{it} \mid x_{it}, \hat P, \theta)$$
Let $\tilde v_S(a_{it} \mid x_{it}, \hat P, \theta) = v_S(a_{it} \mid x_{it}, \hat P, \theta) - v_S(1 \mid x_{it}, \hat P, \theta)$.

Note: If $u(x, a \mid \theta)$ is linear in θ, we need to run this simulation process only once.

Step 2b: Moment conditions:
$$E\Big[ W_{it} \big( \tilde v_n(a_{it} \mid x_{it}, \hat P) - \tilde v_S(a_{it} \mid x_{it}, \hat P, \theta) \big) \Big] = 0$$
where $W_{it}$ is a vector of instruments.
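Step 2a can be sketched on the replacement example. The CCP `P` below is an arbitrary smooth function standing in for the first-stage estimate $\hat P$, and all parameter values are assumptions. Each simulation re-seeds its own generator so that value comparisons across starting choices use common random numbers, a standard variance-reduction choice.

```python
import math, random

GAMMA = 0.5772156649015329

# HMSS forward simulation for the two-action replacement model (illustrative).
M, beta, theta_Y1, RC = 10, 0.95, -0.3, 4.0
P = [min(0.9, 0.05 + 0.08 * x) for x in range(M)]   # stand-in for P-hat(replace | x)

def flow(x, a):
    return -RC if a else theta_Y1 * x

def e_shock(a, x):
    # E[eps(a) | a chosen, x] = gamma - ln P(a|x) under type-1 EV shocks
    return GAMMA - math.log(P[x] if a else 1.0 - P[x])

def sim_value(x0, a0, T=100, S=500):
    """Average discounted value of S simulated paths starting from (x0, a0)."""
    total = 0.0
    for s in range(S):
        rng = random.Random(s)   # common random numbers across (x0, a0) pairs
        x, a, disc, v = x0, a0, 1.0, 0.0
        for t in range(T):
            v += disc * (flow(x, a) + e_shock(a, x))
            x = 1 if a else min(x + 1, M - 1)      # transition under the chosen action
            a = 1 if rng.random() < P[x] else 0    # next choice drawn from the CCP
            disc *= beta
        total += v
    return total / S
```

From $x_0 = 0$ both actions lead to the same next state, so with common random numbers the difference `sim_value(0, 0) - sim_value(0, 1)` equals the period-0 term exactly, simulation noise cancelling out.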
Importantly, setting up the moment conditions this way implies that the estimator will be consistent even with a finite number of simulation draws S. Why? The simulation error, $\tilde v(a_{it} \mid x_{it}, \hat P, \theta) - \tilde v_S(a_{it} \mid x_{it}, \hat P, \theta)$, is additive, and therefore vanishes as $n \to \infty$ (instead of $S \to \infty$).

However, the sampling error in $\hat P$ enters the moment conditions non-linearly, and can induce severe biases:
$$\ln\big( \hat P(a_{it} \mid x_{it}) + u_{it}(a) \big) - \ln\big( \hat P(1 \mid x_{it}) + u_{it}(1) \big) \neq \ln \hat P(a_{it} \mid x_{it}) - \ln \hat P(1 \mid x_{it}) + u_{it}$$
For instance, if $\hat P(a_{it} \mid x_{it}) = 0$, the objective function is not defined. HMSS present Monte Carlo experiments to illustrate the small-sample bias. It can be quite large.
Demand for storable goods

Key references: Pesendorfer (2002), Erdem, Imai, and Keane (2003), Hendel and Nevo (2006a), Hendel and Nevo (2006b).

If consumers can stockpile a storable good, they can benefit from "sales": purchase more than consumption when prices are low. This creates a difference between the purchasing and consumption elasticities:
- In the short run, store-level demand elasticities reflect stockpiling behavior.
- In the long run, store-level demand elasticities reflect consumption decisions.
Ignoring stockpiling and forward-looking behavior can lead to biases in own- and cross-price elasticities.

If consumers differ in their storage costs, firms have an incentive to price discriminate: the price-discrimination theory of sales. Ketchup pricing example (Pesendorfer 2002): [figure]
Hendel and Nevo (2006a): Estimate an inventory-control model with product differentiation.

Two challenges: (i) inventories are unobserved (a key state variable), and (ii) differentiation creates a dimensionality problem.

They propose simplifying assumptions that: (i) reduce the computational burden of the model (reduce the dimension), and (ii) simplify the estimation (a three-step algorithm).

Data:
- Scanner data: household panel with purchasing decisions from 9 supermarkets (supermarket choice is ignored).
- Store-level data: choice sets and product attributes (detergent), promotions, and prices.
Market shares and sales: [table]
Model setup:
- J differentiated brands.
- Weekly decisions: consumption $c_t$, purchase quantity $x_t \in \{0, 1, 2, 3, 4\}$, and brand $j_t$.
- Utility function:
$$U(c_t, v_t, i_{t+1}) = \gamma \ln(c_t + v_t) - \underbrace{\big[ \delta_1 i_{t+1} + \delta_2 i^2_{t+1} \big]}_{\text{inventory cost}} + m_t$$
where $m_t = \sum_{j,x} d_{jxt}(\beta a_{jxt} - \alpha p_{jxt} + \xi_{jxt} + \epsilon_{jxt})$ is the indirect utility from purchasing the outside good plus the value of the chosen brand.
- Inventory transition: $i_{t+1} = i_t - c_t + x_t$.
- Note: The function $m_t$ implies that tastes for brands enter only the purchasing stage, not the consumption stage. That is, the utility of consumption does not depend on the mix of brands in storage.
- State variables: $s_t = \{i_t, v_t, p_t, a_t, \xi_t, \epsilon_t\}$.

Two challenges:
1. Unobserved state variable: $i_t$ is unobserved by the econometrician.
2. Curse of dimensionality: $\{p_t, a_t, \xi_t, \epsilon_t\}$ is a $4 \times 5 \times J$-dimensional object.

Note: In Rust (1987), we also had unobserved state variables (i.e. the logit errors). However, we assumed they were conditionally independent of the observed states and additively separable, which allowed us to work with the Emax function. That is not the case here: $i_t$ is serially correlated and non-separable.
Model choice probability:
$$\Pr(d_{jxt} = 1 \mid s_t) = \frac{\exp(\beta a_{jxt} - \alpha p_{jxt} + \xi_{jxt} + M(x, s_t))}{\sum_{x',j'} \exp(\beta a_{j'x't} - \alpha p_{j'x't} + \xi_{j'x't} + M(x', s_t))}$$
where
$$M(x, s_t) = \max_c \gamma \ln(c + v_t) - \big[ \delta_1 i_{t+1} + \delta_2 i^2_{t+1} \big] + \beta E[V(s_{t+1}) \mid x, c, s_t] \qquad \text{s.t. } i_{t+1} = i_t - c + x$$

Note that conditional on the purchase size x, the brand choice is a static problem:
$$\Pr(d_{jt} = 1 \mid s_t, x_t = x) = \frac{\exp(\beta a_{jxt} - \alpha p_{jxt} + \xi_{jxt} + M(x, s_t))}{\sum_{j'} \exp(\beta a_{j'xt} - \alpha p_{j'xt} + \xi_{j'xt} + M(x, s_t))} = \frac{\exp(\beta a_{jxt} - \alpha p_{jxt} + \xi_{jxt})}{\sum_{j'} \exp(\beta a_{j'xt} - \alpha p_{j'xt} + \xi_{j'xt})}$$
This probability can be estimated by MLE as a multinomial logit model. This is the first step of the estimation procedure.

Endogenous prices and advertising? Use brand/size fixed effects, such that $\xi_{jxt} = \xi_{jx}$.
How to deal with the dimensionality problem?

Since the brand choice is static, we can write the expected utility (i.e. Emax) conditional on size:
$$\omega_t(x) = E\Big( \max_j \beta a_{jxt} - \alpha p_{jxt} + \xi_{jxt} + \epsilon_{jt} \Big) = \ln \sum_{j=1}^{J} \exp(\beta a_{jxt} - \alpha p_{jxt} + \xi_{jxt})$$
McFadden (1978) calls $\omega_t(x)$ the inclusive value. It summarizes the information contained in the vector of prices and characteristics available in period t, very much like a quality-adjusted price index.

The dynamic programming problem is then re-defined:
$$V(i_t, v_t, \omega_t, \epsilon_t) = \max_{c,x} \gamma \ln(c + v_t) - C(i_{t+1}) + \omega_t(x) + \epsilon_{xt} + \beta E[V(i_{t+1}, v_{t+1}, \omega_{t+1}, \epsilon_{t+1}) \mid i_t, v_t, \omega_t, \epsilon_t, x, c]$$

To be able to define the state vector solely as $(i_t, v_t, \omega_t, \epsilon_t)$, we need to impose an additional assumption on the transition process of $\omega_t$:
$$\Pr(\omega_{t+1} \mid a_t, p_t, \xi_t) = \Pr(\omega_{t+1} \mid \omega_t)$$
This assumption has been called the "inclusive-value sufficiency" assumption (Gowrisankaran and Rysman 2012).
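Computing an inclusive value is a one-line log-sum. The coefficients, prices, advertising levels, and fixed effects below are all made-up numbers for a single size x with three brands.

```python
import math

# Inclusive value omega_t(x): log-sum over brands of static purchase utilities.
beta_a, alpha = 0.5, 1.2          # advertising and price coefficients (assumed)
a_jx = [0.0, 1.0, 0.3]            # advertising by brand (assumed)
p_jx = [2.0, 2.5, 1.8]            # prices by brand (assumed)
xi_jx = [0.1, -0.2, 0.0]          # brand/size fixed effects (assumed)

utils = [beta_a * a - alpha * p + xi for a, p, xi in zip(a_jx, p_jx, xi_jx)]
omega = math.log(sum(math.exp(u) for u in utils))   # omega_t(x) = ln sum_j exp(u_jxt)
```

The whole $3 \times$(prices, advertising, fixed effects) block collapses into the single scalar `omega`, which is exactly how the state space shrinks.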
Step 2: Estimate the Markov process for $\omega_t(x)$. [figures]
Step 3: Dynamic decisions

Conditional on buying size x, the optimal consumption path solves the following continuous DP problem:
$$v(x \mid s_t) = \max_{c \geq 0} \gamma \ln(c + v_t) - C(i_t - c + x) + \beta E[V(s_{t+1}) \mid s_t, x, c]$$
and the value function takes the familiar form:
$$\bar V(s_t) = \ln\Big( \sum_{x=0}^{4} \exp(v(x \mid s_t)) \Big)$$
The CCP used in the estimation of the dynamic purchasing decision model is:
$$\Pr(x_{it} \mid s_{it}) = \frac{\exp(v(x_{it} \mid s_{it}))}{\sum_{x'} \exp(v(x' \mid s_{it}))}$$

Remaining problem: $i_t$ is unobserved. Solution: integrate out the unobserved initial inventory levels. How?
- Let $i_{i0} = 0$. Simulate the model for the first $T_0$ weeks of the data.
- Record the inventory level at $T_0$: $i_{T_0}$ (i.e. the initial state variable).
- Evaluate the likelihood from weeks $T_0$ to T, as if inventory levels were observed.
- Repeat the process S times, and average the likelihood contribution of individual i.
Implicit assumption: the process is stationary at time $T_0$.
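The initialization step can be sketched as below. The purchase and consumption rules are made-up stand-ins for the model's actual policy functions; the point is the mechanics of burning in from $i_0 = 0$ and collecting S draws of the initial inventory.

```python
import random

# Simulate inventories forward from i_0 = 0 for T0 burn-in weeks (illustrative).
random.seed(4)
T0, S = 52, 500

def simulate_initial_inventory():
    i = 0.0
    for t in range(T0):
        p_buy = max(0.05, 0.6 - 0.1 * i)                            # buy more often when stock is low
        x = random.choice([1, 2, 3, 4]) if random.random() < p_buy else 0
        c = min(i + x, 1.0)                                         # consume up to one unit per week
        i = i - c + x                                               # inventory transition i' = i - c + x
    return i

draws = [simulate_initial_inventory() for _ in range(S)]            # S draws of i_T0
avg_i0 = sum(draws) / S
```

Each element of `draws` would seed one evaluation of the likelihood from week $T_0$ onward, and the S likelihood contributions are then averaged.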
Two differences between the static and dynamic models:
1. The static model implies a larger price coefficient,
2. and ignores the inventory problem.
Both lead to a larger own-price elasticity in the static model than the long-run own-price elasticity in the dynamic model. Point two leads to a lower cross-price elasticity (in the static model).

Why? In the data, the response to sales mostly comes from people going from not buying to buying the brand on sale. This leads the static model to predict small cross-price elasticities with respect to other products, and a large cross-price elasticity with respect to the outside good. Or, more mechanically: in the static model, the choice probabilities are all relatively small, since consumers buy detergent infrequently. The logit cross-partials are equal to the product of the two choice probabilities: hence small cross-elasticities.

The dynamic model rationalizes this phenomenon through high unobserved inventories and conditional choice probabilities (conditional on purchasing). Everybody needs detergent...
Demand for experience goods and brand loyalty

Introduction: Measuring state dependence in consumer choice behavior. In marketing and economics it is frequently observed that demand for some products exhibits time dependence:
$$\Pr(d_{ijt} = 1 \mid d_{ijt-1} = 1) > \Pr(d_{ijt} = 1 \mid d_{ijt-1} = 0)$$
Examples: switching costs, brand loyalty, persistent unobserved heterogeneity.

Many papers have been written to empirically distinguish between true state dependence and unobserved heterogeneity (e.g. Keane (1997)):
$$U_{ijt} = X_{ijt}\beta + \lambda H_{ijt} + \epsilon_{ijt} \quad (4)$$
where
$$\epsilon_{ijt} = \rho \epsilon_{ijt-1} + \nu_{ijt}, \qquad H_{ijt} = \sum_{\tau=t_0}^{t-1} g(\tau) d_{ij\tau}$$

Empirical challenges:
- Initial condition problem (i.e. $d_{t_0}$ is endogenous and/or unobserved).
- The presence of persistent unobserved heterogeneity can generate spurious state dependence.
- Multi-dimensional integration when the $\epsilon_{ijt}$ are correlated across options and time:
$$\Pr(d_i \mid d_{t_0}) = \Pr(U_{ijt} > U_{ikt},\ \forall k \neq j \text{ s.t. } d_{ijt} = 1,\ t = 1 \ldots T) = \Pr\big( \epsilon_{ijt} - \epsilon_{ikt} > -(X_{ijt} - X_{ikt})\beta - (H_{ijt} - H_{ikt})\lambda,\ \forall k \neq j \text{ s.t. } d_{ijt} = 1,\ t = 1 \ldots T \big)$$
i.e. the dimension of integration is $T(J-1)$. This cannot be evaluated with standard methods; we must use simulation.
GHK simulator

Simpler cross-sectional example with 4 choices:
$$U_{ij} = X_{ij}\beta + \epsilon_{ij}, \qquad \epsilon_{ij} \sim N(0, \Omega) \quad (5)$$
Then the probability of choosing option 4 is:
$$\Pr(d_{i4} = 1) = \Pr\big( \epsilon_{ij} - \epsilon_{i4} < (X_{i4} - X_{ij})\beta,\ \forall j \neq 4 \big) = \int_{A_1} \int_{A_2} \int_{A_3} dF(\epsilon_{i1} - \epsilon_{i4},\ \epsilon_{i2} - \epsilon_{i4},\ \epsilon_{i3} - \epsilon_{i4})$$
where $A_j = \{\epsilon_{ij} - \epsilon_{i4} < (X_{i4} - X_{ij})\beta\}$.

Let us redefine the variables to compute the probability of choosing option 4:
$$\nu_{ij} = \epsilon_{ij} - \epsilon_{i4}, \qquad \tilde X_{ij} = X_{i4} - X_{ij}, \qquad \nu \sim N(0, \Sigma), \quad \Sigma_{3 \times 3} = CC', \quad \nu = C\eta, \ \eta \sim N(0, I)$$

Standard Monte Carlo integration (i.e. accept/reject):
1. Draw M vectors $\eta^m_i \sim N(0, I)$.
2. Keep draw m if $C\eta^m_i \in A_1 \cap A_2 \cap A_3$; otherwise reject.
3. Compute the simulated choice probability:
$$\hat\Pr(d_{i4}) = \frac{\text{Number of accepted draws}}{M} \quad (6)$$

Problems and limitations:
- Non-smooth simulator (i.e. cannot use gradient methods).
- Requires a high number of draws to avoid $\hat\Pr(d_{i4}) = 0$.
If the dimension of integration is large, this is infeasible (e.g. panel data). We can alleviate the non-smoothness problem by smoothing the accept/reject probability:
$$\hat\Pr(d_{i4}) = \frac{1}{M} \sum_m \prod_k \frac{1}{1 + \exp\big( -(\tilde X_{ik}\beta - \nu^m_{ik})/\rho \big)} \quad (7)$$
but this is still very biased if M is too small.

The GHK simulator avoids the main problems of the standard MC-AR method by drawing only from the accepted region. Recall that:
$$C = \begin{pmatrix} c_{11} & 0 & 0 \\ c_{12} & c_{22} & 0 \\ c_{13} & c_{23} & c_{33} \end{pmatrix}, \qquad \nu = C\eta$$

In order to compute $\Pr(d_{i4} = 1)$ we proceed sequentially:
1. Draw $\nu^m_{i1}$: compute $\Phi_{i1} = \Pr(\nu_{i1} < \tilde X_{i1}\beta) = \Phi(\tilde X_{i1}\beta / c_{11})$, and draw $\eta^m_1$ from the truncated distribution $\eta_1 < \tilde X_{i1}\beta / c_{11}$. How?
   (a) Draw $\lambda_i \sim U[0, 1]$.
   (b) Set $\lambda_{i1} = \lambda_i \Phi_{i1}$.
   (c) Set $\eta^m_{i1} = \Phi^{-1}(\lambda_{i1})$.
   (d) Finally, $\nu^m_{i1} = c_{11}\eta^m_{i1}$.
2. Draw $\nu^m_{i2}$ from a truncated normal (conditional on $\eta^m_{i1}$): the constraint $\nu^m_2 = c_{12}\eta^m_1 + c_{22}\eta_2 < \tilde X_{i2}\beta$ implies
$$\Phi_{i2} = \Phi\Big( \frac{\tilde X_{i2}\beta - c_{12}\eta^m_{i1}}{c_{22}} \Big)$$
Thus we first draw $\eta^m_{i2}$ from the truncated normal $\eta_2 < (\tilde X_{i2}\beta - c_{12}\eta^m_{i1})/c_{22}$ as before, and compute $\nu^m_{i2} = c_{12}\eta^m_{i1} + c_{22}\eta^m_{i2}$.
3. Compute $\Phi_{i3}$ similarly.
4. Compute $\Pr(d_{i4} = 1)$:
$$\hat\Pr(d_{i4} = 1) = \frac{1}{M} \sum_m \Phi\Big( \frac{\tilde X_{i1}\beta}{c_{11}} \Big) \Phi\Big( \frac{\tilde X_{i2}\beta - c_{12}\eta^m_{i1}}{c_{22}} \Big) \Phi\Big( \frac{\tilde X_{i3}\beta - c_{13}\eta^m_{i1} - c_{23}\eta^m_{i2}}{c_{33}} \Big)$$

Advantages of the GHK:
- Highly accurate even for high-dimensional integrals
- Differentiable
- Requires fewer draws

To apply this to panel data with AR(1) correlation in the $\epsilon_{ij}$ and arbitrary correlation across options, we need to simulate the probability of observing a sequence of choices $\{j_{it}\}_{t=1,\ldots,T}$. The algorithm is the same... just longer (i.e. there are $(J-1)T$ sequential draws to make for each m and i). To see this, redefine the variables in the following way:
$$\tilde U_{kt} = U_{kt} - U_{j_t t} \ \forall t, \qquad \tilde\epsilon_{kt} = \epsilon_{kt} - \epsilon_{j_t t} \ \forall t$$
where $\tilde\epsilon \sim N(0, \tilde\Sigma)$, and $\tilde\Sigma = CC'$ is a $(J-1)T \times (J-1)T$ covariance matrix appropriately transformed to reflect the vector of choices $j_t$, $t = 1 \ldots T$. With this transformation, individual i chooses $j_{it}$ iff $\tilde U_{kt} \leq 0$ for all k, and $\tilde\epsilon_i = C\eta_i$.
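A two-dimensional GHK step can be coded directly, and checked against the crude accept/reject simulator it replaces. The bounds `b1`, `b2` stand in for $\tilde X_{ij}\beta$, and the Cholesky entries follow the labeling in the notes ($\nu_1 = c_{11}\eta_1$, $\nu_2 = c_{12}\eta_1 + c_{22}\eta_2$); all numbers are illustrative.

```python
import random
from statistics import NormalDist

# GHK estimate of Pr(nu1 < b1, nu2 < b2), nu = C eta with eta ~ N(0, I).
nd = NormalDist()
c11, c12, c22 = 1.0, 0.5, 0.8     # Cholesky entries (assumed)
b1, b2 = 0.3, -0.2                # upper bounds, standing in for X-tilde * beta

random.seed(5)
S = 20000
ghk = 0.0
for s in range(S):
    q1 = nd.cdf(b1 / c11)                     # Pr(eta1 in the accepted region)
    u = 1.0 - random.random()                 # uniform in (0, 1]
    eta1 = nd.inv_cdf(u * q1)                 # truncated-normal draw of eta1
    q2 = nd.cdf((b2 - c12 * eta1) / c22)      # conditional acceptance prob for eta2
    ghk += q1 * q2                            # product of truncation probabilities
ghk /= S

# Crude accept/reject estimate of the same probability, for comparison
random.seed(6)
hits = 0
for _ in range(200000):
    e1, e2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    if c11 * e1 < b1 and c12 * e1 + c22 * e2 < b2:
        hits += 1
ar = hits / 200000
```

The two estimates agree, but every GHK draw contributes a smooth, strictly positive weight, whereas the accept/reject estimator wastes rejected draws and is a step function of the parameters.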
2: Draw η_i21^m from a truncated normal such that Ũ_i21(η_i11^m, η_i21^m) < 0.
...
j_i1: Skip the chosen option (it has no differenced error to draw).
...

Period t: Draw ɛ̃_it^m:
1: Draw η_i1t^m from a truncated normal such that Ũ_i1t(η_i11^m, ..., η_i,J−1,t−1^m, η_i1t^m) < 0.
...
j_it: Skip the chosen option.
...

Finally, compute Pr(j_i) by taking the product of the truncation probabilities of each component and averaging over m.

Keane (1997) estimates many different specifications of equation 4 by simulated maximum likelihood with the GHK simulator, and finds strong evidence of true state dependence for ketchup. Erdem and Keane (1996), Ackerberg (2003), and Crawford and Shum (2005) all estimate structural models of quality uncertainty that generate this type of brand loyalty or switching cost endogenously.
Erdem and Keane (1996): Decision-Making under Uncertainty

Estimate a dynamic Bayesian learning model of demand for detergent using the A.C. Nielsen public scanner data set (roughly 3,000 households over 3 years). The data sets for detergent, ketchup, margarine, and canned soup are publicly available.

Why?
- Market with frequent brand introductions. Reasonable to assume that consumers learn about the quality of a product only through experience and advertising.
- Provides an economic interpretation for brand loyalty: if consumers are risk averse, experimenting too frequently is costly (i.e. an endogenous switching cost).
- Measures the information content of advertising messages (see also Ackerberg (2001) and Ackerberg (2003)).
Model

Finite-horizon DP problem:

V_j(I(t), d_j = 1) = E[U_j(t)|I(t)] + e_jt + β E[V(I(t+1)) | I(t), d_j(t)]   if t < T
V_j(I(T), d_j = 1) = E[U_j(T)|I(T)] + e_jT
V(I(t)) = max_{d_j, j=1..J} V_j(I(t), d_j)

where the expectation is taken over the evolution of the information set I(t) and the idiosyncratic shock e_jt.

Components of the expected utility:

1. Attribute of good j if experienced at t: A^E_jt = A_j + δ_jt.

2. Utility after realization of δ_jt and e_jt:

   U_jt = ω_p P_jt + ω_A A^E_jt − ω_A r (A^E_jt)² + e_jt

   where r = risk-aversion coefficient, e_jt = logit utility shock, and P_jt = price of j (stochastic).

3. Expected utility:

   E[U_jt|I(t)] = ω_p P_jt + ω_A E[A^E_jt|I(t)] − ω_A r E[(A^E_jt)²|I(t)] + e_jt
                = ω_p P_jt + ω_A E[A^E_jt|I(t)] − ω_A r (E[A^E_jt|I(t)])²
                  − ω_A r E[ (A^E_jt − E[A^E_jt|I(t)])² | I(t) ] + e_jt

Outside options:

E[U_0t|I(t)] = Φ_0 + Ψ_0 t + e_0t
E[U_NP,t|I(t)] = Φ_NP + Ψ_NP t + e_NP,t
Signals and components of the information set:

1. Experience: A^E_jt = A_j + δ_jt, δ_jt ~ N(0, σ_δ²). The experience signals are unbiased: δ is mean zero.

2. Priors on A_j: A_j ~ N(A̅, σ_ν(0)²).

3. Advertising message (received with probability p_j^S, estimated from the data): S_jt = A_j + η_jt, η_jt ~ N(0, σ_η²). The information content of advertising messages is measured by σ_η². Advertising is exogenous and strictly informative: η_jt is mean zero.

Bayesian updating: the expectation about good j's attribute is updated as

E[A_j|I(t)] = E[A_j|I(t−1)] + d_jt β_1jt [ A^E_jt − E[A^E_jt|I(t−1)] ] + ad_jt β_2jt [ S_jt − E[S_jt|I(t−1)] ]

where the updating weights are given by:

β_1jt = σ_νj(t)² / (σ_νj(t)² + σ_δ²)   and   β_2jt = σ_νj(t)² / (σ_νj(t)² + σ_η²)

From the econometrician's point of view (since A_j is a parameter), we can rewrite the problem in terms of expectation errors ν_j(t) = E[A^E_jt|I(t)] − A_j. This generates a first-order Markov process in the payoff-relevant state variables:

ν_j(t) = ν_j(t−1) − d_jt β_1jt [ ν_j(t−1) − δ_jt ] − ad_jt β_2jt [ ν_j(t−1) − η_jt ]   (8)

with ν_j(0) = A̅ − A_j for all j.
Similarly, the precision of beliefs is updated using the following Markov process:

σ_νj(t)² = [ 1/σ_ν(0)² + (Σ_{s≤t} d_js)/σ_δ² + (Σ_{s≤t} ad_js)/σ_η² ]^{-1}   (9)

Timing:
t = 0: Purchasing decision based on ν_j(0). New signals: δ_j0 (if d_j0 = 1) and η_j0 (if ad_j0 = 1). Update ν_j(1) according to equation 8.
t = 1: Purchasing decision based on ν_j(1). New signals: δ_j1 (if d_j1 = 1) and η_j1 (if ad_j1 = 1). Update ν_j(2) according to equation 8.
...

Therefore, at any period t the payoff-relevant state variables are:

I(t) = { Σ_{s≤t−1} d_js, Σ_{s≤t−1} ad_js, ν_jt }_{j=1...J}

where the cumulative purchase and advertising counts are discrete, and ν_jt is continuous.

Solution/Estimation: Nested fixed-point estimation algorithm where the DP is solved by backward induction. Challenges:

- Size of the state space: impossible to solve exactly.
- Choice probabilities:

  Pr_j(I(t)) = ∫ [ exp( EU_j(I(t)) + β E[V(I(t+1), t+1) | d_j = 1] ) / Σ_k exp( EU_k(I(t)) + β E[V(I(t+1), t+1) | d_k = 1] ) ] f(ν) dν

  Integration is complicated by the fact that ν_jt is serially correlated: one must integrate over the sequence of past ν_js using simulation methods, i.e. simulate M sequences of ν_jt, just like in Hendel and Nevo (2006a).
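Equations 8 and 9 define a simple recursion that can be simulated directly. The sketch below is illustrative (a single brand, hypothetical function and argument names); in particular, evaluating both updating weights at the start-of-period variance is an assumption of this sketch.

```python
import numpy as np

def simulate_perception_errors(A, Abar, sigma_d, sigma_eta, sigma_nu0, d, ad, seed=0):
    """Simulate the perception-error process nu_j(t) for one brand:
    nu(0) = Abar - A; each purchase (d_t = 1) or ad exposure (ad_t = 1)
    shrinks nu toward zero with the weights beta_1, beta_2 of eq. (8),
    while the belief precision accumulates as in eq. (9)."""
    rng = np.random.default_rng(seed)
    nu = Abar - A                          # initial expectation error
    prec = 1.0 / sigma_nu0**2              # prior precision 1 / sigma_nu(0)^2
    path = []
    for t in range(len(d)):
        var = 1.0 / prec
        b1 = var / (var + sigma_d**2)      # weight on the experience signal
        b2 = var / (var + sigma_eta**2)    # weight on the advertising signal
        delta = rng.normal(0.0, sigma_d)   # experience signal noise
        eta = rng.normal(0.0, sigma_eta)   # advertising signal noise
        nu = nu - d[t] * b1 * (nu - delta) - ad[t] * b2 * (nu - eta)   # eq. (8)
        prec += d[t] / sigma_d**2 + ad[t] / sigma_eta**2               # eq. (9)
        path.append(nu)
    return np.array(path), 1.0 / prec      # error path and terminal variance
```

The deterministic precision recursion is the useful check here: after four purchases with unit signal and prior variances, the terminal variance is 1/(1 + 4) = 0.2 regardless of the signal draws.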
- Initial-conditions problem: We do not observe the initial purchasing history and attribute expectations (a problem if the products are not newly introduced, as in Ackerberg (2003)). Solution: Set I(0) = {0, 0, ν_j0}_{j=1,...,J}, and simulate the model over the first two years of the data. Use the last two years for the estimation (i.e. T = 100 weeks).
Solution Method: Keane and Wolpin (1994)

General idea: Solve the value function exactly only at a subset I*(t) of the states, and interpolate between them using least squares to compute EV(I(t)) at points I(t) ∉ I*.

Backward-induction algorithm for a fixed grid I*:

T:
1. Calculate EV_T(I(T)) for all I(T) ∈ I*:

   EV_T(I(T)) = ∫ max_j { EU_jT(I(T)) + e_jT } dF(e) = log( Σ_j exp(EU_jT(I(T))) )

2. Run the following regression:

   EV_T(I(T)) = G(I(T)) θ_T + u   ⟹   ÊV_T(I(T)) = G(I(T)) θ̂_T

   where G(I(T)) is a vector containing flexible transformations of the state variables, and u is a regression error. Note: for the approximation to work, the R² of these regressions must be very high. Alternative methods exist to improve the quality of the interpolation (e.g. kernels, local polynomials, etc.).

T − 1:
1. Draw M random vectors: {δ_1^m, ..., δ_J^m, η_1^m, ..., η_J^m, ad_1^m, ..., ad_J^m}.
2. For each state I(T−1) ∈ I*, compute the expected value of choosing brand j in T−1:

   E[ V_T(I(T)) | I(T−1), d_jT−1 = 1 ] = (1/M) Σ_m ÊV_T(I^m(T))

   where I^m(T) is the state corresponding to the m-th draw and d_jT−1 = 1. If I^m(T) ∈ I*, use the exact solution; otherwise use G(I^m(T)) θ̂_T.
3. For each I(T−1) ∈ I*, calculate EV_{T−1}(I(T−1)):

   EV_{T−1}(I(T−1)) = log( Σ_j exp( EU_{jT−1}(I(T−1)) + β E[ V_T(I(T)) | I(T−1), d_jT−1 = 1 ] ) )

4. Run the following regression:

   EV_{T−1}(I(T−1)) = G(I(T−1)) θ_{T−1} + u

... Repeat steps 1-4 for the remaining periods t = T−2, ..., 0.

Note: In Erdem and Keane (1996), G(I(t)) includes the expected attribute level of each brand and the perception-error variances.

To estimate the model, it needs to be solved with the interpolation/simulation algorithm at each candidate parameter value. The choice probabilities are computed by simulating a sequence of states for each household.
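The period-T steps of this algorithm — exact Emax on the grid plus a least-squares fit — can be sketched as follows. This is an illustrative implementation; the specific polynomial basis G(·) (constant, linear, and quadratic terms) is an assumption of the sketch, not Erdem and Keane's exact specification.

```python
import numpy as np

def fit_emax_approx(states, eu):
    """Keane-Wolpin step at T: compute Emax exactly on the grid of states
    (log-sum-exp closed form under EV1 shocks), then fit it to flexible
    polynomial terms G(I) by least squares for use off the grid.

    states: (N, K) grid of state variables; eu: (N, J) expected utilities.
    Returns (theta, predict), where predict(S) interpolates Emax at states S."""
    emax = np.log(np.exp(eu).sum(axis=1))    # EV_T(I) = log sum_j exp(EU_jT)

    def G(S):                                # basis: constant, linear, quadratic
        S = np.atleast_2d(S)
        return np.column_stack([np.ones(len(S)), S, S**2])

    theta, *_ = np.linalg.lstsq(G(states), emax, rcond=None)
    return theta, lambda S: G(S) @ theta
```

In practice one checks the in-sample R² of this regression, which must be very high for the interpolation to be trusted at states off the grid.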
Results
- The risk-aversion coefficient is large and negative: important switching cost of experimenting.
- The variance of the advertising signal is much larger than the variance of the experience signal: consumers don't get much from TV ads!
- Initial priors are very precise: little uncertainty in this market.
- The structural model fits better...
Crawford and Shum (2005): Uncertainty and Learning in Pharmaceutical Demand

- Across several treatment lengths and spell transitions, there is a marked decreasing trend in the switching probability at the very beginning of treatment.
- Two forces explain these switching probabilities in the Bayesian learning model: initial experimentation and risk aversion.
- Forward-looking behavior: incentive for patients (or doctors) to acquire more information by experimenting.
Model

A treatment is characterized by two match values (contrary to only one in Erdem and Keane (1996)):
- μ_jn: symptomatic effects or side effects (enters the utility directly)
- ν_jn: curative properties (enters the recovery probability)

Signals about the match values if patient j uses drug n at t:

x_jnt ~ N(μ_jn, σ_n²)
y_jnt ~ N(ν_jn, τ_n²)

Initial priors about the match values:

μ_jn ~ N(μ̄_nk, σ̄_n²)
ν_jn ~ N(ν̄_nk, τ̄_n²)

where k indexes the severity type of the patient (learned perfectly from the initial diagnostic).

Expected utility (CARA):

u(x_jnt, p_n, ɛ_jnt) = −exp(−r x_jnt) − α p_n + ɛ_jnt
ẼU(μ_jn(t), ν_jn(t), p_n, ɛ_jnt) = −exp( −r μ_jn(t) + (r²/2)(σ_n² + V_jn(t)) ) − α p_n + ɛ_jnt
                                 = EU(μ_jn(t), V_jn(t), p_n) + ɛ_jnt

The recovery probability follows a Markov process in which the odds ratio is updated with the curative signal:

h_j(t) = [ h_j(t−1)/(1 − h_j(t−1)) + d_jnt y_jnt ] / [ 1 + h_j(t−1)/(1 − h_j(t−1)) + d_jnt y_jnt ]
The beliefs about the symptomatic and curative match values, μ_jn(t+1) and ν_jn(t+1), are updated with Bayesian updating formulas of the same form as equations 8 and 9 (as in Erdem and Keane).

State space:

s_jt = { μ_jn(t), ν_jn(t), l_jn(t), h_j(t) }_{n=1...5}

where l_jn(t) = Σ_{s<t} d_jns is the experience count with drug n.

Value function: infinite-horizon problem with an absorbing state (i.e. recovery):

V(s) = E max_n { EU_n(s) + ɛ_n + β E[ (1 − h(s′)) V(s′) | d_n = 1, s ] }
     = log Σ_n exp( EU_n(s) + β E[ (1 − h(s′)) V(s′) | d_n = 1, s ] )

Solution method: value-function iteration with interpolation and simulation (i.e. Keane and Wolpin (1994)):

1. Define a discrete grid S* ⊂ S.
2. For each state s ∈ S*, make an initial guess at the value function V⁰(s).
3. Run the regression: V⁰(s) = G(s) θ⁰ + u_s.
4. Draw M random signals {x_jn^m, y_jn^m}.
5. Compute the expected value of choosing drug n for each s ∈ S*:

   E[ V(s′) | d_n = 1, s ] = (1/M) Σ_m (1 − h(s^m)) V⁰(s^m)

   where s^m is the state corresponding to random draw m with drug n being chosen, and V⁰(s^m) is evaluated with the interpolation equation if necessary.
6. Update the value function for each s ∈ S*:

   V¹(s) = log Σ_n exp( EU_n(s) + β E[ V(s′) | d_n = 1, s ] )

7. Repeat steps 3-6 until convergence of the value function at the grid points.

Results
- Large risk-aversion coefficient: important switching costs.
- Types are horizontally differentiated.
- Policy experiments:
  - Concentration increases in the level of uncertainty (uncertainty acts like a switching cost).
  - The pooling of types (i.e. a poor initial diagnostic) decreases concentration (i.e. products become less differentiated).
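The fixed point in steps 2-7 can be illustrated on a degenerate special case in which the state is persistent (no signal draws, so the simulation in steps 4-5 collapses to the current grid point). This is only a sketch of the absorbing-state contraction, not the full Crawford-Shum algorithm, and the function name is hypothetical.

```python
import numpy as np

def solve_value_grid(eu, h, beta=0.95, tol=1e-10):
    """Value-function iteration for the absorbing-state Bellman equation
    V(s) = log sum_n exp( EU_n(s) + beta * (1 - h(s)) * V(s) )
    on a discrete grid where (for this sketch) the state is persistent.

    eu: (N, J) expected utilities; h: (N,) recovery probabilities."""
    V = np.zeros(len(eu))
    while True:
        cont = beta * (1.0 - h) * V               # continue only if not recovered
        V_new = np.log(np.exp(eu + cont[:, None]).sum(axis=1))
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```

Because recovery occurs with probability h(s), the effective discount factor β(1 − h) is below one, so the iteration is a contraction; with h = 1 the problem is static and V reduces to the log-sum-exp of flow utilities.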
60 Figure 1: Results 60
Demand for durable goods

Motivation: In many settings, the timing of purchase is as important as what you purchase. In durable-goods markets, this is true because the quality and price of products available at any point in time vary less than the quality and price of products offered across time. Why? The introduction of new products pushes the technology frontier upwards, and firms often respond with dynamic pricing strategies (i.e. discounting products that are about to be discontinued).

Challenges: (i) micro data on product replacement is rare, and (ii) dynamic replacement models with product differentiation have very large state spaces.

Examples: Gordon (2010), Gowrisankaran and Rysman (2012)
Model: Gowrisankaran and Rysman (2012)

Indirect utility over product characteristics (time subscripts omitted):

u_ij = β_i x_j + ξ_j + ε_ij = δ_j + μ_ij + ε_ij   (10)
u_i0 = β_i x_0 + ε_i0   (11)

Note: product 0 is either the outside option (not using a camcorder) or the last camcorder purchased. GR assume that products do not depreciate and that there are no sunk replacement costs.

Bellman equation: multinomial choice a_i ∈ A_i = {0, 1, ..., J}, where J is the current number of products available. Common state space: s_i = {u_i0, x_1, ..., x_J, p_1, ..., p_J}.

Expected Bellman equation:

V_i(s_i) = E_ε[ max_{a ∈ A_i} u_ia − α_i p_ia + ε_ia + β E_{s′}( V_i(s′) | s_i, a ) ]
         = E_ε[ max{ v_i(0|s_i) + ε_i0, max_{a ∈ A\0} v_i(a|s_i) + ε_ia } ]

where p_ia = 0 if a = 0, and v_i(a|s_i) = u_ia − α_i p_ia + β E_{s′}( V_i(s′) | s_i, a ).

Curse of dimensionality: (i) the number of grid points needed to approximate V_i(s_i) grows exponentially with the number of products, and (ii) the calculation of the continuation value involves a high-dimensional integral E_{s′}( V_i(s′) | s ).
Dimension-reduction assumption: Recall that if ε_ij is distributed type-1 extreme value with location/spread parameters (0, 1), then max_{a ∈ A\0} v_i(a|s_i) + ε_ia is also EV1-distributed, with location parameter

ω_i(s_i) = log Σ_{a ∈ A\0} exp( v_i(a|s_i) )

and spread parameter 1. ω_i(s_i) is the expected discounted value of buying the preferred camcorder available today, instead of holding on to the current one (i.e. the outside option). Therefore, we can rewrite the expected Bellman equation as:

V_i(s_i) = E_ε[ max{ v_i(0|s_i) + ε_i0, ω_i(s_i) + ε_i1 } ] = log( exp(v_i(0|s_i)) + exp(ω_i(s_i)) )

So far, we are simply rewriting the problem so that it looks like a sequential decision problem: (i) replace or not, and (ii) which new product to buy conditional on replacing. The value of replacing is ω_i(s_i).

In order to reduce the dimension of the state space, GR introduce a new assumption:

Inclusive value sufficiency (IVS): If ω_i(s_i) = ω_i(s̃_i), then g_ω(ω_i(s′) | s_i) = g_ω(ω_i(s′) | s̃_i), where g_ω(x | s_i) is the density of the option value ω. Importantly, this implies that if two different states have the same option value ω, then they also have the same value function.

In addition, we assume that ω_i evolves over time according to an AR(1) process:

ω_{i,t+1} = ρ_{i,0} + ρ_{i,1} ω_{i,t} + η_{i,t+1}

where η_{i,t} is a mean-zero shock.
The IVS assumption means that consumers can use the scalar ω to forecast the future and evaluate the relative value of buying now versus waiting.

New sufficient state space: s̃_i = (u_i0, ω_i). New expected value function:

V_i(u_i0, ω_i) = E_ε[ max{ u_i0 + β E_ω[ V_i(u_i0, ω′) | ω_i ] + ε_i0, ω_i + ε_i1 } ]
              = log( exp( u_i0 + β E_ω[ V_i(u_i0, ω′) | ω_i ] ) + exp(ω_i) )
              = Γ_i(u_i0, ω_i | V_i)

where Γ_i(· | V_i) is a contraction mapping from V_i to V_i.
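The contraction Γ_i can be sketched on a discretized ω grid. This is an illustrative implementation: the Markov transition matrix that discretizes the AR(1) process for ω (e.g. via Tauchen's method) is taken as given, and the function name and grid below are hypothetical.

```python
import numpy as np

def solve_holding_value(u0, omega_grid, trans, beta=0.96, tol=1e-10):
    """Iterate V(u0, omega) = log( exp(u0 + beta * E[V(u0, omega') | omega])
    + exp(omega) ) to its fixed point on a grid of omega values.

    omega_grid: (G,) grid; trans: (G, G) row-stochastic transition matrix
    discretizing the AR(1) process for omega."""
    V = np.zeros(len(omega_grid))
    while True:
        EV = trans @ V                            # E[V(u0, omega') | omega]
        V_new = np.logaddexp(u0 + beta * EV, omega_grid)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```

Because the log-sum-exp map has modulus at most β in V, the iteration converges from any starting point, which is what makes Γ_i a contraction.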
Equilibrium conditions:

1. Rational expectations: ω_{i,t+1} = ρ_{i,0} + ρ_{i,1} ω_{i,t} + η_{i,t+1}

2. Bellman equation: V_i(u_i0, ω_i) = log( exp( u_i0 + β E_ω[ V_i(u_i0, ω′) | ω ] ) + exp(ω_i) )

3. Consistency of the inclusive value:

   ω_i = log Σ_{a ∈ A\0} exp( u_ia − α_i p_a + β E_ω[ V_i(u_ia, ω′_i) | ω_i ] )

4. Aggregate market shares:

   ŝ_jt = s_jt(δ, θ) = (1/S) Σ_i Σ_{j0} P_j( u_{i,j0}, ω_it | α_i, β_i ) w_{j0}(α_i, β_i)

   where P_j(u_{i,j0}, ω_it | α_i, β_i) is the probability of choosing option j conditional on holding option j0 last period, w_{j0}(α_i, β_i) is the share of consumers of type (α_i, β_i) holding option j0 in period t−1, and S is the number of simulated consumers.

In order to estimate θ, we minimize the following GMM criterion function subject to the equilibrium conditions:

min_θ (ξ′Z) Ω^{-1} (Z′ξ)
s.t.  δ_jt = X_jt β + ξ_jt,  δ_jt = s_jt^{-1}(ŝ, θ)
      ω_{i,t+1} = ρ_{i,0} + ρ_{i,1} ω_{i,t} + η_{i,t+1}
      V_i(u_i0, ω_i) = Γ_i(u_i0, ω_i | V) = log( exp( u_i0 + β E_ω[ V_i(u_i0, ω′) | ω ] ) + exp(ω_i) )
      ω_i = log Σ_{a ∈ A\0} exp( u_ia − α_i p_a + β E_ω[ V_i(u_ia, ω′_i) | ω_i ] )
Solution steps, at every candidate parameter θ:

1. Initialization:
   - Sample S time-invariant consumer types: (α_i, β_i)
   - State-space grid: (u_0, ω)

2. Outer loop: solve δ_jt = s_jt^{-1}(ŝ, θ)
   (a) Starting values: δ⁰_jt
   (b) Inner loop: solve for the transition parameters (ρ_{i,0}, ρ_{i,1}) and the Bellman equation V_i(u_0, ω):
       i. Initial guesses: V_i⁰(u_0, ω) and (ρ⁰_{i,0}, ρ⁰_{i,1}), for all i = 1, ..., S
       ii. Solve the inclusive-value fixed point for each t and i:
           ω_{i,t} = log Σ_{a ∈ A\0} exp( u_ia + β E_{ω_{t+1}}[ V_i⁰(u_ia, ω_{t+1}) | ω_{i,t} ] )
       iii. Estimate the AR(1) process by OLS: ω_{i,t+1} = ρ¹_{i,0} + ρ¹_{i,1} ω_{i,t} + η_{i,t+1}
       iv. Update the value function:
           V_i¹(u_0, ω) = log( exp( u_0 + β E_ω[ V_i⁰(u_0, ω′) | ω ] ) + exp(ω) )
       v. Convergence check: ||V¹ − V⁰|| < ɛ_1, |ρ¹_{i,0} − ρ⁰_{i,0}| < ɛ_2 and |ρ¹_{i,1} − ρ⁰_{i,1}| < ɛ_3
   (c) Calculate predicted market shares:

       ŝ_jt = s_jt(δ⁰, θ) = (1/S) Σ_i Σ_{j0} P_j( u_{i,j0}, ω_it | α_i, β_i ) w_{j0}(α_i, β_i)

       where, letting v_i0 = u_{i,j0} + β E_ω[ V_i⁰(u_{i,j0}, ω′) | ω_{i,t} ] denote the value of keeping the current product j0,

       P_j( u_{i,j0}, ω_it | α_i, β_i ) = exp(v_i0) / ( exp(ω_{i,t}) + exp(v_i0) )   if j = j0, and
       P_j( u_{i,j0}, ω_it | α_i, β_i ) = [ exp(ω_{i,t}) / ( exp(ω_{i,t}) + exp(v_i0) ) ] · [ exp( v_{j,t}(u_jt, ω_t) ) / exp(ω_{i,t}) ]   otherwise.
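Step (iii) of the inner loop is just an OLS regression of the simulated inclusive values on their lags; a minimal sketch (illustrative function name):

```python
import numpy as np

def estimate_ar1(omega):
    """OLS estimate of omega_{t+1} = rho0 + rho1 * omega_t + eta (step iii)."""
    y, x = omega[1:], omega[:-1]
    X = np.column_stack([np.ones(len(x)), x])   # constant + lagged omega
    (rho0, rho1), *_ = np.linalg.lstsq(X, y, rcond=None)
    return rho0, rho1
```

The updated (ρ¹_{i,0}, ρ¹_{i,1}) then feed the expectation E_ω[·|ω] in step (iv), and the inner loop cycles until both the value function and the transition parameters stop moving.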
Application: the camcorder market between 2000 and 2006

- Main data set: market shares and characteristics of 383 models and 11 brands, from March 2000 to May 2006.
- Auxiliary data: aggregate penetration and new-sales rates by year.
Estimation results: implied elasticities

A market-wide temporary price increase of 1% leads to:
- A contemporaneous decrease in sales of 2.55 percent.
- Most of this decrease is due to delayed purchases: 44 percent of the decrease in sales is recaptured over the following 12 months.

The same market-wide permanent price increase leads to a 1.23 percent (permanent) decrease in demand.

The difference is more modest for a product-level price increase: 2.59 percent versus 2.41 percent (for the leading product in 2003). Why? When the change is permanent, consumers substitute to competing brands in about the same magnitude as the delayed response when the price change is temporary.
References

Ackerberg, D. (2003). Advertising, learning, and consumer choice in experience good markets: A structural empirical examination. International Economic Review 44.

Ackerberg, D. A. (2001, Summer). Empirically distinguishing informative and prestige effects of advertising. RAND Journal of Economics 32(2).

Adda, J. and R. Cooper (2000). Balladurette and juppette: A discrete analysis of scrapping subsidies. Journal of Political Economy 108(4).

Aguirregabiria, V. and P. Mira (2002). Swapping the nested fixed point algorithm: A class of estimators for discrete Markov decision models. Econometrica 70(4).

Crawford, G. and M. Shum (2005). Uncertainty and learning in pharmaceutical demand. Econometrica 73.

Das, S. (1992). A microeconometric model of capital utilization and retirement: The case of the U.S. cement industry. Review of Economic Studies 59.

Das, S., M. Roberts, and J. Tybout (2007, May). Market entry costs, producer heterogeneity, and export dynamics. Econometrica.

Erdem, T., S. Imai, and M. P. Keane (2003). Brand and quantity choice dynamics under price uncertainty. Quantitative Marketing and Economics 1.

Erdem, T. and M. P. Keane (1996). Decision-making under uncertainty: Capturing dynamic brand choice processes in turbulent consumer goods markets. Marketing Science 15(1).

Gordon, B. (2010, September). A dynamic model of consumer replacement cycles in the PC processor industry. Marketing Science 28(5).

Gowrisankaran, G. and M. Rysman (2012). Dynamics of consumer demand for new durable goods. Journal of Political Economy 120.

Hendel, I. and A. Nevo (2006a). Measuring the implications of sales and consumer stockpiling behavior. Econometrica 74(6).

Hendel, I. and A. Nevo (2006b, Fall). Sales and consumer inventory. RAND Journal of Economics.

Hotz, V. J. and R. A. Miller (1993). Conditional choice probabilities and the estimation of dynamic models. The Review of Economic Studies 60(3).

Hotz, V. J., R. A. Miller, S. Sanders, and J. Smith (1994). A simulation estimator for dynamic models of discrete choice. The Review of Economic Studies 61(2).

Kasahara, H. and K. Shimotsu (2009). Nonparametric identification of finite mixture models of dynamic discrete choices. Econometrica 77(1).

Keane, M. P. (1997). Modeling heterogeneity and state dependence in consumer choice behavior. Journal of Business & Economic Statistics 15(3).

Keane, M. P. and K. I. Wolpin (1994). The solution and estimation of discrete choice dynamic programming models by simulation and interpolation: Monte Carlo evidence. The Review of Economics and Statistics 76(4).

Lee, R. (2013, December). Vertical integration and exclusivity in platform and two-sided markets. American Economic Review 103(7).

Magnac, T. and D. Thesmar (2002). Identifying dynamic discrete decision processes. Econometrica 70.
More informationSUPPLEMENT TO MARKET ENTRY COSTS, PRODUCER HETEROGENEITY, AND EXPORT DYNAMICS (Econometrica, Vol. 75, No. 3, May 2007, )
Econometrica Supplementary Material SUPPLEMENT TO MARKET ENTRY COSTS, PRODUCER HETEROGENEITY, AND EXPORT DYNAMICS (Econometrica, Vol. 75, No. 3, May 2007, 653 710) BY SANGHAMITRA DAS, MARK ROBERTS, AND
More informationLecture 2: Firms, Jobs and Policy
Lecture 2: Firms, Jobs and Policy Economics 522 Esteban Rossi-Hansberg Princeton University Spring 2014 ERH (Princeton University ) Lecture 2: Firms, Jobs and Policy Spring 2014 1 / 34 Restuccia and Rogerson
More informationApproximating High-Dimensional Dynamic Models: Sieve Value Function Iteration
Approximating High-Dimensional Dynamic Models: Sieve Value Function Iteration Peter Arcidiacono Department of Economics Duke University psarcidi@econ.duke.edu Federico A. Bugni Department of Economics
More informationOn the iterated estimation of dynamic discrete choice games
On the iterated estimation of dynamic discrete choice games Federico A. Bugni Jackson Bunting The Institute for Fiscal Studies Department of Economics, UCL cemmap working paper CWP13/18 On the iterated
More informationEconometrica Supplementary Material
Econometrica Supplementary Material SUPPLEMENT TO CONDITIONAL CHOICE PROBABILITY ESTIMATION OF DYNAMIC DISCRETE CHOICE MODELS WITH UNOBSERVED HETEROGENEITY (Econometrica, Vol. 79, No. 6, November 20, 823
More informationIdentifying Dynamic Games with Serially-Correlated Unobservables
Identifying Dynamic Games with Serially-Correlated Unobservables Yingyao Hu Dept. of Economics, Johns Hopkins University Matthew Shum Division of Humanities and Social Sciences, Caltech First draft: August
More informationDynamic Decisions under Subjective Beliefs: A Structural Analysis
Dynamic Decisions under Subjective Beliefs: A Structural Analysis Yonghong An Yingyao Hu December 4, 2016 (Comments are welcome. Please do not cite.) Abstract Rational expectations of agents on state transitions
More informationUniversity of Warwick, EC9A0 Maths for Economists Lecture Notes 10: Dynamic Programming
University of Warwick, EC9A0 Maths for Economists 1 of 63 University of Warwick, EC9A0 Maths for Economists Lecture Notes 10: Dynamic Programming Peter J. Hammond Autumn 2013, revised 2014 University of
More informationCCP Estimation of Dynamic Discrete Choice Models with Unobserved Heterogeneity
CCP Estimation of Dynamic Discrete Choice Models with Unobserved Heterogeneity Peter Arcidiacono Duke University Robert A. Miller Carnegie Mellon University February 20, 2008 Abstract We adapt the Expectation-Maximization
More informationDynamic Models with Serial Correlation: Particle Filter Based Estimation
Dynamic Models with Serial Correlation: Particle Filter Based Estimation April 6, 04 Guest Instructor: Nathan Yang nathan.yang@yale.edu Class Overview ( of ) Questions we will answer in today s class:
More informationQED. Queen s Economics Department Working Paper No. 1092
QED Queen s Economics Department Working Paper No 1092 Nonparametric Identification and Estimation of Finite Mixture Models of Dynamic Discrete Choices Hiroyuki Kasahara Department of Economics, University
More informationIndustrial Organization II (ECO 2901) Winter Victor Aguirregabiria. Problem Set #1 Due of Friday, March 22, 2013
Industrial Organization II (ECO 2901) Winter 2013. Victor Aguirregabiria Problem Set #1 Due of Friday, March 22, 2013 TOTAL NUMBER OF POINTS: 200 PROBLEM 1 [30 points]. Considertheestimationofamodelofdemandofdifferentiated
More informationEmpirical Industrial Organization (ECO 310) University of Toronto. Department of Economics Fall Instructor: Victor Aguirregabiria
Empirical Industrial Organization (ECO 30) University of Toronto. Department of Economics Fall 208. Instructor: Victor Aguirregabiria FINAL EXAM Tuesday, December 8th, 208. From 7pm to 9pm (2 hours) Exam
More informationEconometric Analysis of Games 1
Econometric Analysis of Games 1 HT 2017 Recap Aim: provide an introduction to incomplete models and partial identification in the context of discrete games 1. Coherence & Completeness 2. Basic Framework
More informationAssessing Structural VAR s
... Assessing Structural VAR s by Lawrence J. Christiano, Martin Eichenbaum and Robert Vigfusson Zurich, September 2005 1 Background Structural Vector Autoregressions Address the Following Type of Question:
More informationIdentification and Estimation of Continuous Time Dynamic Discrete Choice Games
Identification and Estimation of Continuous Time Dynamic Discrete Choice Games JASON R. BLEVINS The Ohio State University November 2, 2016 Preliminary and Incomplete Draft Abstract. We consider the theoretical
More informationAre Marijuana and Cocaine Complements or Substitutes? Evidence from the Silk Road
Are Marijuana and Cocaine Complements or Substitutes? Evidence from the Silk Road Scott Orr University of Toronto June 3, 2016 Introduction Policy makers increasingly willing to experiment with marijuana
More informationA Robust Approach to Estimating Production Functions: Replication of the ACF procedure
A Robust Approach to Estimating Production Functions: Replication of the ACF procedure Kyoo il Kim Michigan State University Yao Luo University of Toronto Yingjun Su IESR, Jinan University August 2018
More informationDemand in Differentiated-Product Markets (part 2)
Demand in Differentiated-Product Markets (part 2) Spring 2009 1 Berry (1994): Estimating discrete-choice models of product differentiation Methodology for estimating differentiated-product discrete-choice
More informationEconomics 2010c: Lectures 9-10 Bellman Equation in Continuous Time
Economics 2010c: Lectures 9-10 Bellman Equation in Continuous Time David Laibson 9/30/2014 Outline Lectures 9-10: 9.1 Continuous-time Bellman Equation 9.2 Application: Merton s Problem 9.3 Application:
More informationAdditional Material for Estimating the Technology of Cognitive and Noncognitive Skill Formation (Cuttings from the Web Appendix)
Additional Material for Estimating the Technology of Cognitive and Noncognitive Skill Formation (Cuttings from the Web Appendix Flavio Cunha The University of Pennsylvania James Heckman The University
More informationAn empirical model of firm entry with endogenous product-type choices
and An empirical model of firm entry with endogenous product-type choices, RAND Journal of Economics 31 Jan 2013 Introduction and Before : entry model, identical products In this paper : entry with simultaneous
More informationDynamic decisions under subjective expectations: a structural analysis
Dynamic decisions under subjective expectations: a structural analysis Yonghong An Yingyao Hu Ruli Xiao The Institute for Fiscal Studies Department of Economics, UCL cemmap working paper CWP11/18 Dynamic
More information1 The Basic RBC Model
IHS 2016, Macroeconomics III Michael Reiter Ch. 1: Notes on RBC Model 1 1 The Basic RBC Model 1.1 Description of Model Variables y z k L c I w r output level of technology (exogenous) capital at end of
More informationNevo on Random-Coefficient Logit
Nevo on Random-Coefficient Logit March 28, 2003 Eric Rasmusen Abstract Overheads for logit demand estimation for G604. These accompany the Nevo JEMS article. Indiana University Foundation Professor, Department
More informationConstrained Optimization Approaches to Estimation of Structural Models: Comment
Constrained Optimization Approaches to Estimation of Structural Models: Comment Fedor Iskhakov, University of New South Wales Jinhyuk Lee, Ulsan National Institute of Science and Technology John Rust,
More informationDynamic and Stochastic Model of Industry. Class: Work through Pakes, Ostrovsky, and Berry
Dynamic and Stochastic Model of Industry Class: Work through Pakes, Ostrovsky, and Berry Reading: Recommend working through Doraszelski and Pakes handbook chapter Recall model from last class Deterministic
More informationTechnical appendices: Business cycle accounting for the Japanese economy using the parameterized expectations algorithm
Technical appendices: Business cycle accounting for the Japanese economy using the parameterized expectations algorithm Masaru Inaba November 26, 2007 Introduction. Inaba (2007a) apply the parameterized
More informationLecture 15. Dynamic Stochastic General Equilibrium Model. Randall Romero Aguilar, PhD I Semestre 2017 Last updated: July 3, 2017
Lecture 15 Dynamic Stochastic General Equilibrium Model Randall Romero Aguilar, PhD I Semestre 2017 Last updated: July 3, 2017 Universidad de Costa Rica EC3201 - Teoría Macroeconómica 2 Table of contents
More informationEstimation of Dynamic Discrete Choice Models in Continuous Time
Estimation of Dynamic Discrete Choice Models in Continuous Time Peter Arcidiacono Patrick Bayer Jason Blevins Paul Ellickson Duke University Duke University Duke University University of Rochester April
More informationIdentification of Consumer Switching Behavior with Market Level Data
Identification of Consumer Switching Behavior with Market Level Data Yong Hyeon Yang UCLA November 18th, 2010 Abstract Switching costs and the persistence of idiosyncratic preference influence switching
More informationLecture 7: Stochastic Dynamic Programing and Markov Processes
Lecture 7: Stochastic Dynamic Programing and Markov Processes Florian Scheuer References: SLP chapters 9, 10, 11; LS chapters 2 and 6 1 Examples 1.1 Neoclassical Growth Model with Stochastic Technology
More informationStructural Econometrics: Dynamic Discrete Choice. Jean-Marc Robin
Structural Econometrics: Dynamic Discrete Choice Jean-Marc Robin 1. Dynamic discrete choice models 2. Application: college and career choice Plan 1 Dynamic discrete choice models See for example the presentation
More informationA simple macro dynamic model with endogenous saving rate: the representative agent model
A simple macro dynamic model with endogenous saving rate: the representative agent model Virginia Sánchez-Marcos Macroeconomics, MIE-UNICAN Macroeconomics (MIE-UNICAN) A simple macro dynamic model with
More informationDynamic stochastic game and macroeconomic equilibrium
Dynamic stochastic game and macroeconomic equilibrium Tianxiao Zheng SAIF 1. Introduction We have studied single agent problems. However, macro-economy consists of a large number of agents including individuals/households,
More informationSequential Estimation of Structural Models with a Fixed Point Constraint
Sequential Estimation of Structural Models with a Fixed Point Constraint Hiroyuki Kasahara Department of Economics University of British Columbia hkasahar@mail.ubc.ca Katsumi Shimotsu Department of Economics
More information14.461: Technological Change, Lecture 4 Competition and Innovation
14.461: Technological Change, Lecture 4 Competition and Innovation Daron Acemoglu MIT September 19, 2011. Daron Acemoglu (MIT) Competition and Innovation September 19, 2011. 1 / 51 Competition and Innovation
More informationSpecification Test on Mixed Logit Models
Specification est on Mixed Logit Models Jinyong Hahn UCLA Jerry Hausman MI December 1, 217 Josh Lustig CRA Abstract his paper proposes a specification test of the mixed logit models, by generalizing Hausman
More informationInformation Choice in Macroeconomics and Finance.
Information Choice in Macroeconomics and Finance. Laura Veldkamp New York University, Stern School of Business, CEPR and NBER Spring 2009 1 Veldkamp What information consumes is rather obvious: It consumes
More informationSmall Open Economy RBC Model Uribe, Chapter 4
Small Open Economy RBC Model Uribe, Chapter 4 1 Basic Model 1.1 Uzawa Utility E 0 t=0 θ t U (c t, h t ) θ 0 = 1 θ t+1 = β (c t, h t ) θ t ; β c < 0; β h > 0. Time-varying discount factor With a constant
More informationSingle Agent Dynamics: Dynamic Discrete Choice Models
Single Agent Dynamics: Dynamic Discrete Choice Models Part : Theory and Solution Methods Günter J. Hitsch The University of Chicago Booth School of Business 2013 1/113 Overview: Goals of this lecture 1.
More informationSEQUENTIAL ESTIMATION OF DYNAMIC DISCRETE GAMES
SEQUENTIAL ESTIMATION OF DYNAMIC DISCRETE GAMES Víctor Aguirregabiria and Pedro Mira CEMFI Working Paper No. 0413 September 2004 CEMFI Casado del Alisal 5; 28014 Madrid Tel. (34) 914 290 551. Fax (34)
More informationNext, we discuss econometric methods that can be used to estimate panel data models.
1 Motivation Next, we discuss econometric methods that can be used to estimate panel data models. Panel data is a repeated observation of the same cross section Panel data is highly desirable when it is
More informationProduct innovation and adoption in market equilibrium: The case of digital cameras.
Product innovation and adoption in market equilibrium: The case of digital cameras. Juan Esteban Carranza Abstract This paper contains an empirical dynamic model of supply and demand in the market for
More informationDynamic and Stochastic Model of Industry. Class: Work through Pakes, Ostrovsky, and Berry
Dynamic and Stochastic Model of Industry Class: Work through Pakes, Ostrovsky, and Berry Reading: Recommend working through Doraszelski and Pakes handbook chapter Recall model from last class Deterministic
More informationThe BLP Method of Demand Curve Estimation in Industrial Organization
The BLP Method of Demand Curve Estimation in Industrial Organization 27 February 2006 Eric Rasmusen 1 IDEAS USED 1. Instrumental variables. We use instruments to correct for the endogeneity of prices,
More informationIncentives Work: Getting Teachers to Come to School. Esther Duflo, Rema Hanna, and Stephen Ryan. Web Appendix
Incentives Work: Getting Teachers to Come to School Esther Duflo, Rema Hanna, and Stephen Ryan Web Appendix Online Appendix: Estimation of model with AR(1) errors: Not for Publication To estimate a model
More informationField Course Descriptions
Field Course Descriptions Ph.D. Field Requirements 12 credit hours with 6 credit hours in each of two fields selected from the following fields. Each class can count towards only one field. Course descriptions
More informationProductivity Losses from Financial Frictions: Can Self-financing Undo Capital Misallocation?
Productivity Losses from Financial Frictions: Can Self-financing Undo Capital Misallocation? Benjamin Moll G Online Appendix: The Model in Discrete Time and with iid Shocks This Appendix presents a version
More information