1 Motivation

Next we consider dynamic games where the choice variables are continuous and/or discrete.

Example 1: Ryan (2009), regulating a concentrated industry (cement). Firms play Cournot in the stage game. They make lumpy investment decisions, and investments involve large fixed adjustment costs. However, the choice variable is naturally continuous.
Example 2: Snider (2009), predation. Revisits the American Airlines case. Low-cost carriers have low marginal costs per passenger mile; incumbents have low entry costs within their hub; products are differentiated, with legacy carriers facing higher demand.

Firms compete a la Bertrand in the stage game, and decide how many seats to fly on a route as well as entry/exit on the route. This is not easily handled using the logit machinery of the previous models.
Forward simulation method of Bajari, Benkard and Levin (2007). The data generating process is the Ericson and Pakes model. It can take a long time to solve these models even once. Estimation is in two stages: the first stage involves estimating the reduced form decision rules; the second stage recovers the structural payoff parameters. The result is a computationally light and smooth estimator.
2 Simple Example

The estimators for discrete games were based on properties of the logit model. In continuous games, the idea is to exploit the linearity of expected utility. Let a_i ∈ A_i denote i's strategy. Let s ∈ S denote the state. Let u_i(a_i, a_{-i}, s) denote i's vNM utility. Let a_i = σ_i(s) denote the equilibrium strategy of agent i, and let σ_i(a_i|s) denote i's strategy when it is stochastic.
Assume utility is a linear index, i.e. u_i(a_i, a_{-i}, s) = Φ_i(a_i, a_{-i}, s) · θ. Then utility maximization implies that:

Φ_i(σ_i(s), σ_{-i}(s), s) · θ ≥ Φ_i(σ'_i(s), σ_{-i}(s), s) · θ for all σ'_i(s) ≠ σ_i(s)

Revealed preference implies the chosen strategies are better than the strategies not chosen.
Take r = 1, ..., R alternative strategies σ_i^(r)(s_t) that are not optimal. Suppose we observe data (a_t, s_t) for t = 1, ..., T. Then:

Φ_i(σ_i(s_t), σ_{-i}(s_t), s_t) · θ ≥ Φ_i(σ_i^(r)(s_t), σ_{-i}(s_t), s_t) · θ for all r = 1, ..., R

Note that this is a linear system in θ:

[ Φ_i(σ_i(s_t), σ_{-i}(s_t), s_t) − Φ_i(σ_i^(r)(s_t), σ_{-i}(s_t), s_t) ] · θ ≥ 0 for all r = 1, ..., R
Suppose we "knew" or could estimate σ(s_t) in a first stage. We could then build an estimator based on trying to solve this linear system. That is, we reverse engineer the θ that makes σ_i(s_t) optimal and σ_i^(r)(s_t) sub-optimal.
Consider the following M-estimator for θ. Fix non-optimal strategies σ_i^(r)(s_t) for r = 1, ..., R, and minimize:

(1/T) Σ_{t=1}^{T} Σ_{r=1}^{R} 1{ [Φ_i(σ_i(s_t), σ_{-i}(s_t), s_t) − Φ_i(σ_i^(r)(s_t), σ_{-i}(s_t), s_t)] · θ < 0 } × ( [Φ_i(σ_i(s_t), σ_{-i}(s_t), s_t) − Φ_i(σ_i^(r)(s_t), σ_{-i}(s_t), s_t)] · θ )²

That is, the indicator function is turned on whenever revealed preference is violated, and we penalize by the square of the amount by which it is violated. Clearly, the true value of θ satisfies these inequalities, so it attains an objective value of zero.
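The objective above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the array names `phi_chosen` and `phi_alt` are hypothetical containers for Φ_i evaluated at the observed strategy and at the R alternative strategies in each sampled state.

```python
import numpy as np

def rp_objective(theta, phi_chosen, phi_alt):
    """Sum of squared revealed-preference violations.

    phi_chosen: (T, K) array, Phi_i at the observed strategies.
    phi_alt:    (T, R, K) array, Phi_i at R alternative strategies.
    """
    diff = phi_chosen[:, None, :] - phi_alt   # (T, R, K) inequality covariates
    g = diff @ theta                          # (T, R) values of the linear index
    viol = np.minimum(g, 0.0)                 # negative part = violated inequality
    return np.mean(np.sum(viol ** 2, axis=1))
```

Since each inequality is linear in θ, the objective is piecewise quadratic in θ and vanishes at any θ consistent with all the revealed-preference inequalities.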
The key idea behind our estimator is that our inequalities are linear in θ given our strategies σ_i(s_t), σ_{-i}(s_t). This extends to models where strategies or states are stochastic. It is easy to show this property also extends to dynamic models. In these richer models, finding θ also boils down to solving a linear system of inequalities.
A similar property holds in models where payoffs are not a linear index. Expected utility is linear by definition (since it is an integral). In these cases, finding the payoff parameters boils down to solving a non-linear system of inequalities.
3 The Model

Assume a discrete state space and a discrete action space (for convenience only).

Agents: i = 1, ..., N. Time: t = 1, ..., ∞. States: s_t ∈ S ⊂ R^G, commonly known. Actions: a_{it} ∈ A_i, simultaneously chosen. Transitions: P(s_{t+1} | a_t, s_t). Discount factor: β (known to the econometrician).
Objective function: Agent i maximizes the expected discounted value given current information,

E [ Σ_{t=0}^{∞} β^t u_i(a_t, s_t) ].   (1)
3.1 Equilibrium Concept: Markov Perfect Equilibrium [MPE]

Strategies: σ_i : S → A_i. Recursive formulation:

V_i(s | σ) = u_i(σ(s), s) + β ∫ V_i(s' | σ) dP(s' | σ(s), s)

An MPE is given by a Markov profile, σ, such that for all i, s, σ'_i:

V_i(s | σ_i, σ_{-i}) ≥ V_i(s | σ'_i, σ_{-i}).   (2)
3.2 First Step

Estimate the policy functions, σ_i : S → A_i, and the state transition function, P : S × A → Δ(S).
Often we will also estimate "static" parts of the period return. Examples: production functions (Olley-Pakes); investment policies (nonparametric); entry/exit policies (nonparametric); static supply-demand system (BLP); state transitions (parametric/nonparametric).
3.3 Second Step

Idea: Find the set of parameters that rationalize the data. I.e., conditional on P and σ, find the set of parameters that satisfy the requirements for equilibrium.

Optimality inequalities: For all i, σ'_i, and initial state s_0, it must be that

E_{σ; s_0} [ Σ_{t=0}^{∞} β^t u_i(a_t, s_t) ] ≥ E_{σ'_i, σ_{-i}; s_0} [ Σ_{t=0}^{∞} β^t u_i(a_t, s_t) ].   (3)

The system of inequalities, (3), contains all the information available from the definition of equilibrium.
Assume: the period return function is linear in the parameters (stronger than needed),

u_i(a, s; θ) = Φ_i(a, s) · θ.   (4)
Then we can write discounted utilities as:

E_{σ; s_0} [ Σ_{t=0}^{∞} β^t u_i(a_t, s_t; θ) ] = E_{σ; s_0} [ Σ_{t=0}^{∞} β^t Φ_i(a_t, s_t) · θ ] = E_{σ; s_0} [ Σ_{t=0}^{∞} β^t Φ_i(a_t, s_t) ] · θ

Note that this is linear in θ!
The definition of equilibrium implies that for all i, σ'_i, and initial state s_0:

E_{σ; s_0} [ Σ_{t=0}^{∞} β^t Φ_i(a_t, s_t) ] · θ ≥ E_{σ'_i, σ_{-i}; s_0} [ Σ_{t=0}^{∞} β^t Φ_i(a_t, s_t) ] · θ

This implies that all of the restrictions of equilibrium can be stated as a linear system! If we can simulate E_{σ; s_0} Σ_{t=0}^{∞} β^t Φ_i(a_t, s_t), then finding the "true" θ means finding the value that satisfies this system for all i, σ'_i, and initial state s_0. In practice, we will simulate E_{σ; s_0} Σ_{t=0}^{∞} β^t Φ_i(a_t, s_t).
Simulate the integral Ê_{σ; s_0} Σ_{t=0}^{∞} β^t Φ_i(a_t, s_t) by Monte Carlo given estimates σ̂ and P̂. Fix a state s_0. Draw paths (a_t^(l), s_t^(l)) for l = 1, ..., L as follows:
1. Given s_t^(l), set a_t^(l) = σ̂(s_t^(l)).
2. Draw s_{t+1}^(l) ~ P̂(· | a_t^(l), s_t^(l)).
3. Return to 1.
Then for large L,

Ê_{σ; s_0} Σ_{t=0}^{∞} β^t Φ_i(a_t, s_t) ≈ Σ_t β^t (1/L) Σ_l Φ_i(a_t^(l), s_t^(l))
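The three steps above can be sketched as follows. This is an illustrative finite-state implementation, truncated at a horizon T (justified by discounting); `sigma_hat`, `P_hat`, and `phi` are hypothetical stand-ins for the first-stage policy estimate, the estimated transition law, and the covariate function Φ_i.

```python
import numpy as np

def forward_simulate(s0, sigma_hat, P_hat, phi, beta, T=200, L=500, seed=0):
    """Monte Carlo estimate of E_{sigma;s0} sum_t beta^t Phi_i(a_t, s_t).

    sigma_hat(s): estimated policy, state index -> action index.
    P_hat[a][s]:  estimated distribution over next states.
    phi(a, s):    covariate vector Phi_i(a, s).
    """
    rng = np.random.default_rng(seed)
    K = len(phi(sigma_hat(s0), s0))
    total = np.zeros(K)
    for _ in range(L):                         # average over L simulated paths
        s, disc, path = s0, 1.0, np.zeros(K)
        for _ in range(T):
            a = sigma_hat(s)                   # step 1: play estimated policy
            path += disc * phi(a, s)           # accumulate beta^t * Phi_i
            s = rng.choice(len(P_hat[a][s]), p=P_hat[a][s])  # step 2: draw s'
            disc *= beta                       # step 3: continue the path
        total += path
    return total / L
```

Because the simulated sums do not depend on θ, they are computed once and reused for every candidate parameter value in the second stage.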
Identified case. Consider a finite set of alternative policies r = 1, ..., R, σ_i^(r) ≠ σ̂_i, that agent i could have chosen, but did not. Define:

g_r(s; θ) = Ŵ(s; σ̂, θ) − Ŵ(s; σ_i^(r), θ)

Ŵ(s; σ̂, θ) = Ê_{σ̂; s} [ Σ_{t=0}^{∞} β^t Φ_i(a_t, s_t) ] · θ

Ŵ(s; σ_i^(r), θ) = Ê_{σ_i^(r), σ̂_{-i}; s} [ Σ_{t=0}^{∞} β^t Φ_i(a_t, s_t) ] · θ

Minimize:

(1/T) Σ_t Σ_r 1{ g_r(s_t; θ) < 0 } g_r(s_t; θ)²
Comments: Computationally light, because the simulated sums entering Ŵ(s; σ̂, θ) and Ŵ(s; σ^(r), θ) are fixed across candidate θ; we don't need to repeatedly re-simulate or re-solve the model. The estimator is smooth. Second-stage error comes from (σ̂, P̂), from simulation error, and from sending R → ∞. Standard theory for simulation-based estimators applies.
4 Dynamic Oligopoly with Investment

Like Pakes and McGuire or Ericson and Pakes. Demand:

U_rj = γ_0 ln(z_j) + γ_1 ln(y_r − p_j) + ε_rj

where z_j is product quality (integer valued) and ε_rj is an iid logit error term. Constant marginal cost, μ. Bertrand price competition: estimate this using standard demand estimation.
Investment I_{it} ∈ R_+ is successful with probability a I_{it} / (1 + a I_{it}), where a is a parameter. If investment is successful, product quality increases by 1. The cost of investment is C(I) = cI. There is also an outside good whose quality moves up with probability δ each period. Scrap value for exit, φ.
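The quality transition above can be illustrated in a few lines. This is a sketch under one common normalization (tracking quality relative to the outside good, so an outside-good improvement lowers the firm's relative quality); the function name and signature are illustrative.

```python
import numpy as np

def quality_transition(z, I, a, delta, rng):
    """One-period transition of quality z relative to the outside good.

    Own quality rises by 1 with probability a*I / (1 + a*I); the outside
    good's quality rises with probability delta, lowering relative quality.
    """
    up = rng.random() < a * I / (1.0 + a * I)   # investment success draw
    out = rng.random() < delta                  # outside-good improvement draw
    return z + int(up) - int(out)
```

Note the success probability is increasing and concave in I and bounded below 1, which is what makes arbitrarily large investment unattractive given the linear cost cI.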
χ_i(s_t) is i's exit policy decision. There is one potential entrant each period. v_{et} is a private entry cost drawn from a distribution on [v_L, v_H]. The state variable is the number of firms and the vector of product qualities.
First Stage. Estimate the demand parameters using MLE and markups using the first order conditions for optimal pricing. Estimate the transition parameters, a and δ, also by MLE. Estimate the investment and exit policy functions using local linear regressions.
Second Stage. For every initial state, s_0, and every alternative investment/exit policy, σ'_i(s) = (I'_i(s), χ'_i(s)):

[ Ê_{σ; s_0} Σ_{t=0}^{∞} β^t π_i(s_t) − Ê_{σ'_i, σ_{-i}; s_0} Σ_{t=0}^{∞} β^t π_i(s_t) ]
− c [ Ê_{σ; s_0} Σ_{t=0}^{∞} β^t I_i(s_t) − Ê_{σ'_i, σ_{-i}; s_0} Σ_{t=0}^{∞} β^t I_i(s_t) ]
+ φ [ Ê_{σ; s_0} Σ_{t=0}^{∞} β^t 1{χ_i(s_t) = 1} − Ê_{σ'_i, σ_{-i}; s_0} Σ_{t=0}^{∞} β^t 1{χ'_i(s_t) = 1} ] ≥ 0

To get alternative policies, add a mean-zero error term to the investment and exit policies. It is also straightforward to estimate the sunk cost of entry distribution (parametrically or nonparametrically); see the paper for details.
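Generating the alternative policies by perturbation can be sketched as follows. This is a minimal illustration for the investment policy only: `I_hat` is a hypothetical array holding the estimated investment policy on a discretized state grid, and the noise scale is an arbitrary choice.

```python
import numpy as np

def perturbed_policies(I_hat, R=50, scale=0.1, seed=0):
    """R alternative investment policies I^(r)(s) = max(I_hat(s) + eps_r(s), 0).

    I_hat: (S,) estimated investment policy on a state grid.
    Returns an (R, S) array of perturbed policies.
    """
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, scale, size=(R, len(I_hat)))  # mean-zero error term
    return np.maximum(I_hat + eps, 0.0)                 # investment stays in R_+
```

Each perturbed policy, together with the estimated policies of the rivals, is then forward-simulated once, and the resulting discounted sums enter the inequality above, which is linear in (c, φ).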