Game Theory and Econometrics: A Survey of Some Recent Research


1 Game Theory and Econometrics: A Survey of Some Recent Research. Pat Bajari (University of Minnesota and NBER), Han Hong (Stanford University), Denis Nekipelov (University of California at Berkeley). Econometric Society World Congress, Shanghai, August 17, 2010.

2 Motivation. We attempt to survey an emerging literature at the intersection of industrial organization, game theory and econometrics. Researchers specify a set of players, their strategies, information, and payoffs, and use equilibrium concepts to derive positive and normative economic predictions. The predictions of game theoretic models often depend delicately on the specification of the game, and theory often provides little guidance on how to choose among the multiple equilibria generated by a particular game.

3 Motivation. The empirical literature addresses these issues by letting the data tell us the model specifications that best explain observed behavior. Econometricians observe data from plays of a game and exogenous covariates which influence payoffs or constraints faced by the agents. The payoffs are specified as a parametric or nonparametric function of the actions of other agents and the exogenous covariates. Econometric estimators are used to reverse engineer payoffs that explain the observed behavior.

4 Earlier Literature. Bresnahan and Reiss (1990, 1991); Berry (1991); Manski (1993), the reflection problem; Brock and Durlauf (2001).

5 Motivation. Econometric methods to estimate games apply to a diverse set of problems. Static games with discrete strategies, e.g. Seim (2006), Sweeting (2009). Dynamic games with discrete strategies, e.g. Aguirregabiria and Mira (2007), Pesendorfer and Schmidt-Dengler (2003, 2008). Dynamic games with possibly continuous strategies, e.g. Pesendorfer and Jofre-Bonet (2003), Bajari, Benkard and Levin (2007). Among many others. We will focus on discrete actions.

6 Literature. Early examples: Vuong and Bjorn (1984) and Bresnahan and Reiss (1990, 1991). Also Aradillas-Lopez (2005), Ho (2005), Ishii (2005), Pakes, Porter, Ho and Ishii (2005), Rysman (2004), Seim (2004), Sweeting (2004), Tamer (2003) and Ciliberto and Tamer (2005). Single agent dynamic models: Rust (1987), Hotz and Miller (1993), Magnac and Thesmar (2003), Heckman and Navarro (2005), among many others. Dynamic games: Aguirregabiria and Mira (2003), Bajari, Benkard and Levin (2004), Pakes, Ostrovsky and Berry (2004), and Pesendorfer and Schmidt-Dengler (2004).

7 Common insight. Equilibria of (dynamic) games can be difficult to compute, so brute force approaches which repeatedly compute the equilibrium for alternative parameter values can be difficult or infeasible. There is a close relation between the information structure and the estimation strategy: private information versus unobserved heterogeneity. With only private information and no unobserved heterogeneity, a flexible two step estimation approach is available. Nonparametric identification and estimation are also feasible under this assumption, where the model resembles a single agent decision problem.

8 Common insight. The private information assumption equates the information set of the players (regarding the competitive environment) with that of the researchers. In the presence of substantial unobserved heterogeneity, both identification and estimation are more difficult: estimation is computationally intensive, but the difficulty of implementation can be reduced with a careful design of the estimation strategy. This is a fascinating, broad and general literature. The domain of applicability is not limited to industrial organization; the methods generalize to seemingly complicated structural models in other fields such as macro, labor, public finance, or marketing.

9 Self promotion. Patrick Bajari, Han Hong, John Krainer and Denis Nekipelov, Estimating Static Models of Strategic Interactions. Patrick Bajari, Victor Chernozhukov, Han Hong and Denis Nekipelov, Nonparametric Identification and Semiparametric Estimation of a Dynamic Game. Patrick Bajari, Han Hong and Stephen Ryan, Identification and Estimation of Discrete Games of Complete Information. Ronald Gallant, Han Hong and Ahmed Khwaja, Bayesian Estimation of a Dynamic Oligopolistic Game with Serially Correlated Unobserved Cost Variables.

10 A general static model. Players $i = 1, \ldots, n$. Actions $a_i \in \{0, 1, \ldots, K\}$, with $A = \{0, 1, \ldots, K\}^n$ and $a = (a_1, \ldots, a_n)$. $s_i \in S_i$: state for player $i$, with $S = \prod_i S_i$ and $s = (s_1, \ldots, s_n) \in S$. For each agent $i$, $K+1$ state variables $\epsilon_i(a_i)$, collected in $\epsilon_i = (\epsilon_i(0), \ldots, \epsilon_i(K))$, with density $f(\epsilon_i)$, i.i.d. across $i = 1, \ldots, n$.

11 A general static model. Private information model: $s$ is common knowledge and also observed by the econometrician; $\epsilon_i(a_i)$ is private information to each agent. Private information model with unobserved heterogeneity: at least some components of $s$ are not observed by the researchers. Complete information model: $\epsilon_i(a_i)$ is public information to all agents.

12 Private information model: Bayesian Nash equilibrium. Choice probability $\sigma_i(a_i \mid s)$ for player $i$ conditional on the public information $s$:
$$\sigma_i(a_i = k \mid s) = \int 1\{\delta_i(s, \epsilon_i) = k\}\, f(\epsilon_i)\, d\epsilon_i.$$
Choice specific expected payoff for player $i$:
$$\Pi_i(a_i, s; \theta) = \sum_{a_{-i}} \Pi_i(a_i, a_{-i}, s; \theta)\, \sigma_{-i}(a_{-i} \mid s).$$
This is the expected utility from choosing $a_i$, excluding the preference shock. An example from an entry game follows.

13 $\Pi_i(a_i, a_{-i}, s; \theta)$ is often a linear function, e.g.:
$$\Pi_i(a_i, a_{-i}, s) = \begin{cases} s'\beta + \delta \sum_{j \neq i} 1\{a_j = 1\} & \text{if } a_i = 1 \\ 0 & \text{if } a_i = 0 \end{cases}$$
The mean utility from not entering is normalized to zero. $\delta$ measures the influence of $j$'s entry choice on $i$'s profit; if firms compete with each other, $\delta < 0$. $\beta$ measures the impact of the state variables on profits. $\epsilon_i(a_i)$ captures shocks to the profitability of entry. Often the $\epsilon_i(a_i)$ are assumed to be i.i.d. extreme value distributed:
$$f(\epsilon_i(k)) = e^{-\epsilon_i(k)}\, e^{-e^{-\epsilon_i(k)}}.$$

14 Equilibrium. Choice specific expected payoff under linearity:
$$\Pi_i(a_i = 1, s; \theta) = s'\beta + \delta \sum_{j \neq i} \sigma_j(a_j = 1 \mid s).$$
Choice probability under the extreme value distribution:
$$\sigma_i(a_i = 1 \mid s) = \frac{\exp\big(s'\beta + \delta \sum_{j \neq i} \sigma_j(a_j = 1 \mid s)\big)}{1 + \exp\big(s'\beta + \delta \sum_{j \neq i} \sigma_j(a_j = 1 \mid s)\big)}.$$
The choice probability equations define a fixed point mapping for the $\sigma_j(a_j = 1 \mid s)$. For each $\theta = (\beta, \delta)$, solve for $\sigma_j(a_j = 1 \mid s; \theta)$ for all $j = 1, \ldots, n$. Maximum likelihood versus two step estimator; a nonparametric estimator also follows from identification arguments.
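As a concrete illustration of this fixed point, the sketch below (function and parameter names are hypothetical, not from the survey) iterates the logit best-response mapping for a symmetric entry game until the entry probabilities converge:

```python
import numpy as np

def entry_equilibrium(s, beta, delta, n_players=3, tol=1e-10, max_iter=1000):
    """Iterate the logit best-response mapping of the entry game:
    sigma_i = exp(s'beta + delta * sum_{j != i} sigma_j) / (1 + exp(...))."""
    sigma = np.full(n_players, 0.5)                      # initial guess
    for _ in range(max_iter):
        v = s @ beta + delta * (sigma.sum() - sigma)     # rivals' expected entry
        sigma_new = np.exp(v) / (1.0 + np.exp(v))
        if np.max(np.abs(sigma_new - sigma)) < tol:
            break
        sigma = sigma_new
    return sigma_new

probs = entry_equilibrium(s=np.array([1.0, 0.5]),
                          beta=np.array([0.2, 0.3]), delta=-1.0)
```

The iteration converges here because identical players start from a symmetric guess; in general the mapping can have multiple fixed points, which is exactly the multiplicity issue discussed earlier.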

15 Identification. The error term distribution can possibly be identified if a parametric or index structure is imposed on the payoff function. Consider fixing the error term distribution (extreme value, without loss of generality) and identifying the payoff nonparametrically. $\sigma_i(a_i \mid s)$ are functions only of the differences $\Pi_i(a_i, s) - \Pi_i(a_i = 0, s)$. Normalization: for all $i$ and all $a_{-i}$ and $s$, $\Pi_i(a_i = 0, a_{-i}, s) = 0$, so that $\Pi_i(a_i = 0, s) = 0$.

16 Hotz and Miller (1993) inversion: for any $k, k'$,
$$\log(\sigma_i(k \mid s)) - \log(\sigma_i(k' \mid s)) = \Pi_i(k, s) - \Pi_i(k', s).$$
More generally, let $\Gamma_i : \{0, \ldots, K\} \times S \to [0, 1]$:
$$(\sigma_i(0 \mid s), \ldots, \sigma_i(K \mid s)) = \Gamma_i\big(\Pi_i(1, s) - \Pi_i(0, s), \ldots, \Pi_i(K, s) - \Pi_i(0, s)\big),$$
with inverse $\Gamma_i^{-1}$:
$$\big(\Pi_i(1, s) - \Pi_i(0, s), \ldots, \Pi_i(K, s) - \Pi_i(0, s)\big) = \Gamma_i^{-1}(\sigma_i(0 \mid s), \ldots, \sigma_i(K \mid s)).$$
Invert the equilibrium choice probabilities to nonparametrically recover $\Pi_i(1, s) - \Pi_i(0, s), \ldots, \Pi_i(K, s) - \Pi_i(0, s)$. $\Pi_i(a_i, s)$ is then known from this inversion, and the probabilities $\sigma_i$ can be observed by the econometrician.

17 Next step: how to recover $\Pi_i(a_i, a_{-i}, s)$ from $\Pi_i(a_i, s)$. This requires inversion of the following system:
$$\Pi_i(a_i, s) = \sum_{a_{-i}} \sigma_{-i}(a_{-i} \mid s)\, \Pi_i(a_i, a_{-i}, s), \quad i = 1, \ldots, n,\ a_i = 1, \ldots, K.$$
Given $s$, there are $n \cdot K \cdot (K+1)^{n-1}$ unknown utilities of all agents, but only $n \cdot K$ known expected utilities. Obvious solution: impose exclusion restrictions.

18 Partition $s = (s_i, s_{-i})$, and suppose $\Pi_i(a_i, a_{-i}, s) = \Pi_i(a_i, a_{-i}, s_i)$ depends only on the subvector $s_i$. Then
$$\Pi_i(a_i, s_i, s_{-i}) = \sum_{a_{-i}} \sigma_{-i}(a_{-i} \mid s_i, s_{-i})\, \Pi_i(a_i, a_{-i}, s_i).$$
Identification: given each $s_i$, the second moment matrix of the regressors $\sigma_{-i}(a_{-i} \mid s_i, s_{-i})$,
$$E\, \sigma_{-i}(a_{-i} \mid s_i, s_{-i})\, \sigma_{-i}(a_{-i} \mid s_i, s_{-i})',$$
is nonsingular. This needs at least $(K+1)^{n-1}$ points in the support of the conditional distribution of $s_{-i}$ given $s_i$.

19 In peer interaction models with a large homogeneous peer group (e.g. Brock and Durlauf (2001)), a standard logit regression with an additional equilibrium probability covariate is consistent despite the possibility of multiple equilibria. We still need multiple peer groups to identify the peer effect coefficient. In entry games, a single equilibrium is plausible with time series data in a single market (Pesendorfer and Schmidt-Dengler (2003)); in cross section data, multiple markets might play different equilibria. With panel data, the equilibrium can be estimated market by market.

20 Nonparametric Estimation. Step 1: estimation of choice probabilities (e.g. by sieve):
$$\hat\sigma_i(k \mid s) = \sum_{t=1}^{T} 1(a_{it} = k)\, q^{\kappa(T)}(s_t)' (Q_T' Q_T)^{-1} q^{\kappa(T)}(s),$$
where $Q_T = (q^{\kappa(T)}(s_1), \ldots, q^{\kappa(T)}(s_T))'$ and $q^{\kappa(T)}(s) = (q_1(s), \ldots, q_{\kappa(T)}(s))'$. Step 2: inversion to obtain the expected utility differences
$$\big(\hat\Pi_i(1, s_t) - \hat\Pi_i(0, s_t), \ldots, \hat\Pi_i(K, s_t) - \hat\Pi_i(0, s_t)\big) = \Gamma_i^{-1}(\hat\sigma_i(0 \mid s_t), \ldots, \hat\sigma_i(K \mid s_t)).$$
In the logit model,
$$\hat\Pi_i(k, s_t) - \hat\Pi_i(0, s_t) = \log(\hat\sigma_i(k \mid s_t)) - \log(\hat\sigma_i(0 \mid s_t)).$$
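A minimal sketch of steps 1 and 2: a series regression for the choice probabilities followed by the logit log-odds inversion. The polynomial basis and the function names are illustrative assumptions, not the survey's code:

```python
import numpy as np

def sieve_choice_prob(a, s, k, degree=2):
    """Step 1: series (sieve) estimate of sigma(k|s) at the sample points,
    regressing the indicator 1(a_t = k) on a polynomial basis in s."""
    Q = np.column_stack([s**d for d in range(degree + 1)])  # q(s) = (1, s, s^2, ...)
    coef, *_ = np.linalg.lstsq(Q, (a == k).astype(float), rcond=None)
    return Q @ coef

def logit_inversion(sigma):
    """Step 2: Hotz-Miller inversion under the logit assumption.
    sigma: (T, K+1) choice probabilities; returns Pi(k,s)-Pi(0,s), k=1..K."""
    return np.log(sigma[:, 1:]) - np.log(sigma[:, [0]])
```

In practice the fitted series probabilities should also be trimmed away from 0 and 1 before taking logs.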

21 Step 3: recovering structural parameters. Infer $\Pi_i(a_i, a_{-i}, s_i)$ from $\hat\Pi_i(k, s)$, using the empirical analog of
$$\Pi_i(a_i, s_i, s_{-i}) = \sum_{a_{-i}} \sigma_{-i}(a_{-i} \mid s_i, s_{-i})\, \Pi_i(a_i, a_{-i}, s_i).$$
For given $s_i$ and given $a = (a_i, a_{-i})$, run the local regression
$$\min \sum_{t=1}^{T} \Big( \hat\Pi_i(a_i, s_{it}, s_{-it}) - \sum_{a_{-i}} \hat\sigma_{-i}(a_{-i} \mid s_{it}, s_{-it})\, \Pi_i(a_i, a_{-i}, s_i) \Big)^2 w(t, s_i).$$
The nonparametric weights can take different forms, e.g.
$$w(t, s_i) = \frac{k\big((s_{it} - s_i)/h\big)}{\sum_{\tau=1}^{T} k\big((s_{i\tau} - s_i)/h\big)}.$$
Asymptotically identified.
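Step 3 amounts to a kernel-weighted least squares regression at each target point $s_i$. A sketch with Gaussian weights (the function names and the choice of kernel are assumptions):

```python
import numpy as np

def kernel_weights(s_sample, s0, h):
    """Normalized kernel weights w(t, s0) = k((s_t - s0)/h) / sum_tau k(...)."""
    k = np.exp(-0.5 * ((s_sample - s0) / h) ** 2)   # Gaussian kernel
    return k / k.sum()

def local_wls(X, y, w):
    """Weighted least squares: minimize sum_t w_t * (y_t - X_t' b)^2."""
    XtW = X.T * w                                   # scale columns by weights
    return np.linalg.solve(XtW @ X, XtW @ y)
```

Here the rows of `X` would hold the estimated rival choice probabilities $\hat\sigma_{-i}(a_{-i} \mid s_{it}, s_{-it})$ and `y` the inverted expected payoffs.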

22 Semiparametric Estimation. Linear model of deterministic utility: $\Pi_i(a_i, a_{-i}, s_i) = \Phi_i(a_i, a_{-i}, s_i)'\theta$. Choice specific value function:
$$\Pi_i(a_i, s) = E[\Pi_i(a_i, a_{-i}, s_i) \mid s, a_i] = \Phi_i(a_i, s)'\theta,$$
where
$$\Phi_i(a_i, s) = \sum_{a_{-i}} \Phi_i(a_i, a_{-i}, s_i) \prod_{j \neq i} \sigma_j(a_j \mid s).$$
Estimated choice probabilities:
$$\sigma_i(a_i \mid s, \hat\Phi, \theta) = \frac{\exp\big(\hat\Phi_i(a_i, s)'\theta\big)}{1 + \sum_{k=1}^{K} \exp\big(\hat\Phi_i(k, s)'\theta\big)}.$$
Semiparametric method of moments estimator:
$$\frac{1}{T} \sum_{t=1}^{T} \hat A(s_t)\, \big( y_t - \sigma(s_t, \hat\Phi, \hat\theta) \big) = 0,$$
where $\sigma(s_t, \hat\Phi, \theta) = \big( \sigma_i(k \mid s_t, \hat\Phi, \theta),\ k = 1, \ldots, K,\ i = 1, \ldots, n \big)$.

23 Nonparametric vs semiparametric methods. Typically $\hat\sigma_i(k \mid s)$ will converge to the true $\sigma_i(k \mid s)$ at a nonparametric rate slower than $T^{-1/2}$. Alternative first stage estimators: kernel smoothing or local polynomials. The nonparametric method is more robust against misspecification, but may suffer from the curse of dimensionality. The semiparametric method is more practical in applications: $\theta$ can be estimated at parametric rates despite the first stage nonparametric estimation. Pseudo MLE can be used in place of method of moments.

24 Rust Chapter: Markov, discrete dependent variable, dynamic programming. State space $S$ (can be continuous or discrete); usually need discretization in value function computation. Action (choice) set $D$. Time separable utility function: for horizon $T$ (possibly infinite),
$$U_T(s, d) = \sum_{t=0}^{T} \beta^t u_t(s_t, d_t, \theta),$$
where $s$ and $d$ are sequences of state variables and actions. Decision rules $\delta = (\delta_0, \ldots, \delta_T)$, with $\delta_t = \delta_t(s_t)$, are chosen to maximize
$$\max_{\delta = (\delta_0, \ldots, \delta_T)} E_\delta\, U_T(s, d),$$
where $\delta$ generates the DGP for $s_t$.

25 Finite horizon case. Suppose the continuation value at time $T$ is 0 (might be questionable in the presence of bequest motives). Then
$$\delta_T(s_T) = \arg\max_{d \in D} u_T(s_T, d), \qquad V_T(s_T) = \max_{d \in D} u_T(s_T, d).$$
For $t = 0, \ldots, T-1$,
$$\delta_t(s_t) = \arg\max_{d \in D} \Big\{ u_t(s_t, d) + \beta \int V_{t+1}(s_{t+1})\, dP(s_{t+1} \mid s_t, d) \Big\}$$
and
$$V_t(s_t) = \max_{d \in D} \Big\{ u_t(s_t, d) + \beta \int V_{t+1}(s_{t+1})\, dP(s_{t+1} \mid s_t, d) \Big\}.$$
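The recursion above, sketched on a discretized state space (array shapes and names are illustrative assumptions; the per-period utility is taken time-invariant for brevity):

```python
import numpy as np

def backward_induction(u, P, beta, T):
    """Finite horizon dynamic programming with zero continuation value at T.
    u: (S, D) per-period utility; P: (D, S, S) transitions P[d, s, s'].
    Returns value functions V[t] and decision rules delta[t], t = 0..T."""
    S, D = u.shape
    V = np.zeros((T + 1, S))
    delta = np.zeros((T + 1, S), dtype=int)
    V[T], delta[T] = u.max(axis=1), u.argmax(axis=1)       # terminal period
    for t in range(T - 1, -1, -1):
        # Q[s, d] = u(s, d) + beta * E[V_{t+1}(s') | s, d]
        EV = np.stack([P[d] @ V[t + 1] for d in range(D)], axis=1)
        Q = u + beta * EV
        V[t], delta[t] = Q.max(axis=1), Q.argmax(axis=1)
    return V, delta
```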

26 $\delta = (\delta_0, \ldots, \delta_T)$ maximizes
$$V_0(s) = \max_\delta E_\delta \big[ U_T(s, d) \mid s_0 = s \big].$$
This is called backward induction, and it can approximate the infinite horizon model:
$$\lim_{T \to \infty} \sup_\delta E_\delta\, U_T(s, d) = \sup_\delta E_\delta\, U(s, d).$$

27 Infinite horizon stationary case: time invariance, $\delta_t(s) \equiv \delta(s)$ and $V_t(s) \equiv V(s)$. More applicable in IO?
$$\delta(s) = \arg\max_{d \in D} \Big\{ u(s, d) + \beta \int V(s')\, dP(s' \mid s, d) \Big\},$$
$$V(s) = \max_{d \in D} \Big\{ u(s, d) + \beta \int V(s')\, dP(s' \mid s, d) \Big\}.$$

28 Contraction mapping properties:
$$\Big| \max_d \Big[ u(s,d) + \beta \int V(s')\, dP(s' \mid s, d) \Big] - \max_d \Big[ u(s,d) + \beta \int W(s')\, dP(s' \mid s, d) \Big] \Big|$$
$$\leq \max_d \Big| \beta \int V(s')\, dP(s' \mid s, d) - \beta \int W(s')\, dP(s' \mid s, d) \Big| \leq \max_d \beta \int |V(s') - W(s')|\, dP(s' \mid s, d) \leq \beta \sup_s |V(s) - W(s)|.$$
Taking the sup of the left hand side over $s$ establishes the contraction mapping property.

29 Define the functional mapping
$$\Gamma(W)[s] = \max_d \Big[ u(s, d) + \beta \int W(s')\, dP(s' \mid s, d) \Big].$$
Bellman equation: $V = \Gamma(V)$. Operationally, set $V^0(s) \equiv 0$, $V^1(s) = \Gamma(V^0)[s]$, and so on. This is the same as using $T$-period finite horizon backward induction to approximate the infinite horizon solution. How fast does it converge? How large does $T$ have to be as a function of the sample size?
$$V^0(s) = \max_\delta E_\delta\Big[ \sum_{t=0}^{\infty} \beta^t u(s_t, d_t) \mid s_0 = s \Big], \qquad V^0_T(s) = \max_\delta E_\delta\Big[ \sum_{t=0}^{T} \beta^t u(s_t, d_t) \mid s_0 = s \Big],$$
$$\lim_{T \to \infty} V^0_T(s) = V^0(s), \quad \forall s \in S.$$
This is called successive approximation.
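Successive approximation on a discrete state space, as a sketch (names are assumptions; the loop stops when the sup-norm change falls below a tolerance, exploiting the $\beta$-contraction shown above):

```python
import numpy as np

def successive_approximation(u, P, beta, tol=1e-10, max_iter=100_000):
    """Iterate V <- Gamma(V), with Gamma(V)[s] = max_d [u(s,d) + beta E[V(s')|s,d]].
    u: (S, D) utilities; P: (D, S, S) transitions P[d, s, s']."""
    S, D = u.shape
    V = np.zeros(S)
    for _ in range(max_iter):
        Q = u + beta * np.stack([P[d] @ V for d in range(D)], axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:    # sup-norm stopping rule
            break
        V = V_new
    return V_new, Q.argmax(axis=1)
```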

30 Discretization: $s = 1, \ldots, S$, with the utilities $u(i, \delta(i))$ stacked into an $S \times 1$ vector $u_\delta$, and the $S \times S$ transition probability matrix $E_\delta$ with $(i, j)$-th component $p[j \mid i, \delta(i)]$. Then write
$$\Gamma(V)[s] = \max_d \Big[ u(s, d) + \beta \sum_{s'=1}^{S} V(s')\, p(s' \mid s, d) \Big],$$
and, for a fixed policy $\delta$,
$$V_\delta = G_\delta(V_\delta) = u_\delta + \beta E_\delta V_\delta,$$
so that
$$V_\delta = (I - \beta E_\delta)^{-1} u_\delta = u_\delta + \beta E_\delta u_\delta + \beta^2 E_\delta^2 u_\delta + \cdots$$

31 This is called policy iteration:
$$\delta_k(s) = \arg\max_d \Big[ u(s, d) + \beta \sum_{s'=1}^{S} V_{\delta_{k-1}}(s')\, p(s' \mid s, d) \Big].$$
For each $k \geq 1$, compute $V_{\delta_{k-1}}(s')$ by the $S \times S$ matrix inversion, then move on to $\delta_k(s)$, and iterate. Alternative 1: Newton iteration (see Rust's code). Alternative 2: use a linear (or polynomial) approximation (Judd):
$$V(s) = c_1 \chi_1(s) + c_2 \chi_2(s) + \cdots + c_N \chi_N(s),$$
where $\hat c = \arg\min_c \| V(s) - \Gamma(V(s)) \|$.
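A sketch of policy iteration using the matrix-inversion policy evaluation from the previous slide (names and array shapes are assumptions):

```python
import numpy as np

def policy_iteration(u, P, beta, max_iter=100):
    """Alternate exact policy evaluation V = (I - beta E_delta)^{-1} u_delta
    with a greedy policy improvement step.
    u: (S, D) utilities; P: (D, S, S) transitions P[d, s, s']."""
    S, D = u.shape
    delta = np.zeros(S, dtype=int)
    for _ in range(max_iter):
        E = P[delta, np.arange(S), :]                    # transitions under delta
        u_d = u[np.arange(S), delta]
        V = np.linalg.solve(np.eye(S) - beta * E, u_d)   # policy evaluation
        Q = u + beta * np.stack([P[d] @ V for d in range(D)], axis=1)
        delta_new = Q.argmax(axis=1)                     # policy improvement
        if np.array_equal(delta_new, delta):
            break
        delta = delta_new
    return V, delta
```

Policy iteration typically converges in far fewer (but more expensive) steps than successive approximation.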

32 Econometric assumptions: stationary Markov case.
$$\delta(s) = d \iff u(s, d) + \beta \int V(s')\, dP(s' \mid s, d) \geq u(s, d') + \beta \int V(s')\, dP(s' \mid s, d') \quad \forall d'.$$
Introduce observed and unobserved state variables: $s = (x, \epsilon)$. We only observe conditional choice probabilities given the observable state variables:
$$P(d \mid x) = \int 1(\delta(s) = d)\, dP(s \mid x) = \int 1(d = \delta(s))\, dP(\epsilon \mid x).$$

33 Key assumptions. Additivity: $u(s, d) = u(x, d) + \epsilon(d)$. Conditional independence:
$$p(x_{t+1}, \epsilon_{t+1} \mid x_t, \epsilon_t, d_t) = p(x_{t+1} \mid x_t, d_t)\, p(\epsilon_{t+1} \mid x_{t+1}), \qquad p(\epsilon_t \mid x_t, d_t) = p(\epsilon_t):$$
$\epsilon_t$ is i.i.d. and independent of everything else. Bellman equation:
$$V(s) = V(x, \epsilon) = \max_{d \in D}\Big[ u(x, d) + \epsilon(d) + \beta \int\!\!\int V(y, \epsilon')\, dP(\epsilon')\, dP(y \mid x, d) \Big] = \max_{d \in D} [v(x, d) + \epsilon(d)],$$
where $v(x, d)$ is called the choice specific value function.

34 Ex ante value function (McFadden's social surplus function):
$$V(y) = G(v(y, 1), \ldots, v(y, D)) = \int V(y, \epsilon)\, dP(\epsilon) = \int \max_{d \in D} (v(y, d) + \epsilon(d))\, dP(\epsilon).$$
Relation between choice probability and social surplus:
$$\frac{\partial G(v(x, 1), \ldots, v(x, D))}{\partial v(x, i)} = p(d = i \mid x).$$
This is because
$$\frac{\partial}{\partial v(x, d)} \int \max_{d'}(v(x, d') + \epsilon(d'))\, dQ(\epsilon) = \int 1\Big(d = \arg\max_{d'} [v(x, d') + \epsilon(d')]\Big)\, dQ(\epsilon) = P(d \text{ is chosen} \mid x).$$

35 Under the extreme value assumption:
$$G(v(x, d), d \in D) = \log\Big( \sum_{d \in D} \exp(v(x, d)) \Big).$$
Choice specific value function iteration:
$$v(x, d) = u(x, d) + \beta \int \log\Big( \sum_{d' \in D} \exp(v(y, d')) \Big)\, dP(y \mid x, d).$$
Choice probabilities:
$$P(d \mid x) = \frac{e^{v(x, d)}}{\sum_{d' \in D} e^{v(x, d')}}.$$
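The choice specific value function iteration under extreme value errors, sketched on a discrete state space (names are assumptions):

```python
import numpy as np

def csvf_iteration(u, P, beta, tol=1e-12, max_iter=100_000):
    """Iterate v(x,d) = u(x,d) + beta * E[log sum_d' exp(v(y,d')) | x, d].
    u: (X, D) flow utilities; P: (D, X, X) transitions P[d, x, y]."""
    X, D = u.shape
    v = np.zeros((X, D))
    for _ in range(max_iter):
        m = v.max(axis=1)
        ev = m + np.log(np.exp(v - m[:, None]).sum(axis=1))  # stable logsumexp
        v_new = u + beta * np.stack([P[d] @ ev for d in range(D)], axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            break
        v = v_new
    return v_new

def choice_probs(v):
    """Logit choice probabilities P(d|x) = exp(v(x,d)) / sum_d' exp(v(x,d'))."""
    e = np.exp(v - v.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```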

36 Likelihood estimation:
$$L(\theta) = \prod_{a=1}^{A} \prod_{t=1}^{T} p(d_t^a \mid x_t^a; \theta_1, \theta_2)\, \pi(x_t^a \mid x_{t-1}^a, d_{t-1}^a; \theta_1).$$
Usually $\theta_1$ can be estimated using the transition process only:
$$\hat\theta_1 = \arg\max_{\theta_1} \prod_{a=1}^{A} \prod_{t=1}^{T} \pi(x_t^a \mid x_{t-1}^a, d_{t-1}^a; \theta_1).$$
For each $\theta$, compute $u_\theta(x, d)$ and calculate $v_\theta(x, d)$, either by backward induction or by value function iteration. Then form
$$p(d \mid x, \theta) = \frac{e^{v_\theta(x, d)}}{\sum_{d' \in D} e^{v_\theta(x, d')}}.$$

37 The utility specification used in Rust (1987)'s bus engine problem:
$$u_\theta(x, d) = \begin{cases} \theta_1 & d = 1 \\ \theta_2 x_t & d = 0 \end{cases}$$
Transition density specification:
$$\pi(x_{t+1} \mid x_t, d_t) = \begin{cases} g(x_{t+1}; \theta_3) & d_t = 1 \\ g(x_{t+1} \mid x_t; \theta_3) & d_t = 0 \end{cases}$$
Rust tried both $\beta = 0$ and $\beta = 0.9999$.

38 Hotz and Miller (1993, Review of Economic Studies). The idea in this paper predates many recent follow-up works. There is a one-to-one mapping between $p(d \mid x)$ and $\Delta(x, d) = v(x, d) - v(x, 1)$. In particular, under the extreme value assumption,
$$\Delta(x, d) = \log \frac{p(d \mid x)}{p(1 \mid x)}.$$

39 A two-step estimation procedure. Estimate the transition density $\hat\pi(x_{t+1} \mid x_t, d_t)$ first. Flexibly estimate $\hat\Delta(x, d)$; then you can compute the optimal choice rule
$$\hat\delta(x, \epsilon) = \arg\max_{d \in D} \big[ \hat\Delta(x, d) + \epsilon(d) - \epsilon(1) \big].$$
Simulate the entire path (one can take $T \to \infty$ for stationary models) of $x$, $d$ and $\epsilon$, and compute
$$\hat V_\theta(x, d) = \sum_{t=0}^{T} \beta^t \Big[ u_\theta\big(x_t, \hat\delta(x_t, \epsilon_t)\big) + \epsilon_t\big(\hat\delta(x_t, \epsilon_t)\big) \Big].$$
Use some norm, such as GMM, to match the flexibly estimated $\hat\Delta(x, d)$ to $\hat V_\theta(x, d) - \hat V_\theta(x, 1)$ through the best choice of $\theta$, using the previous step simulation.

40 Identifying Dynamic Discrete Decision Processes, Thierry Magnac and David Thesmar. Two period model. Choices $i \in I = \{1, \ldots, K\}$. State variable $h = (x, \epsilon)$, with $\epsilon = (\epsilon_1, \ldots, \epsilon_K)$. Period 1 utility: $u_i(x, \epsilon)$. Next period state variables $h' = (x', \epsilon')$ are drawn conditional on $h = (x, \epsilon)$.

41 Assumptions. Additive separability: $\epsilon$ is independent of $x$, and for all $i \in I$, $u_i(x, \epsilon) = u_i(x) + \epsilon_i$. Conditional independence: the random preference shocks $\epsilon$ and $\epsilon'$ at the two periods are independent, and independent of $x'$ given $x$ and $d = i$. Discrete support: the support of the first period state variable $x$ (resp. second period $x'$) is $X$ (resp. $X'$). The joint support $X \times X'$ is discrete and finite.

42 Transition of the $(x, \epsilon)$ process:
$$P(h' \mid h, d) = P(x', \epsilon' \mid x, d) = G(\epsilon')\, P(x' \mid x, d).$$
Bellman equation:
$$v_i(x, \epsilon) = u_i(x) + \epsilon_i + \beta E\Big( \max_j v_j(x', \epsilon') \mid x, d = i \Big).$$
Decompose $v_i(x, \epsilon) = v_i(x) + \epsilon_i$, where
$$v_i(x) = u_i(x) + \beta E\Big( \max_j \big(v_j(x') + \epsilon_j'\big) \mid x, d = i \Big).$$

43 What can be recovered from the data: for all $(d, d') \in I^2$ and $(x, x') \in X \times X'$,
$$P(d, x', d' \mid x) = P(d \mid x)\, P(x' \mid x, d)\, P(d' \mid x').$$
$P(d \mid x)$ and $P(x' \mid x, d)$ are nonparametrically specified and exactly identified from the data. Can $P(d' \mid x')$ and $P(d \mid x)$ be used to identify the structural parameters
$$b = \{u_1(X), \ldots, u_K(X), v_1'(X'), \ldots, v_K'(X'), G, \beta\},$$
where $f(X)$ is a shortcut for $\{f(x), x \in X\}$? For example, $u_1(X) = \{u_1(x), x \in X\}$.

44 The probability that the agent chooses $d = i$, given the structure $b$ and observable state variable $x$: for all $(x, i) \in X \times I$,
$$p_i(x; b) = P\Big( v_i(x; b) + \epsilon_i = \max_j \big(v_j(x; b) + \epsilon_j\big) \mid x, b \Big).$$
Definition of identification: compare the observable vector of choice probabilities, $p(x) = (p_1(x), \ldots, p_K(x))$, with the structure; i.e. the number of observable choice probabilities limits the number of parameters that can be identified.

45 Mapping between the observable choice probabilities and the vector of value functions: for all $(x, i) \in X \times I$,
$$v_i(x) = v_K(x) + q_i(p(x); G).$$
There is no loss of generality in setting $v_K'(X') = 0$. Then, for $i = 1, \ldots, K-1$:
$$v_i'(X') = q_i(p(X'); G).$$
There is also no loss of generality in setting $u_K(x) = 0$.

46 Expected second period value function: for $v'(x') = \{v_1'(x'), \ldots, v_K'(x')\}$, define
$$R(v'(x'); G) = E_G\Big( \max_{i \in I} \big( v_i'(x') + \epsilon_i' \big) \Big).$$
Decompose the first period total utility function:
$$v_i(x) = u_i(x) + \beta E\big[ R(v'(x'); G) \mid x, d = i \big].$$
Recall $q_i(x) = v_i(x) - v_K(x)$, and since $u_K(x) = 0$:
$$v_K(x) = \beta E\big[ R(v'(x'); G) \mid x, d = K \big].$$

47 Combine these relations:
$$q_i(x) = v_i(x) - v_K(x) = u_i(x) + \beta E\big[ R(v'(x'); G) \mid x, d = i \big] - \beta E\big[ R(v'(x'); G) \mid x, d = K \big].$$
Therefore $u_i(x)$ can be recovered by
$$u_i(x) = q_i(x) - \beta E\big[ R(v'(x'); G) \mid x, d = i \big] + \beta E\big[ R(v'(x'); G) \mid x, d = K \big].$$
So, given $\beta$, $G$, $u_K(x) = 0$ and $v_K'(x') = 0$, the other utility functions $u_i(x)$, $i = 1, \ldots, K-1$, and the other second period value functions $v_i'(x')$, $i = 1, \ldots, K-1$, can be identified. Estimation follows from identification.

48 Exclusion restrictions might identify $\beta$: if there exist $(x_1, x_2) \in X^2$ and $i \in I$ such that $x_1 \neq x_2$ and $u_i(x_1) = u_i(x_2)$, but $x_1$ and $x_2$ still generate different transition probabilities $P(x' \mid x, d)$, then $q_i(x_1)$ should differ from $q_i(x_2)$.

49 Identifying $\beta$: solve
$$q_i(x_1) - \beta E\big[ R(v'(x'); G) \mid x_1, d = i \big] + \beta E\big[ R(v'(x'); G) \mid x_1, d = K \big]$$
$$- \Big\{ q_i(x_2) - \beta E\big[ R(v'(x'); G) \mid x_2, d = i \big] + \beta E\big[ R(v'(x'); G) \mid x_2, d = K \big] \Big\} = 0.$$
Parametric restrictions: the authors say they can be used to identify $G$. Is this true?

50 Unobserved heterogeneity (fixed effect):
$$v_i(x, \theta) = u_i(x, \theta) + \beta E\Big( \max_j \big(v_j'(x', \theta) + \epsilon_j'\big) \mid d = i, x, \theta \Big).$$
$\theta$ takes two values, 0 and 1, with probabilities $r(0)$ and $r(1)$. There are two alternatives only: $d = 0, 1$. Let $p(x, \theta) \equiv P(d = 1 \mid x, \theta)$ and $s(x) = P(d = 1 \mid x)$, unconditional on $\theta$. State 2 in the second period is absorbing. $\theta$ does not affect the transition probabilities, $P(x' \mid x, d, \theta) = P(x' \mid x, d)$, and $G(\epsilon \mid \theta) = G(\epsilon)$. $\theta$ is independent of $x$.

51 Conditional on unobserved heterogeneity:
$$u_1(x, \theta) = q(p(x, \theta); G) - \beta E\big[ R\big( q(p(x', \theta); G); G \big) \mid d = 1, x \big].$$
Unconditional (on unobserved heterogeneity) choice probabilities:
$$s(x) = p(x, 1)\, r(1) + p(x, 0)\, r(0),$$
and, conditioning on $x$ and $d = 1$ (so that the mixing weights become the posterior probabilities of $\theta$),
$$s(x') = p(x', 1)\, \frac{p(x, 1)\, r(1)}{s(x)} + p(x', 0)\, \frac{p(x, 0)\, r(0)}{s(x)}.$$
If you know $r(1)$, hence $r(0)$, and if you know $p(x, 1)$ and $p(x', 1)$, then you can identify $p(x, 0)$ and $p(x', 0)$, and identify $u_1(x, \theta)$ in turn. Or you can assume $p(x', 1) = p(x', 0) = s(x')$ instead.
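The algebra of the last two displays can be inverted directly. A sketch (function and argument names are hypothetical) that recovers the $\theta = 0$ choice probabilities from the observed mixtures, given $r(1)$ and the $\theta = 1$ probabilities:

```python
def unmix_choice_probs(s_x, s_xp, p_x1, p_xp1, r1):
    """Invert s(x) = p(x,1) r(1) + p(x,0) r(0) and the posterior-weighted
    second period mixture to recover p(x,0) and p(x',0)."""
    r0 = 1.0 - r1
    p_x0 = (s_x - r1 * p_x1) / r0               # first period mixture
    w1 = p_x1 * r1 / s_x                        # posterior P(theta=1 | x, d=1)
    w0 = p_x0 * r0 / s_x
    p_xp0 = (s_xp - p_xp1 * w1) / w0            # second period mixture
    return p_x0, p_xp0
```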

52 Introducing Dynamics into Incomplete Information Games. Players are forward looking: infinite horizon, stationary, Markov transitions. Players now maximize expected discounted utility with discount factor $\beta$:
$$W_i(s, \epsilon_i; \sigma) = \max_{a_i \in A_i} \Big\{ \Pi_i(a_i, s) + \epsilon_i(a_i) + \beta \sum_{a_{-i}} \sigma_{-i}(a_{-i} \mid s) \int W_i(s', \epsilon_i'; \sigma)\, g(s' \mid s, a_i, a_{-i})\, f(\epsilon_i')\, d\epsilon_i'\, ds' \Big\}.$$
Definition: a Markov Perfect Equilibrium is a collection of $\delta_i(s, \epsilon_i)$, $i = 1, \ldots, n$, such that for all $i$, all $s$ and all $\epsilon_i$, $\delta_i(s, \epsilon_i)$ maximizes $W_i(s, \epsilon_i; \sigma_i, \sigma_{-i})$. Equilibrium existence: Pesendorfer and Schmidt-Dengler (2003), Jenkins, Liu, McFadden and Matzkin (2007).

53 Conditional independence: $\epsilon$ distributed i.i.d. over time; state variables evolve according to $g(s' \mid s, a_i, a_{-i})$. Define the choice specific value function
$$V_i(a_i, s) = \Pi_i(a_i, s) + \beta E[\bar V_i(s') \mid s, a_i].$$
Players choose $a_i$ to maximize $V_i(a_i, s) + \epsilon_i(a_i)$. Ex ante value function (social surplus function):
$$\bar V_i(s) = E_{\epsilon_i} \max_{a_i} [V_i(a_i, s) + \epsilon_i(a_i)] = G(V_i(a_i, s), a_i = 0, \ldots, K) = G(V_i(a_i, s) - V_i(0, s), a_i = 1, \ldots, K) + V_i(0, s).$$

54 When the error terms are extreme value distributed:
$$\bar V_i(s) = \log \sum_{k=0}^{K} \exp(V_i(k, s) - V_i(0, s)) + V_i(0, s) = -\log \sigma_i(0 \mid s) + V_i(0, s).$$
Relationship between $\Pi_i(a_i, s)$ and $V_i(a_i, s)$:
$$V_i(a_i, s) = \Pi_i(a_i, s) + \beta E\big[ G(V_i(a_i', s'), a_i' = 0, \ldots, K) \mid s, a_i \big] = \Pi_i(a_i, s) + \beta E\big[ G(V_i(k, s') - V_i(0, s'), k = 1, \ldots, K) \mid s, a_i \big] + \beta E[V_i(0, s') \mid s, a_i].$$
With extreme value distributed error terms:
$$V_i(a_i, s) = \Pi_i(a_i, s) - \beta E[\log \sigma_i(0 \mid s') \mid s, a_i] + \beta E[V_i(0, s') \mid s, a_i].$$

55 Hotz and Miller (1993): one to one mapping between $\sigma_i(a_i \mid s)$ and differences in choice specific value functions:
$$(V_i(1, s) - V_i(0, s), \ldots, V_i(K, s) - V_i(0, s)) = \Omega_i(\sigma_i(0 \mid s), \ldots, \sigma_i(K \mid s)).$$
Example: i.i.d. extreme value $f(\epsilon_i)$:
$$\sigma_i(a_i \mid s) = \frac{\exp(V_i(a_i, s) - V_i(0, s))}{\sum_{k=0}^{K} \exp(V_i(k, s) - V_i(0, s))}.$$
Inverse mapping:
$$\log(\sigma_i(k \mid s)) - \log(\sigma_i(0 \mid s)) = V_i(k, s) - V_i(0, s).$$
Since we can recover $V_i(k, s) - V_i(0, s)$, we only need to know $V_i(0, s)$ to recover every $V_i(k, s)$. If we know $V_i(0, s)$, the mapping between $V_i(a_i, s)$ and $\Pi_i(a_i, s)$ is one to one.

56 Identify $V_i(0, s)$ first. Set $a_i = 0$:
$$V_i(0, s) = \Pi_i(0, s) - \beta E[\log \sigma_i(0 \mid s') \mid s, 0] + \beta E[V_i(0, s') \mid s, 0].$$
This is a single agent contraction mapping with a unique fixed point, solvable by iteration. Add $V_i(0, s)$ to $V_i(k, s) - V_i(0, s)$ to identify all $V_i(k, s)$. Then all $\Pi_i(k, s)$ are calculated from $V_i(k, s)$ through
$$\Pi_i(k, s) = V_i(k, s) - \beta E[\bar V_i(s') \mid s, k].$$
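The single agent contraction for $V_i(0, s)$ can be solved by direct iteration on a discrete state space. A sketch (names are assumptions; $\hat\sigma_i(0 \mid s)$ and the action-0 transition matrix are taken as first stage inputs):

```python
import numpy as np

def solve_v0(pi0, sigma0, P0, beta, tol=1e-12, max_iter=100_000):
    """Iterate V0 <- pi0 - beta * P0 @ log(sigma0) + beta * P0 @ V0.
    pi0: (S,) flow payoff of action 0 (zero under the usual normalization)
    sigma0: (S,) probability of action 0; P0: (S, S) transitions given a_i = 0."""
    log_term = P0 @ np.log(sigma0)
    v0 = np.zeros_like(pi0)
    for _ in range(max_iter):
        v0_new = pi0 - beta * log_term + beta * (P0 @ v0)
        if np.max(np.abs(v0_new - v0)) < tol:
            break
        v0 = v0_new
    return v0_new
```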

57 Why normalize $\Pi_i(0, s) = 0$ rather than $V_i(0, s) = 0$? If a firm stays out of the market in period $t$, its current profit is 0, but the option value of future entry might depend on market size, the number of other firms, etc., and these state variables might evolve stochastically. The rest of the identification argument is identical to the static model.

58 Nonparametric and Semiparametric Estimation. The Hotz-Miller inversion recovers $V_i(k, s) - V_i(0, s)$ instead of $\Pi_i(k, s) - \Pi_i(0, s)$. Nonparametrically compute $V_i(0, s)$ using
$$\hat V_i(0, s) = -\beta \hat E[\log \hat\sigma_i(0 \mid s') \mid s, 0] + \beta \hat E[\hat V_i(0, s') \mid s, 0].$$
Obtain $\hat V_i(k, s)$ and forward compute $\hat\Pi_i(k, s)$. The rest is identical to the static model.

59 Efficient method of estimation. Nested fixed point MLE should achieve parametric efficiency; a flexible sieve MLE should also be efficient when the transition density is nonparametrically specified. Both can be intensive to compute. Alternative: view the value function iteration as a conditional moment model when the transition density is nonparametric:
$$V_i(a_i, s) = E\Big[ \Pi_i(a, s_i; \gamma) + \beta \log \sum_{l=0}^{K} \exp(V_i(l, s')) \,\Big|\, a_i, s \Big].$$
Combine this with the conditional moment
$$E(1(a_i = k) \mid s_{m,t}) = \sigma_i(k \mid s_{m,t}) = \frac{\exp(V_i(k, s))}{\sum_{l=0}^{K} \exp(V_i(l, s))}.$$
Make use of techniques from the sieve/conditional moment literature (with possibly different conditioning sets), e.g. Newey and Powell, Ai and Chen, Chen and Pouzo.

60 Nonparametric estimation. Using kernel or sieve methods to account for continuous state variables in two step estimators is important. When the dimension of the continuous state variables is at least 4, discretizing the state space in the first stage of nonparametric estimation will not produce $\sqrt{n}$ consistent parameter estimates in the second stage for the payoff functions, unless an elaborate weighting scheme is used to combine the first stage estimates at different discretized points (similar to a higher order kernel), e.g. Hong, Mahajan and Nekipelov (2010). For example, let the distance between two grid points be $h$. The bias is of order $h^2$, so we need $\sqrt{n}\, h^2 \to 0$; we also need $n h^d \to \infty$. This is not possible if $d \geq 4$.

61 Equilibrium Computation. Policy simulation still requires computing all equilibria. One approach is to view equilibrium computation as embedded in a constrained optimization program, e.g. Doraszelski, Judd, and Center (2008), Judd and Su (2006). The alternative all-solution homotopy method can be useful; it is discussed in, for example, Nekipelov (2010), Borkovsky, Doraszelski, and Kryukov (2009), Besanko and Doraszelski (2004) and Bajari, Hong, Krainer and Nekipelov (2009).

62 Extensions of incomplete information models: nonstationary data and finite horizon. Taber (2000) and Magnac and Thesmar (2002). A finite horizon model can be stationary or nonstationary, but is typically nonstationary, with exact identification requiring panel data. Typically assume no continuation value beyond the end period. The utility of one of the choices in the last period needs to be normalized to zero.

63 For dynamic models, zero utility for one choice might not be a good assumption in some applications. For static models, only the differences in utilities across choices are identified. For dynamic models, only a linear combination of the utilities under different choices across the state variables can be identified; the exact form of the linear combination depends on the transition probability matrices. For some applications, parametric models can be more appropriate than normalizing one of the utilities to zero.

64 Identifying discount rates. Magnac and Thesmar (2002), Aguirregabiria et al. (2007), etc. Insight: we need excluded variables that affect the transition but not the static utility. Intuition: individuals with different values of the excluded variables behave similarly if they are myopic but differently if they are forward looking. Formally, find the discount rate value such that the nonparametrically recovered static utility is the same for different values of the excluded variable. Identification also applies to hyperbolic discounting models, Fang and Wang (2009). Fernandez-Villaverde et al.: it is difficult to distinguish discount rate heterogeneity from myopic or hyperbolic behavior.

65 Irregular discrete observations. The observation time interval can be different from the model time interval, e.g. annual decisions but observations only once every two years. Most researchers will assume that they are equal (e.g. using a model with decisions once every two years), with a suitably adjusted discount rate, but this is not necessarily consistent. The correct approach (in the context of stationary models): first estimate the multi-period transition density from the data; recover the single period transition density from the multi-period one and the estimated choice probability function; use the single period transition density for subsequent estimation steps, e.g. by simulating data from it.

66 Continuous time models. Doraszelski and Judd (2007): players move sequentially, at random times determined by nature. A sequential move continuous time model is easier for simulating counterfactual policy outcomes. Bayer, Arcidiacono, Blevins, and Ellickson (2010): two step estimators can also be used for continuous time models, and the nonparametric identification argument also goes through. Replace the discrete time contraction mapping by, e.g.,
$$V_i(a_i, s) = \Pi_i(a_i, s) + \int e^{-\rho t}\, E\big[ \bar V_i(s') \mid s, a_i, t \big]\, f(t)\, dt,$$
where $f(t)$ is nature's distribution of the random arrival time of the next move. Both the conditional expectation above and $f(t)$ can be identified and presumably estimated from the data.

67 Unobserved heterogeneity. Some of the state variables that are observed by all the players in the model might not be observed by the econometrician. Partition the state variables into $(s, u)$, where $s$ is observed by the econometrician but $u$ is not. We focus on the case where the players know more than the econometrician; it is also conceivable that the econometrician has more information at the time of analysis, e.g. Aradillas-Lopez (2005). Both the period utility $\Pi_i(a_i, a_{-i}, s, u)$ and the transition process $g(s', u' \mid s, a, u)$ can potentially be contaminated by unobserved heterogeneity. The discount rate can also depend on observed and unobserved heterogeneity.

68 Unobserved heterogeneity: identification. Kasahara and Shimotsu (2008), Hu and Shum (2008). Insight: identification is a functional mapping,
$$\big[\beta, \sigma_i(a \mid s), g(s' \mid s, a)\big] \mapsto \Pi_i(a_i, a_{-i}, s), \quad \text{or} \quad \big[\sigma_i(a \mid s), g(s' \mid s, a)\big] \mapsto \big[\beta, \Pi_i(a_i, a_{-i}, s)\big].$$
Therefore it suffices to identify the unobserved heterogeneity in the reduced forms $\sigma_i(a \mid s, u)$ and $g(s', u' \mid s, a, u)$. Intuition: unobserved heterogeneity induces serial correlation, and can potentially be identified with panel data. This is challenging with possibly continuous unobserved heterogeneity that can be serially correlated.

69 Unobserved heterogeneity: estimation. Markov chain Monte Carlo (MCMC) Bayesian methods are useful for estimating models with unobserved heterogeneity. Imai, Jain and Ching (2009): update the value function once for each parameter iteration along the MCMC chain. Norets (2009): Gibbs sampling to incorporate serially correlated unobserved heterogeneity. Chernozhukov and Hong (2003): MCMC methods are applicable for both frequentist and Bayesian point estimators and inference.

70 Unobserved heterogeneity: estimation. Computational challenge of Bayesian estimators: computing the expected surplus next period given the current state variables and actions. Kernel methods (Imai, Jain and Ching (2009)); nearest neighborhood and importance samplers (Norets (2009)); parametric numerical quadrature (Gallant, Hong and Khwaja (2008)). Extending Bayesian methods for DDC models to dynamic games requires a solution algorithm: both computationally intensive and subject to issues with multiple equilibria.

71 Simulating from the conditional posterior distribution of parameters and latent state variables. Norets (2009) and Imai, Jain and Ching (2009): MCMC for both. Alternative: the sequential importance sampler, also called the particle filter. The importance sampler does not appear to work well for parameters, but it may work well for the unobserved state variables, which have larger dimension and whose posterior does not concentrate as the sample size increases. Particle filters: Fernandez-Villaverde and Rubio-Ramirez (2007) for DSGE models, Blevins (2009) for incomplete information games in combination with two-step estimators, Gallant, Hong and Khwaja (2009) for complete information games. 71 / 105

72 Particle filtering Sequential conditioning versus integration. The particle filter draws from the sequential conditional distribution of the latent state variables given the data and parameters. Useful for both filtering and prediction. The draws can be integrated to form the likelihood for each parameter value; the likelihood can then be maximized or run through MCMC. Alternative: the EM algorithm of Arcidiacono and Miller (2006). Even when parameters are estimated by other methods (GMM etc.), particle filtering can still be useful if the latent variable distribution is of interest in itself. 72 / 105

73 Unobserved heterogeneity: flexible estimation Based on the insights in Kasahara and Shimotsu (2008), Hu and Shum (2008). One can nonparametrically estimate σ̂_i(a_i | s, u) and ĝ(s', u' | a, s, u), for example by a flexible parametric model σ_i(a_i | s, u, θ_n) and g(s', u' | a, s, u, γ_n). Once the estimates σ̂_i(a_i | s, u) and ĝ(s', u' | a, s, u) are available, they can be mapped into the mean payoff functions Π_i(a | s, u), and possibly the discount rate and the error distribution. For example, one can simulate data from σ̂_i(a_i | s, u) and ĝ(s', u' | a, s, u) and apply a number of multi-step estimators. The reduced forms σ̂_i(a_i | s, u) and ĝ(s', u' | a, s, u) can be estimated using a variety of methods including MCMC, data augmentation, the sequential importance sampler, or the EM algorithm. 73 / 105

74 Unobserved heterogeneity Consider the single agent dynamic model first. Random coefficient model: Dan Ackerberg, A New Use of Importance Sampling to Reduce Computational Burden in Simulation Estimation. Working paper. Bayesian methods: Imai et al., Bayesian Estimation of Dynamic Discrete Choice Models. Working paper. Andriy Norets, Dynamic Discrete Choice Models with Serially Correlated Unobservables. Working paper. 74 / 105

75 Ackerberg 2001 Static utility: x_i'u + ε_i, with ε_i extreme value. Forward looking agent with discounting. Random coefficient: u ~ g(u | x_i, θ). Moments to match: (1/n) Σ_{i=1}^n [1(y_i = a) − P(y_i = a | x_i, θ)] t(x_i). Conditional choice probabilities: P(y_i = a | x_i, θ) = ∫ [e^{V(a, x_i, u)} / Σ_{a' ∈ A} e^{V(a', x_i, u)}] g(u | x_i, θ) du = ∫ [e^{V(a, x_i, u)} / Σ_{a' ∈ A} e^{V(a', x_i, u)}] [g(u | x_i, θ) / q(u | x_i)] q(u | x_i) du 75 / 105

76 Using S draws u_is, s = 1, ..., S from q(u | x_i), the conditional choice probability can be simulated as: P̂(y_i = a | x_i, θ) = (1/S) Σ_{s=1}^S [e^{V(a, x_i, u_is)} / Σ_{a' ∈ A} e^{V(a', x_i, u_is)}] [g(u_is | x_i, θ) / q(u_is | x_i)] Simulated moment conditions: (1/n) Σ_{i=1}^n [1(y_i = a) − P̂(y_i = a | x_i, θ)] t(x_i), which is (1/n) Σ_{i=1}^n [1(y_i = a) − (1/S) Σ_{s=1}^S [e^{V(a, x_i, u_is)} / Σ_{a' ∈ A} e^{V(a', x_i, u_is)}] [g(u_is | x_i, θ) / q(u_is | x_i)]] t(x_i). 76 / 105

77 Separation of simulation and value function computation from the estimation step. Draw u_is, i = 1, ..., n, s = 1, ..., S before estimation starts. Compute all V(a, x_i, u_is) for all i = 1, ..., n and s = 1, ..., S beforehand. There is no need to recompute V(a, x_i, u_is) when estimating θ: θ only reweights the density ratio g(u_is | x_i, θ) / q(u_is | x_i) during optimization. The same logic applies to simulated MLE and to Bayesian analysis. 77 / 105
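The separation trick above can be sketched in a few lines of Python. This is a stylized stand-in, not Ackerberg's code: a static logit takes the place of the expensive value function, and the normal densities for g and q (and all constants) are hypothetical. The point is that the "expensive" object is computed once and only the importance weights change with θ:

```python
import numpy as np

rng = np.random.default_rng(2)

def norm_pdf(z, mu, sd):
    return np.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

n, S = 200, 500
x = rng.normal(size=n)

# Importance density q(u): chosen wide so it covers g(u | theta) for all candidate theta.
u = rng.normal(0.0, 2.0, size=(n, S))          # u_is draws, fixed once before estimation
q_dens = norm_pdf(u, 0.0, 2.0)

# "Expensive" choice probabilities Pr(y=1 | x_i, u_is): computed once, never recomputed.
# In a dynamic model these would come from solving the value function at each u_is.
ccp_given_u = 1.0 / (1.0 + np.exp(-u * x[:, None]))

def simulated_ccp(theta):
    # Only the weights g(u | theta) / q(u) change with theta; the CCPs above are reused.
    w = norm_pdf(u, theta, 1.0) / q_dens
    return (ccp_given_u * w).mean(axis=1)

p_a = simulated_ccp(0.5)
p_b = simulated_ccp(1.5)
```

Each call to `simulated_ccp` is just a reweighting, so an optimizer or an MCMC chain can evaluate thousands of candidate θ values at trivial cost.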

78 Norets 2006 Bayesian method: serially correlated unobserved state variables. s_t = (y_t, x_t), with x_t observed and y_t not observed. Use the Gibbs sampler: given θ, d_t, x_t, draw y_t; given d_t, x_t, y_t, draw θ. Joint likelihood: π(θ) p(y_{T,i}, x_{T,i}, d_{T,i}, ..., y_{1,i}, x_{1,i}, d_{1,i} | θ) = π(θ) Π_{t=1}^T p(d_{t,i} | y_{t,i}, x_{t,i}, θ) f(x_{t,i}, y_{t,i} | x_{t−1,i}, y_{t−1,i}, d_{t−1,i}; θ). The conditional choice probability can be an indicator: p(d_{t,i} | y_{t,i}, x_{t,i}, θ) = 1(V(y_{t,i}, x_{t,i}, d_{t,i}; θ) ≥ V(y_{t,i}, x_{t,i}, d; θ), ∀ d ∈ D). 78 / 105
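The two-block Gibbs structure (draw latent states given θ, draw θ given latent states) can be illustrated with a deliberately stripped-down data augmentation example. Norets' model has serially correlated latents and a value function inside the likelihood; here the latent states are serially independent normals and both conditionals are conjugate, so the alternation itself is the only content:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 400
theta_true = 2.0                              # hypothetical true parameter
y_latent = theta_true + rng.normal(size=T)    # latent states y_t ~ N(theta, 1)
x = y_latent + rng.normal(size=T)             # noisy observations x_t = y_t + noise

# Gibbs sampler: alternate between the latent states y and the parameter theta.
theta = 0.0
theta_draws = []
for it in range(2000):
    # Block 1: draw y_t | x_t, theta.  Here the posterior is N((theta + x_t)/2, 1/2).
    y = 0.5 * (theta + x) + np.sqrt(0.5) * rng.normal(size=T)
    # Block 2: draw theta | y.  With a flat prior, the posterior is N(mean(y), 1/T).
    theta = y.mean() + rng.normal() / np.sqrt(T)
    theta_draws.append(theta)

theta_hat = np.mean(theta_draws[500:])        # posterior mean after burn-in
```

In the dynamic discrete choice version, block 1 must respect the inequality constraints implied by the observed choices d_t, and block 2 typically needs a Metropolis-Hastings step rather than a conjugate draw.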

79 Break the unobservables into serially independent and serially dependent components: y_t = (ν_t, ε_t), f(x_{t+1}, ν_{t+1}, ε_{t+1} | x_t, ν_t, ε_t, d; θ) = p(ν_{t+1} | x_{t+1}, ε_{t+1}; θ) p(x_{t+1}, ε_{t+1} | x_t, ε_t, d; θ). The joint likelihood becomes π(θ) Π_{i,t} p(d_{t,i} | V_{t,i}) p(V_{t,i} | x_{t,i}, ε_{t,i}; θ) p(x_{t,i}, ε_{t,i} | x_{t−1,i}, ε_{t−1,i}, d_{t−1,i}; θ). V_{t,i} can be drawn analytically conditional on (x_{t,i}, ε_{t,i}; θ), subject to the constraints specified by the p(d_{t,i} | V_{t,i}) indicators. θ and ε_{t,i} are drawn using Metropolis-Hastings steps. 79 / 105

80 Analytic drawing of V_{t,i} = {V_{t,d,i} = V(s_{t,i}, d; θ), d ∈ D}, where s = (x, ε, ν), requires value function updating. For example: u(s_{t,i}, d; θ) = u(x_{t,i}, d; θ) + ν_{t,d,i} + ε_{t,d,i}. Then V(s_{t,i}, d; θ) = u(x_{t,i}, d; θ) + ν_{t,d,i} + ε_{t,d,i} + β E[V(s_{t+1}; θ) | ε_{t,i}, x_{t,i}, d; θ]. At every step θ^m, the expected value function E[V(s_{t+1}; θ^m) | ε^m_{t,i}, x^m_{t,i}, d; θ^m] is updated by averaging over the recent history of θ draws on the MCMC chain and over the importance sampling draws of the states. 80 / 105

81 How to update the approximate value function V^m(s^{m,j}; θ^m): V^m(s; θ^m) = max_{d ∈ D} {u(s, d; θ^m) + β E^{(m)}[V(s'; θ^m) | s, d; θ^m]}. At each iteration m: Draw random states s^{m,j}, j = 1, ..., N̂(m) from an i.i.d. density g(·) > 0. Only keep track of the history of length N(m): {θ^k; s^{k,j}, V^k(s^{k,j}; θ^k), j = 1, ..., N̂(k)}, k = m − N(m), ..., m − 1. In this history, find the Ñ(m) parameter draws closest to θ. Only the value functions in the importance sampling history that correspond to these nearest neighbors are used in the approximation by averaging. 81 / 105

82 Update the value function as: E^{(m)}[V(s'; θ) | s, d; θ] = Σ_{i=1}^{Ñ(m)} Σ_{j=1}^{N̂(k_i)} V^{k_i}(s^{k_i,j}; θ^{k_i}) [f(s^{k_i,j} | s, d; θ) / g(s^{k_i,j})] / [Σ_{r=1}^{Ñ(m)} Σ_{q=1}^{N̂(k_r)} f(s^{k_r,q} | s, d; θ) / g(s^{k_r,q})] = Σ_{i=1}^{Ñ(m)} Σ_{j=1}^{N̂(k_i)} V^{k_i}(s^{k_i,j}; θ^{k_i}) W_{k_i,j,m}(s, d, θ). The weights simplify with i.i.d. known unobservable components, and the expected max value function can be integrated out with extreme value errors. 82 / 105
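A minimal sketch of this nearest-neighbor importance-weighted average, under stated assumptions: the stored "value function" is a hypothetical quadratic V(s; θ) = θ·s + s², the state proposal g is N(0, 3), and the target transition f(s' | s, d; θ) is N(0, 1), so the exact expected value at θ = 1 is E[s' + s'²] = 1:

```python
import numpy as np

rng = np.random.default_rng(3)

def norm_pdf(z, mu, sd):
    return np.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Stored history from earlier iterations: (theta_k, states s^{k,j} ~ g, values V^k(s^{k,j})).
g_sd = 3.0
history = []
for _ in range(200):
    theta_k = rng.uniform(0.0, 2.0)
    s_kj = rng.normal(0.0, g_sd, size=50)             # i.i.d. draws from the proposal g
    history.append((theta_k, s_kj, theta_k * s_kj + s_kj ** 2))

def expected_value(theta, s_next_mean, n_neighbors=20):
    # Approximate E[V(s'; theta)] with s' ~ f = N(s_next_mean, 1): average the stored
    # values over the n_neighbors histories with theta_k closest to theta, reweighted
    # by the density ratio f(s' | s, d; theta) / g(s').
    nearest = sorted(history, key=lambda h: abs(h[0] - theta))[:n_neighbors]
    s = np.concatenate([h[1] for h in nearest])
    v = np.concatenate([h[2] for h in nearest])
    w = norm_pdf(s, s_next_mean, 1.0) / norm_pdf(s, 0.0, g_sd)
    return np.sum(v * w) / np.sum(w)

approx = expected_value(1.0, 0.0)   # exact answer is 1.0 in this toy setup
```

The bias of the method comes from averaging values computed at nearby θ^k rather than at θ itself; it vanishes as the chain accumulates draws close to θ.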

83 Particle filtering Used in macro dynamic models by Fernandez-Villaverde and Rubio-Ramirez. Particle filtering is an importance sampling method. x is the latent state, y is the observable random variable: we are interested in the posterior distribution of x given y. Want to compute E(t(X) | y) = ∫ t(x) p(x | y) dx = ∫ t(x) [p(y | x) p(x) / p(y)] dx = [∫ t(x) (p(y | x) p(x) / g(x)) g(x) dx] / [∫ (p(y | x) p(x) / g(x)) g(x) dx] 83 / 105

84 If we have r = 1, ..., R draws from the density g(x), then we can approximate E(t(X) | y) ≈ [(1/R) Σ_{r=1}^R t(x_r) w(x_r | y)] / [(1/R) Σ_{r=1}^R w(x_r | y)], where w(x_r | y) = p(y | x_r) p(x_r) / g(x_r). Or one can write E(t(X) | y) ≈ Σ_{r=1}^R t(x_r) w̄(x_r | y), where w̄(x_r | y) = w(x_r | y) / Σ_{r=1}^R w(x_r | y). 84 / 105

85 In other words, given R draws from g(x), t(X) is integrated against a discrete distribution that places weight w̄(x_r | y) on each of the r = 1, ..., R points of the discrete support. Alternatively, one can compute E(t(X) | y) ≈ (1/R) Σ_{r=1}^R t(x̃_r), where x̃_r, r = 1, ..., R are R draws from the weighted empirical distribution on the x_r, r = 1, ..., R, with weights w̄(x_r | y). When g(x) = p(x), w(x | y) = p(y | x), and w̄(x_r | y) = p(y | x_r) / Σ_{r=1}^R p(y | x_r). 85 / 105
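Both views of the weighted approximation can be checked on a conjugate example where the exact posterior is known. The model below (x ~ N(0, 1), y | x ~ N(x, 1), proposal g = N(0, 2)) is a hypothetical illustration; its exact posterior given y = 1 is N(0.5, 0.5), so E(X | y) = 0.5:

```python
import numpy as np

rng = np.random.default_rng(4)

def norm_pdf(z, mu, sd):
    return np.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Target: posterior of x given y, with x ~ N(0,1), y | x ~ N(x,1), observed y = 1.
y = 1.0
R = 20000
x = rng.normal(0.0, 2.0, size=R)                 # draws from the proposal g(x) = N(0, 2)

# Unnormalized weights w(x_r | y) = p(y | x_r) p(x_r) / g(x_r).
w = norm_pdf(y, x, 1.0) * norm_pdf(x, 0.0, 1.0) / norm_pdf(x, 0.0, 2.0)
w_bar = w / w.sum()                              # self-normalized weights

post_mean = np.sum(x * w_bar)                    # weighted-sum view of E(X | y)

# Equivalent resampling view: draw from the weighted empirical distribution.
x_resampled = rng.choice(x, size=R, p=w_bar)
post_mean_resampled = x_resampled.mean()
```

The two estimates agree up to resampling noise, which is the same equivalence the particle filter exploits at each time step.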

86 In a Bayesian setup, the coefficients θ are the latent state variables x, and p(x) is then the prior distribution of θ. A random coefficient model is just like a hierarchical Bayesian model where µ collects the hyper-parameters of the prior distribution, and the hyper-parameters are to be estimated. The prior for θ can be made dependent on both µ and the covariates (state variables) s. Computing the weights p(y | θ) involves value function iteration and is computationally difficult. If we use a GMM objective function instead of the likelihood to form p(y | θ), there might be some computational savings because only one simulation r is needed for each observation. 86 / 105

87 A more interesting case is when there are dynamic unobservable state variables. Suppose s = (x, v), such that x is observed but v is not. We might be interested in the posterior distribution of the entire sample path of v, in addition to those of the parameters θ. Particle filtering can be used in this case. How can computation be maximally sped up? Do particles lend themselves to parallel processing? 87 / 105

88 p(v_{0:t} | y_{1:t}, x_{0:t}) = p(v_{0:t}, y_{1:t}, x_{0:t}) / p(y_{1:t}, x_{0:t}) = [p(v_{0:t−1}, y_{1:t−1}, x_{0:t−1}) p(v_t, y_t, x_t | v_{0:t−1}, y_{1:t−1}, x_{0:t−1})] / [p(y_{1:t−1}, x_{0:t−1}) p(y_t, x_t | y_{1:t−1}, x_{0:t−1})] = p(v_{0:t−1} | y_{1:t−1}, x_{0:t−1}) p(v_t, y_t, x_t | v_{0:t−1}, y_{1:t−1}, x_{0:t−1}) / p(y_t, x_t | y_{1:t−1}, x_{0:t−1}) = p(v_{0:t−1} | y_{1:t−1}, x_{0:t−1}) p(y_t | x_t, v_t) p(v_t, x_t | v_{0:t−1}, y_{1:t−1}, x_{0:t−1}) / p(y_t, x_t | y_{1:t−1}, x_{0:t−1}) = p(v_{0:t−1} | y_{1:t−1}, x_{0:t−1}) p(y_t | x_t, v_t) p(v_t, x_t | v_{t−1}, y_{t−1}, x_{t−1}) / p(y_t, x_t | y_{1:t−1}, x_{0:t−1}) The fourth equality follows from the conditional independence assumption and the Markovian structure, and the fifth equality follows from the Markovian structure of the state variable transition. 88 / 105

89 Therefore we only need to make the following small modifications to the filtering algorithm: First, for each particle in period t − 1, with its associated value of v_{t−1}, simulate from p(v_t | x_t, v_{t−1}, y_{t−1}, x_{t−1}) = p(v_t, x_t | v_{t−1}, y_{t−1}, x_{t−1}) / p(x_t | v_{t−1}, y_{t−1}, x_{t−1}). If v_t and x_t are conditionally independent given v_{t−1}, y_{t−1}, x_{t−1}, then we can directly simulate from p(v_t | v_{t−1}, y_{t−1}, x_{t−1}). Then reweight by p(y_t | x_t, v_t). The weights are not easy to compute because they involve either value function iteration or backward recursion. 89 / 105

90 In summary, the three components of the recursive particle filtering method: p(v_{0:t} | x_{1:t}, y_{1:t}) ∝ p(v_{0:t−1} | y_{1:t−1}, x_{0:t−1}) p(v_t | x_t, v_{t−1}, y_{t−1}, x_{t−1}) p(y_t | x_t, v_t) The first part, p(v_{0:t−1} | y_{1:t−1}, x_{0:t−1}), defines the recursion. The second part, p(v_t | x_t, v_{t−1}, y_{t−1}, x_{t−1}), defines the importance sampling density. The third part, p(y_t | x_t, v_t), defines the weights on the particles. 90 / 105
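The three components (recursion, importance sampling density, particle weights) map directly onto the three lines inside the loop of a bootstrap particle filter. The model below is a made-up stand-in — a latent AR(1) state v_t with binary observations y_t and no observed x_t — so the weight p(y_t | v_t) is a cheap logit rather than the value-function-based weight of the structural model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate a latent AR(1) state and binary observations (hypothetical model).
T, rho = 200, 0.8
v = np.zeros(T)
for t in range(1, T):
    v[t] = rho * v[t - 1] + rng.normal()
y = (rng.uniform(size=T) < 1.0 / (1.0 + np.exp(-v))).astype(int)

# Bootstrap particle filter: propagate from the transition density (the importance
# sampling density), weight by p(y_t | v_t), then resample.
R = 2000
particles = rng.normal(size=R)
loglik = 0.0
filtered_means = []
for t in range(T):
    particles = rho * particles + rng.normal(size=R)     # draw v_t | v_{t-1}
    p1 = 1.0 / (1.0 + np.exp(-particles))
    w = p1 if y[t] == 1 else 1.0 - p1                    # weights p(y_t | v_t)
    loglik += np.log(w.mean())                           # likelihood contribution
    w_bar = w / w.sum()
    filtered_means.append(np.sum(particles * w_bar))     # E(v_t | y_{1:t})
    particles = rng.choice(particles, size=R, p=w_bar)   # resample step

corr = float(np.corrcoef(filtered_means, v)[0, 1])
```

The accumulated `loglik` is the simulated likelihood of the parameters, which can then be maximized or fed into an MCMC chain as the earlier slides describe.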

91 Relation to the macro model of Jesús Fernández-Villaverde and Juan Rubio-Ramírez. Their paper: Estimating Macroeconomic Models: A Likelihood Approach, forthcoming, Review of Economic Studies. Basically very similar. They also have feedback from y_t to the latent state variables x_{t+1}, v_{t+1}: p(v_t, x_t | v_{0:t−1}, y_{1:t−1}, x_{0:t−1}) = p(v_t, x_t | v_{t−1}, y_{t−1}, x_{t−1}), unlike stochastic volatility models, where the transitions of v_t, x_t are autonomous. 91 / 105

92 They introduce this dependence by allowing for singularity in the measurement equation: Y_t = g(S_t, V_t; γ). The states in their transition equation are not necessarily all latent: S_t = f(S_{t−1}, W_t; γ). Feedback from Y_t to the latent states is allowed when Y_t is a subcomponent of S_t and is directly observed, such that that particular component of g(·) has no noise V_t. 92 / 105

93 In their Assumption 2, they assume that both Y and V are continuously distributed and that g(·) (f(·) as well) is invertible. Then the conditional density of Y_t (which basically is used to calculate the weights) can be determined from the density of V_t and the Jacobian of the transformation. They do mention, in one brief sentence, that this assumption can possibly be relaxed as long as the weights can be computed. Our discrete Y_t model falls into this extension where there is no invertibility. The weights can still be computed through the logit probability form and the value function iterations (or backward recursion). 93 / 105

94 Calculating value functions by backward induction, assuming a binary choice model. At time T: V_T(s) = log[exp(Π(s, 1)) + exp(Π(s, 0))]. Suppose V_{t+1}(s) is known; then at time t: V_t(s, 1) = Π(s, 1) + β E[V_{t+1}(s') | s, 1], V_t(s, 0) = Π(s, 0) + β E[V_{t+1}(s') | s, 0], V_t(s) = log[exp(V_t(s, 1)) + exp(V_t(s, 0))]. 94 / 105
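The backward recursion above is a few lines of code on a discrete state grid. The payoffs, the five-point grid, the deterministic transition, and the horizon below are all made up for the sketch; the only substantive ingredient is the log-sum-exp form of the integrated value under extreme value shocks:

```python
import numpy as np

# Finite-horizon binary-choice value functions by backward induction, with extreme
# value shocks so the integrated value is log(exp(v1) + exp(v0)).
beta = 0.95
states = np.arange(5)                        # s in {0, ..., 4} (hypothetical grid)

def payoff(s, d):
    # Pi(s, 1) and Pi(s, 0): made-up flow payoffs, increasing in the state.
    return 1.0 * s if d == 1 else 0.5 * s

def next_state(s, d):
    # Deterministic transition for simplicity: choosing d = 1 advances the state.
    return min(s + d, 4)

T = 10
# Terminal period: V_T(s) = log[exp(Pi(s,1)) + exp(Pi(s,0))].
V = np.log(np.exp([payoff(s, 1) for s in states]) + np.exp([payoff(s, 0) for s in states]))
# Recurse back from T-1 to 1.
for t in range(T - 1, 0, -1):
    v1 = np.array([payoff(s, 1) + beta * V[next_state(s, 1)] for s in states])
    v0 = np.array([payoff(s, 0) + beta * V[next_state(s, 0)] for s in states])
    V = np.log(np.exp(v1) + np.exp(v0))      # E max over extreme value shocks
```

With a stochastic transition, the indexing step `V[next_state(s, d)]` would become an expectation over s', but the recursion is otherwise unchanged.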

95 Complete information Only contains unobserved heterogeneity that is observed by all the players. Widely used to model the entry behavior of competing firms. Issues with existence and multiplicity of equilibria. Important previous work by Bresnahan and Reiss (1990, 1991) and Berry (1992) focuses on features of the data, such as the number of firms, that are unique across equilibria. Recent papers by Tamer (2002) and Ciliberto and Tamer (2003) make use of bounds. Another approach specifies an equilibrium selection mechanism: Bjorn and Vuong (1984), Berry (1992), Jia (2008). Bajari, Hong and Ryan consider estimating an equilibrium selection mechanism. 95 / 105

96 Complete information Allow for mixed strategies. Equilibrium selection is part of the model. Exploit computational tools in McKelvey and McLennan (1996) to find all equilibria. Reduce the computational burden in structural models using importance sampling, e.g. Ackerberg (2009). Nonparametric identification is difficult. 96 / 105

97 The model Simultaneous move game of complete information (a normal form game). There are i = 1, ..., N players, each with a finite set of actions A_i. Let A = ∏_i A_i. Utility is u_i : A → R, where R is the real line. Let π_i denote a mixed strategy over A_i. A Nash equilibrium is a set of mutual best responses. 97 / 105

98 Following Bresnahan and Reiss (1990, 1991), utility is: u_i(a) = f_i(x, a; θ_1) + ε_i(a). The mean utility f_i(x, a; θ_1) depends on a, the vector of actions, the covariates x, and the parameters θ_1. The ε_i(a) are preference shocks, ε_i(a) ~ g(ε | θ_2) iid. This is a standard random utility model, except that utility depends on the actions of others. E(u) is the set of Nash equilibria. λ(π; E(u), β) is the probability of equilibrium π ∈ E(u) given the parameters β; λ(π; E(u), β) corresponds to a finite vector of probabilities. 98 / 105
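For intuition on the object E(u), the pure-strategy Nash equilibria of a small complete information game can be enumerated by brute-force best-response checks. The two-player entry game below, with hypothetical monopoly profit 1 and duopoly profit −0.2, is the textbook case with two pure equilibria (mixed equilibria require tools like those in Gambit and are not searched here):

```python
from itertools import product

# Pure-strategy Nash equilibria of a two-player entry game by best-response checks.
def payoffs(a1, a2):
    # Entry (a=1) earns monopoly profit 1 alone, duopoly profit -0.2 together;
    # staying out (a=0) yields 0.  All numbers are hypothetical.
    u1 = 0.0 if a1 == 0 else (1.0 if a2 == 0 else -0.2)
    u2 = 0.0 if a2 == 0 else (1.0 if a1 == 0 else -0.2)
    return u1, u2

def is_nash(a1, a2):
    u1, u2 = payoffs(a1, a2)
    # No profitable unilateral deviation for either player.
    return all(payoffs(d, a2)[0] <= u1 for d in (0, 1)) and \
           all(payoffs(a1, d)[1] <= u2 for d in (0, 1))

equilibria = [(a1, a2) for a1, a2 in product((0, 1), (0, 1)) if is_nash(a1, a2)]
# -> the two asymmetric entry profiles (0, 1) and (1, 0)
```

Multiplicity is exactly what the selection mechanism λ is meant to resolve: the data alone cannot say which of the two asymmetric profiles is played without it.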

99 Example of λ. Theorists have suggested that an equilibrium may be more likely to be played if it: 1. satisfies a particular refinement concept (e.g. trembling hand perfection); 2. is in pure strategies; 3. is risk dominant. Create a dummy variable for whether a given equilibrium π ∈ E(u) satisfies criteria 1-3 above. Let x(π, u) denote the vector of covariates that we generate in this fashion. Then a straightforward way to model λ is: λ(π; E(u), β) = exp(β'x(π, u)) / Σ_{π' ∈ E(u)} exp(β'x(π', u)) 99 / 105
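The logit selection rule is a one-line computation once the equilibrium dummies are in hand. The covariate values and the β vector below are hypothetical, chosen only to show that equilibria satisfying more of the criteria receive higher selection probability:

```python
import numpy as np

# Logit equilibrium-selection probabilities lambda(pi; E(u), beta) from dummy
# covariates x(pi, u); the rows and beta are made up for illustration.
x_eq = np.array([
    [1.0, 1.0],   # equilibrium 1: pure strategy, risk dominant
    [1.0, 0.0],   # equilibrium 2: pure strategy, not risk dominant
    [0.0, 0.0],   # equilibrium 3: mixed strategy
])
beta = np.array([0.5, 1.0])

score = x_eq @ beta
lam = np.exp(score) / np.exp(score).sum()   # selection probabilities, sum to one
```

Because λ is a smooth function of β, the selection parameters can be estimated jointly with the payoff parameters by maximum likelihood or method of moments.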

100 Estimation Computing the set E(u), all of the equilibria of a normal form game, is a well understood problem. McKelvey and McLennan (1996) survey the available algorithms in detail; see also the software package Gambit. P(a | x, θ, β) is the probability of a given x, θ and β: P(a | x, θ, β) = ∫ { Σ_{π ∈ E(u(x, θ_1, ε))} λ(π; E(u(x, θ_1, ε)), β) ( ∏_{i=1}^N π(a_i) ) } g(ε | θ_2) dε. Computation of the above integral is facilitated by the importance sampling procedure (cf. Ackerberg (2003)). 100 / 105

101 Often g(ε | θ_2) is a simple parametric distribution (e.g. normal). For instance, suppose it is normal and let φ(· ; µ, σ) denote the normal density. Then the density h(u | θ, x) for the vNM utilities u is: h(u | θ, x) = ∏_i ∏_{a ∈ A} φ(u_i(a); f_i(x, a; θ_1) + µ, σ), where for all i and all a, ε_i(a) = u_i(a) − f_i(x, a; θ_1). Evaluating h(u | θ, x) is not difficult. Draw s = 1, ..., S vectors of vNM utilities, u^(s) = (u_1^(s), ..., u_N^(s)), from an importance density q(u). 101 / 105

102 We can then simulate P(a | x, θ, β) as follows: P̂(a | x, θ, β) = (1/S) Σ_{s=1}^S { Σ_{π ∈ E(u^(s))} λ(π; E(u^(s)), β) ( ∏_{i=1}^N π(a_i) ) } [h(u^(s) | θ, x) / q(u^(s))]. Precompute E(u^(s)) for a large number of randomly drawn games s = 1, ..., S. Evaluating P̂(a | x, θ, β) at new parameters does not require recomputing E(u^(s)) for s = 1, ..., S! Evaluating the simulation estimator P̂(a | x, θ, β) of P(a | x, θ, β) only requires reweighting the equilibria by the new λ and h(u^(s) | θ, x) / q(u^(s)). This is a feasible computation. 102 / 105
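A stylized sketch of this reweighting, emphatically not the actual Bajari-Hong-Ryan code: the drawn utilities, the two pretend equilibria per game, the logit selection in β, and the normal densities h and q are all hypothetical stand-ins. The structural point survives the simplification — the equilibrium objects are computed once, and new (θ, β) values only change the weights:

```python
import numpy as np

rng = np.random.default_rng(7)

def norm_pdf(z, mu, sd):
    return np.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

S = 1000
u_draws = rng.normal(0.0, 2.0, size=S)         # stand-in for drawn vNM utility vectors
# Pretend each simulated game has two (precomputed) equilibria whose probabilities of
# playing action a depend on the drawn utility:
prob_a = np.stack([1.0 / (1.0 + np.exp(-u_draws)), np.full(S, 0.5)], axis=1)

def simulated_prob(theta, beta):
    # Selection probabilities lambda over the two equilibria (toy logit in beta).
    lam = np.array([1.0 / (1.0 + np.exp(-beta)), 1.0 - 1.0 / (1.0 + np.exp(-beta))])
    # Importance weights h(u | theta) / q(u): only these change with theta.
    hq = norm_pdf(u_draws, theta, 1.0) / norm_pdf(u_draws, 0.0, 2.0)
    return np.mean((prob_a @ lam) * hq)

p1 = simulated_prob(0.0, 0.0)
p2 = simulated_prob(1.0, 2.0)
```

Each call is a matrix-vector product and a reweighting, so an optimizer can sweep the parameter space without ever solving a game a second time.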

103 Normally, the computational expense of structural estimation comes from recomputing the equilibria many times. This approach can save computational time by orders of magnitude. Given P(a | x, θ, β) we can simulate the likelihood function or simulate the moments. The asymptotics are standard. The simulated method of moments is unbiased in the simulation error and may require a smaller number of simulation draws to converge to the asymptotic distribution at the T^{1/2} rate. The model we consider is parametric. 103 / 105

104 Importance sampler In private information static or dynamic models, Ackerberg (2009) shows that the importance sampler works best with a full random coefficient specification: recomputation of the equilibrium or value function is typically needed for the non-random coefficients in each round of the iterations. The importance sampler for the complete information game in Bajari et al. (2009) does not require a random coefficient specification, although a dynamic version of the complete information game does. In addition, MCMC appears to do a better job than the importance sampler at exploring the parameter space. 104 / 105

105 Conclusion Models consistent with Nash equilibrium behavior have many empirical applications. Challenging topics include: nonparametric identification; parametric and semiparametric estimation; computational algorithms for value function iteration and equilibrium solution. An exciting area of research for both econometrics and empirical industrial organization. 105 / 105


Dynamic Decisions under Subjective Beliefs: A Structural Analysis Dynamic Decisions under Subjective Beliefs: A Structural Analysis Yonghong An Yingyao Hu December 4, 2016 (Comments are welcome. Please do not cite.) Abstract Rational expectations of agents on state transitions

More information

Dynamic stochastic game and macroeconomic equilibrium

Dynamic stochastic game and macroeconomic equilibrium Dynamic stochastic game and macroeconomic equilibrium Tianxiao Zheng SAIF 1. Introduction We have studied single agent problems. However, macro-economy consists of a large number of agents including individuals/households,

More information

Equilibrium Selection in Discrete Games of Complete Information: An Application to the Home Improvement Industry

Equilibrium Selection in Discrete Games of Complete Information: An Application to the Home Improvement Industry Equilibrium Selection in Discrete Games of Complete Information: An Application to the Home Improvement Industry Li Zhao Vanderbilt University Nov 10, 2014 Abstract The existence of multiple equilibria

More information

1 Introduction to structure of dynamic oligopoly models

1 Introduction to structure of dynamic oligopoly models Lecture notes: dynamic oligopoly 1 1 Introduction to structure of dynamic oligopoly models Consider a simple two-firm model, and assume that all the dynamics are deterministic. Let x 1t, x 2t, denote the

More information

Implementation of Bayesian Inference in Dynamic Discrete Choice Models

Implementation of Bayesian Inference in Dynamic Discrete Choice Models Implementation of Bayesian Inference in Dynamic Discrete Choice Models Andriy Norets anorets@princeton.edu Department of Economics, Princeton University August 30, 2009 Abstract This paper experimentally

More information

Approximating High-Dimensional Dynamic Models: Sieve Value Function Iteration

Approximating High-Dimensional Dynamic Models: Sieve Value Function Iteration Approximating High-Dimensional Dynamic Models: Sieve Value Function Iteration Peter Arcidiacono Patrick Bayer Federico Bugni Jon James Duke University Duke University Duke University Duke University January

More information

Sequential Monte Carlo Methods for Estimating Dynamic Microeconomic Models

Sequential Monte Carlo Methods for Estimating Dynamic Microeconomic Models Sequential Monte Carlo Methods for Estimating Dynamic Microeconomic Models JASON R. BLEVINS Department of Economics, Ohio State University February 22, 2014 Abstract. This paper develops estimators for

More information

A Computational Method for Multidimensional Continuous-choice. Dynamic Problems

A Computational Method for Multidimensional Continuous-choice. Dynamic Problems A Computational Method for Multidimensional Continuous-choice Dynamic Problems (Preliminary) Xiaolu Zhou School of Economics & Wangyannan Institution for Studies in Economics Xiamen University April 9,

More information

Estimation of Games with Ordered Actions: An Application to Chain-Store Entry

Estimation of Games with Ordered Actions: An Application to Chain-Store Entry Estimation of Games with Ordered Actions: An Application to Chain-Store Entry Andres Aradillas-Lopez and Amit Gandhi Pennsylvania State University and University of Wisconsin-Madison December 3, 2015 Abstract

More information

Identification of Discrete Choice Models for Bundles and Binary Games

Identification of Discrete Choice Models for Bundles and Binary Games Identification of Discrete Choice Models for Bundles and Binary Games Jeremy T. Fox University of Michigan and NBER Natalia Lazzati University of Michigan February 2013 Abstract We study nonparametric

More information

Lecture 14 More on structural estimation

Lecture 14 More on structural estimation Lecture 14 More on structural estimation Economics 8379 George Washington University Instructor: Prof. Ben Williams traditional MLE and GMM MLE requires a full specification of a model for the distribution

More information

SEQUENTIAL ESTIMATION OF DYNAMIC DISCRETE GAMES

SEQUENTIAL ESTIMATION OF DYNAMIC DISCRETE GAMES SEQUENTIAL ESTIMATION OF DYNAMIC DISCRETE GAMES Víctor Aguirregabiria and Pedro Mira CEMFI Working Paper No. 0413 September 2004 CEMFI Casado del Alisal 5; 28014 Madrid Tel. (34) 914 290 551. Fax (34)

More information

CCP Estimation of Dynamic Discrete Choice Models with Unobserved Heterogeneity

CCP Estimation of Dynamic Discrete Choice Models with Unobserved Heterogeneity CCP Estimation of Dynamic Discrete Choice Models with Unobserved Heterogeneity Peter Arcidiacono Duke University Robert A. Miller Carnegie Mellon University August 13, 2010 Abstract We adapt the Expectation-Maximization

More information

Dynamic Random Utility. Mira Frick (Yale) Ryota Iijima (Yale) Tomasz Strzalecki (Harvard)

Dynamic Random Utility. Mira Frick (Yale) Ryota Iijima (Yale) Tomasz Strzalecki (Harvard) Dynamic Random Utility Mira Frick (Yale) Ryota Iijima (Yale) Tomasz Strzalecki (Harvard) Static Random Utility Agent is maximizing utility subject to private information randomness ( utility shocks ) at

More information

Ergodicity and Non-Ergodicity in Economics

Ergodicity and Non-Ergodicity in Economics Abstract An stochastic system is called ergodic if it tends in probability to a limiting form that is independent of the initial conditions. Breakdown of ergodicity gives rise to path dependence. We illustrate

More information

Approximating High-Dimensional Dynamic Models: Sieve Value Function Iteration

Approximating High-Dimensional Dynamic Models: Sieve Value Function Iteration Approximating High-Dimensional Dynamic Models: Sieve Value Function Iteration Peter Arcidiacono Department of Economics Duke University psarcidi@econ.duke.edu Federico A. Bugni Department of Economics

More information

A Simple Estimator for Dynamic Models with Serially Correlated Unobservables

A Simple Estimator for Dynamic Models with Serially Correlated Unobservables A Simple Estimator for Dynamic Models with Serially Correlated Unobservables Yingyao Hu Johns Hopkins Matthew Shum Caltech Wei Tan Compass-Lexecon Ruli Xiao Indiana University September 27, 2015 Abstract

More information

Essays in Industrial Organization and. Econometrics

Essays in Industrial Organization and. Econometrics Essays in Industrial Organization and Econometrics by Jason R. Blevins Department of Economics Duke University Date: Approved: Han Hong, Co-Advisor Shakeeb Khan, Co-Advisor Paul B. Ellickson Andrew Sweeting

More information

Syllabus. By Joan Llull. Microeconometrics. IDEA PhD Program. Fall Chapter 1: Introduction and a Brief Review of Relevant Tools

Syllabus. By Joan Llull. Microeconometrics. IDEA PhD Program. Fall Chapter 1: Introduction and a Brief Review of Relevant Tools Syllabus By Joan Llull Microeconometrics. IDEA PhD Program. Fall 2017 Chapter 1: Introduction and a Brief Review of Relevant Tools I. Overview II. Maximum Likelihood A. The Likelihood Principle B. The

More information

Robust Predictions in Games with Incomplete Information

Robust Predictions in Games with Incomplete Information Robust Predictions in Games with Incomplete Information joint with Stephen Morris (Princeton University) November 2010 Payoff Environment in games with incomplete information, the agents are uncertain

More information

April 20th, Advanced Topics in Machine Learning California Institute of Technology. Markov Chain Monte Carlo for Machine Learning

April 20th, Advanced Topics in Machine Learning California Institute of Technology. Markov Chain Monte Carlo for Machine Learning for for Advanced Topics in California Institute of Technology April 20th, 2017 1 / 50 Table of Contents for 1 2 3 4 2 / 50 History of methods for Enrico Fermi used to calculate incredibly accurate predictions

More information

Information Structure and Statistical Information in Discrete Response Models

Information Structure and Statistical Information in Discrete Response Models Information Structure and Statistical Information in Discrete Response Models Shakeeb Khan a and Denis Nekipelov b This version: January 2012 Abstract Strategic interaction parameters characterize the

More information

Lecture Notes: Estimation of dynamic discrete choice models

Lecture Notes: Estimation of dynamic discrete choice models Lecture Notes: Estimation of dynamic discrete choice models Jean-François Houde Cornell University November 7, 2016 These lectures notes incorporate material from Victor Agguirregabiria s graduate IO slides

More information

NBER WORKING PAPER SERIES IDENTIFICATION OF POTENTIAL GAMES AND DEMAND MODELS FOR BUNDLES. Jeremy T. Fox Natalia Lazzati

NBER WORKING PAPER SERIES IDENTIFICATION OF POTENTIAL GAMES AND DEMAND MODELS FOR BUNDLES. Jeremy T. Fox Natalia Lazzati NBER WORKING PAPER SERIES IDENTIFICATION OF POTENTIAL GAMES AND DEMAND MODELS FOR BUNDLES Jeremy T. Fox Natalia Lazzati Working Paper 18155 http://www.nber.org/papers/w18155 NATIONAL BUREAU OF ECONOMIC

More information

Solving Dynamic Games with Newton s Method

Solving Dynamic Games with Newton s Method Michael Ferris 1 Kenneth L. Judd 2 Karl Schmedders 3 1 Department of Computer Science, University of Wisconsin at Madison 2 Hoover Institution, Stanford University 3 Institute for Operations Research,

More information

TECHNICAL WORKING PAPER SERIES SEMIPARAMETRIC ESTIMATION OF A DYNAMIC GAME OF INCOMPLETE INFORMATION. Patrick Bajari Han Hong

TECHNICAL WORKING PAPER SERIES SEMIPARAMETRIC ESTIMATION OF A DYNAMIC GAME OF INCOMPLETE INFORMATION. Patrick Bajari Han Hong ECHNICAL WORKING PAPER SERIES SEMIPARAMERIC ESIMAION OF A DYNAMIC GAME OF INCOMPLEE INFORMAION Patrick Bajari Han Hong echnical Working Paper 320 http://www.nber.org/papers/0320 NAIONAL BUREAU OF ECONOMIC

More information

Estimation of Dynamic Discrete Choice Models in Continuous Time

Estimation of Dynamic Discrete Choice Models in Continuous Time Estimation of Dynamic Discrete Choice Models in Continuous Time Peter Arcidiacono Patrick Bayer Jason Blevins Paul Ellickson Duke University Duke University Duke University University of Rochester April

More information

Dynamic and Stochastic Model of Industry. Class: Work through Pakes, Ostrovsky, and Berry

Dynamic and Stochastic Model of Industry. Class: Work through Pakes, Ostrovsky, and Berry Dynamic and Stochastic Model of Industry Class: Work through Pakes, Ostrovsky, and Berry Reading: Recommend working through Doraszelski and Pakes handbook chapter Recall model from last class Deterministic

More information

INFERENCE APPROACHES FOR INSTRUMENTAL VARIABLE QUANTILE REGRESSION. 1. Introduction

INFERENCE APPROACHES FOR INSTRUMENTAL VARIABLE QUANTILE REGRESSION. 1. Introduction INFERENCE APPROACHES FOR INSTRUMENTAL VARIABLE QUANTILE REGRESSION VICTOR CHERNOZHUKOV CHRISTIAN HANSEN MICHAEL JANSSON Abstract. We consider asymptotic and finite-sample confidence bounds in instrumental

More information

Dynamic and Stochastic Model of Industry. Class: Work through Pakes, Ostrovsky, and Berry

Dynamic and Stochastic Model of Industry. Class: Work through Pakes, Ostrovsky, and Berry Dynamic and Stochastic Model of Industry Class: Work through Pakes, Ostrovsky, and Berry Reading: Recommend working through Doraszelski and Pakes handbook chapter Recall model from last class Deterministic

More information

ECO Class 6 Nonparametric Econometrics

ECO Class 6 Nonparametric Econometrics ECO 523 - Class 6 Nonparametric Econometrics Carolina Caetano Contents 1 Nonparametric instrumental variable regression 1 2 Nonparametric Estimation of Average Treatment Effects 3 2.1 Asymptotic results................................

More information

Additional Material for Estimating the Technology of Cognitive and Noncognitive Skill Formation (Cuttings from the Web Appendix)

Additional Material for Estimating the Technology of Cognitive and Noncognitive Skill Formation (Cuttings from the Web Appendix) Additional Material for Estimating the Technology of Cognitive and Noncognitive Skill Formation (Cuttings from the Web Appendix Flavio Cunha The University of Pennsylvania James Heckman The University

More information

CCP Estimation. Robert A. Miller. March Dynamic Discrete Choice. Miller (Dynamic Discrete Choice) cemmap 6 March / 27

CCP Estimation. Robert A. Miller. March Dynamic Discrete Choice. Miller (Dynamic Discrete Choice) cemmap 6 March / 27 CCP Estimation Robert A. Miller Dynamic Discrete Choice March 2018 Miller Dynamic Discrete Choice) cemmap 6 March 2018 1 / 27 Criteria for Evaluating Estimators General principles to apply when assessing

More information

Bayesian inference in dynamic discrete choice models

Bayesian inference in dynamic discrete choice models University of Iowa Iowa Research Online Theses and Dissertations 2007 Bayesian inference in dynamic discrete choice models Andriy Norets University of Iowa Copyright 2007 Andriy Norets This dissertation

More information

Semiparametric Estimation of Signaling Games with Equilibrium Refinement

Semiparametric Estimation of Signaling Games with Equilibrium Refinement Semiparametric Estimation of Signaling Games with Equilibrium Refinement Kyoo il Kim Michigan State University September 23, 2018 Abstract This paper studies an econometric modeling of a signaling game

More information

Lecture 3: Computing Markov Perfect Equilibria

Lecture 3: Computing Markov Perfect Equilibria Lecture 3: Computing Markov Perfect Equilibria April 22, 2015 1 / 19 Numerical solution: Introduction The Ericson-Pakes framework can generate rich patterns of industry dynamics and firm heterogeneity.

More information

IDENTIFICATION AND ESTIMATION OF DYNAMIC GAMES WITH CONTINUOUS STATES AND CONTROLS

IDENTIFICATION AND ESTIMATION OF DYNAMIC GAMES WITH CONTINUOUS STATES AND CONTROLS IDENTIFICATION AND ESTIMATION OF DYNAMIC GAMES WITH CONTINUOUS STATES AND CONTROLS PAUL SCHRIMPF Abstract. This paper analyzes dynamic games with continuous states and controls. There are two main contributions.

More information

Dynamic decisions under subjective expectations: a structural analysis

Dynamic decisions under subjective expectations: a structural analysis Dynamic decisions under subjective expectations: a structural analysis Yonghong An Yingyao Hu Ruli Xiao The Institute for Fiscal Studies Department of Economics, UCL cemmap working paper CWP11/18 Dynamic

More information

Incentives Work: Getting Teachers to Come to School. Esther Duflo, Rema Hanna, and Stephen Ryan. Web Appendix

Incentives Work: Getting Teachers to Come to School. Esther Duflo, Rema Hanna, and Stephen Ryan. Web Appendix Incentives Work: Getting Teachers to Come to School Esther Duflo, Rema Hanna, and Stephen Ryan Web Appendix Online Appendix: Estimation of model with AR(1) errors: Not for Publication To estimate a model

More information

Lecture Notes: Entry and Product Positioning

Lecture Notes: Entry and Product Positioning Lecture Notes: Entry and Product Positioning Jean-François Houde Cornell University & NBER November 21, 2016 1 Motivation: Models of entry and product positioning Static demand and supply analyses treat

More information

A Practitioner s Guide to Bayesian Estimation of Discrete Choice Dynamic Programming Models

A Practitioner s Guide to Bayesian Estimation of Discrete Choice Dynamic Programming Models A Practitioner s Guide to Bayesian Estimation of Discrete Choice Dynamic Programming Models Andrew Ching Rotman School of Management University of Toronto Susumu Imai Department of Economics Queen s University

More information

Correlated Equilibrium in Games with Incomplete Information

Correlated Equilibrium in Games with Incomplete Information Correlated Equilibrium in Games with Incomplete Information Dirk Bergemann and Stephen Morris Econometric Society Summer Meeting June 2012 Robust Predictions Agenda game theoretic predictions are very

More information

Theory and Empirical Work on Imperfectly Competitive Markets. by Ariel Pakes. (Harvard University). The Fisher-Schultz Lecture

Theory and Empirical Work on Imperfectly Competitive Markets. by Ariel Pakes. (Harvard University). The Fisher-Schultz Lecture Theory and Empirical Work on Imperfectly Competitive Markets. by Ariel Pakes (Harvard University). The Fisher-Schultz Lecture World Congress of the Econometric Society London, August 2005. 1 Structure

More information

Nonparametric Instrumental Variables Identification and Estimation of Nonseparable Panel Models

Nonparametric Instrumental Variables Identification and Estimation of Nonseparable Panel Models Nonparametric Instrumental Variables Identification and Estimation of Nonseparable Panel Models Bradley Setzler December 8, 2016 Abstract This paper considers identification and estimation of ceteris paribus

More information

Sequential Monte Carlo Methods for Estimating Dynamic Microeconomic Models

Sequential Monte Carlo Methods for Estimating Dynamic Microeconomic Models Sequential Monte Carlo Methods for Estimating Dynamic Microeconomic Models JASON R. BLEVINS Department of Economics, Ohio State University February 19, 2015 Forthcoming at the Journal of Applied Econometrics

More information

Value Function Iteration

Value Function Iteration Value Function Iteration (Lectures on Solution Methods for Economists II) Jesús Fernández-Villaverde 1 and Pablo Guerrón 2 February 26, 2018 1 University of Pennsylvania 2 Boston College Theoretical Background

More information

Semi and Nonparametric Models in Econometrics

Semi and Nonparametric Models in Econometrics Semi and Nonparametric Models in Econometrics Part 4: partial identification Xavier d Haultfoeuille CREST-INSEE Outline Introduction First examples: missing data Second example: incomplete models Inference

More information

Hidden Rust Models. Benjamin Connault. Princeton University, Department of Economics. Job Market Paper. November Abstract

Hidden Rust Models. Benjamin Connault. Princeton University, Department of Economics. Job Market Paper. November Abstract Hidden Rust Models Benjamin Connault Princeton University, Department of Economics Job Market Paper November 2014 Abstract This paper shows that all-discrete dynamic choice models with hidden state provide

More information

A Course in Applied Econometrics Lecture 14: Control Functions and Related Methods. Jeff Wooldridge IRP Lectures, UW Madison, August 2008

A Course in Applied Econometrics Lecture 14: Control Functions and Related Methods. Jeff Wooldridge IRP Lectures, UW Madison, August 2008 A Course in Applied Econometrics Lecture 14: Control Functions and Related Methods Jeff Wooldridge IRP Lectures, UW Madison, August 2008 1. Linear-in-Parameters Models: IV versus Control Functions 2. Correlated

More information

1 Outline. 1. Motivation. 2. SUR model. 3. Simultaneous equations. 4. Estimation

1 Outline. 1. Motivation. 2. SUR model. 3. Simultaneous equations. 4. Estimation 1 Outline. 1. Motivation 2. SUR model 3. Simultaneous equations 4. Estimation 2 Motivation. In this chapter, we will study simultaneous systems of econometric equations. Systems of simultaneous equations

More information

Estimating Macroeconomic Models: A Likelihood Approach

Estimating Macroeconomic Models: A Likelihood Approach Estimating Macroeconomic Models: A Likelihood Approach Jesús Fernández-Villaverde University of Pennsylvania, NBER, and CEPR Juan Rubio-Ramírez Federal Reserve Bank of Atlanta Estimating Dynamic Macroeconomic

More information

Bayesian Inference in GLMs. Frequentists typically base inferences on MLEs, asymptotic confidence

Bayesian Inference in GLMs. Frequentists typically base inferences on MLEs, asymptotic confidence Bayesian Inference in GLMs Frequentists typically base inferences on MLEs, asymptotic confidence limits, and log-likelihood ratio tests Bayesians base inferences on the posterior distribution of the unknowns

More information

Bayesian Methods for Machine Learning

Bayesian Methods for Machine Learning Bayesian Methods for Machine Learning CS 584: Big Data Analytics Material adapted from Radford Neal s tutorial (http://ftp.cs.utoronto.ca/pub/radford/bayes-tut.pdf), Zoubin Ghahramni (http://hunch.net/~coms-4771/zoubin_ghahramani_bayesian_learning.pdf),

More information

1. Introduction. 2. A Simple Model

1. Introduction. 2. A Simple Model . Introduction In the last years, evolutionary-game theory has provided robust solutions to the problem of selection among multiple Nash equilibria in symmetric coordination games (Samuelson, 997). More

More information