Entry for Daniel McFadden in the New Palgrave Dictionary of Economics


1. Introduction

Daniel L. McFadden, the E. Morris Cox Professor of Economics at the University of California at Berkeley, was the 2000 co-recipient of the Nobel Prize in Economics, awarded "for his development of theory and methods of analyzing discrete choice."[1] McFadden was born in North Carolina, USA in 1937 and received a B.S. in Physics from the University of Minnesota (with highest honors) in 1956, and a Ph.D. in Economics from Minnesota. His academic career began as a postdoctoral fellow at the University of Pittsburgh. In 1963 he was appointed assistant professor of economics at the University of California at Berkeley, where he was subsequently tenured. He has also held tenured appointments at Yale (as Irving Fisher Research Professor in 1977) and at the Massachusetts Institute of Technology (from 1978 to 1991). In 1990 he was awarded the E. Morris Cox chair at the University of California at Berkeley, where he has also served as Department Chair and as Director of the Econometrics Laboratory.

2. Research Contributions

McFadden is best known for his fundamental contributions to the theory and econometric methods for analyzing discrete choice. Building on a highly abstract, axiomatic literature on probabilistic choice theory due to Thurstone (1927), Block and Marschak (1960), Luce (1959), and others in a literature that originated in mathematical psychology, McFadden developed the econometric methodology for estimating the utility functions underlying probabilistic choice theory. His primary contribution was to provide the econometric tools that permitted widespread practical empirical application of discrete choice models, in economics and other disciplines.
According to his autobiography:[2] "In 1964, I was working with a graduate student, Phoebe Cottingham, who had data on freeway routing decisions of the California Department of Transportation, and was looking for a way to analyze these data to study institutional decision-making behavior. I worked out for her an econometric model based on an axiomatic theory of choice behavior developed by the psychologist Duncan Luce. Drawing upon the work of Thurstone and Marschak, I was able to show how this model linked to the economic theory of choice behavior."

[1] The prize was split with James J. Heckman, awarded "for his development of theory and methods for analyzing selective samples." Summary quotes are from the Nobel prize summary at nobelprize.org.
[2] From the Nobel prize web site, nobelprize.org.

"These developments, now called the multinomial logit model and the random utility model for choice behavior, have turned out to be widely useful in economics and other social sciences. They are used, for example, to study travel modes, choice of occupation, brand of automobile purchase, and decisions on marriage and number of children."

This understates the huge impact that the discrete choice literature has had on the social sciences, and is characteristic of McFadden's modesty. Thousands of papers applying his techniques have been published since his path-breaking papers, "Conditional Logit Analysis of Qualitative Choice Behavior" (1973) and "The Revealed Preferences of a Government Bureaucracy: Empirical Evidence" (1976). In December 2005, a search for the term "discrete choice" using the Google search engine yielded 10,200,000 entries, and a search on the Google Scholar search engine (which limits the search to academic articles) returned 759,000 items.

Besides the discrete choice literature itself, McFadden's work has spawned a number of related literatures in econometrics, theory, and industrial organization that are among the most active and productive parts of the economics literature today. This includes work in game theory and industrial organization (e.g. the work on discrete choice and product differentiation of Anderson, de Palma and Thisse (1992), the estimation of discrete games of incomplete information of Bajari, Hong, Krainer and Nekipelov (2005), and discrete choice modeling in the empirical industrial organization literature, Berry, Levinsohn and Pakes (1995) and Goldberg (1995)), the econometric literature on semiparametric estimation of discrete choice models (Manski (1985), McFadden and Train (2000)), the literature on discrete/continuous choice models and its connection to durable goods and energy demand modeling (Dagsvik (1994), Dubin and McFadden (1984), Hanemann (1984)), the econometric literature on choice-based and stratified sampling (Cosslett (1981), Manski and McFadden (1981)), the econometric literature on simulation estimation (Lerman and Manski (1981), McFadden (1994), Hajivassiliou and Ruud (1994), Pakes and Pollard (1989)), and the work on structural estimation of dynamic discrete choice models and extensions thereof (Dagsvik (1983), Eckstein and Wolpin (1989), Heckman (1981), Rust (1994)). These are only some of the fields that have been hugely influenced by McFadden's contributions to discrete choice and econometrics: given space constraints, I have not attempted to survey other fields that have benefited from his contributions (e.g. production economics, McFadden (1978)).

In order to give the reader an appreciation for the elegance and generality of McFadden's contributions, I will provide a brief synopsis of the theory and econometrics of discrete choice, following the treatment in McFadden's (1981) paper "Econometric Models of Probabilistic Choice." The underlying theory is superficially rather simple: an agent chooses a single alternative d from

a finite set D(x) of mutually exclusive alternatives to maximize a well-defined utility function. Agents' choices as well as the choice set D(x) may vary across agents depending on the values of a vector x that can reflect state-dependent or agent-dependent factors or choice constraints, similar to the way a consumer's budget set depends on quantities such as income and prices in standard (continuous) consumer theory. The vector x can include characteristics (or "attributes") of the alternatives in the choice set D(x). The vector x can also include agent characteristics such as income, age, sex, education and so forth. There is great flexibility in specifying how alternative-specific and agent-specific characteristics affect decisions, and McFadden was one of the first to appreciate the huge potential probabilistic choice theory offered for empirical work.

In order to appreciate McFadden's contribution, it is useful to briefly summarize the elements of the theory on which he built, most of which originated in the literature on mathematical psychology. Fundamental to this literature is the concept of a choice probability P(d | x, D(x)), which represents the probability that an agent chooses a particular element d ∈ D(x). Psychologists emphasized the seemingly random nature of subjects' decisions in experiments. The earliest cited work on probabilistic choice is Thurstone (1927), who described subjects' choices as a "discriminal process" and used the normal distribution to model the impact of random factors affecting individual decisions. Thurstone's formula for the choice probability when there are only two possible alternatives is now known in economics as the binomial probit model. The mathematical psychology literature also gave considerable attention to how the choice probability depends on the choice set D(x), since a major focus of the theory was to explain the behavior of subjects in laboratory experiments where choice sets can be controlled.
The initial work was relatively abstract and axiomatic, and attention focused on determining what restrictions, if any, are placed on choice probability functions (over and above satisfying the ordinary laws of probability) by the random utility maximization (RUM) model. More precisely, what are the necessary and sufficient conditions for a choice probability P(d | x, D(x)) to be consistent with random utility maximization, where P is given by

P(d | x, D(x)) = Pr{ ũ_d ≥ ũ_d', for all d' ∈ D(x) },    (2.1)

and {ũ_d | d ∈ D(x)} is a collection of random variables representing the random utility values of the alternatives in D(x)? This is the discrete choice analog of the integrability problem in consumer theory, i.e. what are the necessary and sufficient conditions for a system of demand equations to be derivable from some underlying utility function?
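The representation (2.1) can be checked directly by simulation. The sketch below (illustrative code, not from the source) draws two random utilities with independent standard normal errors and verifies that the frequency with which the first alternative wins matches Thurstone's binomial probit formula Φ((u_1 − u_2)/√2):

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rng = np.random.default_rng(0)
u = np.array([1.0, 0.5])                  # systematic utilities of the two alternatives
eps = rng.standard_normal((200_000, 2))   # iid N(0,1) unobserved utility components

# equation (2.1): alternative 1 is chosen iff its random utility is the maximum
p_simulated = (u[0] + eps[:, 0] >= u[1] + eps[:, 1]).mean()

# Thurstone's binomial probit: the difference of two iid N(0,1)'s has sd sqrt(2)
p_probit = norm_cdf((u[0] - u[1]) / sqrt(2.0))
```

With more alternatives the same simulation works, but the analytic probability becomes the high-dimensional integral discussed later in this entry.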

The (1959) book Individual Choice Behavior by Duncan Luce introduced the axiom of "independence from irrelevant alternatives" (IIA). He showed that if this axiom holds, choice probabilities must have a multinomial logit (MNL) representation. That is, there exist functions {u(x, d) | d ∈ D(x)} such that

P(d | x, D(x)) = exp{u(x, d)} / Σ_{d' ∈ D(x)} exp{u(x, d')}.    (2.2)

The IIA axiom states that the odds of choosing alternative d over alternative d' are not changed if the choice set is enlarged. That is, if d, d' ∈ D(x) ⊆ E(x), then

P(d | x, D(x)) / P(d' | x, D(x)) = P(d | x, E(x)) / P(d' | x, E(x)).    (2.3)

The IIA axiom is a strong restriction that may not always be empirically plausible, for reasons noted in Debreu's (1960) review of Luce's book. McFadden introduced an example he called the "red bus/blue bus problem" that illustrates potential problems with using the MNL to forecast how agents will respond to changes in their choice sets.[3] Block and Marschak (1960) did not impose the IIA axiom, and derived a general necessary condition for a choice probability to be consistent with random utility maximization, i.e. conditions for {P(d | x, D(x)) | d ∈ D(x)} to have the representation in equation (2.1) for some collection of random variables {ũ_d | d ∈ D(x)}. Falmagne (1978) showed that the Block-Marschak condition is also a sufficient condition for the random utility representation to hold.[4]

[3] Consider a commuter who initially has only two alternatives for commuting to work: walking (d = w) and taking the bus (d = b), so D(x) = {w, b}. Suppose that the individual is indifferent between walking and taking the bus; then u(w, x) = u(b, x), and we see from the logit formula that the IIA axiom implies that P(w | x) = P(b | x) = 1/2. Now suppose we introduce a third, "irrelevant" alternative: a red bus that is in every manner identical to the existing bus alternative, the blue bus. Thus imagine that there is always both a red and a blue bus waiting at the bus stop, and the commuter can always walk as well.
If we denote this new third alternative d = r, we have u(x, r) = u(x, b) = u(x, w), i.e. the commuter is indifferent between taking the red bus or the blue bus, and also continues to be indifferent between walking and taking the bus. The IIA axiom then predicts that the choice probabilities in this situation will be P(r | x) = P(b | x) = P(w | x) = 1/3. However, Debreu's argument is that this is not a plausible prediction of the impact of the new alternative: the existence of the new red bus alternative should not affect the probability of walking, so we should continue to have P(w | x) = 1/2 when the new, irrelevant red bus alternative is introduced, and since the commuter is indifferent between taking a red or a blue bus, we should have P(b | x) = P(r | x) = 1/4. Thus, Debreu argued that Luce's IIA axiom implies an intuitively implausible prediction of the impact of introducing a new alternative into an agent's choice set, at least in situations where the new alternative is essentially identical to one of the existing alternatives.

[4] More precisely, Falmagne's theorem states that a system of choice probabilities can be derived from some random utility model if and only if the Block-Marschak polynomials are nonnegative. This is the analog of the Slutsky conditions in standard consumer theory, i.e. a demand system x(p, y) can be derived from a utility function if and only if it is homogeneous of degree 0 in (p, y), satisfies y = p'x(p, y), and the Slutsky matrix corresponding to x(p, y) is symmetric and negative semidefinite. I refer the reader to Block and Marschak (1960) or Falmagne (1978) for the definition of the Block-Marschak polynomials.
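A short numerical sketch (illustrative code, not from the source) makes Debreu's point concrete: the logit formula (2.2) with three equal utilities forces probability 1/3 on every alternative, and the odds in (2.3) are unchanged when the choice set is enlarged:

```python
import numpy as np

def mnl(u):
    # multinomial logit choice probabilities, equation (2.2)
    e = np.exp(u - np.max(u))   # subtract the max for numerical stability
    return e / e.sum()

# walk vs. (blue) bus with equal utilities: probability 1/2 each
p2 = mnl(np.array([0.0, 0.0]))

# add an identical "red bus": the MNL forces 1/3 on each alternative,
# whereas Debreu's intuition calls for (1/2, 1/4, 1/4)
p3 = mnl(np.array([0.0, 0.0, 0.0]))

# IIA, equation (2.3): the odds of walk vs. blue bus are unchanged
odds_before = p2[0] / p2[1]
odds_after = p3[0] / p3[1]
```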

McFadden's contribution to this literature was to recognize how to operationalize the random utility interpretation in an empirically tractable way. In particular, he derived the converse of Luce's representation theorem; that is, he discovered a random utility interpretation of the MNL model. His other fundamental contribution was to solve an analog of the revealed preference problem: i.e., using data on the actual choices and states of a sample of agents {(d_i, x_i)}_{i=1}^N, he showed how it is possible to reconstruct their underlying random utility function. Further, he introduced a new class of multivariate distributions, the generalized extreme value (GEV) family, derived tractable formulas for the implied choice probabilities, including the nested multinomial logit model, and showed that these choice probabilities do not satisfy the IIA axiom and thus relax some of the empirically implausible restrictions implied by IIA.

McFadden studied a more general specification of the random utility model where an agent's utility function is written as U(x, z, d, θ), depending on variables x that the econometrician can observe as well as variables z that the econometrician cannot observe. In addition, the utility is assumed to depend on a vector of parameters θ that is known by the agent but not by the econometrician. Under these assumptions, the solution to the revealed preference problem is equivalent to finding an estimator for θ.[5] McFadden suggested the method of maximum likelihood using the likelihood function L(θ) given by

L(θ) = Π_{i=1}^N P(d_i | x_i, D(x_i), θ),    (2.4)

under the assumption that the observations (d_i, x_i) are independently distributed across different agents i. McFadden showed that under appropriate regularity conditions, the maximum likelihood estimator θ̂ (the value of θ that maximizes L(θ)) is consistent and asymptotically normal, and thus provides a means for making inferences about agents' underlying preferences.
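The estimator can be sketched in a few lines of code. The example below is a hypothetical setup (linear-in-parameters utilities u(x, d, θ) = θ'x_d with logit choice probabilities, which are introduced later in this entry): it simulates choices at a known θ and then maximizes the log of the likelihood (2.4) by plain gradient ascent on the conditional logit score:

```python
import numpy as np

rng = np.random.default_rng(1)
N, J, K = 5000, 3, 2
theta_true = np.array([1.0, -0.5])
X = rng.standard_normal((N, J, K))      # observed attributes of each alternative

# simulate RUM choices: adding Gumbel noise to u is equivalent to drawing from the MNL
u = X @ theta_true
d = np.argmax(u + rng.gumbel(size=(N, J)), axis=1)

def probs(theta):
    # conditional logit choice probabilities for every observation
    v = X @ theta
    v -= v.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(v)
    return e / e.sum(axis=1, keepdims=True)

def loglik(theta):
    return np.log(probs(theta)[np.arange(N), d]).sum()

def score(theta):
    # score of the conditional logit likelihood: sum_i (x_{i,d_i} - E_theta[x_{i,d}])
    P = probs(theta)
    return (X[np.arange(N), d] - np.einsum('ij,ijk->ik', P, X)).sum(axis=0)

theta_hat = np.zeros(K)
for _ in range(200):                    # gradient ascent on the concave log-likelihood
    theta_hat += score(theta_hat) / N
```

With N = 5000 observations, θ̂ lands close to the θ used to generate the data, illustrating the consistency result described above.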
However, the maximum likelihood approach only became feasible once it was possible to derive computationally tractable formulas for the choice probabilities implied by various random utility models. This was perhaps McFadden's most important contribution to the discrete choice literature. Assume that the agent's utility function has the following additively separable representation:

U(x, z, d, θ) = u(x, d, θ) + v(z, d).    (2.5)

[5] The conceptual distinction between z and θ, both of which are unobserved by the econometrician, is that θ is assumed to be common across agents whereas the vector z can differ from agent to agent. Thus, it is feasible to consider the problem of estimating θ by pooling data on the choices made by different agents with the same θ but different idiosyncratic values of z.

Define ɛ(d) ≡ v(z, d). It follows that an assumption on the distribution of the random vector z implies a distribution for the random vector ɛ ≡ {ɛ(d) | d ∈ D(x)}. McFadden's approach was to make assumptions directly about the distribution of ɛ, rather than making assumptions about the distribution of z and deriving the implied distribution of ɛ. Standard assumptions for the distribution of ɛ include the multivariate normal, which yields the multinomial probit variant of the discrete choice model. Unfortunately, in problems with more than two alternatives (the case that Thurstone studied), the multinomial probit model becomes intractable in higher-dimensional problems. The reason is that in order to derive the choice probabilities, one must do numerical integrations of dimension equal to |D(x)|, the number of elements in the choice set. In general this multivariate integration is computationally infeasible when |D(x)| is larger than 5 or 6, using standard quadrature methods on modern computers.

McFadden introduced an alternative assumption for the distribution of ɛ, namely the multivariate extreme value distribution given by

F(z | x) = Pr{ɛ_d ≤ z_d, d ∈ D(x)} = Π_{d ∈ D(x)} exp{ −exp{ −(z_d − µ_d)/σ }},    (2.6)

and showed (when the location parameters µ_d are normalized to 0) that the corresponding random utility model produces choice probabilities given by the multinomial logit formula

P(d | x, θ) = exp{u(x, d, θ)/σ} / Σ_{d' ∈ D(x)} exp{u(x, d', θ)/σ}.

The reason why this should be true is not at all evident at first sight. However, it turns out that a key to the tractability of the logit formula is an important property of the multivariate extreme value distribution, namely, that it is max-stable: if ɛ_1 and ɛ_2 are extreme value random variables, then max(ɛ_1, ɛ_2) is also an extreme value random variable.[6] Define the Social Surplus function

S({u(x, d, θ) | d ∈ D(x)} | x) = E{ max_{d ∈ D(x)} [u(x, d, θ) + ɛ(d)] }.    (2.7)

This is the expected maximum utility, where the expectation is taken over the random utility components ɛ, and it can be viewed as the analog of the indirect utility function in standard (continuous)

[6] Another way to say this is that the extreme value family is closed under maximization, which is analogous to the property of the class of stable distributions, which are closed under addition.

consumer theory.[7] It turns out that the partial derivative of S with respect to u(x, d, θ) is P(d | x, θ):

∂S({u(x, d, θ) | d ∈ D(x)} | x) / ∂u(x, d, θ)
  = ∂/∂u(x, d, θ) E{ max_{d' ∈ D(x)} [u(x, d', θ) + ɛ(d')] }
  = E{ ∂/∂u(x, d, θ) max_{d' ∈ D(x)} [u(x, d', θ) + ɛ(d')] }
  = Pr{ d = argmax_{d' ∈ D(x)} [u(x, d', θ) + ɛ(d')] }
  = P(d | x, θ).    (2.8)

The result in equation (2.8) is what McFadden (1981) called the Williams-Daly-Zachary theorem. It provides an explicit formula for choice probabilities derived from a random utility model, and can be regarded as the analog of Roy's identity in standard continuous choice consumer theory. The max-stability of the multivariate extreme value distribution results in a closed-form expression for the Social Surplus function. Normalizing the location parameters µ_d = 0 for the random terms ɛ(d), it is not difficult to show that

S({u(x, d, θ) | d ∈ D(x)} | x) = σγ + σ log Σ_{d ∈ D(x)} exp{u(x, d, θ)/σ},

where γ = lim_{n→∞} [ Σ_{k=1}^n 1/k − log(n) ] ≈ 0.5772 is Euler's constant.[8] Applying the Williams-Daly-Zachary theorem, we have

∂S({u(x, d, θ) | d ∈ D(x)} | x) / ∂u(x, d, θ) = exp{u(x, d, θ)/σ} / Σ_{d' ∈ D(x)} exp{u(x, d', θ)/σ}.    (2.9)

[7] The term "Social Surplus" is probably motivated by the interpretation of each ɛ as indexing a different type of consumer, so that the expected maximized utility can be interpreted as a social welfare function when the distribution F(ɛ | x) is reinterpreted as the distribution of types in the population.
[8] To derive this, note that if (ɛ_1, ɛ_2) are two independent random variables with distributions F_1(x) and F_2(x), respectively, then the distribution of max(ɛ_1, ɛ_2) is F_1(x)F_2(x). In the case where (ɛ_1, ɛ_2) are two independent extreme value random variables with common scale parameter σ and location parameters (µ_1, µ_2), then F_1(x)F_2(x) = exp{−exp{−(x − µ_1)/σ}} exp{−exp{−(x − µ_2)/σ}} = exp{−exp{−(x − µ)/σ}}, where µ = σ log[exp{µ_1/σ} + exp{µ_2/σ}].
Note that the mean of an extreme value distribution with location parameter µ and scale parameter σ is (µ + σγ), where γ is Euler's constant.
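Both the closed-form Social Surplus and the Williams-Daly-Zachary derivative property can be verified numerically. The sketch below (illustrative utility values, not from the source) compares the log-sum formula against a Monte Carlo estimate of E[max_d u_d + ɛ(d)] using Gumbel draws, and checks that a finite-difference derivative of S recovers the MNL probability in (2.9):

```python
import numpy as np

gamma = 0.5772156649015329       # Euler's constant
rng = np.random.default_rng(2)
u = np.array([1.0, 0.0, -0.5])   # systematic utilities (illustrative values)
sigma = 0.7

def surplus(u):
    # closed-form Social Surplus implied by max-stability: sigma*gamma + sigma*log-sum-exp
    return sigma * gamma + sigma * np.log(np.exp(u / sigma).sum())

S_closed = surplus(u)

# Monte Carlo estimate of E[max_d u_d + eps(d)] with iid Gumbel(0, sigma) errors
eps = rng.gumbel(loc=0.0, scale=sigma, size=(500_000, 3))
S_mc = (u + eps).max(axis=1).mean()

# Williams-Daly-Zachary: dS/du_d equals the MNL choice probability, equation (2.9)
P = np.exp(u / sigma) / np.exp(u / sigma).sum()
h = 1e-6
u_h = u.copy(); u_h[0] += h
dS_du0 = (surplus(u_h) - S_closed) / h
```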

This is McFadden's key result: the MNL choice probability is implied by a random utility model in which the random utilities have extreme value distributions. It leads to the insight that the IIA property is a consequence of the statistical independence of the random utilities. In particular, even if the observed attributes of two alternatives d and d' are identical (which implies u(x, d, θ) = u(x, d', θ)), the statistical independence of the unobservable components ɛ(d) and ɛ(d') implies that alternatives d and d' are not perfect substitutes even when their observed characteristics are identical. In many cases this is not problematic: individuals may have different idiosyncratic perceptions of and preferences for two different items that have the same observed attributes. However, as in the red bus/blue bus example or the concert ticket example discussed by Debreu (1960), there are cases where it is plausible to believe that the observed attributes provide a sufficiently good description of an agent's perception of the desirability of two alternatives. In such cases, the hypothesis that choices are also affected by additive, independent unobservables ɛ(d) provides a poor representation of an agent's decisions. What is required in such cases is a random utility model with the property that the degree of correlation in the unobserved components of utility ɛ(d) and ɛ(d') for two alternatives d, d' ∈ D(x) is a function of the degree of closeness of their observed attributes. This type of dependence can be captured by a random coefficients probit model.[9] McFadden (1981) introduced the generalized extreme value (GEV) family of distributions. This family relaxes the independence assumption of the extreme value specification while still yielding tractable expressions for choice probabilities.
The GEV distribution is given by

F(z | x) = Pr{ɛ_d ≤ z_d, d ∈ D(x)} = exp{ −H(exp{−z_1}, ..., exp{−z_|D(x)|}, x, D(x)) },

for any function H(z, x, D(x)) satisfying certain consistency properties.[10] McFadden showed that the Social Surplus function for the GEV family is given by

S({u(x, d, θ) | d ∈ D(x)} | x) = γ + log[H(exp{u(x, 1, θ)}, ..., exp{u(x, |D(x)|, θ)}, x, D(x))],

[9] This is a random utility model of the form U(x, z, d, θ) = x_d'(θ + z), where x_d is a k × 1 vector of observed attributes of alternative d, θ is a k × 1 vector of utility weights representing the mean weights that individuals in the population assign to the various attributes in x_d, and z ~ N(0, Ω) is a k × 1 normally distributed random vector representing agent-specific deviations in the weighting of the attributes relative to the population average values θ. Under the random coefficients probit specification of the random utility model, when x_d = x_d', alternatives d and d' are in fact perfect substitutes for each other, and this model is able to provide the intuitively plausible prediction of the effect of introducing an "irrelevant" alternative (the red bus) in the red bus/blue bus problem. See, e.g., Hausman and Wise (1978).
[10] Specifically, H(z, x, D(x)) must 1) be linear homogeneous in z, 2) satisfy lim_{z→∞} H(z, x, D(x)) = ∞, 3) have nonpositive even and nonnegative odd mixed partial derivatives in z, and 4) if D(x) ⊆ E(x), and we let z_{E(x)} denote a vector with as many components as E(x) and (z_{D(x)}, 0_{E(x)−D(x)}) denote a vector with |E(x)| components taking the values z_d for d ∈ D(x) and 0 for d ∈ E(x) − D(x), then H(z_{D(x)}, x, D(x)) = H((z_{D(x)}, 0_{E(x)−D(x)}), x, E(x)). This last property ensures that the marginal distributions of a GEV distribution are also in the GEV family.

so by the Williams-Daly-Zachary theorem, the implied choice probabilities are given by

P(d | x, θ) = exp{u(x, d, θ)} H_d(exp{u(x, 1, θ)}, ..., exp{u(x, |D(x)|, θ)}, x, D(x)) / H(exp{u(x, 1, θ)}, ..., exp{u(x, |D(x)|, θ)}, x, D(x)),

where H_d(z, x, D(x)) = ∂H(z, x, D(x))/∂z_d. A prominent subclass of GEV distributions is given by H functions of the form

H(z, x, D(x)) = Σ_{i=1}^n [ Σ_{d ∈ D_i(x)} z_d^{1/σ_i} ]^{σ_i},

where {D_1(x), ..., D_n(x)} is a partition of the full choice set D(x). This class of GEV distributions yields (two-level) nested multinomial logit (NMNL) choice probabilities: for d ∈ D_i(x),

P(d | x, θ) = P(d | x, D_i(x), θ) P(D_i(x) | x, D(x), θ),

where

P(d | x, D_i(x), θ) = exp{u(x, d, θ)/σ_i} / Σ_{d' ∈ D_i(x)} exp{u(x, d', θ)/σ_i},

and

P(D_i(x) | x, D(x), θ) = exp{S_i(x)} / Σ_{j=1}^n exp{S_j(x)},    S_i(x) = σ_i log Σ_{d ∈ D_i(x)} exp{u(x, d, θ)/σ_i},

where S_i(x) is the Social Surplus function for the subset of choices D_i(x).

The nested logit model has an interpretation as a two-stage decision process (or two-level "decision tree"). In the first stage, the agent chooses one of the n partition elements D_i(x) with probabilities determined by an ordinary logit model in which the Social Surplus value for each partition element, S_i(x), plays the role of the utility. This is reasonable since S_i(x) represents the expected maximum utility of choosing an alternative d ∈ D_i(x). Then, in the second stage, the agent chooses an alternative d ∈ D_i(x) according to an MNL model with utilities u(x, d, θ)/σ_i (or, alternatively, a random utility model with error terms σ_i ɛ(d), d ∈ D_i(x)). McFadden called σ_i a "similarity parameter" since it plays the role of the scale parameter for the extreme value errors ɛ(d), which are independently distributed conditional on being in the subset D_i(x) of D(x). As σ_i → 0, the extreme value unobservables within each partition element D_i(x) play a diminishing role and S_i(x) → max{u(x, d, θ) | d ∈ D_i(x)}. The choice of a partition element D_i(x) is governed by an upper-level MNL model with the utility for each partition element equal to

the maximum utility over the alternatives in the partition. The nested logit model does not suffer from the IIA property (at least globally, although IIA does hold locally for alternatives d within the same partition element D_i(x)). In particular, one can specify a nested logit model that avoids the red bus/blue bus problem and thus produces intuitively plausible predictions of the effect of introducing an "irrelevant" alternative.[11] The NMNL model has been applied in numerous empirical studies, especially to study demand when there is an extremely large number of alternatives, such as modeling consumer choice of automobiles (e.g. Berkovec (1985), Goldberg (1995)). In many of these consumer choice problems there is a natural partitioning of the choice set in terms of product classes (e.g. the luxury, compact, intermediate, sport-utility, etc. classes in the case of autos). The nesting avoids the problems with the IIA property and results in more reasonable implied estimates of demand elasticities compared to those obtained using the MNL model. In fact, Dagsvik (1994) has shown that the class of random utility models with GEV-distributed utilities is dense in the class of all random utility models, in the sense that the choice probabilities implied by any random utility model can be approximated arbitrarily closely by a RUM in the GEV class. However, a limitation of nested logit models is that they imply a highly structured pattern of correlation in the unobservables, induced by the econometrician's specification of how the overall choice set D(x) is to be partitioned and of the number of levels in the nested logit tree. Even though the NMNL model can be nested to arbitrarily many levels to achieve additional flexibility, it is desirable to have a method where patterns of correlation in unobservables can be estimated from the data rather than being imposed by the analyst.
Further, even though McFadden and Train (2000) recognize Dagsvik's (1994) finding as a powerful theoretical result, they conclude that "its practical econometric application is limited by the difficulty of specifying, estimating, and testing the consistency of relatively abstract generalized extreme value RUM" (McFadden and Train (2000)). As noted above, the random coefficients probit model has many attractive features: it allows a flexibly specified covariance matrix representing correlation between unobservable components of utilities, avoiding many of the undesirable features implied by the IIA property of the MNL model in a somewhat more direct and intuitive fashion than is possible via the GEV family.

[11] For the commuter's problem discussed earlier, let D(x) = {w, r, b} (i.e. walk, take the red bus, or take the blue bus). Let D_1(x) = {w} and D_2(x) = {r, b}. As previously, we assume that u(x, w, θ) = u(x, b, θ) = u(x, r, θ). Further, let σ_1 = 1 and σ_2 = 0 (the limiting case). Then it is not hard to see that for the nested logit model, P(w | x, θ) = 1/2 and P(D_2(x) | x, D(x), θ) = 1/2. Conditional on choosing to go by bus, the individual is indifferent between the red and the blue bus, so P(r | x, D_2(x), θ) = P(b | x, D_2(x), θ) = 1/2. Thus P(r | x, D(x), θ) = P(b | x, D(x), θ) = 1/4, and it follows that this nested logit model yields the intuitively plausible solution to the red bus/blue bus problem.
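The calculation in footnote 11 is easy to reproduce. The sketch below (a hypothetical two-level nested logit implementation, not from the source) puts walking in its own nest and the two buses in a second nest with a small similarity parameter, approximating the text's limiting case σ_2 = 0 and recovering roughly (1/2, 1/4, 1/4):

```python
import numpy as np

def nested_logit(u, nests, sigmas):
    # two-level NMNL: P(d) = P(d | nest i) * P(nest i), where the upper level is an
    # MNL over the inclusive values S_i = sigma_i * log sum_{d in i} exp(u_d / sigma_i)
    S = np.array([s * np.log(np.sum(np.exp(np.array([u[d] for d in nest]) / s)))
                  for nest, s in zip(nests, sigmas)])
    P_nest = np.exp(S - S.max())
    P_nest /= P_nest.sum()
    P = {}
    for i, (nest, s) in enumerate(zip(nests, sigmas)):
        e = np.exp(np.array([u[d] for d in nest]) / s)
        for d, w in zip(nest, e / e.sum()):
            P[d] = w * P_nest[i]
    return P

# red bus / blue bus: walk alone in one nest, both buses in a second nest;
# sigma_2 near 0 makes the two buses near-perfect substitutes
u = {'walk': 0.0, 'red': 0.0, 'blue': 0.0}
P = nested_logit(u, nests=[['walk'], ['red', 'blue']], sigmas=[1.0, 0.01])
```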

However, as noted above, the multinomial probit model is intractable for applications with more than 4 or 5 alternatives due to the curse of dimensionality of the numerical integrations required, at least using deterministic numerical integration methods such as Gaussian quadrature. One of McFadden's most important contributions was his (1989) Econometrica paper that introduced the method of simulated moments (MSM). This was a major breakthrough: a new econometric method that made it feasible to estimate the parameters of multinomial probit models with arbitrarily large numbers of alternatives. The basic idea underlying McFadden's contribution is to use Monte Carlo integration to approximate the probit choice probabilities. While this idea had been proposed previously by Lerman and Manski (1981), it was never developed into a practical, widespread estimation method because "it requires an impractical number of Monte Carlo draws to estimate small choice probabilities and their derivatives with acceptable precision" (McFadden (1989)). McFadden's brilliant insight was that it is not necessary to have extremely accurate (and thus very computationally time-intensive) Monte Carlo estimates of choice probabilities in order to obtain an estimator for the parameters of a multinomial probit model that is consistent, asymptotically normal, and well-behaved in finite samples. McFadden's insight is that the noise from Monte Carlo simulations can be treated in the same way as random sampling error and will thus average out in large samples. In particular, his MSM estimator has good asymptotic properties even when only a single Monte Carlo draw is used to estimate each agent's choice probability. The key idea behind MSM is to formulate it as a method of moments estimator using an orthogonality condition and an appropriate set of instrumental variables.
The key orthogonality condition underlying the MSM estimator is the same as for the method of moments (MM) estimator, namely, that the expected value of an agent's decision equals the choice probability when θ = θ*. Let d_i be a |D(x_i)| × 1 vector of 0's and 1's with the property that if agent i with observed characteristics x_i chooses a particular alternative from the choice set D(x_i), then the corresponding component of the vector d_i equals 1, and equals 0 otherwise. Let P(x_i, θ) be the corresponding |D(x_i)| × 1 stacked vector of choice probabilities, i.e. P_d(x_i, θ) = P(d | x_i, θ), where P_d(x_i, θ) is the d-th component of the vector P(x_i, θ). If the random utility model is correctly specified, then at the true parameter vector θ* we have

E{ d_i − P(x_i, θ*) } = 0.

We can regard the vector η = d_i − P(x_i, θ*) as a mean-zero error term, and we can construct a |D(x)| × K matrix of instrumental variables Z satisfying E{η'Z} = 0 (for example, elements of

the matrix Z can be constructed from various powers and cross products of the components of the vector x_i). If it were possible to evaluate the choice probabilities, and hence the vector P(x_i, θ), it would be possible to estimate θ using a minimum distance estimator

θ̂_mm = argmin_θ Σ_{i=1}^N [d_i − P(x_i, θ)]' Z_i Z_i' [d_i − P(x_i, θ)].    (2.10)

However, in cases such as the multinomial probit model, it is not feasible to evaluate P(x_i, θ) when there are more than 5 or 6 alternatives in D(x). So consider an alternative, computationally feasible version of the minimum distance estimator where P(x_i, θ) is replaced by a Monte Carlo estimator P̂_S(x_i, θ) based on S independent and identically distributed draws {ɛ_1, ..., ɛ_S} from the distribution of unobservables F(ɛ | x_i) in the random utility model. Thus, the d-th component of P̂_S(x_i, θ) is given by

P̂_{S,d}(x_i, θ) = (1/S) Σ_{s=1}^S I{ d = argmax_{d' ∈ D(x_i)} [u(x_i, d', θ) + ɛ_s(d')] }.    (2.11)

For any fixed θ, the Monte Carlo estimator P̂_S(x_i, θ) is an unbiased estimator of P(x_i, θ):

E{ P̂_S(x_i, θ) − P(x_i, θ) } = 0,

and thus the fundamental orthogonality condition continues to hold when P(x_i, θ) is replaced by P̂_S(x_i, θ), i.e.

E{ Z_i' [d_i − P̂_S(x_i, θ*)] } = 0.

Based on this insight, McFadden introduced the method of simulated moments estimator

θ̂_msm = argmin_θ Σ_{i=1}^N [d_i − P̂_S(x_i, θ)]' Z_i Z_i' [d_i − P̂_S(x_i, θ)],    (2.12)

and showed that it is a consistent and asymptotically normal estimator of the true parameter vector θ*. The errors in the Monte Carlo estimate P̂_S(x_i, θ) of P(x_i, θ) are conceptually similar to sampling errors, and thus tend to average out over the number of observations N and become negligible as N → ∞. The cost of using the noisier simulation estimator of P(x_i, θ) is that θ̂_msm will have a larger asymptotic variance-covariance matrix than the method of moments estimator θ̂_mm.
However, McFadden showed that this cost is small: the asymptotic variance-covariance matrix of θ̂_msm is only (1 + 1/S) times as large as that of the ordinary method of moments estimator θ̂_mm, where S is the number of Monte Carlo simulation draws. In particular,

this implies that the variance of θ̂_msm is only twice as large as that of θ̂_mm when only a single Monte Carlo draw is used to estimate the choice probabilities. Since the savings in computation time from being able to use only a few Monte Carlo draws per observation to estimate P̂_S(x_i, θ) are huge, McFadden's result made it possible to estimate a broad new class of econometric models that were previously believed to be infeasible due to the computational demands of providing accurate estimates of P(x_i, θ). The idea behind the MSM estimator is quite general and can be applied in many other settings besides the multinomial probit model. McFadden's work helped to spawn a large literature on simulation estimation that developed rapidly during the 1990s and resulted in computationally feasible estimators for many models that had previously been out of reach. However, there are even better simulation estimators for the multinomial probit model, which generally outperform the MSM estimator in terms of lower asymptotic variance and better finite sample performance, and which are easier to compute. One problem with the crude frequency simulator P̂_S(x_i, θ) in equation (2.11) is that it is a discontinuous and locally flat function of the parameters θ, and thus the MSM criterion function in (2.12) is difficult to optimize. Hajivassiliou and McFadden (1998) introduced the method of simulated scores (MSS), based on Monte Carlo methods for simulating the scores of the likelihood function for the multinomial probit model and a wide class of other limited dependent variable models, such as Tobit and other types of censored regression models. 12 Because it simulates the score of the likelihood rather than using a method of moments criterion that does not generally lead to full asymptotic efficiency, the MSS estimator is more efficient than the MSM estimator.
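The (1 + 1/S) variance inflation noted above can be checked numerically in a stylized binary example: with d ~ Bernoulli(p) and an independent single-draw frequency estimate P̂_S of p, the moment d − P̂_S has variance p(1−p) + p(1−p)/S. This is only a toy illustration of the variance arithmetic, not the full MSM asymptotics.

```python
import numpy as np

# d ~ Bernoulli(p); p_hat is an independent crude frequency estimate of p from
# S draws, so Var(d - p_hat) = p(1-p) + p(1-p)/S = (1 + 1/S) * p * (1-p).
rng = np.random.default_rng(42)
p, S, reps = 0.3, 1, 200_000
d = rng.binomial(1, p, size=reps)            # observed choices
p_hat = rng.binomial(S, p, size=reps) / S    # single-draw frequency simulator
theory = (1 + 1 / S) * p * (1 - p)           # = 0.42, twice the no-simulation value
print((d - p_hat).var(), theory)             # sample variance matches the theory
```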
Also, the MSS estimator is based on a smooth simulator (i.e. a method of simulation that results in an estimation criterion that is a continuously differentiable function of the parameters θ), so it is much easier to compute than the MSM estimator (2.12) based on the crude frequency simulator of P̂_S(x_i, θ) in equation (2.11). Based on numerous Monte Carlo studies and empirical applications, the MSS estimator (and a closely related simulated maximum likelihood estimator based on the Geweke-Hajivassiliou-Keane (GHK) smooth simulator) is now regarded as the estimation method of choice for a wide class of limited dependent variable models that are commonly encountered in empirical applications. Despite these computational breakthroughs, the MNL model remains one of the most tractable functional forms for estimating discrete choice models. When the utility functions are specified 12 In the case of a discrete choice model, the score for the i-th observation is ∂/∂θ log P(d_i | x_i, θ).

to be linear-in-parameters, u(x, d, θ) = v(x, d)′θ, where v(x, d) is a vector of interactions of characteristics of alternative d and characteristics of the agent, the likelihood function L(θ) is a concave function of θ, which makes it easy to compute the maximum likelihood estimator θ̂. 13 However, as noted above, the MNL model is considered undesirable in many cases due to the IIA property. McFadden (1984) showed that the MNL model can serve as a universal approximator of any set of choice probabilities: any conditional choice probability P(d | x) can be represented as an MNL model with utilities given by u(x, d) = log(P(d | x)). McFadden called this universal representation the mother logit model. However, it is not the case that mother logit can legitimately rationalize any set of choice probabilities. The mother logit model is based on a pseudo utility function u(x, d) = log(P(d | x)) that is an implicit function of a particular underlying choice set D(x). The approach does not allow us to predict how choices will change if the choice sets change, unless it is based on choice probabilities that are explicit functions of the choice set, P(d | x, D(x)). And unless the choice probabilities also satisfy the Block-Marschak necessary conditions, the mother logit model will not result in a valid random utility model. McFadden and Train (2000), in a paper that won the Sir Richard Stone Prize for the best empirical paper published in the Journal of Applied Econometrics, showed that a computationally tractable class of choice probabilities, mixed MNL models, constitutes a valid class of random utility models whose implied choice probabilities can approximate those implied by virtually any random utility model. 14 A mixed MNL model has choice probabilities of the form

P(d | x, θ) = ∫ [ exp{u(x, d, α)} / Σ_{d′ ∈ D(x)} exp{u(x, d′, α)} ] G(dα | θ).   (2.13)

There are several possible random utility interpretations of the mixed logit model.
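The mother logit representation described above can be verified in a few lines: plugging u(d) = log P(d) into the MNL formula reproduces any strictly positive choice probabilities exactly. A minimal sketch (all names are illustrative):

```python
import math

def mnl(utilities):
    """MNL choice probabilities for a given vector of utilities."""
    z = [math.exp(u) for u in utilities]
    s = sum(z)
    return [e / s for e in z]

# Mother logit: feeding u(d) = log P(d) back through the MNL formula
# recovers the original choice probabilities.
P = [0.5, 0.3, 0.2]
recovered = mnl([math.log(p) for p in P])
print(recovered)  # [0.5, 0.3, 0.2] up to floating point rounding
```

As the text cautions, this representation fixes the choice set implicitly, so it cannot by itself predict behavior on other choice sets.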
One interpretation is that the α vector represents unobserved heterogeneity in the preference parameters in the population, so the relevant choice probability is marginalized using the population distribution G(α | θ) of the α parameters. The other interpretation is that α is similar to the vector ε, 13 In fact, standard hill-climbing algorithms can compute the global maximum θ̂ in polynomial time (as a function of the dimension K of the θ vector). In cases where the likelihood function is not concave, computer scientists have shown that the problem of finding a global optimum is exponential time in the worst case. 14 The main restriction on the set of allowable random utility models in their approximation result is that there be zero probability of a tie, i.e. zero probability that the agent is indifferent between multiple alternatives in the choice set.

i.e. it represents information that agents observe and which affects their choices (similar to ε) but which is unobserved by the econometrician, except that the components ε(d) of ε enter the utility function additively separably, whereas the variables α are allowed to enter in a non-additively separable fashion, and the random vectors α and ε are statistically independent. It is easy to see that under either interpretation the mixed logit model will not satisfy the IIA property, and thus is not subject to its undesirable implications. McFadden and Train proposed several alternative ways to estimate mixed logit models, including maximum simulated likelihood and MSM. In each case, Monte Carlo integration is used to approximate the integral in equation (2.13) with respect to G(α | θ). Both of these estimators are smooth functions of the parameters θ, and both benefit from the computational tractability of the MNL model while at the same time having the flexibility to approximate virtually any type of random utility model. The intuition behind McFadden and Train's approximation theorem is that a mixed logit model can be regarded as a certain type of neural network using the MNL model as the underlying squashing function. Neural networks are known to be able to approximate arbitrary types of functions and enjoy certain optimality properties: the number of parameters (i.e. the dimension of the α vector) needed to approximate arbitrary choice probabilities grows only linearly in the number of included covariates x. 15 This brief survey of McFadden's contributions to the discrete choice literature has revealed the immense practical benefits of his ability to link theory and econometrics, innovations that led to a vast empirical literature and widespread applications of discrete choice models. Beginning with his initial discovery, i.e.
his demonstration that Luce's MNL choice probabilities result from a random utility model with multivariate extreme value distributed unobservables, McFadden has made a series of fundamental contributions that have enabled researchers to circumvent the problematic implications of the IIA property of the MNL model, providing computationally tractable methods for estimating ever wider and more flexible classes of random utility and limited dependent variable models in econometrics. 15 Other approximation methods, such as series estimators formed as tensor products of bases that are univariate functions of each of the components of x, require a much larger number of coefficients to provide a comparable approximation, and the number of such coefficients grows exponentially fast with the dimension of the x vector.
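The Monte Carlo integration used to approximate the mixing integral in (2.13) can be sketched in a few lines. This is a toy scalar-coefficient specification with a normal mixing distribution; the utility form and all names are illustrative assumptions, not McFadden and Train's implementation.

```python
import numpy as np

def mixed_logit_prob(v, mean, sd, n_draws, rng):
    """Simulated mixed-MNL probabilities, eq. (2.13): average the MNL
    choice probabilities over draws of a scalar random coefficient
    alpha ~ N(mean, sd^2), with illustrative utilities u(x, d, alpha) = alpha * v[d]."""
    alphas = rng.normal(mean, sd, size=n_draws)
    utils = np.outer(alphas, v)                        # (n_draws, n_alts)
    expu = np.exp(utils - utils.max(axis=1, keepdims=True))
    probs = expu / expu.sum(axis=1, keepdims=True)     # MNL formula, draw by draw
    return probs.mean(axis=0)                          # smooth in (mean, sd)

rng = np.random.default_rng(7)
p = mixed_logit_prob(np.array([0.0, 1.0, 2.0]), mean=0.5, sd=1.0,
                     n_draws=5_000, rng=rng)
print(p)  # a probability vector; the mixture over alpha breaks the IIA ratios
```

Because the simulated probability averages smooth MNL formulas over the draws, it is a continuously differentiable function of the parameters, in contrast to the crude frequency simulator.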

3. Extensions
McFadden's research continues, at an undiminished pace, to provide important theoretical and applied contributions. A recent example is his innovative paper with Jenkins et al. (2004), The Browser War - Econometric Analysis of Markov Perfect Equilibrium in Markets with Network Effects. This paper formulates a dynamic model of competition between the two main internet browsers, Microsoft's Internet Explorer and Netscape, and uses the model to quantify the damages that resulted from aggressive competitive tactics on the part of Microsoft, which were judged anticompetitive and illegal in the landmark case U.S. vs. Microsoft. The model allows for the possibility of network externalities, whereby a consumer's utility from using a given browser may be an increasing function of the browser's market share. The analysis concludes that Microsoft's illegal exclusionary contracts with internet service providers, which excluded Netscape, were only a minor part of the explanation of Netscape's decline: the majority of the lost market share (and thus damages to Netscape) was due to Microsoft's tying of Internet Explorer to the Windows operating system, and the arrangements under which it was difficult or inconvenient for OEMs to preinstall another browser (Jenkins et al. (2004), p. 45). Unfortunately, there is insufficient space to discuss all of McFadden's other equally interesting and important work. However, I do wish to devote the remaining space to two examples of how McFadden's work has helped to spawn new literatures that appear likely to be among the most active and vibrant areas of future applied work in econometrics. One area is the estimation of static and dynamic discrete games of incomplete information. This is a very natural extension of the standard discrete choice model, which can be viewed as a game against nature.
Consider, for example, a two player game where player a has observed characteristics x_a and a choice set D_a(x_a), and player b has observed characteristics x_b and a choice set D_b(x_b). Assume the two players move simultaneously, in order to maximize the expected value of the utility functions u_a(x_a, d_a, d_b, θ_a) + ε_a(d_a) (for player a) and u_b(x_b, d_a, d_b, θ_b) + ε_b(d_b) (for player b). The utility functions of both players depend on the vectors ε_a and ε_b, which are private information (i.e. player a knows ε_a but not ε_b, and vice versa for player b). If it is common knowledge that ε_a and ε_b have extreme value distributions, the Bayesian Nash equilibrium of this game can be defined in terms of a pair of equilibrium choice probabilities (P_a(d_a | x_a, x_b, θ), P_b(d_b | x_a, x_b, θ)) that satisfy

the following equations:

P_a(d_a | x_a, x_b, θ) = exp{Eu_a(x_a, x_b, d_a, θ)} / Σ_{d′ ∈ D_a(x_a)} exp{Eu_a(x_a, x_b, d′, θ)}
P_b(d_b | x_a, x_b, θ) = exp{Eu_b(x_a, x_b, d_b, θ)} / Σ_{d′ ∈ D_b(x_b)} exp{Eu_b(x_a, x_b, d′, θ)},   (3.1)

where

Eu_a(x_a, x_b, d_a, θ) = Σ_{d_b ∈ D_b(x_b)} u_a(x_a, d_a, d_b, θ_a) P_b(d_b | x_a, x_b, θ)
Eu_b(x_a, x_b, d_b, θ) = Σ_{d_a ∈ D_a(x_a)} u_b(x_b, d_a, d_b, θ_b) P_a(d_a | x_a, x_b, θ).

The Brouwer fixed point theorem implies that at least one equilibrium of this game always exists. If we observe N independent games played by these two types of players, a and b, with observed outcomes {d_a^i, d_b^i, x_a^i, x_b^i}_{i=1}^N, then we can estimate the parameter vector θ = (θ_a, θ_b) by maximizing the likelihood function L(θ) given by

L(θ) = Π_{i=1}^N P_a(d_a^i | x_a^i, x_b^i, θ) P_b(d_b^i | x_a^i, x_b^i, θ).   (3.2)

We see that the equilibrium choice probabilities in (3.1) are a direct generalization of the MNL probabilities that McFadden derived in a single agent game against nature, and the likelihood function (3.2) is a direct generalization of the likelihood McFadden developed to estimate the parameters θ in the MNL model (see (2.4)). This line of extension of the single agent discrete choice techniques that McFadden developed is one of the current frontier areas in applied econometrics (see, e.g., Bajari et al. (2005)). Another area where McFadden's work has been very influential is the literature on dynamic discrete choice models. This literature originated in the 1980s and maintains the single agent focus of most of McFadden's work, but extends the choice problem from the static context that McFadden analyzed to situations where agents make repeated choices over time in order to maximize a dynamic or intertemporal objective function. For example, Dagsvik (1983) formulated a beautiful extension of the static discrete choice model to a discrete choice in continuous time setting where utilities are viewed as realizations of continuous time stochastic processes.
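A numerical sketch of the equilibrium probabilities in (3.1) for a 2 × 2 game with extreme value private shocks: the equilibrium is computed here by simple fixed-point iteration on the logit best-response map (the payoff matrices and all names are illustrative assumptions, not taken from the literature cited).

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def bne_probs(u_a, u_b, tol=1e-12, max_iter=10_000):
    """Equilibrium choice probabilities (eq. 3.1) for a 2x2 simultaneous-move
    game with extreme value private shocks, found by iterating the logit
    best-response map to a fixed point. u_a[da, db] and u_b[da, db] are the
    deterministic payoff matrices of players a and b."""
    P_a = np.full(2, 0.5)
    P_b = np.full(2, 0.5)
    for _ in range(max_iter):
        Eu_a = u_a @ P_b                # expected payoff of each action of a
        Eu_b = P_a @ u_b                # expected payoff of each action of b
        new_a, new_b = softmax(Eu_a), softmax(Eu_b)
        if max(abs(new_a - P_a).max(), abs(new_b - P_b).max()) < tol:
            return new_a, new_b
        P_a, P_b = new_a, new_b
    return P_a, P_b

# toy entry game: each player prefers to enter (action 1) when the rival stays out
u_a = np.array([[0.0, 0.0], [1.0, -1.0]])   # rows: a's action, cols: b's action
u_b = np.array([[0.0, 1.0], [0.0, -1.0]])
P_a, P_b = bne_probs(u_a, u_b)
print(P_a, P_b)  # equilibrium mixed entry probabilities for each player
```

Brouwer's theorem guarantees a fixed point exists; plain iteration happens to converge for these moderate payoffs, though in general the map need not be a contraction.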
Let P(d_t | x_t, θ) be the probability that an agent with observed characteristics x_t chooses alternative d_t ∈ D(x_t) at time t. Then the natural continuous time extension of the random utility model is

P(d_t | x_t, θ) = Pr{ Ũ_t(x_t, d_t, θ) ≥ Ũ_t(x_t, d′, θ), ∀ d′ ∈ D(x_t) },

where {Ũ_t(x_t, d, θ)} is interpreted as a random utility process, i.e. a stochastic process indexed by the time variable t. Dagsvik showed that a class of stochastic processes known as multivariate extremal processes are the natural continuous time extension of the extreme value error components in McFadden's original work, resulting in marginal (i.e. time t) choice probabilities that have the MNL form. Further, Dagsvik showed that the (discrete) stochastic process for the optimal choice chosen by such a decision maker forms a continuous time Markov chain. A related line of extension of McFadden's work has been to link it with discrete time sequential decision making models and the method of dynamic programming. In this theory, an agent selects an alternative d_t ∈ D(x_t) at each time t to maximize the expected value of a time-separable discounted objective function. The solution to the problem is a decision rule, i.e. a sequence of functions (δ_0, ..., δ_T) that solves

(δ_0, ..., δ_T) = argmax E{ Σ_{t=0}^T β^t [u_t(x_t, d_t, θ) + ε_t(d_t)] },

where β ∈ (0, 1) is an intertemporal discount factor, d_t = δ_t(I_t), and I_t is the information available to the decision maker at time t. If we assume that the observed state variables evolve according to a controlled Markov process with transition probability p(x_{t+1} | x_t, d_t, θ), and the components ε_t are interpreted as unobserved state variables which are IID (i.e. independent and identically distributed) and independent of the observed state variables {x_t}, then the agent's choice probability at time t is given by

P_t(d_t | x_t, θ) = Pr{d_t = δ_t(I_t)} = Pr{ v_t(x_t, d_t) + ε_t(d_t) ≥ v_t(x_t, d′) + ε_t(d′), ∀ d′ ∈ D(x_t) },

where v_t(x, d) is an expected value function given by

v_t(x, d, θ) = u_t(x, d, θ) + β ∫ S({v_{t+1}(x′, d′) | d′ ∈ D(x′)}) p(dx′ | x, d, θ),

where S is the same Social Surplus function that plays a key role in the derivation of the static discrete choice model (see (2.7)).
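The backward recursion for the expected value functions v_t can be sketched numerically. This is a toy example with extreme value shocks (so the Social Surplus function is the log-sum, with σ normalized to 1), time invariant flow utilities, and made-up primitives; it is an illustration of the recursion above, not an implementation from the literature cited.

```python
import numpy as np

def backward_induction(u, p, T, beta=0.95):
    """Finite-horizon expected value functions v_t(x, d) by backward induction,
    using the extreme value Social Surplus (log-sum) S(v) = log sum_d' exp{v(x', d')}.
    u: (n_states, n_actions) flow utilities (time invariant, an assumption);
    p: (n_states, n_actions, n_states) transition probabilities."""
    v = [None] * (T + 1)
    v[T] = u.copy()                                       # terminal period: no continuation
    for t in range(T - 1, -1, -1):
        surplus = np.log(np.exp(v[t + 1]).sum(axis=1))    # S at each next-period state
        v[t] = u + beta * p @ surplus                     # v_t(x, d) recursion above
    return v

def period_choice_probs(v_t):
    """MNL-form choice probabilities implied by v_t plus extreme value shocks."""
    e = np.exp(v_t - v_t.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# hypothetical 2-state, 2-action primitives
u = np.array([[1.0, 0.0], [0.0, 2.0]])
p = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
v = backward_induction(u, p, T=10)
P0 = period_choice_probs(v[0])
print(P0)  # period-0 choice probabilities, one row per state
```

Letting T → ∞ with time invariant utilities turns this recursion into the stationary contraction fixed point discussed next.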
In particular, if {ε_t} is an IID extreme value process, and if T = ∞ and utilities are time invariant, the Markov decision problem can be shown to be stationary, with a time invariant decision rule d_t = δ(x_t, ε_t) that results in a dynamic generalization of the MNL model:

P(d | x, θ) = Pr{d = δ(x, ε) | x} = exp{v(x, d, θ)/σ} / Σ_{d′ ∈ D(x)} exp{v(x, d′, θ)/σ},

where v(x, d) is the unique fixed point of a contraction mapping v = Γ(v) defined by

v(x, d) = Γ(v)(x, d) ≡ u(x, d, θ) + β ∫ σ log( Σ_{d′ ∈ D(x′)} exp{v(x′, d′)/σ} ) p(dx′ | x, d).

There are also dynamic extensions of the probit model. See Eckstein and Wolpin (1989) and Rust (1994) for surveys of the literature on dynamic extensions of discrete choice models.

4. References

Anderson, S.P., De Palma, A. and J.F. Thisse (1992) Discrete Choice Theory of Product Differentiation. Cambridge, MIT Press.
Bajari, P., Hong, H., Krainer, J. and D. Nekipelov (2005) Estimating Static Models of Strategic Interactions. Manuscript, University of Michigan.
Berkovec, J. (1985) New Car Sales and Used Car Stocks: A Model of the Automobile Market. RAND Journal of Economics.
Berry, S., Levinsohn, J. and A. Pakes (1995) Automobile Prices in Market Equilibrium. Econometrica.
Block, H. and J. Marschak (1960) Random Orderings and Stochastic Theories of Response. In I. Olkin (ed.) Contributions to Probability and Statistics. Stanford, Stanford University Press.
Cosslett, S.R. (1981) Efficient Estimation of Discrete-Choice Models. In C.F. Manski and D. McFadden (eds.), op. cit.
Dagsvik, J.K. (1983) Discrete Dynamic Choice: An Extension of the Choice Models of Luce and Thurstone. Journal of Mathematical Psychology.
Dagsvik, J.K. (1994) Discrete and Continuous Choice, Max-Stable Processes and Independence from Irrelevant Attributes. Econometrica.
Dagsvik, J.K. (1995) How Large is the Class of Generalized Extreme Value Models? Journal of Mathematical Psychology.
Daly, A. and S. Zachary (1979) Improved Multiple Choice Models. In D. Hensher and Q. Dalvi (eds.) Identifying and Measuring the Determinants of Mode Choice. London, Teakfield.
Debreu, G. (1960) Review of R.D. Luce, Individual Choice Behavior. American Economic Review.
Dubin, J. and D. McFadden (1984) An Econometric Analysis of Residential Electric Appliance Holdings and Consumption. Econometrica.
Eckstein, Z. and K.
Wolpin (1989) The Specification and Estimation of Dynamic Stochastic Discrete Choice Models. Journal of Human Resources.
Falmagne, J.C. (1978) A Representation Theorem for Finite Random Scale Systems. Journal of Mathematical Psychology.


1 Differentiated Products: Motivation 1 Differentiated Products: Motivation Let us generalise the problem of differentiated products. Let there now be N firms producing one differentiated product each. If we start with the usual demand function

More information

Lecture 14 More on structural estimation

Lecture 14 More on structural estimation Lecture 14 More on structural estimation Economics 8379 George Washington University Instructor: Prof. Ben Williams traditional MLE and GMM MLE requires a full specification of a model for the distribution

More information

1 Hotz-Miller approach: avoid numeric dynamic programming

1 Hotz-Miller approach: avoid numeric dynamic programming 1 Hotz-Miller approach: avoid numeric dynamic programming Rust, Pakes approach to estimating dynamic discrete-choice model very computer intensive. Requires using numeric dynamic programming to compute

More information

Imbens/Wooldridge, Lecture Notes 11, NBER, Summer 07 1

Imbens/Wooldridge, Lecture Notes 11, NBER, Summer 07 1 Imbens/Wooldridge, Lecture Notes 11, NBER, Summer 07 1 What s New in Econometrics NBER, Summer 2007 Lecture 11, Wednesday, Aug 1st, 9.00-10.30am Discrete Choice Models 1. Introduction In this lecture we

More information

An empirical model of firm entry with endogenous product-type choices

An empirical model of firm entry with endogenous product-type choices and An empirical model of firm entry with endogenous product-type choices, RAND Journal of Economics 31 Jan 2013 Introduction and Before : entry model, identical products In this paper : entry with simultaneous

More information

Theory Field Examination Game Theory (209A) Jan Question 1 (duopoly games with imperfect information)

Theory Field Examination Game Theory (209A) Jan Question 1 (duopoly games with imperfect information) Theory Field Examination Game Theory (209A) Jan 200 Good luck!!! Question (duopoly games with imperfect information) Consider a duopoly game in which the inverse demand function is linear where it is positive

More information

A Rothschild-Stiglitz approach to Bayesian persuasion

A Rothschild-Stiglitz approach to Bayesian persuasion A Rothschild-Stiglitz approach to Bayesian persuasion Matthew Gentzkow and Emir Kamenica Stanford University and University of Chicago December 2015 Abstract Rothschild and Stiglitz (1970) represent random

More information

Linear Models in Econometrics

Linear Models in Econometrics Linear Models in Econometrics Nicky Grant At the most fundamental level econometrics is the development of statistical techniques suited primarily to answering economic questions and testing economic theories.

More information

Econometrics I, Estimation

Econometrics I, Estimation Econometrics I, Estimation Department of Economics Stanford University September, 2008 Part I Parameter, Estimator, Estimate A parametric is a feature of the population. An estimator is a function of the

More information

Estimating Single-Agent Dynamic Models

Estimating Single-Agent Dynamic Models Estimating Single-Agent Dynamic Models Paul T. Scott New York University Empirical IO Course Fall 2016 1 / 34 Introduction Why dynamic estimation? External validity Famous example: Hendel and Nevo s (2006)

More information

A Dynamic Network Oligopoly Model with Transportation Costs, Product Differentiation, and Quality Competition

A Dynamic Network Oligopoly Model with Transportation Costs, Product Differentiation, and Quality Competition A Dynamic Network Oligopoly Model with Transportation Costs, Product Differentiation, and Quality Competition Anna Nagurney John F. Smith Memorial Professor and Dong Li Doctoral Student Department of Finance

More information

Area I: Contract Theory Question (Econ 206)

Area I: Contract Theory Question (Econ 206) Theory Field Exam Summer 2011 Instructions You must complete two of the four areas (the areas being (I) contract theory, (II) game theory A, (III) game theory B, and (IV) psychology & economics). Be sure

More information

September Math Course: First Order Derivative

September Math Course: First Order Derivative September Math Course: First Order Derivative Arina Nikandrova Functions Function y = f (x), where x is either be a scalar or a vector of several variables (x,..., x n ), can be thought of as a rule which

More information

Welfare Evaluation in a Heterogeneous Agents Model: How Representative is the CES Representative Consumer?

Welfare Evaluation in a Heterogeneous Agents Model: How Representative is the CES Representative Consumer? Welfare Evaluation in a Heterogeneous Agents Model: How Representative is the CES Representative Consumer? Maria D. Tito August 7, 0 Preliminary Draft: Do not cite without permission Abstract The aim of

More information

The New Palgrave: Separability

The New Palgrave: Separability The New Palgrave: Separability Charles Blackorby Daniel Primont R. Robert Russell 1. Introduction July 29, 2006 Separability, as discussed here, refers to certain restrictions on functional representations

More information

Lecture 2: Basic Concepts of Statistical Decision Theory

Lecture 2: Basic Concepts of Statistical Decision Theory EE378A Statistical Signal Processing Lecture 2-03/31/2016 Lecture 2: Basic Concepts of Statistical Decision Theory Lecturer: Jiantao Jiao, Tsachy Weissman Scribe: John Miller and Aran Nayebi In this lecture

More information

P1: GEM/IKJ P2: GEM/IKJ QC: GEM/ABE T1: GEM CB495-05Drv CB495/Train KEY BOARDED August 20, :28 Char Count= 0

P1: GEM/IKJ P2: GEM/IKJ QC: GEM/ABE T1: GEM CB495-05Drv CB495/Train KEY BOARDED August 20, :28 Char Count= 0 5 Probit 5.1 Choice Probabilities The logit model is limited in three important ways. It cannot represent random taste variation. It exhibits restrictive substitution patterns due to the IIA property.

More information

Robust Predictions in Games with Incomplete Information

Robust Predictions in Games with Incomplete Information Robust Predictions in Games with Incomplete Information joint with Stephen Morris (Princeton University) November 2010 Payoff Environment in games with incomplete information, the agents are uncertain

More information

Bresnahan, JIE 87: Competition and Collusion in the American Automobile Industry: 1955 Price War

Bresnahan, JIE 87: Competition and Collusion in the American Automobile Industry: 1955 Price War Bresnahan, JIE 87: Competition and Collusion in the American Automobile Industry: 1955 Price War Spring 009 Main question: In 1955 quantities of autos sold were higher while prices were lower, relative

More information

High-dimensional Problems in Finance and Economics. Thomas M. Mertens

High-dimensional Problems in Finance and Economics. Thomas M. Mertens High-dimensional Problems in Finance and Economics Thomas M. Mertens NYU Stern Risk Economics Lab April 17, 2012 1 / 78 Motivation Many problems in finance and economics are high dimensional. Dynamic Optimization:

More information

Bayesian Estimation of Discrete Games of Complete Information

Bayesian Estimation of Discrete Games of Complete Information Bayesian Estimation of Discrete Games of Complete Information Sridhar Narayanan May 30, 2011 Discrete games of complete information have been used to analyze a variety of contexts such as market entry,

More information

Masking Identification of Discrete Choice Models under Simulation Methods *

Masking Identification of Discrete Choice Models under Simulation Methods * Masking Identification of Discrete Choice Models under Simulation Methods * Lesley Chiou 1 and Joan L. Walker 2 Abstract We present examples based on actual and synthetic datasets to illustrate how simulation

More information

Bayesian Econometrics - Computer section

Bayesian Econometrics - Computer section Bayesian Econometrics - Computer section Leandro Magnusson Department of Economics Brown University Leandro Magnusson@brown.edu http://www.econ.brown.edu/students/leandro Magnusson/ April 26, 2006 Preliminary

More information

Introduction to Econometrics

Introduction to Econometrics Introduction to Econometrics T H I R D E D I T I O N Global Edition James H. Stock Harvard University Mark W. Watson Princeton University Boston Columbus Indianapolis New York San Francisco Upper Saddle

More information

Lecture 4. Xavier Gabaix. February 26, 2004

Lecture 4. Xavier Gabaix. February 26, 2004 14.127 Lecture 4 Xavier Gabaix February 26, 2004 1 Bounded Rationality Three reasons to study: Hope that it will generate a unified framework for behavioral economics Some phenomena should be captured:

More information

KIER DISCUSSION PAPER SERIES

KIER DISCUSSION PAPER SERIES KIER DISCUSSION PAPER SERIES KYOTO INSTITUTE OF ECONOMIC RESEARCH Discussion Paper No.992 Intertemporal efficiency does not imply a common price forecast: a leading example Shurojit Chatterji, Atsushi

More information

A Measure of Robustness to Misspecification

A Measure of Robustness to Misspecification A Measure of Robustness to Misspecification Susan Athey Guido W. Imbens December 2014 Graduate School of Business, Stanford University, and NBER. Electronic correspondence: athey@stanford.edu. Graduate

More information

Field Course Descriptions

Field Course Descriptions Field Course Descriptions Ph.D. Field Requirements 12 credit hours with 6 credit hours in each of two fields selected from the following fields. Each class can count towards only one field. Course descriptions

More information

Uncertainty and Disagreement in Equilibrium Models

Uncertainty and Disagreement in Equilibrium Models Uncertainty and Disagreement in Equilibrium Models Nabil I. Al-Najjar & Northwestern University Eran Shmaya Tel Aviv University RUD, Warwick, June 2014 Forthcoming: Journal of Political Economy Motivation

More information

Chapter 1 Introduction. What are longitudinal and panel data? Benefits and drawbacks of longitudinal data Longitudinal data models Historical notes

Chapter 1 Introduction. What are longitudinal and panel data? Benefits and drawbacks of longitudinal data Longitudinal data models Historical notes Chapter 1 Introduction What are longitudinal and panel data? Benefits and drawbacks of longitudinal data Longitudinal data models Historical notes 1.1 What are longitudinal and panel data? With regression

More information

1 Introduction to structure of dynamic oligopoly models

1 Introduction to structure of dynamic oligopoly models Lecture notes: dynamic oligopoly 1 1 Introduction to structure of dynamic oligopoly models Consider a simple two-firm model, and assume that all the dynamics are deterministic. Let x 1t, x 2t, denote the

More information

Matching with Trade-offs. Revealed Preferences over Competing Characteristics

Matching with Trade-offs. Revealed Preferences over Competing Characteristics : Revealed Preferences over Competing Characteristics Alfred Galichon Bernard Salanié Maison des Sciences Economiques 16 October 2009 Main idea: Matching involve trade-offs E.g in the marriage market partners

More information

Identifying Dynamic Games with Serially-Correlated Unobservables

Identifying Dynamic Games with Serially-Correlated Unobservables Identifying Dynamic Games with Serially-Correlated Unobservables Yingyao Hu Dept. of Economics, Johns Hopkins University Matthew Shum Division of Humanities and Social Sciences, Caltech First draft: August

More information

SP Experimental Designs - Theoretical Background and Case Study

SP Experimental Designs - Theoretical Background and Case Study SP Experimental Designs - Theoretical Background and Case Study Basil Schmid IVT ETH Zurich Measurement and Modeling FS2016 Outline 1. Introduction 2. Orthogonal and fractional factorial designs 3. Efficient

More information

PART I INTRODUCTION The meaning of probability Basic definitions for frequentist statistics and Bayesian inference Bayesian inference Combinatorics

PART I INTRODUCTION The meaning of probability Basic definitions for frequentist statistics and Bayesian inference Bayesian inference Combinatorics Table of Preface page xi PART I INTRODUCTION 1 1 The meaning of probability 3 1.1 Classical definition of probability 3 1.2 Statistical definition of probability 9 1.3 Bayesian understanding of probability

More information

July 31, 2009 / Ben Kedem Symposium

July 31, 2009 / Ben Kedem Symposium ing The s ing The Department of Statistics North Carolina State University July 31, 2009 / Ben Kedem Symposium Outline ing The s 1 2 s 3 4 5 Ben Kedem ing The s Ben has made many contributions to time

More information

A Summary of Economic Methodology

A Summary of Economic Methodology A Summary of Economic Methodology I. The Methodology of Theoretical Economics All economic analysis begins with theory, based in part on intuitive insights that naturally spring from certain stylized facts,

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Computer Science! Department of Statistical Sciences! rsalakhu@cs.toronto.edu! h0p://www.cs.utoronto.ca/~rsalakhu/ Lecture 7 Approximate

More information

Using a Laplace Approximation to Estimate the Random Coefficients Logit Model by Non-linear Least Squares 1

Using a Laplace Approximation to Estimate the Random Coefficients Logit Model by Non-linear Least Squares 1 Using a Laplace Approximation to Estimate the Random Coefficients Logit Model by Non-linear Least Squares 1 Matthew C. Harding 2 Jerry Hausman 3 September 13, 2006 1 We thank Ketan Patel for excellent

More information

A Rothschild-Stiglitz approach to Bayesian persuasion

A Rothschild-Stiglitz approach to Bayesian persuasion A Rothschild-Stiglitz approach to Bayesian persuasion Matthew Gentzkow and Emir Kamenica Stanford University and University of Chicago January 2016 Consider a situation where one person, call him Sender,

More information

ECON FINANCIAL ECONOMICS

ECON FINANCIAL ECONOMICS ECON 337901 FINANCIAL ECONOMICS Peter Ireland Boston College Spring 2018 These lecture notes by Peter Ireland are licensed under a Creative Commons Attribution-NonCommerical-ShareAlike 4.0 International

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo January 29, 2012 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

Gaussian processes. Chuong B. Do (updated by Honglak Lee) November 22, 2008

Gaussian processes. Chuong B. Do (updated by Honglak Lee) November 22, 2008 Gaussian processes Chuong B Do (updated by Honglak Lee) November 22, 2008 Many of the classical machine learning algorithms that we talked about during the first half of this course fit the following pattern:

More information

A Guide to Modern Econometric:

A Guide to Modern Econometric: A Guide to Modern Econometric: 4th edition Marno Verbeek Rotterdam School of Management, Erasmus University, Rotterdam B 379887 )WILEY A John Wiley & Sons, Ltd., Publication Contents Preface xiii 1 Introduction

More information

STRUCTURE Of ECONOMICS A MATHEMATICAL ANALYSIS

STRUCTURE Of ECONOMICS A MATHEMATICAL ANALYSIS THIRD EDITION STRUCTURE Of ECONOMICS A MATHEMATICAL ANALYSIS Eugene Silberberg University of Washington Wing Suen University of Hong Kong I Us Irwin McGraw-Hill Boston Burr Ridge, IL Dubuque, IA Madison,

More information

Identifying Dynamic Games with Serially-Correlated Unobservables

Identifying Dynamic Games with Serially-Correlated Unobservables Identifying Dynamic Games with Serially-Correlated Unobservables Yingyao Hu Dept. of Economics, Johns Hopkins University Matthew Shum Division of Humanities and Social Sciences, Caltech First draft: August

More information

Dynamic Discrete Choice Structural Models in Empirical IO

Dynamic Discrete Choice Structural Models in Empirical IO Dynamic Discrete Choice Structural Models in Empirical IO Lecture 4: Euler Equations and Finite Dependence in Dynamic Discrete Choice Models Victor Aguirregabiria (University of Toronto) Carlos III, Madrid

More information

Can everyone benefit from innovation?

Can everyone benefit from innovation? Can everyone benefit from innovation? Christopher P. Chambers and Takashi Hayashi June 16, 2017 Abstract We study a resource allocation problem with variable technologies, and ask if there is an allocation

More information

Ph.D. Preliminary Examination MICROECONOMIC THEORY Applied Economics Graduate Program June 2016

Ph.D. Preliminary Examination MICROECONOMIC THEORY Applied Economics Graduate Program June 2016 Ph.D. Preliminary Examination MICROECONOMIC THEORY Applied Economics Graduate Program June 2016 The time limit for this exam is four hours. The exam has four sections. Each section includes two questions.

More information

Econ 673: Microeconometrics

Econ 673: Microeconometrics Econ 673: Microeconometrics Chapter 2: Simulation Tools for Estimation and Inference Fall 2008 Herriges (ISU) Chapter 1: Simulation Tools Fall 2008 1 / 63 Outline 1 The Role of Simulation in Estimation

More information

CONSUMPTION-SAVINGS DECISIONS WITH QUASI-GEOMETRIC DISCOUNTING. By Per Krusell and Anthony A. Smith, Jr introduction

CONSUMPTION-SAVINGS DECISIONS WITH QUASI-GEOMETRIC DISCOUNTING. By Per Krusell and Anthony A. Smith, Jr introduction Econometrica, Vol. 71, No. 1 (January, 2003), 365 375 CONSUMPTION-SAVINGS DECISIONS WITH QUASI-GEOMETRIC DISCOUNTING By Per Krusell and Anthony A. Smith, Jr. 1 1 introduction The purpose of this paper

More information

Introduction. Chapter 1

Introduction. Chapter 1 Chapter 1 Introduction In this book we will be concerned with supervised learning, which is the problem of learning input-output mappings from empirical data (the training dataset). Depending on the characteristics

More information

MKTG 555: Marketing Models

MKTG 555: Marketing Models MKTG 555: Marketing Models Structural Models -- Overview Arvind Rangaswamy (Some Parts are adapted from a presentation by by Prof. Pranav Jindal) March 27, 2017 1 Overview Differences between structural

More information

Revisiting the Nested Fixed-Point Algorithm in BLP Random Coeffi cients Demand Estimation

Revisiting the Nested Fixed-Point Algorithm in BLP Random Coeffi cients Demand Estimation Revisiting the Nested Fixed-Point Algorithm in BLP Random Coeffi cients Demand Estimation Jinhyuk Lee Kyoungwon Seo September 9, 016 Abstract This paper examines the numerical properties of the nested

More information

Harold HOTELLING b. 29 September d. 26 December 1973

Harold HOTELLING b. 29 September d. 26 December 1973 Harold HOTELLING b. 29 September 1895 - d. 26 December 1973 Summary. A major developer of the foundations of statistics and an important contributor to mathematical economics, Hotelling introduced the

More information