Estimation under Ambiguity (Very preliminary)


Raffaella Giacomini,* Toru Kitagawa,† and Harald Uhlig‡

Abstract

To perform a Bayesian analysis for a set-identified model, two distinct approaches exist: standard Bayesian inference, which assumes a single prior for the non-identified parameters, and Bayesian inference for the identified set, which assumes full ambiguity (multiple priors) for the parameters within their identified set. Both of the prior inputs considered by these two extreme approaches can often be a poor representation of the researcher's prior knowledge in practice. This paper fills the large gap between the two approaches by proposing a framework of multiple-prior robust Bayes analysis that can simultaneously incorporate a probabilistic belief for the non-identified parameters and a misspecification concern about this belief. Our proposal introduces a benchmark prior representing the researcher's partially credible probabilistic belief for the non-identified parameters, and a set of priors formed in its Kullback-Leibler (KL) neighborhood, whose radius controls the degree of confidence put on the benchmark prior. We develop point estimation and interval estimation for the object of interest by minimizing the worst-case posterior risk over the resulting class of posteriors. We show that this minimax problem is analytically tractable and simple to solve numerically. We also derive analytical properties of the proposed robust Bayesian procedure in the limiting situations where the radius of the KL neighborhood and/or the sample size are large.

* University College London, Department of Economics/Cemmap. r.giacomini@ucl.ac.uk
† University College London, Department of Economics/Cemmap. t.kitagawa@ucl.ac.uk
‡ University of Chicago, Department of Economics. huhlig@uchicago.edu

1 Introduction

Consider a parametric model with parameter vectors (θ, η), where the likelihood is given by l(x|θ) and a sample is denoted by X = x. We consider a partially identified model (e.g., Poirier (1998), Moon and Schorfheide (2012)) where θ ∈ Θ denotes the reduced-form parameters and η denotes the auxiliary parameters that are non-identified under a given set of identifying assumptions but are necessary to pin down the value of a (scalar) object of interest y = y(θ, η) ∈ R. By the definition of reduced-form parameters, the value of the likelihood depends only on θ for every realization of X, equivalent to saying X ⊥ η | θ. The domain of η, on the other hand, is constrained by the imposed identifying assumptions and it depends on θ; we refer to it as the identified set of η, denoted IS_η(θ). The identified set of y is accordingly defined as the range of y(θ, η) when η varies over IS_η(θ),

    IS_y(θ) ≡ {y(θ, η) : η ∈ IS_η(θ)},  (1)

which can be viewed as a set-valued map from Θ to R. We will focus on several leading examples of this set-up in the paper and use them to illustrate our methods.

Example 1.1 (Supply and demand) Suppose the object of interest y is a structural parameter in a system of simultaneous equations. For example, consider a static version of the model of labor supply and demand analyzed by Baumeister and Hamilton (2015):

    A x_t = u_t,  (2)

where x_t = (Δw_t, Δn_t)', with Δw_t and Δn_t the growth rates of wages and employment, respectively,

    A = [ −α^s  1
          −α^d  1 ],

with α^s ≥ 0 the short-run wage elasticity of supply and α^d ≤ 0 the short-run wage elasticity of demand, and u_t shocks assumed to be i.i.d. N(0, D) with D = diag(d_1, d_2). The reduced-form representation of the model is

    x_t = A^{-1} u_t ≡ ε_t,  (3)

with E(ε_t ε_t') = Ω = A^{-1} D (A^{-1})'. The reduced-form parameters are θ = (ω_11, ω_12, ω_22)', with ω_ij the (i, j)-th element of Ω. Let α^s be the parameter of interest. The full vector of structural parameters is (α^s, α^d, d_1, d_2)', which can be reparametrized to (α^s, ω_11, ω_12, ω_22).
Accordingly, in our notation, η can be set to α^s, and the object of interest is y = η = α^s itself. The identified set of α^s when ω_12 > 0 can be obtained as (see, e.g., Baumeister and Hamilton (2015)):

    IS_η(θ) = {α^s : ω_12/ω_11 ≤ α^s ≤ ω_22/ω_12}.  (4)

See Section 6.1 below for the transformation. If α^d is the parameter of interest, an alternative reparametrization allows us to transform the structural parameters into (α^d, ω_11, ω_12, ω_22).
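As a quick numerical sanity check on the bounds in (4) (not part of the paper's exposition), one can generate Ω from hypothetical structural parameters satisfying the sign restrictions and verify that α^s lies between ω_12/ω_11 and ω_22/ω_12. The structural matrix A below follows the parametrization A = [[−α^s, 1], [−α^d, 1]] as reconstructed here, and all parameter values are made up:

```python
import numpy as np

# Hypothetical structural parameters satisfying the sign restrictions:
# supply elasticity alpha_s >= 0, demand elasticity alpha_d <= 0.
alpha_s, alpha_d = 0.8, -0.5
d1, d2 = 1.0, 2.0

A = np.array([[-alpha_s, 1.0], [-alpha_d, 1.0]])  # A x_t = u_t
D = np.diag([d1, d2])
A_inv = np.linalg.inv(A)
Omega = A_inv @ D @ A_inv.T                       # reduced-form covariance

w11, w12, w22 = Omega[0, 0], Omega[0, 1], Omega[1, 1]
lo, hi = w12 / w11, w22 / w12                     # bounds in equation (4)
print(lo, alpha_s, hi)                            # lo <= alpha_s <= hi
```

The lower bound ω_12/ω_11 is a weighted average of α^d and α^s and hence below α^s, while α^s ≤ ω_22/ω_12 follows from α^d ≤ 0, consistent with (4).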

3 Example.2 (Impulse response analysis) Suppose the object of interest is an impulseresponse in a general partially identi ed structural vector autoregression (SVAR) for a zero mean vector x t : px A 0 x t = A j x t j + u t ; (5) j= where u t is i:i:d:n(0; I); with I the identity matrix. The reduced form VAR representation is px x t = B j x t j= + " t, " t N (0; ); The reduced form parameters are = (vec(b ) 0 ; : : : ; vec(b p ) 0 ; w ; w 2 ; w 22 ) 0 2, with restricted to the set of such that the reduced form VAR can be inverted into a V MA() model: X x t = C j " t j : (6) j=0 The non-identi ed parameter is = (vec(q) 0 ) 0 ; where Q is the orthonormal rotation matrix that transforms the reduced form residuals into structural shocks (i.e., u t = Q 0 tr " t; where tr is the Cholesky factor from the factorization = tr 0 tr). The object of interest is the (i; j) th impulse response at horizon h, which captures the e ect on the i-th variable in x t+h of a unit shock to the j-th element of u t and is given by y = e 0 i C h tr Qe j ; with e j the j th column of the identity matrix. The identi ed set of the (i; j) th impulse response in the absence of any identifying restrictions is where O is the space of orthonormal matrices. IS y () = fy = e 0 ic h tr Qe j : Q 2 Og; (7) Example.3 (Entry game) As a microeconometric application, consider the two-player entry game in Bresnahan and Reiss (99) used as the illustrating example in Moon and Schorfheide (202). Let M ij = j + ij, j = ; 2; be the pro t of rm j if rm j is monopolistic in market i 2 f; : : : ; ng ; and D ij = j j + ij be rm j s pro t if the competing rm also enters the market i (duopolistic). The ij s capture unobservable (to the econometrician) pro t components of rm j in market i and they are known to the players, and we assume ( i ; i2 ) N (0; I 2 ). We restrict our analysis to the pure strategy Nash equilibrium, and assume that the game is strategic substitute, ; 2 0. 
The data consist of i.i.d. observations on the entry decisions of the two firms. The non-redundant set of reduced-form parameters is θ = (ψ_11, ψ_00, ψ_10), the probabilities of observing a duopoly, no entry, or the entry of firm 1 only. This game has multiple equilibria depending on (ε_i1, ε_i2): the monopoly of firm 1 and the monopoly of firm 2 are both pure strategy Nash equilibria if ε_i1 ∈ [−β_1, −β_1 + γ_1] and ε_i2 ∈ [−β_2, −β_2 + γ_2]. Let s ∈ [0, 1] be a parameter for an equilibrium selection rule representing

the probability that the monopoly of firm 1 is selected given a realization of (ε_i1, ε_i2) leading to multiplicity of equilibria. Let the parameter of interest be y = γ_1, the substitution effect on firm 1 from firm 2's entry. The vector of full structural parameters augmented by the equilibrium selection parameter is (β_1, γ_1, β_2, γ_2, s), and it can be reparametrized into (γ_1, s, ψ_11, ψ_00, ψ_10).² Hence, in our notation, η can be set to η = (γ_1, s) and y = γ_1. The identified set for η does not have a convenient closed form, but it can be expressed implicitly as

    IS_η(θ) = { (γ_1, s) ∈ [0, ∞) × [0, 1] : min_{(β_1, β_2) ∈ R², γ_2 ≥ 0} ‖ψ(β_1, γ_1, β_2, γ_2, s) − θ‖ = 0 },  (8)

where ψ(·) is the map from the structural parameters (β_1, γ_1, β_2, γ_2, s) to the reduced-form parameters. Projecting IS_η(θ) onto the γ_1-coordinate gives the identified set for γ_1.

Generally, the identified set only collects all the admissible values of y given dogmatically imposed identifying assumptions and knowledge of the distribution of the observables (the reduced-form parameters). In some contexts, however, it may not be the case that the identifying assumptions imposed dogmatically exhaust all the available information that the researcher has. A more common situation is that the researcher has some form of additional but only partially credible assumptions about some underlying structural parameters or about the non-identified parameter η, based on economic theory, background knowledge of the problem, or empirical studies that use different data. From the standard Bayesian viewpoint, the recommendation is to incorporate this information into the analysis through specifying a prior distribution of (θ, η) or of the full structural parameters.
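To make the map ψ(·) in (8) concrete, the following Monte Carlo sketch (not from the paper) computes the reduced-form probabilities from structural parameters, with the equilibrium regions derived from the profit functions M_ij and D_ij above; all parameter values are hypothetical:

```python
import numpy as np

def psi(beta1, gamma1, beta2, gamma2, s, n_draws=1_000_000, seed=0):
    """Monte Carlo sketch of the map psi(.) in (8): structural parameters
    -> (psi_11, psi_00, psi_10). Firm j's monopoly profit is beta_j + eps_j
    and duopoly profit is beta_j - gamma_j + eps_j, eps ~ N(0, I_2)."""
    rng = np.random.default_rng(seed)
    e1, e2 = rng.standard_normal((2, n_draws))
    duopoly = (e1 >= gamma1 - beta1) & (e2 >= gamma2 - beta2)   # (1,1) is NE
    no_entry = (e1 <= -beta1) & (e2 <= -beta2)                  # (0,0) is NE
    firm1 = (e1 >= -beta1) & (e2 <= gamma2 - beta2)             # (1,0) is NE
    multiple = ((e1 >= -beta1) & (e1 <= gamma1 - beta1) &
                (e2 >= -beta2) & (e2 <= gamma2 - beta2))        # both monopolies NE
    # In the multiplicity region, firm 1's monopoly is selected with prob. s
    psi_10 = firm1.mean() - (1 - s) * multiple.mean()
    return duopoly.mean(), no_entry.mean(), psi_10

print(psi(0.5, 0.3, 0.5, 0.3, s=0.5))
```

When γ_1 = γ_2 = 0 the multiplicity region has measure zero and the four outcome probabilities reduce to the usual probit-style rectangle probabilities.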
For instance, in the case of Example 1.1, Baumeister and Hamilton (2015) propose a prior for the elasticity of supply that draws on estimates obtained in microeconometric studies, and consider a Student's t density calibrated to assign 90% probability to the interval α^s ∈ (0.1, 2.2). Another example considered by Baumeister and Hamilton (2015) is a prior that incorporates long-run identifying restrictions in SVARs in a non-dogmatic way, as a way to capture the uncertainty one might have about the validity of this popular but controversial type of identifying restriction. In situations where additional informative prior information other than the identifying restrictions is not available, some of the Bayesian literature has recommended the use of the uniform prior as a representation of indifference among the η's within the identified set. For example, in SVARs subject to sign restrictions (Uhlig (2005)), it is common to use the uniform distribution (the Haar measure) over the set of orthonormal matrices in (7) that satisfy the sign restrictions. For the entry game in Example 1.3, one of the prior specifications considered in Moon and Schorfheide (2012) is the uniform prior over the identified set of η.

At the opposite end of the standard Bayesian spectrum, Kitagawa (2012) and Giacomini and Kitagawa (2015) advocate adopting a multiple-prior Bayesian approach when one has no

² See Section 6.2 below for concrete expressions of the transformation.

further information about η besides a set of exact restrictions that can be used to characterize the identified set. While maintaining a single prior for θ, this set of priors consists of any conditional prior for η given θ, π_{η|θ}, supported on the identified set IS_η(θ). Kitagawa (2012) and Giacomini and Kitagawa (2015) propose to conduct a posterior bound analysis based on the resulting class of posteriors, which leads to an estimator for IS_y(θ) and an associated "robustified" credible region that asymptotically converge to the true identified set, the object of interest of frequentist inference. Being implicit about the ambiguity inherent in partial identification analysis, one can also consider posterior inference for the identified set as in Moon and Schorfheide (2011), Kline and Tamer (2013), and Liao and Simoni (2013), to obtain a similar asymptotic equivalence between posterior inference and frequentist inference.

The motivation for the methods that we propose in this paper is the observation that both types of prior inputs considered by the two extreme approaches discussed above - a precise specification of π_{η|θ} or full ambiguity about π_{η|θ} - could be a poor representation of the belief that the researcher actually possesses in a given application. For example, the Student's t prior specified by Baumeister and Hamilton (2015) in Example 1.1 builds on the plausible values of α^s found in microeconometric studies, but such prior evidence may not be sufficient for the researcher to be confident in the particular shape of the prior. At the same time, the researcher may not want to entirely discard such available prior evidence for α^s and take the fully ambiguous approach. Further, a researcher who is indifferent over values of η within its identified set may be concerned about the fact that even a uniform prior on IS_η(θ) can induce an unintentionally informative prior for y or other parameters.
Full ambiguity for π_{η|θ} may also not be appealing if, for instance, a prior that is degenerate at an extreme value in IS_η(θ) appears less sensible than a non-degenerate prior that supports any η in the identified set. The existing approaches to inference in partially identified models lack a formal and convenient framework that enables one to incorporate whatever "vague" prior knowledge for the non-identified parameters the researcher possesses and is willing to exploit.

The main contribution of this paper is to fill the large gap between the single-prior Bayesian approach and the fully ambiguous multiple-prior Bayesian approach by proposing a method that can simultaneously incorporate a probabilistic belief for the non-identified parameters and a misspecification concern about this belief in a unified manner. Our idea is to replace the fully ambiguous beliefs for π_{η|θ} considered in Kitagawa (2012) and Giacomini and Kitagawa (2015) by a class of priors defined in a neighborhood of a benchmark prior. The benchmark prior π*_{η|θ} represents the researcher's reasonable but partially credible prior knowledge about η, and the class of priors formed around the benchmark prior captures ambiguity or misspecification concerns about the benchmark prior. The radius of the neighborhood, prespecified by the researcher, controls the degree of confidence put on the benchmark prior. We then propose point estimation and interval estimation for the object of interest y by minimizing the worst-case (minimax) posterior risk over the priors constrained

to this neighborhood. Building on the robust control theory pioneered in operations research (Petersen, James, and Dupuis (2000)) and macroeconomics (Hansen and Sargent (2001)), we solve this constrained minimax problem via the unconstrained multiplier minimax formulation.

Our paper makes the following unique contributions: (1) we clarify that estimation of a partially identified parameter under vague prior knowledge can be formulated as a decision under ambiguity in the form considered in Hansen and Sargent (2001); (2) we provide an analytically tractable and numerically convenient way to solve the minimax estimation problem in general cases; (3) we give simple analytical solutions for the special cases of a quadratic and a check loss function and for the limit case in which the shape of the benchmark prior is irrelevant; (4) we derive the properties of our method in large samples.

The remainder of the paper is organized as follows. In Section 2, we introduce the analytical framework and formulate the statistical decision problem with the multiple priors localized around the benchmark prior. Section 3 solves the multiplier minimax problem with a general loss function. With the quadratic and check loss functions, Section 4 analyzes point and interval estimation of the parameter of interest. Section 4 also considers two types of limiting situations: (1) the radius of the set of priors diverges to infinity (fully ambiguous beliefs) and (2) the sample size goes to infinity. Section 5 discusses how to elicit the benchmark prior and how to set the tuning parameter that governs the size of the prior class. In Section 6, we provide one empirical and one numerical example.

2 Estimation as Statistical Decision under Ambiguity

The starting point of the analysis is to express a joint prior of (θ, η) as π_{η|θ} × π_θ, where π_{η|θ} is a conditional prior probability measure of the non-identified parameter η given the reduced-form parameter θ, and π_θ is a marginal prior probability measure of θ.
Note that π_{η|θ} induces a conditional prior distribution of y given θ, π_{y|θ}. The set of identifying assumptions imposed characterizes IS_y(θ), and any prior for (θ, η) that satisfies the imposed identifying assumptions with probability one has the support of π_{y|θ} contained in the identified set IS_y(θ), i.e., π_{y|θ}(y ∈ IS_y(θ)) = 1 for all θ ∈ Θ. A sample X is always informative about θ, so that π_θ can be updated by the data to obtain a posterior π_{θ|X}, whereas the conditional prior π_{η|θ} (and hence π_{y|θ}) can never be updated by the data, and the posterior inference for y remains sensitive to the choice of conditional prior no matter how large the sample size is. Therefore, for a decision maker who is aware of these facts, misspecification of the unrevisable part of the prior, π_{y|θ}, becomes a major concern.

Suppose that the decision maker can form a benchmark prior π*_{η|θ} for the unrevisable part of the prior. This prior captures information about η that is available before the model is brought to the data (see Section 5 for discussions on how to elicit a benchmark prior). Note that, if one were to impose a sufficient number of restrictions to point-identify y, this would amount to specifying the benchmark prior of y as a point-mass measure that selects

one particular point from the identified set. With such a point-mass prior, the posterior of θ induces a single posterior of y. Under partial identification, on the other hand, π_{η|θ} needs to be specified in order to have a single posterior distribution for y.

We consider a set of priors (ambiguous beliefs) in a neighborhood of π*_{η|θ} - while maintaining a single prior of θ - and find the estimator of y that minimizes the worst-case posterior risk as the priors range over this neighborhood. Formally, define the Kullback-Leibler neighborhood of π*_{η|θ} with radius λ ∈ [0, ∞) as

    Π^λ_{η|θ} ≡ { π_{η|θ} : R(π_{η|θ} ‖ π*_{η|θ}) ≤ λ },  (9)

where R(π_{η|θ} ‖ π*_{η|θ}) ≥ 0 is the Kullback-Leibler distance (KL-distance) from π*_{η|θ} to π_{η|θ}, or equivalently the relative entropy of π_{η|θ} relative to π*_{η|θ}:

    R(π_{η|θ} ‖ π*_{η|θ}) = ∫_{IS_η(θ)} ln( dπ_{η|θ} / dπ*_{η|θ} ) dπ_{η|θ},

which is finite only if π_{η|θ} is absolutely continuous with respect to π*_{η|θ}; otherwise, we define R(π_{η|θ} ‖ π*_{η|θ}) = ∞, following the convention. As is well known in information theory, R(π_{η|θ} ‖ π*_{η|θ}) = 0 if and only if π_{η|θ} = π*_{η|θ} (see, e.g., Lemma 1.4.1 in Dupuis and Ellis (1997)). The main reason to define the neighborhood in terms of the KL-distance is its convexity in π_{η|θ}, which allows us to transform the constrained minimax problem in equation (13) below into the analytically more tractable unconstrained minimax problem in equation (14) below. A bigger λ corresponds to a larger Π^λ_{η|θ}, and, in the extreme case, Π^∞_{η|θ} ≡ lim_{λ→∞} Π^λ_{η|θ} contains any probability measure that is dominated by π*_{η|θ}, i.e., the benchmark prior becomes relevant only for determining the support of π_{η|θ} in the limiting situation λ → ∞. Note that Π^λ_{η|θ} is defined for conditional priors at each θ, so that the radius λ can differ over θ, although this is implicit in our notation. Indeed, in the multiplier minimax approach shown below, the implied set of priors for π_{η|θ} has a radius that depends on θ.
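As a concrete illustration of the ingredients of the neighborhood (9) (a toy discrete sketch, not from the paper), the relative entropy and its conventions can be coded directly:

```python
import numpy as np

def rel_entropy(pi, pi_star):
    """Relative entropy R(pi || pi_star) for discrete distributions on a
    common grid; infinite if pi puts mass where pi_star does not."""
    pi, pi_star = np.asarray(pi, float), np.asarray(pi_star, float)
    if np.any((pi > 0) & (pi_star == 0)):
        return np.inf                         # not absolutely continuous
    mask = pi > 0
    return float(np.sum(pi[mask] * np.log(pi[mask] / pi_star[mask])))

# Benchmark prior on a 3-point grid (hypothetical numbers)
pi_star = [0.5, 0.3, 0.2]
print(rel_entropy(pi_star, pi_star))            # 0: R = 0 iff pi = pi_star
print(rel_entropy([0.6, 0.3, 0.1], pi_star))    # small positive radius
print(rel_entropy([0.0, 0.0, 1.0], [0.5, 0.5, 0.0]))  # inf: not dominated
```

The KL ball {π : R(π ‖ π*) ≤ λ} around `pi_star` then grows with λ, and in the λ → ∞ limit contains every distribution supported where `pi_star` is positive.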
It is also important to note that, other than through the benchmark prior, the class of priors is not subject to any constraint that restricts the dependence of π_{η|θ} on θ, i.e., fixing π_{η|θ} ∈ Π^λ_{η|θ} at one value of θ does not restrict the feasible priors in Π^λ_{η|θ} at the remaining values of θ.

We consider a point estimation problem where δ(X) is a scalar statistical decision function that maps the data X to a space of actions, and h(δ(X), y) is a loss function, such as the quadratic loss

    h(δ(X), y) = (δ(X) − y)²,  (10)

or the check loss for the τ-th quantile

    h(δ(X), y) = ρ_τ(y − δ(X)),  (11)
    ρ_τ(u) = τ u · 1{u > 0} − (1 − τ) u · 1{u < 0}.

Given a conditional prior π_{η|θ} and the single posterior for θ, the posterior risk is

    ∫_Θ [ ∫_{IS_η(θ)} h(δ(X), y(θ, η)) dπ_{η|θ} ] dπ_{θ|X}.  (12)

Provided that the decision maker faces ambiguous beliefs for π_{η|θ} in the form of the multiple priors Π^λ_{η|θ}, we assume that the decision maker wishes to make a robust or conservative decision for y by minimizing the worst-case posterior risk over Π^λ_{η|θ} given data X = x:

    Constrained Minimax:  min_{δ(x)} ∫_Θ [ max_{π_{η|θ} ∈ Π^λ_{η|θ}} ∫_{IS_η(θ)} h(δ(x), y(θ, η)) dπ_{η|θ} ] dπ_{θ|X}.  (13)

Instead of working with the constrained minimax problem above, we consider the analytically more convenient multiplier minimax problem: for κ > 0,

    Multiplier Minimax:  min_{δ(x)} ∫_Θ [ max_{π_{η|θ} ∈ Π^∞_{η|θ}} { ∫_{IS_η(θ)} h(δ(x), y(θ, η)) dπ_{η|θ} − κ R(π_{η|θ} ‖ π*_{η|θ}) } ] dπ_{θ|X}.  (14)

The following well-known result from convex analysis³ shows the equivalence between the two minimax problems.

Lemma 2.1 Fix θ and δ(x). Assume π*_{η|θ} is nondegenerate and λ > 0. Let

    r̄_0 ≡ max_{π_{η|θ} ∈ Π^λ_{η|θ}} ∫_{IS_η(θ)} h(δ(x), y(θ, η)) dπ_{η|θ}.  (15)

If r̄_0 < ∞, then there exists κ ≥ 0 such that

    r̄_0 = max_{π_{η|θ} ∈ Π^∞_{η|θ}} { ∫_{IS_η(θ)} h(δ(x), y(θ, η)) dπ_{η|θ} − κ [ R(π_{η|θ} ‖ π*_{η|θ}) − λ ] }.  (16)

³ See Lemma 2.2 in Petersen, James, and Dupuis (2000). These authors refer to David Luenberger's 1969 book "Optimization by Vector Space Methods" for a proof. Theorem 28.2 in R. Tyrrell Rockafellar's 1970 book "Convex Analysis" shows the same claim.

Furthermore, if π⁰_{η|θ} ∈ Π^λ_{η|θ} is a maximizer in (15), then π⁰_{η|θ} also maximizes (16) and satisfies R(π⁰_{η|θ} ‖ π*_{η|θ}) = λ.

As is clear from the lemma, κ plays the role of the Lagrange multiplier in a constrained optimization problem, and thus it can be interpreted as the increase in the objective function associated with a relaxation of the constraint (a unit change in λ). Note that, since the constrained optimization problem depends on θ through π*_{η|θ} and IS_η(θ), the value of λ that equalizes the two optimizations may depend on θ if κ is a constant independent of θ. One way to justify our fixed-multiplier minimax analysis is therefore to think of the original constrained problem as having λ dependent on θ.⁴

3 Solving the Multiplier Minimax Problem

The multiplier minimax problem (14) has a convenient representation, as shown in the next theorem.

Theorem 3.1 Assume h(δ, y) is bounded on IS_y(θ), π_θ-a.s. at every δ. The multiplier minimax problem (14) is then equivalent to

    min_δ ∫_Θ r_κ(δ, θ) dπ_{θ|X},  (17)

where

    r_κ(δ, θ) ≡ κ ln ∫_{IS_η(θ)} exp{ h(δ, y(θ, η)) / κ } dπ*_{η|θ}.

Proof. See Appendix A.

Note that the statement of the theorem is valid for any sample size and any realization of X. The obtained representation significantly simplifies the analytical investigation and the computation of the minimax decision, and we make use of it in the following sections. We can easily approximate the integrals in (17) using Monte Carlo draws of (θ, η) sampled from the benchmark conditional prior π*_{η|θ} and the posterior π_{θ|X}. The minimization for δ(x) can

⁴ Note that in the extreme situations where λ = 0 or λ = ∞, the optimal decision in the constrained minimax problem can be replicated by the multiplier minimax decision with κ independent of θ. When λ = 0, the optimal decision in the constrained minimax problem reduces to the standard Bayes decision with a single posterior. This standard Bayes decision can be replicated by the multiplier minimax decision with κ = ∞, since if κ = ∞, the inner maximization in (14) always selects the benchmark prior.
When λ = ∞, the constrained minimax problem reduces to the unconstrained one, so that the multiplier minimax problem with κ = 0 coincides with it.

be performed by a grid search using the approximated objective function. Section 6 applies this idea to some common applications. Note also that, since π*_{η|θ} induces the benchmark conditional prior π*_{y|θ} of y given θ, the multiplier minimax problem can be written equivalently in terms of conditional priors for y:

    min_{δ(x)} ∫_Θ [ max_{π_{y|θ} ∈ Π^∞_{y|θ}} { ∫_{IS_y(θ)} h(δ(x), y) dπ_{y|θ} − κ R(π_{y|θ} ‖ π*_{y|θ}) } ] dπ_{θ|X}.

Another advantage of expressing the multiplier minimax problem as in Theorem 3.1 is that it simplifies the investigation of the behavior of the optimal decision in large samples. Let n denote the sample size and θ_0 ∈ Θ be the value of θ that generated the data (the true value of θ). To establish asymptotic convergence of the minimax optimal decision, we impose the following set of regularity assumptions.

Assumption 3.2 (i) The posterior of θ is consistent for θ_0 in the sense that, for any open neighborhood G of θ_0, π_{θ|X}(G) → 1 as n → ∞ for almost every sampling sequence.
(ii) D (the action space of δ), Y (the parameter space of y), and Θ (the parameter space of θ) are compact.
(iii) The loss function h(δ, y) is non-negative, bounded, and continuous in δ at every (δ, y) ∈ D × Y.
(iv) IS_y(θ) has a nonempty interior π_θ-a.s. and IS_y(θ_0) has a nonempty interior. The benchmark prior marginalized to y, π*_{y|θ}, is absolutely continuous with respect to the Lebesgue measure, and its density dπ*_{y|θ}/dy is differentiable with bounded derivatives at almost every y ∈ IS_y(θ), π_θ-a.s.
(v) r_κ(δ, θ_0) ≡ κ ln ∫_{IS_η(θ_0)} exp{ h(δ, y(θ_0, η)) / κ } dπ*_{η|θ_0} has a unique minimizer in δ.

Assumption 3.2 (i) assumes that the posterior of θ is well behaved and that the true θ_0 can be estimated consistently in the Bayesian sense. The posterior consistency of θ can be ensured by imposing higher-level assumptions on the likelihood of θ; we do not present them here for brevity (see, e.g., Section 7.4 of Schervish (1995) for details about posterior consistency). Assumption 3.2 (iv) rules out point-identified models and assumes that the identified set has almost surely positive length.⁵

Under these regularity assumptions, we obtain the following asymptotic result.
⁵ When the benchmark prior π*_{y|θ} is a probability mass measure selecting a point from IS_y(θ) for every θ (i.e., the benchmark prior is an additional restriction that makes the model point-identified), the optimal δ(x) is given by the Bayes action with respect to the single posterior of y induced by such a benchmark prior, irrespective of the value of κ. This implies that robust estimation via the multiplier minimax approach is not effective if the benchmark prior is chosen based on a point-identifying restriction.
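As an aside on the representation in Theorem 3.1: for a fixed θ, the inner maximization in (14) is solved by exponentially tilting the benchmark prior, π ∝ π* exp(h/κ), and the attained value is exactly r_κ(δ, θ). This can be checked in a discrete toy example (all numbers below are made up):

```python
import numpy as np

# Discrete check of the identity behind Theorem 3.1:
#   max_pi { E_pi[h] - kappa * R(pi || pi*) } = kappa * ln E_pi*[exp(h/kappa)],
# attained by the exponentially tilted prior pi ∝ pi* exp(h/kappa).
kappa = 0.7
h = np.array([0.2, 1.0, 0.4, 0.8])           # losses h(delta, y(theta, eta_i))
pi_star = np.array([0.4, 0.1, 0.3, 0.2])     # benchmark prior pi*_{eta|theta}

rhs = kappa * np.log(np.sum(pi_star * np.exp(h / kappa)))  # r_kappa(delta, theta)

tilt = pi_star * np.exp(h / kappa)
pi_worst = tilt / tilt.sum()                 # worst-case (tilted) prior
kl = np.sum(pi_worst * np.log(pi_worst / pi_star))
lhs = np.sum(pi_worst * h) - kappa * kl      # penalized objective at pi_worst
print(abs(lhs - rhs) < 1e-12)                # the identity holds

pi_alt = np.full(4, 0.25)                    # any other prior does no better
alt = np.sum(pi_alt * h) - kappa * np.sum(pi_alt * np.log(pi_alt / pi_star))
```

The inequality `alt <= rhs` is the discrete analogue of the variational (Donsker-Varadhan) bound underlying the equivalence.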

Theorem 3.3 (i) Let δ̂_κ ∈ arg min_{δ ∈ D} ∫_Θ r_κ(δ, θ) dπ_{θ|X}. Under Assumption 3.2,

    δ̂_κ → δ_κ(θ_0) ≡ arg min_{δ ∈ D} r_κ(δ, θ_0)

as n → ∞ for almost every sampling sequence.
(ii) Furthermore, for any θ̂ such that θ̂ − θ_0 →_p 0 as n → ∞, δ̂_κ ∈ arg min_{δ ∈ D} r_κ(δ, θ̂) converges in probability to δ_κ(θ_0) as n → ∞.

Proof. See Appendix A.

This theorem shows that the finite-sample optimal minimax decision has a well-defined large-sample limit, which coincides with the optimal decision under knowledge of the true value of θ. The theorem has a useful practical implication: when the sample size is moderate to large, so that the posterior distribution of θ is concentrated around its maximum likelihood estimator (MLE) θ̂_ML, one can well approximate the exact finite-sample minimax decision by minimizing the "plug-in" objective function, where the averaging with respect to the posterior of θ in (17) is replaced by plugging θ̂_ML into r_κ(δ, θ). This reduces the computational cost of approximating the objective function, since all we need in this case are MCMC draws of η (or y) from π*_{η|θ̂_ML} (or π*_{y|θ̂_ML}).

4 Multiplier Minimax Estimation with Specific Loss Functions

This section presents further analytical results on the multiplier minimax decision problem for two common choices of loss function. In particular, we focus on the limiting situation κ → 0, i.e., the case where the decision maker faces extreme ambiguity. When κ → 0, the choice of the benchmark conditional prior π*_{y|θ} affects the optimal decision only through the support of the prior. We therefore impose the following regularity assumptions concerning the tail behavior of the benchmark conditional prior.

Assumption 4.1 (i) IS_y(θ) has a nonempty interior π_θ-a.s., and the benchmark prior marginalized to y, π*_{y|θ}, is absolutely continuous with respect to the Lebesgue measure, π_θ-a.s.
(ii) Let [y_*(θ), y^*(θ)] be the convex hull of { y : dπ*_{y|θ}/dy (y) > 0 }, and let conv(IS_y(θ)) be the convex hull of IS_y(θ). Assume [y_*(θ), y^*(θ)] is a bounded interval, π_θ-a.s.
(iii) There exist ε > 0, γ > 0, and b > 0 such that [y_*(θ), y_*(θ) + ε) ⊂ IS_y(θ) and (y^*(θ) − ε, y^*(θ)] ⊂ IS_y(θ) hold, and the tails of π*_{y|θ} near the boundary of the support satisfy

    dπ*_{y|θ}/dy (y) ≥ b (y − y_*(θ))^γ for all y ∈ [y_*(θ), y_*(θ) + ε), and
    dπ*_{y|θ}/dy (y) ≥ b (y^*(θ) − y)^γ for all y ∈ (y^*(θ) − ε, y^*(θ)],

π_θ-a.s.

(iv) Let θ_0 be the true value of the reduced-form parameters. Assume y_*(θ) and y^*(θ) are continuous in θ at θ = θ_0.

Assumption 4.1 (i) rules out point-identified models as does Assumption 3.2 (iv), though the current assumption is slightly weaker. Assumption 4.1 (ii) assumes that the benchmark conditional prior has bounded support, which automatically holds if the identified set IS_y(θ) is bounded. In particular, if the benchmark conditional prior supports the entire identified set, [y_*(θ), y^*(θ)] = conv(IS_y(θ)) holds. Assumption 4.1 (iii) restricts the behavior of the benchmark conditional prior locally around the boundaries of its support: it requires the density of the benchmark conditional prior to be bounded from below by a polynomial function of degree γ > 0 in a neighborhood of the support boundaries. When the density of the benchmark conditional prior is strictly positive at y_*(θ) and y^*(θ), the polynomial lower bound condition clearly holds. Assumption 4.1 (iv) implies that the support of the benchmark conditional prior varies continuously in θ. Assumption 4.1 is, for example, satisfied by the Student's t prior considered by Baumeister and Hamilton (2015), provided its support is bounded.

The next two theorems characterize the asymptotic behavior of the multiplier minimax decisions for the quadratic loss and the check loss. Theorem 4.2 concerns the limiting situation κ → 0 with a fixed sample size. Theorem 4.3 concerns the large-sample asymptotics with κ → 0.

Theorem 4.2 Suppose Assumption 4.1 (i) - (iii) hold.
(i) When h(δ, y) = (δ − y)²,

    lim_{κ→0} ∫_Θ r_κ(δ, θ) dπ_{θ|X} = ∫_Θ [ (δ − y_*(θ))² ∨ (δ − y^*(θ))² ] dπ_{θ|X}

holds whenever the right-hand side integral is finite.
(ii) When h(δ, y) = ρ_τ(y − δ),

    lim_{κ→0} ∫_Θ r_κ(δ, θ) dπ_{θ|X} = ∫_Θ [ (1 − τ)(δ − y_*(θ)) ∨ τ(y^*(θ) − δ) ] dπ_{θ|X}

holds whenever the right-hand side integral is finite.

Proof. See Appendix A.

Theorem 4.3 Suppose Assumption 3.2 (i)-(ii) and Assumption 4.1 hold. Let

    δ̂_0 = arg min_{δ ∈ D} lim_{κ→0} ∫_Θ r_κ(δ, θ) dπ_{θ|X}

be the multiplier minimax estimator in the limiting case κ → 0.
(i) When h(δ, y) = (δ − y)²,

    δ̂_0 → (y_*(θ_0) + y^*(θ_0)) / 2

as the sample size n → ∞ for almost every sampling sequence.
(ii) When h(δ, y) = ρ_τ(y − δ), ρ_τ(u) = τ u · 1{u > 0} − (1 − τ) u · 1{u < 0},

    δ̂_0 → (1 − τ) y_*(θ_0) + τ y^*(θ_0)

as the sample size n → ∞ for almost every sampling sequence.

Proof. See Appendix A.

Theorem 4.2 shows that in the most ambiguous situation, κ → 0, only the convex hull of the support of the benchmark prior, [y_*(θ), y^*(θ)], matters for the optimal minimax decision as long as the tail condition of Assumption 4.1 holds; the shape of π*_{y|θ} is irrelevant for the minimax decision. This result is intuitive, since a smaller κ implies a larger class of priors, and in the limit κ → 0, any prior that shares its support with the benchmark prior is included in the prior class. Theorem 4.3 (i) shows that in the large-sample situation, the minimax decision with the quadratic loss converges to the midpoint between the boundary points of the support of the benchmark prior evaluated at the true reduced-form parameters. When the benchmark prior supports the entire identified set, this means that the minimax decision in the limit is to report the central point of the true identified set. When the loss is the check function associated with the τ-th quantile, the minimax decision in the limit is given by the convex combination of the same boundary points with weights (1 − τ) and τ. One useful implication of this result is that, in the case of the check loss, solving for the optimal δ can be seen as obtaining a robustified posterior τ-th quantile of y, and the optimal δ may be used to construct a robustified interval estimate for y that explicitly incorporates the ambiguous beliefs about the benchmark prior. An implication of Theorem 4.3 is that, in the case of the quantile check loss, the optimal estimator δ̂_0(τ) always lies in the true identified set for any τ, even in the most conservative case, κ → 0.
This means that, if we use [δ̂_0(0.05), δ̂_0(0.95)] as a robustified posterior credibility interval for y, this interval estimate will be asymptotically strictly narrower than the frequentist confidence interval for y, as [δ̂_0(0.05), δ̂_0(0.95)] is contained in the true identified set asymptotically. This result is similar to the finding in Moon and Schorfheide (2012) for the single-posterior Bayesian credible interval.

The asymptotic results of Theorems 4.2 and 4.3 assume that the benchmark prior is absolutely continuous with respect to the Lebesgue measure. We can instead consider a setting where the benchmark prior is given by a nondegenerate probability mass measure, which can naturally arise if the benchmark prior comes from a weighted combination of multiple point-identified models. This case leads to asymptotic results similar to Theorem 4.3. We present the formal analysis for this discrete benchmark prior setting in Appendix B.
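The closed-form limits in Theorem 4.3 can be cross-checked numerically by minimizing the limiting objectives of Theorem 4.2 directly, for a posterior that is a point mass at θ_0 so that [y_*, y^*] is a single known interval; the endpoint values below are made up:

```python
import numpy as np

# Numerical check of the kappa -> 0 limits in Theorems 4.2-4.3 when the
# posterior of theta is a point mass, so [y_*, y^*] is a known interval.
y_lo, y_hi, tau = -0.3, 1.7, 0.95
grid = np.linspace(y_lo - 1, y_hi + 1, 200001)

# Quadratic loss: minimize max{(d - y_lo)^2, (d - y_hi)^2}  ->  midpoint
quad = np.maximum((grid - y_lo) ** 2, (grid - y_hi) ** 2)
d_quad = grid[np.argmin(quad)]

# Check loss: minimize max{(1 - tau)(d - y_lo), tau (y_hi - d)}
#   ->  (1 - tau) y_lo + tau y_hi
check = np.maximum((1 - tau) * (grid - y_lo), tau * (y_hi - grid))
d_check = grid[np.argmin(check)]

print(d_quad, (y_lo + y_hi) / 2)                 # both ~0.7
print(d_check, (1 - tau) * y_lo + tau * y_hi)    # both ~1.6
```

The check-loss minimizer sits where the increasing line (1 − τ)(δ − y_*) crosses the decreasing line τ(y^* − δ), which gives exactly the convex combination in Theorem 4.3 (ii).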

5 Eliciting the Benchmark Prior and κ

To implement our robust estimation and inference procedures, the key inputs that the researcher has to specify are the benchmark conditional prior π*_{η|θ} and the parameter κ that determines the degree of robustness. This section presents some practical recommendations on how to choose them.

5.1 Benchmark Prior

5.2 Robustness Parameter

6 Examples

We now illustrate the practical implementation of our method in two of the examples that we discussed in the introduction. Importantly, we show how in SVARs it is possible to incorporate prior information on non-identified parameters that is expressed as unconditional priors, which acknowledges the fact that it may not always be easy to specify a prior that is conditional on the reduced-form parameters.

6.1 Demand and Supply

Consider again the 2-variable SVAR(0) of Example 1.1, where the object of interest is the elasticity of supply α^s. Suppose that the benchmark prior is specified by an unconditional prior distribution for the full structural parameter vector η̃ = (α^s, α^d, d_1, d_2), whose probability density is denoted by

    π̃(α^s, α^d, d_1, d_2).  (18)

If the sign restrictions α^s ≥ 0, α^d ≤ 0 are imposed through the support of π̃(·), and π̃(·) is specified as in Baumeister and Hamilton (2015), MCMC draws of η̃ from its posterior can be obtained easily from the sampling algorithm presented in Baumeister and Hamilton (2015).

Consider solving the multiplier minimax problem (17) with a fixed sample size and κ > 0. In order to solve the problem, we have to work out the benchmark conditional prior of α^s given θ and the posterior of θ induced by the prior of η̃ - or, at least, we have to be able to draw α^s's from the benchmark conditional prior and θ's from the posterior. The benchmark conditional prior of α^s given θ can be derived by reparametrizing η̃ in terms of (α^s, θ). Since Ω = A^{-1} D (A^{-1})', we have

    ω_11 = (d_1 + d_2) / (α^s − α^d)²,
    ω_12 = (α^d d_1 + α^s d_2) / (α^s − α^d)²,  (19)
    ω_22 = ((α^d)² d_1 + (α^s)² d_2) / (α^s − α^d)²,

which implies the following mapping between (α, β, d1, d2) and (α, φ) = (α, ω11, ω12, ω22):

α = α,
β = (α ω12 − ω22)/(α ω11 − ω12) ≡ β(α, φ),
d2 = α² ω11 − 2α ω12 + ω22 ≡ d2(α, φ), (20)
d1 = (ω11 ω22 − ω12²)/(α² ω11 − 2α ω12 + ω22) ≡ d1(α, φ).

Since, for given φ, the conditional prior of α is proportional to the joint prior of (α, φ), the benchmark conditional prior π*_{α|φ} satisfies

(dπ*_{α|φ}/dα)(α|φ) ∝ π̃(α, β(α, φ), d1(α, φ), d2(α, φ)) |det(J(α, φ))|, (21)

where J(α, φ) is the Jacobian of the mapping (20) and |det(·)| is the absolute value of the determinant. This benchmark conditional prior supports the entire identified set IS_α(φ) if π̃(·) supports every value of (α, β) satisfying the sign restrictions. An analytical expression for the posterior of φ could be obtained by integrating out α on the right-hand side of (21). Even if the analytical expression for the posterior of φ were not easy to derive, it would be easy to obtain posterior draws of φ by transforming the posterior draws of θ̃ according to Σ = A⁻¹ D A⁻¹′. We hereafter denote the posterior draws of φ by (φ1, ..., φM). An algorithm that approximates the objective function in (17) is as follows.

Algorithm 6.1 (λ > 0) Let posterior draws of φ, (φ1, ..., φM), be given.

1. For each m = 1, ..., M, approximate r_λ(δ, φ_m) = λ ln ∫_{IS_α(φ_m)} exp{h(δ, α)/λ} dπ*_{α|φ_m} by importance sampling; i.e., draw N draws of α, (α_{m1}, ..., α_{mN}), from a proposal distribution (probability density) π̃_{α|φ}(·|φ_m) (e.g., the uniform distribution on IS_α(φ_m)) and compute

r̂_λ(δ, φ_m) = λ ln [ Σ_{i=1}^N w(α_{mi}, φ_m) exp{h(δ, α_{mi})/λ} / Σ_{i=1}^N w(α_{mi}, φ_m) ],

where

w(α_{mi}, φ_m) = π̃(α_{mi}, β(α_{mi}, φ_m), d1(α_{mi}, φ_m), d2(α_{mi}, φ_m)) |det(J(α_{mi}, φ_m))| / π̃_{α|φ}(α_{mi}|φ_m).

2. We then approximate the objective function of the multiplier minimax problem by

(1/M) Σ_{m=1}^M r̂_λ(δ, φ_m), (22)

and minimize it with respect to δ.
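Two pieces of the construction above lend themselves to a quick numerical check: the inverse mapping (20) should undo the forward map from the structural parameters to Σ, and Step 1 of the algorithm is a self-normalized importance-sampling estimate of r_λ(δ, φ_m). The sketch below uses plain Python with made-up numbers, a toy stand-in benchmark density, and a uniform proposal on a fictitious identified set [0, 1]; none of these inputs come from the paper:

```python
import math, random

# --- Reparametrization: (alpha, beta, d1, d2) <-> (alpha, w11, w12, w22) ---

def sigma_from_structural(alpha, beta, d1, d2):
    # Forward map from the structural parameters to the covariance elements.
    den = (alpha - beta) ** 2
    return ((d1 + d2) / den,
            (alpha * d1 + beta * d2) / den,
            (alpha ** 2 * d1 + beta ** 2 * d2) / den)

def structural_from_sigma(alpha, w11, w12, w22):
    # Inverse map: recover (beta, d1, d2) from alpha and Sigma.
    beta = (alpha * w12 - w22) / (alpha * w11 - w12)
    d2 = alpha ** 2 * w11 - 2.0 * alpha * w12 + w22
    d1 = (w11 * w22 - w12 ** 2) / d2
    return beta, d1, d2

# Round trip at an illustrative point with alpha >= 0 >= beta.
alpha0, beta0, d10, d20 = 0.6, -0.4, 1.0, 2.0
w11, w12, w22 = sigma_from_structural(alpha0, beta0, d10, d20)
recovered = structural_from_sigma(alpha0, w11, w12, w22)
print(all(abs(a - b) < 1e-12 for a, b in zip(recovered, (beta0, d10, d20))))  # True

# --- Step 1: self-normalized importance sampling for r_lambda(delta, phi_m) ---

def r_hat(delta, lam, draws, weights, loss):
    num = sum(w * math.exp(loss(delta, a) / lam) for a, w in zip(draws, weights))
    return lam * math.log(num / sum(weights))

random.seed(1)
N, lam = 5000, 0.5
draws = [random.random() for _ in range(N)]   # uniform proposal on a toy IS = [0, 1]
weights = [math.exp(-a) for a in draws]       # toy benchmark density / proposal density
loss = lambda d, a: (d - a) ** 2              # quadratic loss h(delta, alpha)

# --- Step 2: minimize the approximated objective over a grid of deltas ---
grid = [i / 200.0 - 0.5 for i in range(401)]  # delta in [-0.5, 1.5]
delta_min = min(grid, key=lambda d: r_hat(d, lam, draws, weights, loss))
print(0.0 <= delta_min <= 1.0)  # True: the minimizer stays inside the toy identified set
```

In an actual application the weights would be the ratio of π̃(·)|det J| to the proposal density evaluated at each draw, computed exactly as in Step 1.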

If the limiting case λ → 0 is considered (either with a quadratic or a check loss), Lemma 4.2 implies that Step 1 of this algorithm can be skipped and we can directly approximate the objective function to be minimized in δ by

(1/M) Σ_{m=1}^M [ (δ − α_ℓ(φ_m))² ∨ (δ − α_u(φ_m))² ]

for the quadratic loss case, where [α_ℓ(φ_m), α_u(φ_m)] is the identified set of α. For the typical sample sizes in macroeconometric applications, it is simple to compute (22), and there will not be a significant computational gain from employing the asymptotic results. Nevertheless, if one is interested in the large-sample approximation, one can approximate the posterior of φ by a point mass at φ = φ̂_ML and replace the objective function (22) with r̂_λ(δ, φ̂_ML).

In Algorithm 6.1, we consider that the φ's are drawn from the posterior of φ induced by the prior of θ̃ specified in (18). If the prior of θ̃ implies an informative prior for φ, then in finite samples this can downplay the sample information for φ, in the sense that the shape of the posterior of φ does not represent well the shape of the likelihood for φ, due to the informativeness of the prior for φ. Since the motivation of our method is a concern that the prior for η may be misspecified, one may not want to impose the restrictions on φ implied by the prior for θ̃ but rather "let the data speak". These concerns might make the following hybrid approach attractive: draw φ's from the posterior of φ obtained from a non-informative prior for φ (e.g., Jeffreys' prior), and use the benchmark prior of θ̃ specified in (18) only for the purpose of constructing the benchmark conditional prior (21). Note that the uninformative prior for φ combined with the benchmark conditional prior of α implied by (21) will not coincide with the benchmark prior (18).

We conclude this section by noting that it is straightforward to include an intercept and lags in the static simultaneous equations model considered in this section. Consider a 2-variable SVAR with L lags,
A0 x_t = c + Σ_{l=1}^L A_l x_{t−l} + u_t,  u_t ~ N(0, D),

where A0 equals the matrix A of the static model in (2) and D is as defined above. The reduced-form VAR is

x_t = b + Σ_{l=1}^L B_l x_{t−l} + ε_t,

where b = A0⁻¹ c and B_l = A0⁻¹ A_l. The reduced-form parameters are φ = (Σ, B), B = (b, B1, ..., BL), and the full vector of structural parameters is θ̃ = (α, β, d1, d2, A), A = (c, A1, ..., AL). The mapping between (α, β, d1, d2, A) and (α, φ) consists of the relations shown in (20) together with

A = A0(α, β(α, φ)) B ≡ A(α, φ), (23)

where A0(α, β) denotes the matrix A0 evaluated at the elasticities (α, β). Hence, if the benchmark prior is specified in terms of θ̃, the conditional benchmark prior of α given φ is given by

(dπ*_{α|φ}/dα)(α|φ) ∝ π̃(α, β(α, φ), d1(α, φ), d2(α, φ), A(α, φ)) |det(J(α, φ))|,

where π̃(α, β, d1, d2, A) is the benchmark prior of θ̃ and J(α, φ) is the Jacobian of the transformations (20) and (23). With this modification of the conditional benchmark prior, Algorithm 6.1 can be applied to solve the multiplier minimax problem for δ.

6.2 Game Theoretic Model

For the entry game considered in Example 1.3, the reduced-form parameters φ = (s11, s00, s10) relate to the full structural parameter θ̃ = (β1, Δ1, β2, Δ2, s) by

s11 = G(β1 − Δ1) G(β2 − Δ2), (24)
s00 = (1 − G(β1))(1 − G(β2)),
s10 = G(β1)[1 − G(β2)] + G(β1 − Δ1)[G(β2) − G(β2 − Δ2)] + s [G(β1) − G(β1 − Δ1)][G(β2) − G(β2 − Δ2)],

where G(·) is the cdf of the standard normal distribution. As a benchmark prior π̃(β1, Δ1, β2, Δ2, s), consider for example Priors 1 and 2 in Moon and Schorfheide (2012). Posterior draws of θ̃ can be obtained by the Metropolis-Hastings algorithm or a variant thereof; plugging them into (24) then yields the posterior draws of φ. The transformation (24) offers the following one-to-one reparametrization between θ̃ and (β1, Δ1, φ):

β1 = β1,
Δ1 = Δ1,
β2 = G⁻¹( 1 − s00/(1 − G(β1)) ) ≡ β2(β1, φ), (25)
Δ2 = β2(β1, φ) − G⁻¹( s11/G(β1 − Δ1) ) ≡ Δ2(β1, Δ1, φ),
s = { s10 − G(β1)[1 − G(β2)] − G(β1 − Δ1)[G(β2) − G(β2 − Δ2)] } / { [G(β1) − G(β1 − Δ1)][G(β2) − G(β2 − Δ2)] } ≡ s(β1, Δ1, φ),

where in the last expression β2 and Δ2 are evaluated at β2(β1, φ) and Δ2(β1, Δ1, φ).
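The reparametrization (25) can be sanity-checked against (24) numerically. The sketch below uses illustrative parameter values and a simple bisection as a stand-in for G⁻¹; it verifies that recovering (β2, Δ2, s) from (β1, Δ1, φ) round-trips:

```python
import math

def G(x):
    # Standard normal cdf via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def G_inv(p, lo=-10.0, hi=10.0):
    # Inverse cdf by bisection -- a crude stand-in, sufficient for this check.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if G(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def outcome_probs(b1, D1, b2, D2, s):
    # Forward map (24): entry-game outcome probabilities.
    s11 = G(b1 - D1) * G(b2 - D2)
    s00 = (1.0 - G(b1)) * (1.0 - G(b2))
    s10 = (G(b1) * (1.0 - G(b2))
           + G(b1 - D1) * (G(b2) - G(b2 - D2))
           + s * (G(b1) - G(b1 - D1)) * (G(b2) - G(b2 - D2)))
    return s11, s00, s10

def structural_from_phi(b1, D1, s11, s00, s10):
    # Inverse map (25): recover (beta2, Delta2, s) given (beta1, Delta1, phi).
    b2 = G_inv(1.0 - s00 / (1.0 - G(b1)))
    D2 = b2 - G_inv(s11 / G(b1 - D1))
    s = ((s10 - G(b1) * (1.0 - G(b2)) - G(b1 - D1) * (G(b2) - G(b2 - D2)))
         / ((G(b1) - G(b1 - D1)) * (G(b2) - G(b2 - D2))))
    return b2, D2, s

# Round trip at illustrative values (D1, D2 >= 0; selection probability s in [0, 1]).
b1, D1, b2, D2, s = 0.3, 0.8, -0.2, 0.5, 0.4
phi = outcome_probs(b1, D1, b2, D2, s)
recovered = structural_from_phi(b1, D1, *phi)
print(all(abs(a - b) < 1e-6 for a, b in zip(recovered, (b2, D2, s))))  # True
```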

As in the SVAR example above, the conditional benchmark prior for η = (β1, Δ1) given φ satisfies

π*_{η|φ}(β1, Δ1|φ) ∝ π̃(β1, Δ1, β2(β1, φ), Δ2(β1, Δ1, φ), s(β1, Δ1, φ)) |det(J(β1, Δ1, φ))|,

where J(β1, Δ1, φ) is the Jacobian of the transformation shown in (25). Solving for the multiplier minimax estimator for Δ1 follows steps similar to those in Algorithm 6.1, except for a slight change in Step 1. Now, in the importance-sampling step, given a draw φ_m of φ, we draw (β1, Δ1) jointly from a proposal distribution π̃_{η|φ}(β1, Δ1|φ_m) even though the object of interest is only Δ1. That is, to approximate r_λ(δ, φ_m) = λ ln ∫_{IS_η(φ_m)} exp{h(δ, Δ1)/λ} dπ*_{η|φ_m}, we draw N pairs (β1^(i), Δ1^(i)), i = 1, ..., N, from a proposal distribution π̃_{η|φ}(·|φ_m) (e.g., a diffuse bivariate normal truncated to Δ1 ≥ 0) and compute

r̂_λ(δ, φ_m) = λ ln [ Σ_{i=1}^N w(β1^(i), Δ1^(i), φ_m) exp{h(δ, Δ1^(i))/λ} / Σ_{i=1}^N w(β1^(i), Δ1^(i), φ_m) ],

where

w(β1, Δ1, φ) = π̃(β1, Δ1, β2(β1, φ), Δ2(β1, Δ1, φ), s(β1, Δ1, φ)) |det(J(β1, Δ1, φ))| / π̃_{η|φ}(β1, Δ1|φ).

7 Concluding Remarks

Appendix A Proofs

Proof of Theorem 3.1. Let λ and δ = δ(x) be fixed. We first consider the case where π*_{η|φ} is a discrete probability mass measure with m support points (η1, ..., ηm) in IS_η(φ). Since the KL distance R(π_{η|φ} ‖ π*_{η|φ}) is positive infinity unless π_{η|φ} is absolutely continuous with respect to π*_{η|φ}, we can restrict the search for the optimal π_{η|φ} to measures whose support points are constrained to (η1, ..., ηm). Accordingly, denote a discrete π_{η|φ} and the discrete loss by

g_i ≡ π_{η|φ}(η_i),  f_i ≡ π*_{η|φ}(η_i),  h_i ≡ h(δ, y(η_i, φ)),  for i = 1, ..., m. (26)

Then the inner maximization problem of (14) can be written as

max_{g1, ..., gm} Σ_{i=1}^m h_i g_i − λ Σ_{i=1}^m g_i ln(g_i/f_i), (27)
s.t. Σ_{i=1}^m g_i = 1.

With κ the Lagrange multiplier on the constraint, the first-order condition for g_i is

h_i + λ ln f_i − λ ln g_i − λ − κ = 0 (28)
⟺ g_i = f_i exp(h_i/λ) / exp(κ/λ + 1).

The constraint Σ_{j=1}^m g_j = 1 pins down exp(κ/λ + 1) = Σ_{j=1}^m f_j exp(h_j/λ), so the optimal g_i is

g_i* = f_i exp(h_i/λ) / Σ_{j=1}^m f_j exp(h_j/λ). (29)

Plugging this back into the objective function, we obtain

Σ_{i=1}^m h_i g_i* − λ Σ_{i=1}^m g_i* ln(g_i*/f_i) = λ ln ( Σ_{j=1}^m f_j exp(h_j/λ) ), (30)

which is equivalent to λ ln ∫_{IS_y(φ)} e^{h(δ(x), y)/λ} dπ*_{y|φ} with discrete π*_{y|φ}.

We now generalize the claim to arbitrary π*_{η|φ}. By analogy with the optimal g_i* obtained in (29), we guess that the π⁰_{η|φ} ∈ Π_{η|φ} maximizing { ∫_{IS_η(φ)} h(δ(x), y(η, φ)) dπ_{η|φ} − λ R(π_{η|φ} ‖ π*_{η|φ}) } satisfies

(dπ⁰_{η|φ}/dπ*_{η|φ})(η) = exp( h(δ(x), y(η, φ))/λ ) / ∫_{IS_η(φ)} exp( h(δ(x), y(η, φ))/λ ) dπ*_{η|φ} (31)

for π*_{η|φ}-almost every η. Since exp( h(δ(x), y(η, φ))/λ ) ∈ (0, ∞) for all η ∈ IS_η(φ) by assumption, (31) implies that π*_{η|φ} is absolutely continuous with respect to π⁰_{η|φ}; hence, any π_{η|φ} with R(π_{η|φ} ‖ π*_{η|φ}) < ∞ is absolutely continuous with respect to π⁰_{η|φ}. Therefore, the objective function of the inner maximization can be rewritten as

∫_{IS_η(φ)} h(δ(x), y(η, φ)) dπ_{η|φ} − λ R(π_{η|φ} ‖ π*_{η|φ})
= ∫_{IS_η(φ)} h(δ(x), y(η, φ)) dπ_{η|φ} − λ R(π_{η|φ} ‖ π⁰_{η|φ}) − λ ∫_{IS_η(φ)} ln ( (dπ⁰_{η|φ}/dπ*_{η|φ})(η) ) dπ_{η|φ}.

Plugging in (31), the right-hand side equals

−λ R(π_{η|φ} ‖ π⁰_{η|φ}) + λ ln ∫_{IS_η(φ)} exp( h(δ(x), y(η, φ))/λ ) dπ*_{η|φ}.

Since R(π_{η|φ} ‖ π⁰_{η|φ}) ≥ 0 for any π_{η|φ} ∈ Π_{η|φ}, with equality if and only if π_{η|φ} = π⁰_{η|φ} almost everywhere, π⁰_{η|φ} defined in (31) solves the inner maximization problem, leading

to

max_{π_{η|φ} ∈ Π_{η|φ}} { ∫_{IS_η(φ)} h(δ(x), y(η, φ)) dπ_{η|φ} − λ R(π_{η|φ} ‖ π*_{η|φ}) } = λ ln ∫_{IS_η(φ)} e^{h(δ(x), y(η, φ))/λ} dπ*_{η|φ}.

The conclusion follows by integrating this value function with respect to π_{φ|X}. ∎

Proof of Theorem 3.3. (i) Fix δ ∈ D and consider the finite-sample objective function ∫ r_λ(δ, φ) dπ_{φ|X}. Assumption 3.2 (ii)–(iv) implies that r_λ(δ, φ) is bounded and continuous in φ. Hence, combined with the weak convergence of π_{φ|X} to the point mass at φ = φ0 implied by the assumption of posterior consistency for φ, ∫ r_λ(δ, φ) dπ_{φ|X} → r_λ(δ, φ0) as n → ∞ for almost every sampling sequence. Given Assumption 3.2 (ii) and (v), the conclusion follows if we can show that the convergence ∫ r_λ(δ, φ) dπ_{φ|X} → r_λ(δ, φ0) is uniform over δ ∈ D. Since

sup_{δ∈D} | ∫ r_λ(δ, φ) dπ_{φ|X} − r_λ(δ, φ0) | ≤ ∫ sup_{δ∈D} | r_λ(δ, φ) − r_λ(δ, φ0) | dπ_{φ|X},

we consider bounding sup_{δ∈D} | r_λ(δ, φ) − r_λ(δ, φ0) | by a quantity converging to zero. Invoking Assumption 3.2 (iv) and noting that r_λ(δ, φ) can be written as λ ln ∫_Y exp{h(δ, y)/λ} dπ*_{y|φ}, the following inequalities hold:

| r_λ(δ, φ) − r_λ(δ, φ0) | ≤ λ ( ∫_Y exp{h(δ, y)/λ} | dπ*_{y|φ}/dy − dπ*_{y|φ0}/dy | dy ) / ( min_{φ′∈{φ, φ0}} ∫_Y exp{h(δ, y)/λ} dπ*_{y|φ′} ) (32)
≤ λ M ‖φ − φ0‖ ∫_Y exp{h(δ, y)/λ} dy
≤ λ M exp(h̄/λ) diam(Y) ‖φ − φ0‖,

where the second inequality uses h ≥ 0 and the Lipschitz bound of Assumption 3.2 (iv), and where h̄ is the upper bound of h(δ, y) on (δ, y) ∈ D × Y and diam(Y) is the diameter of the parameter space of y, both finite by Assumption 3.2 (ii). Hence,

sup_{δ∈D} | ∫ r_λ(δ, φ) dπ_{φ|X} − r_λ(δ, φ0) | ≤ C ∫ ‖φ − φ0‖ dπ_{φ|X}

for some constant C < ∞. By compactness of Φ and the posterior consistency of φ, ∫ ‖φ − φ0‖ dπ_{φ|X} → 0 as n → ∞ for almost every sampling sequence. This completes the proof of claim (i).

(ii) When φ̂ →p φ0, the continuous mapping theorem implies r_λ(δ, φ̂) − r_λ(δ, φ0) →p 0 as n → ∞, pointwise in δ. Hence, by applying the consistency theorem for M-estimators (Theorem 2.1 in Newey and McFadden (1994)), the claim follows if we can extend this convergence to uniform convergence in δ in probability. Following (32), this is indeed the case: sup_{δ∈D} | r_λ(δ, φ̂) − r_λ(δ, φ0) | ≤ C ‖φ̂ − φ0‖ →p 0 as n → ∞. ∎
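The discrete-case solution (29) in the proof of Theorem 3.1 is easy to verify numerically: the exponentially tilted weights attain the value λ ln Σ_j f_j e^{h_j/λ}, and randomly drawn probability vectors never exceed it. A minimal sketch with made-up masses and losses:

```python
import math, random

random.seed(0)
lam = 0.7
f = [0.1, 0.3, 0.2, 0.25, 0.15]   # benchmark prior masses f_i (made up)
h = [1.0, -0.5, 2.0, 0.3, 1.5]    # losses h_i (made up)
m = len(f)

def objective(g):
    # Inner objective: sum_i h_i g_i - lam * sum_i g_i ln(g_i / f_i).
    return sum(h[i] * g[i] - lam * g[i] * math.log(g[i] / f[i]) for i in range(m))

Z = sum(f[j] * math.exp(h[j] / lam) for j in range(m))
g_star = [f[i] * math.exp(h[i] / lam) / Z for i in range(m)]  # tilted weights (29)

# The attained value equals lam * ln Z ...
print(abs(objective(g_star) - lam * math.log(Z)) < 1e-10)  # True

# ... and random probability vectors never do better.
ok = True
for _ in range(1000):
    u = [random.random() for _ in range(m)]
    total = sum(u)
    g = [x / total for x in u]
    ok = ok and objective(g) <= objective(g_star) + 1e-9
print(ok)  # True
```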

Proof of Lemma 4.2.⁶ (i) Fix δ and let h(δ, y) = (δ − y)². We partition the parameter space by

Φ₊ = { φ ∈ Φ : (y_ℓ(φ) + y_u(φ))/2 ≥ δ },
Φ₋ = { φ ∈ Φ : (y_ℓ(φ) + y_u(φ))/2 < δ },

where [y_ℓ(φ), y_u(φ)] denotes the identified set IS_y(φ). We write the objective function of Proposition 3.1 as

∫_{Φ₋} r_λ(δ, φ) dπ_{φ|X} + ∫_{Φ₊} r_λ(δ, φ) dπ_{φ|X},

and we aim to derive lower and upper bounds for each of the two terms that are shown to converge to the same limit. Note that for each φ ∈ Φ₋, r_λ(δ, φ) can be bounded from below by

r_λ(δ, φ) = λ ln ∫_{IS_y(φ)} exp( (δ − y_ℓ(φ))²/λ ) exp( −(2δ − y_ℓ(φ) − y)(y − y_ℓ(φ))/λ ) dπ*_{y|φ}
≥ (δ − y_ℓ(φ))² + λ ln ∫_{y_ℓ(φ)}^{y_ℓ(φ)+ε} exp( −c(φ)(y − y_ℓ(φ))/λ ) dπ*_{y|φ},

where c(φ) ≡ 2(δ − y_ℓ(φ)) ≥ 2δ − y_ℓ(φ) − y > 0 for all y ∈ IS_y(φ), by Assumption 4.1 (ii) and φ ∈ Φ₋. Using Assumption 4.1 (iii), plugging in the polynomial lower bound b(y − y_ℓ(φ))^ξ for the density of π*_{y|φ} on y ∈ [y_ℓ(φ), y_ℓ(φ) + ε) leads to

r_λ(δ, φ) ≥ (δ − y_ℓ(φ))² + λ ln ( b ∫_{y_ℓ(φ)}^{y_ℓ(φ)+ε} (y − y_ℓ(φ))^ξ exp( −c(φ)(y − y_ℓ(φ))/λ ) dy )
= (δ − y_ℓ(φ))² + λ ln ( b λ^{1+ξ} ∫_0^{ε/λ} z^ξ exp( −c(φ) z ) dz ). (33)

Since lim_{λ→0} ∫_0^{ε/λ} z^ξ exp( −c(φ) z ) dz < ∞ and lim_{λ→0} λ ln λ = 0, we obtain

lim inf_{λ→0} r_λ(δ, φ) ≥ (δ − y_ℓ(φ))².

For the upper bound of r_λ(δ, φ) on φ ∈ Φ₋, we have

r_λ(δ, φ) ≤ (δ − y_ℓ(φ))² + λ ln ∫_{IS_y(φ)} exp(0) dπ*_{y|φ} = (δ − y_ℓ(φ))² (34)

⁶ The proof given here is based on the proof of the Laplace integral approximation method shown in Theorem 1 in Chapter II of Wong (1989).

for all λ, where we use exp( −(2δ − y_ℓ(φ) − y)(y − y_ℓ(φ))/λ ) ≤ exp(0) for all y ∈ [y_ℓ(φ), y_u(φ)] when φ ∈ Φ₋. It then holds that

lim sup_{λ→0} r_λ(δ, φ) ≤ (δ − y_ℓ(φ))².

Hence, lim_{λ→0} r_λ(δ, φ) = (δ − y_ℓ(φ))² for φ ∈ Φ₋, pointwise. Bounds for r_λ(δ, φ) on φ ∈ Φ₊ follow similarly. For a lower bound, we have

r_λ(δ, φ) ≥ (δ − y_u(φ))² + λ ln ∫_{y_u(φ)−ε}^{y_u(φ)} exp( −c(φ)(y_u(φ) − y)/λ ) dπ*_{y|φ}
= (δ − y_u(φ))² + λ ln ( b λ^{1+ξ} ∫_0^{ε/λ} z^ξ exp( −c(φ) z ) dz ) → (δ − y_u(φ))²  as λ → 0,

where now c(φ) ≡ 2(y_u(φ) − δ). For an upper bound, the same argument as in (34) applies to yield r_λ(δ, φ) ≤ (δ − y_u(φ))² for all λ. Hence, lim_{λ→0} r_λ(δ, φ) = (δ − y_u(φ))² for φ ∈ Φ₊, pointwise.

Since r_λ(δ, φ) has an integrable envelope (e.g., (δ − y_ℓ(φ))² on Φ₋ and (δ − y_u(φ))² on Φ₊), the dominated convergence theorem leads to

lim_{λ→0} ∫_Φ r_λ(δ, φ) dπ_{φ|X} = lim_{λ→0} ∫_{Φ₋} r_λ(δ, φ) dπ_{φ|X} + lim_{λ→0} ∫_{Φ₊} r_λ(δ, φ) dπ_{φ|X}
= ∫_{Φ₋} (δ − y_ℓ(φ))² dπ_{φ|X} + ∫_{Φ₊} (δ − y_u(φ))² dπ_{φ|X}
= ∫_Φ [ (δ − y_ℓ(φ))² ∨ (δ − y_u(φ))² ] dπ_{φ|X},

where the last line follows by noting that (δ − y_ℓ(φ))² ≥ (δ − y_u(φ))² for φ ∈ Φ₋ and vice versa for φ ∈ Φ₊.

(ii) Fix δ and set h(δ, y) = ρ_τ(y − δ), the check loss. Partition the parameter space by

Φ₊ = { φ ∈ Φ : (1 − τ) y_ℓ(φ) + τ y_u(φ) ≥ δ },
Φ₋ = { φ ∈ Φ : (1 − τ) y_ℓ(φ) + τ y_u(φ) < δ },

and write ∫_Φ r_λ(δ, φ) dπ_{φ|X} as

∫_{Φ₋} r_λ(δ, φ) dπ_{φ|X} + ∫_{Φ₊} r_λ(δ, φ) dπ_{φ|X}.

For φ ∈ Φ₋, a lower bound of r_λ(δ, φ) can be obtained as

r_λ(δ, φ) = λ ln ∫_{IS_y(φ)} exp( (1 − τ)(δ − y_ℓ(φ))/λ ) exp( ( ρ_τ(y − δ) − (1 − τ)(δ − y_ℓ(φ)) )/λ ) dπ*_{y|φ}
≥ (1 − τ)(δ − y_ℓ(φ)) + λ ln ∫_{IS_y(φ)} exp( −(1 − τ)(y − y_ℓ(φ))/λ ) dπ*_{y|φ}
≥ (1 − τ)(δ − y_ℓ(φ)) + λ ln ( b ∫_{y_ℓ(φ)}^{y_ℓ(φ)+ε} (y − y_ℓ(φ))^ξ exp( −(1 − τ)(y − y_ℓ(φ))/λ ) dy )
→ (1 − τ)(δ − y_ℓ(φ))  as λ → 0,

where the second line follows by noting that

ρ_τ(y − δ) − (1 − τ)(δ − y_ℓ(φ)) = (y − δ)·1{y > δ} − (1 − τ)(y − y_ℓ(φ)) ≥ −(1 − τ)(y − y_ℓ(φ))

for all y ∈ [y_ℓ(φ), y_u(φ)], the third line follows by Assumption 4.1 (iii), and the convergence in the fourth line follows by the same reasoning as in (33). Also, by noting that ρ_τ(y − δ) − (1 − τ)(δ − y_ℓ(φ)) ≤ 0 for φ ∈ Φ₋ and y ∈ IS_y(φ), an upper bound of r_λ(δ, φ) is given by (1 − τ)(δ − y_ℓ(φ)). Hence, lim_{λ→0} r_λ(δ, φ) = (1 − τ)(δ − y_ℓ(φ)) for φ ∈ Φ₋, pointwise. By the same argument (we omit the details for brevity), it can be shown that lim_{λ→0} r_λ(δ, φ) = τ(y_u(φ) − δ) for φ ∈ Φ₊. Hence, again by the dominated convergence theorem,

lim_{λ→0} ∫_Φ r_λ(δ, φ) dπ_{φ|X} = ∫_{Φ₋} (1 − τ)(δ − y_ℓ(φ)) dπ_{φ|X} + ∫_{Φ₊} τ(y_u(φ) − δ) dπ_{φ|X}
= ∫_Φ [ (1 − τ)(δ − y_ℓ(φ)) ∨ τ(y_u(φ) − δ) ] dπ_{φ|X}

follows. This completes the proof. ∎

Proof of Theorem 4.3. (i) Let R_n(δ) ≡ lim_{λ→0} ∫ r_λ(δ, φ) dπ_{φ|X}, which, by Lemma 4.2 (i), is equal to R_n(δ) = ∫ r_0(δ, φ) dπ_{φ|X}, where r_0(δ, φ) ≡ (δ − y_ℓ(φ))² ∨ (δ − y_u(φ))². Since the parameter space for y and the domain of δ are compact, r_0(δ, φ) is a bounded function of φ. In addition, y_ℓ(φ) and y_u(φ) are assumed to be continuous at φ = φ0, so r_0(δ, φ) is continuous at φ = φ0. Hence, the weak convergence of π_{φ|X} to the point mass at φ0 implies the convergence in mean

R_n(δ) → R(δ) ≡ lim_{n→∞} ∫ [ (δ − y_ℓ(φ))² ∨ (δ − y_u(φ))² ] dπ_{φ|X} = (δ − y_ℓ(φ0))² ∨ (δ − y_u(φ0))² (35)

pointwise in δ for almost every sampling sequence. Note that R(δ) is uniquely minimized at δ = (y_ℓ(φ0) + y_u(φ0))/2. Hence, by analogy with the argument for the convergence of M-estimators (see, e.g., Newey and McFadden (1994)), the conclusion follows if the convergence of R_n(δ) to R(δ) is uniform in δ. To show this is the case, define I(φ) ≡ [y_ℓ(φ), y_u(φ)] and note that (δ − y_ℓ(φ))² ∨ (δ − y_u(φ))² can be interpreted as the squared Hausdorff distance [d_H(δ, I(φ))]² between the point {δ} and the interval I(φ). Then

| R_n(δ) − R(δ) | = | ∫ ( [d_H(δ, I(φ))]² − [d_H(δ, I(φ0))]² ) dπ_{φ|X} | (36)
≤ 2 diam(Y) ∫ | d_H(δ, I(φ)) − d_H(δ, I(φ0)) | dπ_{φ|X}
≤ 2 diam(Y) ∫ d_H(I(φ), I(φ0)) dπ_{φ|X},

where diam(Y) < ∞ is the diameter of the parameter space of y and the third line follows by the triangle inequality for a metric, | d_H(δ, I(φ)) − d_H(δ, I(φ0)) | ≤ d_H(I(φ), I(φ0)). Since d_H(I(φ), I(φ0)) is bounded by the compactness of the y space and is continuous at φ = φ0 by Assumption 4.1 (iv), ∫ d_H(I(φ), I(φ0)) dπ_{φ|X} → 0 as π_{φ|X} converges weakly to the point mass at φ = φ0. This implies the uniform convergence of R_n(δ): sup_δ | R_n(δ) − R(δ) | → 0 as n → ∞.

We now prove (ii). Let l(δ, φ) ≡ (1 − τ)(δ − y_ℓ(φ)) ∨ τ(y_u(φ) − δ). Similarly to the quadratic loss case shown above, we have

R_n(δ) → R(δ) ≡ (1 − τ)(δ − y_ℓ(φ0)) ∨ τ(y_u(φ0) − δ) = l(δ, φ0), (37)

which is uniquely minimized at δ = (1 − τ) y_ℓ(φ0) + τ y_u(φ0). Hence, the conclusion follows if sup_δ | R_n(δ) − R(δ) | → 0 is proven. To show this uniform convergence, define

Φ₀₋ ≡ { φ ∈ Φ : (1 − τ) y_ℓ(φ) + τ y_u(φ) ≤ (1 − τ) y_ℓ(φ0) + τ y_u(φ0) }, (38)
Φ₀₊ ≡ { φ ∈ Φ : (1 − τ) y_ℓ(φ) + τ y_u(φ) > (1 − τ) y_ℓ(φ0) + τ y_u(φ0) }.

On φ ∈ Φ₀₋, l(δ, φ) − l(δ, φ0) can be expressed as

l(δ, φ) − l(δ, φ0) = (39)
  τ [ y_u(φ) − y_u(φ0) ]  if δ ≤ (1 − τ) y_ℓ(φ) + τ y_u(φ),
  (1 − τ)(δ − y_ℓ(φ)) − τ (y_u(φ0) − δ)  if (1 − τ) y_ℓ(φ) + τ y_u(φ) < δ ≤ (1 − τ) y_ℓ(φ0) + τ y_u(φ0), (40)
  (1 − τ) [ y_ℓ(φ0) − y_ℓ(φ) ]  if δ > (1 − τ) y_ℓ(φ0) + τ y_u(φ0).

By noting that in the second case of (40) the absolute value of l(δ, φ) − l(δ, φ0) is maximized at one of the boundary values of δ, it can be shown that | l(δ, φ) − l(δ, φ0) | can be bounded from above by | y_ℓ(φ) − y_ℓ(φ0) | + | y_u(φ) − y_u(φ0) |. Symmetrically, on φ ∈ Φ₀₊, | l(δ, φ) − l(δ, φ0) |

can be bounded from above by the same upper bound. Hence, sup_δ | R_n(δ) − R(δ) | can be bounded as

sup_δ | R_n(δ) − R(δ) | ≤ ∫ sup_δ | l(δ, φ) − l(δ, φ0) | dπ_{φ|X} (41)
≤ ∫ | y_ℓ(φ) − y_ℓ(φ0) | dπ_{φ|X} + ∫ | y_u(φ) − y_u(φ0) | dπ_{φ|X},

which converges to zero by the weak convergence of π_{φ|X}, the compactness of the y space, and the continuity of y_ℓ(φ) and y_u(φ) at φ = φ0. This completes the proof of the proposition. ∎

A.1 Asymptotic Analysis with Discrete Benchmark Prior

If the loss function h(δ, y) is differentiable with respect to δ at almost every y, the first-order condition for the minimization problem (17) is obtained as

∫_Φ [ ∫_{IS_y(φ)} ( exp{h(δ, y)/λ} / ∫_{IS_y(φ)} exp{h(δ, y)/λ} dπ*_{y|φ} ) (∂h(δ, y)/∂δ) dπ*_{y|φ} ] dπ_{φ|X} = 0. (42)

Suppose the benchmark conditional prior is a mixture of multiple probability masses (multiple point-identifying models). These point-identifying models are indexed by m = 1, ..., M, and they differ in the sense that each model selects a different point in the identified set. Denote the selection of y resulting from model m by y_m(φ) ∈ IS_y(φ). A benchmark prior is given by a particular mixture of these point-mass measures,

π*_{y|φ} = Σ_{m=1}^M w_m 𝟙_{y_m(φ)},  w_m > 0 ∀m,  Σ_{m=1}^M w_m = 1,

where 𝟙_{y_m(φ)} denotes the point mass at y_m(φ) and the weights (w1, ..., wM) specify the benchmark credibility of each point-identified model. The set of conditional priors considered in (14) consists of all mixtures of these point-mass measures,

Π_{y|φ} = { Σ_{m=1}^M w′_m 𝟙_{y_m(φ)} : (w′_1, ..., w′_M) ∈ Δ^M },

where Δ^M is the probability simplex in R^M. Denote (y_1(φ0), ..., y_M(φ0)) by (y_1, ..., y_M) for short, and label the models so that y_1 ≤ y_2 ≤ ... ≤ y_M. With a fixed λ > 0 and the degenerate posterior for φ, the first-order condition (42) for the quadratic loss simplifies to

Σ_{m=1}^M (δ − y_m) w_m exp( (δ − y_m)²/λ ) / Σ_{m=1}^M w_m exp( (δ − y_m)²/λ ) = 0, (43)
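Since λ ln Σ_m w_m exp((δ − y_m)²/λ) is a log-sum-exp of convex quadratics and hence convex in δ, the left-hand side of (43) is nondecreasing in δ and the first-order condition can be solved by bisection. A minimal sketch (the support points and weights are made up):

```python
import math

def foc(delta, y, w, lam):
    # Left-hand side of the first-order condition:
    # the exponentially tilted average of (delta - y_m).
    num = sum((delta - ym) * wm * math.exp((delta - ym) ** 2 / lam)
              for ym, wm in zip(y, w))
    den = sum(wm * math.exp((delta - ym) ** 2 / lam) for ym, wm in zip(y, w))
    return num / den

def solve_foc(y, w, lam, iters=200):
    # The objective lam * ln sum_m w_m exp((delta - y_m)^2 / lam) is convex,
    # so its derivative is nondecreasing and bisection on [min y, max y] works.
    lo, hi = min(y), max(y)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if foc(mid, y, w, lam) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Three point-identified models with unequal benchmark weights (made up).
y, w, lam = [0.0, 0.4, 1.0], [0.5, 0.3, 0.2], 0.5
d = solve_foc(y, w, lam)
print(abs(foc(d, y, w, lam)) < 1e-6)  # True
print(min(y) <= d <= max(y))          # True
```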


More information

Upstream capacity constraint and the preservation of monopoly power in private bilateral contracting

Upstream capacity constraint and the preservation of monopoly power in private bilateral contracting Upstream capacity constraint and the preservation of monopoly power in private bilateral contracting Eric Avenel Université de Rennes I et CREM (UMR CNRS 6) March, 00 Abstract This article presents a model

More information

Uncertain Identification

Uncertain Identification Uncertain Identification Raffaella Giacomini, Toru Kitagawa, and Alessio Volpicella This draft: September 2016 Abstract Uncertainty about the choice of identifying assumptions in causal studies has been

More information

Testing for Regime Switching: A Comment

Testing for Regime Switching: A Comment Testing for Regime Switching: A Comment Andrew V. Carter Department of Statistics University of California, Santa Barbara Douglas G. Steigerwald Department of Economics University of California Santa Barbara

More information

Measuring robustness

Measuring robustness Measuring robustness 1 Introduction While in the classical approach to statistics one aims at estimates which have desirable properties at an exactly speci ed model, the aim of robust methods is loosely

More information

Speci cation of Conditional Expectation Functions

Speci cation of Conditional Expectation Functions Speci cation of Conditional Expectation Functions Econometrics Douglas G. Steigerwald UC Santa Barbara D. Steigerwald (UCSB) Specifying Expectation Functions 1 / 24 Overview Reference: B. Hansen Econometrics

More information

Exclusive contracts and market dominance

Exclusive contracts and market dominance Exclusive contracts and market dominance Giacomo Calzolari and Vincenzo Denicolò Online Appendix. Proofs for the baseline model This Section provides the proofs of Propositions and 2. Proof of Proposition.

More information

GMM estimation of spatial panels

GMM estimation of spatial panels MRA Munich ersonal ReEc Archive GMM estimation of spatial panels Francesco Moscone and Elisa Tosetti Brunel University 7. April 009 Online at http://mpra.ub.uni-muenchen.de/637/ MRA aper No. 637, posted

More information

Labor Economics, Lecture 11: Partial Equilibrium Sequential Search

Labor Economics, Lecture 11: Partial Equilibrium Sequential Search Labor Economics, 14.661. Lecture 11: Partial Equilibrium Sequential Search Daron Acemoglu MIT December 6, 2011. Daron Acemoglu (MIT) Sequential Search December 6, 2011. 1 / 43 Introduction Introduction

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo September 6, 2011 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

It is convenient to introduce some notation for this type of problems. I will write this as. max u (x 1 ; x 2 ) subj. to. p 1 x 1 + p 2 x 2 m ;

It is convenient to introduce some notation for this type of problems. I will write this as. max u (x 1 ; x 2 ) subj. to. p 1 x 1 + p 2 x 2 m ; 4 Calculus Review 4.1 The Utility Maimization Problem As a motivating eample, consider the problem facing a consumer that needs to allocate a given budget over two commodities sold at (linear) prices p

More information

1 Objective. 2 Constrained optimization. 2.1 Utility maximization. Dieter Balkenborg Department of Economics

1 Objective. 2 Constrained optimization. 2.1 Utility maximization. Dieter Balkenborg Department of Economics BEE020 { Basic Mathematical Economics Week 2, Lecture Thursday 2.0.0 Constrained optimization Dieter Balkenborg Department of Economics University of Exeter Objective We give the \ rst order conditions"

More information

Bayesian Methods for Machine Learning

Bayesian Methods for Machine Learning Bayesian Methods for Machine Learning CS 584: Big Data Analytics Material adapted from Radford Neal s tutorial (http://ftp.cs.utoronto.ca/pub/radford/bayes-tut.pdf), Zoubin Ghahramni (http://hunch.net/~coms-4771/zoubin_ghahramani_bayesian_learning.pdf),

More information

On the Power of Tests for Regime Switching

On the Power of Tests for Regime Switching On the Power of Tests for Regime Switching joint work with Drew Carter and Ben Hansen Douglas G. Steigerwald UC Santa Barbara May 2015 D. Steigerwald (UCSB) Regime Switching May 2015 1 / 42 Motivating

More information

COMPARISON OF INFORMATION STRUCTURES IN ZERO-SUM GAMES. 1. Introduction

COMPARISON OF INFORMATION STRUCTURES IN ZERO-SUM GAMES. 1. Introduction COMPARISON OF INFORMATION STRUCTURES IN ZERO-SUM GAMES MARCIN PESKI* Abstract. This note provides simple necessary and su cient conditions for the comparison of information structures in zero-sum games.

More information

Advanced Economic Growth: Lecture 8, Technology Di usion, Trade and Interdependencies: Di usion of Technology

Advanced Economic Growth: Lecture 8, Technology Di usion, Trade and Interdependencies: Di usion of Technology Advanced Economic Growth: Lecture 8, Technology Di usion, Trade and Interdependencies: Di usion of Technology Daron Acemoglu MIT October 3, 2007 Daron Acemoglu (MIT) Advanced Growth Lecture 8 October 3,

More information

Introduction: structural econometrics. Jean-Marc Robin

Introduction: structural econometrics. Jean-Marc Robin Introduction: structural econometrics Jean-Marc Robin Abstract 1. Descriptive vs structural models 2. Correlation is not causality a. Simultaneity b. Heterogeneity c. Selectivity Descriptive models Consider

More information

Notes on the Thomas and Worrall paper Econ 8801

Notes on the Thomas and Worrall paper Econ 8801 Notes on the Thomas and Worrall paper Econ 880 Larry E. Jones Introduction The basic reference for these notes is: Thomas, J. and T. Worrall (990): Income Fluctuation and Asymmetric Information: An Example

More information

Uncertain Identification

Uncertain Identification Uncertain Identification Raffaella Giacomini, Toru Kitagawa, and Alessio Volpicella This draft: January 2017 Abstract Uncertainty about the choice of identifying assumptions is common in causal studies,

More information

THE INVERSE FUNCTION THEOREM

THE INVERSE FUNCTION THEOREM THE INVERSE FUNCTION THEOREM W. PATRICK HOOPER The implicit function theorem is the following result: Theorem 1. Let f be a C 1 function from a neighborhood of a point a R n into R n. Suppose A = Df(a)

More information

Random Utility Models, Attention Sets and Status Quo Bias

Random Utility Models, Attention Sets and Status Quo Bias Random Utility Models, Attention Sets and Status Quo Bias Arie Beresteanu and Roee Teper y February, 2012 Abstract We develop a set of practical methods to understand the behavior of individuals when attention

More information

Bayesian consistent prior selection

Bayesian consistent prior selection Bayesian consistent prior selection Christopher P. Chambers and Takashi Hayashi yzx August 2005 Abstract A subjective expected utility agent is given information about the state of the world in the form

More information

Wageningen Summer School in Econometrics. The Bayesian Approach in Theory and Practice

Wageningen Summer School in Econometrics. The Bayesian Approach in Theory and Practice Wageningen Summer School in Econometrics The Bayesian Approach in Theory and Practice September 2008 Slides for Lecture on Qualitative and Limited Dependent Variable Models Gary Koop, University of Strathclyde

More information

Solutions to Problem Set 4 Macro II (14.452)

Solutions to Problem Set 4 Macro II (14.452) Solutions to Problem Set 4 Macro II (14.452) Francisco A. Gallego 05/11 1 Money as a Factor of Production (Dornbusch and Frenkel, 1973) The shortcut used by Dornbusch and Frenkel to introduce money in

More information

SIMILAR-ON-THE-BOUNDARY TESTS FOR MOMENT INEQUALITIES EXIST, BUT HAVE POOR POWER. Donald W. K. Andrews. August 2011 Revised March 2012

SIMILAR-ON-THE-BOUNDARY TESTS FOR MOMENT INEQUALITIES EXIST, BUT HAVE POOR POWER. Donald W. K. Andrews. August 2011 Revised March 2012 SIMILAR-ON-THE-BOUNDARY TESTS FOR MOMENT INEQUALITIES EXIST, BUT HAVE POOR POWER By Donald W. K. Andrews August 2011 Revised March 2012 COWLES FOUNDATION DISCUSSION PAPER NO. 1815R COWLES FOUNDATION FOR

More information

Non-parametric Identi cation and Testable Implications of the Roy Model

Non-parametric Identi cation and Testable Implications of the Roy Model Non-parametric Identi cation and Testable Implications of the Roy Model Francisco J. Buera Northwestern University January 26 Abstract This paper studies non-parametric identi cation and the testable implications

More information

Estimation and Inference for Set-identi ed Parameters Using Posterior Lower Probability

Estimation and Inference for Set-identi ed Parameters Using Posterior Lower Probability Estimation and Inference for Set-identi ed Parameters Using Posterior Lower Probability Toru Kitagawa CeMMAP and Department of Economics, UCL First Draft: September 2010 This Draft: March, 2012 Abstract

More information

Discussion of Robust Bayes Inference for non-identied SVARs", by March Giacomini 2014 and1 Kitagaw / 14

Discussion of Robust Bayes Inference for non-identied SVARs, by March Giacomini 2014 and1 Kitagaw / 14 Discussion of Robust Bayes Inference for non-identied SVARs", by Giacomini and Kitagawa Sophocles Mavroeidis 1 1 Oxford University March 2014 Discussion of Robust Bayes Inference for non-identied SVARs",

More information

Addendum to: International Trade, Technology, and the Skill Premium

Addendum to: International Trade, Technology, and the Skill Premium Addendum to: International Trade, Technology, and the Skill remium Ariel Burstein UCLA and NBER Jonathan Vogel Columbia and NBER April 22 Abstract In this Addendum we set up a perfectly competitive version

More information

Identi cation of Positive Treatment E ects in. Randomized Experiments with Non-Compliance

Identi cation of Positive Treatment E ects in. Randomized Experiments with Non-Compliance Identi cation of Positive Treatment E ects in Randomized Experiments with Non-Compliance Aleksey Tetenov y February 18, 2012 Abstract I derive sharp nonparametric lower bounds on some parameters of the

More information

Notes on Time Series Modeling

Notes on Time Series Modeling Notes on Time Series Modeling Garey Ramey University of California, San Diego January 17 1 Stationary processes De nition A stochastic process is any set of random variables y t indexed by t T : fy t g

More information

Chapter 4. Maximum Theorem, Implicit Function Theorem and Envelope Theorem

Chapter 4. Maximum Theorem, Implicit Function Theorem and Envelope Theorem Chapter 4. Maximum Theorem, Implicit Function Theorem and Envelope Theorem This chapter will cover three key theorems: the maximum theorem (or the theorem of maximum), the implicit function theorem, and

More information

Topics in Mathematical Economics. Atsushi Kajii Kyoto University

Topics in Mathematical Economics. Atsushi Kajii Kyoto University Topics in Mathematical Economics Atsushi Kajii Kyoto University 25 November 2018 2 Contents 1 Preliminary Mathematics 5 1.1 Topology.................................. 5 1.2 Linear Algebra..............................

More information

BEE1024 Mathematics for Economists

BEE1024 Mathematics for Economists BEE1024 Mathematics for Economists Dieter and Jack Rogers and Juliette Stephenson Department of Economics, University of Exeter February 1st 2007 1 Objective 2 Isoquants 3 Objective. The lecture should

More information

LECTURE 5 NOTES. n t. t Γ(a)Γ(b) pt+a 1 (1 p) n t+b 1. The marginal density of t is. Γ(t + a)γ(n t + b) Γ(n + a + b)

LECTURE 5 NOTES. n t. t Γ(a)Γ(b) pt+a 1 (1 p) n t+b 1. The marginal density of t is. Γ(t + a)γ(n t + b) Γ(n + a + b) LECTURE 5 NOTES 1. Bayesian point estimators. In the conventional (frequentist) approach to statistical inference, the parameter θ Θ is considered a fixed quantity. In the Bayesian approach, it is considered

More information

Combining Macroeconomic Models for Prediction

Combining Macroeconomic Models for Prediction Combining Macroeconomic Models for Prediction John Geweke University of Technology Sydney 15th Australasian Macro Workshop April 8, 2010 Outline 1 Optimal prediction pools 2 Models and data 3 Optimal pools

More information

University of Toronto

University of Toronto A Limit Result for the Prior Predictive by Michael Evans Department of Statistics University of Toronto and Gun Ho Jang Department of Statistics University of Toronto Technical Report No. 1004 April 15,

More information

Topics in Mathematical Economics. Atsushi Kajii Kyoto University

Topics in Mathematical Economics. Atsushi Kajii Kyoto University Topics in Mathematical Economics Atsushi Kajii Kyoto University 26 June 2018 2 Contents 1 Preliminary Mathematics 5 1.1 Topology.................................. 5 1.2 Linear Algebra..............................

More information

Internet Appendix for The Labor Market for Directors and Externalities in Corporate Governance

Internet Appendix for The Labor Market for Directors and Externalities in Corporate Governance Internet Appendix for The Labor Market for Directors and Externalities in Corporate Governance DORON LEVIT and NADYA MALENKO The Internet Appendix has three sections. Section I contains supplemental materials

More information

Robust Con dence Intervals in Nonlinear Regression under Weak Identi cation

Robust Con dence Intervals in Nonlinear Regression under Weak Identi cation Robust Con dence Intervals in Nonlinear Regression under Weak Identi cation Xu Cheng y Department of Economics Yale University First Draft: August, 27 This Version: December 28 Abstract In this paper,

More information

Some Notes on Adverse Selection

Some Notes on Adverse Selection Some Notes on Adverse Selection John Morgan Haas School of Business and Department of Economics University of California, Berkeley Overview This set of lecture notes covers a general model of adverse selection

More information

Gi en Demand for Several Goods

Gi en Demand for Several Goods Gi en Demand for Several Goods Peter Norman Sørensen January 28, 2011 Abstract The utility maimizing consumer s demand function may simultaneously possess the Gi en property for any number of goods strictly

More information

a = (a 1; :::a i )

a = (a 1; :::a  i ) 1 Pro t maximization Behavioral assumption: an optimal set of actions is characterized by the conditions: max R(a 1 ; a ; :::a n ) C(a 1 ; a ; :::a n ) a = (a 1; :::a n) @R(a ) @a i = @C(a ) @a i The rm

More information

h=1 exp (X : J h=1 Even the direction of the e ect is not determined by jk. A simpler interpretation of j is given by the odds-ratio

h=1 exp (X : J h=1 Even the direction of the e ect is not determined by jk. A simpler interpretation of j is given by the odds-ratio Multivariate Response Models The response variable is unordered and takes more than two values. The term unordered refers to the fact that response 3 is not more favored than response 2. One choice from

More information

Estimating the Number of Common Factors in Serially Dependent Approximate Factor Models

Estimating the Number of Common Factors in Serially Dependent Approximate Factor Models Estimating the Number of Common Factors in Serially Dependent Approximate Factor Models Ryan Greenaway-McGrevy y Bureau of Economic Analysis Chirok Han Korea University February 7, 202 Donggyu Sul University

More information

Appendix for "O shoring in a Ricardian World"

Appendix for O shoring in a Ricardian World Appendix for "O shoring in a Ricardian World" This Appendix presents the proofs of Propositions - 6 and the derivations of the results in Section IV. Proof of Proposition We want to show that Tm L m T

More information

Should all Machine Learning be Bayesian? Should all Bayesian models be non-parametric?

Should all Machine Learning be Bayesian? Should all Bayesian models be non-parametric? Should all Machine Learning be Bayesian? Should all Bayesian models be non-parametric? Zoubin Ghahramani Department of Engineering University of Cambridge, UK zoubin@eng.cam.ac.uk http://learning.eng.cam.ac.uk/zoubin/

More information

Estimation of Dynamic Nonlinear Random E ects Models with Unbalanced Panels.

Estimation of Dynamic Nonlinear Random E ects Models with Unbalanced Panels. Estimation of Dynamic Nonlinear Random E ects Models with Unbalanced Panels. Pedro Albarran y Raquel Carrasco z Jesus M. Carro x June 2014 Preliminary and Incomplete Abstract This paper presents and evaluates

More information

Alvaro Rodrigues-Neto Research School of Economics, Australian National University. ANU Working Papers in Economics and Econometrics # 587

Alvaro Rodrigues-Neto Research School of Economics, Australian National University. ANU Working Papers in Economics and Econometrics # 587 Cycles of length two in monotonic models José Alvaro Rodrigues-Neto Research School of Economics, Australian National University ANU Working Papers in Economics and Econometrics # 587 October 20122 JEL:

More information

Endogenous timing in a mixed duopoly

Endogenous timing in a mixed duopoly Endogenous timing in a mixed duopoly Rabah Amir Department of Economics, University of Arizona Giuseppe De Feo y CORE, Université Catholique de Louvain February 2007 Abstract This paper addresses the issue

More information

Cowles Foundation for Research in Economics at Yale University

Cowles Foundation for Research in Economics at Yale University Cowles Foundation for Research in Economics at Yale University Cowles Foundation Discussion Paper No. 1846 EFFICIENT AUCTIONS AND INTERDEPENDENT TYPES Dirk Bergemann, Stephen Morris, and Satoru Takahashi

More information

Cartel Stability in a Dynamic Oligopoly with Sticky Prices

Cartel Stability in a Dynamic Oligopoly with Sticky Prices Cartel Stability in a Dynamic Oligopoly with Sticky Prices Hassan Benchekroun and Licun Xue y McGill University and CIREQ, Montreal This version: September 2005 Abstract We study the stability of cartels

More information

Estimation with Aggregate Shocks

Estimation with Aggregate Shocks Estimation with Aggregate Shocks Jinyong Hahn UCLA Guido Kuersteiner y University of Maryland October 5, 06 Maurizio Mazzocco z UCLA Abstract Aggregate shocks a ect most households and rms decisions. Using

More information