Estimation under Ambiguity
Estimation under Ambiguity
R. Giacomini (UCL), T. Kitagawa (UCL), H. Uhlig (Chicago)
Giacomini, Kitagawa, Uhlig Ambiguity 1 / 33
Introduction
Questions:
How to perform posterior analysis (inference/decision) when a probabilistic prior is not available or not credible?
How to assess the robustness of posterior results to the choice of prior?
Robust Bayes viewpoint
I. J. Good (p. 5, 1965, The Estimation of Probabilities): "My own view, following Keynes and Koopman, is that judgements of probability inequalities are possible but not judgements of exact probabilities; therefore a Bayesian should have upper and lower betting probabilities."
Prior input: a set of priors (an ambiguous belief).
An updating rule is applied to the ambiguous belief.
Posterior output: the set of posteriors, and decision under ambiguity.
Example: SVAR
Baumeister & Hamilton (15 Ecta, BH hereafter): two-variable SVAR(8) in (Δn_t, Δw_t), employment growth and wage growth:

  A_0 (Δn_t, Δw_t)' = c + Σ_{k=1}^{8} A_k (Δn_{t−k}, Δw_{t−k})' + (ε_t^d, ε_t^s)',

  A_0 = [ −β_d  1 ; −β_s  1 ],  (ε_t^d, ε_t^s)' ~ N(0, diag(d_1, d_2)).

Parameters of interest: supply/demand elasticities and structural impulse responses.
Bayesian practice in sign-restricted SVARs:
Agnostic prior: Canova & De Nicolo (02 JME), Faust (98 CRSPP), Uhlig (05 JME)
Carefully elicited prior: BH
Prior elicitation procedure recommended by BH:
Joint prior of (β_d, β_s, d_1, d_2, A): π_{(β_d,β_s)} × π_{(d_1,d_2)|(β_d,β_s)} × π_{A|(β_d,β_s,d_1,d_2)}
(β_d, β_s): independent truncated t-distributions such that π_{β_s}([0.1, 2.2]) = 0.9 and π_{β_d}([−2.2, −0.1]) = 0.9.
(d_1, d_2) | (β_d, β_s): independent inverse-gamma(α, β) priors, with scales determined by A_0 Ω̂ A_0'.
A | (β_d, β_s, d_1, d_2): independent Gaussian priors shrinking the reduced-form coefficients A_0^{−1} A_k toward random walks.
Issues:
Available prior knowledge fails to pin down the exact shape.
Why conjugate priors? Why independent priors?
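As an illustration of this style of calibration, the scale of a truncated Student-t prior can be solved for numerically so that 90% of the prior mass falls on the elicited interval. The location (0.6), degrees of freedom (3), and truncation at zero below are hypothetical stand-ins for illustration, not BH's published values:

```python
from scipy.stats import t as student_t
from scipy.optimize import brentq

def truncated_t_mass(scale, loc, df, lo, hi, trunc_lo=0.0):
    """Mass that a Student-t(loc, scale, df) truncated to (trunc_lo, inf)
    assigns to the interval [lo, hi]."""
    dist = student_t(df, loc=loc, scale=scale)
    surviving = 1.0 - dist.cdf(trunc_lo)   # mass above the truncation point
    return (dist.cdf(hi) - dist.cdf(max(lo, trunc_lo))) / surviving

def calibrate_scale(loc, df, lo, hi, target=0.9):
    """Solve for the scale that puts `target` prior mass on [lo, hi]."""
    return brentq(lambda s: truncated_t_mass(s, loc, df, lo, hi) - target,
                  1e-4, 50.0)

# Hypothetical calibration for the supply elasticity beta_s: centre the
# prior at 0.6, use 3 degrees of freedom, truncate to beta_s > 0, and
# require pi_{beta_s}([0.1, 2.2]) = 0.9.
scale_s = calibrate_scale(loc=0.6, df=3, lo=0.1, hi=2.2)
mass = truncated_t_mass(scale_s, loc=0.6, df=3, lo=0.1, hi=2.2)
```

The same root-finding step applies to the demand elasticity prior on [−2.2, −0.1] with the truncation on the negative half-line.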
The limited credibility of the prior raises a robustness concern, which is even more severe under set-identification.
Existing robust Bayes procedure:
Giacomini & Kitagawa (18): introduce arbitrary priors on the unrevisable part of the prior and draw posterior inference with the resulting set of posteriors; this amounts to posterior inference for the identified set, as in Kline & Tamer (16, QE) and Chen, Christensen, & Tamer (18, Ecta), among others.
Drawback: the set of priors is huge and may well contain unrealistic ones, e.g., one in which an extreme scenario receives (subjective) probability one.
This paper
Our proposal: robust Bayesian methods with a refined set of priors.
Introduce a set of priors as a Kullback-Leibler (KL) neighborhood of a benchmark prior.
Apply Bayes' rule to each prior and conduct the posterior analysis (inference and decision) with the resulting set of posteriors.
Similar to the robust control methods of Petersen, James & Dupuis (00 IEEE) and Hansen & Sargent (01 AER), here applied to statistical decision problems with concerns about prior misspecification.
Notation
Consider the two-variable SVAR(0) model:

  A_0 (Δn_t, Δw_t)' = (ε_t^d, ε_t^s)',  A_0 = [ −β_d  1 ; −β_s  1 ],

with structural shocks (ε_t^d, ε_t^s)' ~ N(0, diag(d_1, d_2)).
Structural parameters: θ = (β_d, β_s, d_1, d_2) ∈ Θ.
Parameter of interest: α = α(θ) ∈ R, e.g., an elasticity, the impulse response of Δn_t to a unit ε_t^d, etc.
Reduced-form parameters: φ = (σ_11, σ_21, σ_22) ∈ Φ, the distinct entries of Σ, the variance-covariance matrix of (Δn_t, Δw_t); Σ_tr denotes its Cholesky factor.
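The mapping from structural to reduced-form parameters, and the sense in which θ is only set-identified, can be sketched numerically. This is a minimal illustration under the A_0 above; the helper `phi_from_theta` and the particular parameter values are ours, not the paper's:

```python
import numpy as np

def phi_from_theta(beta_d, beta_s, d1, d2):
    """Map theta = (beta_d, beta_s, d1, d2) to phi = (sigma_11, sigma_21,
    sigma_22), the distinct entries of Sigma = A0^{-1} D (A0^{-1})'."""
    A0 = np.array([[-beta_d, 1.0],
                   [-beta_s, 1.0]])
    A0_inv = np.linalg.inv(A0)
    Sigma = A0_inv @ np.diag([d1, d2]) @ A0_inv.T
    return np.array([Sigma[0, 0], Sigma[1, 0], Sigma[1, 1]])

phi_a = phi_from_theta(beta_d=-0.5, beta_s=1.0, d1=1.0, d2=1.0)

# Set identification: a different theta reproduces the same phi.
# Given phi and a candidate beta_s', the zero off-diagonal of
# A0' Sigma (A0')' pins down beta_d', and its diagonal pins down (d1', d2').
s11, s21, s22 = phi_a
bs2 = 1.5
bd2 = (bs2 * s21 - s22) / (bs2 * s11 - s21)
A0b = np.array([[-bd2, 1.0], [-bs2, 1.0]])
Sigma = np.array([[s11, s21], [s21, s22]])
D = A0b @ Sigma @ A0b.T                      # diagonal by construction
phi_b = phi_from_theta(bd2, bs2, D[0, 0], D[1, 1])
```

Since `phi_a` and `phi_b` coincide while the structural parameters differ, the data alone cannot distinguish the two θ values.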
Set-identifying assumptions: downward-sloping demand, β_d ≤ 0, and upward-sloping supply, β_s ≥ 0.
The identified set for the labor supply elasticity (Leamer (81 REStat)), α = β_s:

  IS_α(φ) = [σ_21/σ_11, σ_22/σ_21]  for σ_21 > 0,
  IS_α(φ) = [0, ∞)                  for σ_21 ≤ 0.

Starting from a prior π_θ for θ, the posterior for α(θ) is

  π_{α|X}(·) = ∫_Φ π_{θ|φ}(α(θ) ∈ ·) dπ_{φ|X}.
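These bounds can be checked by simulation: draw structural parameters satisfying the sign restrictions, map them to φ, and verify that β_s always lies in IS_α(φ). A quick sketch under the A_0 parameterization above (the sampling ranges are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    # Draw structural parameters satisfying the sign restrictions.
    beta_d = -rng.uniform(0.1, 3.0)          # downward-sloping demand
    beta_s = rng.uniform(0.1, 3.0)           # upward-sloping supply
    d1, d2 = rng.uniform(0.5, 2.0, size=2)
    # Reduced form: Sigma = A0^{-1} diag(d1, d2) (A0^{-1})'.
    A0_inv = np.linalg.inv(np.array([[-beta_d, 1.0], [-beta_s, 1.0]]))
    Sigma = A0_inv @ np.diag([d1, d2]) @ A0_inv.T
    s11, s21, s22 = Sigma[0, 0], Sigma[1, 0], Sigma[1, 1]
    # Leamer bounds for the supply elasticity.
    lo, hi = (s21 / s11, s22 / s21) if s21 > 0 else (0.0, np.inf)
    ok = ok and (lo - 1e-9 <= beta_s <= hi + 1e-9)
```

Every draw should land inside its own identified set, since θ is by construction observationally consistent with the φ it generates.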
Before observing data
Innovation: introduce the following set of priors. For λ > 0,

  Π_θ^λ = { π_θ = ∫_Φ π_{θ|φ} dπ_φ : π_{θ|φ} ∈ Π^λ(π*_{θ|φ}) for all φ },

where π*_{θ|φ} is a benchmark conditional prior such that π*_{θ|φ}(IS_θ(φ)) = 1 for all φ,

  Π^λ(π*_{θ|φ}) = { π_{θ|φ} : R(π_{θ|φ} ‖ π*_{θ|φ}) ≤ λ },

and

  R(π_{θ|φ} ‖ π*_{θ|φ}) = ∫_Θ ln( dπ_{θ|φ} / dπ*_{θ|φ} ) dπ_{θ|φ}

is the KL distance relative to the benchmark prior. Any π_{θ|φ} ∈ Π^λ(π*_{θ|φ}) is absolutely continuous with respect to π*_{θ|φ}.
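For discrete distributions, membership in the KL neighborhood reduces to a one-line computation. A minimal sketch (the benchmark, the candidate prior, and λ = 0.2 are illustrative numbers):

```python
import math

def kl_divergence(p, q):
    """R(p || q) = sum_i p_i * ln(p_i / q_i) for discrete distributions,
    with p absolutely continuous w.r.t. q (p_i > 0 requires q_i > 0)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

benchmark = [0.25, 0.25, 0.25, 0.25]   # benchmark conditional prior
tilted = [0.4, 0.3, 0.2, 0.1]          # a candidate prior in the class
r = kl_divergence(tilted, benchmark)
in_neighborhood = r <= 0.2             # lambda = 0.2 (illustrative)
```

R is zero only at the benchmark itself and grows as the candidate prior concentrates on scenarios the benchmark treats as unlikely.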
After observing data
The set of posteriors marginalized for α:

  Π^λ_{α|X} = { π_{α|X}(·) = ∫_Φ π_{θ|φ}(α(θ) ∈ ·) dπ_{φ|X} : π_{θ|φ} ∈ Π^λ(π*_{θ|φ}) for all φ }.

Report the set of posterior quantities (means, quantiles, probabilities): with h(α) = α, 1{α ∈ A}, etc.,

  max_{π_{α|X} ∈ Π^λ_{α|X}} ∫_R h(α) dπ_{α|X}
    = max over {π_{θ|φ} ∈ Π^λ(π*_{θ|φ}) : φ ∈ Φ} of ∫_Φ ( ∫_{IS_θ(φ)} h(α(θ)) dπ_{θ|φ} ) dπ_{φ|X}
    = ∫_Φ ( max_{π_{θ|φ} ∈ Π^λ(π*_{θ|φ})} ∫_{IS_θ(φ)} h(α(θ)) dπ_{θ|φ} ) dπ_{φ|X}.
Invariance to marginalization
The KL neighborhood around π_{α|φ} or around π_{θ|φ}?
Marginalization lemma: let Π^λ(π*_{α|φ}) be the KL neighborhood formed around the α-marginal of π*_{θ|φ}. Then

  ∫_Φ [ sup/inf_{π_{θ|φ} ∈ Π^λ(π*_{θ|φ})} ∫_{IS_θ(φ)} h(α(θ)) dπ_{θ|φ} ] dπ_{φ|X}
    = ∫_Φ [ sup/inf_{π_{α|φ} ∈ Π^λ(π*_{α|φ})} ∫_{IS_α(φ)} h(α) dπ_{α|φ} ] dπ_{φ|X}.

Working with Π^λ(π*_{θ|φ}) or Π^λ(π*_{α|φ}) leads to the same range of posteriors for α.
The range of posteriors
Theorem: suppose h(·) is bounded. Then

  inf_{π_{α|X} ∈ Π^λ_{α|X}} E_{α|X}[h(α)] = ∫_Φ ( ∫_{IS_α(φ)} h(α) dπ^l_{α|φ} ) dπ_{φ|X},
  sup_{π_{α|X} ∈ Π^λ_{α|X}} E_{α|X}[h(α)] = ∫_Φ ( ∫_{IS_α(φ)} h(α) dπ^u_{α|φ} ) dπ_{φ|X},

where

  dπ^l_{α|φ} = [ exp(−h(α)/κ^l_λ(φ)) / ∫ exp(−h(α)/κ^l_λ(φ)) dπ*_{α|φ} ] dπ*_{α|φ},
  dπ^u_{α|φ} = [ exp(h(α)/κ^u_λ(φ)) / ∫ exp(h(α)/κ^u_λ(φ)) dπ*_{α|φ} ] dπ*_{α|φ},

and κ^l_λ(φ) and κ^u_λ(φ) are the Lagrange multipliers of the constrained optimization problems.
Computing the range of posteriors
Algorithm: assume π*_{θ|φ} or π*_{α|φ} can be evaluated up to a multiplicative constant.
1. Draw many φ's from π_{φ|X}.
2. At each φ, compute (κ^l_λ(φ), κ^u_λ(φ)) and approximate [ ∫ h(α) dπ^l_{α|φ}, ∫ h(α) dπ^u_{α|φ} ], using importance sampling if necessary. Note that (κ^l_λ(φ), κ^u_λ(φ)) can be obtained from the convex optimizations

  min_{κ ≥ 0} { κ ln ∫_{IS_α(φ)} exp(±h(α)/κ) dπ*_{α|φ} + λκ }.

3. Take the averages of the intervals obtained in Step 2.
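Step 2 can be sketched at a single φ with Monte Carlo draws from the benchmark conditional prior, using the dual representation sup over R(π‖π*) ≤ λ of E_π[h] = min over κ > 0 of { κ ln E*[exp(h/κ)] + λκ }. The uniform benchmark and the helper `worst_case_mean` below are illustrative, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def worst_case_mean(h_draws, lam, upper=True):
    """Sup (or inf) of E_pi[h] over the KL ball of radius lam around the
    benchmark, approximated by Monte Carlo draws of h from the benchmark."""
    h = np.asarray(h_draws, dtype=float)
    if not upper:
        return -worst_case_mean(-h, lam, upper=True)
    hmax = h.max()
    def dual(k):
        # k * ln E*[exp(h/k)] + k*lam, with a log-sum-exp shift for stability
        return k * (hmax / k + np.log(np.mean(np.exp((h - hmax) / k)))) + k * lam
    res = minimize_scalar(dual, bounds=(1e-4, 1e4), method="bounded")
    k_star = res.x
    w = np.exp((h - hmax) / k_star)        # exponential-tilting weights
    return float(np.sum(h * w) / np.sum(w))

rng = np.random.default_rng(1)
# Stand-in benchmark conditional prior of alpha at a fixed phi.
draws = rng.uniform(0.5, 2.5, size=20_000)
lo = worst_case_mean(draws, lam=0.5, upper=False)
hi = worst_case_mean(draws, lam=0.5, upper=True)
```

The minimizing κ plays the role of the Lagrange multiplier, and the tilted weights reproduce the densities dπ^l and dπ^u of the theorem at that κ; averaging [lo, hi] over draws of φ gives Step 3.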
Asymptotics
Under mild regularity conditions:
As n → ∞, integration with respect to dπ_{φ|X} can be replaced by plugging in the MLE.
As λ → ∞, if π*_{α|φ} supports IS_α(φ), the sets of posterior means/quantiles converge to those of Giacomini and Kitagawa (18).
As λ → ∞ and n → ∞, the set of posterior means recovers the true identified set, IS_α(φ_0).
Robust Bayes decision
Conditional gamma-minimax: decision under multiple posteriors.
δ(X): decision function; h(δ(X), α): loss function, e.g., h(δ(X), α) = (δ(X) − α)².

  min_{δ(X)} ∫_Φ [ max_{π_{α|φ} ∈ Π^λ(π*_{α|φ})} ∫_{IS_α(φ)} h(δ(X), α) dπ_{α|φ} ] dπ_{φ|X}

In the empirical application, we compute a robust point estimator for α under the quadratic and absolute losses.
As λ → ∞ and n → ∞, the gamma-minimax estimator converges to the midpoint of the true identified set.
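With a finite stand-in for the posterior class, the outer minimization can be sketched by a grid search over actions: for each candidate δ, compute the worst posterior expected quadratic loss and minimize it. The two-posterior class below is hypothetical; for symmetric extremes the minimax action lands near the midpoint of their means:

```python
import numpy as np

def gamma_minimax_estimate(posterior_draw_sets, grid):
    """Gamma-minimax point estimate under quadratic loss: minimize over
    the action grid the worst-case posterior expected loss across a
    finite collection of candidate posteriors (given by MC draws)."""
    worst = np.full(len(grid), -np.inf)
    for draws in posterior_draw_sets:
        d = np.asarray(draws, dtype=float)
        # E[(delta - alpha)^2] = (delta - posterior mean)^2 + posterior var
        risk = (grid - d.mean()) ** 2 + d.var()
        worst = np.maximum(worst, risk)
    return grid[int(np.argmin(worst))]

rng = np.random.default_rng(2)
# Hypothetical extreme tilted posteriors centred at 0.8 and 1.8.
posteriors = [rng.normal(0.8, 0.2, 5000), rng.normal(1.8, 0.2, 5000)]
grid = np.linspace(0.0, 3.0, 601)
delta_star = gamma_minimax_estimate(posteriors, grid)
```

Here the worst-case risk curves of the two posteriors cross near 1.3, the midpoint of their means, illustrating the midpoint limit stated above.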
Empirical example
BH two-variable SVAR(8), quarterly data:

  A_0 (Δn_t, Δw_t)' = c + Σ_{k=1}^{8} A_k (Δn_{t−k}, Δw_{t−k})' + (ε_t^d, ε_t^s)',

  A_0 = [ −β_d  1 ; −β_s  1 ],  (ε_t^d, ε_t^s)' ~ N(0, diag(d_1, d_2)).
Use the BH prior for θ to generate the single prior π_φ and the benchmark conditional prior π*_{α|φ}. The benchmark conditional prior π*_{α|φ} is obtained by reparametrizing θ into (α, φ) and transforming π_θ into π_{(α,φ)}.
Application to SVAR
BH prior as the benchmark; α: labor supply elasticity.
[Figure: benchmark posterior (λ = 0), showing the prior, the posterior, the posterior median, and the identified set for α.]
[Figures: ranges of posteriors for λ = 0.1, 0.5, 1, 2, and 5, showing posterior median bounds and the minimax estimate under absolute loss.]
How to elicit λ?
Given a benchmark prior: for each candidate λ, we can compute the range of priors for the parameter of interest, or for a parameter about which we have some prior knowledge.
Compute the ranges of priors for several values of λ, and pick the one that best fits your vague prior.
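One concrete version of this elicitation, for a single event with benchmark prior probability 0.9: compute the λ-ball bounds on that probability via the same dual used for the posterior bounds, scan λ, and keep the smallest value whose range matches the vague judgement. The target range [0.6, 0.99] and the λ grid below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def prob_bounds(p_event, lam):
    """Upper/lower bounds on the prior probability of an event with
    benchmark probability p_event, over the KL ball of radius lam,
    via the dual with h = indicator of the event."""
    def upper(p):
        def dual(k):
            return k * np.log(p * np.exp(1.0 / k) + (1.0 - p)) + k * lam
        res = minimize_scalar(dual, bounds=(1e-2, 1e3), method="bounded")
        return min(1.0, res.fun)
    return 1.0 - upper(1.0 - p_event), upper(p_event)

# Benchmark prior puts 0.9 on the elicited interval; scan lambda and pick
# the smallest one whose prior range covers a vague judgement of
# "at least 0.6, at most 0.99" (illustrative numbers).
for lam in [0.05, 0.1, 0.2, 0.5, 1.0]:
    lo_p, hi_p = prob_bounds(0.9, lam)
    if lo_p <= 0.6 and hi_p >= 0.99:
        chosen_lam = lam
        break
```

Small λ keeps the prior range tight around the benchmark 0.9; the scan stops at the first λ wide enough to match the stated vague judgement.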
[Figures: ranges of priors for λ = 0.1, 0.5, and 1, showing prior mean bounds, the benchmark prior, and the benchmark prior mean; and the corresponding ranges of posteriors, showing posterior mean bounds, the benchmark posterior, and the benchmark posterior mean.]
Discussion
1. Why a neighborhood around π_{α|φ} rather than π_α?
2. Why not multiplier minimax (à la, e.g., Hansen & Sargent)?
3. Is conditional gamma-minimax admissible?
Why a neighborhood around π_{α|φ}?
Ho (2019) considers the KL neighborhood around π_α. A prior for α can be updated through the update of φ, but the robustness concern should be more serious for the unrevisable part π_{α|φ}.
The KL neighborhood around π_α contains multiple priors for π_φ, i.e., the marginal likelihood varies over the prior class.
It also leaves a non-unique way of updating the set of priors: prior-by-prior, or the Type-II ML updating rule (Good (65, book), Gilboa & Schmeidler (93, JET))?
Why not multiplier minimax?
Following Hansen & Sargent (01), we could formulate the optimal decision under ambiguity as multiplier minimax:

  min_{δ(X)} ∫_Φ [ max_{π_{α|φ}} { ∫_{IS_α(φ)} h(δ(X), α) dπ_{α|φ} − κ R(π_{α|φ} ‖ π*_{α|φ}) } ] dπ_{φ|X}

Given κ ≥ 0 and δ, there exists a KL radius λ_κ(φ) that makes the solution of gamma minimax equivalent to that of multiplier minimax, but the implied λ_κ(φ) depends on the loss function h(·,·).
We prefer the gamma minimax formulation with fixed λ, as we want the set of priors to be invariant to the loss function.
Statistical admissibility?
Unlike unconditional gamma-minimax, conditional gamma-minimax is not guaranteed to be admissible, as the worst-case prior becomes data dependent.
Unconditional gamma-minimax, on the other hand, is difficult to compute, and the difference disappears in large samples.
Related literature
Bayesian approaches to partial identification: Poirier (98, ET), Moon & Schorfheide (12, Ecta), Kline & Tamer (16, QE), Chen, Christensen, & Tamer (18, Ecta), Giacomini & Kitagawa (18), among others.
Robust Bayes / sensitivity analysis in econometrics and statistics: Robbins (51, Proc. Berkeley Symp.), Good (65, book), Kudo (67, Proc. Berkeley Symp.), Dempster (68, JRSSB), Huber (73, Bull. ISI), Chamberlain & Leamer (76, JRSSB), Manski (81, Ann. Stat.), Leamer (82, Ecta), Berger (85, book), Berger & Berliner (86, Ann. Stat.), Wasserman (90, Ann. Stat.), Chamberlain (00, JAE), Geweke (05, book), Müller (12, JME), Ho (19).
Decision under ambiguity: Schmeidler (89, Ecta), Gilboa & Schmeidler (89, JME), Maccheroni, Marinacci & Rustichini (06, Ecta), Hansen & Sargent (01 AER, 08 book), Strzalecki (11, Ecta), among others.
Conclusion
This paper proposes a novel robust Bayes procedure for a class of set-identified models. It simultaneously addresses the robustness concern in standard Bayesian methods and the excess conservatism of set-identification analysis.
Useful for:
Sensitivity analysis: perturb the prior of the Bayesian model over the KL neighborhood and check how much the posterior results change.
Adding uncertain assumptions to the set-identified model: tightening the identified set with a partially credible benchmark prior.
Policy decisions when the available data only set-identify a welfare criterion (e.g., treatment choice with observational data).
More informationThe Metropolis-Hastings Algorithm. June 8, 2012
The Metropolis-Hastings Algorithm June 8, 22 The Plan. Understand what a simulated distribution is 2. Understand why the Metropolis-Hastings algorithm works 3. Learn how to apply the Metropolis-Hastings
More informationConjugate Predictive Distributions and Generalized Entropies
Conjugate Predictive Distributions and Generalized Entropies Eduardo Gutiérrez-Peña Department of Probability and Statistics IIMAS-UNAM, Mexico Padova, Italy. 21-23 March, 2013 Menu 1 Antipasto/Appetizer
More informationLecture Notes 1: Decisions and Data. In these notes, I describe some basic ideas in decision theory. theory is constructed from
Topics in Data Analysis Steven N. Durlauf University of Wisconsin Lecture Notes : Decisions and Data In these notes, I describe some basic ideas in decision theory. theory is constructed from The Data:
More informationGaussian Estimation under Attack Uncertainty
Gaussian Estimation under Attack Uncertainty Tara Javidi Yonatan Kaspi Himanshu Tyagi Abstract We consider the estimation of a standard Gaussian random variable under an observation attack where an adversary
More informationBayesian inference. Fredrik Ronquist and Peter Beerli. October 3, 2007
Bayesian inference Fredrik Ronquist and Peter Beerli October 3, 2007 1 Introduction The last few decades has seen a growing interest in Bayesian inference, an alternative approach to statistical inference.
More informationGraphical Models for Collaborative Filtering
Graphical Models for Collaborative Filtering Le Song Machine Learning II: Advanced Topics CSE 8803ML, Spring 2012 Sequence modeling HMM, Kalman Filter, etc.: Similarity: the same graphical model topology,
More informationGAMMA-ADMISSIBILITY IN A NON-REGULAR FAMILY WITH SQUARED-LOG ERROR LOSS
GAMMA-ADMISSIBILITY IN A NON-REGULAR FAMILY WITH SQUARED-LOG ERROR LOSS Authors: Shirin Moradi Zahraie Department of Statistics, Yazd University, Yazd, Iran (sh moradi zahraie@stu.yazd.ac.ir) Hojatollah
More informationCurve Fitting Re-visited, Bishop1.2.5
Curve Fitting Re-visited, Bishop1.2.5 Maximum Likelihood Bishop 1.2.5 Model Likelihood differentiation p(t x, w, β) = Maximum Likelihood N N ( t n y(x n, w), β 1). (1.61) n=1 As we did in the case of the
More informationHypothesis Testing. Part I. James J. Heckman University of Chicago. Econ 312 This draft, April 20, 2006
Hypothesis Testing Part I James J. Heckman University of Chicago Econ 312 This draft, April 20, 2006 1 1 A Brief Review of Hypothesis Testing and Its Uses values and pure significance tests (R.A. Fisher)
More informationStargazing with Structural VARs: Shock Identification via Independent Component Analysis
Stargazing with Structural VARs: Shock Identification via Independent Component Analysis Dmitry Kulikov and Aleksei Netšunajev Eesti Pank Estonia pst. 13 19 Tallinn Estonia phone: +372 668777 e-mail: dmitry.kulikov@eestipank.ee
More informationNoninformative Priors for the Ratio of the Scale Parameters in the Inverted Exponential Distributions
Communications for Statistical Applications and Methods 03, Vol. 0, No. 5, 387 394 DOI: http://dx.doi.org/0.535/csam.03.0.5.387 Noninformative Priors for the Ratio of the Scale Parameters in the Inverted
More informationParametric Models. Dr. Shuang LIANG. School of Software Engineering TongJi University Fall, 2012
Parametric Models Dr. Shuang LIANG School of Software Engineering TongJi University Fall, 2012 Today s Topics Maximum Likelihood Estimation Bayesian Density Estimation Today s Topics Maximum Likelihood
More information9 Bayesian inference. 9.1 Subjective probability
9 Bayesian inference 1702-1761 9.1 Subjective probability This is probability regarded as degree of belief. A subjective probability of an event A is assessed as p if you are prepared to stake pm to win
More informationPart III. A Decision-Theoretic Approach and Bayesian testing
Part III A Decision-Theoretic Approach and Bayesian testing 1 Chapter 10 Bayesian Inference as a Decision Problem The decision-theoretic framework starts with the following situation. We would like to
More informationEstimation and Inference for Set-identi ed Parameters Using Posterior Lower Probability
Estimation and Inference for Set-identi ed Parameters Using Posterior Lower Probability Toru Kitagawa Department of Economics, University College London 28, July, 2012 Abstract In inference for set-identi
More informationBayesian Econometrics
Bayesian Econometrics Christopher A. Sims Princeton University sims@princeton.edu September 20, 2016 Outline I. The difference between Bayesian and non-bayesian inference. II. Confidence sets and confidence
More informationMachine Learning Techniques for Computer Vision
Machine Learning Techniques for Computer Vision Part 2: Unsupervised Learning Microsoft Research Cambridge x 3 1 0.5 0.2 0 0.5 0.3 0 0.5 1 ECCV 2004, Prague x 2 x 1 Overview of Part 2 Mixture models EM
More informationBayesian Inference for DSGE Models. Lawrence J. Christiano
Bayesian Inference for DSGE Models Lawrence J. Christiano Outline State space-observer form. convenient for model estimation and many other things. Bayesian inference Bayes rule. Monte Carlo integation.
More informationDSGE Methods. Estimation of DSGE models: Maximum Likelihood & Bayesian. Willi Mutschler, M.Sc.
DSGE Methods Estimation of DSGE models: Maximum Likelihood & Bayesian Willi Mutschler, M.Sc. Institute of Econometrics and Economic Statistics University of Münster willi.mutschler@uni-muenster.de Summer
More information1.2. Structural VARs
1. Shocks Nr. 1 1.2. Structural VARs How to identify A 0? A review: Choleski (Cholesky?) decompositions. Short-run restrictions. Inequality restrictions. Long-run restrictions. Then, examples, applications,
More informationClustering and Gaussian Mixture Models
Clustering and Gaussian Mixture Models Piyush Rai IIT Kanpur Probabilistic Machine Learning (CS772A) Jan 25, 2016 Probabilistic Machine Learning (CS772A) Clustering and Gaussian Mixture Models 1 Recap
More informationLINEAR MODELS FOR CLASSIFICATION. J. Elder CSE 6390/PSYC 6225 Computational Modeling of Visual Perception
LINEAR MODELS FOR CLASSIFICATION Classification: Problem Statement 2 In regression, we are modeling the relationship between a continuous input variable x and a continuous target variable t. In classification,
More informationQuantitative Biology II Lecture 4: Variational Methods
10 th March 2015 Quantitative Biology II Lecture 4: Variational Methods Gurinder Singh Mickey Atwal Center for Quantitative Biology Cold Spring Harbor Laboratory Image credit: Mike West Summary Approximate
More informationLecture 7 Introduction to Statistical Decision Theory
Lecture 7 Introduction to Statistical Decision Theory I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 20, 2016 1 / 55 I-Hsiang Wang IT Lecture 7
More informationEstimating and Accounting for the Output Gap with Large Bayesian Vector Autoregressions
Estimating and Accounting for the Output Gap with Large Bayesian Vector Autoregressions James Morley 1 Benjamin Wong 2 1 University of Sydney 2 Reserve Bank of New Zealand The view do not necessarily represent
More informationA Kullback-Leibler Divergence for Bayesian Model Comparison with Applications to Diabetes Studies. Chen-Pin Wang, UTHSCSA Malay Ghosh, U.
A Kullback-Leibler Divergence for Bayesian Model Comparison with Applications to Diabetes Studies Chen-Pin Wang, UTHSCSA Malay Ghosh, U. Florida Lehmann Symposium, May 9, 2011 1 Background KLD: the expected
More informationIdentifying Aggregate Liquidity Shocks with Monetary Policy Shocks: An Application using UK Data
Identifying Aggregate Liquidity Shocks with Monetary Policy Shocks: An Application using UK Data Michael Ellington and Costas Milas Financial Services, Liquidity and Economic Activity Bank of England May
More informationRobust Predictions in Games with Incomplete Information
Robust Predictions in Games with Incomplete Information joint with Stephen Morris (Princeton University) November 2010 Payoff Environment in games with incomplete information, the agents are uncertain
More informationRobust Partially Observable Markov Decision Processes
Submitted to manuscript Robust Partially Observable Markov Decision Processes Mohammad Rasouli 1, Soroush Saghafian 2 1 Management Science and Engineering, Stanford University, Palo Alto, CA 2 Harvard
More informationEstimation of Dynamic Regression Models
University of Pavia 2007 Estimation of Dynamic Regression Models Eduardo Rossi University of Pavia Factorization of the density DGP: D t (x t χ t 1, d t ; Ψ) x t represent all the variables in the economy.
More informationBayesian Semiparametric GARCH Models
Bayesian Semiparametric GARCH Models Xibin (Bill) Zhang and Maxwell L. King Department of Econometrics and Business Statistics Faculty of Business and Economics xibin.zhang@monash.edu Quantitative Methods
More informationBayesian Semiparametric GARCH Models
Bayesian Semiparametric GARCH Models Xibin (Bill) Zhang and Maxwell L. King Department of Econometrics and Business Statistics Faculty of Business and Economics xibin.zhang@monash.edu Quantitative Methods
More informationOutline lecture 2 2(30)
Outline lecture 2 2(3), Lecture 2 Linear Regression it is our firm belief that an understanding of linear models is essential for understanding nonlinear ones Thomas Schön Division of Automatic Control
More informationBayesian Learning. HT2015: SC4 Statistical Data Mining and Machine Learning. Maximum Likelihood Principle. The Bayesian Learning Framework
HT5: SC4 Statistical Data Mining and Machine Learning Dino Sejdinovic Department of Statistics Oxford http://www.stats.ox.ac.uk/~sejdinov/sdmml.html Maximum Likelihood Principle A generative model for
More informationAn Extended BIC for Model Selection
An Extended BIC for Model Selection at the JSM meeting 2007 - Salt Lake City Surajit Ray Boston University (Dept of Mathematics and Statistics) Joint work with James Berger, Duke University; Susie Bayarri,
More informationDavid Giles Bayesian Econometrics
David Giles Bayesian Econometrics 1. General Background 2. Constructing Prior Distributions 3. Properties of Bayes Estimators and Tests 4. Bayesian Analysis of the Multiple Regression Model 5. Bayesian
More information13: Variational inference II
10-708: Probabilistic Graphical Models, Spring 2015 13: Variational inference II Lecturer: Eric P. Xing Scribes: Ronghuo Zheng, Zhiting Hu, Yuntian Deng 1 Introduction We started to talk about variational
More informationWP/13/257. System Priors: Formulating Priors about DSGE Models' Properties. Michal Andrle and Jaromir Benes
WP/13/257 System Priors: Formulating Priors about DSGE Models' Properties Michal Andrle and Jaromir Benes 2013 International Monetary Fund WP/13/257 IMF Working Paper Research Department System Priors:
More informationMIT Spring 2016
Decision Theoretic Framework MIT 18.655 Dr. Kempthorne Spring 2016 1 Outline Decision Theoretic Framework 1 Decision Theoretic Framework 2 Decision Problems of Statistical Inference Estimation: estimating
More informationAn Introduction to Spectral Learning
An Introduction to Spectral Learning Hanxiao Liu November 8, 2013 Outline 1 Method of Moments 2 Learning topic models using spectral properties 3 Anchor words Preliminaries X 1,, X n p (x; θ), θ = (θ 1,
More information