BAYESIAN MODEL CRITICISM


Hedibert Freitas Lopes
The University of Chicago Booth School of Business
5807 South Woodlawn Avenue, Chicago, IL
hlopes@chicagobooth.edu

Based on Gamerman and Lopes (2006) Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference, Chapman & Hall/CRC.

Outline: Bayes factors; Monte Carlo estimates of the marginal likelihood (including Chib's estimator); reversible jump MCMC; posterior predictive criteria; the Deviance Information Criterion.

Suppose that the competing models can be enumerated and are represented by the set M = {M_1, M_2, ...}, and that the true model is in M (Bernardo and Smith, 1994). The posterior probability of model M_j is given by

Pr(M_j | y) ∝ f(y | M_j) Pr(M_j),

where

f(y | M_j) = ∫ f(y | θ_j, M_j) p(θ_j | M_j) dθ_j

is the predictive (marginal) density of the data under M_j and Pr(M_j) is the prior probability of M_j.

The posterior odds of M_j relative to M_k are given by

Pr(M_j | y) / Pr(M_k | y)  =  [Pr(M_j) / Pr(M_k)]  ×  [f(y | M_j) / f(y | M_k)],
     (posterior odds)            (prior odds)          (Bayes factor B_jk)

The Bayes factor can be viewed as the weighted likelihood ratio of M_j to M_k. The main difficulty is the computation of the marginal likelihood, or normalizing constant, f(y | M_j).

Jeffreys (1961) recommends the following rule of thumb to decide between models j and k:

B_jk > 100: decisive evidence against M_k
10 < B_jk ≤ 100: strong evidence against M_k
3 < B_jk ≤ 10: substantial evidence against M_k

The posterior probability of M_j can be obtained from the Bayes factors as

Pr(M_j | y) = [ Σ_{M_k ∈ M} B_kj Pr(M_k) / Pr(M_j) ]^{-1}.
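
A minimal numerical sketch of the last formula. Normalizing f(y | M_j) Pr(M_j) directly is algebraically equivalent to inverting the sum of B_kj Pr(M_k)/Pr(M_j); the log marginal likelihoods below are hypothetical values, not from the slides.

```python
import numpy as np
from scipy.special import logsumexp

def posterior_model_probs(log_marglik, log_prior):
    """Posterior model probabilities from log marginal likelihoods and log prior probabilities.

    Equivalent to Pr(M_j | y) = [ sum_k B_kj Pr(M_k)/Pr(M_j) ]^{-1}, B_kj = f(y|M_k)/f(y|M_j),
    but computed by normalizing f(y|M_j)Pr(M_j) on the log scale for stability.
    """
    log_post = np.asarray(log_marglik, dtype=float) + np.asarray(log_prior, dtype=float)
    return np.exp(log_post - logsumexp(log_post))

# Hypothetical numbers: three models with equal prior probabilities.
probs = posterior_model_probs(log_marglik=[-104.2, -101.7, -102.3],
                              log_prior=np.log([1 / 3, 1 / 3, 1 / 3]))
print(probs.round(3))
```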

A basic ingredient for model assessment is the prior predictive density

f(y | M) = ∫ f(y | θ, M) p(θ | M) dθ,

which is the normalizing constant of the posterior distribution. This density can be viewed as the marginal likelihood of model M: it is obtained after marginalizing out the parameters. It can also be written as the expectation of the likelihood with respect to the prior: f(y) = E_p[f(y | θ)].

Let m̃ and Ṽ be the posterior mode and an asymptotic approximation to the posterior covariance matrix. A normal approximation to f(y) is given by

f̂_0(y) = (2π)^{d/2} |Ṽ|^{1/2} p(m̃) f(y | m̃).

Sampling-based approximations to m̃ and Ṽ can be constructed from posterior draws θ^(1), ..., θ^(N):

m̃ = arg max_{θ^(j)} π(θ^(j)),    Ṽ = (1/N) Σ_{j=1}^N (θ^(j) − θ̄)(θ^(j) − θ̄)′,    where θ̄ = (1/N) Σ_{j=1}^N θ^(j).
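
A minimal sketch of f̂_0, assuming a generic log_joint callable (log of prior times likelihood) supplied by the user; the only difference from the slide is that np.cov divides by N−1 rather than N.

```python
import numpy as np

def log_f0_normal_approx(log_joint, draws):
    """Normal approximation f0(y) = (2*pi)^(d/2) |V|^(1/2) p(m) f(y|m) on the log scale.

    log_joint(theta) returns log[p(theta) f(y|theta)] for one draw; draws is an (N, d) array
    of posterior draws. m is taken as the highest-density draw, V as the sample covariance.
    """
    draws = np.asarray(draws, dtype=float)
    if draws.ndim == 1:
        draws = draws[:, None]
    N, d = draws.shape
    log_joint_vals = np.array([log_joint(th) for th in draws])
    m = draws[np.argmax(log_joint_vals)]              # draw maximizing the unnormalized posterior
    V = np.atleast_2d(np.cov(draws, rowvar=False))    # sample covariance of the draws
    _, logdetV = np.linalg.slogdet(V)
    return 0.5 * d * np.log(2 * np.pi) + 0.5 * logdetV + log_joint(m)
```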

A direct Monte Carlo estimate is

f̂_1(y) = (1/N) Σ_{j=1}^N f(y | θ^(j)),

where θ^(1), ..., θ^(N) is a sample from the prior distribution p(θ). This does not work well when the prior and the likelihood disagree (Raftery, 1996; McCulloch and Rossi, 1991). Even for large N, the estimate is dominated by a few sampled values, making it very unstable.
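
A minimal sketch of f̂_1, assuming a user-supplied log-likelihood; the sum is done on the log scale to avoid underflow.

```python
import numpy as np
from scipy.special import logsumexp

def log_f1_prior_mc(log_lik, prior_draws):
    """f1(y) = (1/N) sum_j f(y | theta^(j)), theta^(1), ..., theta^(N) drawn from the prior.

    log_lik(theta) returns log f(y | theta); prior_draws holds one draw per row
    (or per entry, for a scalar parameter).
    """
    logl = np.array([log_lik(th) for th in prior_draws])
    return logsumexp(logl) - np.log(len(logl))
```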

Monte Carlo via importance sampling. An alternative is to perform importance sampling, with the aim of boosting sampled values in regions where the integrand is large. The approach is based on sampling from an importance density g(θ), since the marginal likelihood can be rewritten as

f(y) = E_g[ f(y | θ) p(θ) / g(θ) ].

This form motivates the new estimate

f̂_2(y) = (1/N) Σ_{j=1}^N f(y | θ^(j)) p(θ^(j)) / g(θ^(j)),

where θ^(1), ..., θ^(N) is a sample from g(θ).
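
A minimal sketch of f̂_2, assuming user-supplied log densities for the likelihood, the prior and the (normalized) importance density g.

```python
import numpy as np
from scipy.special import logsumexp

def log_f2_importance(log_lik, log_prior, log_g, g_draws):
    """f2(y) = (1/N) sum_j f(y|theta^(j)) p(theta^(j)) / g(theta^(j)),  theta^(j) ~ g.

    log_lik, log_prior and log_g evaluate log f(y|theta), log p(theta) and log g(theta)
    at one draw; g_draws holds the draws from the importance density, one per row.
    """
    logw = np.array([log_lik(th) + log_prior(th) - log_g(th) for th in g_draws])
    return logsumexp(logw) - np.log(len(logw))
```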

Sometimes g is only known up to a normalizing constant, i.e. g(θ) = k g*(θ), and the value of k must be estimated. Noting that

k = ∫ k p(θ) dθ = ∫ [ p(θ) / g*(θ) ] g(θ) dθ

leads to the estimate of k given by

k̂ = (1/N) Σ_{j=1}^N p(θ^(j)) / g*(θ^(j)),

where, again, the θ^(j) are sampled from g. Replacing this estimate in f̂_2(y) gives

f̂_3(y) = [ Σ_{j=1}^N f(y | θ^(j)) p(θ^(j)) / g*(θ^(j)) ] / [ Σ_{j=1}^N p(θ^(j)) / g*(θ^(j)) ].

The harmonic mean (HM) estimator is obtained when g(θ) is the posterior π(θ):

f̂_4(y) = [ (1/N) Σ_{j=1}^N 1 / f(y | θ^(j)) ]^{-1},

for θ^(1), ..., θ^(N) drawn from π(θ). This estimator is very appealing for its simplicity. However, it is strongly affected by small likelihood values!
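
A minimal sketch of f̂_4, assuming posterior draws are available; the comment flags the instability noted on the slide.

```python
import numpy as np
from scipy.special import logsumexp

def log_f4_harmonic_mean(log_lik, post_draws):
    """Harmonic mean: f4(y) = [ (1/N) sum_j 1/f(y|theta^(j)) ]^{-1}, theta^(j) ~ posterior.

    Appealingly simple, but a single posterior draw with a very small likelihood can
    dominate the sum, so the estimate is notoriously unstable.
    """
    logl = np.array([log_lik(th) for th in post_draws])
    return np.log(len(logl)) - logsumexp(-logl)
```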

A compromise between f̂_1 and f̂_4 is obtained with g(θ) = δ p(θ) + (1 − δ) π(θ). Problem: evaluating π(θ) requires f(y), which is exactly what we want to estimate! Solution via an iterative scheme:

f̂_5^(i)(y) = [ Σ_{j=1}^N p(y | θ^(j)) / ( δ f̂_5^(i−1)(y) + (1 − δ) p(y | θ^(j)) ) ] / [ Σ_{j=1}^N 1 / ( δ f̂_5^(i−1)(y) + (1 − δ) p(y | θ^(j)) ) ],

for i = 1, 2, ... and, say, f̂_5^(0) = f̂_4.

For any given density g(θ), since f(y) π(θ) = f(y | θ) p(θ), it is easy to see that

∫ g(θ) [ f(y) π(θ) / ( f(y | θ) p(θ) ) ] dθ = 1,

so

f(y) = [ ∫ { g(θ) / ( f(y | θ) p(θ) ) } π(θ) dθ ]^{-1}.

Therefore, sampling θ^(1), ..., θ^(N) from π leads to the (generalized harmonic mean) estimate

f̂_6(y) = [ (1/N) Σ_{j=1}^N g(θ^(j)) / ( f(y | θ^(j)) p(θ^(j)) ) ]^{-1}.
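
A minimal sketch of f̂_6, assuming g is a properly normalized density chosen by the user (ideally with thinner tails than the posterior, so the ratio stays bounded).

```python
import numpy as np
from scipy.special import logsumexp

def log_f6_gelfand_dey(log_lik, log_prior, log_g, post_draws):
    """Generalized harmonic mean:
    f6(y) = [ (1/N) sum_j g(theta^(j)) / ( f(y|theta^(j)) p(theta^(j)) ) ]^{-1},
    with theta^(1), ..., theta^(N) drawn from the posterior.
    """
    logr = np.array([log_g(th) - log_lik(th) - log_prior(th) for th in post_draws])
    return np.log(len(logr)) - logsumexp(logr)
```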

Meng and Wong (1996) introduced bridge sampling to estimate ratios of normalizing constants, by noticing that

f(y) = E_g{ α(θ) p(θ) p(y | θ) } / E_π{ α(θ) g(θ) }

for any bridge function α(θ) whose support encompasses the supports of both the posterior density π and the proposal density g. If α(θ) = 1/g(θ) the bridge reduces to the importance sampling estimate f̂_2 (and to the simple Monte Carlo estimate f̂_1 when g is the prior). Similarly, if α(θ) = { p(θ) p(y | θ) g(θ) }^{-1} then the bridge is a variation of the harmonic mean.

Meng and Wong (1996) showed that the mean-squared-error-optimal bridge function depends on f(y) itself. Letting α(θ) = { g(θ) + (N_2/N_1) π(θ) }^{-1} and defining

ω_j = p(y | θ^(j)) p(θ^(j)) / g(θ^(j)),   θ^(1), ..., θ^(N_1) ~ π(θ),
ω̃_j = p(y | θ̃^(j)) p(θ̃^(j)) / g(θ̃^(j)),   θ̃^(1), ..., θ̃^(N_2) ~ g(θ),

they devised the iterative scheme

f̂_7^(i)(y) = [ (1/N_2) Σ_{j=1}^{N_2} ω̃_j / ( s_1 ω̃_j + s_2 f̂_7^(i−1)(y) ) ] / [ (1/N_1) Σ_{j=1}^{N_1} 1 / ( s_1 ω_j + s_2 f̂_7^(i−1)(y) ) ],

for i = 1, 2, ..., with s_1 = N_1/(N_1 + N_2), s_2 = N_2/(N_1 + N_2) and, say, f̂_7^(0) = f̂_4.
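
A minimal sketch of the iteration above, assuming the log importance ratios log[p(y|θ)p(θ)/g(θ)] have already been evaluated at the posterior draws and at the proposal draws; for extreme problems a fully log-scale version would be safer.

```python
import numpy as np

def f7_optimal_bridge(logw_post, logw_prop, f_init, n_iter=50):
    """Meng & Wong's optimal bridge sampling estimate of f(y).

    logw_post: log[p(y|theta)p(theta)/g(theta)] at N1 draws from the posterior pi,
    logw_prop: the same quantity at N2 draws from the proposal g,
    f_init:    a starting value, e.g. the harmonic mean estimate f4.
    """
    w1, w2 = np.exp(logw_post), np.exp(logw_prop)
    n1, n2 = len(w1), len(w2)
    s1, s2 = n1 / (n1 + n2), n2 / (n1 + n2)
    f = f_init
    for _ in range(n_iter):
        num = np.mean(w2 / (s1 * w2 + s2 * f))   # average over proposal draws
        den = np.mean(1.0 / (s1 * w1 + s2 * f))  # average over posterior draws
        f = num / den
    return f
```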

Chib's estimator. Chib (1995) introduced an estimate of π(θ*) for situations where the full conditionals are available in closed form:

π̂(θ*) = π̂(θ*_1) Π_{i=2}^d π̂(θ*_i | θ*_1, ..., θ*_{i−1}),

with

π̂(θ*_i | θ*_1, ..., θ*_{i−1}) = (1/N) Σ_{j=1}^N π(θ*_i | θ*_1, ..., θ*_{i−1}, θ^(j)_{i+1}, ..., θ^(j)_d),

and (θ^(j)_1, ..., θ^(j)_d), j = 1, ..., N, draws from π(θ). Thus

f̂_8(y) = f(y | θ*) p(θ*) / π̂(θ*).

Simple choices of θ* are the posterior mode and the posterior mean, but any value in a high-density region should be adequate.
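
A worked sketch of f̂_8 on a hypothetical semi-conjugate normal model (not an example from the slides). With only two blocks, π̂(µ*|y) is a Rao-Blackwellized average over the main Gibbs run and π(σ²*|µ*, y) is available exactly, so no reduced runs are needed.

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp

# Hypothetical model: y_i ~ N(mu, sig2), mu ~ N(m0, V0), sig2 ~ InvGamma(a0, b0).
rng = np.random.default_rng(1)
y = rng.normal(2.0, 1.5, size=50)
n, ybar = len(y), y.mean()
m0, V0, a0, b0 = 0.0, 100.0, 2.0, 1.0

def mu_cond_pars(sig2):                    # parameters of mu | sig2, y
    Vn = 1.0 / (1.0 / V0 + n / sig2)
    return Vn * (m0 / V0 + n * ybar / sig2), Vn

# Gibbs sampler for (mu, sig2)
N = 5000
mu, sig2 = ybar, y.var()
draws = np.empty((N, 2))
for j in range(N):
    mn, Vn = mu_cond_pars(sig2)
    mu = rng.normal(mn, np.sqrt(Vn))
    sig2 = stats.invgamma.rvs(a0 + n / 2, scale=b0 + 0.5 * np.sum((y - mu) ** 2),
                              random_state=rng)
    draws[j] = mu, sig2

# Evaluate the ordinate at a high-density point (here, the posterior means)
mu_s, sig2_s = draws.mean(axis=0)

# pi_hat(mu* | y): average of N(mu*; mn(sig2^(j)), Vn(sig2^(j))) over the Gibbs draws
pars = np.array([mu_cond_pars(s2) for s2 in draws[:, 1]])
log_pi_mu = logsumexp(stats.norm.logpdf(mu_s, pars[:, 0], np.sqrt(pars[:, 1]))) - np.log(N)

# pi(sig2* | mu*, y): available in closed form
log_pi_sig2 = stats.invgamma.logpdf(sig2_s, a0 + n / 2,
                                    scale=b0 + 0.5 * np.sum((y - mu_s) ** 2))

log_lik = stats.norm.logpdf(y, mu_s, np.sqrt(sig2_s)).sum()
log_prior = stats.norm.logpdf(mu_s, m0, np.sqrt(V0)) + stats.invgamma.logpdf(sig2_s, a0, scale=b0)

log_f8 = log_lik + log_prior - log_pi_mu - log_pi_sig2
print("Chib's log marginal likelihood estimate:", round(log_f8, 3))
```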

List of estimators of f(y):

f̂_0: normal approximation
f̂_1: prior p(θ)
f̂_2: importance density g(θ)
f̂_3: unnormalized importance density g*(θ)
f̂_4: posterior π(θ) (harmonic mean)
f̂_5: mixture δp(θ) + (1 − δ)π(θ)
f̂_6: generalized harmonic mean
f̂_7: optimal bridge sampling
f̂_8: candidate's estimator, Gibbs output (Chib)

DiCiccio, Kass, Raftery and Wasserman (1997), Han and Carlin (2001) and Lopes and West (2004), among others, compared several of these estimators.

Example: Cauchy-normal. Model: y_1, ..., y_n ~ N(θ, σ²), σ² known. Cauchy prior: p(θ) = π^{-1}(1 + θ²)^{-1}. Data: ȳ = 7 and σ²/n = 4.5. The posterior density for θ is

π(θ) ∝ (1 + θ²)^{-1} exp{ −(θ − ȳ)² / (2σ²/n) }.

Relative errors are reported as Error = 100 |f̂(y) − f(y)| / f(y); for f̂_5, δ = 0.1. (Table: exact value f(y) and the estimates f̂_0, ..., f̂_8 with their percentage errors.)
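
The slide's example can be reproduced numerically. A sketch, assuming the stated summary data (ȳ = 7, σ²/n = 4.5): the exact f(y) is obtained by one-dimensional quadrature and compared with f̂_1 and f̂_2; the normal importance density is my choice here, not necessarily the one used on the slide.

```python
import numpy as np
from scipy import stats, integrate
from scipy.special import logsumexp

rng = np.random.default_rng(0)
ybar, s2 = 7.0, 4.5                         # data summary: ybar = 7, sigma^2/n = 4.5

def log_lik(theta):                         # log N(ybar; theta, sigma^2/n)
    return stats.norm.logpdf(ybar, theta, np.sqrt(s2))

def log_prior(theta):                       # standard Cauchy prior
    return stats.cauchy.logpdf(theta)

# "Exact" f(y) by quadrature over a wide interval covering essentially all the mass
f_true, _ = integrate.quad(lambda t: np.exp(log_lik(t) + log_prior(t)), -100, 100)

# f1: simple Monte Carlo with prior draws (unstable when prior and likelihood disagree)
th_prior = stats.cauchy.rvs(size=100_000, random_state=rng)
log_f1 = logsumexp(log_lik(th_prior)) - np.log(th_prior.size)

# f2: importance sampling with a normal proposal centred near the likelihood
g = stats.norm(loc=ybar, scale=np.sqrt(s2))
th_g = g.rvs(size=100_000, random_state=rng)
log_f2 = logsumexp(log_lik(th_g) + log_prior(th_g) - g.logpdf(th_g)) - np.log(th_g.size)

for name, est in [("f1", np.exp(log_f1)), ("f2", np.exp(log_f2))]:
    print(f"{name}: {est:.6f}, error = {100 * abs(est - f_true) / f_true:.2f}%")
```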

Suppose that M_2 is described by p(y | ω, ψ, M_2) and that M_1 is a restricted version of M_2, i.e.

p(y | ψ, M_1) ≡ p(y | ω = ω_0, ψ, M_2).

Suppose also that

π(ψ | ω = ω_0, M_2) = π(ψ | M_1).

Then it can be proved that the Bayes factor is

B_12 = π(ω = ω_0 | y, M_2) / π(ω = ω_0 | M_2) ≈ [ (1/N) Σ_{n=1}^N π(ω = ω_0 | ψ^(n), y, M_2) ] / π(ω = ω_0 | M_2),

where {ψ^(1), ..., ψ^(N)} ~ π(ψ | y, M_2). This is the Savage-Dickey density ratio.
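
A minimal generic sketch of the Rao-Blackwellized Savage-Dickey ratio above; the conditional-density and prior-density callables are assumptions supplied by the user for the model at hand.

```python
import numpy as np

def savage_dickey_B12(cond_dens_at_omega0, prior_dens_at_omega0, psi_draws):
    """Savage-Dickey ratio B12 = pi(omega0 | y, M2) / pi(omega0 | M2).

    cond_dens_at_omega0(psi) evaluates pi(omega = omega0 | psi, y, M2);
    prior_dens_at_omega0 is the scalar pi(omega = omega0 | M2);
    psi_draws are draws psi^(1), ..., psi^(N) from pi(psi | y, M2).
    """
    cond = np.array([cond_dens_at_omega0(psi) for psi in psi_draws])
    return cond.mean() / prior_dens_at_omega0
```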

Example: Normality vs Student-t (Verdinelli and Wasserman, 1995). Suppose that we have observations x_1, ..., x_n and we would like to entertain two models:

M_1: x_i ~ N(µ, σ²)   and   M_2: x_i ~ t_λ(µ, σ²).

Letting ω = 1/λ, M_1 is the particular case of M_2 with ω = ω_0 = 0, and ψ = (µ, σ²). Let us also assume that ω ~ U(0, 1), with ω = 1 corresponding to a Cauchy distribution, and that π(µ, σ² | M_1) = π(µ, σ² | ω, M_2) ∝ σ^{−2}. Then the formula above holds and the Bayes factor is B_12 = π(ω_0 | x, M_2), i.e., the marginal posterior of ω evaluated at 0 (the uniform prior density at ω_0 equals one).

Because π(µ, σ², ω | x, M_2) has no closed form, they use an importance-sampling-based algorithm and sample (µ, σ², ω) from the proposal q(µ, σ², ω), i.e.

q(µ, σ², ω) = π(ω) π(σ² | x, M_1) π(µ | σ², x, M_1),

with

ω ~ U(0, 1),
σ² ~ IG( (n − 1)/2, (n − 1)s²/2 ),
µ | σ² ~ N( x̄, σ²/n ).

When n = 100 observations are generated from N(0, 1), B_12 = 3.79 (standard error = 0.145). When n = 100 observations are generated from a Cauchy(0, 1), B_12 = (standard error = ).

Suppose that the competing models can be enumerated and are represented by the set M = {M_1, M_2, ...}. Under M_k, the posterior distribution is

p(θ_k | y, k) ∝ p(y | θ_k, k) p(θ_k | k),

where p(y | θ_k, k) and p(θ_k | k) represent the likelihood and the prior distribution of the parameters of M_k, respectively. Then the joint posterior of parameters and model indicator is

p(θ_k, k | y) ∝ p(k) p(y | θ_k, k) p(θ_k | k).

RJMCMC methods involve Metropolis-Hastings-type algorithms that move a simulation analysis between models defined by (k, θ_k) and (k′, θ_{k′}) with different defining dimensions. The resulting Markov chain simulations jump between such distinct models and form samples from the joint distribution p(θ_k, k | y). The algorithms are designed to be reversible, so as to maintain detailed balance of an irreducible and aperiodic chain that converges to the correct target measure (see Green, 1995, and Gamerman and Lopes, 2006, chapter 7).

The RJMCMC algorithm:

Step 0. Current state: (k, θ_k).
Step 1. Sample a proposed model k′ from J(k → k′).
Step 2. Sample u from q(u | θ_k, k, k′).
Step 3. Set (θ_{k′}, u′) = g_{k,k′}(θ_k, u), where g_{k,k′}(·) is a bijection between (θ_k, u) and (θ_{k′}, u′), and u and u′ play the role of matching the dimensions of the two vectors.
Step 4. Accept the new model and parameters (θ_{k′}, k′) with probability equal to the minimum between one and

[ p(y | θ_{k′}, k′) p(θ_{k′}) p(k′) / ( p(y | θ_k, k) p(θ_k) p(k) ) ] × [ J(k′ → k) q(u′ | θ_{k′}, k′, k) / ( J(k → k′) q(u | θ_k, k, k′) ) ] × | ∂ g_{k,k′}(θ_k, u) / ∂(θ_k, u) |,

i.e. the posterior ratio times the proposal ratio times the Jacobian of the transformation.
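
A toy sketch of the algorithm on a hypothetical pair of nested models (not an example from the slides): M_1 has no free parameter (θ = 0) and M_2 has a free mean θ ~ N(0, τ²). The jump always proposes the other model, dimension matching uses u ~ q = N(ȳ, 1/n) with the identity map, so the Jacobian is one; the equal model priors cancel in the ratio.

```python
import numpy as np
from scipy import stats

# Hypothetical models:  M1: y_i ~ N(0, 1)   vs   M2: y_i ~ N(theta, 1), theta ~ N(0, tau2)
rng = np.random.default_rng(2)
y = rng.normal(0.3, 1.0, size=30)
n, ybar, tau2 = len(y), y.mean(), 10.0

def log_lik(theta):                          # log f(y | theta); theta = 0 under M1
    return stats.norm.logpdf(y, theta, 1.0).sum()

q = stats.norm(ybar, np.sqrt(1.0 / n))       # proposal for the new parameter when jumping 1 -> 2

k, theta = 1, 0.0
L, visits_m2 = 20_000, 0
for _ in range(L):
    if k == 1:                               # propose a jump 1 -> 2
        u = q.rvs(random_state=rng)
        log_A = (log_lik(u) + stats.norm.logpdf(u, 0.0, np.sqrt(tau2))) \
                - (log_lik(0.0) + q.logpdf(u))
        if np.log(rng.uniform()) < log_A:
            k, theta = 2, u
    else:                                    # propose a jump 2 -> 1 (reciprocal ratio)
        log_A = (log_lik(0.0) + q.logpdf(theta)) \
                - (log_lik(theta) + stats.norm.logpdf(theta, 0.0, np.sqrt(tau2)))
        if np.log(rng.uniform()) < log_A:
            k, theta = 1, 0.0
    if k == 2:                               # fixed-k (within-model) update: conjugate draw of theta
        post_var = 1.0 / (n + 1.0 / tau2)
        theta = rng.normal(post_var * n * ybar, np.sqrt(post_var))
        visits_m2 += 1

print("estimated Pr(M2 | y):", visits_m2 / L)
```

The final line is exactly the estimator of Pr(k | y) described on the next slide: the relative frequency of visits to each model.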

Estimating p(k | y). Looping through steps 1-4 above L times produces a sample {k_l, l = 1, ..., L} of model indicators, and Pr(k | y) can be estimated by

P̂r(k | y) = (1/L) Σ_{l=1}^L 1_k(k_l),

where 1_k(k_l) = 1 if k_l = k and zero otherwise.

Choice of q(u | θ_k, k, k′). The choice of the proposal probabilities J(k → k′) and the proposal densities q(u | θ_k, k, k′) must be made cautiously, especially in highly parameterized problems.

Independence sampler: if all parameters of the proposed model are generated from the proposal distribution, then (θ_{k′}, u′) = (u, θ_k) and the Jacobian is one.

Standard Metropolis-Hastings: when the proposed model k′ equals the current model k, the loop through steps 1-4 corresponds to the traditional Metropolis-Hastings algorithm.

Optimal choice. If p(θ_k | y, k) is available in closed form for each M_k, then one can take q(u | θ_k, k, k′) = p(θ_{k′} | y, k′), and the acceptance probability reduces to the minimum between one and

p(k′) p(y | k′) J(k′ → k) / [ p(k) p(y | k) J(k → k′) ],

since p(y | θ_k, k) p(θ_k | k) = p(θ_k | y, k) p(y | k). The Jacobian equals one and p(y | k) is available in closed form. If J(k′ → k) = J(k → k′), the acceptance probability is the posterior odds ratio of M_{k′} to M_k. In this case the move is automatically accepted when M_{k′} has higher posterior probability than M_k; otherwise the posterior odds ratio determines how likely the chain is to move to a model with lower posterior probability.

Metropolized Carlin-Chib. Let Θ = (θ_k, θ_{−k}) be the vector containing the parameters of all competing models. The joint posterior of (Θ, k) is

p(Θ, k | y) ∝ p(k) p(y | θ_k, k) p(θ_k | k) p(θ_{−k} | θ_k, k),

where the p(θ_{−k} | θ_k, k) are pseudo-prior densities. Carlin and Chib (1995) proposed a Gibbs sampler whose full posterior conditional distributions are

p(θ_k | y, k′, θ_{−k}) ∝ p(y | θ_k, k) p(θ_k | k)   if k′ = k,
p(θ_k | y, k′, θ_{−k}) ∝ p(θ_k | k′)               if k′ ≠ k,

and

p(k | Θ, y) ∝ p(y | θ_k, k) p(k) Π_{m ∈ M} p(θ_m | k).

Notice that the pseudo-prior densities and the RJMCMC proposal densities play similar roles. As a matter of fact, Carlin and Chib (1995) suggest using pseudo-prior distributions that are close to the posterior distributions within each competing model. The main problem with Carlin and Chib's Gibbs sampler is the need to evaluate and draw from the pseudo-prior distributions of all models at each iteration of the MCMC scheme. This problem is greatly exacerbated when the number of competing models is large.

Dellaportas, Forster and Ntzoufras (1998) and Godsill (1998) proposed Metropolizing Carlin and Chib's Gibbs sampler:

Step 0. Current state: (θ_k, k).
Step 1. Sample a proposed model k′ from J(k → k′).
Step 2. Sample θ_{k′} from the pseudo-prior p(θ_{k′} | k).
Step 3. Accept with probability min(1, A), where

A = [ p(y | θ_{k′}, k′) p(k′) J(k′ → k) Π_{m ∈ M} p(θ_m | k′) ] / [ p(y | θ_k, k) p(k) J(k → k′) Π_{m ∈ M} p(θ_m | k) ]
  = [ p(y | θ_{k′}, k′) p(k′) J(k′ → k) p(θ_{k′} | k′) p(θ_k | k′) ] / [ p(y | θ_k, k) p(k) J(k → k′) p(θ_k | k) p(θ_{k′} | k) ].

Pseudo-priors and RJMCMC proposals play similar roles, and the closer they are to the competing models' posterior distributions, the better the mixing of the sampler.

Bayesian model averaging. See Hoeting, Madigan, Raftery and Volinsky (1999), Statistical Science, 14. Let M denote the set that indexes all entertained models. Assume that Δ is an outcome of interest, such as a future value y_{t+k} or an elasticity that is well defined across models. The posterior distribution of Δ is

p(Δ | y) = Σ_{m ∈ M} p(Δ | m, y) Pr(m | y),

for data y and posterior model probability Pr(m | y) ∝ p(y | m) Pr(m), where Pr(m) is the prior model probability.

Gelfand and Ghosh (1998) introduced a posterior predictive criterion that, under squared error loss, favors the model M_j minimizing

D_j^G = P_j^G + G_j^G,

where

P_j^G = Σ_{t=1}^n V(ỹ_t | y, M_j)   and   G_j^G = Σ_{t=1}^n [ y_t − E(ỹ_t | y, M_j) ]²,

and (ỹ_1, ..., ỹ_n) are predictions (replicates) of y. The first term, P_j^G, is a penalty for complexity; the second term, G_j^G, accounts for goodness of fit.
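
A minimal sketch of the criterion, assuming posterior predictive replicates of the data have already been simulated (one replicate of the full data vector per posterior draw).

```python
import numpy as np

def gelfand_ghosh(y, yrep):
    """Gelfand & Ghosh criterion under squared error loss.

    y:    observed data, shape (n,)
    yrep: posterior predictive replicates, shape (M, n), i.e. M draws of (ytilde_1, ..., ytilde_n)
    Returns (D, P, G) with D = P + G, P = sum_t V(ytilde_t | y), G = sum_t [y_t - E(ytilde_t | y)]^2.
    """
    y, yrep = np.asarray(y, dtype=float), np.asarray(yrep, dtype=float)
    P = yrep.var(axis=0).sum()                   # penalty: posterior predictive variances
    G = ((y - yrep.mean(axis=0)) ** 2).sum()     # goodness of fit
    return P + G, P, G
```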

More general losses. Gelfand and Ghosh (1998) also derived the criterion for more general error loss functions. The expectations E(ỹ_t | y, M_j) and variances V(ỹ_t | y, M_j) are computed under the posterior predictive density, i.e.

E[h(ỹ_t) | y, M_j] = ∫∫ h(ỹ_t) f(ỹ_t | y, θ_j, M_j) π(θ_j | y, M_j) dθ_j dỹ_t,

for h(ỹ_t) = ỹ_t and h(ỹ_t) = ỹ_t². The above integral can be approximated via Monte Carlo.

Deviance Information Criterion. See Spiegelhalter, Best, Carlin and van der Linde (2002), JRSS-B, 64. If θ̃ = E(θ | y) and D(θ) = −2 log p(y | θ) is the deviance, then the DIC generalizes the AIC:

DIC = D̄ + p_D = goodness of fit + complexity,

where D̄ = E_{θ|y}[D(θ)] and p_D = D̄ − D(θ̃). Here p_D is the effective number of parameters. Small values of DIC suggest a better-fitting model.

DIC is computationally attractive since its two terms are easily computed during an MCMC run. Let θ^(1), ..., θ^(M) be an MCMC sample from p(θ | y). Then

D̄ ≈ (1/M) Σ_{i=1}^M D(θ^(i)) = −(2/M) Σ_{i=1}^M log p(y | θ^(i))

and

D(θ̃) ≈ D(θ̄) = −2 log p(y | θ̄),   where θ̄ = (1/M) Σ_{i=1}^M θ^(i).
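
A minimal sketch of these two computations, assuming a user-supplied log-likelihood and an array of posterior draws.

```python
import numpy as np

def dic(log_lik, draws):
    """Deviance information criterion from an MCMC sample.

    log_lik(theta) returns log p(y | theta); draws is an (M, d) array from p(theta | y).
    D(theta) = -2 log p(y | theta);  DIC = Dbar + pD,  pD = Dbar - D(thetabar).
    Returns (DIC, pD).
    """
    draws = np.asarray(draws, dtype=float)
    if draws.ndim == 1:
        draws = draws[:, None]
    D = -2.0 * np.array([log_lik(th) for th in draws])
    Dbar = D.mean()
    D_at_mean = -2.0 * log_lik(draws.mean(axis=0))   # plug-in deviance at the posterior mean
    pD = Dbar - D_at_mean
    return Dbar + pD, pD
```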

Example: cycles-to-failure times. Cycles-to-failure times for airplane yarns. Gamma, log-normal and Weibull models:

M_1: y_i ~ G(α, β),  α, β > 0,
M_2: y_i ~ LN(µ, σ²),  µ ∈ R, σ² > 0,
M_3: y_i ~ Weibull(γ, δ),  γ, δ > 0,

for i = 1, ..., n.

Under M_2, µ and σ² are the mean and the variance of log y_i, respectively. Under M_3,

p(y_i | γ, δ) = γ y_i^{γ−1} δ^{−γ} exp{ −(y_i/δ)^γ }.

Flat priors were considered for

θ_1 = (log α, log β),  θ_2 = (µ, log σ²),  θ_3 = (log γ, log δ).

It is also easy to see that

E(y | θ_1, M_1) = α/β,              V(y | θ_1, M_1) = α/β²,
E(y | θ_2, M_2) = exp{µ + 0.5σ²},   V(y | θ_2, M_2) = exp{2µ + σ²}(e^{σ²} − 1),
E(y | θ_3, M_3) = δ Γ(1/γ)/γ,       V(y | θ_3, M_3) = δ² [ 2Γ(2/γ) − Γ(1/γ)²/γ ] / γ.

Weighted resampling schemes, with bivariate normal importance functions, were used to sample from the posterior distributions. Proposals: q_i(θ_i) = f_N(θ_i; θ̃_i, Ṽ_i), with

θ̃_1 = (0.15, 0.2),   Ṽ_1 = diag(0.15, 0.2),
θ̃_2 = (5.16, 0.26),  Ṽ_2 = diag(0.087, 0.085),
θ̃_3 = (0.47, 5.51),  Ṽ_3 = diag(0.087, 0.101).

Posterior means, standard deviations and 95% credibility intervals:

M_1 (α, β):  2.24, 0.21, (1.84, 2.68);  0.01, 0.001, (0.008, 0.012)
M_2 (µ, σ²): 5.16, 0.06, (5.05, 5.27);  0.77, 0.04, (0.69, 0.86)
M_3 (γ, δ):  1.60, 0.09, (1.42, 1.79);  , 13.88, (222.47, )

DIC indicates that the Gamma and the Weibull models are relatively similar, with the Weibull performing slightly better. (Table: DIC values for M_1 Gamma, M_2 Log-normal and M_3 Weibull.)
