Default Priors and Efficient Posterior Computation in Bayesian Factor Analysis


1 Default Priors and Efficient Posterior Computation in Bayesian Factor Analysis. January 16, 2010

2 Presented by Eric Wang, Duke University

3 Outline: Background and Motivation; A Brief Review of Parameter Expansion Literature; Treating the Number of Factors as Fixed; Treating the Number of Factors as Unknown

4 Background and Motivation. Factor models provide flexible and sparse representations of multivariate data and take the form
y_i = Λη_i + ε_i, ε_i ~ N_p(0, Σ). (1)
Markov Chain Monte Carlo (MCMC) algorithms are commonly used for parameter inference. Often, conditionally conjugate priors are chosen for the model parameters to facilitate straightforward posterior computation by a Gibbs sampler. However, the traditionally used priors lead to several challenges that limit the performance of Bayesian factor models.

5 Challenges posed by the standard Bayesian FA construction. Generally, knowledge of reasonable or plausible hyperparameter values is limited. A common hierarchical structure uses normal and inverse-gamma priors for the factor loadings and residual variances, respectively. Known issues:
- Improper posteriors arise in the limiting case as the prior variance of the normal or inverse-gamma prior grows large.
- Proper but diffuse priors do not solve the problem either.
- Slow mixing is observed even when informative priors are used.

6 Parameter expansion as a solution. This paper proposes a novel application of parameter expansion (PX) that yields default priors and leads to substantially improved mixing and reliable posterior computation. PX is attractive because it allows new families of priors to be introduced, in this case t and folded-t priors. The authors also propose an efficient PX Gibbs sampling scheme:
- Draw samples from conventional conditionally conjugate distributions in an expanded working model.
- Use a post-processing step to transform each draw back to the inferential model (1).
Finally, the authors propose a way to allow uncertainty in the number of factors.

7 Liu et al., 1998. Parameter Expansion to Accelerate EM: The PX-EM Algorithm (Liu et al., 1998) proposed PX as a way to accelerate EM inference. The authors introduced an auxiliary variable to reduce coupling between variables in the original model. Using a simple hierarchical model, PX-EM performs a covariance adjustment at every M-step to correct for the imputed value of the auxiliary variable and its fixed expectation under the non-expanded model. The authors also showed that when the model parameters satisfy θ = θ_MLE, the deviation between the expanded and non-expanded models disappears.

8 Qi and Jaakkola, 2006. Parameter Expanded Variational Bayesian Methods (Qi and Jaakkola, 2006) proposed PX-VB for probit regression. The authors used the same concept as in PX-EM, suitably adapted to VB for probit regression. Significant efficiency gains were seen when the method was applied to the relevance vector machine (RVM). The authors claim that, empirically, PX-VB solutions are similar to VB solutions, and include plots which show a 15 reduction in the number of iterations.

9 Gelman, 2004. Parameterization and Bayesian Modeling (Gelman, 2004) showed that PX can induce new families of priors by applying a redundant multiplicative reparameterization of the original model. The reparameterization can induce an implicit folded noncentral t distribution. Appealing special cases of the folded noncentral t include the half-t, uniform, and proper half-Cauchy distributions. Although the folded noncentral t distribution is itself not conditionally conjugate in the Bayesian hierarchical setting, the individual components used to induce it are, and straightforward Gibbs sampling can be performed.

10 Recall the original model specification
y_i = Λη_i + ε_i, ε_i ~ N_p(0, Σ), (2)
where Λ is a p × k matrix of factor loadings, η_i = (η_i1, ..., η_ik)' ~ N(0, I_k) is a vector of latent factors, and ε_i is the residual with diagonal covariance matrix Σ = diag(σ²_1, ..., σ²_p). To ensure identifiability, Λ is assumed to have a full-rank lower triangular structure. The diagonal elements of Λ have truncated normal priors, the lower triangular elements are given normal priors, and σ²_1, ..., σ²_p have inverse-gamma priors. Note that if we marginalize out η_i, then y_i ~ N_p(0, Ω) where Ω = ΛΛ' + Σ.
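To make the setup concrete, here is a minimal NumPy sketch (not from the paper; the dimensions and parameter values are illustrative) that simulates data from the inferential model (2) with a lower triangular Λ and checks that the sample covariance is close to Ω = ΛΛ' + Σ.

```python
import numpy as np

rng = np.random.default_rng(0)
p, k, n = 7, 2, 100          # illustrative sizes, not taken from the paper

# Lower-triangular loadings with positive diagonal (the identifiability constraint)
Lambda = np.tril(rng.normal(size=(p, k)))
Lambda[np.arange(k), np.arange(k)] = np.abs(Lambda[np.arange(k), np.arange(k)])

sigma2 = rng.uniform(0.05, 0.3, size=p)          # residual variances
Sigma = np.diag(sigma2)

eta = rng.normal(size=(n, k))                    # latent factors eta_i ~ N(0, I_k)
eps = rng.normal(size=(n, p)) * np.sqrt(sigma2)  # eps_i ~ N_p(0, Sigma), Sigma diagonal
Y = eta @ Lambda.T + eps                         # y_i = Lambda eta_i + eps_i

# Marginal covariance Omega = Lambda Lambda' + Sigma
Omega = Lambda @ Lambda.T + Sigma
print(np.round(np.cov(Y, rowvar=False) - Omega, 2))  # sample covariance is close to Omega
```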

11 Inducing Priors through Parameter Expansion. The authors note that the priors on the previous slide yield computationally convenient posteriors via Gibbs sampling but are subject to issues such as slow mixing and wrongly informative priors. In this paper, the authors use PX to induce a heavier-tailed, proper prior on the factor loadings, following Gelman (2004, 2006). They first introduce a redundantly overparameterized working model, which is then related to the inferential model above through a transformation.

12 PX-Factor Model. Define the following PX-factor model:
y_i = Λ*η*_i + ε_i, η*_i ~ N(0, Ψ), ε_i ~ N_p(0, Σ), (3)
where Λ* is an unconstrained p × k lower triangular working factor loading matrix, η*_i is a vector of working latent factors, Ψ = diag(ψ_1, ..., ψ_k), and Σ is as defined previously. Note the redundant overparameterization: after marginalizing out η*_i, y_i ~ N_p(0, Λ*ΨΛ*' + Σ).

13 To relate the working model parameters to the inferential model parameters, the authors employ the transformation
λ_jl = S(λ*_ll) λ*_jl ψ_l^{1/2}, j = 1, ..., p, l = 1, ..., k, η_i = Ψ^{-1/2} η*_i, (4)
where S(x) denotes the sign of x. The following priors are then employed for the working model parameters:
λ*_jl ~ N(0, 1), j = 1, ..., p, l = 1, ..., min(j, k),
λ*_jl ~ δ_0, j = 1, ..., p, l = j + 1, ..., k,
ψ_l ~ Gamma(a_l, b_l), l = 1, ..., k, (5)
where δ_0 is a measure concentrated at 0.
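A small sketch (my own variable names, operating on one MCMC draw at a time) of the post-processing step implied by (4): working loadings are rescaled column-wise by ψ_l^{1/2} and sign-aligned using the diagonal of Λ*, and the working factors are standardized so that the inferential η_i has identity covariance.

```python
import numpy as np

def px_to_inferential(Lambda_star, eta_star, psi):
    """Map one working-model draw (Lambda*, eta*, Psi) to inferential (Lambda, eta), per (4).

    Lambda_star : (p, k) working factor loading matrix
    eta_star    : (n, k) working latent factors
    psi         : (k,)   diagonal of Psi
    """
    signs = np.sign(np.diag(Lambda_star))        # S(lambda*_ll), one sign per column l
    Lambda = signs * Lambda_star * np.sqrt(psi)  # lambda_jl = S(lambda*_ll) lambda*_jl psi_l^(1/2)
    eta = eta_star / np.sqrt(psi)                # eta_i = Psi^(-1/2) eta*_i, so Cov(eta_i) = I_k
    return Lambda, eta
```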

14 Full PX-FA Model. Putting the previous two slides together, the full model is
y_i = Λ*η*_i + ε_i, η*_i ~ N(0, Ψ), ε_i ~ N_p(0, Σ),
λ*_jl ~ N(0, 1), j = 1, ..., p, l = 1, ..., min(j, k),
λ*_jl ~ δ_0, j = 1, ..., p, l = j + 1, ..., k,
ψ_l ~ Gamma(a_l, b_l), l = 1, ..., k, (6)
with the transformation
λ_jl = S(λ*_ll) λ*_jl ψ_l^{1/2}, j = 1, ..., p, l = 1, ..., k, η_i = Ψ^{-1/2} η*_i, (7)
where S(x) denotes the sign of x.

15 Full PX-FA Model. The preceding model can be thought of as a generalization of Gelman (2006). Upon marginalizing out Λ* and Ψ, we obtain t priors for the off-diagonal elements of Λ and half-t priors for the diagonal elements. Note from the transformation λ_jl = S(λ*_ll) λ*_jl ψ_l^{1/2} that the columns of Λ depend on Ψ: columns with large ψ_l will tend to have larger factor loadings, while columns with small ψ_l will tend to have smaller loadings. Inference on the PX-FA model is done by running a standard Gibbs sampler on the working model; each iteration is then transformed back to the inferential model and the working-model samples are discarded.

16 Inference. The PX-FA model can be written row-wise as
y_ij = z_ij'λ*_j + ε_ij, ε_ij ~ N(0, σ²_j), (8)
where z_ij = (η*_i1, ..., η*_{i k_j})', λ*_j = (λ*_j1, ..., λ*_{j k_j})', and k_j = min(j, k). The full conditional posterior of λ*_j is
p(λ*_j | η*, Ψ, Σ, y) = N_{k_j}( (Σ_{0λ_j}^{-1} + σ_j^{-2} Z_j'Z_j)^{-1} (Σ_{0λ_j}^{-1}λ_{0j} + σ_j^{-2} Z_j'Y_j), (Σ_{0λ_j}^{-1} + σ_j^{-2} Z_j'Z_j)^{-1} ),
where λ_{0j} and Σ_{0λ_j} are the prior mean and covariance of λ*_j, Z_j = (z_1j, ..., z_nj)', and Y_j = (y_1j, ..., y_nj)'. (continued on next slide)

17 Inference.
p(η*_i | Λ*, Ψ, Σ, y) = N_k( (Ψ^{-1} + Λ*'Σ^{-1}Λ*)^{-1} Λ*'Σ^{-1}y_i, (Ψ^{-1} + Λ*'Σ^{-1}Λ*)^{-1} ), (9)
p(ψ_l^{-1} | η*, Λ*, Σ, y) = Gamma( a_l + n/2, b_l + (1/2) Σ_{i=1}^n η*²_{il} ), (10)
p(σ_j^{-2} | η*, Λ*, Ψ, y) = Gamma( c_j + n/2, d_j + (1/2) Σ_{i=1}^n (y_ij − z_ij'λ*_j)² ).
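The following NumPy sketch strings these full conditionals together into one sweep of the working-model Gibbs sampler. It is my own illustration, not the authors' code: the hyperparameters are shared across indices for brevity, the prior mean λ_{0j} is taken to be 0 with Σ_{0λ_j} = I (matching the N(0, 1) working priors), and matrix inverses are used directly rather than more numerically stable Cholesky solves.

```python
import numpy as np

def px_gibbs_sweep(Y, Lambda_s, eta_s, psi, sigma2, a=0.5, b=0.5, c=1.0, d=0.2, rng=None):
    """One sweep of the PX Gibbs sampler, following the full conditionals on slides 16-17.

    Y        : (n, p) data
    Lambda_s : (p, k) working loadings Lambda* (lower triangular), updated in place
    eta_s    : (n, k) working latent factors eta*, updated in place
    psi      : (k,)   diagonal of Psi, updated in place
    sigma2   : (p,)   residual variances sigma_j^2, updated in place
    a, b, c, d : Gamma hyperparameters (a_l, b_l, c_j, d_j), shared here for simplicity
    """
    rng = rng or np.random.default_rng()
    n, p = Y.shape
    k = Lambda_s.shape[1]

    # 1) Update the free elements of each row lambda*_j (prior mean 0, prior covariance I)
    for j in range(p):
        kj = min(j + 1, k)
        Z = eta_s[:, :kj]                                  # (n, kj) working factors as regressors
        prec = np.eye(kj) + Z.T @ Z / sigma2[j]            # Sigma_{0 lambda_j}^{-1} + sigma_j^{-2} Z'Z
        cov = np.linalg.inv(prec)
        mean = cov @ (Z.T @ Y[:, j] / sigma2[j])
        Lambda_s[j, :kj] = rng.multivariate_normal(mean, cov)
        Lambda_s[j, kj:] = 0.0                             # upper-triangular elements stay at 0

    # 2) Update the working factors eta*_i jointly, eq. (9)
    prec_eta = np.diag(1.0 / psi) + Lambda_s.T @ (Lambda_s / sigma2[:, None])
    cov_eta = np.linalg.inv(prec_eta)
    means = Y @ (Lambda_s / sigma2[:, None]) @ cov_eta     # row i = cov_eta Lambda*' Sigma^{-1} y_i
    eta_s[:] = means + rng.multivariate_normal(np.zeros(k), cov_eta, size=n)

    # 3) Update psi_l^{-1} ~ Gamma(a + n/2, b + sum_i eta*_{il}^2 / 2), eq. (10)
    psi[:] = 1.0 / rng.gamma(a + n / 2.0, 1.0 / (b + 0.5 * (eta_s ** 2).sum(axis=0)))

    # 4) Update sigma_j^{-2} ~ Gamma(c + n/2, d + sum_i (y_ij - z_ij' lambda*_j)^2 / 2)
    resid = Y - eta_s @ Lambda_s.T
    sigma2[:] = 1.0 / rng.gamma(c + n / 2.0, 1.0 / (d + 0.5 * (resid ** 2).sum(axis=0)))

    return Lambda_s, eta_s, psi, sigma2
```

A full sampler would call px_gibbs_sweep repeatedly and apply the transformation (4) (for example, the px_to_inferential sketch above) to each retained draw, keeping only the transformed samples.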

18 One Factor Model. Here the authors consider a simple example with p = 7 and n = 100, where
λ = (0.995, 0.975, 0.949, 0.922, 0.894, 0.866, 0.837), diag(Σ) = (0.01, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30).
Traditional Gibbs sampler priors: N^+(0, 1) for the diagonal elements and N(0, 1) for the lower triangular elements of Λ. The PX Gibbs sampler uses induced half-Cauchy and Cauchy priors for the diagonal and lower triangular elements of Λ, respectively, obtained by placing N(0, 1) priors on the free elements of Λ*, taking η*_i ~ N(0, Ψ), and giving the diagonal elements of Ψ Gamma(1/2, 1/2) priors.

19 One Factor Model. The prior on the noise precision σ_j^{-2} is Gamma(1, 0.2) for both samplers. Both Gibbs samplers were run with a 5000-iteration burn-in. Recall Ω = ΛΛ' + Σ. The authors compare the effective sample size (ESS) and the bias of the posterior means of Ω across 100 simulations. The results show that PX Gibbs sampling yields dramatic improvements in ESS and slightly lower bias. The authors attribute the improvement in ESS to the heavy-tailed induced Cauchy prior.
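For reference, here is a crude sketch of how an ESS figure like the ones compared on the next slides can be computed from a single chain; this is a simple initial-positive-sequence style estimate, not necessarily the exact estimator the authors used.

```python
import numpy as np

def effective_sample_size(chain, max_lag=200):
    """Crude ESS estimate: n / (1 + 2 * sum of leading positive autocorrelations).

    chain : 1-D array of MCMC draws for a single scalar parameter.
    """
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    n = x.size
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x.var() * n)  # acf[0] == 1
    rho = acf[1:max_lag]
    if np.any(rho < 0):
        rho = rho[:np.argmax(rho < 0)]   # truncate at the first negative lag
    return n / (1.0 + 2.0 * rho.sum())
```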

20 One Factor Model: ESS-PX/ESS-Traditional on Ω.

21 One Factor Model: Bias for some values of Ω.

22 Three Factor Model. Here the authors carried out simulations identical to those above, but with known k = 3. The ESS-PX/ESS-Traditional results are shown below.

23 Model Selection - ISPX. The probability of choosing a model with h factors is
Pr(k = h | y) = κ_h π(y | k = h) / Σ_{l=1}^m κ_l π(y | k = l), (11)
where π(y | k = h) is the marginal likelihood under the model with h factors, obtained by integrating ∏_i N_p(y_i; 0, Λ^{(h)}Λ^{(h)'} + Σ) across the priors for Λ^{(h)} and the residual variances Σ, and κ_h is the prior probability P(k = h), h = 1, ..., m. Instead of parameterizing all m models separately, the authors parameterize the k = m factor model and obtain a smaller model with k = h by marginalizing out columns (h + 1) through m.

24 Model Selection - ISPX. The posterior probability can be expressed as
Pr(k = h | y) = O[h : j] BF[h : j] / Σ_{l=1}^m O[l : j] BF[l : j], (12)
where O[h : j] = κ_h / κ_j is the prior odds and BF[h : j] = π(y | k = h) / π(y | k = j) is the Bayes factor. The Bayes factors can be estimated for h = 2, ..., m as
BF[(h − 1) : h] ≈ (1/n) Σ_{i=1}^n p(y | θ_i^{(h)}, k = h − 1) / p(y | θ_i^{(h)}, k = h), (13)
where θ_i^{(h)} = (Λ_i^{(h)}, Σ_i), i = 1, ..., n, are samples from running the model with k = h.

25 Model Selection - ISPX. The Bayes factor comparing any two models can then be obtained by chaining, e.g. BF[1 : m] = BF[1 : 2] · BF[2 : 3] ··· BF[(m − 1) : m]. Setting κ_h = 1/m, the posterior probabilities Pr(k = h | y) can be estimated. The authors call this method Importance Sampling with Parameter Expansion (ISPX).
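A sketch of how the stepwise estimates in (13) could be chained into the posterior model probabilities (12); the function name, the input format, and the log-scale arithmetic are mine, not the paper's.

```python
import numpy as np

def posterior_model_probs(log_lik, kappa=None):
    """Sketch of the ISPX calculation in (11)-(13).

    log_lik : dict mapping h = 2..m to a pair (ll_hm1, ll_h) of (S,) arrays holding
              log p(y | theta_i^{(h)}, k = h-1) and log p(y | theta_i^{(h)}, k = h)
              evaluated at the same posterior draws from the k = h model.
    Returns estimated Pr(k = h | y) for h = 1..m, with kappa_h = 1/m by default.
    """
    m = max(log_lik)
    # Estimate log BF[(h-1):h] by importance sampling, eq. (13), via log-mean-exp
    log_bf_step = {}
    for h in range(2, m + 1):
        ll_hm1, ll_h = log_lik[h]
        log_ratio = ll_hm1 - ll_h
        log_bf_step[h] = np.logaddexp.reduce(log_ratio) - np.log(log_ratio.size)
    # Chain the stepwise Bayes factors: log BF[h:m] = sum_{l=h+1}^{m} log BF[(l-1):l]
    log_bf_vs_m = {m: 0.0}
    for h in range(m - 1, 0, -1):
        log_bf_vs_m[h] = log_bf_vs_m[h + 1] + log_bf_step[h + 1]
    # Posterior model probabilities (12) with prior weights kappa_h
    kappa = kappa or {h: 1.0 / m for h in range(1, m + 1)}
    log_post = np.array([np.log(kappa[h]) + log_bf_vs_m[h] for h in range(1, m + 1)])
    post = np.exp(log_post - np.logaddexp.reduce(log_post))
    return {h: post[h - 1] for h in range(1, m + 1)}
```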

26 Model Selection - PSPX. The authors also adopt the path sampling based approach first used in Lee and Song (2002) for estimating log Bayes factors. This approach constructs a path, indexed by a scalar t ∈ [0, 1], that links two models M_0 and M_1, a method first suggested by Gelman and Meng (1998). Numerical integration is used to approximate the integral over t. This approach is highly accurate but computationally expensive, and is referred to as Path Sampling with Parameter Expansion (PSPX).
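In the Gelman and Meng (1998) formulation, the log Bayes factor is a one-dimensional integral over t of the expected derivative of the log unnormalized density. Once those expectations have been estimated by MCMC at a grid of t values, the remaining step is simple quadrature, sketched below (the grid choice and the MCMC estimation of the expectations are assumed to happen elsewhere).

```python
import numpy as np

def path_sampling_log_bf(u_bar, t_grid):
    """Trapezoidal approximation to log BF = integral over [0,1] of E_t[U(theta, t)] dt.

    u_bar  : (G,) Monte Carlo estimates of E[U | t], e.g. the posterior mean of
             d/dt log p(y, theta | t) from an MCMC run at each grid point.
    t_grid : (G,) increasing grid of t values spanning [0, 1].
    """
    dt = np.diff(t_grid)
    return float(np.sum(dt * (u_bar[1:] + u_bar[:-1]) / 2.0))
```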

27 Categorical Data. Suppose that y_i = (y_i1', y_i2')', where y_i1 is a p_1 × 1 vector of continuous variables and y_i2 is a p_2 × 1 vector of ordered categorical variables. The inferential (non-expanded) model can be generalized as
y_ij = h_j(y*_ij; τ_j), j = 1, ..., p,
y*_i = α + Λη_i + ε_i, η_i ~ N(0, I_k), ε_i ~ N_p(0, Σ),
where α is a vector of Gaussian intercept variables, h_j(·) is the identity link for j = 1, ..., p_1, and for j = p_1 + 1, ..., p it is the probit-type link
h_j(z; τ_j) = Σ_{c=1}^{L_j} c · 1(τ_{j,c−1} < z ≤ τ_{j,c}).
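The probit-type link simply maps the latent y*_ij into the ordered category whose threshold interval contains it. A tiny sketch, assuming the thresholds are passed with −∞ and +∞ as the first and last cut points:

```python
import numpy as np

def probit_type_link(z, thresholds):
    """Ordered-categorical link h_j(z; tau_j): returns c when tau_{j,c-1} < z <= tau_{j,c}.

    thresholds : increasing cut points (tau_{j,0}, ..., tau_{j,L_j}), assumed here to
                 start at -inf and end at +inf for illustration.
    """
    return int(np.searchsorted(thresholds, z, side="left"))
```

For example, probit_type_link(0.3, np.array([-np.inf, 0.0, 1.0, np.inf])) returns 2, since 0 < 0.3 ≤ 1.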

28 Fertility Study. The underlying latent factor of interest in this study is the fertility score of the subjects. Concentration is defined as sperm count / semen volume. Each y_i has three variables based on three different techniques for counting sperm. In addition to the outcomes, each y_i is associated with a vector of covariates x_i = (x_i1, x_i2, x_i3)'. The inferential model includes the covariates at the latent variable level:
y_ij = α_j + λ_j η_i + ε_ij, η_i = β'x_i + δ_i, ε_ij ~ N(0, τ_j^{-1}), δ_i ~ N(0, 1).

29 Fertility Study. The covariate vector x_i is three dimensional, encoding the location of the subject and whether the time since last ejaculation was less than two hours. The PX (working) model is
y_ij = α*_j + λ*_j η*_i + ε_ij, η*_i = μ* + β*'x_i + δ*_i, ε_ij ~ N(0, τ_j^{-1}), δ*_i ~ N(0, ψ),
and the transformations relating the working model parameters to the inferential model are
α_j = α*_j + λ*_j μ*, λ_j = S(λ*_j) λ*_j ψ^{1/2}, β = ψ^{-1/2} β*, η_i = ψ^{-1/2}(η*_i − μ*), δ_i = ψ^{-1/2} δ*_i.

30 The priors for the standard Gibbs sampler, applied to the inferential model
y_ij = α_j + λ_j η_i + ε_ij, η_i = β'x_i + δ_i, ε_ij ~ N(0, τ_j^{-1}), δ_i ~ N(0, 1),
are specified as α_j ~ N(0, 1), λ_j ~ N^+(0, 1), τ_j ~ Gamma(1, 0.2) for j = 1, 2, 3, and β ~ N(0, 10 I_3). The priors for the PX Gibbs sampler, applied to the working model
y_ij = α*_j + λ*_j η*_i + ε_ij, η*_i = μ* + β*'x_i + δ*_i, ε_ij ~ N(0, τ_j^{-1}), δ*_i ~ N(0, ψ),
are specified as α*_j ~ N(0, 1), λ*_j ~ N(0, 1), τ_j ~ Gamma(1, 0.2) for j = 1, 2, 3, μ* ~ N(0, 1), β* ~ N(0, 10 I_3), and ψ ~ Gamma(1/2, 1/2).

31 Trace plots of the intercept terms α_j.

32 Trace plots of the factor loadings λ_j.

33 Additional Results. The ESS-PX/ESS-Traditional ratios for the upper triangular elements of Ω = ΛΛ' + Σ imply that the traditional Gibbs sampler would have to run approximately 200 times longer than the PX Gibbs sampler to achieve the same mixing performance. The absolute biases of these parameters are also reported for both the traditional and the PX Gibbs samplers.

34 Toxicology Study. The purpose of this study is to examine the effect of Anthraquinone in female Fischer rats. Sixty animals were dosed with Anthraquinone at 0, 1875, 3750, 7500 ppm, and one higher level. Body weight and organ weights were recorded. The small sample size makes estimation of the covariance matrix a significant challenge. The authors therefore apply both ISPX and PSPX to these data, setting the maximum number of factors to m = 3, with 100,000 iterations for ISPX and 25,000 iterations per grid point for PSPX. The posterior probabilities of the one-, two-, and three-factor models were computed under both ISPX and PSPX.

35 Toxicology Study.

36 Toxicology Study. Notice that both ISPX and PSPX favor the one factor model. This is supported by examining the magnitudes of the factor loadings. Recall that, due to the prior dependence, the magnitude of the loadings in a column tends to scale with the corresponding ψ_l. The two factor model is very close to the one factor model, and the authors state that the increase in the number of model parameters does not justify the small increase in likelihood, leading them to believe that the PSPX posterior probabilities are most likely correct.

37 Discussion. Factor models offer a flexible dimensionality reduction method for analyzing multivariate data, but the traditional specification of normal and inverse-gamma priors, chosen for computational convenience, leads to slow mixing in MCMC. The authors proposed a default heavy-tailed prior for factor analysis models using parameter expansion, which yields substantially better mixing. Extension of the model to categorical data was shown to be straightforward. Two different methods for computing posterior model probabilities were proposed and shown to work well in practice.

Source paper: Default Priors and Efficient Posterior Computation in Bayesian Factor Analysis. Joyee Ghosh, Institute of Statistics and Decision Sciences, Duke University, Box 90251, Durham, NC 27708, joyee@stat.duke.edu.
