Embedding Supernova Cosmology into a Bayesian Hierarchical Model

Xiyun Jiao, Statistics Section, Department of Mathematics, Imperial College London. Joint work with David van Dyk, Roberto Trotta & Hikmatali Shariff. ICHASC Talk, Jan 27, 2015.

Outline: 1. Algorithm Review; 2. Combining Strategies; 3. Surrogate Distribution; 4. Extensions on Cosmological Model; 5. Conclusion.

Problem Setting
Goal: Sample from the posterior distribution p(ψ | Y) using Gibbs-type samplers.
Special case: the Data Augmentation (DA) algorithm [Tanner, M. A. and Wong, W. H. (1987)], with ψ = (θ, Y_mis). The DA algorithm proceeds as: [Y_mis | θ] → [θ | Y_mis]. Stationary distribution: p(Y_mis, θ | Y).
The DA algorithm and Gibbs samplers are easy to implement, but converge slowly!
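
To make the two-step recipe concrete, here is a minimal DA sketch for the toy model used on the "Understanding ASIS" slide below (Y | Y_mis, θ ~ N(Y_mis, 1), Y_mis | θ ~ N(θ, V), flat prior on θ); function names and defaults are mine, not the talk's code.

```python
import numpy as np

def da_sampler(Y, V=10.0, n_iter=5000, theta0=0.0, rng=None):
    """Plain DA for: Y | Y_mis, theta ~ N(Y_mis, 1); Y_mis | theta ~ N(theta, V)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = theta0
    draws = np.empty(n_iter)
    for t in range(n_iter):
        # [Y_mis | theta, Y]: posterior precision 1 + 1/V, precision-weighted mean
        prec = 1.0 + 1.0 / V
        y_mis = rng.normal((Y + theta / V) / prec, np.sqrt(1.0 / prec))
        # [theta | Y_mis]: flat prior => theta | Y_mis ~ N(Y_mis, V)
        theta = rng.normal(y_mis, np.sqrt(V))
        draws[t] = theta
    return draws

draws = da_sampler(Y=1.5)
print(draws.mean(), draws.std())  # target posterior p(theta | Y) is N(Y, 1 + V)
```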

Algorithm Review
[Diagram: family of MCMC methods — DA Algorithm (1987), Gibbs Sampler (1993), MDA (1999), PCG (2008), ASIS (2011). Blue rectangle = expand parameter space; yellow ellipse = change conditioning strategy.]

Marginal Data Augmentation (MDA) [Meng, X.-L. and van Dyk, D. A. (1999); Liu, J. S. and Wu, Y. N. (1999)]
MDA introduces a working parameter α into p(Y, Y_mis | θ) via Y_mis [e.g., Ỹ_mis = F_α(Y_mis)], such that ∫ p(Ỹ_mis, Y | θ, α) dỸ_mis = p(Y | θ).
If the prior distribution of α is proper, MDA proceeds as: [α, Ỹ_mis | θ] → [α, θ | Ỹ_mis].
MDA improves convergence by increasing variability in the augmented data and reducing the augmented information.
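
The talk does not spell out a working parameter for the toy model; as one concrete and purely illustrative possibility, here is a shift-type scheme Ỹ_mis = Y_mis − α with proper prior α ~ N(0, ω²), for the same toy model as above.

```python
import numpy as np

def mda_sampler(Y, V=10.0, omega2=100.0, n_iter=5000, theta0=0.0, rng=None):
    """MDA with shift working parameter: Ytil = Y_mis - alpha, alpha ~ N(0, omega2)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = theta0
    draws = np.empty(n_iter)
    for t in range(n_iter):
        # Step 1: [alpha, Ytil | theta, Y]. Marginally alpha keeps its prior,
        # because p(Y | theta, alpha) = N(Y; theta, 1 + V) is free of alpha.
        alpha = rng.normal(0.0, np.sqrt(omega2))
        prec = 1.0 + 1.0 / V
        y_mis = rng.normal((Y + theta / V) / prec, np.sqrt(1.0 / prec))
        y_til = y_mis - alpha
        # Step 2: [alpha, theta | Ytil, Y]. Integrating theta (flat prior) out gives
        # alpha | Ytil, Y ~ N((Y - Ytil)/(1 + 1/omega2), 1/(1 + 1/omega2));
        # then theta | alpha, Ytil ~ N(Ytil + alpha, V).
        prec_a = 1.0 + 1.0 / omega2
        alpha = rng.normal((Y - y_til) / prec_a, np.sqrt(1.0 / prec_a))
        theta = rng.normal(y_til + alpha, np.sqrt(V))
        draws[t] = theta
    return draws
```

As ω² grows, Step 2 effectively draws θ from its marginal posterior, illustrating how a diffuse proper working prior speeds mixing.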

Ancillarity-Sufficiency Interweaving Strategy (ASIS) [Yu, Y. and Meng, X.-L. (2011)]
ASIS considers a pair of special DA schemes:
Sufficient augmentation Y_mis,S: p(Y | Y_mis,S, θ) is free of θ. Ancillary augmentation Y_mis,A: p(Y_mis,A | θ) is free of θ. Given θ, Y_mis,A = F_θ(Y_mis,S).
ASIS interweaves [θ | Y_mis,S] into the DA algorithm w.r.t. Y_mis,A:
[Y_mis,S | θ] → [θ | Y_mis,S] → [Y_mis,A | Y_mis,S, θ] → [θ | Y_mis,A],
which collapses to [Y_mis,S | θ] → [Y_mis,A | Y_mis,S] → [θ | Y_mis,A].
ASIS gains efficiency by taking advantage of the "beauty-and-beast" feature of its two parent DA algorithms.

Understanding ASIS
Model: Y | (Y_mis, θ) ~ N(Y_mis, 1), Y_mis | θ ~ N(θ, V), p(θ) ∝ 1.
ASIS: Y_mis,S = Y_mis, Y_mis,A = Y_mis − θ, so the sweep is [Y_mis,S | θ] → [θ | Y_mis,S] → [Y_mis,A | Y_mis,S, θ] → [θ | Y_mis,A].
[Figure: in the (Y_mis, θ) plane, the DA steps move along θ = constant and Y_mis = constant directions; interweaving adds moves along Y_mis − θ = constant.]
More directions: efficient and easy to implement.
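
A sketch of ASIS for exactly this toy model (variable names are mine); each sweep draws θ under both augmentations, with the deterministic transform in between.

```python
import numpy as np

def asis_sampler(Y, V=10.0, n_iter=5000, theta0=0.0, rng=None):
    """ASIS for: Y | Y_mis, theta ~ N(Y_mis, 1); Y_mis | theta ~ N(theta, V).
    Sufficient aug: Y_mis,S = Y_mis; ancillary aug: Y_mis,A = Y_mis - theta."""
    rng = np.random.default_rng() if rng is None else rng
    theta = theta0
    draws = np.empty(n_iter)
    for t in range(n_iter):
        # [Y_mis,S | theta, Y]
        prec = 1.0 + 1.0 / V
        y_s = rng.normal((Y + theta / V) / prec, np.sqrt(1.0 / prec))
        # [theta | Y_mis,S]: flat prior => N(y_s, V)
        theta = rng.normal(y_s, np.sqrt(V))
        # [Y_mis,A | Y_mis,S, theta]: deterministic transform
        y_a = y_s - theta
        # [theta | Y_mis,A, Y]: Y ~ N(y_a + theta, 1), flat prior => N(Y - y_a, 1)
        theta = rng.normal(Y - y_a, 1.0)
        draws[t] = theta
    return draws
```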

Partially Collapsed Gibbs Sampling
Partially Collapsed Gibbs (PCG) [van Dyk, D. A. and Park, T. (2008)]
Model reduction: PCG reduces the conditioning of a Gibbs sampler, replacing some of its conditional distributions with conditionals of marginal distributions of the target.
PCG improves convergence by increasing the variance and jump size of the conditional distributions.
Three stages — marginalization, permutation, trimming — are the tools to transform a Gibbs sampler into a PCG one while maintaining the target stationary distribution.

Examples of PCG Sampling
Example: ψ = (ψ_1, ψ_2, ψ_3, ψ_4); sample from p(ψ | Y).
Gibbs: p(ψ_1 | ψ_2, ψ_3, ψ_4) → p(ψ_2 | ψ_1, ψ_3, ψ_4) → p(ψ_3 | ψ_1, ψ_2, ψ_4) → p(ψ_4 | ψ_1, ψ_2, ψ_3)
PCG I: p(ψ_1 | ψ_2, ψ_3, ψ_4) → p(ψ_2, ψ_3 | ψ_1, ψ_4) → p(ψ_4 | ψ_1, ψ_2, ψ_3)
PCG II: p(ψ_1 | ψ_2, ψ_4) → p(ψ_2, ψ_3 | ψ_1, ψ_4) → p(ψ_4 | ψ_1, ψ_2, ψ_3)
Special cases: blocked and collapsed Gibbs, e.g., PCG I. More interestingly, a PCG sampler can consist of incompatible conditional distributions, e.g., PCG II. Modifying the order of the steps of PCG II may alter its stationary distribution.
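
A runnable sketch of PCG II on a 4-dimensional Gaussian target (my own construction for illustration): marginalizing ψ_3 out of Step 1 amounts to applying the conditional-normal formulas to the sub-covariance over (ψ_1, ψ_2, ψ_4) only.

```python
import numpy as np

def cond_normal(Sigma, draw, cond, x, rng):
    """Draw x[draw] | x[cond] under N(0, Sigma) (illustrative helper)."""
    S_aa = Sigma[np.ix_(draw, draw)]
    S_ab = Sigma[np.ix_(draw, cond)]
    S_bb = Sigma[np.ix_(cond, cond)]
    W = S_ab @ np.linalg.inv(S_bb)
    mean = W @ x[cond]
    cov = S_aa - W @ S_ab.T
    return rng.multivariate_normal(mean, cov)

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
Sigma = A @ A.T + np.eye(4)          # a generic target covariance
x = np.zeros(4)
samples = []
for t in range(20000):
    # PCG II, step order fixed:
    # 1. p(psi_1 | psi_2, psi_4): psi_3 is integrated out, not conditioned on
    x[[0]] = cond_normal(Sigma, [0], [1, 3], x, rng)
    # 2. p(psi_2, psi_3 | psi_1, psi_4)
    x[[1, 2]] = cond_normal(Sigma, [1, 2], [0, 3], x, rng)
    # 3. p(psi_4 | psi_1, psi_2, psi_3)
    x[[3]] = cond_normal(Sigma, [3], [0, 1, 2], x, rng)
    samples.append(x.copy())
print(np.cov(np.array(samples).T).round(2))  # ~ Sigma
```

Swapping Steps 1 and 2 here would break stationarity, since Step 1's draw ignores the current ψ_3.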

Three Stages to Derive a PCG Sampler
(a) Gibbs: p(ψ_1 | ψ_2, ψ_3, ψ_4) → p(ψ_2 | ψ_1, ψ_3, ψ_4) → p(ψ_3 | ψ_1, ψ_2, ψ_4) → p(ψ_4 | ψ_1, ψ_2, ψ_3)
(b) Marginalize: p(ψ_1, ψ_3 | ψ_2, ψ_4) → p(ψ_2 | ψ_1, ψ_3, ψ_4) → p(ψ_2, ψ_3 | ψ_1, ψ_4) → p(ψ_4 | ψ_1, ψ_2, ψ_3)
(c) Permute: p(ψ_1, ψ_3 | ψ_2, ψ_4) → p(ψ_2, ψ_3 | ψ_1, ψ_4) → p(ψ_2 | ψ_1, ψ_3, ψ_4) → p(ψ_4 | ψ_1, ψ_2, ψ_3)
(d) Trim [PCG II]: p(ψ_1 | ψ_2, ψ_4) → p(ψ_2, ψ_3 | ψ_1, ψ_4) → p(ψ_4 | ψ_1, ψ_2, ψ_3)
Intermediate draws of trimmed components are discarded.

Outline: 1. Algorithm Review; 2. Combining Strategies; 3. Surrogate Distribution; 4. Extensions on Cosmological Model; 5. Conclusion.

Combining Different Strategies into One Sampler [Gilks et al. (1995); van Dyk, D. A. and Jiao, X. (2015)]
Cannot sample the conditionals? Embedding Metropolis-Hastings (MH) into Gibbs is standard; embedding MH into PCG requires subtle implementation!
Further improvement in convergence: If several parameters converge slowly, one strategy may be efficient for one parameter but have little effect on the others, while another strategy has the opposite effect; by combining them, we improve all. Even if one strategy alone is useful for all parameters, prefer a combination, as long as the gained efficiency exceeds the extra computational expense.

Background Physics
Nobel Prize (2011): the discovery of the accelerating expansion of the universe. The acceleration is attributed to the existence of dark energy.
Type Ia supernova (SNIa) observations are critical for quantifying the characteristics of dark energy.
Mass > Chandrasekhar threshold (1.44 M_⊙) ⇒ SN explosion.

Standardizable Candles
Common history ⇒ similar absolute magnitudes for SNIa, i.e., M_i ~ N(M_0, σ_int²) ⇒ SNIa are standardizable candles.
Phillips corrections: M_i = M_i^ε − αx_i + βc_i, with M_i^ε ~ N(M_0, σ_ε²); x_i is the stretch correction, c_i the color correction, and σ_ε² < σ_int².

Distance Modulus
Apparent magnitude − absolute magnitude = distance modulus: m_B − M = μ = 5 log₁₀[distance(Mpc)] + 25.
Nearby SN: distance = zc/H_0. Distant SN: μ = μ(z, Ω_m, Ω_Λ, H_0).
Here c is the speed of light, H_0 the Hubble constant, z the redshift, Ω_m the total matter density, and Ω_Λ the dark energy density.
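
A minimal sketch of the distant-SN case, computing μ(z, Ω_m, Ω_Λ, H_0) from the standard FLRW luminosity distance (allowing curvature Ω_k = 1 − Ω_m − Ω_Λ); this is textbook cosmology, not code from the talk.

```python
import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458  # speed of light, km/s

def distance_modulus(z, omega_m, omega_l, h0=70.0):
    """mu(z, Omega_m, Omega_Lambda, H0) via the FLRW luminosity distance."""
    omega_k = 1.0 - omega_m - omega_l
    E = lambda zp: np.sqrt(omega_m * (1 + zp) ** 3
                           + omega_k * (1 + zp) ** 2 + omega_l)
    d_c = (C_KMS / h0) * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]  # comoving, Mpc
    if omega_k > 1e-8:      # open universe
        s = np.sqrt(omega_k)
        d_m = (C_KMS / h0) * np.sinh(s * h0 * d_c / C_KMS) / s
    elif omega_k < -1e-8:   # closed universe
        s = np.sqrt(-omega_k)
        d_m = (C_KMS / h0) * np.sin(s * h0 * d_c / C_KMS) / s
    else:                   # flat
        d_m = d_c
    d_l = (1 + z) * d_m     # luminosity distance, Mpc
    return 5.0 * np.log10(d_l) + 25.0

print(distance_modulus(0.5, 0.3, 0.7))  # ~42.3 for a standard cosmology
```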

Bayesian Hierarchical Model [March et al. (2011)]
Level 1 — errors-in-variables regression: (ĉ_i, x̂_i, m̂_Bi) ~ N((c_i, x_i, m_Bi), Ĉ_i), with m_Bi = μ_i + M_i^ε − αx_i + βc_i, i = 1, ..., n.
Level 2: M_i^ε ~ N(M_0, σ_ε²); x_i ~ N(x_0, R_x²); c_i ~ N(c_0, R_c²).
Priors: Gaussian for M_0, x_0, c_0; uniform for Ω_m, Ω_Λ, α, β, log(R_x), log(R_c), log(σ_ε). z and H_0 are fixed.
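
To make the two levels concrete, a sketch that forward-simulates from this model, reusing distance_modulus from the sketch above; the parameter values and the common diagonal Ĉ_i are hypothetical, chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 288
# Hypothetical "true" values, for illustration only
omega_m, omega_l, h0 = 0.3, 0.7, 72.0
alpha, beta = 0.12, 2.6
M0, sigma_eps = -19.3, 0.1
x0, Rx, c0, Rc = 0.0, 1.0, 0.0, 0.1

z = rng.uniform(0.02, 1.0, n)
mu = np.array([distance_modulus(zi, omega_m, omega_l, h0) for zi in z])

# Level 2: latent covariates and magnitudes
M_eps = rng.normal(M0, sigma_eps, n)
x = rng.normal(x0, Rx, n)
c = rng.normal(c0, Rc, n)
m_B = mu + M_eps - alpha * x + beta * c

# Level 1: noisy measurements with per-SN covariance C_hat_i (diagonal here)
C_hat = np.diag([0.05 ** 2, 0.1 ** 2, 0.1 ** 2])
obs = np.array([rng.multivariate_normal([c[i], x[i], m_B[i]], C_hat)
                for i in range(n)])
c_hat, x_hat, mB_hat = obs.T
```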

Notation and Data
Notation: X (3n×1) = (c_1, x_1, M_1^ε, ..., c_n, x_n, M_n^ε)ᵀ; b (3×1) = (c_0, x_0, M_0)ᵀ; L (3n×1) = (0, 0, μ_1, ..., 0, 0, μ_n)ᵀ;
T (3×3) = [1 0 0; 0 1 0; β −α 1], and A (3n×3n) = Diag(T, ..., T), so that AX + L stacks (c_i, x_i, m_Bi) for every supernova.
Data: a sample of 288 SNIa compiled by Kessler et al. (2009).
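
A direct transcription of this notation into code (the helper name is mine); the closing comment spells out why AX + L reproduces the Phillips relation.

```python
import numpy as np
from scipy.linalg import block_diag

def build_A_L(alpha, beta, mu):
    """Build A = Diag(T, ..., T) and L per the slide; mu is the vector of mu_i."""
    n = len(mu)
    T = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [beta, -alpha, 1.0]])
    A = block_diag(*([T] * n))      # (3n, 3n)
    L = np.zeros(3 * n)
    L[2::3] = mu                    # (0, 0, mu_i) blocks
    return A, L

# With X = (c_1, x_1, M^eps_1, ..., c_n, x_n, M^eps_n), each block of A @ X is
# (c_i, x_i, beta*c_i - alpha*x_i + M^eps_i); adding L gives (c_i, x_i, m_Bi).
```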

Algorithms for the Cosmological Hierarchical Model
MH within Gibbs sampler: the update of (Ω_m, Ω_Λ) needs MH.
MH within PCG sampler: sample (Ω_m, Ω_Λ) and (α, β) without conditioning on (X, b); the updates of both (Ω_m, Ω_Λ) and (α, β) need MH.
ASIS sampler: Y_mis,S for (Ω_m, Ω_Λ) and (α, β): AX + L; Y_mis,A for (Ω_m, Ω_Λ) and (α, β): X.
MH within PCG+ASIS sampler: given (α, β), sample (Ω_m, Ω_Λ) with MH within PCG; given (Ω_m, Ω_Λ), sample (α, β) with ASIS.
For each sampler, run 11,000 iterations with a burn-in of 1,000.
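
As a sketch of the MH ingredient, a generic random-walk update for (Ω_m, Ω_Λ); the `log_post` callable, step size, and positivity check are all assumptions, not the talk's implementation.

```python
import numpy as np

def mh_update_omegas(omega, log_post, step=0.05, rng=None):
    """One random-walk MH update of (Omega_m, Omega_Lambda).
    log_post returns the log of the (marginalized) target density."""
    rng = np.random.default_rng() if rng is None else rng
    prop = omega + rng.normal(0.0, step, size=2)
    if prop[0] <= 0:            # reject proposals outside Omega_m's prior support
        return omega
    log_ratio = log_post(prop) - log_post(omega)
    if np.log(rng.uniform()) < log_ratio:
        return prop
    return omega
```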

Convergence Results of Gibbs and PCG
[Figure: trace plots of Ω_m, Ω_Λ, α, and β under MH within Gibbs and MH within PCG.]

Convergence Results of ASIS and Combining
[Figure: trace plots of Ω_m, Ω_Λ, α, and β under ASIS and the combined PCG+ASIS sampler.]

Effective Sample Size (ESS) per Second
The larger the ESS/sec, the more efficient the algorithm.
[Table: ESS/sec of Ω_m, Ω_Λ, α, and β under the Gibbs, PCG, ASIS, and PCG+ASIS samplers.]

Factor Analysis Model
Model: Y_i ~ N(Z_i β, Σ), Σ = Diag(σ_1², ..., σ_p²), for i = 1, ..., n.
Y_i: (1×p) vector of the i-th observation; Z_i: (1×q) vector of factors, Z_i ~ N(0, I); q < p. β and Σ are the unknown parameters.
Priors: p(β) ∝ 1; σ_j² ~ Inv-Gamma(0.01, 0.01), j = 1, ..., p.
Simulation study: set p = 6, q = 2, and n = 100; generate true values σ_j² ~ Inv-Gamma(1, 0.5) (j = 1, ..., 6) and β_hj ~ N(0, 3²) (h = 1, 2; j = 1, ..., 6).
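
A minimal sketch generating data as in the simulation study (the seed is arbitrary; an Inv-Gamma(a, b) draw is the reciprocal of a Gamma(a, rate b) draw).

```python
import numpy as np

rng = np.random.default_rng(42)
p, q, n = 6, 2, 100

# True values drawn as in the simulation study
sigma2 = 1.0 / rng.gamma(shape=1.0, scale=1.0 / 0.5, size=p)  # Inv-Gamma(1, 0.5)
beta = rng.normal(0.0, 3.0, size=(q, p))                      # beta_hj ~ N(0, 3^2)

Z = rng.normal(size=(n, q))                                   # Z_i ~ N(0, I)
Y = Z @ beta + rng.normal(size=(n, p)) * np.sqrt(sigma2)      # Y_i ~ N(Z_i beta, Sigma)
```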

Algorithms for Factor Analysis
Standard Gibbs sampler: [Z | β, Σ] → {[σ_j² | Z, β, σ_{−j}²]}_{j=1}^p → [β | Z, Σ].
MH within PCG sampler: sample σ_1², σ_3², and σ_4² without conditioning on Z; these updates need MH.
ASIS sampler: Y_mis,A for β: Z_i; Y_mis,S for β: W_i = Z_i β.
MH within PCG+ASIS sampler: given β, update Σ with MH within PCG; given Σ, update β with ASIS.
For each sampler, run 11,000 iterations with a burn-in of 1,000.
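
A sketch of one sweep of the standard Gibbs sampler above, using the conjugate conditionals implied by p(β) ∝ 1 and σ_j² ~ Inv-Gamma(0.01, 0.01); the function and variable names are mine, not the talk's implementation.

```python
import numpy as np

def gibbs_sweep(Y, Z, beta, sigma2, rng):
    """One sweep of the standard Gibbs sampler for the factor model."""
    n, p = Y.shape
    q = Z.shape[1]
    # [Z | beta, Sigma]: rows are independent normals with common covariance
    prec = np.eye(q) + (beta / sigma2) @ beta.T        # I + beta Sigma^-1 beta^T
    Vz = np.linalg.inv(prec)
    M = (Y / sigma2) @ beta.T @ Vz
    Z = M + rng.normal(size=(n, q)) @ np.linalg.cholesky(Vz).T
    # [sigma_j^2 | Z, beta]: conjugate inverse-gamma updates
    R = Y - Z @ beta
    shape = 0.01 + 0.5 * n
    rate = 0.01 + 0.5 * (R ** 2).sum(axis=0)
    sigma2 = rate / rng.gamma(shape, 1.0, size=p)      # Inv-Gamma draws
    # [beta | Z, Sigma]: flat prior => independent normal regressions per column
    G = np.linalg.inv(Z.T @ Z)
    B_hat = G @ Z.T @ Y
    beta = B_hat + np.linalg.cholesky(G) @ rng.normal(size=(q, p)) * np.sqrt(sigma2)
    return Z, beta, sigma2
```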

Convergence Results of the Factor Analysis Model
[Figure: trace plots of log(σ_1²) and β under the Gibbs, PCG, ASIS, and PCG+ASIS samplers.]

Effective Sample Size (ESS) per Second
The larger the ESS/sec, the more efficient the algorithm.
[Table: ESS/sec of log(σ_1²) and β under the Gibbs, PCG, ASIS, and PCG+ASIS samplers.]

Outline: 1. Algorithm Review; 2. Combining Strategies; 3. Surrogate Distribution; 4. Extensions on Cosmological Model; 5. Conclusion.

Bivariate Surrogate Distribution
Target distribution: p(ψ_1, ψ_2). Surrogate distribution: π(ψ_1, ψ_2), with π(ψ_1) = p(ψ_1), π(ψ_2) = p(ψ_2), and a lower correlation between ψ_1 and ψ_2 under π than under p.
Sampler S.1: p(ψ_1 | ψ_2) → p(ψ_2 | ψ_1)
Sampler S.2: π(ψ_1 | ψ_2) → p(ψ_2 | ψ_1)
Sampler S.3: π(ψ_1 | ψ_2) → π(ψ_2 | ψ_1)
Stationary distribution of Samplers S.1 and S.2: p(ψ_1, ψ_2); of Sampler S.3: π(ψ_1, ψ_2).
Conditions for Sampler S.2 to maintain the target: π(ψ_1) = p(ψ_1), π(ψ_2) = p(ψ_2), and the step order is fixed.

Comparison of Samplers S.1–S.3
Example: p(ψ_1, ψ_2) is bivariate normal with standard normal margins and correlation ρ; π(ψ_1, ψ_2) is bivariate normal with the same margins and correlation ρ_π, with |ρ_π| < |ρ|.
[Figure: convergence rates of Samplers S.1, S.2, and S.3 as a function of ρ_π.]
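
A sketch of Sampler S.2 for this bivariate normal example; the slide's numerical correlations were garbled in extraction, so the ρ and ρ_π below are illustrative. After each full scan the chain sits at p, as the printed sample correlation confirms.

```python
import numpy as np

rho, rho_pi = 0.95, 0.3   # illustrative values only
rng = np.random.default_rng(7)
psi1, psi2 = 0.0, 0.0
out = []
for t in range(50000):
    # Sampler S.2, step order fixed:
    # Step 1 uses the surrogate conditional pi(psi_1 | psi_2)
    psi1 = rng.normal(rho_pi * psi2, np.sqrt(1 - rho_pi ** 2))
    # Step 2 uses the target conditional p(psi_2 | psi_1)
    psi2 = rng.normal(rho * psi1, np.sqrt(1 - rho ** 2))
    out.append((psi1, psi2))
out = np.array(out)
print(np.corrcoef(out.T)[0, 1])  # ~ rho: the scan-end draws follow p
```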

Ways to Derive Surrogate Distributions
ASIS: for [Y_mis,S | θ] → [Y_mis,A | Y_mis,S] → [θ | Y_mis,A],
π(θ | Y_mis,S) = ∫ p(Y_mis,A | Y_mis,S) p(θ | Y_mis,A) dY_mis,A; π(θ, Y_mis,S) = π(θ | Y_mis,S) p(Y_mis,S).
PCG: intermediate stationary distributions. For PCG II, [ψ_1 | ψ_2, ψ_4] → [ψ_2, ψ_3 | ψ_1, ψ_4] → [ψ_4 | ψ_1, ψ_2, ψ_3], the intermediate stationary distribution ending with Step 1 is π(ψ_1, ψ_2, ψ_3, ψ_4) = p(ψ_2, ψ_3, ψ_4) p(ψ_1 | ψ_2, ψ_4).
MDA: [α, Ỹ_mis | θ] → [α, θ | Ỹ_mis]. p(θ | Ỹ_mis) = ∫ p(α, θ | Ỹ_mis) dα; setting Ỹ_mis = Y_mis gives π(θ | Y_mis), with π(θ, Y_mis) = π(θ | Y_mis) p(Y_mis).

Advantages of the Surrogate Distribution
The surrogate distribution unifies different strategies under a common framework.
For ASIS, a sampler involving the surrogate distribution, but equivalent to the original ASIS sampler, has fewer steps.
If we are only interested in marginal distributions, the surrogate distribution strategy is a promising route to more efficient algorithms.

Outline: 1. Algorithm Review; 2. Combining Strategies; 3. Surrogate Distribution; 4. Extensions on Cosmological Model; 5. Conclusion.

Model Review and New Data
Recall — Level 1, errors-in-variables regression: (ĉ_i, x̂_i, m̂_Bi) ~ N((c_i, x_i, m_Bi), Ĉ_i), with m_Bi = μ_i + M_i^ε − αx_i + βc_i, i = 1, ..., n. Level 2: M_i^ε ~ N(M_0, σ_ε²); x_i ~ N(x_0, R_x²); c_i ~ N(c_0, R_c²).
σ_ε small ⇒ standardizable candle.
Data: the JLA sample of 740 SNIa from Betoule et al. (2014).

Shrinkage Estimation
Low mean squared error estimates of M_i^ε.
[Figure: conditional posterior expectation of M_i^ε versus σ_ε, with 95% CI.]

Shrinkage Error
Reduced standard deviations.
[Figure: conditional posterior standard deviation of M_i^ε versus σ_ε, with 95% CI.]

Systematic Errors
Systematic errors: seven sources of uncertainty, with blocks corresponding to different surveys.
Effect on the cosmological parameters: Ĉ_stat vs. Ĉ_stat + Ĉ_sys.
[Figure: 68% and 95% contours in the (Ω_m, Ω_Λ) plane, with statistical errors only and with statistical plus systematic errors.]

Adjusting for Galaxy Mass: Method I
Method I: split the M_i^ε according to the host-galaxy mass w_i = log₁₀(M_galaxy/M_⊙):
M_i^ε ~ N(M_01, σ_ε1²) if w_i < 10; M_i^ε ~ N(M_02, σ_ε2²) if w_i ≥ 10.
[Figure: densities of the posterior means of M_i^ε in the two host-mass groups.]

Adjusting for Galaxy Mass: Method II
There is much scatter in both M_i^ε and w_i.
[Figure: scatter plot of w_i versus the posterior mean of M_i^ε.]
Treat w_i as a covariate like x_i and c_i, with ŵ_i ~ N(w_i, σ̂_w²): m_Bi = μ_i + M_i^ε − αx_i + βc_i + γw_i.
[Figure: posterior density of γ.]

Model Checking
Model setting: fix (Ω_m, Ω_Λ); m_Bi = μ_i + M_i^ε − αx_i + βc_i; μ_i = μ(z_i, Ω_m, Ω_Λ, H_0) + t(z_i), where t(z_i) is a cubic spline.
Results: [Figure: cubic spline curve fitting (K = 4) of μ_mean − μ_theory against redshift z — red line: posterior mean; gray band: 95% region; black dots: (m̂_Bi − M_0 + αx̂_i − βĉ_i) − μ_i.]
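
The talk does not specify the spline construction; as a sketch under the assumption of a truncated-power cubic basis with K = 4 knots fitted by least squares, on synthetic Hubble residuals standing in for the real ones:

```python
import numpy as np

def cubic_spline_basis(z, knots):
    """Truncated-power cubic spline basis: 1, z, z^2, z^3, (z - k)_+^3."""
    cols = [np.ones_like(z), z, z ** 2, z ** 3]
    cols += [np.clip(z - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

# r_i stands in for (standardized magnitude) - mu_i; synthetic for illustration
rng = np.random.default_rng(3)
z = np.sort(rng.uniform(0.02, 1.0, 200))
r = 0.02 * np.sin(3 * z) + rng.normal(0.0, 0.1, 200)

knots = np.quantile(z, [0.2, 0.4, 0.6, 0.8])       # K = 4 interior knots
B = cubic_spline_basis(z, knots)
coef, *_ = np.linalg.lstsq(B, r, rcond=None)
t_hat = B @ coef                                   # fitted t(z)
```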

Outline: 1. Algorithm Review; 2. Combining Strategies; 3. Surrogate Distribution; 4. Extensions on Cosmological Model; 5. Conclusion.

Conclusion
Summary: The combining-strategy and surrogate-distribution samplers are useful for improving convergence efficiency. The hierarchical Gaussian model reflects the underlying physical understanding of supernova cosmology.
Future work: More numerical examples to illustrate the algorithms; complete the theory of the surrogate distribution strategy; embed this hierarchical model into a model for the full time series of the supernova explosion, using a Gaussian process to impute apparent magnitudes over time.
