Applicability of subsampling bootstrap methods in Markov chain Monte Carlo


James M. Flegal
University of California, Riverside, CA 92521, jflegal@ucr.edu

Abstract Markov chain Monte Carlo (MCMC) methods allow exploration of intractable probability distributions by constructing a Markov chain whose stationary distribution equals the desired distribution. The output from the Markov chain is typically used to estimate several features of the stationary distribution, such as mean and variance parameters, along with quantiles and so on. Unfortunately, most reported MCMC estimates do not include a clear notion of the associated uncertainty. For expectations, one can assess the uncertainty by estimating the variance in an asymptotic normal distribution of the Monte Carlo error. For general functionals there is no such clear path. This article studies the applicability of subsampling bootstrap methods to assess the uncertainty in estimating general functionals from MCMC simulations.

1 Introduction

This article develops methods to evaluate the reliability of estimators constructed from Markov chain Monte Carlo (MCMC) simulations. MCMC uses computer-generated data to estimate some functional $\theta_\pi$, where $\pi$ is a probability distribution with support $\mathsf{X}$. It has become a standard technique, especially for Bayesian inference, and the reliability of MCMC estimators has already been studied for cases where we are estimating an expected value [9, 13, 19]. Here, we investigate the applicability of subsampling bootstrap methods (SBM) for output analysis of an MCMC simulation. This work is appropriate for general functionals including expectations, quantiles and modes.

The basic MCMC method entails constructing a Harris ergodic Markov chain $X = \{X_0, X_1, X_2, \dots\}$ on $\mathsf{X}$ having invariant distribution $\pi$. The popularity of MCMC methods results from the ease with which an appropriate $X$ can be simulated [4, 20, 23].

Suppose we simulate $X$ for a finite number of steps, say $n$, and use the observed values to estimate $\theta_\pi$ with $\hat\theta_n$. In practice the simulation is run sufficiently long until we have obtained an accurate estimate of $\theta_\pi$. Unfortunately, we have no certain way to know when to terminate the simulation. At present, most analysts use convergence diagnostics for this purpose (for a review see [5]); although easily implemented, this approach says nothing about the quality of $\hat\theta_n$ as an estimate of $\theta_\pi$. Moreover, diagnostics can introduce bias directly into the estimates [6]. The approach advocated here will directly analyze output from an MCMC simulation to establish non-parametric or parametric confidence intervals for $\theta_\pi$. There is already substantial research for the case where $\theta_\pi$ is an expectation, but very little for general quantities.

Calculating and reporting an uncertainty estimate, or confidence interval, allows everyone to judge the reliability of the estimates. The main point is that an uncertainty estimate should be reported along with the point estimate obtained from an MCMC experiment. This may seem obvious to most statisticians, but it is not currently standard practice in MCMC [9, 13, 19].

Outside of toy examples, no matter how long our simulation, there will be an unknown Monte Carlo error, $\hat\theta_n - \theta_\pi$. While it is impossible to assess this error directly, we can estimate the error via a sampling distribution. That is, we need an asymptotic distribution for $\hat\theta_n$ obtained from a Markov chain simulation. Assume $\hat\theta_n$, properly normalized, has a limiting distribution $J_\pi$, specifically, as $n \to \infty$
$$\tau_n \left( \hat\theta_n - \theta_\pi \right) \overset{d}{\to} J_\pi \qquad (1)$$
where $\tau_n \to \infty$. For general dependent sequences, there is a substantial amount of research about obtaining asymptotic distributions for a large variety of $\theta_\pi$. These results are often applicable since the Markov chains in MCMC are special cases of strong mixing processes.

This article addresses how to estimate the uncertainty of $\hat\theta_n$ given a limiting distribution as at (1). Bootstrap methods may be appropriate for this task. Indeed, there is already sentiment that bootstrap methods used in stationary time series are appropriate for MCMC [1, 2, 7, 21]. However, my preliminary work [8] suggests that the SBM has superior computational and finite-sample properties. The basic SBM provides a general approach to constructing asymptotically valid confidence intervals [22]. In short, SBM calculates the desired statistic over subsamples of the chain and then uses these values to approximate the sampling distribution of $\hat\theta_n$. From the subsample values, one can construct a non-parametric confidence interval directly, or estimate the unknown asymptotic variance of $J_\pi$ and construct a parametric confidence interval.

The rest of this article is organized as follows. Section 2 overviews construction of non-parametric and parametric confidence intervals for general quantities $\theta_\pi$ via SBM. Section 3 examines the finite-sample properties in a toy example, and Section 4 illustrates the use of SBM in a realistic example to obtain uncertainty estimates when estimating quantiles.

2 Subsampling bootstrap methods

This section overviews SBM for constructing asymptotically valid confidence intervals for $\theta_\pi$. Aside from a proposed diagnostic [14] and a brief summary for quantiles [11], there has been little investigation of SBM in MCMC. Nonetheless, SBM is widely applicable with only limited assumptions. The main requirement is that $\hat\theta_n$, properly normalized, has a limiting distribution as at (1).

SBM divides the simulation into overlapping subsamples of length $b$ from the first $n$ observations of $X$. In general, there are $n - b + 1$ subsamples, and we calculate the statistic over each subsample. Procedurally, we select a subsample size $b$ such that $b/n \to 0$, $\tau_b/\tau_n \to 0$, $\tau_b \to \infty$ and $b \to \infty$ as $n \to \infty$. If we let $\hat\theta_i$ for $i = 1, \dots, n-b+1$ denote the value of the statistic calculated from the $i$th subsample, the assumptions on $b$ imply, as $n \to \infty$,
$$\tau_b \left( \hat\theta_i - \theta_\pi \right) \overset{d}{\to} J_\pi \quad \text{for } i = 1, \dots, n-b+1.$$
We can then use the values of $\hat\theta_i$ to approximate $J_\pi$ and construct asymptotically valid inference procedures. Specifically, define the empirical distribution of the standardized $\hat\theta_i$'s as
$$L_{n,b}(y) = \frac{1}{n-b+1} \sum_{i=1}^{n-b+1} I\left\{ \tau_b \left( \hat\theta_i - \hat\theta_n \right) \le y \right\}.$$
Further, for $\alpha \in (0,1)$ define
$$L_{n,b}^{-1}(1-\alpha) = \inf\left\{ y : L_{n,b}(y) \ge 1-\alpha \right\} \quad \text{and} \quad J_\pi^{-1}(1-\alpha) = \inf\left\{ y : J_\pi(y) \ge 1-\alpha \right\}.$$

Theorem 1. Let $X$ be a Harris ergodic Markov chain. Assume (1) and that $b/n \to 0$, $\tau_b/\tau_n \to 0$, $\tau_b \to \infty$ and $b \to \infty$ as $n \to \infty$.
1. If $y$ is a continuity point of $J_\pi(\cdot)$, then $L_{n,b}(y) \to J_\pi(y)$ in probability.
2. If $J_\pi(\cdot)$ is continuous at $J_\pi^{-1}(1-\alpha)$, then as $n \to \infty$
$$\Pr\left\{ \tau_n \left( \hat\theta_n - \theta_\pi \right) \le L_{n,b}^{-1}(1-\alpha) \right\} \to 1-\alpha.$$

Proof. Note that the relevant assumption of [22] holds under (1) and the fact that $X$ possesses a unique invariant distribution. The result then follows directly from the corresponding subsampling theorem of [22] and the fact that Harris ergodic Markov chains are strongly mixing [18].
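To make the construction concrete, the following is a minimal Python/NumPy sketch (not code from the paper) of the subsample statistics $\hat\theta_i$ and the empirical distribution $L_{n,b}$. The chain, the choice of statistic (a sample median), and the rate $\tau_b = \sqrt{b}$ used in the demonstration are illustrative assumptions.

```python
import numpy as np

def subsample_stats(chain, b, stat):
    """Statistic evaluated on each of the n - b + 1 overlapping subsamples of length b."""
    x = np.asarray(chain)
    n = len(x)
    return np.array([stat(x[i:i + b]) for i in range(n - b + 1)])

def L_nb(y, chain, b, stat, tau_b):
    """Empirical distribution L_{n,b}(y) of the centered and scaled subsample statistics."""
    theta_n = stat(np.asarray(chain))              # full-sample estimate
    theta_i = subsample_stats(chain, b, stat)      # subsample estimates
    return np.mean(tau_b * (theta_i - theta_n) <= y)

# Illustration: the median of a stationary AR(1) chain, with tau_b = sqrt(b).
rng = np.random.default_rng(1)
n = 10_000
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
b = int(n ** 0.5)
print(L_nb(0.0, x, b, np.median, np.sqrt(b)))      # approximates J_pi(0), about 1/2 here
```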

Theorem 1 provides a consistent estimate of the limiting law $J_\pi$ for Harris ergodic Markov chains through the empirical distribution of the $\hat\theta_i$. Hence a theoretically valid $(1-\alpha)100\%$ non-parametric interval can be expressed as
$$\left[ \hat\theta_n - \tau_n^{-1} L_{n,b}^{-1}(1-\alpha/2),\ \ \hat\theta_n - \tau_n^{-1} L_{n,b}^{-1}(\alpha/2) \right]. \qquad (2)$$
Alternatively, one can also estimate the asymptotic variance [3, 22] using
$$\hat\sigma^2_{SBM} = \frac{\tau_b^2}{n-b+1} \sum_{i=1}^{n-b+1} \left( \hat\theta_i - \hat\theta_n \right)^2. \qquad (3)$$
If $J_\pi$ is Normal, then a $(1-\alpha)100\%$ level parametric confidence interval can be obtained as
$$\left[ \hat\theta_n - t_{n-b,\alpha/2}\, \tau_n^{-1} \hat\sigma_{SBM},\ \ \hat\theta_n + t_{n-b,\alpha/2}\, \tau_n^{-1} \hat\sigma_{SBM} \right]. \qquad (4)$$
SBM is applicable for any $\hat\theta_n$ such that (1) holds and the rate of convergence $\tau_n$ is known, as required in (2)-(4). Implementation requires selection of $b$, the subsample size. We will use the naive choice of $b_n = n^{1/2}$ in later examples. The following sections consider two common quantities where SBM is appropriate: expectations and quantiles.
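As an illustration of how (2)-(4) can be computed, here is a hedged Python/NumPy sketch (not code from the paper). It assumes $\tau_n = \sqrt{n}$ and $\tau_b = \sqrt{b}$, uses `np.quantile` as a convenient stand-in for the inf-based definition of $L_{n,b}^{-1}$, and uses SciPy only for the $t$ quantile appearing in (4).

```python
import numpy as np
from scipy import stats

def sbm_intervals(chain, stat, b=None, alpha=0.05):
    """Non-parametric interval (2) and parametric interval (4), assuming tau_n = sqrt(n)."""
    x = np.asarray(chain)
    n = len(x)
    if b is None:
        b = int(np.floor(np.sqrt(n)))                     # naive choice b_n = n^(1/2), floored
    theta_n = stat(x)
    theta_i = np.array([stat(x[i:i + b]) for i in range(n - b + 1)])
    z = np.sqrt(b) * (theta_i - theta_n)                  # tau_b (theta_i - theta_n)

    # Non-parametric interval (2): invert the empirical law L_{n,b}.
    lo, hi = np.quantile(z, [alpha / 2, 1 - alpha / 2])
    nonparametric = (theta_n - hi / np.sqrt(n), theta_n - lo / np.sqrt(n))

    # Parametric interval (4): variance estimate (3) plus a Normal limit for J_pi.
    sigma2_sbm = np.mean(z ** 2)                          # tau_b^2/(n-b+1) * sum_i (theta_i - theta_n)^2
    half = stats.t.ppf(1 - alpha / 2, df=n - b) * np.sqrt(sigma2_sbm / n)
    parametric = (theta_n - half, theta_n + half)
    return nonparametric, parametric
```

For example, `sbm_intervals(x, np.mean)` gives intervals for an expectation, while passing a quantile statistic gives the intervals used in the examples below; in both cases the $\sqrt{n}$ rate is an assumption that must be justified by a CLT such as (5) or (7).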

2.1 Expectations

Consider estimating an expectation of $\pi$, that is,
$$\theta_\pi = E_\pi g = \int_{\mathsf{X}} g(x)\, \pi(dx).$$
Suppose we use the observed values to estimate $E_\pi g$ with the sample average
$$\bar g_n = \frac{1}{n} \sum_{i=0}^{n-1} g(X_i).$$
The use of this estimator is justified through the Markov chain strong law of large numbers. Further assume a Markov chain CLT holds [18, 26], that is,
$$\sqrt{n}\, (\bar g_n - E_\pi g) \overset{d}{\to} N(0, \sigma^2) \qquad (5)$$
as $n \to \infty$, where $\sigma^2 \in (0, \infty)$. Then we can use (2) or (4) to form non-parametric or parametric confidence intervals, respectively.

Alternatively, one can consider the overlapping batch means (OLBM) variance estimator [10]. As the name suggests, OLBM divides the simulation into overlapping batches of length $b$, resulting in $n - b + 1$ batches for which $\bar Y_j(b) = b^{-1} \sum_{i=0}^{b-1} g(X_{j+i})$ for $j = 0, \dots, n-b$. Then the OLBM estimator of $\sigma^2$ is
$$\hat\sigma^2_{OLBM} = \frac{nb}{(n-b)(n-b+1)} \sum_{j=0}^{n-b} \left( \bar Y_j(b) - \bar g_n \right)^2. \qquad (6)$$
It is easy to show that (3) is asymptotically equivalent to (6).
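For reference, a small sketch (again an illustration, not code from the paper) of the OLBM estimator (6); `np.convolve` is simply a convenient way to form the overlapping batch means. A direct calculation from (3) and (6) shows that, for the sample mean with $\tau_b = \sqrt{b}$, $\hat\sigma^2_{SBM} = \frac{n-b}{n}\, \hat\sigma^2_{OLBM}$, which makes the asymptotic equivalence explicit since $b/n \to 0$.

```python
import numpy as np

def olbm_var(g_vals, b):
    """Overlapping batch means estimator (6) of sigma^2 in the CLT (5) for a sample mean."""
    g = np.asarray(g_vals, dtype=float)
    n = len(g)
    gbar = g.mean()
    # Ybar_j(b) = mean of g[j], ..., g[j+b-1] for j = 0, ..., n-b (length n - b + 1).
    batch_means = np.convolve(g, np.ones(b) / b, mode="valid")
    return n * b / ((n - b) * (n - b + 1)) * np.sum((batch_means - gbar) ** 2)
```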

2.2 Quantiles

It is routine when summarizing an MCMC experiment to include sample quantiles, especially in Bayesian applications. These are based on quantiles of the univariate marginal distributions associated with $\pi$. Let $F$ be the marginal cumulative distribution function, and consider estimating the quantile function of $F$, i.e. the generalized inverse $F^{-1} : (0,1) \to \mathbb{R}$ given by
$$\theta_\pi = F^{-1}(q) = \inf\{ y : F(y) \ge q \}.$$
We will say a sequence of quantile functions converges weakly to a limit quantile function, denoted $F_n^{-1} \Rightarrow F^{-1}$, if and only if $F_n^{-1}(t) \to F^{-1}(t)$ at every $t$ where $F^{-1}$ is continuous. Lemma 21.2 of [28] shows $F_n^{-1} \Rightarrow F^{-1}$ if and only if $F_n \Rightarrow F$. Thus we consider estimating $F$ with the empirical distribution function defined as
$$F_n(y) = \frac{1}{n} \sum_{i=1}^{n} I\{ Y_i \le y \},$$
where $Y = \{Y_1, \dots, Y_n\}$ is the observed univariate sample from $F$ and $I$ is the usual indicator function. The ergodic theorem gives pointwise convergence ($F_n(y) \to F(y)$ for every $y$ almost surely as $n \to \infty$), and the Glivenko-Cantelli theorem extends this to uniform convergence ($\sup_{y \in \mathbb{R}} |F_n(y) - F(y)| \to 0$ almost surely as $n \to \infty$). Letting $Y_{n(1)}, \dots, Y_{n(n)}$ be the order statistics of the sample, the empirical quantile function is given by
$$F_n^{-1}(q) = Y_{n(j)} \quad \text{for } q \in \left( \frac{j-1}{n}, \frac{j}{n} \right].$$
Often the empirical distribution function $F_n$ and the empirical quantile function $F_n^{-1}$ are directly used to estimate $F$ and $F^{-1}$.

Construction of an interval estimate of $F^{-1}(q)$ requires existence of a limiting distribution as at (1). We will assume a CLT exists for the Monte Carlo error [12], that is,
$$\sqrt{n}\left( F_n^{-1}(q) - F^{-1}(q) \right) \overset{d}{\to} N(0, \sigma^2) \qquad (7)$$
as $n \to \infty$, where $\sigma^2 \in (0, \infty)$. Then we can use (2) or (4) to form non-parametric or parametric confidence intervals, respectively, by setting $\hat\theta_i$ to the estimated quantile from the $i$th subsample.
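The order-statistic definition of $F_n^{-1}$ above translates directly into code; the sketch below is an illustration under the same $\sqrt{n}$ assumption, and the helper name is ours rather than the paper's. It can be plugged into the interval construction of Section 2 by using it as the subsample statistic.

```python
import numpy as np

def empirical_quantile(y, q):
    """F_n^{-1}(q) = Y_{n(j)} for q in ((j-1)/n, j/n], i.e. the ceil(nq)-th order statistic."""
    y_sorted = np.sort(np.asarray(y, dtype=float))
    n = len(y_sorted)
    j = int(np.ceil(n * q))          # smallest j with q <= j/n
    return y_sorted[j - 1]           # order statistics are 1-indexed, arrays are 0-indexed

# For the q-th quantile, set theta_i = empirical_quantile(x[i:i+b], q) on each subsample and
# apply (2) or (4) exactly as for expectations, e.g. via the sbm_intervals sketch above:
# sbm_intervals(x, lambda z: empirical_quantile(z, 0.5))
```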

3 Toy example

Consider estimating the quantiles of an Exp(1) distribution, i.e. $f(x) = e^{-x} I(x > 0)$, using the methods outlined above. It is easy to show that $F^{-1}(q) = -\log(1-q)$, so simulation methods are not necessary; accordingly, we use the true values to evaluate the resulting coverage probabilities of the parametric and non-parametric intervals.

Monte Carlo sampling. SBM is also valid using i.i.d. draws from $\pi$, that is, for Monte Carlo simulations. Here the subsamples need not be overlapping, hence there are $N := \binom{n}{b}$ subsamples. Calculation over all $N$ subsamples will often be computationally expensive. Instead, a suitably large $N \ll \binom{n}{b}$ can be selected, resulting in an estimate based on a large number of subsamples rather than all of them.

Consider sampling from $\pi$ using i.i.d. draws. For each simulation, with $n = 10^4$ iterations, CIs were calculated for $q \in \{.025, .1, .5, .9, .975\}$ based on $b \in \{100, 4000\}$. For both values of $b$, calculation of $\hat\sigma^2_{SBM}$ was based on $N = 1000$ random subsamples rather than all $\binom{n}{b}$ subsamples. This procedure was repeated 2000 times to evaluate the resulting confidence intervals; see Table 1 for a summary of the simulation results. For $b = 100$, the mean values of $\hat\sigma^2_{SBM}/\sigma^2$ are close to 1 for all values of $q$, implying there is no systematic bias in the variance estimates. When $q \in \{.1, .5, .9\}$, the coverage probabilities are close to the nominal value of 0.95. For more extreme values of $q \in \{.025, .975\}$, the results are worse, which should not be surprising given $b = 100$. The non-parametric CIs at (2) show a similar trend, though the overall results are considerably worse.

Table 1: Coverage probabilities for the Exp(1) example using the i.i.d. sampler (parametric SBM and non-parametric NP SBM intervals for $b = 100$ and $b = 4000$). Coverage probabilities reported have 0.95 nominal level with standard errors equal to $\sqrt{\hat p(1-\hat p)/2000}$.

One may consider increasing $b$ to improve the results for $q \in \{.025, .975\}$. However, if $b = 4000$ without increasing $n$, the resulting coverage probabilities are significantly worse for both types of CIs (see Table 1). The simulations also show the mean value of $\hat\sigma^2_{SBM}/\sigma^2$ is less than 1, hence the variance estimates are biased downward. Instead, as $b$ increases, the overall simulation effort should also increase. Rather than increasing $b$, it may be useful to consider different quantile estimators, including continuous estimators [16] or a finite-sample correction [22]. Given our interest in MCMC, these were not considered here.

MCMC sampling. Consider sampling from $\pi$ using an independence Metropolis sampler with an Exp($\theta$) proposal [19, 25, 27]. If $\theta = 1$ the sampler simply provides i.i.d. draws from $\pi$. The chain is geometrically ergodic if $0 < \theta < 1$ and sub-geometric (slower than geometric) if $\theta > 1$.

Table 2: Coverage probabilities for the Exp(1) example using the independence Metropolis sampler (parametric SBM and non-parametric NP SBM intervals for $\theta \in \{1/4, 1/2, 2\}$). Coverage probabilities reported have 0.95 nominal level with standard errors equal to $\sqrt{\hat p(1-\hat p)/2000}$.

We calculated intervals for $q \in \{.025, .1, .5, .9, .975\}$; each chain contained $n = 10^4$ iterations and the procedure was repeated 2000 times. The simulations began at $X_0 = 1$, with $\theta \in \{1/4, 1/2, 2\}$, and $b = 100$. Table 2 summarizes the results. For $\theta \in \{1/4, 1/2\}$ and $q \in \{.1, .5, .9, .975\}$, the coverage probabilities are close to the nominal value of 0.95. Increasing $b$ would likely improve the results, but with a concurrent requirement for larger $n$. These limited results for parametric confidence intervals are very encouraging. In contrast, non-parametric CIs derived from (2) perform worse, especially for $q \in \{.025, .1\}$.

When $\theta = 2$, the chain is sub-geometric and it is unclear if a $\sqrt{n}$-CLT holds as at (7). In fact, the independence sampler fails to have a $\sqrt{n}$-CLT at (5) for all suitably non-trivial functions $g$ when $\theta > 2$ [24, 27]. However, it is possible via SBM to obtain parametric and non-parametric CIs at (2) or (4) if one assumes a CLT with rate of convergence $\tau_n = \sqrt{n}$. The results from this simulation are also contained in Table 2. We can see the coverage probabilities are close to the 0.95 nominal level for small quantiles, but this is likely because 0.95 is close to 1. In the case of large quantiles, the results are terrible. This example highlights the importance of obtaining a Markov chain CLT.
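A sketch of the independence Metropolis sampler used in this example (target Exp(1), proposal Exp($\theta$)) is given below; it is an illustration rather than the exact simulation code, but the acceptance ratio $\pi(y)q(x)/\{\pi(x)q(y)\} = \exp\{(\theta-1)(y-x)\}$ follows directly from the two exponential densities.

```python
import numpy as np

def indep_metropolis_exp(n, theta, x0=1.0, seed=0):
    """Independence Metropolis sampler with Exp(1) target and Exp(theta) proposal."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        y = rng.exponential(scale=1.0 / theta)        # Exp(theta) proposal (mean 1/theta)
        # log acceptance ratio: log[pi(y) q(x) / (pi(x) q(y))] = (theta - 1) (y - x).
        if np.log(rng.uniform()) < (theta - 1.0) * (y - x[t - 1]):
            x[t] = y
        else:
            x[t] = x[t - 1]
    return x

# Example: a geometrically ergodic chain (theta = 1/2); compare the 0.9 sample quantile
# with the truth F^{-1}(0.9) = -log(0.1).
chain = indep_metropolis_exp(10_000, theta=0.5, x0=1.0)
print(np.quantile(chain, 0.9), -np.log(1 - 0.9))
```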

4 A realistic example

In this section, we consider the analysis of US government HMO data [15] under a model proposed in [17]. Let $y_i$ denote the individual monthly premium of the $i$th HMO plan for $i = 1, \dots, 341$ and consider a Bayesian version of the following frequentist model
$$y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \varepsilon_i \qquad (8)$$
where the $\varepsilon_i$ are i.i.d. $N(0, \lambda^{-1})$, $x_{i1}$ denotes the centered and scaled average expenses per admission in the state in which the $i$th HMO operates, and $x_{i2}$ is an indicator for New England. (Specifically, if $\tilde x_{i1}$ are the original values and $\bar{\tilde x}_1$ is the overall average expense per admission, then $x_{i1} = (\tilde x_{i1} - \bar{\tilde x}_1)/1000$.) Our analysis is based on the following Bayesian version of (8):
$$y \mid \beta, \lambda \sim N_N\left( X\beta, \lambda^{-1} I_N \right)$$
$$\beta \mid \lambda \sim N_3\left( b, B^{-1} \right)$$
$$\lambda \sim \text{Gamma}(r_1, r_2)$$
where $N = 341$, $y$ is the vector of individual premiums, $\beta = (\beta_0, \beta_1, \beta_2)^T$ is the vector of regression coefficients, and $X$ is the design matrix whose $i$th row is $x_i^T = (1, x_{i1}, x_{i2})$. (We will say $W \sim \text{Gamma}(a, b)$ if it has density proportional to $w^{a-1} e^{-bw}$ for $w > 0$.) This model requires specification of the hyper-parameters $(r_1, r_2, b, B)$, which we assign based on estimates from the usual frequentist model [17]. Specifically, $r_1 = 3.122 \times 10^{-6}$ and $r_2 = 1.77 \times 10^{-3}$, with $b$ and $B^{-1}$ set to the corresponding estimates from the frequentist fit.

We will sample from $\pi(\beta, \lambda \mid y)$ using a two-component block Gibbs sampler requiring the following full conditionals
$$\lambda \mid \beta \sim \text{Gamma}\left( r_1 + \frac{N}{2},\ r_2 + \frac{V(\beta)}{2} \right)$$
$$\beta \mid \lambda \sim N_3\left( \left( \lambda X^T X + B \right)^{-1} \left( \lambda X^T y + Bb \right),\ \left( \lambda X^T X + B \right)^{-1} \right)$$
where $V(\beta) = (y - X\beta)^T (y - X\beta)$ and we have suppressed the dependence on $y$. We consider the sampler which updates $\lambda$ followed by $\beta$ in each iteration, i.e. $(\beta', \lambda') \to (\beta', \lambda) \to (\beta, \lambda)$.

Our goal is estimating the median and reporting a 90% Bayesian credible region for each of the three marginal distributions. Denote the $q$th quantile associated with the marginal for $\beta_j$ as $\phi_q^{(j)}$ for $j = 0, 1, 2$. Then the vector of parameters to be estimated is
$$\Phi = \left( \phi_{.05}^{(0)}, \phi_{.5}^{(0)}, \phi_{.95}^{(0)}, \phi_{.05}^{(1)}, \phi_{.5}^{(1)}, \phi_{.95}^{(1)}, \phi_{.05}^{(2)}, \phi_{.5}^{(2)}, \phi_{.95}^{(2)} \right).$$
Along with estimating $\Phi$, we calculated the associated MCSEs using SBM. Table 3 summarizes the estimates for $\Phi$ and their MCSEs from 40,000 total iterations ($b_n = 40{,}000^{1/2} = 200$).

Table 3: HMO parameter estimates with MCSEs, reporting the 0.05, 0.5, and 0.95 marginal quantile estimates and the associated MCSEs for $\beta_0$, $\beta_1$, and $\beta_2$.
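A sketch of the two-block Gibbs sampler defined by the full conditionals above is given below; it is an illustration rather than the code used for the paper, and the data `y`, design matrix `X`, and hyperparameters `b0`, `B`, `r1`, `r2` are placeholders since the HMO data are not reproduced here.

```python
import numpy as np

def block_gibbs(y, X, b0, B, r1, r2, n_iter, seed=0):
    """Two-block Gibbs sampler: each iteration updates lambda | beta, then beta | lambda."""
    rng = np.random.default_rng(seed)
    N, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    beta = np.linalg.solve(XtX, Xty)                 # start from the least-squares fit
    betas, lams = np.empty((n_iter, p)), np.empty(n_iter)
    for t in range(n_iter):
        resid = y - X @ beta
        # lambda | beta ~ Gamma(r1 + N/2, r2 + V(beta)/2), rate parameterization.
        lam = rng.gamma(shape=r1 + N / 2.0, scale=1.0 / (r2 + resid @ resid / 2.0))
        # beta | lambda ~ N_3((lam X'X + B)^{-1}(lam X'y + B b0), (lam X'X + B)^{-1}).
        cov = np.linalg.inv(lam * XtX + B)
        mean = cov @ (lam * Xty + B @ b0)
        beta = rng.multivariate_normal(mean, cov)
        betas[t], lams[t] = beta, lam
    return betas, lams
```

Applying the quantile version of the SBM interval from Section 2.2 column-by-column to the $\beta$ draws would then give SBM-based MCSEs of the kind reported in Table 3.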

Acknowledgements I am grateful to Galin L. Jones and two anonymous referees for their constructive comments in preparing this article.

References

1. Patrice Bertail and Stéphan Clémençon. Regenerative block-bootstrap for Markov chains. Bernoulli, 12, 2006.
2. Peter Bühlmann. Bootstraps for time series. Statistical Science, 17:52-72, 2002.
3. Edward Carlstein. The use of subseries values for estimating the variance of a general statistic from a stationary sequence. The Annals of Statistics, 14, 1986.
4. Ming-Hui Chen, Qi-Man Shao, and Joseph George Ibrahim. Monte Carlo Methods in Bayesian Computation. Springer-Verlag Inc, 2000.
5. Mary Kathryn Cowles and Bradley P. Carlin. Markov chain Monte Carlo convergence diagnostics: A comparative review. Journal of the American Statistical Association, 91, 1996.
6. Mary Kathryn Cowles, Gareth O. Roberts, and Jeffrey S. Rosenthal. Possible biases induced by MCMC convergence diagnostics. Journal of Statistical Computing and Simulation, 64:87-104, 1999.
7. Somnath Datta and William P. McCormick. Regeneration-based bootstrap for Markov chains. The Canadian Journal of Statistics, 21, 1993.
8. James M. Flegal. Monte Carlo standard errors for Markov chain Monte Carlo. PhD thesis, University of Minnesota, School of Statistics, 2008.
9. James M. Flegal, Murali Haran, and Galin L. Jones. Markov chain Monte Carlo: Can we trust the third significant figure? Statistical Science, 23, 2008.
10. James M. Flegal and Galin L. Jones. Batch means and spectral variance estimators in Markov chain Monte Carlo. The Annals of Statistics, 38, 2010.

11. James M. Flegal and Galin L. Jones. Implementing Markov chain Monte Carlo: Estimating with confidence. In S. P. Brooks, A. E. Gelman, G. L. Jones, and X. L. Meng, editors, Handbook of Markov Chain Monte Carlo. Chapman & Hall/CRC Press, 2011.
12. James M. Flegal and Galin L. Jones. Quantile estimation via Markov chain Monte Carlo. Work in progress.
13. Charles J. Geyer. Practical Markov chain Monte Carlo (with discussion). Statistical Science, 7, 1992.
14. S. G. Giakoumatos, I. D. Vrontos, P. Dellaportas, and D. N. Politis. A Markov chain Monte Carlo convergence diagnostic using subsampling. Journal of Computational and Graphical Statistics, 8, 1999.
15. James S. Hodges. Some algebra and geometry for hierarchical models, applied to diagnostics (with discussion). Journal of the Royal Statistical Society, Series B: Statistical Methodology, 60, 1998.
16. Rob J. Hyndman and Yanan Fan. Sample quantiles in statistical packages. The American Statistician, 50, 1996.
17. Alicia A. Johnson and Galin L. Jones. Gibbs sampling for a Bayesian hierarchical general linear model. Electronic Journal of Statistics, 4, 2010.
18. Galin L. Jones. On the Markov chain central limit theorem. Probability Surveys, 1, 2004.
19. Galin L. Jones and James P. Hobert. Honest exploration of intractable probability distributions via Markov chain Monte Carlo. Statistical Science, 16, 2001.
20. Jun S. Liu. Monte Carlo Strategies in Scientific Computing. Springer, New York, 2001.
21. Dimitris N. Politis. The impact of bootstrap methods on time series analysis. Statistical Science, 18, 2003.
22. Dimitris N. Politis, Joseph P. Romano, and Michael Wolf. Subsampling. Springer-Verlag Inc, 1999.
23. Christian P. Robert and George Casella. Monte Carlo Statistical Methods. Springer, New York.
24. Gareth O. Roberts. A note on acceptance rate criteria for CLTs for Metropolis-Hastings algorithms. Journal of Applied Probability, 36, 1999.
25. Gareth O. Roberts and Jeffrey S. Rosenthal. Markov chain Monte Carlo: Some practical implications of theoretical results (with discussion). Canadian Journal of Statistics, 26:5-31, 1998.
26. Gareth O. Roberts and Jeffrey S. Rosenthal. General state space Markov chains and MCMC algorithms. Probability Surveys, 1:20-71, 2004.
27. Gareth O. Roberts and Jeffrey S. Rosenthal. Quantitative non-geometric convergence bounds for independence samplers. Methodology and Computing in Applied Probability, 13, 2011.
28. A. W. van der Vaart. Asymptotic Statistics. Cambridge University Press, New York, 1998.
