The bootstrap and Markov chain Monte Carlo


Bradley Efron
Stanford University

Abstract

This note concerns the use of parametric bootstrap sampling to carry out Bayesian inference calculations. This is only possible in a subset of those problems amenable to MCMC analysis, but when feasible the bootstrap approach offers both computational and theoretical advantages. The discussion here is in terms of a simple example, with no attempt at a general analysis.

Keywords: Bayes credible intervals, Gibbs sampling, MCMC, importance sampling

1 Introduction

Markov chain Monte Carlo (MCMC) is the great success story of modern-day Bayesian statistics. MCMC, and its sister method Gibbs sampling, permit the numerical calculation of posterior distributions in situations far too complicated for analytic expression. Together they have produced a forceful surge of Bayesian applications in our journals, as seen for instance in Li et al. (2010), Chakraborty et al. (2010), and Ramirez-Cobo et al. (2010).

The point of this brief note is to say that in some situations the bootstrap, in particular the parametric bootstrap, offers an easier path toward the calculation of Bayes posterior distributions. An important but somewhat under-appreciated article by Newton and Raftery (1994) made the same point using nonparametric bootstrapping. By going parametric, we can illustrate the bootstrap/MCMC connection more explicitly. The arguments here will be made mainly in terms of a simple example, with no attempt at the mathematical justifications seen in Newton and Raftery.

It is not really surprising that the bootstrap and MCMC share some overlapping territory. Both are general-purpose computer-based algorithmic methods for assessing statistical accuracy, and both enable the statistician to deal effectively with nuisance parameters, obtaining inferences for the interesting part of the problem. On the less salubrious side, both share the tendency of general-purpose algorithms toward overuse.

Of course the two methodologies operate in competing inferential realms: frequentist for the bootstrap, Bayesian for MCMC. Here, as in Newton and Raftery, we leap over that divide, bringing bootstrap calculations to bear on the Bayesian world. The working assumption is that we have a Bayesian prior in mind and the only question is how to compute its posterior distribution. Arguments about the merits of Bayesian versus frequentist analysis will not be taken up here, except for our main point that the two camps share some common ground.

Research supported in part by NIH grant 8R01 EB and by NSF grant DMS.

Section 2 develops the connecting link between bootstrap and Bayesian calculations, mainly in terms of a simple components of variance model. The bootstrap approach to Bayesian inference for this model is carried out in Section 3, and then compared with Gibbs sampling. We end with some brief comments on the comparison.

2 Bootstrap and Bayes computations

We consider the following components of variance (or random effects) model:
\[
a \sim N_n(\alpha_0 \mathbf{1}, \sigma_0^2 I) \quad \text{and} \quad y \mid a \sim N_n(a, I). \tag{2.1}
\]
That is, the unobserved random effects a_1, a_2, ..., a_n are an independent sample of size n from a normal distribution having mean α₀ and variance σ₀²; we get to see the n independent normal variates
\[
y_i \mid a_i \stackrel{\text{ind}}{\sim} N(a_i, 1) \qquad \text{for } i = 1, 2, \dots, n. \tag{2.2}
\]
Marginally, integrating out the unobserved random effects a_i,
\[
y_i \stackrel{\text{ind}}{\sim} N(\alpha_0, \sigma^2), \qquad i = 1, 2, \dots, n, \tag{2.3}
\]
where
\[
\sigma^2 = \sigma_0^2 + 1 \tag{2.4}
\]
is the total variance. In this section we will concentrate on assessing the variability of the bivariate parameter
\[
\beta = (\alpha_0, \sigma^2). \tag{2.5}
\]
The maximum likelihood estimate (mle) of β is
\[
\hat\beta = \left(\hat\alpha_0, \hat\sigma^2\right) = \left( \bar{y}, \; \sum_{i=1}^n (y_i - \bar{y})^2 / n \right), \tag{2.6}
\]
\bar{y} = \sum_{i=1}^n y_i / n; β̂ is sufficient for β, with distributions
\[
\hat\alpha_0 \sim N(\alpha_0, \sigma^2/n) \quad \text{independent of} \quad \hat\sigma^2 \sim \sigma^2 \chi^2_{n-1}/n, \tag{2.7}
\]
where χ²_{n−1} indicates a chi-squared variate having n − 1 degrees of freedom.

A parametric bootstrap replication β̂* is obtained by substituting β̂ = (α̂₀, σ̂²) for the parameter value β = (α₀, σ²) in (2.7),
\[
\hat\alpha_0^* \sim N(\hat\alpha_0, \hat\sigma^2/n) \quad \text{independent of} \quad \hat\sigma^{2*} \sim \hat\sigma^2 \chi^2_{n-1}/n. \tag{2.8}
\]
Here β̂ = (α̂₀, σ̂²) is fixed at its observed value, while β̂* = (α̂₀*, σ̂²*) varies according to (2.8). To harmonize notation with that for Bayesian inference we will denote β̂* simply as β, which now indicates any point in the sample space B of possible β̂*'s, not necessarily a true value. Suppose that θ = t(β) is any function of β, for example θ = σ²/α₀, or
\[
\theta = \begin{cases} 1 & \text{if } \sigma^2 \ge 2 \\ 0 & \text{if } \sigma^2 < 2. \end{cases} \tag{2.9}
\]
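To make the setup concrete, here is a minimal Python sketch of the model (2.1), the mle (2.6), and the bootstrap step (2.8). It assumes only NumPy; the variable names (alpha0_star and so on) are ours, chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Components of variance model (2.1):
# a ~ N_n(alpha0 * 1, sigma0^2 * I),  y | a ~ N_n(a, I).
n, alpha0, sigma0_sq = 100, 1.0, 0.25
a = rng.normal(alpha0, np.sqrt(sigma0_sq), size=n)  # unobserved random effects
y = rng.normal(a, 1.0)                              # observed data

# Maximum likelihood estimate (2.6): beta_hat = (alpha0_hat, sigma_sq_hat).
alpha0_hat = y.mean()
sigma_sq_hat = ((y - alpha0_hat) ** 2).mean()

# Parametric bootstrap replications (2.8): beta_hat stays fixed at its
# observed value while (alpha0_star, sigma_sq_star) vary.
B = 25_000
alpha0_star = rng.normal(alpha0_hat, np.sqrt(sigma_sq_hat / n), size=B)
sigma_sq_star = sigma_sq_hat * rng.chisquare(n - 1, size=B) / n
```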

Starting with a prior density π(β) on β, Bayes theorem says that θ has posterior expectation, given β̂,
\[
E\{\theta \mid \hat\beta\} = \int_{B} t(\beta)\pi(\beta) g_\beta(\hat\beta)\, d\beta \Big/ \int_{B} \pi(\beta) g_\beta(\hat\beta)\, d\beta, \tag{2.10}
\]
where g_β(β̂) is the likelihood function: the density of β̂ = (α̂₀, σ̂²) in (2.7) (now fixed at its observed value) as β = (α₀, σ²) varies over B. From (2.7) we calculate
\[
g_\beta(\hat\beta) = c \exp\left\{ -\frac{n}{2\sigma^2}\left[ (\alpha_0 - \hat\alpha_0)^2 + \hat\sigma^2 \right] \right\} \Big/ (\sigma^2)^{n/2} \tag{2.11}
\]
for the components of variance problem, where c is a constant that does not depend upon β. Define the conversion ratio
\[
R(\beta) = g_\beta(\hat\beta) \big/ g_{\hat\beta}(\beta), \tag{2.12}
\]
the denominator being the bootstrap density (2.8), with β = (α₀, σ²) denoting β̂* = (α̂₀*, σ̂²*). We can rewrite (2.10) as
\[
E\{\theta \mid \hat\beta\} = \int_{B} t(\beta)\pi(\beta) R(\beta) g_{\hat\beta}(\beta)\, d\beta \Big/ \int_{B} \pi(\beta) R(\beta) g_{\hat\beta}(\beta)\, d\beta. \tag{2.13}
\]
Now the integrals are over the bootstrap density g_β̂(β) rather than the likelihood function in (2.10). This allows E{θ | β̂} to be directly estimated via bootstrap sampling.

Suppose we draw a parametric bootstrap sample
\[
g_{\hat\beta} \longrightarrow \beta_1, \beta_2, \dots, \beta_B
\]
by independently sampling B times according to (2.8). Let R_i = R(β_i), π_i = π(β_i), and t_i = t(β_i). Both the numerator and denominator of (2.13) are expectations with respect to the density g_β̂(β). These can be estimated by bootstrap sample averages, leading to the estimate of E{θ | β̂},
\[
\hat{E}\{\theta \mid \hat\beta\} = \sum_{i=1}^B t_i \pi_i R_i \Big/ \sum_{i=1}^B \pi_i R_i. \tag{2.14}
\]
In this way, any Bayesian posterior expectation can be evaluated from parametric bootstrap replications.

None of this is very novel, except for the focus on the parametric bootstrap: (2.14) is a standard importance sampling procedure, as described in Chapter 23 of Lange (2010). A connection between the nonparametric bootstrap and Bayesian inference was suggested under the name "Bayesian bootstrap" in Rubin (1981), and also in Section 10.6 of Efron (1982). Newton and Raftery (1994) make the connection more tangible, applying (2.14) with nonparametric bootstrap samples. Parametric bootstrapping makes the connection more tangible still, since in favorable situations we can write down and examine explicit representations for the conversion factor R(β) (2.12). For our components of variance problem, (2.11) and the related expression for g_β̂(β) yield
\[
R(\beta) = \left( \frac{\sigma^2}{\hat\sigma^2} \right)^{3/2} \exp\left( -\frac{n}{2} \left\{ (\alpha_0 - \hat\alpha_0)^2 \left( \frac{1}{\sigma^2} - \frac{1}{\hat\sigma^2} \right) + \left( \frac{\hat\sigma^2}{\sigma^2} - \frac{\sigma^2}{\hat\sigma^2} \right) + 2 \log \frac{\sigma^2}{\hat\sigma^2} \right\} \right). \tag{2.15}
\]
The factor R is interesting in its own right. It relates "sample space integration to parameter space integration," to quote Fraser et al. (2010), investigating a related problem.
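In code the conversion factor (2.15) is best handled on the log scale. Continuing the sketch above (same hypothetical names), the following computes R_i and the importance-sampling estimate (2.14); the overall rescaling of the weights cancels in the ratio.

```python
def log_R(alpha0_b, sigma_sq_b, alpha0_hat, sigma_sq_hat, n):
    """Log of the conversion factor R(beta) in (2.15)."""
    rho = sigma_sq_b / sigma_sq_hat  # sigma^2 / sigma_hat^2
    quad = (alpha0_b - alpha0_hat) ** 2 * (1 / sigma_sq_b - 1 / sigma_sq_hat)
    return 1.5 * np.log(rho) - 0.5 * n * (quad + 1 / rho - rho + 2 * np.log(rho))

# Importance-sampling estimate (2.14) of E{theta | beta_hat}.
logR = log_R(alpha0_star, sigma_sq_star, alpha0_hat, sigma_sq_hat, n)
log_pi = -np.log(sigma_sq_star)   # e.g. the Jeffreys prior 1/sigma^2 of Section 3
lw = log_pi + logR
w = np.exp(lw - lw.max())         # pi_i * R_i, rescaled for numerical stability
t = sigma_sq_star                 # say theta = t(beta) = sigma^2
E_hat = (t * w).sum() / w.sum()   # estimate of E{theta | beta_hat}
```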

A considerable advantage of the bootstrap estimate (2.14), compared to its Gibbs sampling or MCMC analogues, is that an accuracy estimate for Ê{θ | β̂} is available from the same data used in its calculation. In (2.14) define
\[
r_i = \pi_i R_i \quad \text{and} \quad s_i = t_i \pi_i R_i, \tag{2.16}
\]
so
\[
\hat{E}\{\theta \mid \hat\beta\} = \bar{s}/\bar{r}. \tag{2.17}
\]
Let \widehat{\mathrm{Cov}} be the usual 2-by-2 sample covariance matrix calculated from the B vectors (s_i, r_i), say
\[
\widehat{\mathrm{Cov}} = \begin{pmatrix} \hat{c}_{ss} & \hat{c}_{sr} \\ \hat{c}_{rs} & \hat{c}_{rr} \end{pmatrix}. \tag{2.18}
\]
(For example, \hat{c}_{sr} = \sum_{i=1}^B (s_i - \bar{s})(r_i - \bar{r})/B.) Then standard delta-method calculations yield
\[
\widehat{\mathrm{var}}\big\{\hat{E}\{\theta \mid \hat\beta\}\big\} \doteq \frac{\hat{E}\{\theta \mid \hat\beta\}^2}{B} \left( \frac{\hat{c}_{ss}}{\bar{s}^2} - \frac{2\hat{c}_{sr}}{\bar{s}\,\bar{r}} + \frac{\hat{c}_{rr}}{\bar{r}^2} \right). \tag{2.19}
\]
Formula (2.19) is possible because the bootstrap vectors (s_i, r_i) are drawn independently of each other, which is not the case in MCMC sampling.
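The internal accuracy estimate (2.16)–(2.19) is a few lines on top of the same sketch; rescaling all the r_i by a common constant changes neither the estimate nor its standard error.

```python
def estimate_with_se(t, r):
    """Ratio estimate (2.17) and its delta-method standard error (2.19),
    with r_i = pi_i * R_i and s_i = t_i * r_i as in (2.16)."""
    B = len(r)
    s = t * r
    s_bar, r_bar = s.mean(), r.mean()
    C = np.cov(np.vstack([s, r]), bias=True)  # 2-by-2 matrix (2.18), divisor B
    est = s_bar / r_bar
    var = (est ** 2 / B) * (C[0, 0] / s_bar ** 2
                            - 2 * C[0, 1] / (s_bar * r_bar)
                            + C[1, 1] / r_bar ** 2)
    return est, np.sqrt(var)

est, se = estimate_with_se(t, w)  # w plays the role of the r_i above
```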

3 An example

As an example of the bootstrap/Bayes computations of Section 2, a simulated data vector y was generated according to (2.1), with
\[
n = 100, \quad \alpha_0 = 1, \quad \text{and} \quad \sigma_0^2 = 0.25. \tag{3.1}
\]
Marginally, the independent observations y_i had distribution
\[
y_i \stackrel{\text{ind}}{\sim} N(\alpha_0, \sigma^2) \qquad [\sigma^2 = 1.25], \tag{3.2}
\]
as in (2.3). The maximum likelihood estimates (2.6) were
\[
\hat\beta = (\hat\alpha_0, \hat\sigma^2) = (1.005, 1.295). \tag{3.3}
\]
Here we will only concern ourselves with calculating the posterior distribution of σ². B = 25,000 bootstrap replications (many times more than necessary, as we shall see), β_1, β_2, ..., β_B, were generated from the bootstrap distribution g_β̂(β) (2.8), and the conversion factors R_i = R(β_i) (2.12) calculated. The prior density π(β) was taken to be
\[
\pi(\beta) = 1/\sigma^2. \tag{3.4}
\]
(Or equivalently, π(β) = c/σ² for any positive constant c, since c factors out of Bayesian calculations, as seen in (2.10).) Formula (3.4) is Jeffreys uninformative prior for inferences concerning the total variance σ². Finally, the adjusted factors r_i = π_i R_i = π(β_i)R(β_i) were calculated. We now have B = 25,000 points β_i, each with associated weight r_i, and these can be used to estimate any posterior expectation E{θ | β̂}, as in (2.16)–(2.17).

Table 1 describes the posterior distribution of the total variance σ². The interval [0.75, 2.15], which contains almost all of the bootstrap replications σ²_i, has been divided into bins of width 0.10 each.

Table 1: Top row: bin counts for the B = 25,000 parametric bootstrap replications of σ², as in (2.8); for example, 497 values σ²_i fell into the second bin, 0.85 ≤ σ²_i < 0.95. Bottom row: weighted bin counts (3.5) for the posterior density of σ² given y, using Jeffreys invariant prior density 1/σ².

The top row of the table shows the bootstrap histogram counts; for example, 1707 of the σ²_i values fell into the third bin, 0.95 ≤ σ²_i < 1.05. The bottom row shows the Bayesian bin counts, weighted according to r_i = π_i R_i,
\[
\mathrm{Bayes}_k = B \sum_{k\text{th bin}} r_i \Big/ \sum_{i=1}^B r_i, \tag{3.5}
\]
scaled to sum to B. The weights r_i were smaller than 1 in the third bin, resulting in fewer Bayes counts, 1010 compared to 1707. The opposite happens at the other end of the scale.
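The weighted bin counts (3.5) amount to a weighted histogram; a sketch with the same hypothetical arrays:

```python
# Weighted bin counts (3.5), scaled to sum to B; bins of width 0.10 on [0.75, 2.15].
edges = np.arange(0.75, 2.151, 0.10)
boot_counts, _ = np.histogram(sigma_sq_star, bins=edges)             # Boot row
bayes_counts, _ = np.histogram(sigma_sq_star, bins=edges, weights=w)
bayes_counts *= B / w.sum()                                          # Bayes row
```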

Figure 1: Distribution of the total variance σ² given β̂ (3.3). Dashed curve: counts from the 25,000 parametric bootstrap replications (2.8), the Boot row of Table 1. Solid curve: weighted counts (3.5), weights r_i = π(β_i)R(β_i), with π(β) the Jeffreys prior 1/σ² and R(β) the conversion factor (2.15). The corrected bootstrap distribution Bca agrees with Jeffreys.

Figure 1 graphs the Boot and Bayes counts of Table 1. We see that the Jeffreys Bayes distribution is shifted to the right of the Boot distribution. It can be shown that the Jeffreys prior π(β) = 1/σ² is almost exactly correct from a frequentist point of view: for example, the upper 90% point for the Jeffreys posterior distribution is almost exactly the upper endpoint of the usual 90% one-sided confidence interval for σ². (The Welch–Peers theory of 1963 shows this kind of Bayes–frequentist agreement holding asymptotically for all priors of the form π(β) = h(α₀)/σ², with h(·) any smooth positive function. The components of variance problem is unusual in allowing a simple expression for the Welch–Peers prior; see Section 6 of Efron (1993).)

Table 2: Estimated posterior quantiles σ²[α] for the total variance σ², given data (3.3) and Jeffreys prior π(β) = 1/σ². Also shown are standard errors (se) and coefficients of variation (cv), obtained using (2.19), as explained in the text.

Table 2 shows the estimated posterior quantiles σ²[α] for σ², given data (3.3) and Jeffreys prior (3.4); for example, σ²[0.95] = 1.69 is the empirical 95th percentile of the 25,000 bootstrap replications weighted according to r_i = π_i R_i as in (3.5). Table 2 also shows standard errors derived from (2.19), i.e., the variability due to working with B = 25,000 bootstrap replications rather than B = ∞. As an example, let θ = t(β) in (2.9) equal 0 or 1 as σ² is greater or less than 1.69, the value of σ²[0.95]. Following through (2.16)–(2.17) gives the standard error of Ê{θ | β̂} = 0.95 shown in the second column's next-to-bottom row, along with its coefficient of variation. The standard error for σ̂²[α] = 1.69 is then obtained from the usual delta-method approximation se/g[0.95], where g[0.95] is the height of the Jeffreys posterior density curve at 1.69.

All of the standard errors are small. If we had stopped at B = 5000 bootstrap replications, they would be multiplied by √5 ≈ 2.2, and would still indicate good accuracy. At B = 1000 the multiplying factor increases to 5, and we might begin to worry about simulation errors.

All of these calculations depend on the adjusted conversion factor r_i = π_i R_i, with π_i = 1/σ²_i and R_i = R(β_i) as in (2.15). Figure 2 plots r_i versus σ²_i for the B = 25,000 bootstrap replications from (2.8). The factors are less or greater than 1 as σ²_i is less or greater than σ̂² = 1.30, accounting for the rightward shift of the Bayes posterior density in Figure 1. Much of the simulation standard error seen in Table 2 comes from the large values of r_i on the right side of Figure 2. Stratagems for decreasing simulation error exist. In this case we might modify the bootstrap sampling recipe (2.8), say to
\[
\hat\sigma^{2*} \sim 2\hat\sigma^2 \chi^2_{n-1}/n. \tag{3.6}
\]
This would extend the dashed curve rightwards in Figure 1, reducing the large values of R_i and r_i. We will not worry further about estimation efficiency here.
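Posterior quantiles such as σ²[0.95] are weighted empirical percentiles of the replications; a minimal sketch, again using the hypothetical names from above:

```python
def weighted_quantile(x, w, alpha):
    """alpha-quantile of the discrete distribution putting weight w_i on x_i,
    as used for the posterior quantiles sigma^2[alpha] of Table 2."""
    order = np.argsort(x)
    cum = np.cumsum(w[order]) / w.sum()
    return x[order][np.searchsorted(cum, alpha)]

q95 = weighted_quantile(sigma_sq_star, w, 0.95)  # about 1.69 in the paper's run
```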

The Bca system of confidence intervals ("bias-corrected and accelerated"; Efron, 1987) adjusts the raw bootstrap distribution, represented by the dashed curve in Figure 1, to achieve second-order accurate frequentist coverage: a level-α Bca interval has true coverage probability α + O(1/n), n the sample size, rather than α + O(1/√n) for the usual first-order approximate intervals. The corresponding confidence distribution (e.g., assigning probability 0.01 to the region between the 0.90 and 0.91 Bca interval endpoints) requires a different conversion factor R_Bca(β), one that does not involve any Bayesian input. In the example of Figure 1, the Bca confidence density almost perfectly matches the Jeffreys posterior density.

Figure 2: Adjusted conversion factor r_i = π_i R_i for the B = 25,000 bootstrap replications, plotted versus the bootstrap replications σ²_i; the factor is less or greater than 1 as σ²_i is less or greater than σ̂² = 1.30, moving the Jeffreys posterior density rightward in Figure 1.

All of the calculations so far have concerned the total variance σ² = σ₀² + 1. We might be more interested in σ₀² itself, the variance of the random effects a_i in model (2.1). Simply shifting the posterior densities of Figure 1 a unit to the left has an obvious defect: some of the probability (0.051 for the raw bootstrap, a little less for Jeffreys/Bca) winds up below zero, defying the obvious constraint σ₀² ≥ 0.

We need a prior density π(β) whose support is confined to σ₀² ≥ 0. The Jeffreys analogue π(β) = 1/σ₀² looks promising, but fails because of its infinite spike at zero. (Unlike σ², σ₀² = σ² − 1 has substantial likelihood near 0 given the observed data (3.3).) The toy example in Spiegelhalter et al. (1996, Sect. 2.3) suggests using the prior density
\[
\pi(\beta): \quad \alpha_0 \sim N(0, V_0) \quad \text{independent of} \quad \sigma_0^2 \sim \nu_0/G_{\nu_0}, \tag{3.7}
\]
V_0 = 10,000 and ν_0 = 0.01, in a situation similar to (2.1), with G_{ν_0} indicating a gamma distribution having shape parameter ν_0. Prior, or hyperprior, assumptions like (3.7) are familiar in the MCMC literature.

We can reweight our B = 25,000 bootstrap replications β_i to compute the posterior distribution of σ₀² for prior (3.7), as in (3.5); R_i is still provided by (2.15) (with σ² = σ₀² + 1), while the Bayes prior becomes
\[
\pi_i = c \exp\{-\nu_0/\sigma_{0i}^2\} \big/ (\sigma_{0i}^2)^{\nu_0 + 1}, \tag{3.8}
\]
from the inverse gamma distribution in (3.7), the α₀ ∼ N(0, V₀) portion of the prior having almost no effect.
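Reweighting the same replications under prior (3.7) only changes the π_i factor. A sketch of (3.8), ignoring, as the text does, the nearly flat α₀ portion of the prior:

```python
# Prior weights (3.8) for sigma0^2 = sigma^2 - 1, computed on the log scale;
# replications with sigma0^2 <= 0 fall outside the prior's support.
nu0 = 0.01
s0 = sigma_sq_star - 1.0
w_ig = np.zeros(B)
ok = s0 > 0
lw = -nu0 / s0[ok] - (nu0 + 1.0) * np.log(s0[ok]) + logR[ok]
w_ig[ok] = np.exp(lw - lw.max())  # pi_i * R_i for the inverse gamma prior
```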

Figure 3: Heavy pointed curve: posterior density of σ₀² in model (2.1), data (3.3), and inverse gamma prior (3.7), as calculated from the weighted bootstrap replications. Stepped curve: the same posterior density as calculated by the Gibbs sampling algorithm (3.9). Dashed curve: the unweighted bootstrap counts of Figure 1 shifted one unit left.

The heavy pointed curve in Figure 3 shows the posterior distribution of σ₀² under prior (3.7), as calculated from the weighted bootstrap replications. It looks completely different from a simple leftward translation of the curves in Figure 1. Now the posterior density has its high point at the minimum allowable value σ₀² = 0, and decreases monotonically to the right.

A Gibbs sampling analysis was performed as a check on the weighted bootstrap results. Given the data y, or equivalently the sufficient statistics (α̂₀, σ̂²) (3.3), a Gibbs analysis cycles through the three unobserved quantities a, α₀, and σ₀². The complete conditional distribution of each one given the other two is calculated from the specifications (2.1) and (3.7):
\[
\begin{aligned}
a \mid \alpha_0, \sigma_0^2 &\sim N_n\big((\alpha_0 \mathbf{1}/\sigma_0^2 + y)/h_1, \; I/h_1\big), & h_1 &= \frac{1}{\sigma_0^2} + 1, \\
\alpha_0 \mid a, \sigma_0^2 &\sim N\big((n\bar{a}/\sigma_0^2)/h_2, \; 1/h_2\big), & h_2 &= \frac{1}{V_0} + \frac{n}{\sigma_0^2}, \\
\sigma_0^2 \mid \alpha_0, a &\sim (\nu_0 + S/2)\big/ G_{\nu_0 + n/2}, & S &= \sum_{i=1}^n (a_i - \bar{a})^2.
\end{aligned} \tag{3.9}
\]
Here n = 100 is the sample size, ā = Σ a_i/n, 1 is a vector of n 1's, I is the n × n identity matrix, and G_ν indicates a gamma random variable with shape parameter ν. Starting from α₀ = 0 and σ₀² = 1 as initial values, 25,000 cycles through (3.9) produced 25,000 values σ²_{0i}.
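For the check, here is a compact implementation of the Gibbs cycle (3.9) under the same hypothetical setup; this is our sketch of the sampler, not code from the paper.

```python
def gibbs_sigma0_sq(y, V0=10_000.0, nu0=0.01, n_iter=25_000, seed=1):
    """Gibbs sampler cycling through the full conditionals (3.9);
    returns the successive draws of sigma0^2."""
    rng = np.random.default_rng(seed)
    n = len(y)
    alpha0, sigma0_sq = 0.0, 1.0         # initial values, as in the text
    draws = np.empty(n_iter)
    for it in range(n_iter):
        h1 = 1.0 / sigma0_sq + 1.0       # a | alpha0, sigma0^2, y
        a = rng.normal((alpha0 / sigma0_sq + y) / h1, np.sqrt(1.0 / h1))
        h2 = 1.0 / V0 + n / sigma0_sq    # alpha0 | a, sigma0^2
        alpha0 = rng.normal(n * a.mean() / sigma0_sq / h2, np.sqrt(1.0 / h2))
        S = ((a - a.mean()) ** 2).sum()  # sigma0^2 | alpha0, a, as in (3.9)
        sigma0_sq = (nu0 + S / 2.0) / rng.gamma(nu0 + n / 2.0)
        draws[it] = sigma0_sq
    return draws

sigma0_sq_draws = gibbs_sigma0_sq(y)     # step histogram of Figure 3
```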

Their step histogram in Figure 3 agrees reasonably well with the bootstrap results. (Formula (2.19) suggests that the bootstrap estimates are nearly exact.)

A comparison of Figure 3 with Figure 1 raises some important points:

• The inverse gamma prior distribution (3.7)–(3.8) is far from uninformative. The posterior distribution of σ₀² = σ² − 1 is strongly pulled toward zero, putting nearly 50% of its posterior probability on σ₀² ≤ 0.2. This compares with 25% for σ² ≤ 1.2 in the Jeffreys/Bca analysis of Figure 1. Changing the value of ν₀ = 0.01 in (3.7) can substantially increase the posterior concentration of σ₀² near 0. Inverse gamma priors are especially convenient for Gibbs sampling implementation, as in (3.9), but should not be assigned casually.

• The mle σ̂² = Σ(y_i − ȳ)²/n in (2.6) ignores the constraint σ² ≥ 1, the actual mle being max(1, σ̂²). This suggests that for the dashed curve in Figure 3 we could erase the portion below zero and replace it with a probability atom of size 0.051 at 0. More accurately, we could shift the Jeffreys/Bca density of Figure 1 one unit left, replacing the part below zero with the corresponding probability atom at 0.

• In a genuine random effects application, we might well allocate some prior probability to σ₀² = 0 (as is not done in (3.7)). This makes for more difficult Bayesian analysis. The Bca approach of the previous remark is an attractive option here.

• Priors that are convenient for MCMC or Gibbs sampling applications play no special role in the bootstrap/Bayes approach of this paper. Convenience priors can be risky, in my opinion, and should be avoided in favor of frequentist methods in the absence of genuine Bayesian prior information.

References

Chakraborty, A., Gelfand, A. E., Wilson, A. M., Latimer, A. M. and Silander, J. A., Jr. (2010). Modeling large scale species abundance with latent spatial processes. Ann. Appl. Statist. 4, doi:10.1214/10-AOAS335.

Efron, B. (1982). The Jackknife, the Bootstrap and Other Resampling Plans. CBMS-NSF Regional Conference Series in Applied Mathematics 38. Philadelphia: Society for Industrial and Applied Mathematics (SIAM).

Efron, B. (1987). Better bootstrap confidence intervals. J. Amer. Statist. Assoc. 82, with comments and a rejoinder by the author.

Efron, B. (1993). Bayes and likelihood calculations from confidence intervals. Biometrika 80.

Fraser, D. A. S., Reid, N., Marras, E. and Yi, G. Y. (2010). Default priors for Bayesian and frequentist inference. J. Roy. Statist. Soc. Ser. B 72.

Lange, K. (2010). Numerical Analysis for Statisticians. Statistics and Computing. New York: Springer, 2nd ed.

Li, B., Nychka, D. W. and Ammann, C. M. (2010). The value of multiproxy reconstruction of past climate. J. Amer. Statist. Assoc. 105.

Newton, M. A. and Raftery, A. E. (1994). Approximate Bayesian inference with the weighted likelihood bootstrap. J. Roy. Statist. Soc. Ser. B 56: 3–48, with discussion and a reply by the authors.

Ramirez-Cobo, P., Lillo, R. E., Wilson, S. and Wiper, M. P. (2010). Bayesian inference for double Pareto lognormal queues. Ann. Appl. Statist. 4, doi:10.1214/10-AOAS336.

Rubin, D. B. (1981). The Bayesian bootstrap. Ann. Statist. 9: 130–134.

Spiegelhalter, D. J., Best, N. G., Gilks, W. R. and Inskip, H. (1996). Hepatitis B: A case study in MCMC methods. In Gilks, W. R., Richardson, S. and Spiegelhalter, D. J. (eds), Markov Chain Monte Carlo in Practice, Interdisciplinary Statistics. London: Chapman & Hall.

Welch, B. L. and Peers, H. W. (1963). On formulae for confidence points based on integrals of weighted likelihoods. J. Roy. Statist. Soc. Ser. B 25.
