An Overview of Objective Bayesian Analysis


1 An Overview of Objective Bayesian Analysis
James O. Berger, Duke University
Visiting the University of Chicago Department of Statistics
Spring Quarter, 2011

2 Lectures
Lecture 1. Objective Bayesian Analysis: Introduction and a Casual History
Lecture 2. Objective Bayesian Estimation
Lecture 3. Objective Bayesian Hypothesis Testing and Conditional Frequentist Testing
Lecture 4. Essentials of Objective Bayesian Model Uncertainty
Lecture 5. Methodology of Objective Bayesian Model Uncertainty

3 Objective Bayesian Hypothesis Testing
A pedagogical example from high-energy physics: when the Large Hadron Collider at CERN reaches full power, one major goal will be to determine if the Higgs boson particle actually exists.

4 [Photo: the Atlas Detector at the Large Hadron Collider (CERN).]

5 Data: X = # events observed in time T that are characteristic of Higgs boson production in LHC particle collisions.

Statistical Model: X has density

$$\mathrm{Poisson}(x \mid \theta + b) = \frac{(\theta + b)^x e^{-(\theta + b)}}{x!},$$

where
- θ is the mean rate of production of Higgs events in time T;
- b is the (known) mean rate of production of events with the same characteristics from background sources in time T.

To test: H0: θ = 0 versus H1: θ > 0. (H0 corresponds to "no Higgs.")

6 P-value: $p = P(X \geq x \mid b, \theta = 0) = \sum_{m=x}^{\infty} \mathrm{Poisson}(m \mid 0 + b)$.

Case 1: p = 0.00025 if x = 7, b = 1.2.
Case 2: p = 0.025 if x = 6, b = 2.2.

Physicists mostly have not heard of p-values; they tend to talk in terms of the number of σ (standard errors) from the null. Those who have heard of p-values mostly interpret them incorrectly. Those who understand p-values know their use is difficult. Luc Demortier: "In any search for new physics, a small p value should only be seen as a first step in the interpretation of the data, to be followed by a serious investigation of an alternative hypothesis. Only by showing that the latter provides a better explanation of the observations than the null hypothesis can one make a convincing case for discovery."

Sequential testing: this is actually a sequential experiment, so p should be further adjusted to account for multiple looks at the data.
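These tail areas are easy to verify numerically; a minimal sketch (mine, not part of the lecture) using SciPy's Poisson survival function:

```python
from scipy.stats import poisson

def p_value(x, b):
    """p = P(X >= x) under the background-only model, Poisson with mean b."""
    # sf(x - 1) = P(X > x - 1) = P(X >= x)
    return poisson.sf(x - 1, b)

print(p_value(7, 1.2))  # Case 1: ~0.00025
print(p_value(6, 2.2))  # Case 2: ~0.025
```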

7 Bayes factor of H0 to H1 (the odds of H0 to H1): the ratio of the likelihood under H0 to the average likelihood under H1,

$$B_{01}(x) = \frac{\mathrm{Poisson}(x \mid 0 + b)}{\int_0^\infty \mathrm{Poisson}(x \mid \theta + b)\,\pi(\theta)\,d\theta} = \frac{b^x e^{-b}}{\int_0^\infty (\theta + b)^x e^{-(\theta + b)}\,\pi(\theta)\,d\theta}$$

(the x! cancels).

Subjective approach: choose π(θ) subjectively (e.g., using the standard physics model predictions of the mass of the Higgs).

Objective approach: choose π(θ) to be the intrinsic prior (discussed later)

$$\pi^I(\theta) = \frac{b}{(\theta + b)^2}.$$

(Note that this prior is proper and has median b.)

The Bayes factor is then given by

$$B_{01} = \frac{b^x e^{-b}}{\int_0^\infty (\theta + b)^x e^{-(\theta + b)}\,\frac{b}{(\theta + b)^2}\,d\theta} = \frac{b^{x-1} e^{-b}}{\Gamma(x - 1, b)},$$

where $\Gamma(s, b) = \int_b^\infty t^{s-1} e^{-t}\,dt$ is the (upper) incomplete gamma function.

Case 1: B01 = 0.0075 (recall p = 0.00025).
Case 2: B01 = 0.26 (recall p = 0.025).
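The closed form is easy to evaluate and to check against direct integration; a sketch (my code, not the lecture's). SciPy's gammaincc is the regularized upper incomplete gamma, so Γ(x−1, b) = gammaincc(x−1, b) · Γ(x−1):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammaincc

def bayes_factor(x, b):
    """Closed form: B01 = b^(x-1) e^(-b) / Gamma(x-1, b)."""
    upper_incomplete = gammaincc(x - 1, b) * gamma(x - 1)
    return b ** (x - 1) * np.exp(-b) / upper_incomplete

def bayes_factor_quad(x, b):
    """Check: integrate the likelihood against the intrinsic prior directly."""
    integrand = lambda t: (t + b) ** x * np.exp(-(t + b)) * b / (t + b) ** 2
    denom, _ = quad(integrand, 0, np.inf)
    return b ** x * np.exp(-b) / denom

print(bayes_factor(7, 1.2), bayes_factor_quad(7, 1.2))  # Case 1: ~0.0075
print(bayes_factor(6, 2.2), bayes_factor_quad(6, 2.2))  # Case 2: ~0.26
```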

8 Posterior probability of the null hypothesis: the objective choice of prior probabilities of the hypotheses is Pr(H0) = Pr(H1) = 0.5, in which case

$$\Pr(H_0 \mid x) = \frac{B_{01}}{1 + B_{01}}.$$

Case 1: Pr(H0 | x) = 0.0075 (recall p = 0.00025).
Case 2: Pr(H0 | x) = 0.21 (recall p = 0.025).

Complete posterior distribution: given by
- Pr(H0 | x), the posterior probability of the null hypothesis;
- π(θ | x, H1), the posterior distribution of θ under H1.

A useful summary of the complete posterior is Pr(H0 | x) together with C, a (say) 95% posterior credible set for θ under H1.

Case 1: Pr(H0 | x) = 0.0075; C = (1.0, 10.5).
Case 2: Pr(H0 | x) = 0.21; C = (0.2, 8.2).

Note: For testing precise hypotheses, confidence intervals alone are not a satisfactory inferential summary.
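The quoted credible sets are consistent with equal-tail 95% intervals from the normalized posterior under H1; a grid-based sketch (mine; the lecture does not state how C was computed, so equal-tail intervals are an assumption):

```python
import numpy as np

def posterior_summary(x, b, theta_max=60.0, n_grid=200_000):
    """Pr(H0 | x) and an equal-tail 95% interval for theta under H1,
    using the intrinsic prior pi(theta) = b / (theta + b)^2 on a grid."""
    theta, dt = np.linspace(0.0, theta_max, n_grid, retstep=True)
    post = (theta + b) ** x * np.exp(-(theta + b)) * b / (theta + b) ** 2
    B01 = b ** x * np.exp(-b) / (post.sum() * dt)   # the x! cancels throughout
    cdf = np.cumsum(post) / post.sum()
    lo, hi = np.interp([0.025, 0.975], cdf, theta)
    return B01 / (1 + B01), (lo, hi)

print(posterior_summary(7, 1.2))  # ~(0.0075, (1.0, 10.5))
print(posterior_summary(6, 2.2))  # ~(0.21, (0.2, 8.2))
```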

9 [Figure 1: Pr(H0 | x) (the vertical bar), and the posterior density for θ given x and H1.]

10 A Lengthy Aside: What should a Fisherian or a frequentist think of the discrepancy between the p-value and the objective Bayesian answers in precise hypothesis testing?

I. Many Fisherians (and arguably Fisher) prefer likelihood ratios to p-values, when they are available (e.g., genetics).

A lower bound on the Bayes factor (or likelihood ratio): choose π(θ) to be a point mass at the MLE $\hat\theta$, yielding

$$B_{01}(x) = \frac{\mathrm{Poisson}(x \mid 0 + b)}{\int_0^\infty \mathrm{Poisson}(x \mid \theta + b)\,\pi(\theta)\,d\theta} \;\geq\; \frac{\mathrm{Poisson}(x \mid 0 + b)}{\mathrm{Poisson}(x \mid \hat\theta + b)} = \min\left\{1, \left(\frac{b}{x}\right)^{\!x} e^{\,x - b}\right\}.$$

Case 1: B01 ≥ 0.0014 (recall p = 0.00025).
Case 2: B01 ≥ 0.11 (recall p = 0.025).
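A quick check of the bound (my sketch; $\hat\theta = \max(0, x - b)$ is the MLE, which is what produces the min{1, ·}):

```python
import math

def bf_lower_bound(x, b):
    """B01 >= Poisson(x | 0+b) / Poisson(x | theta_hat + b) = min{1, (b/x)^x e^(x-b)}."""
    return min(1.0, (b / x) ** x * math.exp(x - b))

print(bf_lower_bound(7, 1.2))  # Case 1: ~0.0014
print(bf_lower_bound(6, 2.2))  # Case 2: ~0.11
```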

11 II. To a frequentist, p-values are not particularly relevant anyway; the difference between p-values and frequentist error probabilities can be seen from the conditional frequentist viewpoint (Kiefer, 1977 JASA; Brown, 1978 AOS):
- Find a statistic S that reflects the strength of evidence in the data.
- Compute the frequentist measure of error conditional on S.

Artificial example: Observe X1 and X2, where

$$X_i = \begin{cases} \theta + 1 & \text{with probability } 1/2, \\ \theta - 1 & \text{with probability } 1/2. \end{cases}$$

Confidence set for θ:

$$C(X_1, X_2) = \begin{cases} \tfrac{1}{2}(X_1 + X_2) & \text{if } X_1 \neq X_2, \\ X_1 - 1 & \text{if } X_1 = X_2. \end{cases}$$

Unconditional coverage: $P_\theta(C(X_1, X_2) \text{ contains } \theta) = 0.75$.

Strength of evidence in the data: measured by $S = |X_1 - X_2|$ (either 0 or 2).

Conditional coverage:

$$P_\theta(C(X_1, X_2) \text{ contains } \theta \mid S) = \begin{cases} 0.5 & \text{if } S = 0, \\ 1.0 & \text{if } S = 2. \end{cases}$$
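All three coverages are easy to confirm by simulation; a sketch (mine, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n = 3.0, 100_000
x1 = theta + rng.choice([-1.0, 1.0], size=n)
x2 = theta + rng.choice([-1.0, 1.0], size=n)

# The confidence set: midpoint when X1 != X2, else X1 - 1
est = np.where(x1 != x2, (x1 + x2) / 2, x1 - 1)
covered = est == theta
s = np.abs(x1 - x2)

print(covered.mean())          # unconditional coverage, ~0.75
print(covered[s == 0].mean())  # conditional on S = 0, ~0.5
print(covered[s == 2].mean())  # conditional on S = 2, 1.0
```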

12 For testing, Berger, Brown and Wolpert (1994 AOS, continuous case) and Dass (2001 Sankhya, discrete case) proposed the following.

Develop S as follows:
- Let $p_i(x)$ be the p-value from testing $H_i$ against the other hypothesis; use the $p_i(x)$ as measures of strength of evidence, as Fisher suggested.
- Define the conditioning statistic $S = \max\{p_0(x), p_1(x)\}$; its use is based on deciding that data (in either the rejection or acceptance region) with the same p-value have the same strength of evidence.

Accept H0 when p0 > p1, and reject otherwise.

Compute the Type I and Type II conditional error probabilities as

$$\alpha(s) = P_0(\text{rejecting } H_0 \mid S = s) = P_0(p_0 \leq p_1 \mid S(X) = s),$$
$$\beta(s) = P_1(\text{accepting } H_0 \mid S = s) = P_1(p_0 > p_1 \mid S(X) = s),$$

where $P_i$ refers to probability under $H_i$.

13 Objective Bayes and Conditional Frequentist Agreement

Theorem 1. The conditional frequentist error probabilities, α(s) and β(s), exactly equal the (objective) posterior probabilities of H0 and H1, so conditional frequentists and Bayesians report the same error probabilities.

In the physics example, the conditional Type I error is thus α(s) = Pr(H0 | x) = B01/(1 + B01) (= 0.0075 in Case 1; = 0.21 in Case 2).

- The result can be viewed as a way to convert p-values into real frequentist error probabilities, when there is an alternative hypothesis.
- The conditional error probabilities α(s) and β(s) are fully data-dependent, yet fully frequentist.
- The procedure also applies to sequential settings, since the Bayesian error probabilities ignore the stopping rule (Berger, Boukai and Wang, 1999 Biometrika).

Savage (1961): "When I first heard the stopping rule principle from Barnard in the early 50's, I thought it was scandalous that anyone in the profession could espouse a principle so obviously wrong, even as today I find it scandalous that anyone could deny a principle so obviously right."

14 General Formulation (Dass and Berger, 2003 Scand. J. Stat.)

Test H0: X has density $p_0(x \mid \eta)$ versus H1: X has density $p_1(x \mid \eta, \xi)$ on $\mathcal{X}$, with respect to a σ-finite dominating measure λ, where
- $\{p_0(x \mid \eta) : \eta \in \Omega\}$ is invariant with respect to a group operation $g \in G$, and
- with $g \cdot (\eta, \xi) = (g \cdot \eta, \xi)$, $\{p_1(x \mid \eta, \xi) : \eta \in \Omega\}$ is invariant with respect to $g \in G$ for each given ξ.

Utilize the right-Haar prior $\nu(\eta)$ on η and a proper prior $\pi(\xi)$ on ξ. The original problem is reduced to testing the distributions of the maximal invariant T:

H0: T ∼ f0 versus H1: T ∼ f1, where

$$f_0(t) = \int p_0(x \mid \eta)\,d\nu(\eta) \quad\text{and}\quad f_1(t) = \int\!\!\int p_1(x \mid \eta, \xi)\,d\nu(\eta)\,d\pi(\xi).$$

Since these are simple distributions, the rest of the analysis proceeds as before, and the equivalence between posterior probabilities and conditional frequentist error probabilities holds.

15 Example: Nonparametric testing of fit (Berger and Guglielmi, 2001 JASA)

To test: H0: X ∼ N(µ, σ) versus H1: X ∼ F(µ, σ), where F is an unknown location-scale distribution.

Determine the Bayes factor B01(x), using
- a Polya tree prior for F, centered at H0;
- the right-Haar prior π(µ, σ) = 1/σ for the location-scale parameters.

As before, the conditional frequentist Type I error, based on p-value conditioning, is α(x) = B01(x)/(1 + B01(x)) = Pr(H0 | x).

Notes:
- This gives exact conditional frequentist error probabilities, even for small samples (e.g., n = 3).
- Computation of B01(x) uses the (simple) exact formula for a Polya tree marginal distribution, together with importance sampling to deal with (µ, σ).

16 A Teaching Aside

1. One way to help teach students that p-values are very different from error probabilities (both Bayesian and frequentist) is to utilize the applet (of German Molina) available at Berger's web page (www.stat.duke.edu/~berger).

2. Here is another useful way to explain the difference, using (say) the physics example, where the p-value is 0.00025 but the objective posterior probability (or conditional frequentist Type I error) of the null differs from it by a factor of 30:
- In the example, a factor of 0.0014/0.00025 ≈ 5.6 is due to the difference between a tail area {X : X ≥ 7} and the actual observation X = 7.
- The remaining factor of roughly 5.4 in favor of the null results from the Ockham's razor penalty that Bayesian analysis automatically gives to a more complex model. (A more complex model will virtually always fit the data better than a simple model, so some penalty is necessary to avoid overfitting.)

17 When there is no alternative hypothesis, robust Bayesian theory suggests a way to calibrate p-values (Sellke, Bayarri and Berger, 2001 Am. Stat.).

A proper p-value satisfies H0: p(X) ∼ Uniform(0, 1). Consider testing this versus H1: p(X) ∼ f(p), where Y = −log(p) has a decreasing failure rate (a natural nonparametric alternative).

Theorem 2. If $p < e^{-1}$, then $B_{01} \geq -e\,p \log(p)$.

For the physics example, with p = 0.00025, −e p log(p) = 0.0056 (recall that the Bayes factor we computed was $B_{01}^I$ = 0.0075).

An analogous lower bound on the conditional Type I frequentist error is

$$\alpha \geq \left(1 + \left[-e\,p \log(p)\right]^{-1}\right)^{-1}.$$

[Table: p versus the lower bound α(p).]
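Both bounds are trivial to tabulate; a sketch (mine) that reproduces the kind of p-versus-α(p) table the slide displayed:

```python
import math

def bf_bound(p):
    """Sellke-Bayarri-Berger bound: B01 >= -e p log(p), valid for p < 1/e."""
    return -math.e * p * math.log(p)

def alpha_bound(p):
    """Induced lower bound on the conditional Type I error."""
    return 1.0 / (1.0 + 1.0 / bf_bound(p))

for p in [0.05, 0.01, 0.001, 0.00025]:
    print(f"p = {p}: B01 >= {bf_bound(p):.4f}, alpha >= {alpha_bound(p):.4f}")
```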

18 Choosing priors for common parameters in testing

If pure subjective (evidence-based?) choice is not possible, here are some guidelines:
- Gold standard: if there are parameters in each hypothesis that have the same group-invariance structure, one can use the right-Haar priors for those parameters, even though improper (Berger, Pericchi and Varshavsky, 1998).
- Silver standard: if there are parameters in each hypothesis that have the same scientific meaning, reasonable default priors (e.g., the constant prior 1) can be used (e.g., in the later vaccine example, where p1 means the same thing in H0 and H1).
- Bronze standard: to try to obtain parameters that have the same scientific meaning (beware the fallacy of "Greek letters"), one strategy often employed is to orthogonalize the parameters, i.e., reparameterize so that the partial Fisher information for those parameters is zero.

19 Example (location-scale): Suppose X1, X2, ..., Xn are i.i.d. with density

$$f(x_i \mid \mu, \sigma) = \frac{1}{\sigma}\, g\!\left(\frac{x_i - \mu}{\sigma}\right).$$

Several models are entertained:
- $M_N$: g is Normal(0, 1)
- $M_U$: g is Uniform(0, 1)
- $M_C$: g is Cauchy(0, 1)
- $M_L$: g is Left Exponential: $f = \frac{1}{\sigma} e^{(x-\mu)/\sigma}$, $x \leq \mu$
- $M_R$: g is Right Exponential: $f = \frac{1}{\sigma} e^{-(x-\mu)/\sigma}$, $x \geq \mu$

All models have location-scale parameters (µ, σ), for which the right-Haar prior density is π(µ, σ) = 1/σ.

20 The marginal densities

$$m(x \mid M) = \int_0^\infty \int_{-\infty}^\infty \prod_{i=1}^n \left[\frac{1}{\sigma}\, g\!\left(\frac{x_i - \mu}{\sigma}\right)\right] \frac{1}{\sigma}\, d\mu\, d\sigma$$

for the five models are given by:

1. Normal: $m(x \mid M_N) = \dfrac{\Gamma((n-1)/2)}{2\,\pi^{(n-1)/2}\,\sqrt{n}\,\left(\sum_i (x_i - \bar x)^2\right)^{(n-1)/2}}$

2. Uniform: $m(x \mid M_U) = \dfrac{1}{n(n-1)\,(x_{(n)} - x_{(1)})^{n-1}}$

3. Cauchy: $m(x \mid M_C)$ is given in Spiegelhalter (1985).

4. Left Exponential: $m(x \mid M_L) = \dfrac{(n-2)!}{n^n\,(x_{(n)} - \bar x)^{n-1}}$

5. Right Exponential: $m(x \mid M_R) = \dfrac{(n-2)!}{n^n\,(\bar x - x_{(1)})^{n-1}}$

21 [Figure: histograms of the four data sets: Darwin's Data, n = 15; Cavendish's Data; Stigler's Data Set 9, n = 20; Generated Cauchy Data.]

22 The objective posterior probabilities of the five models (Normal, Uniform, Cauchy, Left Exponential, Right Exponential), for each of the four data sets (Darwin, Cavendish, Stigler, Cauchy). [Table: the numerical entries were lost in transcription.]
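Since the table's entries did not survive, here is a sketch (mine) of how such a table is produced from raw data using the closed-form marginals above; the Cauchy marginal is handled by crude numerical integration (integration limits are rough, assumed choices), and the data below are simulated rather than the lecture's four data sets:

```python
import numpy as np
from math import factorial, gamma, pi, sqrt
from scipy.integrate import dblquad

def m_normal(x):
    n, s2 = len(x), float(np.sum((x - x.mean()) ** 2))
    return gamma((n - 1) / 2) / (2 * pi ** ((n - 1) / 2) * sqrt(n) * s2 ** ((n - 1) / 2))

def m_uniform(x):
    n, r = len(x), float(x.max() - x.min())
    return 1 / (n * (n - 1) * r ** (n - 1))

def m_left_exp(x):
    n, d = len(x), float(x.max() - x.mean())
    return factorial(n - 2) / (n ** n * d ** (n - 1))

def m_right_exp(x):
    n, d = len(x), float(x.mean() - x.min())
    return factorial(n - 2) / (n ** n * d ** (n - 1))

def m_cauchy(x):
    # No simple closed form: integrate the Cauchy likelihood times 1/sigma,
    # over rough (mu, sigma) ranges that should capture most of the mass.
    def integrand(sigma, mu):
        return float(np.prod(1 / (pi * sigma * (1 + ((x - mu) / sigma) ** 2))) / sigma)
    val, _ = dblquad(integrand, x.min(), x.max(), 1e-3, 10 * float(x.std()))
    return val

rng = np.random.default_rng(1)
x = rng.standard_cauchy(15)  # simulated "Cauchy data"
m = np.array([m_normal(x), m_uniform(x), m_cauchy(x), m_left_exp(x), m_right_exp(x)])
for name, prob in zip(["Normal", "Uniform", "Cauchy", "L.Exp", "R.Exp"], m / m.sum()):
    print(f"{name}: {prob:.3f}")
```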

23 Choosing priors for non-common parameters

If subjective (evidence-based?) choice is not possible, some guidelines are:
- Vague proper priors are horrible (related to the Jeffreys-Lindley paradox): for instance, if X ∼ N(µ, 1) and we test H0: µ = 0 versus H1: µ ≠ 0 with a Uniform(−c, c) prior for µ, the Bayes factor is

$$B_{01}(c) = \frac{f(x \mid 0)}{\int_{-c}^{c} f(x \mid \mu)\,(2c)^{-1}\,d\mu} \approx \frac{2c\, f(x \mid 0)}{\int_{-\infty}^{\infty} f(x \mid \mu)\,d\mu}$$

for large c, which depends dramatically on the choice of c.
- Improper priors are problematical, because they are unnormalized; is

$$B_{01} = \frac{f(x \mid 0)}{\int f(x \mid \mu)\,(1)\,d\mu} \quad\text{or}\quad B_{01} = \frac{f(x \mid 0)}{\int f(x \mid \mu)\,(2)\,d\mu}\,?$$

- Robust solution: if one can specify a plausible range $c_1 \leq c \leq c_2$, look at $B_{01}(c)$ over this range and hope the conclusion is robust. (Not obvious for higher-dimensional parameters, but there is a literature.)
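A short demonstration of the c-dependence (my sketch; x = 2.0 is an arbitrary hypothetical observation, about two standard errors from the null):

```python
from scipy.integrate import quad
from scipy.stats import norm

x = 2.0  # hypothetical observation

def B01(c):
    # Uniform(-c, c) prior on mu under H1
    m1, _ = quad(lambda mu: norm.pdf(x, loc=mu) / (2 * c), -c, c)
    return norm.pdf(x, loc=0.0) / m1

for c in [1, 10, 100, 1000]:
    print(c, round(B01(c), 3))  # grows roughly linearly in c for large c
```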

24 Various proposed default priors for non-common parameters

- Fractional priors (O'Hagan): use a fraction γ of the model likelihood (usually γ = parameter dimension / sample size) as the prior, i.e., prior ∝ L(θ)^γ, with L(θ)^{1−γ} serving as the likelihood.
- Intrinsic priors (Berger, Pericchi and others): generate priors from training samples (either actual subsets of the data, or imaginary data generated under the null model).
- Conventional priors that have at least some nice properties: e.g., Zellner-Siow priors for linear models are
  - invariant to scale changes in covariates,
  - consistent (the true model will be selected as n → ∞),
  - information-consistent (e.g., will reject as t or F statistics → ∞),
  - coherent (roughly, are logically connected).
- Various efforts at predictive matching priors.
- Approximations (such as BIC); these can capture part of the prior influence, but not all. See the sketch below.
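On the last point, a minimal sketch (mine, not from the lecture) of the standard BIC surrogate $B_{01} \approx \exp(-(\mathrm{BIC}_0 - \mathrm{BIC}_1)/2)$, here for testing a N(0, 1) mean against a free mean on simulated data:

```python
import numpy as np

def bic(max_loglik, k, n):
    """BIC = -2 * maximized log-likelihood + k * log(n)."""
    return -2 * max_loglik + k * np.log(n)

rng = np.random.default_rng(0)
x = rng.normal(0.3, 1.0, size=50)  # simulated data; true mean 0.3, known sd 1

ll0 = np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * x ** 2)               # H0: mu = 0
ll1 = np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * (x - x.mean()) ** 2)  # H1: mu free

B01_approx = np.exp(-(bic(ll0, 0, len(x)) - bic(ll1, 1, len(x))) / 2)
print(B01_approx)  # crude: captures the Ockham penalty only approximately
```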

25 A Brief Aside: Approximating a believable precise hypothesis by an exact precise null hypothesis*

A precise null, like H0: θ = θ0, is typically never true exactly; rather, it is used as a surrogate for a real null $H_0^\epsilon: |\theta - \theta_0| < \epsilon$, ε small.

Result (Berger and Delampady, 1987 Statistical Science): robust Bayesian theory can be used to show that, under reasonable conditions, if $\epsilon < \frac{1}{4}\sigma_{\hat\theta}$, where $\sigma_{\hat\theta}$ is the standard error of the estimate of θ, then $\Pr(H_0^\epsilon \mid x) \approx \Pr(H_0 \mid x)$.

Note: typically $\sigma_{\hat\theta} \approx c/\sqrt{n}$, where n is the sample size, so for large n the above condition can be violated, and using a precise null may not be appropriate, even if the real null is believable.

*In non-precise hypothesis testing, such as one-sided testing, the discrepancy between p-values and Bayesian/conditional frequentist answers can be much smaller.

26 A Recent Medical Example

27 Hypotheses, Data, and Classical Test:
- Alvac had shown no effect.
- Aidsvax had shown no effect.

Question: Would Alvac as a primer and Aidsvax as a booster work?

The Study: conducted in Thailand with 16,395 individuals from the general (not high-risk) population:
- 74 HIV cases reported in the 8198 individuals receiving placebos;
- 51 HIV cases reported in the 8197 individuals receiving the treatment.

Model: X1 ∼ Binomial(x1 | p1, 8198) and X2 ∼ Binomial(x2 | p2, 8197), respectively, so that p1 and p2 denote the probability of HIV in the placebo and treatment populations, respectively.

The classical test of H0: p1 = p2 versus H1: p1 ≠ p2 yielded a p-value of 0.04.
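The reported p-value is easy to approximate from the counts; a sketch (mine; the lecture does not specify which classical test was used, so Fisher's exact test is just one reasonable choice):

```python
from scipy.stats import fisher_exact

# rows: (placebo, treatment); columns: (HIV, no HIV)
table = [[74, 8198 - 74], [51, 8197 - 51]]
odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(p)  # close to the reported 0.04
```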

28 Bayesian Analysis: Reparameterize to p1 and the vaccine efficacy V = 100(1 − p2/p1), so that we are testing

H0: V = 0, p1 arbitrary versus H1: V ≠ 0, p1 arbitrary.

Prior distribution: Pr(H_i) = prior probability that H_i is true, i = 0, 1.

Let π0(p1) = π1(p1), and choose them to be either
- uniform on (0, 1), or
- subjective (evidence-based?) priors based on knowledge of HIV infection rates.

Note: the answers are essentially the same for either choice.

For V under H1, consider the priors
- uniform on (−20, 60) (a priori subjective, evidence-based beliefs);
- uniform on (−100c/3, 100c) for 0 < c < 1, to study sensitivity (constrained also to V > 100(1 − 1/p1)).

29 Posterior probability of the null hypothesis:

$$\Pr(H_0 \mid \text{data}) = \frac{\Pr(H_0)\, B_{01}}{\Pr(H_0)\, B_{01} + \Pr(H_1)},$$

where the Bayes factor of H0 to H1 is

$$B_{01} = \frac{\int \mathrm{Binomial}(74 \mid p_1, 8198)\, \mathrm{Binomial}(51 \mid p_1, 8197)\, \pi_0(p_1)\, dp_1}{\int\!\!\int \mathrm{Binomial}(74 \mid p_1, 8198)\, \mathrm{Binomial}(51 \mid p_2, 8197)\, \pi_0(p_1)\, \pi_1(p_2 \mid p_1)\, dp_1\, dp_2}.$$

For the prior for V that is uniform on (−20, 60), B01 ≈ 1/4 (recall, p-value ≈ 0.04).

If the prior probabilities of the hypotheses are each 1/2, the overall posterior density of V has
- a point mass of size 0.20 at V = 0, and
- a density (having total mass 0.80) on non-zero values of V:
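B01 can be reproduced by brute-force Monte Carlo over (p1, V); a sketch (mine), which ignores the renormalization from the V > 100(1 − 1/p1) truncation, an approximation that should be harmless here because the likelihood concentrates near p1 ≈ 0.009:

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
N = 2_000_000

p1 = rng.uniform(0, 1, N)    # pi0(p1) = pi1(p1) = Uniform(0, 1)
num = binom.pmf(74, 8198, p1) * binom.pmf(51, 8197, p1)   # H0: p2 = p1

V = rng.uniform(-20, 60, N)  # prior on vaccine efficacy under H1
p2 = np.clip(p1 * (1 - V / 100), 0.0, 1.0)
valid = (p2 > 0) & (p2 < 1)
den = np.where(valid, binom.pmf(74, 8198, p1) * binom.pmf(51, 8197, p2), 0.0)

print(num.mean() / den.mean())  # should come out roughly 1/4
```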

30 [Figure: RV144 data, no adjustment: the posterior probability density of VE, plotted against VE.]

31 Robust Bayes: For the prior on V that is uniform on (−100c/3, 100c), the Bayes factor B01(c) is plotted as a function of c.

[Figure: Thai B01, ψ ∼ Un(−c/3, c): B01(c) plotted against c.]

Note: There is sensitivity to c, indeed 0.22 < B01(c) < 1, but why would this cause one to instead report p = 0.04, knowing it will be misinterpreted?

Note: Uniform priors are the extreme points of monotonic priors, and so such robustness curves are quite general.
