E. Santovetti, lesson 4: Maximum likelihood and interval estimation

1 E. Santovetti, lesson 4: Maximum likelihood and interval estimation

2 Extended Maximum Likelihood. Sometimes the total number of events n in the experiment is not fixed but is itself a Poisson random variable with mean ν. The extended likelihood function is then L(ν, θ) = (ν^n / n!) e^(-ν) ∏_i f(x_i; θ). If ν is a function of θ we have ln L(θ) = -ν(θ) + Σ_i ln[ν(θ) f(x_i; θ)], up to terms that do not depend on θ. Example: ν is the expected number of events of a certain process. Extended ML uses more information, so the errors on the parameters will be smaller compared to the case in which n is treated as independent. If ν does not depend on θ we recover the usual likelihood.
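
As a minimal sketch (not from the slides), the extended negative log-likelihood above can be coded directly; here `pdf` and `nu` are hypothetical placeholders for a normalized pdf f(x; θ) and for the expected event count ν(θ).

```python
import numpy as np

def extended_nll(theta, x, pdf, nu):
    """Extended negative log-likelihood.

    theta : parameter array
    x     : observed values, one entry per event
    pdf   : callable pdf(x, theta), normalized in x (hypothetical placeholder)
    nu    : callable nu(theta), expected number of events (hypothetical placeholder)
    """
    expected = nu(theta)
    # -ln L = nu(theta) - sum_i ln[ nu(theta) * f(x_i; theta) ]
    # (the n! term is dropped since it does not depend on theta)
    return expected - np.sum(np.log(expected * pdf(x, theta)))
```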

3 Extended ML example. Consider two types of events (e.g., signal and background), each of which predicts a given pdf for the variable x: fs(x) and fb(x). We observe a mixture of the two event types, with signal fraction θ, expected total number ν and observed total number n. Let s = θν and b = (1-θ)ν be the expected numbers of signal and background events, which are the quantities we want to estimate.

4 Extended ML example (2). Take a Gaussian pdf for the signal and an exponential pdf for the background, and maximize log L to find s and b. Here the errors reflect the total Poisson fluctuation as well as the fluctuation in the proportion of signal to background.
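
A hedged numerical sketch of such a fit; the x range, shape parameters and yields below are illustrative assumptions, not the values on the slide. The two yields s and b are floated in an extended likelihood and minimized with scipy.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, expon

rng = np.random.default_rng(1)
lo, hi = 0.0, 10.0                      # assumed x range
mu, sigma, tau = 5.0, 0.5, 3.0          # assumed shape parameters (fixed in this sketch)

def f_sig(x):                           # Gaussian signal pdf, normalized on [lo, hi]
    return norm.pdf(x, mu, sigma) / (norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma))

def f_bkg(x):                           # falling exponential background pdf on [lo, hi]
    return expon.pdf(x, scale=tau) / (expon.cdf(hi, scale=tau) - expon.cdf(lo, scale=tau))

# toy data: Poisson-fluctuated numbers of signal and background events
n_s, n_b = rng.poisson(100), rng.poisson(400)
x_sig = rng.normal(mu, sigma, n_s)      # stays inside [lo, hi] for these parameters
x_bkg = -tau * np.log(1 - rng.random(n_b) * (1 - np.exp(-hi / tau)))  # truncated exponential
x = np.concatenate([x_sig, x_bkg])

def enll(params):                       # extended negative log-likelihood in (s, b)
    s, b = params
    if s <= 0 or b <= 0:
        return np.inf
    return (s + b) - np.sum(np.log(s * f_sig(x) + b * f_bkg(x)))

res = minimize(enll, x0=[50.0, 300.0], method="Nelder-Mead")
s_hat, b_hat = res.x
print(f"s_hat = {s_hat:.1f}, b_hat = {b_hat:.1f}, n = {len(x)}")
```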

5 Unphysical values for estimators. Here the estimator can take unphysical values; it is nevertheless unbiased, and such estimates should still be reported, since the average of a large number of unbiased estimates converges to the true value (cf. PDG). Repeat the entire MC experiment many times, allowing unphysical estimates.

6 Extended ML example II. The likelihood does not provide any information on the goodness of the fit; this has to be checked separately. Simulate toy MC samples according to the estimated pdf (using the fit results from data as the true parameter values) and compare the maximum likelihood value in each toy to the one obtained in data. Alternatively, draw the data in a (binned) histogram and compare the distribution with the result of the ML fit.
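
A hedged sketch of the toy-MC check, using a simple exponential lifetime fit as a stand-in model (the true lifetime of 1.5 and the sample sizes are assumptions): the p-value is the fraction of toys whose maximized log-likelihood is worse than the one found in data.

```python
import numpy as np

rng = np.random.default_rng(2)

def lnL_max(t):
    """Maximized log-likelihood for an exponential pdf f(t) = exp(-t/tau)/tau;
    the ML estimate is tau_hat = mean(t)."""
    tau_hat = t.mean()
    return np.sum(-np.log(tau_hat) - t / tau_hat)

# "data" (here itself simulated, with an assumed true lifetime of 1.5)
data = rng.exponential(1.5, size=500)
lnL_data = lnL_max(data)
tau_fit = data.mean()

# toy MC generated from the fitted pdf, i.e. fit results from data used as true values
lnL_toys = np.array([lnL_max(rng.exponential(tau_fit, size=len(data)))
                     for _ in range(2000)])

# goodness-of-fit p-value: fraction of toys with a smaller (worse) maximum log-likelihood
p = np.mean(lnL_toys <= lnL_data)
print(f"tau_hat = {tau_fit:.3f}, p-value = {p:.3f}")
```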

7 Extended ML example II. Again we want to distinguish (and count) signal events with respect to background events. The signal is a B-meson decay into two daughter particles; the background is combinatorial: the vertex (and hence the candidate particle) is reconstructed from the wrong tracks. To separate signal from background we can use two main handles: 1) the invariant mass of the two daughter particles has to peak at the B meson mass; 2) the time of flight of the B meson candidate has to be of the order of the B meson lifetime. These two variables behave in completely different ways for the two event categories. Let us look at the distributions of these variables.

8 Extended ML example II. A first look at the distributions (mass and time) allows us to state: the pdf for the signal mass is a double Gaussian; the pdf for the signal time is a (negative) exponential; the pdf for the background mass is an exponential (almost flat); the pdf for the background time is an exponential plus a Lorentzian. We build the total pdf from these components and form the extended likelihood. By maximizing the likelihood we can estimate the numbers of signal and background events as well as the B meson mass and lifetime.

9 Extended ML example II. [Fit projections of the mass and lifetime distributions, showing the signal, background and total components overlaid on all the data.] The fit is done with the RooFit package (ROOT).

10 Weighted maximum likelihood. Suppose we want to measure the polarization of the J/Ψ meson (J^PC = 1--). The measurement can be done by looking at the angular distribution of the decay products of the meson itself: θ and φ are respectively the polar and azimuthal angles of the positive muon in the J/Ψ → μ+μ- decay, measured in the J/Ψ rest frame, choosing the J/Ψ direction in the lab frame as polar axis.

11 Weighted likelihood: polarization measurement. We have to measure the angular distribution and fit it with the function above. There are two main problems to face: when we select our signal there is an unavoidable amount of background events (evident from the mass distribution), and the angular distribution of the background events is unknown and also very difficult to parametrize. The likelihood function is built from ε(θ, φ) P(θ, φ; λ) divided by a normalization Norm(λ), where ε is the total detection efficiency, P is the angular function above and Norm is a normalization factor that makes the probability normalized to 1.

12 Weighted likelihood: polarization measurement. The efficiency term in the likelihood does not depend on the λ parameters and is therefore a constant in the maximization procedure. In order to take the background events into account, the likelihood sum is extended to all events, but with proper weights for the signal region and for the left and right mass side bands. Two hypotheses are used: the background mass distribution is linear (well satisfied; otherwise we can always account for deviations by readjusting the weights), and the combinatorial background angular distributions are the same in the signal region and in the side-band regions (this can be demonstrated by shifting the three regions 300 MeV up). The background events' contribution then cancels out if the weights are chosen appropriately, positive in the signal region and negative in the side bands, such that the weighted background sum vanishes.

13 Weighted likelihood: polarization measurement. How do we evaluate the Norm function (which depends on the detector efficiency)? We can again use the MC simulation: consider an unpolarized sample, so that P = 1, and take the sum over the MC events.

14 Weighted likelihood: polarization measurement. Then from the MC events we can compute the normalization function as a sum of P(θ_j, φ_j; λ) over the unpolarized, fully simulated sample.
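
A hedged, simplified sketch of the whole scheme: a 1-D angular function, an assumed weight convention of +1 in the signal region and negative weights in the side bands, and an unpolarized MC sample for the normalization. None of these numbers come from the slides.

```python
import numpy as np

def P(costh, lam):
    """Assumed simplified 1-D angular function: W(cos theta) proportional to 1 + lam*cos^2(theta)."""
    return 1.0 + lam * costh**2

def norm_from_mc(lam, costh_mc):
    """Normalization estimated from an unpolarized, fully simulated MC sample:
    Norm(lam) ~ (1/N_MC) * sum_j P(cos theta_j; lam), which folds in the efficiency."""
    return np.mean(P(costh_mc, lam))

def weighted_nll(lam, costh, weights, costh_mc):
    """Weighted negative log-likelihood:
    -sum_i w_i * [ ln P(cos theta_i; lam) - ln Norm(lam) ].
    Signal-region events enter with w = +1, side-band events with a negative weight
    chosen so that the background contribution cancels on average."""
    return -np.sum(weights * (np.log(P(costh, lam)) - np.log(norm_from_mc(lam, costh_mc))))

# toy usage with hypothetical arrays (placeholders, not the analysis data)
rng = np.random.default_rng(3)
costh = rng.uniform(-1, 1, 1000)
weights = np.where(rng.random(1000) < 0.8, 1.0, -0.5)   # assumed weight scheme
costh_mc = rng.uniform(-1, 1, 5000)                      # unpolarized MC sample

lams = np.linspace(-1, 1, 201)
nll = [weighted_nll(l, costh, weights, costh_mc) for l in lams]
print("lambda_hat =", lams[int(np.argmin(nll))])
```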

15 The sPlot technique

16 Relationship between ML and Bayesian estimators. In Bayesian statistics both θ and x are random variables. In the Bayes approach, if θ is a certain hypothesis, p(θ|x) = L(x|θ) π(θ) / ∫ L(x|θ') π(θ') dθ', where p(θ|x) is the posterior pdf for θ (the conditional pdf for θ given x) and π(θ) is the prior probability for θ. Purist Bayesian: p(θ|x) contains all the information about θ. Pragmatist Bayesian: p(θ|x) can be a complicated function, so summarize it by using a new estimator. Looking at p(θ|x): what do we use for π(θ)? There is no golden rule (it is subjective!); prior ignorance is often represented by π(θ) = constant, in which case the posterior is proportional to the likelihood and its mode coincides with the ML estimate. But we could have used a different parameter, e.g. λ = 1/θ, and if the prior π(θ) is constant, then π(λ) is not! Complete prior ignorance is not well defined.

17 Relationship between ML and Bayesian estimators. The main concern expressed by frequentist statisticians regarding the use of Bayesian probability is its intrinsic dependence on a prior probability that can be chosen in an arbitrary way; this arbitrariness makes Bayesian probability to some extent subjective. Adding more measurements increases one's knowledge of the unknown parameter, hence the posterior probability depends less on, and is less sensitive to, the choice of the prior. When a large number of measurements is available, the results of Bayesian calculations tend in most cases to be identical to those of frequentist calculations. However, many interesting statistical problems arise at low statistics, i.e. with a small number of measurements; there Bayesian and frequentist methods usually lead to different results, and in the Bayesian approach the choice of the prior probabilities plays a crucial role and has a great influence on the results. One main difficulty is how to choose a PDF that models one's complete ignorance about an unknown parameter. One could naively choose a uniform ("flat") PDF in the interval of validity of the parameter, but it is clear that if we change the parametrization from x to a function of x (say log x or 1/x), the transformed parameter will no longer have a uniform prior PDF.
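
A quick numerical illustration of that last point (the interval [1, 10] below is an arbitrary choice): sampling θ from a flat prior and histogramming λ = 1/θ shows that the induced prior on λ goes like 1/λ², not a constant.

```python
import numpy as np

rng = np.random.default_rng(4)

# theta uniform ("flat prior") on an arbitrary interval, here [1, 10]
theta = rng.uniform(1.0, 10.0, 1_000_000)
lam = 1.0 / theta                        # reparametrize: lambda = 1/theta lives on [0.1, 1]

# the induced prior is p(lambda) = p(theta) * |d theta / d lambda| = (1/9) / lambda^2
hist, edges = np.histogram(lam, bins=5, range=(0.1, 1.0), density=True)
for lo, hi, h in zip(edges[:-1], edges[1:], hist):
    expected = (1.0 / 9.0) * (1.0 / lo - 1.0 / hi) / (hi - lo)   # bin average of (1/9)/lambda^2
    print(f"lambda in [{lo:.2f}, {hi:.2f}]: sampled density {h:.2f}, predicted {expected:.2f}")
```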

18 The Jeffreys prior. One possible approach was proposed by Harold Jeffreys: adopt a prior PDF that is invariant under parameter transformation. This choice is π(θ) ∝ √(det I(θ)), where I(θ) is the Fisher information matrix. Examples of Jeffreys prior distributions for some important parameters are listed below.
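
For reference, a few textbook Jeffreys priors (standard results, not read from the slide's table):

```latex
% Jeffreys prior: \pi(\theta) \propto \sqrt{\det \mathcal{I}(\theta)}
\begin{align*}
  \text{Poisson mean } \mu:                            &\quad \pi(\mu) \propto 1/\sqrt{\mu} \\
  \text{Gaussian mean } \mu \ (\sigma \text{ known}):  &\quad \pi(\mu) \propto \text{const.} \\
  \text{Gaussian width } \sigma \ (\mu \text{ known}): &\quad \pi(\sigma) \propto 1/\sigma \\
  \text{Binomial efficiency } \varepsilon:             &\quad \pi(\varepsilon) \propto 1/\sqrt{\varepsilon(1-\varepsilon)}
\end{align*}
```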

19 Interval estimation, setting limits

20 Interval estimation: introduction. In addition to a point estimate of a parameter we should report an interval reflecting its statistical uncertainty. Desirable properties of such an interval may include: communicating objectively the result of the experiment; having a given probability of containing the true parameter; providing the information needed to draw conclusions about the parameter, possibly incorporating stated prior beliefs. Often one uses +/- the estimated standard deviation of the estimator. In some cases, however, this is not adequate: for instance an estimate near a physical boundary, e.g. an observed event rate consistent with zero. We will look briefly at frequentist and Bayesian intervals.

21 Neyman confidence intervals. A rigorous procedure to obtain confidence intervals in the frequentist approach. Consider an estimator θ̂ for a parameter θ (the measurable quantity), and its sampling pdf g(θ̂; θ). Specify upper and lower tail probabilities, e.g. α = 0.05, β = 0.05, then find functions uα(θ) and vβ(θ) such that P(θ̂ ≥ uα(θ)) = α and P(θ̂ ≤ vβ(θ)) = β, the probabilities being integrals of g(θ̂; θ) over the possible estimator values. For each true parameter value θ this gives an interval [vβ(θ), uα(θ)] for the estimator with probability content CL = 1-α-β. Note that this is an interval for the estimator, and there is no unique way to define it at a given CL.

22 Confidence interval from the confidence belt. The confidence belt is the region between vβ(θ) and uα(θ), drawn as a function of the parameter θ. Find the points where the observed estimate θ̂_obs intersects the confidence belt: this gives the confidence interval [a, b] for the true parameter. The confidence level 1-α-β is the probability for the interval to cover the true value of the parameter, and this holds for any possible true θ.
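
A hedged sketch of the construction for the simplest case, a Gaussian estimator with known unit resolution (the observed value 2.3 is just an illustration): the belt edges uα(θ) and vβ(θ) are Gaussian quantiles, and inverting them at the observed estimate gives the central 90% CL interval.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

sigma = 1.0                 # assumed known resolution of the estimator
alpha = beta = 0.05         # tail probabilities, so CL = 1 - alpha - beta = 90%

def u(theta):               # upper belt edge: P(theta_hat >= u) = alpha
    return norm.ppf(1 - alpha, loc=theta, scale=sigma)

def v(theta):               # lower belt edge: P(theta_hat <= v) = beta
    return norm.ppf(beta, loc=theta, scale=sigma)

theta_hat_obs = 2.3         # hypothetical observed estimate

# invert the belt: a solves u(a) = theta_hat_obs, b solves v(b) = theta_hat_obs
a = brentq(lambda th: u(th) - theta_hat_obs, theta_hat_obs - 10, theta_hat_obs + 10)
b = brentq(lambda th: v(th) - theta_hat_obs, theta_hat_obs - 10, theta_hat_obs + 10)
print(f"90% CL central interval: [{a:.3f}, {b:.3f}]")   # Gaussian case: obs -/+ 1.645*sigma
```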

23 Confidence intervals by inverting a test. Confidence intervals for a parameter θ can be found by defining a test of the hypothesized value θ (doing this for all θ): define the values of the data that are disfavored by θ (the critical region) such that P(data in critical region) ≤ γ for a specified γ, e.g. 0.05 or 0.1. If the data are observed in the critical region, reject the value θ. Now invert the test to define a confidence interval as the set of θ values that would not be rejected in a test of size γ (the confidence level is 1 - γ). The interval will cover the true value of θ with probability 1 - γ (the statement refers to repeating the data collection many times). This is equivalent to the confidence belt construction; the confidence belt is the acceptance region of the test.

24 Relation between confidence interval and p-value. Equivalently, we can consider a significance test for each hypothesized value of θ, resulting in a p-value pθ. The confidence interval at CL = 1 - γ consists of those values of θ that are not rejected; e.g. an upper limit on θ is the greatest value for which pθ ≥ γ. In practice one finds it by setting pθ = γ and solving for θ.

25 Confidence intervals in practice. In practice, in order to find the interval [a, b] we have to solve uα(a) = θ̂_obs to get a, and vβ(b) = θ̂_obs to get b. Equivalently, a is the hypothetical value of θ such that P(θ̂ ≥ θ̂_obs; a) = α, and b is the hypothetical value of θ such that P(θ̂ ≤ θ̂_obs; b) = β.

26 Meaning of a confidence interval. Important to keep in mind: the interval is random, while the true θ is an unknown constant. Often we report this interval as θ̂ (+c, -d). This does not mean that θ lies in the fixed, observed interval with probability 1-α-β; rather: repeat the measurement many times, building the interval according to the same prescription each time, and in a fraction 1-α-β of the experiments the interval will contain θ.

27 Central vs. one-sided confidence intervals. Once the CL is fixed, the choice of α and β is not unique; in the literature this choice is called the ordering rule. Sometimes only α or only β is specified: a one-sided interval (limit). Often α = β = γ/2: coverage probability 1 - γ, a central confidence interval. N.B.: a central confidence interval does not mean a symmetric interval around θ. In HEP the convention for quoting the error is α = β = γ/2 with 1 - γ = 68.3% (1σ).

28 Intervals from the likelihood function. In the large-sample limit it can be shown that the ML estimators follow an N-dimensional Gaussian with covariance matrix V; this defines a hyper-ellipsoidal confidence region, which is exact if θ̂ really follows a multi-dimensional Gaussian.

29 Approximate confidence regions from L(θ). So the recipe to find the confidence region with CL = 1 - γ is: take the region of θ where ln L(θ) ≥ ln L(θ̂) - Qγ/2, with Qγ the χ² quantile of probability content 1 - γ for N degrees of freedom (N = number of parameters). For finite samples these are approximate confidence regions: the coverage probability is not guaranteed to be exactly 1 - γ, and there is no simple theorem to say by how far off it will be (use MC). Remember, here the interval is random, not the parameter.
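
A hedged sketch of this recipe for a single parameter (an exponential lifetime fit with an assumed true value of 2.0): the 68.3% CL interval is the set of τ with ln L(τ) ≥ ln L(τ̂) - 1/2.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(5)
t = rng.exponential(2.0, size=200)       # toy data with an assumed true lifetime of 2.0

def lnL(tau):
    return np.sum(-np.log(tau) - t / tau)

tau_hat = t.mean()                        # ML estimate for the exponential pdf
target = lnL(tau_hat) - 0.5               # Delta lnL = 1/2 corresponds to CL = 68.3% for 1 parameter

# the two crossings of lnL(tau) with the target value give the interval edges
low = brentq(lambda tau: lnL(tau) - target, 1e-3, tau_hat)
high = brentq(lambda tau: lnL(tau) - target, tau_hat, 100.0)
print(f"tau_hat = {tau_hat:.3f}, 68.3% CL interval: [{low:.3f}, {high:.3f}]")
```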

30 Example of an interval from ln L(θ). For N = 1 parameter and CL = 1 - γ = 0.683, the quantile is Q = 1, so the interval is given by ln L(θ) ≥ ln L(θ̂) - 1/2.

31 Setting limits on a Poisson parameter. Consider again the case in which we have a sample of events containing signal and background (with means s and b), both of which are Poisson variables. Suppose we can say how much background we expect. Unfortunately, the number of events we observe is compatible with the expected background: there is clearly no evidence of signal. This means that we cannot exclude s = 0, but we can still put an upper limit on the number of signal events.

32 Upper limit for a Poisson parameter. We have to find the hypothetical value of s such that there is a given small probability, say γ = 0.05, to find as few events as we observed or fewer: P(n ≤ n_obs; s, b) = Σ from k = 0 to n_obs of (s + b)^k e^(-(s+b)) / k! = γ. Solving numerically for s gives an upper limit s_up at a confidence level of 1 - γ (usually 0.95). For example, suppose b = 0 and we find n = 0: then e^(-s_up) = γ, i.e. s_up = -ln γ ≈ 3.0 at 95% CL.
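
A hedged sketch of the numerical solution with scipy (the b and n values in the examples are illustrative). Note that if b is large and n is small, the equation can have no non-negative solution, which is the problem discussed on the following slides.

```python
import numpy as np
from scipy.stats import poisson
from scipy.optimize import brentq

def upper_limit(n_obs, b, gamma=0.05):
    """Classical upper limit: solve P(n <= n_obs; s_up + b) = gamma for s_up.
    If b is large and n_obs small, poisson.cdf(n_obs, b) may already be below
    gamma and no non-negative solution exists (empty interval)."""
    return brentq(lambda s: poisson.cdf(n_obs, s + b) - gamma, 0.0, 1000.0)

print(upper_limit(0, 0.0))      # ~2.996, i.e. -ln(0.05): the classic "3 events" limit
print(-np.log(0.05))            # cross-check
print(upper_limit(3, 0.5))      # illustrative case with some expected background
```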

33 Calculating Poisson parameter limits. To find the lower and upper limits we can use the relation between the Poisson cumulative distribution and the χ² distribution: s_lo = (1/2) F⁻¹_χ²(γ_lo; 2n) - b and s_up = (1/2) F⁻¹_χ²(1 - γ_up; 2(n+1)) - b, where γ_lo and γ_up are the chosen tail probabilities and F⁻¹_χ² is the χ² quantile function. For a low fluctuation of n this can give a negative result for s_up, i.e. the confidence interval is empty.

34 Limits near a physical boundary. Suppose e.g. b = 2.5 and we observe n = 0. If we choose CL = 0.9, the formula for s_up gives a negative value!? Physicist: we already knew s ≥ 0 before we started; we can't use a negative upper limit to report the result of an expensive experiment! Statistician: the interval is designed to cover the true value only 90% of the time; this was clearly not one of those times. This is not an uncommon dilemma when the limit on a parameter is close to a physical boundary.

35 Expected limit for s = 0. Physicist: I should have used CL = 0.95, then s_up ≈ 0.50; even better, for CL ≈ 0.92 we get s_up = 10⁻⁴! We are just not taking the background fluctuation into account. Reality check: with b = 2.5 the typical Poisson fluctuation in n is at least √2.5 ≈ 1.6, so how can the limit be so low? Look at the mean limit for the no-signal hypothesis (s = 0), i.e. the sensitivity: the distribution of 95% CL limits with b = 2.5 and s = 0 has a mean upper limit of 4.44. With N MC pseudo-experiments, each drawing n from a Poisson with μ = 2.5, one extracts n and then evaluates s_up at 95% CL.
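
A hedged sketch reproducing this sensitivity number, using the same upper-limit construction as above (the number of toys is arbitrary):

```python
import numpy as np
from scipy.stats import poisson
from scipy.optimize import brentq

rng = np.random.default_rng(6)
b, gamma = 2.5, 0.05

def upper_limit(n_obs):
    # P(n <= n_obs; s_up + b) = gamma, as on the previous slides
    return brentq(lambda s: poisson.cdf(n_obs, s + b) - gamma, 0.0, 200.0)

# distribution of limits under the no-signal hypothesis (s = 0): n ~ Poisson(b)
n_toys = rng.poisson(b, size=10000)
ns, counts = np.unique(n_toys, return_counts=True)
limits = np.array([upper_limit(n) for n in ns])
print(f"mean 95% CL upper limit for s = 0, b = 2.5: {np.average(limits, weights=counts):.2f}")  # ~4.4
```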

36 The flip-flopping problem. In order to determine confidence intervals, a consistent choice of ordering rule has to be adopted. Feldman and Cousins showed that the choice of ordering rule must not depend on the outcome of the measurement, otherwise the quoted confidence intervals or upper limits can be incorrect. In some cases, experiments searching for a rare signal choose, when quoting their result, to switch from a central interval to an upper limit depending on the outcome of the measurement. A typical choice is to quote an upper limit if the significance of the observed signal is smaller than 3σ, and a central interval otherwise. We then have to quote the error at a fixed CL, say 90%: if x ≥ 3σ we choose a symmetric interval (5% in each tail), while if x < 3σ an upper limit implies a completely asymmetric interval.

37 The flip-flopping problem. From a single measurement of x we can decide to quote an interval with a certain CL if x > 3σ, or we can decide to quote only an upper limit if our measurement gives x < 3σ.

38 The flip-flopping problem. The choice to switch from a central interval to a fully asymmetric interval (upper limit) based on the observation of x clearly spoils the statistical coverage. Looking at the figure: depending on the value of μ, for the interval [x1, x2] obtained by crossing the confidence belt with a horizontal line, one may have cases where the coverage decreases from 90% to 85%, which is lower than the desired CL. To avoid flip-flopping, decide before the measurement whether you will quote a limit or a two-sided interval, and stick to it. Or use Feldman-Cousins.

39 The Feldman-Cousins method. The ordering rule proposed by Feldman and Cousins provides a Neyman confidence belt that smoothly changes from a central or quasi-central interval to an upper limit in the case of a low observed signal yield. The ordering rule is based on the likelihood ratio: given a value θ0 of the unknown parameter, under a Neyman construction the interval chosen in the variable x is defined from the ratio of two PDFs of x, one under the hypothesis that θ is equal to the considered fixed value θ0, the other under the hypothesis that θ is equal to the maximum-likelihood estimate θ_best(x) corresponding to the given measurement x.

40 Feldman-Cousins: Gaussian case. Let us apply the Feldman-Cousins method to a Gaussian distribution with a physical boundary, μ ≥ 0, so that μ_best = max(0, x). When we divide f(x|μ) by f(x|μ_best) we obtain a ratio R(x) that is an asymmetric function with a longer tail towards negative x values. Using the Feldman-Cousins approach, for large x we have the usual symmetric confidence interval; going to small x (close to the boundary) the interval becomes more and more asymmetric, and at a certain point it becomes a completely asymmetric interval (an upper limit).
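
A hedged, grid-based sketch of this construction for unit σ and the boundary μ ≥ 0 (the grid ranges and the 90% CL are illustrative choices; a real implementation would refine the grid):

```python
import numpy as np
from scipy.stats import norm

X = np.linspace(-5, 10, 3001)                 # measurement grid
DX = X[1] - X[0]

def fc_accept(mu, cl=0.90):
    """Feldman-Cousins acceptance interval in x for a Gaussian of unit sigma
    with the physical boundary mu >= 0 (grid approximation)."""
    mu_best = np.maximum(X, 0.0)              # ML estimate restricted to the physical region
    R = norm.pdf(X, mu, 1.0) / norm.pdf(X, mu_best, 1.0)
    p = norm.pdf(X, mu, 1.0) * DX
    order = np.argsort(R)[::-1]               # include x values in decreasing R
    accepted = order[np.cumsum(p[order]) <= cl]
    return X[accepted].min(), X[accepted].max()

def fc_interval(x_obs, cl=0.90, mu_grid=np.linspace(0, 10, 501)):
    """Invert the belt: the mu values whose acceptance interval contains x_obs."""
    edges = [fc_accept(mu, cl) for mu in mu_grid]
    mus = [mu for mu, (lo, hi) in zip(mu_grid, edges) if lo <= x_obs <= hi]
    return min(mus), max(mus)

print(fc_interval(3.0))    # far from the boundary: essentially the usual central interval
print(fc_interval(-1.0))   # at/below the boundary: lower edge 0, i.e. an upper limit
```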

41 The Bayesian approach. In Bayesian statistics we need to start with a prior pdf π(θ), which reflects the degree of belief about θ before doing the experiment. Bayes' theorem tells us how our beliefs should be updated in light of the data x: p(θ|x) ∝ L(x|θ) π(θ). Then we integrate the posterior probability up to the desired credibility level: for the Poisson case, with 95% CL, we require ∫ from 0 to s_up of p(s|n) ds = 0.95.

42 Bayesian prior for a Poisson parameter. Include the knowledge that s ≥ 0 by setting the prior π(s) = 0 for s < 0. Prior ignorance is often represented by e.g. a flat prior, π(s) = constant for s ≥ 0. This is not normalized, but that is OK as long as L(s) dies off for large s. It is not invariant under a change of parameter: if we had instead used a flat prior for, say, the mass of the Higgs boson, this would imply a non-flat prior for the expected number of Higgs events. It does not really reflect a reasonable degree of belief, but it is often used as a point of reference, or viewed as a recipe for producing an interval whose frequentist properties can be studied (the coverage will depend on the true s).

43 Bayesian interval with a flat prior for s. Solve numerically ∫ from 0 to s_up of p(s|n) ds = 1 - γ to find the limit s_up. For the special case b = 0, the Bayesian upper limit with a flat prior is numerically the same as the classical one (a coincidence). Otherwise the Bayesian limit is everywhere greater than the classical one ("conservative"), it never goes negative, and it does not depend on b if n = 0.
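
A hedged numerical sketch of this Bayesian limit (the n and b values in the examples are illustrative):

```python
import numpy as np
from scipy.stats import poisson
from scipy.integrate import quad
from scipy.optimize import brentq

def bayes_upper_limit(n_obs, b, cred=0.95):
    """Bayesian upper limit with a flat prior on s >= 0:
    posterior p(s|n) proportional to (s+b)^n * exp(-(s+b))."""
    like = lambda s: poisson.pmf(n_obs, s + b)
    norm_const, _ = quad(like, 0.0, np.inf)
    post_cdf = lambda s_up: quad(like, 0.0, s_up)[0] / norm_const
    return brentq(lambda s_up: post_cdf(s_up) - cred, 0.0, 100.0)

print(bayes_upper_limit(0, 0.0))   # ~3.0, same as the classical limit (the "coincidence")
print(bayes_upper_limit(0, 2.5))   # does not depend on b for n = 0: still ~3.0
print(bayes_upper_limit(3, 0.5))   # generally larger than the classical limit
```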
