Lectures on Statistical Data Analysis. Lecture 3. G. Cowan.


Lecture 3 outline:

1. Probability (90 min.): definition, Bayes' theorem, probability densities and their properties, catalogue of pdfs, Monte Carlo
2. Statistical tests (90 min.): general concepts, test statistics, multivariate methods, goodness-of-fit tests
3. Parameter estimation (90 min.): general concepts, maximum likelihood, variance of estimators, least squares
4. Interval estimation (60 min.): setting limits
5. Further topics (60 min.): systematic errors, MCMC

Lecture 3 page 1

Parameter estimation. The parameters of a pdf are constants that characterize its shape, e.g. f(x; θ), where x is the random variable and θ is the parameter. Suppose we have a sample of observed values: x = (x1, ..., xn). We want to find some function of the data to estimate the parameter(s): θ̂(x), the estimator, written with a hat. Sometimes we say "estimator" for the function of x1, ..., xn and "estimate" for the value of the estimator obtained with a particular data set. Lecture 3 page 2

Properties of estimators. If we were to repeat the entire measurement, the estimates from each repetition would follow a pdf. (Figure: sampling distributions illustrating a "best" estimator, one with large variance, and a biased one.) We want a small (or zero) bias (systematic error), b = E[θ̂] − θ: the average of repeated estimates should tend to the true value. And we want a small variance (statistical error), V[θ̂]. Small bias and small variance are in general conflicting criteria. Lecture 3 page 3

An estimator for the mean (expectation value). Parameter: μ = E[x]. Estimator: μ̂ = (1/n) Σi xi (the sample mean). We find: b = E[μ̂] − μ = 0 (unbiased) and V[μ̂] = σ²/n. Lecture 3 page 4

An estimator for the variance. Parameter: σ² = V[x]. Estimator: s² = (1/(n−1)) Σi (xi − x̄)² (the sample variance). We find: b = E[s²] − σ² = 0 (the factor of n−1 makes this so), and V[s²] = (1/n) ( m4 − ((n−3)/(n−1)) m2² ), where mk = E[(x − μ)^k] is the k-th central moment. Lecture 3 page 5
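
A quick numerical check of these properties (purely illustrative, not part of the original slides; arbitrary Gaussian toy data) can be done by repeating the "experiment" many times:

```python
import numpy as np

rng = np.random.default_rng(1)
mu_true, sigma2_true, n = 2.0, 4.0, 10

# Repeat the "experiment" many times and average the estimates.
n_exp = 200000
x = rng.normal(mu_true, np.sqrt(sigma2_true), size=(n_exp, n))

mu_hat = x.mean(axis=1)                 # sample mean
s2 = x.var(axis=1, ddof=1)              # sample variance, factor 1/(n-1)
s2_ml = x.var(axis=1, ddof=0)           # variance estimator with factor 1/n

print("E[mu_hat] ~", mu_hat.mean())     # ~ mu_true (unbiased)
print("V[mu_hat] ~", mu_hat.var(), " vs sigma^2/n =", sigma2_true / n)
print("E[s^2]    ~", s2.mean())         # ~ sigma^2 (unbiased)
print("E[1/n ver]~", s2_ml.mean())      # ~ (n-1)/n * sigma^2 (biased)
```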

The likelihood function. Consider n independent observations of x: x1, ..., xn, where x follows f(x; θ). The joint pdf for the whole data sample is f(x1, ..., xn; θ) = Πi f(xi; θ). Now evaluate this function with the data sample obtained and regard it as a function of the parameter(s). This is the likelihood function: L(θ) = Πi f(xi; θ) (with the xi treated as constant). Lecture 3 page 6

Maximum likelihood estimators. If the hypothesized θ is close to the true value, then we expect a high probability of getting data like that which we actually found. So we define the maximum likelihood (ML) estimator(s) to be the parameter value(s) for which the likelihood is maximum. ML estimators are not guaranteed to have any optimal properties (but in practice they're very good). Lecture 3 page 7

ML example: parameter of exponential pdf. Consider the exponential pdf f(x; τ) = (1/τ) e^(−x/τ), and suppose we have data x1, ..., xn. The likelihood function is L(τ) = Πi (1/τ) e^(−xi/τ). The value of τ for which L(τ) is maximum also gives the maximum value of its logarithm (the log-likelihood function): ln L(τ) = Σi ln f(xi; τ) = Σi ( ln(1/τ) − xi/τ ). Lecture 3 page 8

ML example: parameter of exponential pdf (2). Find its maximum by setting ∂ln L/∂τ = 0, which gives τ̂ = (1/n) Σi xi. Monte Carlo test: generate 50 values using τ = 1 and compute the ML estimate (result shown on the slide). (Exercise: show this estimator is unbiased.) Lecture 3 page 9
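
As an illustration (a minimal sketch, not the slide's actual Monte Carlo), one can generate 50 exponential values and compare the closed-form estimate τ̂ = (1/n) Σ xi with a direct numerical maximization of ln L:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
x = rng.exponential(scale=1.0, size=50)    # 50 values generated with tau = 1

def neg_log_L(tau):
    # ln L(tau) = sum_i [ -ln(tau) - x_i / tau ]
    return -np.sum(-np.log(tau) - x / tau)

tau_hat_analytic = x.mean()                # ML estimator: the sample mean
res = minimize_scalar(neg_log_L, bounds=(0.01, 10.0), method="bounded")

print("analytic tau_hat =", tau_hat_analytic)
print("numeric  tau_hat =", res.x)         # should agree with the analytic value
```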

Functions of ML estimators. Suppose we had written the exponential pdf as f(x; λ) = λ e^(−λx), i.e., we use λ = 1/τ. What is the ML estimator for λ? For a function a(θ) of a parameter θ, it doesn't matter whether we express L as a function of θ or of a: the ML estimator of a(θ) is simply â = a(θ̂). So for the decay constant we have λ̂ = 1/τ̂ = n / Σi xi. Caveat: λ̂ is biased, even though τ̂ is unbiased; one can show E[λ̂] = (n/(n−1)) λ (the bias → 0 for n → ∞). Lecture 3 page 10
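
The bias of λ̂ can be checked numerically; the following sketch (illustrative values only) repeats the experiment many times with a small n so that the n/(n−1) factor is clearly visible:

```python
import numpy as np

rng = np.random.default_rng(7)
tau_true, n, n_exp = 1.0, 5, 500000

x = rng.exponential(scale=tau_true, size=(n_exp, n))
tau_hat = x.mean(axis=1)        # unbiased
lam_hat = 1.0 / tau_hat         # biased by factor n/(n-1)

print("E[tau_hat] ~", tau_hat.mean())
print("E[lam_hat] ~", lam_hat.mean(),
      " expected n/(n-1)/tau =", n / (n - 1) / tau_true)
```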

Example of ML: parameters of Gaussian pdf. Consider independent x1, ..., xn, with xi ~ Gaussian(μ, σ²). The log-likelihood function is ln L(μ, σ²) = Σi ln f(xi; μ, σ²) = Σi [ −(1/2) ln(2πσ²) − (xi − μ)² / (2σ²) ]. Lecture 3 page 11

Example of ML: parameters of Gaussian pdf (2). Set the derivatives with respect to μ and σ² to zero and solve: μ̂ = (1/n) Σi xi, σ̂² = (1/n) Σi (xi − μ̂)². We already know that the estimator for μ is unbiased. For σ², however, we find E[σ̂²] = ((n−1)/n) σ², so the ML estimator for σ² has a bias b = −σ²/n, but b → 0 for n → ∞. Recall, however, that s² (with the factor 1/(n−1)) is an unbiased estimator for σ². Lecture 3 page 12

Variance of estimators: Monte Carlo method. Having estimated our parameter, we now need to report its statistical error, i.e., how widely distributed the estimates would be if we were to repeat the entire measurement many times. One way to do this is to simulate the entire experiment many times with a Monte Carlo program (using the ML estimate as the parameter value in the MC). For the exponential example, the statistical error is obtained from the sample standard deviation of the MC estimates (value shown on the slide). Note that the distribution of estimates is roughly Gaussian; this is (almost) always true for ML in the large sample limit. Lecture 3 page 13
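
A sketch of this Monte Carlo method for the exponential example (the value assigned to tau_fit below is a placeholder standing in for the ML estimate from the real data):

```python
import numpy as np

rng = np.random.default_rng(11)
tau_fit, n = 1.0, 50   # tau_fit: placeholder for the ML estimate from the real data

# Simulate the whole experiment many times, each with n events and tau = tau_fit,
# and take the sample standard deviation of the estimates as the statistical error.
tau_hats = rng.exponential(scale=tau_fit, size=(10000, n)).mean(axis=1)
print("statistical error of tau_hat ~", tau_hats.std(ddof=1))  # roughly tau_fit/sqrt(n)
# A histogram of tau_hats is approximately Gaussian, as expected for ML at large n.
```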

Variance of estimators from the information inequality. The information inequality (RCF) sets a minimum variance bound (MVB) for any estimator (not only ML): V[θ̂] ≥ (1 + ∂b/∂θ)² / E[ −∂²ln L/∂θ² ]. Often the bias b is small, and equality either holds exactly or is a good approximation (e.g. in the large data sample limit). Then V[θ̂] ≈ 1 / E[ −∂²ln L/∂θ² ], and we estimate this using the 2nd derivative of ln L at its maximum: σ̂²_θ̂ = ( −∂²ln L/∂θ² |_θ̂ )⁻¹. Lecture 3 page 14

Variance of estimators: graphical method. Expand ln L(θ) about its maximum: ln L(θ) = ln L(θ̂) + (∂ln L/∂θ)|_θ̂ (θ − θ̂) + (1/2)(∂²ln L/∂θ²)|_θ̂ (θ − θ̂)² + ... The first term is ln Lmax, the second term is zero, and for the third term we use the information inequality (assuming equality). This gives ln L(θ̂ ± σ̂_θ̂) ≈ ln Lmax − 1/2; i.e., to get σ̂_θ̂, change θ away from θ̂ until ln L decreases by 1/2. Lecture 3 page 15
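
Both variance estimates can be reproduced for the exponential example in a few lines (an illustrative sketch; a finite-difference second derivative stands in for the analytic one):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(scale=1.0, size=50)
tau_hat = x.mean()

def log_L(tau):
    return np.sum(-np.log(tau) - x / tau)

# Information-inequality estimate: sigma^2 ~ -1 / (d^2 lnL/dtau^2) at the maximum
h = 1e-4
d2 = (log_L(tau_hat + h) - 2 * log_L(tau_hat) + log_L(tau_hat - h)) / h**2
print("sigma from 2nd derivative:", np.sqrt(-1.0 / d2))   # ~ tau_hat / sqrt(n)

# Graphical method: scan tau and find where ln L has dropped by 1/2 from its maximum
taus = np.linspace(0.5 * tau_hat, 2.0 * tau_hat, 20001)
lnL = np.array([log_L(t) for t in taus])
inside = taus[lnL >= log_L(tau_hat) - 0.5]
print("interval from delta(lnL) = -1/2:", inside.min(), "to", inside.max())
```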

Example of variance by graphical method. ML example with the exponential pdf (figure: ln L(τ) versus τ with the Δln L = −1/2 interval indicated). The ln L curve is not quite parabolic because of the finite sample size (n = 50). Lecture 3 page 16

Information inequality for n parameters. Suppose we have estimated n parameters θ = (θ1, ..., θn). The (inverse) minimum variance bound is given by the Fisher information matrix: Iij(θ) = −E[ ∂²ln L / ∂θi ∂θj ]. The information inequality then states that V − I⁻¹ is a positive semi-definite matrix; therefore, for the diagonal elements, V[θ̂i] ≥ (I⁻¹)ii. Often I⁻¹ is used as an approximation for the covariance matrix, estimated using e.g. the matrix of 2nd derivatives of ln L at its maximum. Lecture 3 page 17

Example of ML with 2 parameters. Consider a scattering angle distribution with x = cos θ: f(x; α, β) ∝ 1 + αx + βx². If xmin < x < xmax, we always need to normalize so that ∫ from xmin to xmax of f(x; α, β) dx = 1. Example: α = 0.5, β = 0.5, xmin = −0.95, xmax = 0.95; generate n = 2000 events with Monte Carlo. Lecture 3 page 18

Example of ML with 2 parameters: fit result. Finding the maximum of ln L(α, β) numerically (MINUIT) gives the estimates α̂ and β̂ with their errors (values shown on the slide). The (co)variances are obtained from the matrix of second derivatives of ln L at its maximum (MINUIT routine HESSE). N.B. no binning of the data is used for the fit, but one can compare to a histogram for goodness-of-fit (e.g. visually or with a χ² test). Lecture 3 page 19
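
The same kind of fit can be sketched with scipy in place of MINUIT, assuming the angular pdf quoted on the previous slide; the BFGS inverse Hessian below is only a rough stand-in for HESSE's covariance estimate:

```python
import numpy as np
from scipy.optimize import minimize

ALPHA, BETA, XMIN, XMAX, N = 0.5, 0.5, -0.95, 0.95, 2000
rng = np.random.default_rng(42)

def pdf(x, a, b):
    # f(x; a, b) = 1 + a*x + b*x^2, normalized on [XMIN, XMAX]
    norm = (XMAX - XMIN) + 0.5 * a * (XMAX**2 - XMIN**2) + (b / 3.0) * (XMAX**3 - XMIN**3)
    return (1.0 + a * x + b * x**2) / norm

# Generate N events with accept-reject
fmax = pdf(np.linspace(XMIN, XMAX, 1001), ALPHA, BETA).max()
x = np.empty(0)
while x.size < N:
    xt = rng.uniform(XMIN, XMAX, 4 * N)
    u = rng.uniform(0.0, fmax, 4 * N)
    x = np.concatenate([x, xt[u < pdf(xt, ALPHA, BETA)]])
x = x[:N]

def neg_log_L(p):
    dens = pdf(x, p[0], p[1])
    if np.any(dens <= 0):
        return np.inf                 # keep the pdf positive over the data range
    return -np.sum(np.log(dens))

res = minimize(neg_log_L, x0=[0.0, 0.0])   # BFGS by default
cov = res.hess_inv                          # approximate covariance of (alpha_hat, beta_hat)
print("alpha_hat, beta_hat:", res.x)
print("std devs:", np.sqrt(np.diag(cov)),
      " corr:", cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1]))
```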

Two-parameter fit: MC study. Repeat the ML fit with 500 experiments, all with n = 2000 events: the estimates average to approximately the true values; the (co)variances are close to the previous estimates; the marginal pdfs of the estimators are approximately Gaussian. Lecture 3 page 20

The ln Lmax − 1/2 contour. For large n, ln L takes on a quadratic form near its maximum, and the contour ln L(α, β) = ln Lmax − 1/2 is an ellipse in the (α, β) plane. Lecture 3 page 21

(Co)variances from the ln L contour. (Figure: the ln Lmax − 1/2 contour in the (α, β) plane for the first MC data set.) The tangent lines to the contour give the standard deviations σ_α̂ and σ_β̂; the angle of the ellipse is related to the correlation coefficient. Correlations between estimators result in an increase in their standard deviations (statistical errors). Lecture 3 page 22

Extended ML. Sometimes we regard n not as fixed, but as a Poisson r.v. with mean ν. The result of the experiment is then defined as: n, x1, ..., xn. The (extended) likelihood function is L(ν, θ) = (ν^n e^(−ν) / n!) Πi f(xi; θ). Suppose the theory gives ν = ν(θ); then the log-likelihood is ln L(θ) = −ν(θ) + Σi ln[ ν(θ) f(xi; θ) ] + C, where C represents terms not depending on θ. Lecture 3 page 23

Extended ML (2). Example: the expected number of events is ν = σ(θ) × (integrated luminosity), where the total cross section σ(θ) is predicted as a function of the parameters of a theory, as is the distribution of a variable x. Extended ML uses more information, hence smaller errors for θ̂. This is important e.g. for anomalous couplings in e+e− → W+W−. If ν does not depend on θ but remains a free parameter, extended ML gives ν̂ = n. Lecture 3 page 24

Extended ML example. Consider two types of events (e.g., signal and background), each of which predicts a given pdf for the variable x: fs(x) and fb(x). We observe a mixture of the two event types, with signal fraction θ, expected total number ν, and observed total number n. Let μs = θν and μb = (1 − θ)ν; the goal is to estimate μs and μb. Lecture 3 page 25

Extended ML example (2). Monte Carlo example with a combination of exponential and Gaussian pdfs (figure: fitted distribution). Maximize the log-likelihood in terms of μs and μb: ln L(μs, μb) = −(μs + μb) + Σi ln[ μs fs(xi) + μb fb(xi) ]. Here the errors reflect the total Poisson fluctuation as well as that in the proportion of signal and background. Lecture 3 page 26
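
A sketch of such a fit follows; the signal and background shapes and the parameter values are made up for illustration and are not taken from the slide:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, expon

rng = np.random.default_rng(8)

def f_s(x):                     # signal pdf (hypothetical): Gaussian peak at x = 5
    return norm.pdf(x, loc=5.0, scale=0.5)

def f_b(x):                     # background pdf (hypothetical): exponential, mean 2
    return expon.pdf(x, scale=2.0)

# One toy experiment: Poisson-fluctuating signal and background counts
mu_s_true, mu_b_true = 25, 200
ns, nb = rng.poisson(mu_s_true), rng.poisson(mu_b_true)
x = np.concatenate([rng.normal(5.0, 0.5, ns), rng.exponential(2.0, nb)])

def neg_log_L(p):
    mu_s, mu_b = p
    dens = mu_s * f_s(x) + mu_b * f_b(x)
    if mu_s + mu_b <= 0 or np.any(dens <= 0):
        return np.inf           # the total pdf must stay positive everywhere
    # extended log-likelihood: -(mu_s + mu_b) + sum_i ln[mu_s*fs + mu_b*fb]
    return (mu_s + mu_b) - np.sum(np.log(dens))

res = minimize(neg_log_L, x0=[10.0, len(x) - 10.0], method="Nelder-Mead")
print("mu_s_hat, mu_b_hat =", res.x)
# A downward fluctuation in the peak region can push mu_s_hat negative (see next slide).
```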

Extended ML example: an unphysical estimate. A downward fluctuation of the data in the peak region can lead to even fewer events than would be expected from background alone. The estimate for μs here is pushed negative (unphysical). We can let this happen as long as the (total) pdf stays positive everywhere. Lecture 3 page 27

Unphysical estimators (2). Here the unphysical estimate is unbiased and should nevertheless be reported, since the average of a large number of unbiased estimates converges to the true value (cf. PDG). Repeat the entire MC experiment many times, allowing unphysical estimates (figure: distribution of μ̂s from repeated experiments). Lecture 3 page 28

ML with binned data. Often we put the data into a histogram with N bins and contents n = (n1, ..., nN), ntot = Σi ni. The hypothesis is E[ni] = νi(θ) = ntot ∫ over bin i of f(x; θ) dx. If we model the data as multinomial (ntot constant), then the log-likelihood function is ln L(θ) = Σi ni ln νi(θ) + C, where C collects terms not depending on θ. Lecture 3 page 29

ML example with binned data. Previous example with the exponential pdf, now with the data put into a histogram (figure: histogram with fitted curve). In the limit of zero bin width this reproduces the usual unbinned ML. If the ni are treated as independent Poisson variables, we get the extended log-likelihood: ln L(νtot, θ) = −νtot + Σi ni ln νi(νtot, θ) + C. Lecture 3 page 30
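
A sketch of the binned Poisson version for the exponential example (the bin edges and sample size here are arbitrary choices, not those of the slide):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import expon

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=500)

# Histogram the data
edges = np.linspace(0.0, 5.0, 21)
n_i, _ = np.histogram(x, bins=edges)

def neg_log_L(p):
    nu_tot, tau = p
    if nu_tot <= 0 or tau <= 0:
        return np.inf
    # expected bin contents: nu_i = nu_tot * integral of f(x; tau) over bin i
    nu_i = nu_tot * np.diff(expon.cdf(edges, scale=tau))
    # Poisson (extended) log-likelihood, dropping terms without parameters
    return nu_i.sum() - np.sum(n_i * np.log(nu_i))

res = minimize(neg_log_L, x0=[len(x), 1.5], method="Nelder-Mead")
print("nu_tot_hat, tau_hat =", res.x)
```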

Relationship between ML and Bayesian estimators. In Bayesian statistics, both θ and x are random variables. Recall the Bayesian method: use subjective probability for hypotheses (θ); before the experiment, knowledge is summarized by the prior pdf π(θ); use Bayes' theorem to update the prior in light of the data: p(θ|x) = L(x|θ) π(θ) / ∫ L(x|θ′) π(θ′) dθ′, the posterior pdf (the conditional pdf for θ given x). Lecture 3 page 31

ML and Bayesian estimators (2). Purist Bayesian: p(θ|x) contains all knowledge about θ. Pragmatist Bayesian: p(θ|x) could be a complicated function; summarize it using an estimator, e.g. take the mode of p(θ|x) (one could also use e.g. the expectation value). What do we use for π(θ)? There is no golden rule (it is subjective!); often prior ignorance is represented by π(θ) = constant, in which case the posterior mode coincides with the ML estimate. But we could have used a different parameter, e.g. λ = 1/τ, and if the prior π(τ) is constant, then π(λ) is not! Complete prior ignorance is not well defined. Lecture 3 page 32
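
The parametrization dependence can be seen numerically. This sketch (illustrative, reusing the earlier exponential example) evaluates the posterior on a grid for a flat prior in τ and for a flat prior in λ = 1/τ, which corresponds to a prior ∝ 1/τ² in τ:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.exponential(scale=1.0, size=50)

taus = np.linspace(0.3, 3.0, 10000)
lnL = np.array([np.sum(-np.log(t) - x / t) for t in taus])
L = np.exp(lnL - lnL.max())

post_flat_tau = L              # flat prior in tau: posterior ~ likelihood
post_flat_lam = L / taus**2    # flat prior in lambda = 1/tau: prior ~ 1/tau^2 in tau

print("ML estimate               :", x.mean())
print("mode, flat prior in tau   :", taus[post_flat_tau.argmax()])  # coincides with MLE
print("mode, flat prior in lambda:", taus[post_flat_lam.argmax()])  # shifted
```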

The method of least squares. Suppose we measure N values y1, ..., yN, assumed to be independent Gaussian r.v.s with E[yi] = λ(xi; θ). Assume known values of the control variable x1, ..., xN and known variances V[yi] = σi². We want to estimate θ, i.e., fit the curve λ(x; θ) to the data points. The likelihood function is L(θ) = Πi (1/√(2πσi²)) exp[ −(yi − λ(xi; θ))² / (2σi²) ]. Lecture 3 page 33

The method of least squares (2). The log-likelihood function is therefore ln L(θ) = −(1/2) Σi (yi − λ(xi; θ))² / σi² + terms not depending on θ. So maximizing the likelihood is equivalent to minimizing χ²(θ) = Σi (yi − λ(xi; θ))² / σi². The minimum defines the least squares (LS) estimator θ̂. Very often the measurement errors are approximately Gaussian, and so ML and LS are essentially the same. Often χ² is minimized numerically (e.g. with the program MINUIT). Lecture 3 page 34

LS with correlated measurements. If the yi follow a multivariate Gaussian with covariance matrix V, then maximizing the likelihood is equivalent to minimizing χ²(θ) = (y − λ(θ))ᵀ V⁻¹ (y − λ(θ)). Lecture 3 page 35

Example of least squares fit. Fit a polynomial of order p: λ(x; θ0, ..., θp) = θ0 + θ1 x + ... + θp x^p (figure: data points with fitted polynomials of different orders). Lecture 3 page 36
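
For a model that is linear in the parameters, the χ² minimum has a closed form; here is a sketch for a straight-line fit with made-up data points and errors (not the values from the slide):

```python
import numpy as np

# Hypothetical data points (x_i, y_i) with known errors sigma_i
x = np.array([0.5, 1.5, 2.5, 3.5, 4.5])
y = np.array([1.2, 1.8, 2.9, 3.6, 4.6])
sigma = np.array([0.3, 0.3, 0.3, 0.3, 0.3])

# Fit lambda(x; theta) = theta0 + theta1*x by minimizing
# chi^2 = sum_i (y_i - lambda(x_i; theta))^2 / sigma_i^2  (linear model -> closed form)
A = np.vstack([np.ones_like(x), x]).T        # design matrix
W = np.diag(1.0 / sigma**2)                  # inverse covariance (independent y_i)
cov = np.linalg.inv(A.T @ W @ A)             # covariance of the LS estimators
theta_hat = cov @ A.T @ W @ y

chi2_min = np.sum(((y - A @ theta_hat) / sigma) ** 2)
print("theta_hat =", theta_hat, " errors =", np.sqrt(np.diag(cov)))
print("chi2_min =", chi2_min, " n_dof =", len(x) - len(theta_hat))
```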

Variance of LS estimators. In most cases of interest we obtain the variance in a manner similar to ML. E.g. for Gaussian-distributed data we have χ²(θ) = −2 ln L(θ) + constant, and so σ̂²_θ̂ = 2 ( ∂²χ²/∂θ² |_θ̂ )⁻¹; or, for the graphical method, we take the values of θ where χ²(θ) = χ²min + 1.0. Lecture 3 page 37

Two-parameter LS fit Lecture 3 page 38

Goodness-of-fit with least squares. The value of χ² at its minimum is a measure of the level of agreement between the data and the fitted curve: χ²min = Σi (yi − λ(xi; θ̂))² / σi². It can therefore be employed as a goodness-of-fit statistic to test the hypothesized functional form λ(x; θ). We can show that if the hypothesis is correct, then the statistic t = χ²min follows the chi-square pdf f(t; nd), where the number of degrees of freedom is nd = (number of data points) − (number of fitted parameters). Lecture 3 page 39

Goodness-of-fit with least squares (2). The chi-square pdf has an expectation value equal to the number of degrees of freedom, so if χ²min ≈ nd the fit is good. More generally, find the p-value: p = ∫ from χ²min to ∞ of f(t; nd) dt. This is the probability of obtaining a χ²min as high as the one we got, or higher, if the hypothesis is correct. E.g. for the previous example, the p-values for the 1st order polynomial (straight line) and the 0th order polynomial (horizontal line) are shown on the slide. Lecture 3 page 40
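
Computing the p-value itself takes one line with scipy (the numbers below are placeholders, not the slide's values):

```python
from scipy.stats import chi2

# p-value: probability of a chi^2 as large as observed, or larger, if the hypothesis is true
chi2_min, n_dof = 4.0, 3          # placeholder values for illustration
p_value = chi2.sf(chi2_min, n_dof)
print("p =", p_value)
```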

Wrapping up lecture 3. There is no golden rule for parameter estimation; construct estimators so as to have desirable properties (small variance, small or zero bias, ...). The most important methods are maximum likelihood and least squares. There are several ways to obtain variances (statistical errors) from a fit: analytically, by Monte Carlo, or from the information inequality / graphical method. Finding the estimator often involves numerical minimization. Lecture 3 page 41

Extra slides for lecture 3: goodness-of-fit vs. statistical errors; fitting histograms with LS; combining measurements with LS. Lecture 3 page 42

Goodness-of-fit vs. statistical errors Lecture 3 page 43

Goodness-of-fit vs. stat. errors (2) Lecture 3 page 44

LS with binned data Lecture 3 page 45

LS with binned data (2) Lecture 3 page 46

LS with binned data normalization Lecture 3 page 47

LS normalization example Lecture 3 page 48

Using LS to combine measurements Lecture 3 page 49

Combining correlated measurements with LS Lecture 3 page 50

Example: averaging two correlated measurements Lecture 3 page 51

Negative weights in LS average Lecture 3 page 52