Lecture 2: Poisson Statistics, Probability Density Functions, Expectation and Variance, Estimators

Binomial Distribution: the probability of $k$ successes in $n$ attempts is

$$P(k \text{ successes in } n \text{ attempts}) = \frac{n!}{k!\,(n-k)!}\, p_s^k\, (1-p_s)^{n-k},$$

where $p_s$ is the probability of each success.
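
As a quick sanity check of the formula, here is a minimal Python sketch (my addition, not part of the original slides) that evaluates the binomial PMF directly and verifies its normalization and mean:

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(k successes in n attempts) = n!/(k!(n-k)!) p^k (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.3
probs = [binomial_pmf(k, n, p) for k in range(n + 1)]
print(sum(probs))                                 # ~1.0: the PMF normalizes
print(sum(k * pk for k, pk in enumerate(probs)))  # ~n*p = 3.0: the mean
```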

Poisson Distribution. Note that, since the expectation value for the number of successes in one Bernoulli trial is $p$, the expectation after summing over $n$ identical Bernoulli trials is $\mu = np$. Now consider the case where the expected number of successes depends on the size of a continuous variable (e.g. a length or time interval), which can be arbitrarily small. The number of successes expected over a continuous interval of finite size can thus be viewed as resulting from the sum of an infinite number of Bernoulli trials carried out for arbitrarily small intervals, such that $\mu = \lim_{n\to\infty} np$.

So, set $p = \mu/n$ and evaluate

$$P(k) = \lim_{n\to\infty} \frac{n!}{k!\,(n-k)!}\left(\frac{\mu}{n}\right)^{k}\left(1-\frac{\mu}{n}\right)^{n-k} = \frac{\mu^k}{k!}\;\lim_{n\to\infty} \frac{n!}{(n-k)!\,n^k}\;\lim_{n\to\infty}\left(1-\frac{\mu}{n}\right)^{n}\;\lim_{n\to\infty}\left(1-\frac{\mu}{n}\right)^{-k}$$

Take the three limits in turn:

$$\lim_{n\to\infty} \frac{n!}{(n-k)!\,n^k} = \lim_{n\to\infty} \frac{n(n-1)(n-2)\cdots(n-k+1)}{n^k} = \lim_{n\to\infty} \left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\cdots\left(1-\frac{k-1}{n}\right) = 1$$

$$\lim_{n\to\infty}\left(1-\frac{\mu}{n}\right)^{n} = \lim_{n\to\infty}\exp\left[n\log\left(1-\frac{\mu}{n}\right)\right] = \lim_{n\to\infty}\exp\left[n\left(-\frac{\mu}{n}-\frac{\mu^2}{2n^2}-\cdots\right)\right] = e^{-\mu}$$

$$\lim_{n\to\infty}\left(1-\frac{\mu}{n}\right)^{-k} = 1$$

Putting these together:

$$P(k) = \frac{\mu^k e^{-\mu}}{k!} \qquad \text{Poisson Distribution}$$

Counting statistics, decay processes: the continuous variable is time. Interaction lengths: the continuous variable is distance.
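
To see the limit numerically, here is a small sketch (my addition, not the lecture's) comparing the binomial PMF at fixed $\mu = np$ with the Poisson PMF as $n$ grows:

```python
from math import comb, exp, factorial

def binomial_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, mu):
    return mu**k * exp(-mu) / factorial(k)

mu, k = 3.0, 2
for n in (10, 100, 1000, 10_000):
    print(n, binomial_pmf(k, n, mu / n))  # converges to the Poisson value
print("Poisson:", poisson_pmf(k, mu))     # mu^k e^(-mu) / k!
```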

Radioactive Decay: $\tau$ = average time for a decay to occur (mean lifetime), so $\mu$ = average number of decays in time $t$ = $t/\tau$. The probability for no decays ($k = 0$) within time $t$ is

$$P_0 = \frac{\mu^0 e^{-\mu}}{0!} = e^{-t/\tau},$$

so $P_{\mathrm{decay}} = 1 - e^{-t/\tau}$ (integral probability). Differential probability: $P(t) = \frac{1}{\tau}\,e^{-t/\tau}$. Note that this is now a probability for a continuous variable!
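
A short simulation sketch (my own, with assumed values of $\tau$ and $t$) drawing exponential decay times and checking that the fraction with no decay within time $t$ matches $e^{-t/\tau}$:

```python
import random
from math import exp

random.seed(0)
tau = 2.0        # assumed mean lifetime (arbitrary units)
t = 1.5          # assumed observation window
N = 100_000

# random.expovariate takes the rate lambda = 1/tau
decays = [random.expovariate(1.0 / tau) for _ in range(N)]
print(sum(d > t for d in decays) / N)  # simulated P_0 (no decay within t)
print(exp(-t / tau))                   # predicted e^(-t/tau) ~ 0.472
```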

Poisson distribution: the probability of success depends on a continuous variable, but the observation is a discrete number of successes. But observations are not always of a discrete variable. For continuous random variables (i.e. time, length, etc.), the probability of obtaining a particular exact value is generally vanishingly small (no phase space!). But the relative probability of getting a value in this vicinity versus that vicinity is meaningful. That's when you talk about probability densities. Note that the terms "probability distribution" and "probability density function" are sometimes informally used interchangeably.

Probability Density Function (continuous random variables): $f(q)$ gives the differential probability as a function of the parameter(s) $q$ (the relative frequency), such that $dP = f(q)\,dq$, where $\int f(q)\,dq = 1$.

Expectation Value: $\langle g \rangle = \int g(q)\,f(q)\,dq$, or for the discrete case $\langle g \rangle = \sum_i g(q_i)\,P(q_i)$: the average value of the function, weighted by the frequency of the dependent variable(s).
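
For instance, a small numeric sketch (my addition, reusing the assumed exponential density from the decay example) approximates $\langle t \rangle = \int t\,f(t)\,dt$ by a discrete weighted sum:

```python
from math import exp

tau = 2.0                                         # assumed lifetime
dt = 0.001
ts = [i * dt for i in range(int(50 * tau / dt))]  # grid well into the tail

f = [exp(-t / tau) / tau for t in ts]             # f(t) = e^(-t/tau) / tau
print(sum(fi * dt for fi in f))                   # ~1: normalization
print(sum(t * fi * dt for t, fi in zip(ts, f)))   # <t> ~ tau = 2.0
```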

Is the expectation the most likely value of the function? NO! It doesn't even have to be an allowed value of the function! [Plot: distribution of "Angle Between Shoe Toes and Insteps", y-axis 0 to 1.0, x-axis from −180° to 180°.] The peak is the most likely (most frequent) value.

Is the expectation the value seen half the time? NO! That's the Median (the 50% point). [Plot: histogram of "Solar System Planet Masses Relative to Earth", in bins of unit size, x-axis 0 to 325.] These are only the same for symmetric distributions.

The expectation value tells you how to gamble! How much will you win, on average, if you play this game very many times?
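
As a concrete illustration (a made-up game, not one from the lecture): suppose a fair die pays $10 on a six and costs $2 on anything else; the expectation value of the winnings per play is what matters over many plays:

```python
# Hypothetical game (my example, not the lecture's): a fair die pays
# $10 on a six and costs $2 on anything else.
expected_winnings = (1 / 6) * 10 + (5 / 6) * (-2)
print(expected_winnings)  # ~0.0: over many plays this game breaks even
```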

Variance: the Average Squared Deviation from the Mean. Note, for Poisson:

$$\langle n^2 \rangle = \sum_{n=0}^{\infty} n^2\,\frac{\mu^n e^{-\mu}}{n!} = e^{-\mu}\sum_{n=1}^{\infty} \frac{n\,\mu^n}{(n-1)!} = e^{-\mu}\sum_{n=1}^{\infty} \frac{\left[(n-1)+1\right]\mu^n}{(n-1)!} = e^{-\mu}\left[\sum_{n=2}^{\infty} \frac{\mu^n}{(n-2)!} + \sum_{n=1}^{\infty} \frac{\mu^n}{(n-1)!}\right] = e^{-\mu}\left[\mu^2 e^{\mu} + \mu\,e^{\mu}\right] = \mu^2 + \mu$$

variance $= \sigma^2 = \langle x^2 \rangle - \mu^2$. The units of $\sigma$ are the same as the units of $x$ (or $\mu$). But, for Poisson, $\sigma^2 = \mu$. How do the units work? Here, $\mu$ refers to the expected number of successes, which is unit-less (a special case).
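
A quick Monte Carlo sketch (my addition, with an assumed value of $\mu$) checking that for Poisson samples the sample variance tracks the sample mean:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 7.3                     # assumed true mean
x = rng.poisson(mu, 1_000_000)
print(x.mean())              # ~7.3
print(x.var())               # ~7.3 as well: variance = mean for Poisson
```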

$\sigma = \sqrt{\langle (x-\mu)^2 \rangle} = \sqrt{\langle x^2 \rangle - \mu^2}$ = RMS (Root Mean Squared) deviation: universal. "Standard deviation" when interpreted in the context of a Normal (Gaussian) distribution.

Some Useful Consequences: The standard deviation on a measured number of counts due to statistical fluctuations is the square root of the expected mean number of counts (the square root of the measured number is often not a bad approximation). The expected sensitivity for detecting a signal in a counting experiment, in terms of the number of standard deviations above background fluctuations, is $S/\sqrt{B}$. In a counting experiment, the number of signal and background events detected is proportional to the counting time $T$, so $S \propto T$ while $\sqrt{B} \propto \sqrt{T}$; thus the signal sensitivity goes like $\sqrt{T}$.
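
A back-of-the-envelope sketch (the rates are assumed, not from the slides) of how $S/\sqrt{B}$ grows with counting time:

```python
from math import sqrt

s_rate, b_rate = 2.0, 50.0   # assumed signal and background rates per unit time
for T in (1, 4, 16, 64):
    S, B = s_rate * T, b_rate * T
    print(T, S / sqrt(B))    # doubles each time T quadruples: goes like sqrt(T)
```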

Estimators. Often we don't know the true mean and variance of a distribution and want to estimate them from the data:

$$\mu \simeq \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$$

We want this to be unbiased, such that the expectation value is equal to the true value:

$$\langle \bar{x} \rangle = \left\langle \frac{1}{n}\sum_{i=1}^{n} x_i \right\rangle = \frac{1}{n}\sum_{i=1}^{n} \langle x_i \rangle = \frac{1}{n}(n\mu) = \mu$$

Fair enough!

But what about $\sigma^2 \simeq \frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^2$?!

2 ' X (x i x) 2 But what about?! i= = X * X + X 2 h 2i = (xi x) x 2 i xj * + = X x 2 2 i x X i x j + X X 2 x j x k i * j j k + = X 2 2 X x2 i x i x j + X 2 x 2 j + X X 2 x j x k i 2 j6=i j j k6=j 3 = X 4 2 x 2 2 X i hx i x j i + X x 2 X X 2 j + 2 hx j x k i5 i j6=i j j k6=j = X apple 2 ( 2 + 2 2 ) ( )2 + 2 ( 2 + 2 )+ ( )2 2 i = 2 Biased!! So istead, take 2 ' X (x i x) 2 i= Quick Argumet: For every data poits, there are - idepedet measures of the variace (thus correctig the offedig factor)!

Variance in the Estimated Mean. Note that:

$$\mathrm{var}(\alpha x) = \langle (\alpha x)^2 \rangle - \langle \alpha x \rangle^2 = \alpha^2 \left( \langle x^2 \rangle - \langle x \rangle^2 \right) = \alpha^2\,\mathrm{var}(x)$$

So, consider:

$$\sigma_m^2 = \mathrm{var}\!\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right) = \frac{1}{n^2}\,\mathrm{var}\!\left(\sum_{i=1}^{n} x_i\right) = \frac{1}{n^2}\sum_{i=1}^{n} \mathrm{var}(x_i)$$

where the last step holds for independent variables (as will be shown in Lecture 4). Thus

$$\sigma_m^2 = \frac{1}{n^2}\left(n\sigma^2\right) = \frac{\sigma^2}{n}, \qquad \text{or} \qquad \sigma_m = \frac{\sigma}{\sqrt{n}}$$
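
Finally, a sketch (my addition, assumed $\sigma$ and $n$) verifying $\sigma_m = \sigma/\sqrt{n}$ by simulating many sample means:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, n, trials = 3.0, 25, 100_000
means = rng.normal(0.0, sigma, size=(trials, n)).mean(axis=1)
print(means.std())        # ~ sigma / sqrt(n) = 3 / 5 = 0.6
```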