1 Estimation September 24, 2018 STAT 151 Class 6 Slide 1
Pandemic data
Treatment outcome, X, from n = 100 patients in a pandemic: 1 = recovered and 0 = not recovered.
A probability model for treatment outcome:

Outcome               Probability
1 (recovers)          p
0 (does not recover)  1 − p

How can we estimate p and 1 − p?
STAT 151 Class 6 Slide 2
Possible solutions
Some assumptions:
- P(success) = p, 0 < p < 1, is the same for every trial, so we can combine all 100 patients to evaluate the drug efficacy
- The outcomes of the trials are independent of one another, to simplify calculations
A few possible models (figure: bar plots of P(X) against X for p = 0.5, p = 0.6, and p = 0.3)
STAT 151 Class 6 Slide 3
Maximum likelihood estimation (MLE)
Key ideas:
(a) The best model for the observed data is the best model for the population
(b) The best model is the most likely explanation of the observed data
(c) (a) and (b) lead to a method called maximum likelihood estimation
Some notation and terminology:
- We draw an independent and identically distributed (iid) sample X_1, X_2, ..., X_n to estimate p
- Each observation X_i is an observation from a probability model, Bernoulli(p)
- Write the PDF of X as f(X | θ) for both discrete and continuous variables, where θ is a generic symbol for the parameter(s)
- For any quantity Q, we use Q̂ to denote its estimate (estimator)
STAT 151 Class 6 Slide 4
MLE (2)
Our data consist of (X_1, ..., X_100) = (1, 1, 1, 0, 0, ..., 0, 1), with 60 1s and 40 0s.
The probability (likelihood) that X_1 = 1 is p. The likelihood that X_2 = 1 is p. The likelihood that X_3 = 1 is p. The likelihood that X_4 = 0 is 1 − p, etc.
The likelihood that (X_1, ..., X_100) = (1, 1, 1, 0, 0, ..., 0, 1) is
L(p | X_1, ..., X_100) = L(p | X_1) × L(p | X_2) × ... × L(p | X_99) × L(p | X_100)
= p × p × p × (1 − p) × (1 − p) × ... × (1 − p) × p
= p^60 (1 − p)^40
L(p | X_1, ..., X_100) ≡ L(p) is called a likelihood function, and it is a function of p. L(p) can be considered the likelihood of the observed data for a particular value of p.
The maximum likelihood estimate (MLE) of p is the value of p that gives the highest likelihood for the observed data.
STAT 151 Class 6 Slide 5
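As a numeric check (not part of the original slides), the likelihood on this slide can be maximised by brute force: evaluate L(p) = p^60 (1 − p)^40 on a grid of candidate p values and keep the maximiser. The grid resolution of 1000 points is an arbitrary illustrative choice.

```python
# Likelihood of the slide's data: 60 recoveries and 40 non-recoveries.
def likelihood(p):
    return p ** 60 * (1 - p) ** 40

# Evaluate L(p) on a fine grid over (0, 1) and keep the maximiser.
grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(grid, key=likelihood)

print(p_hat)  # 0.6, the value that makes the observed data most likely
```

The grid search agrees with the calculus on the later slides: the data are most likely when p equals the sample proportion of recoveries.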
Finding MLE — method 1
(figure: plot of the likelihood L(p) = p^60 (1 − p)^40 against p; the curve peaks at p = 0.6)
STAT 151 Class 6 Slide 6
MLE [Ronald Aylmer (R.A.) Fisher]
For iid X_1, ..., X_n with PDF f(x | θ), the likelihood of θ is:
L(θ) = L(θ | X_1, ..., X_n) = L(θ | X_1) × ... × L(θ | X_n) = f(X_1 | θ) × ... × f(X_n | θ) = ∏_{i=1}^{n} f(X_i | θ)
L(θ) is the likelihood of observing X_1, ..., X_n for a particular θ.
The MLE of θ is the value θ̂ that gives the highest likelihood for the data, among all possible values of θ.
The MLE is usually obtained by maximizing log L(θ) ≡ l(θ). Since the logarithm is a monotone function of its argument, maximizing the likelihood or the log-likelihood yields the same θ̂.
When possible, it is best to draw a figure of L(θ) or l(θ).
STAT 151 Class 6 Slide 7
Likelihood vs. log-likelihood — finding MLE method 2
(figure: left panel plots the likelihood p^60 (1 − p)^40 against p; right panel plots the log-likelihood 60 log(p) + 40 log(1 − p) against p; both curves peak at the same value of p)
STAT 151 Class 6 Slide 8
Finding MLE — method 3
X_1, ..., X_n iid Bernoulli(p), so f(X_i | p) = p^{X_i} (1 − p)^{1 − X_i}
L(p) = ∏_{i=1}^{n} p^{X_i} (1 − p)^{1 − X_i} = p^60 (1 − p)^40 for our n = 100 sample
l(p) = log L(p) = Σ_{i=1}^{n} [X_i log p + (1 − X_i) log(1 − p)]
The MLE p̂ is the value that maximises the log-likelihood: set dl(p)/dp = 0 at p = p̂:
Σ_{i=1}^{n} [X_i/p̂ − (1 − X_i)/(1 − p̂)] = 0
(1 − p̂) Σ X_i − p̂ Σ (1 − X_i) = 0
Σ X_i − p̂ n = 0
p̂ = X̄ = 60/100 = 0.6
STAT 151 Class 6 Slide 9
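The closed-form answer on this slide can be verified directly. Below is a small sketch (not from the slides; the sample is built as 60 ones followed by 40 zeros, which is an ordering assumption only — the counts come from the slide): compute p̂ = X̄ and confirm it beats nearby candidate values of p on the log-likelihood.

```python
import math

# The slide's sample: 60 ones and 40 zeros (order is immaterial here).
x = [1] * 60 + [0] * 40
n = len(x)

# Closed-form MLE from setting dl(p)/dp = 0: p_hat = X-bar.
p_hat = sum(x) / n

# Log-likelihood l(p) = sum_i [X_i log p + (1 - X_i) log(1 - p)].
def loglik(p):
    return sum(xi * math.log(p) + (1 - xi) * math.log(1 - p) for xi in x)

# p_hat beats nearby values of p, as the derivation promises.
assert all(loglik(p_hat) >= loglik(q) for q in (0.5, 0.59, 0.61, 0.7))
print(p_hat)  # 0.6
```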
Financial crises data
(timeline of crises: '82 Mexican, '84 S&L, Black Mon., Comm. RE, Asian/LTCM, '00 Dotcom, '07 Subprime, '12? Euro, ...)
X = # crises per unit time, e.g., a decade; possible values for X: 0, 1, 2, ...
Assume crises occur (i) independently and (ii) at a constant rate.
A (probability) model for the # of random events over time is Poisson(λ), where λ > 0 is the rate of crises per unit time.
How can we use the data X_1, X_2, X_3, X_4 = 3, 3, 2, 1 to learn about λ? Which Poisson model is best for the data? What is the best λ?
STAT 151 Class 6 Slide 10
Financial crises data (2)
Original data (X_1, X_2, X_3, X_4) = (3, 3, 2, 1); let's ignore X_4 for now.
# crises (X) in n = 3 decades: (X_1, X_2, X_3) = (3, 3, 2)
The likelihood that the first observation is 3 is (λ^3/3!) e^{−λ}
The likelihood that the second observation is 3 is (λ^3/3!) e^{−λ}
The likelihood that the third observation is 2 is (λ^2/2!) e^{−λ}
The likelihood of (X_1, X_2, X_3) = (3, 3, 2) for a particular λ is
L(λ) = (λ^3/3!) e^{−λ} × (λ^3/3!) e^{−λ} × (λ^2/2!) e^{−λ} = λ^8/(3! 3! 2!) e^{−3λ}
Which value of λ makes the observed data most probable?
STAT 151 Class 6 Slide 11
Financial crises data (3)
(figure: plot of the likelihood L(λ) = λ^8/(3! 3! 2!) e^{−3λ} against λ; the curve peaks at λ = 8/3)
STAT 151 Class 6 Slide 12
Financial crises data (4)
(figure: left panel plots the likelihood λ^8/(3! 3! 2!) e^{−3λ} against λ; right panel plots the log-likelihood 8 log(λ) − log(3! 3! 2!) − 3λ against λ; both curves peak at the same value of λ)
STAT 151 Class 6 Slide 13
Financial crises data (5)
L(λ) = ∏_{i=1}^{3} f(X_i | λ) = ∏_{i=1}^{3} (λ^{X_i}/X_i!) e^{−λ}
l(λ) = Σ_{i=1}^{3} [X_i log(λ) − λ − log(X_i!)] = log(λ) Σ_{i=1}^{3} X_i − 3λ − Σ_{i=1}^{3} log(X_i!)
Let λ̂ be the MLE of λ; then λ̂ is determined as follows:
dl(λ̂)/dλ = (1/λ̂) Σ_{i=1}^{3} X_i − 3 = 0 ⟹ λ̂ = (Σ_{i=1}^{3} X_i)/3 = X̄ = 8/3
STAT 151 Class 6 Slide 14
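The same cross-check works here (a sketch, not from the slides): the closed-form MLE is the sample mean, and a grid search over the Poisson log-likelihood lands on the same value. `math.lgamma(x + 1)` is used for log(x!).

```python
import math

x = [3, 3, 2]               # crises in n = 3 decades
lam_hat = sum(x) / len(x)   # closed-form MLE: X-bar = 8/3

# Poisson log-likelihood l(λ) = log(λ) Σ X_i − nλ − Σ log(X_i!).
def loglik(lam):
    return sum(xi * math.log(lam) - lam - math.lgamma(xi + 1) for xi in x)

# A grid search lands on the same maximiser as the calculus.
grid = [i / 1000 for i in range(1, 10000)]
lam_grid = max(grid, key=loglik)

print(round(lam_hat, 3), lam_grid)  # 2.667 2.667
```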
Invariance property: Financial crises data (6)
MLE of λ, the average # crises in a decade, is λ̂ = X̄ = 8/3.
Other characteristics of X might be of interest:
(a) Average time between crises, E(T) = 1/λ (recall the link to Exp(λ)):
Ê(T) = 1/λ̂ = 1/(8/3) = 0.375 decades, or 3.75 years
(b) Probability of no crises in the next decade, P(X = 0):
P(X = 0) = (λ^0/0!) e^{−λ} = e^{−λ}, so P̂(X = 0) = e^{−λ̂} = e^{−8/3} ≈ 0.07
If θ̂ is the MLE of θ, then for any function g(θ), the MLE of g(θ) is g(θ̂). This is called the invariance property of MLE.
STAT 151 Class 6 Slide 15
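The two plug-in calculations on this slide, written out as code (a sketch, not from the slides):

```python
import math

lam_hat = 8 / 3   # MLE of the crisis rate per decade, from the slide

# (a) MLE of the average time between crises, E(T) = 1/λ.
ET_hat = 1 / lam_hat          # 0.375 decades
print(ET_hat * 10)            # 3.75 years

# (b) MLE of the probability of a crisis-free decade, P(X = 0) = e^{-λ}.
p0_hat = math.exp(-lam_hat)
print(round(p0_hat, 2))       # 0.07
```

By invariance, no new maximisation is needed: each MLE is just the function g applied to λ̂.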
Special cases: Financial crises data (7)
Original data (X_1, X_2, X_3, X_4) = (3, 3, 2, 1); X_4 is an observation from t = 0.7 of a decade, so X_4 is called censored. Assuming censoring is random, X_4 ~ Poisson(0.7λ).
The likelihood that the first observation is 3 is (λ^3/3!) e^{−λ}.
The likelihood that the second observation is 3 is (λ^3/3!) e^{−λ}.
The likelihood that the third observation is 2 is (λ^2/2!) e^{−λ}.
The likelihood that the fourth observation is 1 is ((0.7λ)^1/1!) e^{−0.7λ}.
The likelihood that (X_1, X_2, X_3, X_4) = (3, 3, 2, 1) is
L(λ) = (λ^3/3!) e^{−λ} × (λ^3/3!) e^{−λ} × (λ^2/2!) e^{−λ} × ((0.7λ)^1/1!) e^{−0.7λ} = 0.7 λ^9/(3! 3! 2! 1!) e^{−3.7λ},
which is the new likelihood.
STAT 151 Class 6 Slide 16
Financial crises data (8)
L(λ) = 0.7 λ^9/(3! 3! 2! 1!) e^{−3.7λ}
l(λ) = log L(λ) = log(0.7) + 9 log(λ) − 3.7λ − log(3! 3! 2! 1!)
Let λ̂ be the MLE of λ; then λ̂ is determined as follows:
dl(λ̂)/dλ = 9/λ̂ − 3.7 = 0 ⟹ λ̂ = 9/3.7 = Total # events / Total time ≈ 2.43
STAT 151 Class 6 Slide 17
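A numeric check of the censored-data MLE (a sketch, not from the slides; constants in the log-likelihood are dropped because they do not affect the maximiser):

```python
import math

# Censored-data log-likelihood from the slide, dropping constant terms:
#   l(λ) = 9 log(λ) − 3.7 λ
def loglik(lam):
    return 9 * math.log(lam) - 3.7 * lam

# MLE: total # events divided by total exposure time.
lam_hat = 9 / 3.7

# Grid check that 9/3.7 is indeed the maximiser.
grid = [i / 1000 for i in range(1, 10000)]
assert abs(max(grid, key=loglik) - lam_hat) < 1e-3
print(round(lam_hat, 2))  # 2.43
```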
Pandemic data (2)
MLE suggests estimating p using p̂ = X̄ = (Σ_{i=1}^{n} X_i)/n
p̂ = X̄ is called an estimator because it can be applied to any sample X_1, ..., X_n.
Our sample (X_1, ..., X_100) = (1, 1, 1, 0, 0, ..., 0, 1) gives p̂ = 60/100 = 0.6, so our estimate of p is 0.6.
An estimate is the value of an estimator applied to a particular sample.
STAT 151 Class 6 Slide 18
Estimate vs. estimator
Using a sample (X_1, ..., X_100) = (1, 1, 1, 0, 0, ..., 0, 1), our estimate of p is p̂ = X̄ = 0.6.
Our estimate comes from a sample. Its sampling error,
estimate − parameter = 0.6 − p,
is unknown and not estimable since p is unknown. We study the performance of the estimator that produces our estimate:
One sample: p̂ − p = ?
Many samples:
E(p̂ − p) = average sampling error = Bias
E[{p̂ − E(p̂)}^2] = differences in estimates between samples = Variance
E[(p̂ − p)^2] = average distance of estimates to p = MSE
STAT 151 Class 6 Slide 19
Bias — average sampling error
For an estimator θ̂ of θ, the bias is the average sampling error using θ̂ over different samples of size n:
bias(θ̂) = E(θ̂ − θ)
An estimator is unbiased if bias(θ̂) = 0; otherwise, it is biased. A biased estimator systematically overestimates or underestimates θ.
Some estimators may be biased when the sample size n is small but bias(θ̂) → 0 for large values of n. Those estimators are called consistent estimators. In practice, it is often sufficient to look for a consistent rather than an unbiased estimator.
(figure: estimates from different samples centre on θ for an unbiased estimator, and miss θ systematically for a biased one)
STAT 151 Class 6 Slide 20
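A simulation sketch of "biased but consistent" (not from the slides): by the invariance property, X̄² is the MLE of p², but E(X̄²) = p² + p(1 − p)/n, so it is biased for small n and the bias vanishes as n grows. The seed, sample sizes, and replication count are arbitrary illustrative choices.

```python
import random

random.seed(20)
p, reps = 0.6, 50000

def p_hat_sq(n):
    # Estimate p^2 by squaring the sample mean of n Bernoulli(p) draws.
    xs = [random.random() < p for _ in range(n)]
    return (sum(xs) / n) ** 2

# Estimated bias E(estimator) - p^2 for a small and a larger sample size.
biases = {}
for n in (5, 100):
    estimates = [p_hat_sq(n) for _ in range(reps)]
    biases[n] = sum(estimates) / reps - p ** 2

print(biases)  # bias is clearly positive at n = 5, nearly gone at n = 100
```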
Variance — does the estimate vary much with the sample?
The variance measures how θ̂ estimates the same θ using different samples of size n:
var(θ̂) = E[{θ̂ − E(θ̂)}^2]
Recall that var(θ̂) is the sampling variation.
A large var(θ̂) suggests the estimator's estimate of the (same) unknown θ varies a lot with the sample chosen, so an estimator with a large variance is bad.
(figure: estimates from different samples cluster tightly around E(θ̂) for a small-variance estimator, and spread widely for a large-variance one)
Note that the reference is E(θ̂), not θ, so an estimator with a small variance does not guarantee that its estimate will be close to the unknown θ.
STAT 151 Class 6 Slide 21
Mean squared error (MSE) — is our estimate close to the unknown θ?
Mean squared error (MSE) measures, on average, the distance between the estimate and θ:
MSE(θ̂) = E[(θ̂ − θ)^2] = {bias(θ̂)}^2 + var(θ̂)
For an unbiased estimator, bias(θ̂) = 0 and MSE(θ̂) = var(θ̂), for all n.
For a consistent estimator, bias(θ̂) → 0 and MSE(θ̂) → var(θ̂), for large n.
For consistent or unbiased estimators, variance is the best measure of performance. We illustrate the concept using unbiased estimators, so MSE(θ̂) = var(θ̂).
(figure: estimates from a small-MSE estimator cluster near θ; estimates from a large-MSE estimator do not)
The estimator with the lower MSE has a higher chance of producing an estimate close to θ.
STAT 151 Class 6 Slide 22
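The bias/variance/MSE decomposition can be seen in a simulation sketch (not from the slides; seed and replication count are arbitrary): draw many samples of size n = 100 from Bernoulli(0.6), apply p̂ = X̄ to each, and compute the three Monte Carlo summaries.

```python
import random

random.seed(151)
p, n, reps = 0.6, 100, 20000

# Apply the estimator p_hat = X-bar to many samples of size n.
estimates = [sum(random.random() < p for _ in range(n)) / n
             for _ in range(reps)]

mean_hat = sum(estimates) / reps
bias = mean_hat - p                                       # ≈ 0: unbiased
var = sum((e - mean_hat) ** 2 for e in estimates) / reps  # ≈ p(1-p)/n
mse = sum((e - p) ** 2 for e in estimates) / reps

# MSE decomposes exactly as bias^2 + variance.
print(abs(mse - (bias ** 2 + var)) < 1e-9)  # True
```

Because X̄ is unbiased, the simulated MSE is essentially all variance, matching the slide's claim.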
Bias vs. Variance: Financial crises data (9)
Two estimators of λ:
(a) λ̂ = (X_1 + X_2 + X_3)/3
(b) λ̂ = (X_1 + X_2 + X_3 + X_4)/(3 + t)
Both are MLEs; which is better?
(1) Recall for Y ~ Poisson(µ), E(Y) = var(Y) = µ.
(2) X_1 + X_2 + X_3 + X_4 = # events in (3 + t) decades ~ Poisson((3 + t)λ), t > 0:
E[(X_1 + X_2 + X_3 + X_4)/(3 + t)] = E(X_1 + X_2 + X_3 + X_4)/(3 + t) = (3 + t)λ/(3 + t) = λ ⟹ Bias = 0
var[(X_1 + X_2 + X_3 + X_4)/(3 + t)] = var(X_1 + X_2 + X_3 + X_4)/(3 + t)^2 = (3 + t)λ/(3 + t)^2 = λ/(3 + t), t > 0
(3) Variance decreases with a larger t ⟹ (b) is better.
(4) Using a larger sample is better (c.f., Class 7).
STAT 151 Class 6 Slide 23
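The variance comparison can be checked by simulation (a sketch, not from the slides; the Poisson sampler uses Knuth's multiplication method, and the seed and replication count are arbitrary choices): both estimators are unbiased, but (b), which uses the extra 0.7 decade of exposure, has the smaller variance, λ/3.7 vs. λ/3.

```python
import math
import random

random.seed(6)
lam, t, reps = 8 / 3, 0.7, 20000

def pois(mu):
    # Knuth's multiplication method for Poisson sampling; fine for small mu.
    L, k, prod = math.exp(-mu), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= L:
            return k
        k += 1

est_a, est_b = [], []
for _ in range(reps):
    x123 = [pois(lam) for _ in range(3)]        # three full decades
    x4 = pois(t * lam)                          # a further 0.7 of a decade
    est_a.append(sum(x123) / 3)                 # estimator (a)
    est_b.append((sum(x123) + x4) / (3 + t))    # estimator (b)

def mc_var(v):
    # Monte Carlo variance of a list of estimates.
    m = sum(v) / len(v)
    return sum((e - m) ** 2 for e in v) / len(v)

# Theory: var(a) = λ/3 ≈ 0.889 and var(b) = λ/3.7 ≈ 0.721.
print(round(mc_var(est_a), 2), round(mc_var(est_b), 2))
```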
Summary
Consistent or unbiased estimators are desirable. Among consistent or unbiased estimators, the estimator with the smallest variance is efficient.
Under most circumstances, as the sample size increases (asymptotically), if θ̂ is the MLE and θ̂′ is any other unbiased estimator of θ,
var(θ̂) ≤ var(θ̂′) ⟹ θ̂ is at least as good as θ̂′, so the MLE is efficient.
Invariance: if we are interested in estimating any function of θ, say g(θ), the following also holds:
var[g(θ̂)] ≤ var[g(θ̂′)] ⟹ g(θ̂) is at least as good as g(θ̂′).
STAT 151 Class 6 Slide 24
Basic Concepts of Inference Corresponds to Chapter 6 of Tamhane and Dunlop Slides prepared by Elizabeth Newton (MIT) with some slides by Jacqueline Telford (Johns Hopkins University) and Roy Welsch (MIT).
More informationIIT JAM : MATHEMATICAL STATISTICS (MS) 2013
IIT JAM : MATHEMATICAL STATISTICS (MS 2013 Question Paper with Answer Keys Ctanujit Classes Of Mathematics, Statistics & Economics Visit our website for more: www.ctanujit.in IMPORTANT NOTE FOR CANDIDATES
More information1. Point Estimators, Review
AMS571 Prof. Wei Zhu 1. Point Estimators, Review Example 1. Let be a random sample from. Please find a good point estimator for Solutions. There are the typical estimators for and. Both are unbiased estimators.
More informationMathematical statistics
October 1 st, 2018 Lecture 11: Sufficient statistic Where are we? Week 1 Week 2 Week 4 Week 7 Week 10 Week 14 Probability reviews Chapter 6: Statistics and Sampling Distributions Chapter 7: Point Estimation
More informationSTAT 400 Homework 09 Spring 2018 Dalpiaz UIUC Due: Friday, April 6, 2:00 PM
STAT Homework 9 Sprg 28 Dalpaz UIUC Due: Fray, Aprl 6, 2: PM Exercse f(x, θ) = θ e x/θ, x >, θ > Note that, the momets of ths strbuto are gve by E[X k ] = Ths wll be a useful fact for Exercses 2 a 3. x
More information1 Inference, probability and estimators
1 Inference, probability and estimators The rest of the module is concerned with statistical inference and, in particular the classical approach. We will cover the following topics over the next few weeks.
More informationBias Variance Trade-off
Bias Variance Trade-off The mean squared error of an estimator MSE(ˆθ) = E([ˆθ θ] 2 ) Can be re-expressed MSE(ˆθ) = Var(ˆθ) + (B(ˆθ) 2 ) MSE = VAR + BIAS 2 Proof MSE(ˆθ) = E((ˆθ θ) 2 ) = E(([ˆθ E(ˆθ)]
More informationSTAT 461/561- Assignments, Year 2015
STAT 461/561- Assignments, Year 2015 This is the second set of assignment problems. When you hand in any problem, include the problem itself and its number. pdf are welcome. If so, use large fonts and
More informationChapter 3: Unbiased Estimation Lecture 22: UMVUE and the method of using a sufficient and complete statistic
Chapter 3: Unbiased Estimation Lecture 22: UMVUE and the method of using a sufficient and complete statistic Unbiased estimation Unbiased or asymptotically unbiased estimation plays an important role in
More informationVariations. ECE 6540, Lecture 10 Maximum Likelihood Estimation
Variations ECE 6540, Lecture 10 Last Time BLUE (Best Linear Unbiased Estimator) Formulation Advantages Disadvantages 2 The BLUE A simplification Assume the estimator is a linear system For a single parameter
More informationLecture 3. G. Cowan. Lecture 3 page 1. Lectures on Statistical Data Analysis
Lecture 3 1 Probability (90 min.) Definition, Bayes theorem, probability densities and their properties, catalogue of pdfs, Monte Carlo 2 Statistical tests (90 min.) general concepts, test statistics,
More informationWeek 2: Review of probability and statistics
Week 2: Review of probability and statistics Marcelo Coca Perraillon University of Colorado Anschutz Medical Campus Health Services Research Methods I HSMP 7607 2017 c 2017 PERRAILLON ALL RIGHTS RESERVED
More informationLECTURE 5 NOTES. n t. t Γ(a)Γ(b) pt+a 1 (1 p) n t+b 1. The marginal density of t is. Γ(t + a)γ(n t + b) Γ(n + a + b)
LECTURE 5 NOTES 1. Bayesian point estimators. In the conventional (frequentist) approach to statistical inference, the parameter θ Θ is considered a fixed quantity. In the Bayesian approach, it is considered
More informationParameter estimation! and! forecasting! Cristiano Porciani! AIfA, Uni-Bonn!
Parameter estimation! and! forecasting! Cristiano Porciani! AIfA, Uni-Bonn! Questions?! C. Porciani! Estimation & forecasting! 2! Cosmological parameters! A branch of modern cosmological research focuses
More informationEC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix)
1 EC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix) Taisuke Otsu London School of Economics Summer 2018 A.1. Summation operator (Wooldridge, App. A.1) 2 3 Summation operator For
More information