2.3 Methods of Estimation
96 CHAPTER 2. ELEMENTS OF STATISTICAL INFERENCE

2.3.1 Method of Moments

The Method of Moments is a simple technique based on the idea that the sample moments are natural estimators of the population moments. The $k$-th population moment of a random variable $Y$ is

$$\mu_k = E(Y^k), \quad k = 1, 2, \ldots,$$

and the $k$-th sample moment of a sample $Y_1, \ldots, Y_n$ is

$$m_k = \frac{1}{n}\sum_{i=1}^n Y_i^k, \quad k = 1, 2, \ldots$$

If $Y_1, \ldots, Y_n$ are assumed to be independent and identically distributed, then the Method of Moments estimators of the distribution parameters $\vartheta_1, \ldots, \vartheta_p$ are obtained by solving the set of $p$ equations

$$\mu_k = m_k, \quad k = 1, 2, \ldots, p.$$

Under fairly general conditions, Method of Moments estimators are asymptotically normal and asymptotically unbiased. However, they are not, in general, efficient.

Example 2.7. Let $Y_i \sim_{iid} N(\mu, \sigma^2)$. We will find the Method of Moments estimators of $\mu$ and $\sigma^2$. We have

$$\mu_1 = E(Y) = \mu, \qquad \mu_2 = E(Y^2) = \sigma^2 + \mu^2,$$

$$m_1 = \bar{Y} \quad \text{and} \quad m_2 = \frac{1}{n}\sum_{i=1}^n Y_i^2.$$

So, the Method of Moments estimators of $\mu$ and $\sigma^2$ satisfy the equations

$$\hat{\mu} = \bar{Y}, \qquad \hat{\sigma}^2 + \hat{\mu}^2 = \frac{1}{n}\sum_{i=1}^n Y_i^2.$$

Thus, we obtain

$$\hat{\mu} = \bar{Y}, \qquad \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n Y_i^2 - \bar{Y}^2 = \frac{1}{n}\sum_{i=1}^n (Y_i - \bar{Y})^2.$$
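As a quick numerical sketch of the moment matching in Example 2.7 (the sample below is hypothetical, chosen only for illustration):

```python
# Method of Moments for an assumed N(mu, sigma^2) sample:
# match mu_hat = m_1 and sigma2_hat = m_2 - m_1^2.

def method_of_moments_normal(y):
    n = len(y)
    m1 = sum(v for v in y) / n           # first sample moment
    m2 = sum(v * v for v in y) / n       # second sample moment
    mu_hat = m1
    sigma2_hat = m2 - m1 ** 2            # equals (1/n) * sum((y_i - ybar)^2)
    return mu_hat, sigma2_hat

y = [4.0, 6.0, 5.0, 7.0, 3.0]            # hypothetical data
mu_hat, sigma2_hat = method_of_moments_normal(y)
print(mu_hat, sigma2_hat)                # 5.0 2.0
```

Note that $m_2 - m_1^2$ and $\frac{1}{n}\sum (y_i - \bar{y})^2$ give the same value, as the algebra in the example shows.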
2.3. METHODS OF ESTIMATION 97

Estimators obtained by the Method of Moments are not always unique.

Example 2.8. Let $Y_i \sim_{iid} \text{Poisson}(\lambda)$. We will find the Method of Moments estimator of $\lambda$. We know that for this distribution $E(Y_i) = \text{var}(Y_i) = \lambda$. Hence, by comparing the first and second population and sample moments we get two different estimators of the same parameter:

$$\hat{\lambda}_1 = \bar{Y}, \qquad \hat{\lambda}_2 = \frac{1}{n}\sum_{i=1}^n Y_i^2 - \bar{Y}^2.$$

Exercise 2.1. Let $Y = (Y_1, \ldots, Y_n)^T$ be a random sample from the distribution with the pdf given by

$$f(y; \vartheta) = \begin{cases} \dfrac{2}{\vartheta^2}(\vartheta - y), & y \in [0, \vartheta], \\ 0, & \text{elsewhere.} \end{cases}$$

Find an estimator of $\vartheta$ using the Method of Moments.

2.3.2 Method of Maximum Likelihood

This method was introduced by R. A. Fisher and it is the most common method of constructing estimators. We will illustrate the method by the following simple example.

Example 2.9. Assume that $Y_i \sim_{iid} \text{Bernoulli}(p)$, $i = 1, 2, 3, 4$, with probability of success equal to $p$, where $p \in \Theta = \{\frac{1}{4}, \frac{2}{4}, \frac{3}{4}\}$, i.e., $p$ belongs to a parameter space of only three elements. We want to estimate the parameter $p$ based on observations of the random sample $Y = (Y_1, Y_2, Y_3, Y_4)^T$. The joint pmf is

$$P(Y = y; p) = \prod_{i=1}^4 P(Y_i = y_i; p) = p^{\sum y_i}(1 - p)^{4 - \sum y_i}.$$

The different values of the joint pmf for all $p \in \Theta$ are given in the table below.
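The non-uniqueness in Example 2.8 is easy to see numerically: on any sample the two moment estimators of $\lambda$ generally disagree. A minimal sketch (the counts are hypothetical):

```python
# Two Method of Moments estimators of a Poisson rate:
# lam1 matches the first moment (E(Y) = lambda),
# lam2 matches the variance (var(Y) = lambda).

def poisson_mom_estimators(y):
    n = len(y)
    lam1 = sum(y) / n                               # sample mean
    lam2 = sum(v * v for v in y) / n - lam1 ** 2    # (biased) sample variance
    return lam1, lam2

y = [2, 0, 3, 1, 4, 2]                              # hypothetical counts
lam1, lam2 = poisson_mom_estimators(y)
print(lam1, lam2)   # two different estimates of the same parameter
```

Here $\hat{\lambda}_1 = 2.0$ while $\hat{\lambda}_2 \approx 1.67$; both are consistent for $\lambda$, yet they differ on any finite sample.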
Since the joint pmf depends on $y$ only through $\sum y_i$, its values for all $p \in \Theta$ are:

  sum y_i    p = 1/4    p = 2/4    p = 3/4
     0        81/256     16/256      1/256
     1        27/256     16/256      3/256
     2         9/256     16/256      9/256
     3         3/256     16/256     27/256
     4         1/256     16/256     81/256

We see that $P(\sum Y_i = 0; p)$ is largest when $p = \frac{1}{4}$. It can be interpreted that when the observed value of the random sample is $(0,0,0,0)^T$, the most likely value of the parameter $p$ is $\frac{1}{4}$. Then, this value can be considered as an estimate of $p$. Similarly, we can conclude that when the observed value of the random sample is, for example, $(0,1,1,0)^T$, then the most likely value of the parameter is $p = \frac{1}{2}$. Altogether, we have

$$\hat{p} = \begin{cases} \frac{1}{4} & \text{if we observe all failures or just one success;} \\ \frac{2}{4} & \text{if we observe two failures and two successes;} \\ \frac{3}{4} & \text{if we observe three successes and one failure, or four successes.} \end{cases}$$

Note that, for each point $(y_1, y_2, y_3, y_4)^T$, the estimate $\hat{p}$ is the value of the parameter $p$ for which the joint mass function, treated as a function of $p$, attains its maximum (or largest value). Here, we treat the joint pmf as a function of the parameter $p$ for a given $y$. Such a function is called the likelihood function and it is denoted by $L(p \mid y)$.

Now we introduce a formal definition of the Maximum Likelihood Estimator (MLE).

Definition 2.1. The MLE$(\vartheta)$ is the statistic $T(Y) = \hat{\vartheta}$ whose value for a given $y$ satisfies the condition

$$L(\hat{\vartheta} \mid y) = \sup_{\vartheta \in \Theta} L(\vartheta \mid y),$$

where $L(\vartheta \mid y)$ is the likelihood function for $\vartheta$.

Properties of MLE

The MLEs are invariant, that is, $\text{MLE}(g(\vartheta)) = g(\text{MLE}(\vartheta)) = g(\hat{\vartheta})$.
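The table-lookup reasoning above amounts to evaluating the joint pmf at each candidate $p$ and keeping the argmax; a minimal sketch for this three-point parameter space:

```python
# Maximum likelihood over a finite parameter space {1/4, 2/4, 3/4}:
# evaluate the joint Bernoulli pmf at each candidate p, keep the argmax.

def bernoulli_likelihood(p, y):
    s, n = sum(y), len(y)
    return p ** s * (1 - p) ** (n - s)

def mle_over_grid(y, grid=(0.25, 0.5, 0.75)):
    return max(grid, key=lambda p: bernoulli_likelihood(p, y))

print(mle_over_grid([0, 0, 0, 0]))  # 0.25
print(mle_over_grid([0, 1, 1, 0]))  # 0.5
print(mle_over_grid([1, 1, 1, 0]))  # 0.75
```

The three printed values reproduce the three cases of $\hat{p}$ listed above.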
MLEs are asymptotically normal and asymptotically unbiased. Also, they are efficient, that is,

$$\text{eff}(g(\hat{\vartheta})) = \lim_{n \to \infty} \frac{\text{CRLB}(g(\vartheta))}{\text{var}\, g(\hat{\vartheta})} = 1.$$

In this case, for large $n$, $\text{var}\, g(\hat{\vartheta})$ is approximately equal to the CRLB. Therefore, for large $n$,

$$g(\hat{\vartheta}) \sim N\big(g(\vartheta), \text{CRLB}(g(\vartheta))\big)$$

approximately. This is called the asymptotic distribution of $g(\hat{\vartheta})$.

Example 2.11. Suppose that $Y_1, \ldots, Y_n$ are independent Poisson$(\lambda)$ random variables. Then the likelihood is

$$L(\lambda \mid y) = \prod_{i=1}^n \frac{\lambda^{y_i} e^{-\lambda}}{y_i!} = \frac{\lambda^{\sum y_i}\, e^{-n\lambda}}{\prod_{i=1}^n y_i!}.$$

We need to find the value of $\lambda$ which maximizes the likelihood. This value will also maximize $l(\lambda \mid y) = \log L(\lambda \mid y)$, which is easier to work with. Now, we have

$$l(\lambda \mid y) = \sum_{i=1}^n y_i \log\lambda - n\lambda - \sum_{i=1}^n \log(y_i!).$$

The value of $\lambda$ which maximizes $l(\lambda \mid y)$ is the solution of $dl/d\lambda = 0$. Thus, solving the equation

$$\frac{dl}{d\lambda} = \frac{\sum_{i=1}^n y_i}{\lambda} - n = 0$$

yields the estimator

$$\hat{\lambda} = T(Y) = \frac{1}{n}\sum_{i=1}^n Y_i = \bar{Y},$$

which is the same as the Method of Moments estimator. The second derivative of $l$ is negative for all $\lambda$; hence $\hat{\lambda}$ indeed maximizes the log-likelihood.

Example 2.12. Suppose that $Y_1, \ldots, Y_n$ are independent $N(\mu, \sigma^2)$ random variables. Then the likelihood is

$$L(\mu, \sigma^2 \mid y) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{-\frac{(y_i - \mu)^2}{2\sigma^2}\right\} = (2\pi\sigma^2)^{-\frac{n}{2}} \exp\left\{-\frac{1}{2\sigma^2}\sum_{i=1}^n (y_i - \mu)^2\right\}$$

and so the log-likelihood is

$$l(\mu, \sigma^2 \mid y) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n (y_i - \mu)^2.$$
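The Poisson derivation above can be checked numerically: the log-likelihood evaluated at $\bar{y}$ dominates its value at nearby $\lambda$'s. A small sketch with hypothetical counts:

```python
# Numerical check that the Poisson log-likelihood is maximized at ybar.
import math

def poisson_loglik(lam, y):
    # log L(lambda | y) up to the additive constant -sum(log(y_i!))
    return sum(y) * math.log(lam) - len(y) * lam

y = [2, 0, 3, 1, 4, 2]            # hypothetical counts
lam_hat = sum(y) / len(y)         # ybar = 2.0
assert all(poisson_loglik(lam_hat, y) >= poisson_loglik(lam, y)
           for lam in [0.5, 1.0, 1.5, 2.5, 3.0])
print(lam_hat)                    # 2.0
```

Dropping the $\sum \log(y_i!)$ term is harmless here, since it does not depend on $\lambda$.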
Thus, we have

$$\frac{\partial l}{\partial \mu} = \frac{1}{2\sigma^2}\sum_{i=1}^n 2(y_i - \mu) = \frac{1}{\sigma^2}\sum_{i=1}^n (y_i - \mu)$$

and

$$\frac{\partial l}{\partial \sigma^2} = -\frac{n}{2}\,\frac{2\pi}{2\pi\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^n (y_i - \mu)^2 = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^n (y_i - \mu)^2.$$

Setting these equations to zero, we obtain

$$\frac{1}{\hat{\sigma}^2}\sum_{i=1}^n (y_i - \hat{\mu}) = 0 \;\Longrightarrow\; \sum_{i=1}^n y_i = n\hat{\mu},$$

so that $\hat{\mu} = \bar{Y}$ is the maximum likelihood estimator of $\mu$, and

$$-\frac{n}{2\hat{\sigma}^2} + \frac{1}{2\hat{\sigma}^4}\sum_{i=1}^n (y_i - \bar{y})^2 = 0 \;\Longrightarrow\; n\hat{\sigma}^2 = \sum_{i=1}^n (y_i - \bar{y})^2,$$

so that $\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n (Y_i - \bar{Y})^2$ is the maximum likelihood estimator of $\sigma^2$. These are the same as the Method of Moments estimators.

Exercise 2.2. Let $Y = (Y_1, \ldots, Y_n)^T$ be a random sample from a Gamma distribution, Gamma$(\lambda, \alpha)$, with the following pdf:

$$f(y; \lambda, \alpha) = \frac{\lambda^\alpha}{\Gamma(\alpha)}\, y^{\alpha - 1} e^{-\lambda y}, \quad \text{for } y > 0.$$

Assume that the parameter $\alpha$ is known.

(a) Identify the complete sufficient statistic for $\lambda$.

(b) Find the MLE$[g(\lambda)]$, where $g(\lambda) = \frac{1}{\lambda}$. Is the estimator a function of the complete sufficient statistic?

(c) Knowing that $E(Y_i) = \frac{\alpha}{\lambda}$ for all $i = 1, \ldots, n$, check that the MLE$[g(\lambda)]$ is an unbiased estimator of $g(\lambda)$. What can you conclude about the properties of the estimator?
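The closed-form normal MLEs just derived can likewise be sanity-checked against a grid of alternative parameter values; the data vector is hypothetical:

```python
# Closed-form normal MLEs, checked against a small grid of (mu, sigma^2).
import math

def normal_loglik(mu, sigma2, y):
    n = len(y)
    return (-0.5 * n * math.log(2 * math.pi * sigma2)
            - sum((v - mu) ** 2 for v in y) / (2 * sigma2))

def normal_mle(y):
    n = len(y)
    mu_hat = sum(y) / n
    sigma2_hat = sum((v - mu_hat) ** 2 for v in y) / n   # divisor n, not n - 1
    return mu_hat, sigma2_hat

y = [4.0, 6.0, 5.0, 7.0, 3.0]                            # hypothetical data
mu_hat, s2_hat = normal_mle(y)
best = normal_loglik(mu_hat, s2_hat, y)
assert all(best >= normal_loglik(m, s2, y)
           for m in [4.0, 4.5, 5.5, 6.0] for s2 in [1.0, 2.0, 3.0])
print(mu_hat, s2_hat)                                    # 5.0 2.0
```

Note the divisor $n$ rather than $n-1$: the MLE of $\sigma^2$ is the biased sample variance, exactly as in the derivation.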
2.3.3 Method of Least Squares

If $Y_1, \ldots, Y_n$ are independent random variables which have the same variance and higher-order moments, and if each $E(Y_i)$ is a linear function of $\vartheta_1, \ldots, \vartheta_p$, then the Least Squares estimates of $\vartheta_1, \ldots, \vartheta_p$ are obtained by minimizing

$$S(\vartheta) = \sum_{i=1}^n \{Y_i - E(Y_i)\}^2.$$

The Least Squares estimator of $\vartheta_j$ has minimum variance amongst all linear unbiased estimators of $\vartheta_j$ and is known as the best linear unbiased estimator (BLUE). If the $Y_i$'s have a normal distribution, then the Least Squares estimator of $\vartheta_j$ is the Maximum Likelihood estimator, has a normal distribution, and is the MVUE.

Example 2.13. Suppose that $Y_1, \ldots, Y_{n_1}$ are independent $N(\mu_1, \sigma^2)$ random variables and that $Y_{n_1+1}, \ldots, Y_n$ are independent $N(\mu_2, \sigma^2)$ random variables. Find the Least Squares estimators of $\mu_1$ and $\mu_2$.

Since

$$E(Y_i) = \begin{cases} \mu_1, & i = 1, \ldots, n_1, \\ \mu_2, & i = n_1 + 1, \ldots, n, \end{cases}$$

it is a linear function of $\mu_1$ and $\mu_2$. The Least Squares estimators are obtained by minimizing

$$S = \sum_{i=1}^n \{Y_i - E(Y_i)\}^2 = \sum_{i=1}^{n_1} (Y_i - \mu_1)^2 + \sum_{i=n_1+1}^n (Y_i - \mu_2)^2.$$

Now,

$$\frac{\partial S}{\partial \mu_1} = -2\sum_{i=1}^{n_1} (Y_i - \mu_1) = 0 \;\Longrightarrow\; \hat{\mu}_1 = \frac{1}{n_1}\sum_{i=1}^{n_1} Y_i = \bar{Y}_1$$

and

$$\frac{\partial S}{\partial \mu_2} = -2\sum_{i=n_1+1}^n (Y_i - \mu_2) = 0 \;\Longrightarrow\; \hat{\mu}_2 = \frac{1}{n_2}\sum_{i=n_1+1}^n Y_i = \bar{Y}_2,$$

where $n_2 = n - n_1$. So, we estimate the mean of each group in the population by the mean of the corresponding sample.
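Because $S$ splits into a separate sum for each group, the two-group least squares solution is just the pair of group means. A minimal sketch (grouping and values are hypothetical):

```python
# Least squares for the two-group mean model: S splits into per-group
# sums of squares, so each group mean is the LS estimate.

def two_group_ls(y, n1):
    g1, g2 = y[:n1], y[n1:]
    mu1_hat = sum(g1) / len(g1)
    mu2_hat = sum(g2) / len(g2)
    return mu1_hat, mu2_hat

y = [3.0, 5.0, 4.0, 10.0, 12.0]   # hypothetical: first 3 obs form group 1
print(two_group_ls(y, 3))         # (4.0, 11.0)
```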
Example 2.14. Suppose that $Y_i \sim N(\beta_0 + \beta_1 x_i, \sigma^2)$ independently for $i = 1, 2, \ldots, n$, where $x_i$ is some explanatory variable. This is called the simple linear regression model. Find the Least Squares estimators of $\beta_0$ and $\beta_1$.

Since $E(Y_i) = \beta_0 + \beta_1 x_i$, it is a linear function of $\beta_0$ and $\beta_1$. So we can obtain the Least Squares estimates by minimizing

$$S = \sum_{i=1}^n (Y_i - \beta_0 - \beta_1 x_i)^2.$$

Now,

$$\frac{\partial S}{\partial \beta_0} = -2\sum_{i=1}^n (Y_i - \beta_0 - \beta_1 x_i) = 0 \;\Longrightarrow\; \sum_{i=1}^n Y_i - n\hat{\beta}_0 - \hat{\beta}_1\sum_{i=1}^n x_i = 0 \;\Longrightarrow\; \hat{\beta}_0 = \bar{Y} - \hat{\beta}_1 \bar{x}$$

and

$$\frac{\partial S}{\partial \beta_1} = -2\sum_{i=1}^n x_i (Y_i - \beta_0 - \beta_1 x_i) = 0 \;\Longrightarrow\; \sum_{i=1}^n x_i Y_i - \hat{\beta}_0\sum_{i=1}^n x_i - \hat{\beta}_1\sum_{i=1}^n x_i^2 = 0.$$

Substituting the first equation into the second one, we have

$$\sum_{i=1}^n x_i Y_i - (\bar{Y} - \hat{\beta}_1 \bar{x})\, n\bar{x} - \hat{\beta}_1\sum_{i=1}^n x_i^2 = 0 \;\Longrightarrow\; \left(\sum_{i=1}^n x_i^2 - n\bar{x}^2\right)\hat{\beta}_1 = \sum_{i=1}^n x_i Y_i - n\bar{x}\bar{Y}.$$

Hence, we have the estimators

$$\hat{\beta}_0 = \bar{Y} - \hat{\beta}_1 \bar{x} \quad \text{and} \quad \hat{\beta}_1 = \frac{\sum_{i=1}^n x_i Y_i - n\bar{x}\bar{Y}}{\sum_{i=1}^n x_i^2 - n\bar{x}^2}.$$

These are the Least Squares estimators of the regression coefficients $\beta_0$ and $\beta_1$.

Exercise 2.3. Given data $(x_1, y_1), \ldots, (x_n, y_n)$, assume that $Y_i \sim N(\beta_0 + \beta_1 x_i, \sigma^2)$ independently for $i = 1, 2, \ldots, n$ and that $\sigma^2$ is known.

(a) Show that the Maximum Likelihood Estimators of $\beta_0$ and $\beta_1$ must be the same as the Least Squares Estimators of these parameters.
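The closed-form estimators $\hat{\beta}_0$ and $\hat{\beta}_1$ translate directly into code; with hypothetical data lying exactly on a line, the fit recovers the line's coefficients:

```python
# Simple linear regression by the closed-form LS estimators derived above.

def least_squares_line(x, y):
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    b1 = ((sum(xi * yi for xi, yi in zip(x, y)) - n * xbar * ybar)
          / (sum(xi * xi for xi in x) - n * xbar ** 2))
    b0 = ybar - b1 * xbar
    return b0, b1

x = [1.0, 2.0, 3.0, 4.0]
y = [3.0, 5.0, 7.0, 9.0]              # exactly on the line y = 1 + 2x
print(least_squares_line(x, y))       # (1.0, 2.0)
```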
(b) The quench bath temperature in a heat treatment operation was thought to affect the Rockwell hardness of a certain coil spring. An experiment was run in which several springs were treated under four different temperatures. The table gives values of the set temperatures ($x$) and the observed hardness ($y$, coded). Assuming that hardness depends on temperature linearly and that the variance of the random variables is constant, we may write the following model:

$$E(Y_i) = \beta_0 + \beta_1 x_i, \qquad \text{var}(Y_i) = \sigma^2.$$

Calculate the LS estimates of $\beta_0$ and of $\beta_1$. What is the estimate of the expected hardness given the temperature $x = 0$?

Run  x  y
[table values lost in transcription]

In this section we were considering so-called point estimators. We used various methods, such as the Method of Moments, Maximum Likelihood, or Least Squares, to derive them. We may also construct estimators using the Rao-Blackwell Theorem. The estimators are functions $T(Y_1, \ldots, Y_n) \in \Theta$; that is, their values belong to the parameter space. However, the values vary with the observed sample. If the estimator is the MVUE, we may expect that on average the calculated estimates are very close to the true parameter, and also that the variability of the estimates is the smallest possible.

Sometimes it is more appropriate to construct an interval which covers the unknown parameter with high probability and whose limits depend on the sample. We introduce such intervals in the next section. The point estimators are used in constructing the intervals.
More informationIntroduction to Maximum Likelihood Estimation
Introduction to Maximum Likelihood Estimation Eric Zivot July 26, 2012 The Likelihood Function Let 1 be an iid sample with pdf ( ; ) where is a ( 1) vector of parameters that characterize ( ; ) Example:
More informationReview of Discrete Probability (contd.)
Stat 504, Lecture 2 1 Review of Discrete Probability (contd.) Overview of probability and inference Probability Data generating process Observed data Inference The basic problem we study in probability:
More informationStatistics Ph.D. Qualifying Exam: Part I October 18, 2003
Statistics Ph.D. Qualifying Exam: Part I October 18, 2003 Student Name: 1. Answer 8 out of 12 problems. Mark the problems you selected in the following table. 1 2 3 4 5 6 7 8 9 10 11 12 2. Write your answer
More informationSTA 6857 Estimation ( 3.6)
STA 6857 Estimation ( 3.6) Outline 1 Yule-Walker 2 Least Squares 3 Maximum Likelihood Arthur Berg STA 6857 Estimation ( 3.6) 2/ 19 Outline 1 Yule-Walker 2 Least Squares 3 Maximum Likelihood Arthur Berg
More informationSTAT 135 Lab 2 Confidence Intervals, MLE and the Delta Method
STAT 135 Lab 2 Confidence Intervals, MLE and the Delta Method Rebecca Barter February 2, 2015 Confidence Intervals Confidence intervals What is a confidence interval? A confidence interval is calculated
More information1 General problem. 2 Terminalogy. Estimation. Estimate θ. (Pick a plausible distribution from family. ) Or estimate τ = τ(θ).
Estimation February 3, 206 Debdeep Pati General problem Model: {P θ : θ Θ}. Observe X P θ, θ Θ unknown. Estimate θ. (Pick a plausible distribution from family. ) Or estimate τ = τ(θ). Examples: θ = (µ,
More informationInference for the Pareto, half normal and related. distributions
Inference for the Pareto, half normal and related distributions Hassan Abuhassan and David J. Olive Department of Mathematics, Southern Illinois University, Mailcode 4408, Carbondale, IL 62901-4408, USA
More information1 Probability Model. 1.1 Types of models to be discussed in the course
Sufficiency January 18, 016 Debdeep Pati 1 Probability Model Model: A family of distributions P θ : θ Θ}. P θ (B) is the probability of the event B when the parameter takes the value θ. P θ is described
More informationLecture 34: Properties of the LSE
Lecture 34: Properties of the LSE The following results explain why the LSE is popular. Gauss-Markov Theorem Assume a general linear model previously described: Y = Xβ + E with assumption A2, i.e., Var(E
More informationIIT JAM : MATHEMATICAL STATISTICS (MS) 2013
IIT JAM : MATHEMATICAL STATISTICS (MS 2013 Question Paper with Answer Keys Ctanujit Classes Of Mathematics, Statistics & Economics Visit our website for more: www.ctanujit.in IMPORTANT NOTE FOR CANDIDATES
More information[y i α βx i ] 2 (2) Q = i=1
Least squares fits This section has no probability in it. There are no random variables. We are given n points (x i, y i ) and want to find the equation of the line that best fits them. We take the equation
More informationElements of statistics (MATH0487-1)
Elements of statistics (MATH0487-1) Prof. Dr. Dr. K. Van Steen University of Liège, Belgium November 12, 2012 Introduction to Statistics Basic Probability Revisited Sampling Exploratory Data Analysis -
More information