Lecture 4: UMVUE and unbiased estimators of 0
Problems of Approach 1

Approach 1 has the following two main shortcomings:

- Even if the Cramér-Rao theorem or its extension is applicable, there is no guarantee that the bound is sharp; there may be a UMVUE that still does not attain the Cramér-Rao lower bound.
- The conditions for the theorem are somewhat strong.

Example (a case where the Cramér-Rao theorem is not applicable). Let X_1, ..., X_n be iid from uniform(0, θ), where θ > 0 is unknown. The pdf of X_i is f_θ(x_i) = θ^{-1} I(0 < x_i < θ). Since P_θ(0 < X_i < θ) = 1, we can focus on 0 < x_i < θ. Then

  log f_θ(x_i) = -log θ,   (∂/∂θ) log f_θ(x_i) = -1/θ,

  E_θ[(∂/∂θ) log f_θ(X_i)]^2 = 1/θ^2,   0 < x_i < θ.

UW-Madison (Statistics), Stat 610, Lecture 4
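A symptom of the failed regularity, not stated on the slide but easy to check, is that the score does not have mean zero here: E_θ[(∂/∂θ) log f_θ(X)] = -1/θ, not 0. A small numerical sketch (ours), integrating the score against the uniform(0, θ) density with a midpoint rule:

```python
import math

def mean_score(theta, m=100_000):
    """E_theta[d/dtheta log f_theta(X)] for X ~ uniform(0, theta).
    The score is d/dtheta(-log theta) = -1/theta on (0, theta);
    we integrate score(x) * density(x) over (0, theta) by midpoints."""
    h = theta / m
    total = 0.0
    for i in range(m):
        x = (i + 0.5) * h          # midpoint of the i-th subinterval
        score = -1.0 / theta       # constant in x for this family
        density = 1.0 / theta
        total += score * density * h
    return total

print(mean_score(2.0))             # close to -1/theta = -0.5, not 0
```

For a regular family the same integral would be 0; the nonzero value reflects the θ-dependent support.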
Consider the estimation of g(θ) = θ, so g'(θ) = 1. According to the Cramér-Rao theorem (if it held), any unbiased estimator T(X) of θ would satisfy

  Var_θ(T) ≥ [g'(θ)]^2 / ( n E_θ[(∂/∂θ) log f_θ(X_1)]^2 ) = θ^2 / n.

Let X_(n) be the largest order statistic. From the result in Chapter 5, the pdf of X_(n) is n y^{n-1} / θ^n, 0 < y < θ. Thus

  E_θ(X_(n)) = ∫_0^θ (n y^n / θ^n) dy = (n/θ^n) · y^{n+1}/(n+1) |_0^θ = nθ/(n+1),

showing that (n+1)/n · X_(n) is an unbiased estimator of θ. Also

  E_θ(X_(n)^2) = ∫_0^θ (n y^{n+1} / θ^n) dy = (n/θ^n) · y^{n+2}/(n+2) |_0^θ = nθ^2/(n+2).
Then

  Var_θ( (n+1)/n · X_(n) ) = (n+1)^2/n^2 · Var_θ(X_(n))
                           = (n+1)^2/n^2 · [ E_θ(X_(n)^2) - {E_θ(X_(n))}^2 ]
                           = (n+1)^2/n^2 · [ nθ^2/(n+2) - (nθ/(n+1))^2 ]
                           = θ^2 / (n(n+2)) < θ^2 / n.

Hence the Cramér-Rao theorem does not apply. What is wrong? Note that the key condition for the theorem is

  (∂/∂θ) E_θ(T) = ∫ T(x) (∂f_θ(x)/∂θ) dx,   θ ∈ Θ.

The right-hand side is

  ∫_0^θ ··· ∫_0^θ T(x) (∂/∂θ)(1/θ^n) dx_1 ··· dx_n = -(n/θ^{n+1}) ∫_0^θ ··· ∫_0^θ T(x) dx_1 ··· dx_n.
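These formulas are easy to confirm by simulation. A quick Monte Carlo sketch (ours, not part of the lecture) with n = 5 and θ = 3: the estimator (n+1)/n · X_(n) should average near θ with variance near θ^2/(n(n+2)) = 9/35 ≈ 0.257, well below the formal Cramér-Rao value θ^2/n = 1.8:

```python
import random

random.seed(0)
n, theta, reps = 5, 3.0, 200_000
est = []
for _ in range(reps):
    xmax = max(random.uniform(0.0, theta) for _ in range(n))
    est.append((n + 1) / n * xmax)        # unbiased estimator of theta

mean = sum(est) / reps
var = sum((e - mean) ** 2 for e in est) / reps
print(mean)   # close to theta = 3.0
print(var)    # close to theta^2/(n(n+2)) = 9/35, far below theta^2/n = 1.8
```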
The left-hand side is

  (∂/∂θ) [ (1/θ^n) ∫_0^θ ··· ∫_0^θ T(x) dx_1 ··· dx_n ]
    = (boundary terms with one coordinate equal to θ)/θ^n - (n/θ^{n+1}) ∫_0^θ ··· ∫_0^θ T(x) dx_1 ··· dx_n,

which is not the same as the right-hand side unless the boundary terms vanish (in particular, T(θ,...,θ) = 0 for all θ). The problem is that the support of f_θ, {x : f_θ(x) > 0}, depends on θ. When this occurs, the Cramér-Rao theorem is typically not applicable.

Approach 2: using unbiased estimators of 0

To see when an unbiased estimator is best unbiased, we might ask: how could we improve upon a given unbiased estimator? Suppose that T(X) is unbiased for g(θ) and U(X) is a statistic satisfying E_θ(U) = 0 for all θ, i.e., U is unbiased for 0. Then, for any constant a,

  T(X) + a U(X)

is unbiased for g(θ). Can it be better than T(X)?
  Var_θ(T + aU) = Var_θ(T) + 2a Cov_θ(T, U) + a^2 Var_θ(U).

If Cov_{θ0}(T, U) < 0 for some θ0, then we can make 2a Cov_{θ0}(T, U) + a^2 Var_{θ0}(U) < 0 by choosing 0 < a < -2 Cov_{θ0}(T, U) / Var_{θ0}(U). Hence T(X) + a U(X) is better than T(X) at least when θ = θ0, and T(X) cannot be a UMVUE. Similarly, if Cov_{θ0}(T, U) > 0 for some θ0, then T(X) cannot be a UMVUE either (choose a < 0). Thus, Cov_θ(T, U) = 0 for all θ and all unbiased estimators U of 0 is necessary for T(X) to be a UMVUE. It turns out that this condition is also sufficient.

Theorem 7.3.2. An unbiased estimator T(X) of g(θ) is a UMVUE iff T(X) is uncorrelated with all unbiased estimators of 0.
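A concrete toy illustration (ours, not from the slides): with X_1, X_2 iid N(θ, 1), take T = X_1 (unbiased for θ) and U = X_1 - X_2 (unbiased for 0). Then Var(T) = 1, Cov(T, U) = 1, Var(U) = 2, and the quadratic a ↦ Var_θ(T + aU) is minimized at a* = -Cov(T, U)/Var(U) = -1/2, which turns T into the sample mean:

```python
# Var_theta(T + aU) = Var(T) + 2a Cov(T,U) + a^2 Var(U), using the exact
# moments of this toy model: Var(T) = 1, Cov(T,U) = 1, Var(U) = 2.
var_T, cov_TU, var_U = 1.0, 1.0, 2.0

def var_combined(a):
    return var_T + 2 * a * cov_TU + a * a * var_U

a_star = -cov_TU / var_U                  # minimizing choice, a* = -1/2
print(var_combined(0.0))    # 1.0  (T = X1 alone)
print(var_combined(a_star)) # 0.5  (T + a*U = (X1 + X2)/2, the sample mean)
```

Because Cov(T, U) ≠ 0, T = X_1 cannot be a UMVUE, exactly as the argument above predicts.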
Proof. We have shown the necessity (the "only if" part). We now show the sufficiency (the "if" part). Let W(X) be another unbiased estimator of g(θ). Then W(X) - T(X) is unbiased for 0 and

  Var_θ(W) = Var_θ(T + (W - T))
           = Var_θ(T) + Var_θ(W - T) + 2 Cov_θ(T, W - T)
           = Var_θ(T) + Var_θ(W - T)
           ≥ Var_θ(T).

The result follows since W is arbitrary.

An unbiased estimator of 0 is a first-order ancillary statistic and can be treated as random noise. It makes sense that the most sensible way to estimate 0 is with 0, not with random noise. Theorem 7.3.2 says that if an estimator is correlated with such a random noise, then it can be improved.
The following two useful results are consequences of Theorem 7.3.2.

Corollary. Let T_j be a UMVUE of g_j(θ), j = 1, ..., m, where m is a fixed positive integer. Then T = Σ_{j=1}^m c_j T_j is a UMVUE of g(θ) = Σ_{j=1}^m c_j g_j(θ) for any constants c_1, ..., c_m.

Proof. Let U(X) be any unbiased estimator of 0. First, the unbiasedness of the T_j's implies that T is unbiased for g(θ):

  E_θ(T) = E_θ( Σ_{j=1}^m c_j T_j ) = Σ_{j=1}^m c_j E_θ(T_j) = Σ_{j=1}^m c_j g_j(θ) = g(θ).

By Theorem 7.3.2, Cov_θ(T_j, U) = 0 for all j = 1, ..., m and θ ∈ Θ. Hence

  Cov_θ(T, U) = Cov_θ( Σ_{j=1}^m c_j T_j, U ) = Σ_{j=1}^m c_j Cov_θ(T_j, U) = 0.

Again by Theorem 7.3.2, this shows that T is a UMVUE of g(θ).
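As a hypothetical illustration of the corollary (our example, not the slide's): for iid N(μ, σ^2) data, the sample mean is the UMVUE of μ and the sample variance S^2 is the UMVUE of σ^2, so the corollary gives X̄ + 2S^2 as the UMVUE of μ + 2σ^2. A Monte Carlo sketch of its unbiasedness:

```python
import random

random.seed(1)
mu, sigma, n, reps = 1.5, 2.0, 10, 100_000
vals = []
for _ in range(reps):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(x) / n
    s2 = sum((xi - xbar) ** 2 for xi in x) / (n - 1)   # unbiased sample variance
    vals.append(xbar + 2 * s2)     # linear combination of the two UMVUEs

avg = sum(vals) / reps
print(avg)   # close to mu + 2*sigma^2 = 9.5
```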
Theorem. If T(X) is a UMVUE of g(θ), then T(X) is unique in the sense that if T_1(X) is another UMVUE of g(θ), then

  P_θ(T(X) = T_1(X)) = 1 for any θ ∈ Θ.

Proof. Let T(X) and T_1(X) both be UMVUEs of g(θ). Since both are unbiased, T(X) - T_1(X) is an unbiased estimator of 0. Since both are UMVUEs, by Theorem 7.3.2,

  Cov_θ(T, T - T_1) = 0 and Cov_θ(T_1, T - T_1) = 0,   θ ∈ Θ.

Then

  E_θ(T - T_1)^2 = E_θ[(T - T_1)(T - T_1)]
                 = E_θ[T(T - T_1)] - E_θ[T_1(T - T_1)]
                 = Cov_θ(T, T - T_1) - Cov_θ(T_1, T - T_1) = 0,

where the expectations equal covariances because E_θ(T - T_1) = 0. Hence P_θ(T(X) = T_1(X)) = 1 for any θ ∈ Θ.
Although Theorem 7.3.2 provides an interesting characterization of best unbiased estimators and two useful results, its usefulness for checking whether a particular unbiased estimator T is a UMVUE is limited, since it asks us to check whether T is uncorrelated with all unbiased estimators of 0. However, the theorem is sometimes useful in determining that an estimator is not best unbiased, or in showing that no UMVUE exists.

Example. Let X be an observation from uniform(θ, θ+1), θ ∈ R. We want to show that there is no UMVUE of g(θ) for any nonconstant differentiable function g. Note that an unbiased estimator U(X) of 0 must satisfy

  ∫_θ^{θ+1} U(x) dx = 0 for all θ ∈ R.

Differentiating both sides of the previous equation gives U(θ+1) - U(θ) = 0 for all θ ∈ R, i.e., U(x) = U(x+1), x ∈ R.
If T is a UMVUE of g(θ), then T(X)U(X) is unbiased for 0 and, hence,

  T(x)U(x) = T(x+1)U(x+1),   x ∈ R,

where U(X) is any unbiased estimator of 0. Since this is true for all such U,

  T(x) = T(x+1),   x ∈ R.

Since T is unbiased for g(θ),

  g(θ) = ∫_θ^{θ+1} T(x) dx for all θ ∈ R.

Differentiating both sides of the previous equation, we obtain

  g'(θ) = T(θ+1) - T(θ) = 0,   θ ∈ R.

Thus, g is a constant function.

We consider the case g(θ) = θ further. Since E_θ(X) = θ + 1/2, X - 1/2 is unbiased for θ, and its variance is 1/12. One unbiased estimator of 0 is U = sin(2πX), i.e.,

  E_θ(U) = ∫_θ^{θ+1} sin(2πx) dx = 0,   θ ∈ R.
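The unbiasedness of sin(2πX) for 0 is just the statement that sine integrates to zero over any interval of length one. A quick numerical sketch (ours), using a midpoint rule over (θ, θ+1):

```python
import math

def mean_sin(theta, m=200_000):
    """Midpoint-rule approximation of E_theta[sin(2*pi*X)] for
    X ~ uniform(theta, theta+1): the integral of sin(2*pi*x) over (theta, theta+1)."""
    h = 1.0 / m
    return sum(math.sin(2 * math.pi * (theta + (i + 0.5) * h)) * h
               for i in range(m))

for theta in (0.0, 0.3, 1.7, -2.4):
    print(mean_sin(theta))   # each value is numerically 0 for every theta
```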
  Cov_θ(X - 1/2, U) = Cov_θ(X, sin(2πX)) = E_θ[X sin(2πX)] = ∫_θ^{θ+1} x sin(2πx) dx
    = -x cos(2πx)/(2π) |_θ^{θ+1} + ∫_θ^{θ+1} cos(2πx)/(2π) dx
    = -cos(2πθ)/(2π).

Hence X - 1/2 is correlated with an unbiased estimator of 0, and cannot be a UMVUE of θ. In fact, X - 1/2 + sin(2πX)/(2π) is unbiased for θ and has variance 1/12 + 1/(8π^2) - cos(2πθ)/(2π^2), which is smaller than 1/12 for some θ (e.g., θ near 0).

Sufficient statistics

If there is a sufficient statistic T, then the Rao-Blackwell theorem says that E(W | T) is a better unbiased estimator than the unbiased estimator W. The following is a Rao-Blackwell theorem for unbiased estimators, a special case of what we have shown in Chapter 6.
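The variance comparison can be checked numerically. A sketch (ours): for the family of unbiased estimators T_a = X - 1/2 + a sin(2πX), compute Var_θ(T_a) by midpoint integration at θ = 0 and compare a = 0 with the improving choice a = 1/(2π):

```python
import math

def var_estimator(theta, a, m=100_000):
    """Variance of T_a = X - 1/2 + a*sin(2*pi*X) for X ~ uniform(theta, theta+1),
    by midpoint-rule integration; T_a is unbiased for theta for every a."""
    h = 1.0 / m
    mean = sq = 0.0
    for i in range(m):
        x = theta + (i + 0.5) * h
        t = x - 0.5 + a * math.sin(2 * math.pi * x)
        mean += t * h
        sq += t * t * h
    return sq - mean * mean

theta = 0.0
v0 = var_estimator(theta, 0.0)                 # variance of X - 1/2: 1/12
v1 = var_estimator(theta, 1 / (2 * math.pi))   # strictly smaller at theta = 0
print(v0, v1)
```

At θ = 0 this gives v1 ≈ 1/12 + 1/(8π^2) - 1/(2π^2) ≈ 0.045 < 1/12 ≈ 0.083, confirming that X - 1/2 can be beaten.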
Theorem (Rao-Blackwell). Let W be any unbiased estimator of g(θ) and T be a sufficient statistic for θ. Define ψ(T) = E(W | T), which does not depend on θ since T is sufficient. Then ψ(T) is an unbiased estimator of g(θ) and Var_θ(ψ(T)) ≤ Var_θ(W).

Thus, conditioning any unbiased estimator on a sufficient statistic will result in a uniform improvement. This and the next result indicate that we need consider only estimators that are functions of a sufficient statistic in our search for the UMVUE.

An alternative to Theorem 7.3.2. Suppose that T is a sufficient statistic for θ. An unbiased estimator ψ(T) of g(θ) is a UMVUE iff Cov_θ(ψ(T), h(T)) = 0 for any θ ∈ Θ and any h(T) that is an unbiased estimator of 0.

Proof. The "only if" part follows from the "only if" part of Theorem 7.3.2. We now prove the "if" part.
Suppose that Cov_θ(ψ(T), h(T)) = 0 for any θ ∈ Θ and any h(T) that is an unbiased estimator of 0. For any unbiased estimator U of 0 and any θ,

  Cov_θ(ψ(T), U) = E_θ[ψ(T)U] = E_θ{ E[ψ(T)U | T] } = E_θ{ ψ(T) E(U | T) } = 0,

since E(U | T) is a function of T that is unbiased for 0: E_θ[E(U | T)] = E_θ(U) = 0.

If we have another sufficient statistic S, should we consider E[E(W | T) | S]? If there is a function h such that S = h(T), then by the properties of conditional expectation,

  E[E(W | T) | S] = E(W | S) = E[E(W | S) | T].

That is, we should always condition on the simpler sufficient statistic, such as a minimal sufficient statistic or a complete sufficient statistic. If we do have a sufficient and complete statistic T, then by completeness, 0 is essentially the only unbiased estimator of 0 that is a function of T.
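Rao-Blackwell conditioning is easy to see numerically. A hypothetical sketch (ours): for X_1, ..., X_n iid Bernoulli(p), the crude unbiased estimator W = X_1 conditioned on the sufficient statistic T = ΣX_i gives E(W | T) = T/n, the sample mean; both are unbiased, but the variance drops from p(1-p) to p(1-p)/n:

```python
import random

random.seed(2)
p, n, reps = 0.3, 8, 200_000
w_vals, rb_vals = [], []
for _ in range(reps):
    x = [1 if random.random() < p else 0 for _ in range(n)]
    w_vals.append(x[0])            # crude unbiased estimator W = X1
    rb_vals.append(sum(x) / n)     # Rao-Blackwellized psi(T) = E(W|T) = T/n

def mean_var(v):
    m = sum(v) / len(v)
    return m, sum((e - m) ** 2 for e in v) / len(v)

print(mean_var(w_vals))    # mean near p = 0.3, variance near p(1-p) = 0.21
print(mean_var(rb_vals))   # mean near p = 0.3, variance near p(1-p)/n = 0.02625
```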
Thus, by the alternative to Theorem 7.3.2, we have actually proved the following result, which is useful for finding the UMVUE.

Theorem (Lehmann-Scheffé). Let T be a complete sufficient statistic for θ. If ψ(T) is an unbiased estimator of g(θ), then it is the unique UMVUE.

By the previous example (uniform(θ, θ+1)), the Lehmann-Scheffé theorem does not hold if T is only minimal sufficient, since X is minimal sufficient for θ in that example. This is because functions of a minimal sufficient statistic can be first-order ancillary and can still be useful.

Example. Let X_1, ..., X_n be iid from uniform(0, θ) with unknown θ > 0. Consider the estimation of θ. Previously we showed that Approach 1 is not applicable in this example. Since T = X_(n) is a sufficient and complete statistic for θ and

  E(X_(n)) = nθ/(n+1),

the UMVUE of θ is (1 + n^{-1}) X_(n).
Suppose now that Θ = [1, ∞). Then X_(n) is not complete, although it is still sufficient for θ. Thus, the Lehmann-Scheffé theorem does not apply to X_(n). We now illustrate how to use the alternative to Theorem 7.3.2 to find a UMVUE of θ. Let U(X_(n)) be an unbiased estimator of 0. Since X_(n) has pdf n θ^{-n} x^{n-1} I_(0,θ)(x),

  0 = ∫_0^1 U(x) x^{n-1} dx + ∫_1^θ U(x) x^{n-1} dx for all θ ≥ 1.

This implies that U(x) = 0 on [1, ∞) and ∫_0^1 U(x) x^{n-1} dx = 0.

Consider T = h(X_(n)). To have E(TU) = 0, we must have ∫_0^1 h(x) U(x) x^{n-1} dx = 0. Thus, we may consider the following function:
  h(x) = { c,    0 < x ≤ 1,
         { bx,   x > 1,

where c and b are some constants. From the previous discussion, E[h(X_(n)) U(X_(n))] = 0 for all θ ≥ 1. Since we need E[h(X_(n))] = θ, we obtain

  θ = c P(X_(n) ≤ 1) + b E[X_(n) I_(1,∞)(X_(n))] = c θ^{-n} + [bn/(n+1)](θ - θ^{-n}).

Thus, c = 1 and b = (n+1)/n. The UMVUE of θ is then

  h(X_(n)) = { 1,                   X_(n) ≤ 1,
             { (1 + n^{-1}) X_(n),  X_(n) > 1.

This estimator is better than (1 + n^{-1}) X_(n), which is the UMVUE when Θ = (0, ∞) and does not make use of the information θ ≥ 1. When Θ = (0, ∞), this estimator is not unbiased.
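The unbiasedness of this piecewise estimator on Θ = [1, ∞) can be verified by integrating h against the pdf of X_(n). A numerical sketch (ours), using a midpoint rule:

```python
import math

def expect_h(theta, n, m=200_000):
    """E_theta[h(X_(n))] by midpoint integration against the pdf
    n * x^(n-1) / theta^n on (0, theta), where h is the piecewise UMVUE
    for Theta = [1, inf): h(x) = 1 if x <= 1, (1 + 1/n)*x if x > 1."""
    step = theta / m
    total = 0.0
    for i in range(m):
        x = (i + 0.5) * step
        hx = 1.0 if x <= 1.0 else (1 + 1 / n) * x
        total += hx * n * x ** (n - 1) / theta ** n * step
    return total

for theta in (1.0, 1.5, 3.0):
    print(expect_h(theta, n=5))   # close to theta: unbiased on [1, inf)
```

This matches the algebra above: c θ^{-n} + [bn/(n+1)](θ - θ^{-n}) collapses to θ exactly when c = 1 and b = (n+1)/n.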
STAT 5380 Advanced Mathematical Statistics I (Lecture Notes Spring 208) Alex Trindade Department of Mathematics & Statistics Texas Tech University Based primarily on TPE: Theory of Point Estimation, 2nd
More informationECE 275A Homework 6 Solutions
ECE 275A Homework 6 Solutions. The notation used in the solutions for the concentration (hyper) ellipsoid problems is defined in the lecture supplement on concentration ellipsoids. Note that θ T Σ θ =
More informationPubh 8482: Sequential Analysis
Pubh 8482: Sequential Analysis Joseph S. Koopmeiners Division of Biostatistics University of Minnesota Week 7 Course Summary To this point, we have discussed group sequential testing focusing on Maintaining
More informationElements of statistics (MATH0487-1)
Elements of statistics (MATH0487-1) Prof. Dr. Dr. K. Van Steen University of Liège, Belgium November 12, 2012 Introduction to Statistics Basic Probability Revisited Sampling Exploratory Data Analysis -
More informationStat 5101 Notes: Algorithms
Stat 5101 Notes: Algorithms Charles J. Geyer January 22, 2016 Contents 1 Calculating an Expectation or a Probability 3 1.1 From a PMF........................... 3 1.2 From a PDF...........................
More informationSystem Identification, Lecture 4
System Identification, Lecture 4 Kristiaan Pelckmans (IT/UU, 2338) Course code: 1RT880, Report code: 61800 - Spring 2012 F, FRI Uppsala University, Information Technology 30 Januari 2012 SI-2012 K. Pelckmans
More informationSystem Identification, Lecture 4
System Identification, Lecture 4 Kristiaan Pelckmans (IT/UU, 2338) Course code: 1RT880, Report code: 61800 - Spring 2016 F, FRI Uppsala University, Information Technology 13 April 2016 SI-2016 K. Pelckmans
More informationEconomics 520. Lecture Note 19: Hypothesis Testing via the Neyman-Pearson Lemma CB 8.1,
Economics 520 Lecture Note 9: Hypothesis Testing via the Neyman-Pearson Lemma CB 8., 8.3.-8.3.3 Uniformly Most Powerful Tests and the Neyman-Pearson Lemma Let s return to the hypothesis testing problem
More informationCompleteness. On the other hand, the distribution of an ancillary statistic doesn t depend on θ at all.
Completeness A minimal sufficient statistic achieves the maximum amount of data reduction while retaining all the information the sample has concerning θ. On the other hand, the distribution of an ancillary
More informationIntroduction to Statistical Inference Self-study
Introduction to Statistical Inference Self-study Contents Definition, sample space The fundamental object in probability is a nonempty sample space Ω. An event is a subset A Ω. Definition, σ-algebra A
More informationSTA 732: Inference. Notes 10. Parameter Estimation from a Decision Theoretic Angle. Other resources
STA 732: Inference Notes 10. Parameter Estimation from a Decision Theoretic Angle Other resources 1 Statistical rules, loss and risk We saw that a major focus of classical statistics is comparing various
More informationSTAT 730 Chapter 4: Estimation
STAT 730 Chapter 4: Estimation Timothy Hanson Department of Statistics, University of South Carolina Stat 730: Multivariate Analysis 1 / 23 The likelihood We have iid data, at least initially. Each datum
More informationS6880 #13. Variance Reduction Methods
S6880 #13 Variance Reduction Methods 1 Variance Reduction Methods Variance Reduction Methods 2 Importance Sampling Importance Sampling 3 Control Variates Control Variates Cauchy Example Revisited 4 Antithetic
More informationconditional cdf, conditional pdf, total probability theorem?
6 Multiple Random Variables 6.0 INTRODUCTION scalar vs. random variable cdf, pdf transformation of a random variable conditional cdf, conditional pdf, total probability theorem expectation of a random
More information4. Distributions of Functions of Random Variables
4. Distributions of Functions of Random Variables Setup: Consider as given the joint distribution of X 1,..., X n (i.e. consider as given f X1,...,X n and F X1,...,X n ) Consider k functions g 1 : R n
More informationf X, Y (x, y)dx (x), where f(x,y) is the joint pdf of X and Y. (x) dx
INDEPENDENCE, COVARIANCE AND CORRELATION Independence: Intuitive idea of "Y is independent of X": The distribution of Y doesn't depend on the value of X. In terms of the conditional pdf's: "f(y x doesn't
More informationf (1 0.5)/n Z =
Math 466/566 - Homework 4. We want to test a hypothesis involving a population proportion. The unknown population proportion is p. The null hypothesis is p = / and the alternative hypothesis is p > /.
More information