February 26, 2017 COMPLETENESS AND THE LEHMANN-SCHEFFE THEOREM
Walter Oliver
Abstract. The Rao-Blackwell theorem told us how to improve an estimator. We will discuss conditions under which the Rao-Blackwellization of an estimator is the best estimator. This is one of the major highlights of this course, and of the classical theory of mathematical statistics.

1. Introduction

Let F = (f_θ)_{θ ∈ Θ} be a family of pdfs; as usual, the family may be discrete or continuous. For each θ, let Z = Z_θ be a random variable with pdf f_θ(z). The family F is complete if the condition

(1)  E_θ|u(Z)| < ∞ and E_θ u(Z) = 0 for all θ ∈ Θ

implies that for each θ ∈ Θ we have u(Z) = 0, P_θ-almost surely; that is,

(2)  P_θ(u(Z) = 0) = 1 for all θ ∈ Θ.

It is important to note that in the definition it is families which are complete, and that the "for all" quantifier appears in the definition. Thus to check whether a family is complete, we consider any function u with Property (1), and using this property we have to verify (2). If there is even one u for which (1) is satisfied but (2) fails, then the family fails to be complete. From the statement of (2), we see that we will encounter some measure-theoretic difficulties.

Let F = (f_θ)_{θ ∈ Θ} be a family of pdfs, let X = (X_1, ..., X_n) be a random sample, and let T be a statistic. Here X and T depend on θ. For each θ ∈ Θ, let h_θ be the pdf of T. We say that T is a complete statistic if the corresponding family H = (h_θ)_{θ ∈ Θ} is complete. Let us remark that when considering the completeness of T, it is the family H that is important, not F.
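Completeness is a property of the whole family, quantified over every θ. As a numerical illustration of my own (not part of the notes): the one-member family {Bern(1/2)} is not complete, since u(z) = z − 1/2 is a nonzero function with mean zero; restoring the full range p ∈ (0, 1) kills this counterexample.

```python
# Illustration (my own, not from the notes): a family can fail to be complete.
# For the one-member family {Bern(1/2)}, u(z) = z - 1/2 is not the zero
# function, yet E[u(Z)] = 0, so (1) holds while (2) fails.

def expect_bern(u, p):
    """E_p[u(Z)] for Z ~ Bern(p)."""
    return (1 - p) * u(0) + p * u(1)

u = lambda z: z - 0.5

# E[u(Z)] = 0 under p = 1/2, but u is not identically zero.
assert abs(expect_bern(u, 0.5)) < 1e-12
assert u(0) != 0 and u(1) != 0

# By contrast, over the full family p in (0, 1) the quantifier bites:
# E_p[u(Z)] = p - 1/2, which is nonzero whenever p != 1/2.
assert abs(expect_bern(u, 0.3) - (-0.2)) < 1e-12
```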
Theorem 1 (Lehmann-Scheffé). Let X = (X_1, ..., X_n) be a random sample from f_θ, where θ ∈ Θ. Let T be a sufficient statistic for θ and suppose the corresponding family for T is complete. Let g : Θ → R. If φ is a function that does not depend on θ and φ(T) is an unbiased estimator of g(θ), then φ(T) is the (almost surely) unique MVUE for g(θ). In particular, if Y is any unbiased estimator of g(θ), then E(Y | T) is the MVUE for g(θ).

Proof of Theorem 1. First, we show the uniqueness using completeness. Let ψ(T) be another unbiased estimator of g(θ). Then for all θ ∈ Θ we have E_θ φ(T) = g(θ) = E_θ ψ(T), so that E_θ(ψ(T) − φ(T)) = 0. Consider the function u(t) = ψ(t) − φ(t). Since the family for T is complete, for every θ ∈ Θ we have u(T) = 0, P_θ-almost surely, from which we deduce that ψ(T) = φ(T) almost surely. Thus there is at most one unbiased estimator of g(θ) which is a function of T.

Next, we show that it is a MVUE using the Rao-Blackwell theorem and the sufficiency of T. If Y is any unbiased estimator of g(θ), the Rao-Blackwell theorem gives that E(Y | T) is an unbiased estimator of g(θ) with

Var_θ(E(Y | T)) ≤ Var_θ(Y);

furthermore, since it is a conditional expectation given T, it is a function of T. Hence it must actually be φ(T), and we have

Var_θ(φ(T)) = Var_θ(E(Y | T)) ≤ Var_θ(Y).

Thus Theorem 1 gives us a powerful way of finding a MVUE if we know that the corresponding family for a sufficient statistic is complete. It is not always easy to show completeness.

2. Examples from the Binomial family

Exercise 2. Fix n ≥ 1. Show that the Binomial family, given by

f_p(k) = C(n,k) p^k (1 − p)^{n−k},

where k is an integer with 0 ≤ k ≤ n, and p ∈ (0, 1), is complete.
Solution. Let Z ~ Bin(n, p). Assume that u : {0, 1, ..., n} → R is such that E_p(u(Z)) = 0 for all p ∈ (0, 1). We have that for all p ∈ (0, 1),

0 = E_p(u(Z)) = Σ_{k=0}^n u(k) C(n,k) p^k (1 − p)^{n−k} = (1 − p)^n Σ_{k=0}^n u(k) C(n,k) (p/(1 − p))^k.

Set

g(q) := Σ_{k=0}^n u(k) C(n,k) q^k.

We have that g is a polynomial in q of degree at most n, which means it has at most n distinct roots, unless c_k := u(k) C(n,k) = 0 for all k. Since g(q) = 0 for all q ∈ (0, ∞), we can conclude that c_k = 0, and infer that u(k) = 0 for all k.

Exercise 3. Let X = (X_1, ..., X_n) be a random sample, where X_1 ~ Bern(p), p ∈ (0, 1). Let X̄ be the usual sample mean. Show that X̄ is the MVUE for p.

Solution. We already know that the sample sum T = X_1 + ··· + X_n is sufficient and T ~ Bin(n, p). We know that X_1 is an unbiased estimator for p. By Theorem 1, we have that E(X_1 | T) = X̄ is the MVUE.

Exercise 4. Referring to Exercise 3, find the MVUE for p².

Solution. Here we will try to guess. Observe that X̄² = T²/n² is a good place to start, since it is at least consistent, and X̄ is the MVUE for p. Note that

E X̄² = Var(X̄) + (E X̄)² = p(1 − p)/n + p² = p/n + p²(1 − 1/n).

So consider

φ(t) := (t² − t) / (n(n − 1)).

Our calculations give that E φ(T) = p², and by Exercises 2 and 3, T is a complete and sufficient statistic; thus φ(T) is the MVUE.

Exercise 5. Do Exercise 4 by considering Y = X_1 X_2 and E(Y | T).

Exercise 6. Do Exercise 4 by solving E φ(T) = p² for φ by brute force.

Solution. We want to solve for φ, using the equation

E_p(φ(T)) = Σ_{k=0}^n φ(k) C(n,k) p^k (1 − p)^{n−k} = p².
Again, with the change of variables q = p/(1 − p) = 1/(1 − p) − 1, so that p = q/(1 + q) and 1 − p = 1/(1 + q), we obtain the relation that for all q ∈ (0, ∞),

Σ_{k=0}^n φ(k) C(n,k) q^k = q² (q + 1)^{n−2} = Σ_{j=0}^{n−2} C(n−2, j) q^{j+2}.

Thus we equate coefficients, to obtain that φ(0) = φ(1) = 0 and for 2 ≤ k ≤ n we have

φ(k) = C(n−2, k−2) / C(n,k),

from which we deduce that

φ(k) = k(k − 1) / (n(n − 1)).

Note also that this formula remains valid for k = 0, 1.

3. Examples from the Poisson distribution

Exercise 7. Show that the Poisson family, given by

f_λ(n) = e^{−λ} λ^n / n!,

where n is a nonnegative integer and λ ∈ (0, ∞), is complete.

Solution. Let Z ~ Poi(λ). Assume that u : {0, 1, 2, ...} → R is such that E_λ u(Z) = 0 for all λ > 0. We need to show that u = 0. We have that for all λ > 0,

E_λ u(Z) = e^{−λ} Σ_{n=0}^∞ u(n) λ^n / n! = 0,

where the sum is absolutely convergent for all λ > 0. Let a_n = u(n)/n! and consider the power series

g(x) = Σ_{n=0}^∞ a_n x^n.

Recall that by the root test, power series have a radius of absolute convergence. In the case of g, the radius is infinite, and furthermore g(x) = 0 for all x > 0. From our knowledge of power series and analytic functions, this is enough to conclude that g(x) = 0 for all x ∈ R, and that a_n = 0 for all n ≥ 0. Thus u(n) = 0.
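To see the completeness conclusion of Exercise 7 in action, here is a small check of my own (not part of the notes): for the nonzero choice u(n) = (−1)^n, the sum E_λ u(Z) collapses to e^{−2λ}, which never vanishes, so no λ makes this expectation zero, let alone all of them.

```python
from math import exp, factorial

# Illustration of my own (not from the notes): completeness says no nonzero u
# can satisfy E_lambda[u(Z)] = 0 for *all* lambda > 0, Z ~ Poi(lambda).
# For u(n) = (-1)^n the series sums to e^{-lambda} * e^{-lambda} = e^{-2 lambda}.

def e_u(lam, kmax=100):
    # truncated version of e^{-lam} * sum_n u(n) lam^n / n!, with u(n) = (-1)^n;
    # the tail beyond kmax is negligible for the lambdas used here
    return exp(-lam) * sum((-1) ** n * lam**n / factorial(n) for n in range(kmax))

for lam in (0.5, 1.0, 3.0):
    assert abs(e_u(lam) - exp(-2 * lam)) < 1e-12
    assert e_u(lam) > 0  # nonzero for every lambda, as completeness predicts
```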
One way to see why g = 0 is to let c ∈ (0, ∞) and take a Taylor series expansion of g at the point c; then

g(x) = Σ_{n=0}^∞ c_n (x − c)^n,

where c_n = g^{(n)}(c)/n! = 0, since c > 0.

Exercise 8. Let X = (X_1, ..., X_n) be a random sample, where X_1 ~ Poi(λ). Let X̄ be the usual sample mean. Show that X̄ is a MVUE for λ.

Solution. We already know that the sample sum T = X_1 + ··· + X_n is sufficient. We know that T ~ Poi(nλ), since the sum of independent Poisson random variables is again Poisson. Thus it follows from Exercise 7 that T is complete. We also know that the sample mean X̄ = T/n is an unbiased estimator of λ, so by Theorem 1 it must be the MVUE.

Exercise 9. Referring to Exercise 8, find the MVUE for e^{−λ}. Hint: you already did this without officially knowing. Just apply Theorem 1 to what you already calculated using the Rao-Blackwell theorem.

Exercise 10. Referring to Exercise 8, find the MVUE for λ².

Solution. We will just try to find an unbiased estimator of λ² by inspection. We know that X̄² is at least a consistent estimator of λ², so we start there. Let T = X_1 + ··· + X_n be the sample sum. Note that Var(T) = n Var(X_1) = nλ. Thus

E(X̄²) = Var(X̄) + (E X̄)² = λ/n + λ².

So consider Y := X̄² − X̄/n. From our calculations we know that Y is an unbiased estimator for λ². Furthermore, Y is a function of the sufficient statistic T:

Y = (T/n)² − T/n² = T(T − 1)/n².

Hence, by Theorem 1, it must be the MVUE.

Exercise 11. Referring to Exercise 8, find E(X_1 X_2 | T), where T is the sample sum.

Solution. We have that X_1 X_2 is an unbiased estimator for λ²; thus by Theorem 1 and Exercise 10, we know that E(X_1 X_2 | T) = X̄² − X̄/n.
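The fact behind Exercises 10 and 11 is a factorial-moment identity: for T ~ Poi(m), E[T(T − 1)] = m², so with m = nλ the estimator T(T − 1)/n² is unbiased for λ². A quick numerical check, a sketch of my own rather than part of the notes:

```python
from math import exp, factorial

# Sketch of my own (not part of the notes): for T ~ Poi(m), the second
# factorial moment E[T(T-1)] equals m^2. With m = n*lambda this makes
# Y = T(T-1)/n^2 from Exercise 10 an unbiased estimator of lambda^2.

def poisson_pmf(k, m):
    return exp(-m) * m**k / factorial(k)

def second_factorial_moment(m, kmax=100):
    # truncated sum; the Poisson tail beyond kmax is negligible here
    return sum(k * (k - 1) * poisson_pmf(k, m) for k in range(kmax))

for m in (0.5, 2.0, 7.0):
    assert abs(second_factorial_moment(m) - m * m) < 1e-9
```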
Exercise 12. Let Z ~ Poi(λ). Let r ≥ 1 and set g_r(n) = n(n − 1)(n − 2) ··· (n − r + 1), where the product contains r terms. Show that E g_r(Z) = λ^r.

Exercise 13. Referring to Exercise 8, find the MVUE for λ^r, where r is a positive integer.

4. Examples from the uniform family

Exercise 14. Let X = (X_1, ..., X_n) be a random sample, where X_1 ~ U(0, θ) with θ > 0. Find the MVUE for θ.

Solution. Consider M := max{X_1, ..., X_n}. First, we show that M is a sufficient statistic for θ. Let x ∈ (0, ∞)^n. We know that

L(x; θ) = (1/θ^n) Π_{i=1}^n 1[0 < x_i < θ].

Observe that if m = max{x_1, ..., x_n}, we have

Π_{i=1}^n 1[x_i < θ] = 1[m < θ].

Hence

L(x; θ) = g(m; θ) := (1/θ^n) 1[m < θ],

and it follows from the Neyman factorization theorem that M is sufficient.

Next, we show that M is complete. For this, we will need to know the distribution of M. An easy computation gives that M has pdf

g(m) = (n m^{n−1} / θ^n) 1[0 < m < θ].

We need to show that this family is complete. Suppose E_θ u(M) = 0 for all θ; then

∫_0^θ u(m) (n m^{n−1} / θ^n) dm = 0.

This is equivalent to the statement that for all θ > 0 we have

∫_0^θ u(m) m^{n−1} dm = 0.

Taking a derivative on both sides with respect to θ, the fundamental theorem of calculus tells us that if u is continuous, then

u(θ) θ^{n−1} = 0,
from which we deduce that u = 0. If u is not continuous, we have to use a measure-theoretic version of the fundamental theorem of calculus, from which we can conclude that u = 0 Lebesgue-almost everywhere, and hence that u(M) = 0 P_θ-almost surely, for every θ > 0.

Finally, another easy calculation gives that E_θ M = (n/(n+1)) θ. Hence, if we set Y = ((n+1)/n) M, we have by Theorem 1 that Y is the MVUE.

Exercise 15. Referring to Exercise 14, find the MVUE for θ².

5. Examples from the normal family

Proposition 16. The normal family with known variance σ², given by

f(x; µ) = (1/(σ√(2π))) e^{−(x−µ)²/(2σ²)},

where x ∈ R and µ ∈ R, is complete.

The proof of Proposition 16 requires knowledge of Laplace transforms, which is beyond the scope of this course. In general, showing that a family is complete may require advanced techniques from analysis. We will not be able to provide all the details, but we will be able to motivate why such statements are true.

Sketch proof of Proposition 16. Let X ~ N(µ, σ²). Suppose that for all µ ∈ R, we have

E_µ u(X) = ∫ u(x) (1/(σ√(2π))) e^{−(x−µ)²/(2σ²)} dx = 0.

We have to show that u(X) = 0, P_µ-almost surely. After some algebra, we find that the above is equivalent to the statement that for all µ ∈ R, we have

∫ u(x) e^{−x²/(2σ²)} e^{xµ/σ²} dx = 0.

Setting g(x) := u(x) e^{−x²/(2σ²)}, for all µ ∈ R we have

ĝ(µ) := ∫ g(x) e^{xµ/σ²} dx = 0.

The above expression may remind you of a Laplace transform. In the case that u is continuous, by appealing to the theory of Laplace transforms, we can deduce that g = 0, from which we can deduce that u = 0.

Exercise 17. Let X = (X_1, ..., X_n) be a random sample, where X_i ~ N(µ, 1) with µ unknown. Show that X̄ is the MVUE for µ.
Solution. We already know that X̄ is a sufficient statistic for µ. We also know that X̄ ~ N(µ, 1/n) is a complete statistic by Proposition 16. Of course X̄ is an unbiased estimator, so Theorem 1 tells us it is a MVUE.

Exercise 18. Referring to Exercise 17, find the MVUE for µ².

Solution. As usual, we will try to modify X̄². We know that

1/n = Var(X̄) = E X̄² − (E X̄)² = E X̄² − µ².

Thus set Y := X̄² − 1/n. Clearly, Y is an unbiased estimator for µ², and it is also a function of the sufficient statistic X̄, so by Theorem 1 we are done.
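To close, a Monte Carlo illustration of why Theorem 1 matters in practice, using Exercise 14: both ((n+1)/n)·M and the moment estimator 2X̄ are unbiased for θ, but the function of the complete sufficient statistic M has the smaller variance. The simulation below is my own construction, not part of the notes.

```python
import random

# Monte Carlo sketch (my own construction, not from the notes) for
# Exercise 14: both ((n+1)/n)*M and the moment estimator 2*Xbar are
# unbiased for theta, but Theorem 1 says the first one, being a function of
# the complete sufficient statistic M, has the smaller variance.

random.seed(0)
n, theta, reps = 20, 3.0, 20_000
est_m, est_avg = [], []
for _ in range(reps):
    x = [random.uniform(0, theta) for _ in range(n)]
    est_m.append((n + 1) / n * max(x))
    est_avg.append(2 * sum(x) / n)

def mean(v):
    return sum(v) / len(v)

def var(v):
    mu = mean(v)
    return sum((a - mu) ** 2 for a in v) / len(v)

# both estimators are (approximately) unbiased for theta ...
assert abs(mean(est_m) - theta) < 0.05
assert abs(mean(est_avg) - theta) < 0.05
# ... but the MVUE has far smaller variance (theory: theta^2/(n(n+2))
# versus theta^2/(3n), roughly 0.020 versus 0.15 for n = 20, theta = 3)
assert var(est_m) < var(est_avg)
```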
More informationTheory of Statistics.
Theory of Statistics. Homework V February 5, 00. MT 8.7.c When σ is known, ˆµ = X is an unbiased estimator for µ. If you can show that its variance attains the Cramer-Rao lower bound, then no other unbiased
More informationMath WW08 Solutions November 19, 2008
Math 352- WW08 Solutions November 9, 2008 Assigned problems 8.3 ww ; 8.4 ww 2; 8.5 4, 6, 26, 44; 8.6 ww 7, ww 8, 34, ww 0, 50 Always read through the solution sets even if your answer was correct. Note
More informationContinuous distributions
CHAPTER 7 Continuous distributions 7.. Introduction A r.v. X is said to have a continuous distribution if there exists a nonnegative function f such that P(a X b) = ˆ b a f(x)dx for every a and b. distribution.)
More informationIf g is also continuous and strictly increasing on J, we may apply the strictly increasing inverse function g 1 to this inequality to get
18:2 1/24/2 TOPIC. Inequalities; measures of spread. This lecture explores the implications of Jensen s inequality for g-means in general, and for harmonic, geometric, arithmetic, and related means in
More informationLimiting Distributions
We introduce the mode of convergence for a sequence of random variables, and discuss the convergence in probability and in distribution. The concept of convergence leads us to the two fundamental results
More information18 Green s function for the Poisson equation
8 Green s function for the Poisson equation Now we have some experience working with Green s functions in dimension, therefore, we are ready to see how Green s functions can be obtained in dimensions 2
More informationClassical Estimation Topics
Classical Estimation Topics Namrata Vaswani, Iowa State University February 25, 2014 This note fills in the gaps in the notes already provided (l0.pdf, l1.pdf, l2.pdf, l3.pdf, LeastSquares.pdf). 1 Min
More informationBob Brown Math 251 Calculus 1 Chapter 4, Section 4 1 CCBC Dundalk
Bob Brown Math 251 Calculus 1 Chapter 4, Section 4 1 A Function and its Second Derivative Recall page 4 of Handout 3.1 where we encountered the third degree polynomial f(x) = x 3 5x 2 4x + 20. Its derivative
More informationStochastic Processes and Monte-Carlo methods. University of Massachusetts: Fall Luc Rey-Bellet
Stochastic Processes and Monte-Carlo methods University of Massachusetts: Fall 2007 Luc Rey-Bellet Contents 1 Random Variables 3 1.1 Review of probability............................ 3 1.2 Some Common
More informationAP Calculus Chapter 9: Infinite Series
AP Calculus Chapter 9: Infinite Series 9. Sequences a, a 2, a 3, a 4, a 5,... Sequence: A function whose domain is the set of positive integers n = 2 3 4 a n = a a 2 a 3 a 4 terms of the sequence Begin
More informationF X (x) = P [X x] = x f X (t)dt. 42 Lebesgue-a.e, to be exact 43 More specifically, if g = f Lebesgue-a.e., then g is also a pdf for X.
10.2 Properties of PDF and CDF for Continuous Random Variables 10.18. The pdf f X is determined only almost everywhere 42. That is, given a pdf f for a random variable X, if we construct a function g by
More informationExercises and Answers to Chapter 1
Exercises and Answers to Chapter The continuous type of random variable X has the following density function: a x, if < x < a, f (x), otherwise. Answer the following questions. () Find a. () Obtain mean
More informationParameter estimation! and! forecasting! Cristiano Porciani! AIfA, Uni-Bonn!
Parameter estimation! and! forecasting! Cristiano Porciani! AIfA, Uni-Bonn! Questions?! C. Porciani! Estimation & forecasting! 2! Cosmological parameters! A branch of modern cosmological research focuses
More informationConstructing Taylor Series
Constructing Taylor Series 8-8-200 The Taylor series for fx at x = c is fc + f cx c + f c 2! x c 2 + f c x c 3 + = 3! f n c x c n. By convention, f 0 = f. When c = 0, the series is called a Maclaurin series.
More informationSpring 2012 Math 541A Exam 1. X i, S 2 = 1 n. n 1. X i I(X i < c), T n =
Spring 2012 Math 541A Exam 1 1. (a) Let Z i be independent N(0, 1), i = 1, 2,, n. Are Z = 1 n n Z i and S 2 Z = 1 n 1 n (Z i Z) 2 independent? Prove your claim. (b) Let X 1, X 2,, X n be independent identically
More informationWeek 9 The Central Limit Theorem and Estimation Concepts
Week 9 and Estimation Concepts Week 9 and Estimation Concepts Week 9 Objectives 1 The Law of Large Numbers and the concept of consistency of averages are introduced. The condition of existence of the population
More information16.5 Problem Solving with Functions
CHAPTER 16. FUNCTIONS Excerpt from "Introduction to Algebra" 2014 AoPS Inc. Exercises 16.4.1 If f is a function that has an inverse and f (3) = 5, what is f 1 (5)? 16.4.2 Find the inverse of each of the
More informationIt can be shown that if X 1 ;X 2 ;:::;X n are independent r.v. s with
Example: Alternative calculation of mean and variance of binomial distribution A r.v. X has the Bernoulli distribution if it takes the values 1 ( success ) or 0 ( failure ) with probabilities p and (1
More information