Lecture 18: L-estimators and trimmed sample mean
L-functional and L-estimator
For a function J(t) on [0,1], define the L-functional as
    T(G) = \int x J(G(x)) \, dG(x),  G \in \mathscr{F}.
If X_1, \dots, X_n are i.i.d. from F and T(F) is the parameter of interest, T(F_n) is called an L-estimator of T(F).
T(F_n) is a linear function of the order statistics:
    T(F_n) = \int x J(F_n(x)) \, dF_n(x) = \frac{1}{n} \sum_{i=1}^n J\!\left(\tfrac{i}{n}\right) X_{(i)},
since F_n(X_{(i)}) = i/n, i = 1, \dots, n.

Examples
When J(t) \equiv 1, T(F_n) = \bar{X}, the sample mean.
When J(t) = (1 - 2\alpha)^{-1} I_{(\alpha,\,1-\alpha)}(t), T(F_n) = \bar{X}_\alpha, the \alpha-trimmed sample mean.

UW-Madison (Statistics), Stat 710, Lecture 18
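The formula T(F_n) = (1/n) \sum_i J(i/n) X_{(i)} can be sketched directly in code. This is a minimal illustration, not from the course notes; the function names and the contaminated sample are illustrative assumptions.

```python
import random

def l_estimator(xs, J):
    """T(F_n) = (1/n) * sum_i J(i/n) * X_(i), with X_(1) <= ... <= X_(n)."""
    n = len(xs)
    return sum(J((i + 1) / n) * x for i, x in enumerate(sorted(xs))) / n

def J_mean(t):
    return 1.0                      # J(t) == 1 gives the sample mean

def make_J_trimmed(alpha):
    # J(t) = (1 - 2*alpha)^{-1} * I_(alpha, 1-alpha)(t)
    return lambda t: 1.0 / (1.0 - 2.0 * alpha) if alpha < t < 1.0 - alpha else 0.0

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(9)] + [50.0]     # one gross outlier
print(l_estimator(xs, J_mean))               # the ordinary mean, pulled up by 50.0
print(l_estimator(xs, make_J_trimmed(0.2)))  # the outlier gets weight J(1) = 0
```

The trimmed weight function zeroes out the extreme order statistics, which is what makes \bar{X}_\alpha insensitive to the outlier while the sample mean is not.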
Although the sample median is also a linear function of order statistics, it is not of the form T(F_n) with an L-functional T.

Asymptotic normality of L-estimators
To establish the asymptotic normality of L-estimators T(F_n), we follow these steps.
Step 1. For x \in \mathbb{R}, calculate
    \phi_F(x) = \lim_{t \to 0} \frac{T(F + t(\delta_x - F)) - T(F)}{t}
(if it exists), where \delta_x is the point mass at x. The function \phi_F is called the influence function of T at F. The influence function is an important tool in the study of robustness of estimators.
Also, verify that E[\phi_F(X_1)] = \int \phi_F(x) \, dF(x) = 0.
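The limit in Step 1 can be probed numerically by evaluating the difference quotient at a small t. A sketch for the mean functional (J \equiv 1), where the quotient is exact and \phi_F(x) = x - T(F); the discrete F and the names used here are illustrative assumptions.

```python
def T_mean(points, probs):
    # Mean functional T(G) = int x dG(x), for a discrete G
    return sum(p * x for x, p in zip(points, probs))

def influence_quotient(T, points, probs, x, t=1e-6):
    """[T(F + t*(delta_x - F)) - T(F)] / t for a discrete F."""
    mix_points = list(points) + [x]                  # support of (1-t)*F + t*delta_x
    mix_probs = [(1 - t) * p for p in probs] + [t]
    return (T(mix_points, mix_probs) - T(points, probs)) / t

points, probs = [1.0, 2.0, 3.0, 4.0], [0.25] * 4     # F uniform on 4 points, mean 2.5
print(influence_quotient(T_mean, points, probs, 10.0))   # ~ 10 - 2.5 = 7.5
```

For the mean, T(F + t(\delta_x - F)) is linear in t, so the quotient equals x - T(F) for every t, matching the classical influence function of the mean.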
Step 2. Verify that E[\phi_F(X_1)]^2 < \infty and obtain
    \sigma_F^2 = E[\phi_F(X_1)]^2 = \int [\phi_F(x)]^2 \, dF(x).
Step 3. Verify that
    T(F_n) - T(F) = \frac{1}{n} \sum_{i=1}^n \phi_F(X_i) + o_p\!\left(\tfrac{1}{\sqrt{n}}\right).
This holds when T is differentiable in some sense (§5.2.1). Then
    \sqrt{n}\,[T(F_n) - T(F)] \to_d N(0, \sigma_F^2).
Step 3 is the most difficult part.
This approach can also be applied to other functionals (§5.2). We now apply it to show the asymptotic normality of the trimmed sample mean.
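As a sanity check on the three-step scheme: for the mean functional (J \equiv 1), \phi_F(x) = x - T(F) and the remainder in Step 3 is identically zero. A small sketch, with the population mean mu assumed known for illustration:

```python
import random

random.seed(2)
mu = 3.0                                  # T(F): the population mean, assumed known
xs = [random.gauss(mu, 1.0) for _ in range(500)]

T_Fn = sum(xs) / len(xs)                  # T(F_n) = sample mean when J == 1
lin = sum(x - mu for x in xs) / len(xs)   # (1/n) sum_i phi_F(X_i), phi_F(x) = x - T(F)
remainder = (T_Fn - mu) - lin
print(remainder)                          # zero up to floating-point rounding
```

For nontrivial J (e.g. the trimmed mean), the remainder is nonzero but o_p(n^{-1/2}); verifying that is the hard part of Step 3.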
Step 1: Derivation of the influence function \phi_F
Recall T(G) = \int x J(G(x)) \, dG(x), G \in \mathscr{F}. For F and G in \mathscr{F},
    T(G) - T(F) = \int x J(G(x)) \, dG(x) - \int x J(F(x)) \, dF(x)
                = \int_0^1 [G^{-1}(t) - F^{-1}(t)] J(t) \, dt
                = \int_0^1 \int_{F^{-1}(t)}^{G^{-1}(t)} dx \, J(t) \, dt
                = \int \int_{G(x)}^{F(x)} J(t) \, dt \, dx
                = \int [F(x) - G(x)] J(F(x)) \, dx - \int U_G(x) [G(x) - F(x)] J(F(x)) \, dx,
where
    U_G(x) = \begin{cases}
        \dfrac{\int_{F(x)}^{G(x)} J(t)\,dt - [G(x) - F(x)] J(F(x))}{[G(x) - F(x)] J(F(x))} & G(x) \neq F(x),\ J(F(x)) \neq 0 \\
        0 & \text{otherwise}
    \end{cases}
and the fourth equality follows from Fubini's theorem and the fact that the region in \mathbb{R}^2 between the curves F(x) and G(x) is the same as the region in \mathbb{R}^2 between the curves G^{-1}(t) and F^{-1}(t).
Let G = F + t(\delta_x - F), where \delta_x is the degenerate distribution at x. Since \lim_{t \to 0} U_{F + t(\delta_x - F)}(y) = 0, the dominated convergence theorem gives
    \lim_{t \to 0} \int U_{F + t(\delta_x - F)}(y) [\delta_x(y) - F(y)] J(F(y)) \, dy = 0.
Hence
    \lim_{t \to 0} \frac{T(F + t(\delta_x - F)) - T(F)}{t} = -\int [\delta_x(y) - F(y)] J(F(y)) \, dy,
which is \phi_F(x), the influence function of T.
By Fubini's theorem and the fact that \int \delta_x(y) \, dF(x) = F(y),
    \int \phi_F(x) \, dF(x) = -\int \left[ \int (\delta_x - F)(y) \, dF(x) \right] J(F(y)) \, dy = 0.
Consider now J(t) = (\beta - \alpha)^{-1} I_{(\alpha,\beta)}(t), for which
    \phi_F(x) = -\frac{1}{\beta - \alpha} \int_{F^{-1}(\alpha)}^{F^{-1}(\beta)} [\delta_x(y) - F(y)] \, dy.
Assume that F is continuous at F^{-1}(\alpha) and F^{-1}(\beta), so that F(F^{-1}(\alpha)) = \alpha and F(F^{-1}(\beta)) = \beta; note also that T(F) = (\beta - \alpha)^{-1} \int_{F^{-1}(\alpha)}^{F^{-1}(\beta)} y \, dF(y).
When x < F^{-1}(\alpha),
    \phi_F(x) = -\frac{1}{\beta - \alpha} \int_{F^{-1}(\alpha)}^{F^{-1}(\beta)} [1 - F(y)] \, dy
              = -\frac{1}{\beta - \alpha} \left\{ \Big[ y(1 - F(y)) \Big]_{F^{-1}(\alpha)}^{F^{-1}(\beta)} + \int_{F^{-1}(\alpha)}^{F^{-1}(\beta)} y \, dF(y) \right\}
              = \frac{F^{-1}(\alpha)(1 - \alpha) - F^{-1}(\beta)(1 - \beta)}{\beta - \alpha} - T(F).
Similarly, when x > F^{-1}(\beta),
    \phi_F(x) = \frac{1}{\beta - \alpha} \int_{F^{-1}(\alpha)}^{F^{-1}(\beta)} F(y) \, dy = \frac{F^{-1}(\beta)\beta - F^{-1}(\alpha)\alpha}{\beta - \alpha} - T(F).
Finally, when F^{-1}(\alpha) \le x \le F^{-1}(\beta),
    \phi_F(x) = -\frac{1}{\beta - \alpha} \left\{ \int_x^{F^{-1}(\beta)} [1 - F(y)] \, dy - \int_{F^{-1}(\alpha)}^x F(y) \, dy \right\}
              = \frac{x - F^{-1}(\alpha)\alpha - F^{-1}(\beta)(1 - \beta)}{\beta - \alpha} - T(F).
Hence,
    \phi_F(x) = \begin{cases}
        \dfrac{F^{-1}(\alpha)(1 - \alpha) - F^{-1}(\beta)(1 - \beta)}{\beta - \alpha} - T(F) & x < F^{-1}(\alpha) \\
        \dfrac{x - F^{-1}(\alpha)\alpha - F^{-1}(\beta)(1 - \beta)}{\beta - \alpha} - T(F) & F^{-1}(\alpha) \le x \le F^{-1}(\beta) \\
        \dfrac{F^{-1}(\beta)\beta - F^{-1}(\alpha)\alpha}{\beta - \alpha} - T(F) & x > F^{-1}(\beta).
    \end{cases}
If F is symmetric about \theta, J is symmetric about 1/2 (J(t) = J(1 - t)), and \int_0^1 J(t) \, dt = 1, then F(x) = F_0(x - \theta), where F_0 is a c.d.f. that is symmetric about 0, i.e., F_0(x) = 1 - F_0(-x), and
    \int x J(F_0(x)) \, dF_0(x) = \int x J(1 - F_0(-x)) \, dF_0(x) = \int x J(F_0(-x)) \, dF_0(x) = -\int y J(F_0(y)) \, dF_0(y),
i.e., \int x J(F_0(x)) \, dF_0(x) = 0.
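The piecewise formula above can be cross-checked numerically against the defining difference quotient of Step 1. A sketch assuming F is standard normal and (\alpha, \beta) = (0.1, 0.9); the quantile representation T(G) = (\beta - \alpha)^{-1} \int_\alpha^\beta G^{-1}(u)\,du is used to evaluate T, and all names are illustrative.

```python
from statistics import NormalDist

ALPHA, BETA = 0.1, 0.9
F = NormalDist()                     # assumption: F = N(0, 1)

def T(cdf, m=4000):
    """Trimmed functional via T(G) = (beta-alpha)^{-1} int_alpha^beta G^{-1}(u) du."""
    def quantile(u):
        a, b = -12.0, 60.0           # bracket wide enough for the point mass below
        for _ in range(80):          # bisection for G^{-1}(u) = inf{y : G(y) >= u}
            mid = 0.5 * (a + b)
            if cdf(mid) < u:
                a = mid
            else:
                b = mid
        return 0.5 * (a + b)
    us = [ALPHA + (BETA - ALPHA) * (j + 0.5) / m for j in range(m)]
    return sum(quantile(u) for u in us) / m   # midpoint rule, normalized

def phi(x):
    """phi_F(x) from the piecewise formula derived above."""
    a, b, TF = F.inv_cdf(ALPHA), F.inv_cdf(BETA), T(F.cdf)
    if x < a:
        return (a * (1 - ALPHA) - b * (1 - BETA)) / (BETA - ALPHA) - TF
    if x > b:
        return (b * BETA - a * ALPHA) / (BETA - ALPHA) - TF
    return (x - a * ALPHA - b * (1 - BETA)) / (BETA - ALPHA) - TF

x0, t = 0.5, 1e-3
mixed = lambda y: (1 - t) * F.cdf(y) + t * (1.0 if y >= x0 else 0.0)
quotient = (T(mixed) - T(F.cdf)) / t
print(quotient, phi(x0))             # both ~ 0.625 for this F, alpha, beta, x0
```

The contaminated c.d.f. (1 - t)F + t\delta_{x_0} has an atom at x_0, which bisection handles because the quantile is still inf{y : G(y) \ge u}.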
Hence,
    T(F) = \int x J(F(x)) \, dF(x)
         = \theta \int J(F(x)) \, dF(x) + \int (x - \theta) J(F_0(x - \theta)) \, dF_0(x - \theta)
         = \theta \int_0^1 J(t) \, dt + \int y J(F_0(y)) \, dF_0(y)
         = \theta.
Assume that F is continuous at F^{-1}(\alpha) and F^{-1}(1 - \alpha). When \beta = 1 - \alpha, J is symmetric about 1/2 and
    \phi_F(x) = \begin{cases}
        \dfrac{F_0^{-1}(\alpha)}{1 - 2\alpha} & x < F^{-1}(\alpha) \\
        \dfrac{x - \theta}{1 - 2\alpha} & F^{-1}(\alpha) \le x \le F^{-1}(1 - \alpha) \\
        \dfrac{F_0^{-1}(1 - \alpha)}{1 - 2\alpha} & x > F^{-1}(1 - \alpha),
    \end{cases}
where F^{-1}(\alpha) + F^{-1}(1 - \alpha) = 2\theta, F^{-1}(\alpha) = F_0^{-1}(\alpha) + \theta, and F_0^{-1}(1 - \alpha) = F^{-1}(1 - \alpha) - \theta.
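The symmetric-case influence function is easy to code, and the requirement E[\phi_F(X_1)] = 0 from Step 1 can be checked by quadrature. A sketch assuming F = N(\theta, 1) (so F_0 = N(0, 1)) with \theta = 1 and \alpha = 0.1; the names are illustrative.

```python
from statistics import NormalDist

ALPHA, THETA = 0.1, 1.0
F0 = NormalDist()                      # F_0, symmetric about 0
A = THETA + F0.inv_cdf(ALPHA)          # F^{-1}(alpha) = F_0^{-1}(alpha) + theta
B = THETA + F0.inv_cdf(1 - ALPHA)      # F^{-1}(1 - alpha)

def phi(x):
    """phi_F(x) for the alpha-trimmed mean when F is symmetric about theta."""
    if x < A:
        return F0.inv_cdf(ALPHA) / (1 - 2 * ALPHA)
    if x > B:
        return F0.inv_cdf(1 - ALPHA) / (1 - 2 * ALPHA)
    return (x - THETA) / (1 - 2 * ALPHA)

# E[phi_F(X_1)] = int phi(x) f(x) dx, midpoint rule on a grid symmetric about theta
f = NormalDist(THETA, 1.0)
m, lo, hi = 20001, THETA - 10.0, THETA + 10.0
h = (hi - lo) / m
mean_phi = sum(phi(lo + (j + 0.5) * h) * f.pdf(lo + (j + 0.5) * h)
               for j in range(m)) * h
print(abs(mean_phi))                   # ~ 0, as required
```

The integral vanishes because \phi_F is odd about \theta (the two constant tails are negatives of each other since F_0^{-1}(\alpha) = -F_0^{-1}(1 - \alpha)), while the density is symmetric about \theta.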
Step 2: Calculation of \sigma_F^2 = E[\phi_F(X_1)]^2
Because F_0^{-1}(\alpha) = -F_0^{-1}(1 - \alpha), we obtain
    \int [\phi_F(x)]^2 \, dF(x) = \frac{[F_0^{-1}(\alpha)]^2}{(1 - 2\alpha)^2}\,\alpha + \frac{[F_0^{-1}(1 - \alpha)]^2}{(1 - 2\alpha)^2}\,\alpha + \int_{F^{-1}(\alpha)}^{F^{-1}(1 - \alpha)} \frac{(x - \theta)^2}{(1 - 2\alpha)^2} \, dF(x)
    = \frac{2\alpha [F_0^{-1}(1 - \alpha)]^2}{(1 - 2\alpha)^2} + \frac{1}{(1 - 2\alpha)^2} \int_{F_0^{-1}(\alpha)}^{F_0^{-1}(1 - \alpha)} x^2 \, dF_0(x)
    = \sigma_\alpha^2.

Step 3: Asymptotic normality of the trimmed sample mean
It can be shown that the L-functional T(G) is differentiable in some sense (see the textbook). Hence, for the \alpha-trimmed sample mean \bar{X}_\alpha,
    \sqrt{n}\,(\bar{X}_\alpha - \theta) \to_d N(0, \sigma_\alpha^2).
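The conclusion can be illustrated by simulation: compute \sigma_\alpha^2 from the formula above and compare it with the Monte Carlo variance of \sqrt{n}(\bar{X}_\alpha - \theta). A sketch assuming F = N(0, 1) with \alpha = 0.1; the sample size, replication count, and names are illustrative choices.

```python
import math
import random
from statistics import NormalDist

ALPHA, THETA, N, REPS = 0.1, 0.0, 200, 2000
F0 = NormalDist()
b = F0.inv_cdf(1 - ALPHA)              # F_0^{-1}(1 - alpha)

# sigma_alpha^2: midpoint rule for int_{-b}^{b} x^2 dF_0(x)
m = 20000
h = 2 * b / m
trunc = sum(((-b + (j + 0.5) * h) ** 2) * F0.pdf(-b + (j + 0.5) * h)
            for j in range(m)) * h
sigma2 = (2 * ALPHA * b * b + trunc) / (1 - 2 * ALPHA) ** 2

def trimmed_mean(xs, alpha):
    # L-estimator form: (1/n) sum_i J(i/n) X_(i), J = (1-2a)^{-1} I_(a, 1-a)
    n, s = len(xs), sorted(xs)
    keep = [x for i, x in enumerate(s) if alpha < (i + 1) / n < 1 - alpha]
    return sum(keep) / (n * (1 - 2 * alpha))

random.seed(1)
zs = [math.sqrt(N) * (trimmed_mean([random.gauss(THETA, 1.0) for _ in range(N)], ALPHA) - THETA)
      for _ in range(REPS)]
var_hat = sum(z * z for z in zs) / REPS
print(sigma2, var_hat)                 # the two should be close
```

For the standard normal with \alpha = 0.1, \sigma_\alpha^2 \approx 1.06, slightly above 1: trimming costs a little efficiency at the normal in exchange for robustness to heavy tails.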
MATH 222 LEC SECOND MIDTERM EXAM THU NOV 8 PROBLEM ( 5 points ) Find the standard-form equation for the hyperbola which has its foci at F ± (±2, ) and whose asymptotes are y ± 3 x The calculations b a
More informationDRAFT - Math 101 Lecture Note - Dr. Said Algarni
2 Limits 2.1 The Tangent Problems The word tangent is derived from the Latin word tangens, which means touching. A tangent line to a curve is a line that touches the curve and a secant line is a line that
More informationMeasure and Integration: Solutions of CW2
Measure and Integration: s of CW2 Fall 206 [G. Holzegel] December 9, 206 Problem of Sheet 5 a) Left (f n ) and (g n ) be sequences of integrable functions with f n (x) f (x) and g n (x) g (x) for almost
More information1 General problem. 2 Terminalogy. Estimation. Estimate θ. (Pick a plausible distribution from family. ) Or estimate τ = τ(θ).
Estimation February 3, 206 Debdeep Pati General problem Model: {P θ : θ Θ}. Observe X P θ, θ Θ unknown. Estimate θ. (Pick a plausible distribution from family. ) Or estimate τ = τ(θ). Examples: θ = (µ,
More informationSummer Jump-Start Program for Analysis, 2012 Song-Ying Li
Summer Jump-Start Program for Analysis, 01 Song-Ying Li 1 Lecture 6: Uniformly continuity and sequence of functions 1.1 Uniform Continuity Definition 1.1 Let (X, d 1 ) and (Y, d ) are metric spaces and
More informationChapter-2 Relations and Functions. Miscellaneous
1 Chapter-2 Relations and Functions Miscellaneous Question 1: The relation f is defined by The relation g is defined by Show that f is a function and g is not a function. The relation f is defined as It
More informationLecture 6: Linear models and Gauss-Markov theorem
Lecture 6: Linear models and Gauss-Markov theorem Linear model setting Results in simple linear regression can be extended to the following general linear model with independently observed response variables
More information