Statistics & Decisions 7, 377-382 (1989)
R. Oldenbourg Verlag, München 1989
0721-2631/89 $3.00 + 0.00

A NOTE ON SIMULTANEOUS ESTIMATION OF PEARSON MODES

L. R. Haff 1) and R. W. Johnson 2)

Received: Revised version: April 3, 1989

1) Research supported by an NSF Grant.
2) Work performed while at the Naval Ocean Systems Center, San Diego, California.

AMS 1980 subject classifications: Primary 62C15; Secondary 62F10, 62C25.
Key words and phrases: Simultaneous estimation of modes, Pearson curves, Stein-like estimators, estimation of means.

Abstract. We exhibit an estimator of the mode of a vector of independent variates having densities in the Pearson (1895) class. This estimator dominates the componentwise minimum variance unbiased estimator under weighted squared error loss.

1. Introduction

James and Stein (1961) presented a class of estimators which dominate the usual estimator of the mean vector of a multivariate normal distribution in three or more dimensions under quadratic loss. Since then much work has been directed at finding improved estimators of the mean in more general settings. Hudson (1978), for example, extended the work of James and Stein to an exponential class framework.

Berger (1980) presented similar results for a variety of loss functions. Johnson (1984) and Haff and Johnson (1986) noted that Hudson's framework includes variates of the four-parameter Pearson class provided only the mode is unknown. This structure was used by Haff and Johnson (1986) to obtain further results on estimating means.

In this note, we provide alternative estimators for a vector of Pearson modes. These estimators dominate the componentwise minimum variance unbiased estimator (MVUE) under weighted squared error loss. Apparently simultaneous modal estimation has not been discussed in the literature until now. For certain asymmetric densities, the modes are more interesting than the means as objects of estimation. These problems are closely related in the present context, however. Indeed, our main result shows that it is natural to formulate the general problem of estimating Pearson modes in terms of estimating means. We cite the applicability of this result to several examples which have appeared in the recent literature on simultaneous estimation.

2. The Dominance Result

Let $X = (X_1, \ldots, X_p)^t$ be a vector of independent variates in which $X_i$ has p.d.f. $f_i(x_i \mid \theta_i)$ defined by Karl Pearson's (1895) equation on $(k_{1i}, k_{2i})$, where $\beta_{0i}$, $\beta_{1i}$, $\beta_{2i}$ are known quantities. We shall denote this by $X_i \sim P(\theta_i, \beta_{0i}, \beta_{1i}, \beta_{2i})$ on $(k_{1i}, k_{2i})$. Here we estimate $\theta = (\theta_1, \ldots, \theta_p)^t$, the mode vector, under the loss function

(1)   $L(\delta, \theta) = (\delta - \theta)^t Q (\delta - \theta)$,

where $Q = \mathrm{diag}(q_1, \ldots, q_p)$ is a diagonal matrix, the $q_i$ ($i = 1, 2, \ldots, p$) being arbitrary fixed positive constants. The standard decision theoretic notions of "risk" and "dominance" are assumed as in Haff and Johnson (1986), a reference we henceforth abbreviate by H&J (1986). Additionally, the regularity conditions from Section 2 (pp. 46-47) of that paper are assumed throughout.
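The text invokes Pearson's (1895) equation without displaying it. In the parametrization that is consistent with the mean identity quoted before (4) below and with the examples of Section 3 (a reconstruction from those formulas, not a display taken from the paper), it reads

$\dfrac{d}{dx} \log f_i(x \mid \theta_i) = \dfrac{\theta_i - x}{\beta_{0i} + \beta_{1i} x + \beta_{2i} x^2}, \qquad k_{1i} < x < k_{2i},$

so that $\theta_i$ is the mode of $f_i$ and $\beta_{0i} + \beta_{1i} x + \beta_{2i} x^2$ is the known Pearson quadratic, positive on the support.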

Let

(2)   $a_i(x) = (\beta_{0i} + \beta_{1i} x + \beta_{2i} x^2)/(1 - 2\beta_{2i})$,   $\beta_{2i} < 1/2$,

where $a_i(x) > 0$. Also, let

(3)   $b_i(x) = \int^x dt / a_i(t)$.

As noted in H&J (1986), $E X_i = (\beta_{1i} + \theta_i)/(1 - 2\beta_{2i})$. Since $b_i$ is one-to-one and since $b_i(X_i)$ is a minimal sufficient statistic for $\theta_i$ - see (1.2) and (1.3) of H&J (1986) - it is clear that

(4)   $\hat\theta = C X - d$   ($p \times 1$)

is the componentwise MVUE of the mode $\theta$, where $C = \mathrm{diag}(c_1, \ldots, c_p)$ with $c_i = 1 - 2\beta_{2i}$ and $d = (d_1, \ldots, d_p)^t$ with $d_i = \beta_{1i}$. Our main result incorporates equation (4) and the following lemma, which shows how Pearson curves behave under affine transformations.

Lemma. If $U \sim P(m, r, s, t)$, then $V \equiv eU - f \sim P(em - f,\ e^2 r + efs + f^2 t,\ es + 2ft,\ t)$.

Proof. See Kaskey, Krishnaiah, Kolman, and Steinberg (1980).

From (2) and (3), set $B = (b_1, b_2, \ldots, b_p)^t$. It will be necessary to subscript $B$ and its components by the variables under consideration. Thus, for example, let $U$ and $V$ be given as in the above lemma. Componentwise we need the relations

(5)   $a_V(v) = e^2 a_U(u)$   and   $b_V(v) = b_U(u)/e$,

which are easy to verify.
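As a quick numerical illustration of (2)-(5) and of the MVUE (4), the quantities $a$ and $b$ for one coordinate can be evaluated by quadrature and the affine relations (5) checked directly. This is a sketch under illustrative parameter values and base points; the function and variable names are ours, not the paper's.

    import numpy as np
    from scipy.integrate import quad

    def a(x, b0, b1, b2):
        # Equation (2); requires b2 < 1/2 and a(x) > 0 on the support.
        return (b0 + b1 * x + b2 * x**2) / (1.0 - 2.0 * b2)

    def b(x, b0, b1, b2, base=1.0):
        # Equation (3); the base point only shifts b by an additive constant.
        val, _ = quad(lambda t: 1.0 / a(t, b0, b1, b2), base, x)
        return val

    # Check the relations (5) for V = e*U - f, using the lemma's parameters.
    b0, b1, b2 = 0.0, 0.5, 0.0            # illustrative Pearson quadratic
    e, f, u = 2.0, 1.0, 3.0
    v = e * u - f
    B0, B1, B2 = e**2 * b0 + e * f * b1 + f**2 * b2, e * b1 + 2 * f * b2, b2
    assert np.isclose(a(v, B0, B1, B2), e**2 * a(u, b0, b1, b2))   # a_V(v) = e^2 a_U(u)
    # base points are chosen to correspond under the same affine map
    assert np.isclose(b(v, B0, B1, B2, base=e * 1.0 - f),
                      b(u, b0, b1, b2, base=1.0) / e)              # b_V(v) = b_U(u)/e

    # Componentwise MVUE (4): theta_hat = C X - d, c_i = 1 - 2*beta2_i, d_i = beta1_i.
    beta1 = np.array([0.5, 0.5, 0.5])
    beta2 = np.array([0.0, 0.0, 0.0])
    X = np.array([1.2, 0.8, 2.5])         # illustrative observations
    theta_hat = (1.0 - 2.0 * beta2) * X - beta1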

Theorem. Let $X_i \sim P(\theta_i, \beta_{0i}, \beta_{1i}, \beta_{2i})$, $i = 1, 2, \ldots, p$, $p \ge 3$, and set $X = (X_1, \ldots, X_p)^t$. Assume that the regularity conditions from H&J (1986), pp. 46-47, hold. Now let $Y = CX - d$ with $C$ ($p \times p$) a diagonal matrix of positive constants and $d$ ($p \times 1$) a vector of constants. Then

(6)   $\delta^* = Y - \dfrac{p - 2}{B_X^t C^{-1} Q^{-1} C^{-1} B_X}\, Q^{-1} C^{-1} B_X$

dominates $Y$ as an estimator of $\mu_Y = EY = C\,EX - d$ (which is the mode of $X$ if $c_i = 1 - 2\beta_{2i}$ and $d_i = \beta_{1i}$) with respect to the loss function (1).

Proof. Note that

(7)   $R(\delta^*, \mu_Y) = E\left\| Q^{1/2} C X - \dfrac{(p - 2)\, Q^{-1/2} C^{-1} B_X}{B_X^t C^{-1} Q^{-1} C^{-1} B_X} - Q^{1/2} C\, EX \right\|^2$,

where $\|\cdot\|$ is the Euclidean norm. Set $W = Q^{1/2} C X$, so $EW = Q^{1/2} C\, EX$. From the above lemma and from (5) it follows that $B_W = Q^{-1/2} C^{-1} B_X$. Taking the expectation with respect to $W$, (7) becomes

$E_W \left\| W - \dfrac{(p - 2) B_W}{B_W^t B_W} - EW \right\|^2$
$\quad = E_W (W - EW)^t (W - EW) - (p - 2)^2\, E_W (B_W^t B_W)^{-1}$
$\quad = E_X (Y - EY)^t Q (Y - EY) - (p - 2)^2\, E_W (B_W^t B_W)^{-1}$
$\quad < E_X (Y - EY)^t Q (Y - EY)$,

where the first equality follows from Theorem 2.5 of H&J (1986) applied to $\delta_W = W - (p - 2) B_W / (B_W^t B_W)$ with $q_i = 1$, $i = 1, 2, \ldots, p$.
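As a sanity check on (6) in the simplest setting, the normal case of Example 1 in Section 3 below ($\beta_{0i} = \sigma^2$, $\beta_{1i} = \beta_{2i} = 0$, hence $C = I$, $d = 0$, $B_X = X/\sigma^2$) reduces $\delta^*$ to a James-Stein type rule. The Monte Carlo sketch below compares its estimated risk with that of $Y = X$ under loss (1); it is an illustration under assumed parameter values (the names `risk`, `mvue`, `delta_star` are ours), not code from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    p, sigma2, n_rep = 10, 1.0, 20000     # illustrative settings
    theta = np.linspace(-0.5, 0.5, p)     # true modes (= means in the normal case)
    q = np.linspace(0.5, 2.0, p)          # weights q_i of the loss (1)

    def risk(estimator):
        # Monte Carlo estimate of E[(delta - theta)' Q (delta - theta)].
        losses = np.empty(n_rep)
        for r in range(n_rep):
            X = theta + rng.normal(scale=np.sqrt(sigma2), size=p)
            losses[r] = np.sum(q * (estimator(X) - theta) ** 2)
        return losses.mean()

    def mvue(X):
        # Normal case of (4): C = I, d = 0, so the MVUE of the mode is X itself.
        return X

    def delta_star(X):
        # Normal case of (6): B_X = X / sigma^2, C = I.
        BX = X / sigma2
        return X - (p - 2) * (BX / q) / np.sum(BX**2 / q)

    print("estimated risk of Y = X  :", risk(mvue))
    print("estimated risk of delta* :", risk(delta_star))   # smaller in this experiment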

3. Examples

Here we present $b_{X_i}(X_i)$ explicitly for $X_i$ a random variable from a number of distributions. For convenience, we drop the subscript $i$.

Example 1. (cf. James and Stein (1961)). If $X \sim N(\theta, \sigma^2)$, then $X \sim P(\theta, \sigma^2, 0, 0)$ and $b_X(X) = X/\sigma^2$.

Example 2. (cf. Hudson (1978)). If $X \sim \Gamma(\phi, \lambda)$ with density

$f(x \mid \phi) = \dfrac{\lambda^{\phi} x^{\phi - 1} e^{-\lambda x}}{\Gamma(\phi)}$   for $x > 0$,

then $X \sim P((\phi - 1)/\lambda,\ 0,\ 1/\lambda,\ 0)$ and $b(X) = \lambda \ln X$.

Example 3. (cf. Berger (1980)). If $X = (\phi - 1)/W$ where $W \sim \Gamma(\phi, \lambda)$, then $X \sim P((\phi - 1)\lambda/(\phi + 1),\ 0,\ 0,\ 1/(\phi + 1))$ and $b(X) = -(\phi - 1)/X$.

Example 4. (cf. Johnson (1984)). If $X$ has the beta density $f(x \mid \theta) \propto x^{c\theta}(1 - x)^{c(1 - \theta)}$, $0 < x < 1$ ($c = k - 2$ where $k$ is the "concentration parameter", p. 233 of Brunk (1975)), then $X \sim P(\theta,\ 0,\ 1/c,\ -1/c)$ and $b(X) = (c + 2) \ln\big(X/(1 - X)\big)$.

Example 5. (cf. Johnson (1984)). If $X$ has the Pearson type IV density (see Elderton and Johnson (1969)), then $X \sim P(\theta, \beta_0, \beta_1, \beta_2)$ with $D^2 \equiv 4\beta_0\beta_2 - \beta_1^2 > 0$ and $b(X) = [2(1 - 2\beta_2)/D] \arctan\big[(\beta_1 + 2\beta_2 X)/D\big]$.
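For implementation purposes, the $b$ functions of Examples 1-5 translate directly into code. The sketch below is a transcription of the formulas above (the function names are ours); stacking the componentwise values gives the vector $B_X$ that enters the estimator (6).

    import numpy as np

    def b_normal(x, sigma2):
        # Example 1: X ~ N(theta, sigma^2), i.e. X ~ P(theta, sigma^2, 0, 0).
        return x / sigma2

    def b_gamma(x, lam):
        # Example 2: X ~ Gamma(phi, lam), i.e. X ~ P((phi-1)/lam, 0, 1/lam, 0).
        return lam * np.log(x)

    def b_reciprocal_gamma(x, phi):
        # Example 3: X = (phi-1)/W with W ~ Gamma(phi, lam),
        # i.e. X ~ P((phi-1)*lam/(phi+1), 0, 0, 1/(phi+1)).
        return -(phi - 1.0) / x

    def b_beta(x, c):
        # Example 4: density proportional to x^(c*theta) * (1-x)^(c*(1-theta)),
        # i.e. X ~ P(theta, 0, 1/c, -1/c).
        return (c + 2.0) * np.log(x / (1.0 - x))

    def b_pearson_iv(x, beta0, beta1, beta2):
        # Example 5: Pearson type IV with D^2 = 4*beta0*beta2 - beta1^2 > 0.
        D = np.sqrt(4.0 * beta0 * beta2 - beta1**2)
        return (2.0 * (1.0 - 2.0 * beta2) / D) * np.arctan((beta1 + 2.0 * beta2 * x) / D)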

References

[1] Berger, J.: Improving on inadmissible estimators in continuous exponential families with applications to simultaneous estimation of gamma scale parameters. Ann. Statist. 8 (1980), 545-571.
[2] Brunk, H.: An Introduction to Mathematical Statistics, Third edition. John Wiley, New York, 1975.
[3] Elderton, W. and Johnson, N.: Systems of Frequency Curves. Cambridge Univ. Press, London, 1969.
[4] Haff, L. and Johnson, R.: The superharmonic condition for simultaneous estimation of means in exponential families. Canadian Jour. Statist. 14 (1986), 43-54.
[5] Hudson, H.: Empirical Bayes estimation. Technical report no. 58, Dept. of Statistics, Stanford University, 1974.
[6] Hudson, H.: A natural identity for exponential families with applications in multiparameter estimation. Ann. Statist. 6 (1978), 473-484.
[7] James, W. and Stein, C.: Estimation with quadratic loss. Proc. Fourth Berkeley Symp. Math. Statist. Probab., vol. 1, 361-380, Univ. of California Press, 1961.
[8] Johnson, R.: Simultaneous estimation of generalized Pearson means. Ph.D. dissertation, Dept. of Mathematics, University of California, San Diego, 1984.
[9] Kaskey, G., Krishnaiah, P., Kolman, B., and Steinberg, L.: Transformations to normality. Handbook of Statistics, vol. 1, 321-341, North Holland, 1980.
[10] Pearson, K.: Memoir on skew variation in homogeneous material. Phil. Trans. Roy. Soc. A 186 (1895), 343-414.

L. R. Haff
Department of Mathematics
University of California, San Diego
La Jolla, California 92093, USA

R. W. Johnson
Department of Mathematics and Computer Science
Carleton College
One North College Street
Northfield, Minnesota 55057-4025, USA