EXPANSIONS FOR QUANTILES AND MULTIVARIATE MOMENTS OF EXTREMES FOR HEAVY TAILED DISTRIBUTIONS

REVSTAT Statistical Journal, Volume 15, Number 1, January 2017, 25-43

EXPANSIONS FOR QUANTILES AND MULTIVARIATE MOMENTS OF EXTREMES FOR HEAVY TAILED DISTRIBUTIONS

Authors: Christopher Withers, Industrial Research Limited, Lower Hutt, New Zealand; Saralees Nadarajah, School of Mathematics, University of Manchester, Manchester M13 9PL, UK (mbbsssn2@manchester.ac.uk)

Received: August 2014. Revised: March 2015. Accepted: May 2015.

Abstract: Let $X_{n,r}$ be the $r$-th largest of a random sample of size $n$ from a distribution function $F(x) = 1 - \sum_{i=0}^{\infty} c_i x^{-\alpha - i\beta}$ for $\alpha > 0$ and $\beta > 0$. An inversion theorem is proved and used to derive an expansion for the quantile $F^{-1}(u)$ and powers of it. From this an expansion in powers of $(n^{-1}, n^{-\beta/\alpha})$ is given for the multivariate moments of the extremes $\{X_{n,n-s_i},\ 1 \le i \le k\}/n^{1/\alpha}$ for fixed $s = (s_1, ..., s_k)$, where $k \ge 1$. Examples include the Cauchy, Student's t, F, second extreme distributions and stable laws of index $\alpha < 1$.

Key-Words: Bell polynomials; extremes; inversion theorem; moments; quantiles.

AMS Subject Classification: 62E15, 62E17.

26 C.S. Withers and S. Nadarajah

1. INTRODUCTION

For $1 \le r \le n$, let $X_{n,r}$ be the $r$-th largest of a random sample of size $n$ from a continuous distribution function $F$ on $\mathbb{R}$, the real numbers. Let $f$ denote the density function of $F$ when it exists. The study of the asymptotics of the moments of $X_{n,r}$ has been of considerable interest. McCord [12] gave a first approximation to the moments of $X_{n,1}$ for three classes. This showed that a moment of $X_{n,1}$ can behave like any positive power of $n$ or of $\ell_n = \log n$. (Here, $\log$ is to the base $e$.) Pickands [15] explored the conditions under which various moments of $(X_{n,1} - b_n)/a_n$ converge to the corresponding moments of the extreme value distribution. It was proved that this is indeed true for all $F$ in the domain of attraction of an extreme value distribution, provided that the moments are finite for sufficiently large $n$. Nair [13] investigated the limiting behavior of the distribution and the moments of $X_{n,1}$ for large $n$ when $F$ is the standard normal distribution function. The results provided rates of convergence of the distribution and the moments of $X_{n,1}$. Downey [4] derived explicit bounds for $E[X_{n,1}]$ in terms of the moments associated with $F$. The bounds were given up to the order $o(n^{1/\rho})$, where $\int x^{\rho}\,dF(x)$ is defined, so $E[X_{n,1}]$ grows slowly with the sample size. For other work, we refer the readers to Ramachandran [16], Hill and Spruill [9] and Hüsler et al. [10].

The main aim of this paper is to study multivariate moments of $\{X_{n,n-s_i},\ 1 \le i \le k\}$ for fixed $s = (s_1, ..., s_k)$, where $k \ge 1$. We suppose $F$ is heavy tailed, i.e.,

(1.1) $1 - F(x) \sim C x^{-\alpha}$ as $x \to \infty$

for some $C > 0$ and $\alpha > 0$. For a nonparametric estimate of $\alpha$, see Novak and Utev [14]. There are many practical examples giving rise to $\{X_{n,n-s_i},\ 1 \le i \le k\}$ for heavy tailed $F$. Perhaps the most prominent example is Hill's estimator (Hill [8]) for the extremal index, given by

$k^{-1} \sum_{i=1}^{k} \log X_{n,n-i+1} - \log X_{n,n-k}$.

Clearly, this is a function of $X_{n,n-s_i}$, $1 \le i \le k$.
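Hill's estimator described above is easy to sketch in code. The following is a minimal illustration, not from the paper: `hill_estimator` is a hypothetical helper name, and the test sample is an exact Pareto tail with $\alpha = 2$, for which the estimator is known to be consistent.

```python
import math, random

def hill_estimator(sample, k):
    """Hill's estimator k^{-1} sum_{i=1}^k log X_{n,n-i+1} - log X_{n,n-k},
    built from the k+1 largest order statistics; it estimates 1/alpha."""
    x = sorted(sample)
    n = len(x)
    top = x[n - k:]                      # X_{n,n-k+1}, ..., X_{n,n}
    return sum(math.log(t) for t in top) / k - math.log(x[n - k - 1])

# Exact Pareto tail: 1 - F(x) = x^{-2} for x >= 1, so alpha = 2
random.seed(0)
alpha = 2.0
sample = [random.random() ** (-1 / alpha) for _ in range(100_000)]
est = 1 / hill_estimator(sample, 1000)   # invert to estimate alpha itself
assert abs(est - alpha) < 0.3
```

The inverse-CDF draw `random.random() ** (-1/alpha)` is what makes the check self-contained; any heavy-tailed sample of the form (1.1) could be substituted.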
Real life applications of Hill's estimator are far too many to list. Since Hill [8], many other estimators have been proposed for the extremal index; see Gomes and Guillou [6] for an excellent review of such estimators. Each of these estimators is a function of $X_{n,n-s_i}$, $1 \le i \le k$. No doubt that many more

estimators taking the form of a function of $X_{n,n-s_i}$, $1 \le i \le k$, will be proposed in the future. A possible application of the results in this paper is to assess optimality of these estimators. Suppose we can write the general form of the estimators as

(1.2) $\omega = \omega(X_{n,n-s_1}, X_{n,n-s_2}, ..., X_{n,n-s_k}; \mu)$,

where $\mu$ contains some parameters, which include $k$ itself. The optimum values of $\mu$ can be based on criteria like bias and mean squared error. For example, $\mu$ could be chosen as the value minimizing the bias of $\omega$ or the value minimizing the mean squared error of $\omega$. If (1.2) can be expanded as

$\omega = \sum_{\theta_1, ..., \theta_k} a(\theta_1, ..., \theta_k; \mu) \prod_{i=1}^k X_{n,n-s_i}^{\theta_i}$,

then the bias and mean squared error of $\omega$ can be expressed as

$\mathrm{Bias}(\omega) = \sum_{\theta_1, ..., \theta_k} a(\theta_1, ..., \theta_k; \mu)\, E\Big[\prod_{i=1}^k X_{n,n-s_i}^{\theta_i}\Big] - \omega$

and

$\mathrm{MSE}(\omega) = \sum_{\theta_1, ..., \theta_k} \sum_{\vartheta_1, ..., \vartheta_k} a(\theta_1, ..., \theta_k; \mu)\, a(\vartheta_1, ..., \vartheta_k; \mu)\, E\Big[\prod_{i=1}^k X_{n,n-s_i}^{\theta_i + \vartheta_i}\Big] - \Big(\sum_{\theta_1, ..., \theta_k} a(\theta_1, ..., \theta_k; \mu)\, E\Big[\prod_{i=1}^k X_{n,n-s_i}^{\theta_i}\Big]\Big)^2 + \big[\mathrm{Bias}(\omega)\big]^2$,

respectively. Both involve multivariate moments of $X_{n,n-s_i}$, $1 \le i \le k$. Expressions for the latter are given in Section 2, in particular, Theorem 2.2. Hence, general estimators can be developed for $\mu$ which minimize bias, mean squared error, etc. Such developments could apply to any future estimator (and also to any past estimator) of the extremal index taking the form of (1.2).

Note that $U_{n,r} = F(X_{n,r})$ is the $r$-th order statistic from $U(0,1)$. For $1 \le r_1 < r_2 < \cdots < r_k \le n$ set $U_{n,r} = \{U_{n,r_i},\ 1 \le i \le k\}$. By Section 14.2 of Stuart and Ord [17], $U_{n,r}$ has the multivariate beta density function

(1.3) $B(u : r) = \prod_{i=0}^{k} (u_{i+1} - u_i)^{r_{i+1} - r_i - 1} \big/ B_n(r)$

on $0 < u_1 < \cdots < u_k < 1$, where $u_0 = 0$, $u_{k+1} = 1$, $r_0 = 0$, $r_{k+1} = n+1$ and

(1.4) $B_n(r) = \prod_{i=1}^{k} B(r_i, r_{i+1} - r_i)$.

David and Johnson [3] expanded $X_{n,r_i} = G(U_{n,r_i})$ about $u_{n,i} = E[U_{n,r_i}] = r_i/(n+1)$:

$X_{n,r_i} = \sum_{j=0}^{\infty} G^{(j)}(u_{n,i})\, (U_{n,r_i} - u_{n,i})^j / j!$,

where $G(u) = F^{-1}(u)$ and $G^{(j)}(u) = d^j G(u)/du^j$, and using the properties of (1.3) showed that if $r$ depends on $n$ in such a way that $r/n \to p \in (0,1)^k$ as $n \to \infty$, then the $m$-th order cumulants of $X_{n,r} = \{X_{n,r_i},\ 1 \le i \le k\}$ have magnitude $O(n^{1-m})$, at least for $m \le 4$, so that the distribution function of $X_{n,r}$ has a multivariate Edgeworth expansion in powers of $n^{-1/2}$. (Alternatively, one can use James and Mayne [11] to derive the cumulants of $X_{n,r}$ from those of $U_{n,r}$.) The method requires the derivatives of $F$ at $\{F^{-1}(p_i),\ 1 \le i \le k\}$, so breaks down if $p_i = 0$ or $p_k = 1$, which is the situation we study here.

In Withers and Nadarajah [18], we showed that for fixed $r$, when (1.1) holds, the distribution of $X_{n,n\mathbf{1}-r}$ (where $\mathbf{1}$ is the vector of ones in $\mathbb{R}^k$), suitably normalized, tends to a certain multivariate extreme value distribution as $n \to \infty$, and so obtained the leading terms of the expansions of its moments in inverse powers of $n$. Here, we show how to extend those expansions when

(1.5) $F^{-1}(u) = \sum_{i=0}^{\infty} b_i (1-u)^{\alpha_i}$

with $\alpha_0 < \alpha_1 < \cdots$, that is, when $\{1 - F(x)\}\, x^{-1/\alpha_0}$ has a power series in $\{x^{\delta_i} : \delta_i = (\alpha_i - \alpha_0)/\alpha_0\}$. Hall [7] considered (1.5) with $\alpha_i = (i-1)/\alpha$, but did not give the corresponding expansion for $F(x)$ or expansions in inverse powers of $n$. He applied it to the Cauchy. In Section 2, we demonstrate the method when

(1.6) $1 - F(x) = x^{-\alpha} \sum_{i=0}^{\infty} c_i x^{-i\beta}$,

where $\alpha > 0$ and $\beta > 0$. In this case, (1.5) holds with $\alpha_i = (i\beta - 1)/\alpha$. In Section 3, we apply it to the Student's t, F and second extreme value distributions and to stable laws of exponent $\alpha < 1$. The appendix gives the inversion theorem needed to pass from (1.6) to (1.5), and expansions for powers and logs of series.

We use the following notation and terminology. Let $(x)_i = \Gamma(x+i)/\Gamma(x)$ and $x_{(i)} = \Gamma(x+1)/\Gamma(x-i+1)$. An inequality in $\mathbb{R}^k$ consists of $k$ inequalities. For example, for $x$ in $\mathbb{C}^k$, where $\mathbb{C}$ is the set of complex numbers, $\mathrm{Re}(x) < 0$ means that $\mathrm{Re}(x_i) < 0$ for $1 \le i \le k$. Also, let $I(A) = 1$ if $A$ is true and $I(A) = 0$ if $A$ is false. For $\theta \in \mathbb{C}^k$, let $\bar\theta$ denote the vector with $\bar\theta_i = \sum_{j=i}^{k} \theta_j$.

2. MAIN RESULTS

For $1 \le r_1 < \cdots < r_k \le n$, set $s_i = n - r_i$. Here, we show how to obtain expansions in inverse powers of $n$ for the moments of $X_{n,r}$ for fixed $s$ when (1.5) holds, and in particular when the upper tail of $F$ satisfies (1.6).

Theorem 2.1. Suppose (1.6) holds with $c_0, \alpha, \beta > 0$. Then $F^{-1}(u)$ is given by (1.5) with $\alpha_i = ia - 1/\alpha$, $a = \beta/\alpha$ and $b_i = C_{i,1/\alpha}$, where $C_{i,\psi} = c_0^{\psi}\, \hat C_i(-\psi, c_0, x^*)$ of (A.3) and $x_i^* = x_i^*(a, 1, c)$ of (A.4). In particular,

$C_{0,\psi} = c_0^{\psi}$,
$C_{1,\psi} = \psi c_0^{\psi-a-1} c_1$,
$C_{2,\psi} = \psi c_0^{\psi-2a-2} \{c_0 c_2 + (\psi - 2a - 1) c_1^2/2\}$,
$C_{3,\psi} = \psi c_0^{\psi-3a-3} \big[c_0^2 c_3 + (\psi - 3a - 1) c_0 c_1 c_2 + \{(\psi+1)_2/6 - (\psi - 3a/2)(a+1)\} c_1^3\big]$,

and so on. Also, for any $\theta$ in $\mathbb{R}$,

(2.1) $\{F^{-1}(u)\}^{\theta} = \sum_{i=0}^{\infty} (1-u)^{ia-\psi} C_{i,\psi}$ at $\psi = \theta/\alpha$.

On those rare occasions where the coefficients $d_i = C_{i,1/\alpha}$ in $F^{-1}(u) = \sum_{i=0}^{\infty} (1-u)^{ia-1/\alpha} d_i$ are known from some alternative formula, one can use $C_{i,\psi} = d_0^{\theta}\, \hat C_i(\theta, 1/d_0, d)$ of (A.3).

Proof of Theorem 2.1: By Theorem A.1 with $k = 1$, we have $x^{-\alpha} = \sum_{i=0}^{\infty} x_i^* (1-u)^{1+ia}$ at $u = F(x)$, where

$x_0^* = c_0^{-1}$, $x_1^* = -c_0^{-a-2} c_1$, $x_2^* = c_0^{-2a-3}\{-c_0 c_2 + (a+1)c_1^2\}$,
$x_3^* = c_0^{-3a-4}\{-c_0^2 c_3 + (2+3a)c_0 c_1 c_2 - (2+3a)(1+a)c_1^3/2\}$,

and so on. So, for $S$ of (A.1), $x^{-\alpha} = c_0^{-1} v\,[1 + c_0 S(v^a, x^*)]$ at $v = 1 - u$. Now apply (A.2).
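The coefficients above can be sanity-checked numerically. For the Cauchy distribution (treated in Example 3.2), $c_i = (-1)^i/\{(2i+1)\pi\}$ with $\alpha = 1$, $a = 2$, and $F^{-1}(u) = \cot(\pi(1-u))$ has the known Laurent series with $b_0 = 1/\pi$, $b_1 = -\pi/3$, $b_2 = -\pi^3/45$, $b_3 = -2\pi^5/945$. The sketch below (helper name `C` is ours, not the paper's; the $C_{3,\psi}$ bracket is as reconstructed above) compares both:

```python
import math

def C(i, psi, a, c):
    """Coefficients C_{i,psi} of Theorem 2.1, implemented for i <= 3."""
    c0, c1, c2, c3 = (c + [0.0] * 4)[:4]
    if i == 0:
        return c0 ** psi
    if i == 1:
        return psi * c0 ** (psi - a - 1) * c1
    if i == 2:
        return psi * c0 ** (psi - 2*a - 2) * (c0*c2 + (psi - 2*a - 1) * c1**2 / 2)
    if i == 3:
        brace = (psi + 1) * (psi + 2) / 6 - (psi - 3*a/2) * (a + 1)
        return psi * c0 ** (psi - 3*a - 3) * (
            c0**2 * c3 + (psi - 3*a - 1) * c0*c1*c2 + brace * c1**3)
    raise NotImplementedError

# Cauchy: 1 - F(x) = x^{-1} sum_i c_i x^{-2i}, so alpha = 1, beta = 2, a = 2
a = 2.0
c = [(-1)**i / ((2*i + 1) * math.pi) for i in range(4)]

# Known series of F^{-1}(u) = cot(pi(1-u)) in powers of v = 1-u
exact = [1/math.pi, -math.pi/3, -math.pi**3/45, -2*math.pi**5/945]
approx = [C(i, 1.0, a, c) for i in range(4)]
for e, t in zip(exact, approx):
    assert abs(e - t) < 1e-12

# Direct check of (2.1) with theta = 1: exponents are ia - psi = 2i - 1
v = 0.01
series = sum(approx[i] * v ** (2*i - 1) for i in range(4))
assert abs(series - 1 / math.tan(math.pi * v)) < 1e-12
```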

Lemma 2.1. For $\theta$ in $\mathbb{C}^k$,

(2.2) $E\Big[\prod_{i=1}^k (1 - U_{n,r_i})^{\theta_i}\Big] = b_n(r : \bar\theta)$,

where

(2.3) $b_n(r : \bar\theta) = \prod_{i=1}^k b(r_i - r_{i-1},\, n - r_i + 1 : \bar\theta_i)$

and $b(\alpha, \beta : \theta) = B(\alpha, \beta + \theta)/B(\alpha, \beta)$. Also, in (1.4),

(2.4) $B_n(r) = \prod_{i=1}^k B(r_i - r_{i-1},\, n - r_i + 1)$.

Since $B(\alpha, \beta) = \infty$ for $\mathrm{Re}(\beta) \le 0$, for (2.2) to be finite we need $n - r_i + 1 + \mathrm{Re}(\bar\theta_i) > 0$ for $1 \le i \le k$.

Proof of Lemma 2.1: Let $I_k$ denote the left hand side of (2.2). Then $I_k = \int \cdots \int B(u : r) \prod_{i=1}^k (1-u_i)^{\theta_i}\, du_1 \cdots du_k$, integrated over $0 < u_1 < \cdots < u_k < 1$, by (1.3). So, (2.2) and (2.4) hold for $k = 1$. For $k = 2$,

$I_2 = \int_0^1 du_1 \int_{u_1}^1 du_2\; u_1^{r_1-1}(1-u_1)^{\theta_1}(u_2-u_1)^{r_2-r_1-1}(1-u_2)^{r_3-r_2-1+\theta_2} \big/ B_n(r)$;

substituting $s_2 = (u_2 - u_1)/(1 - u_1)$ and evaluating the inner then the outer integral gives $\prod_{i=1}^2 B(r_i - r_{i-1}, n - r_i + 1 + \bar\theta_i)/B_n(r)$, which is the right hand side of (2.2) with denominator written as the right hand side of (2.4). Putting $\theta = 0$ gives (2.2) and (2.4) for $k = 2$. Now use induction.

Lemma 2.2. In Lemma 2.1, the restriction

(2.5) $1 \le r_1 < \cdots < r_k \le n$

may be relaxed to $1 \le r_1 \le \cdots \le r_k \le n$.

Proof: For $k = 2$, the second factor in the right hand side of (2.3) is $b(r_2 - r_1, n - r_2 + 1 : \theta_2) = f(\theta_2)/f(0)$, where $f(\theta_2) = \Gamma(n - r_2 + 1 + \theta_2)/\Gamma(n - r_1 + 1 + \theta_2) = 1$ if $r_2 = r_1$, and the first factor is $b(r_1, n - r_1 + 1 : \bar\theta_1) = E\big[(1 - U_{n,r_1})^{\bar\theta_1}\big]$. Similarly, if $r_i = r_{i-1}$, the $i$-th factor is 1 and the product of the $k-1$ others is $E\big[\prod_{j=1, j \ne i}^k (1 - U_{n,r_j})^{\theta_j^*}\big]$, where $\theta_j^* = \theta_j$ for $j \ne i-1$ and $\theta_{i-1}^* = \theta_{i-1} + \theta_i$.

Corollary 2.1. In any formula for $E[g(X_{n,r})]$ for some function $g$, (2.5) may be relaxed in the same way. In particular, this holds for the moments and cumulants of $X_{n,r}$.
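Lemma 2.1 can be checked by simulation. The sketch below (helper names are ours) compares the closed form (2.2)-(2.3), with the cumulative tail sums $\bar\theta_i$, against a Monte Carlo average over uniform order statistics:

```python
import math, random

def b(alpha, beta, theta):
    # b(alpha, beta : theta) = B(alpha, beta + theta)/B(alpha, beta), via log-gamma
    return math.exp(math.lgamma(beta + theta) + math.lgamma(alpha + beta)
                    - math.lgamma(alpha + beta + theta) - math.lgamma(beta))

def b_n(n, r, theta):
    # (2.3): product over i of b(r_i - r_{i-1}, n - r_i + 1 : tail sum of theta)
    out = 1.0
    for i in range(len(r)):
        r_prev = r[i - 1] if i > 0 else 0
        out *= b(r[i] - r_prev, n - r[i] + 1, sum(theta[i:]))
    return out

random.seed(1)
n, r, theta = 10, [3, 7], [0.5, 1.0]
exact = b_n(n, r, theta)

trials = 100_000
acc = 0.0
for _ in range(trials):
    u = sorted(random.random() for _ in range(n))
    acc += (1 - u[r[0] - 1]) ** theta[0] * (1 - u[r[1] - 1]) ** theta[1]
mc = acc / trials
assert abs(mc - exact) < 0.01
```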

This result is very important as it means we can dispense with treating the $2^{k-1}$ cases $r_i < r_{i+1}$ or $r_i = r_{i+1}$, $1 \le i \le k-1$, separately. For example, Hall [7] treats the two cases for $\mathrm{cov}(X_{n,r}, X_{n,s})$ separately, and David and Johnson [3] treat the $2^{k-1}$ cases for the $k$-th order cumulants of $X_{n,r}$ separately for $k \le 4$.

Theorem 2.2. Under the conditions of Theorem 2.1,

(2.6) $E\Big[\prod_{i=1}^k X_{n,r_i}^{\theta_i}\Big] = \sum_{i_1,...,i_k=0}^{\infty} C_{i_1,\psi_1} \cdots C_{i_k,\psi_k}\, b_n\big(r : \overline{ia - \theta/\alpha}\big)$

with $b_n$ as in (2.3), where $\psi = \theta/\alpha$. All terms are finite if $\mathrm{Re}(\bar\theta) < (s+1)\alpha$, where $s_i = n - r_i$.

Lemma 2.3. For $\alpha, \beta$ positive integers and $\theta$ in $\mathbb{C}$,

(2.7) $b(\alpha, \beta : \theta) = \prod_{j=\beta}^{\alpha+\beta-1} (1 + \theta/j)^{-1}$.

So, for $\theta$ in $\mathbb{C}^k$,

(2.8) $b_n(r : \bar\theta) = \prod_{i=1}^k \prod_{j=s_i+1}^{s_{i-1}} \big(1 + \bar\theta_i/j\big)^{-1}$,

where $s_i = n - r_i$ and $r_0 = 0$.

Proof: The left hand side of (2.7) is equal to $\Gamma(\beta+\theta)\Gamma(\alpha+\beta)/\{\Gamma(\alpha+\beta+\theta)\Gamma(\beta)\}$. But $\Gamma(\alpha+x)/\Gamma(x) = (x)_{\alpha}$, so (2.7) holds, and hence (2.8).

From (2.3) we have, interpreting $\prod_{i=2}^{1}$ as 1:

Lemma 2.4. For $s_i = n - r_i$,

(2.9) $b_n(r : \bar\theta) = B(s : \bar\theta)\; n! \big/ \Gamma(n + 1 + \bar\theta_1)$,

where

$B(s : \bar\theta) = \Gamma(s_1 + 1 + \bar\theta_1)\,(s_1!)^{-1} \prod_{i=2}^k b(s_{i-1} - s_i,\, s_i + 1 : \bar\theta_i)$

does not depend on $n$ for fixed $s$.
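The product formula (2.7) is easy to verify directly against the beta-function ratio it rewrites; the sketch below (function names are ours) does so for a few integer parameter choices:

```python
import math

def b_beta(alpha, beta, theta):
    # b(alpha, beta : theta) = B(alpha, beta + theta)/B(alpha, beta)
    return math.exp(math.lgamma(beta + theta) + math.lgamma(alpha + beta)
                    - math.lgamma(alpha + beta + theta) - math.lgamma(beta))

def b_product(alpha, beta, theta):
    # (2.7): prod_{j=beta}^{alpha+beta-1} (1 + theta/j)^{-1}, integer alpha, beta
    out = 1.0
    for j in range(beta, alpha + beta):
        out /= 1 + theta / j
    return out

cases = [(3, 5, 0.7), (1, 2, -0.3), (6, 4, 2.5)]
for alpha, beta, theta in cases:
    assert abs(b_beta(alpha, beta, theta) - b_product(alpha, beta, theta)) < 1e-12
```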

Lemma 2.5. We have $n!/\Gamma(n+1+\theta) = n^{-\theta} \sum_{i=0}^{\infty} e_i(\theta)\, n^{-i}$, where

$e_0(\theta) = 1$,
$e_1(\theta) = -(\theta)_2/2$,
$e_2(\theta) = (\theta)_3 (3\theta+1)/24$,
$e_3(\theta) = -(\theta)_4 (\theta)_2 / (4!\,2)$,
$e_4(\theta) = (\theta)_5 (15\theta^3 + 30\theta^2 + 5\theta - 2)/(5!\,48)$,
$e_5(\theta) = -(\theta)_6 (\theta)_2 (3\theta^2 + 7\theta - 2)/(6!\,16)$,
$e_6(\theta) = (\theta)_7 (63\theta^5 + 315\theta^4 + 315\theta^3 - 91\theta^2 - 42\theta + 16)/(7!\,576)$,
$e_7(\theta) = -(\theta)_8 (\theta)_2 (9\theta^4 + 54\theta^3 + 51\theta^2 - 58\theta + 16)/(8!\,144)$.

Proof: Apply equation (6.1.47) of Abramowitz and Stegun [1].

So, (2.6) and (2.9) yield the joint moments of $X_{n,r}\, n^{-1/\alpha}$ for fixed $s$ as a power series in $(n^{-1}, n^{-a})$:

Corollary 2.2. Under the conditions of Theorem 2.1,

(2.10) $E\Big[\prod_{i=1}^k X_{n,n-s_i}^{\theta_i}\Big] = n! \sum_{j=0}^{\infty} \Gamma\big(n + 1 + ja - \bar\psi_1\big)^{-1} C_j(s : \psi)$,

where $\psi = \theta/\alpha$ and

$C_j(s : \psi) = \sum\big\{C_{i_1,\psi_1} \cdots C_{i_k,\psi_k}\, B\big(s : \overline{ia - \psi}\big) :\ i_1 + \cdots + i_k = j\big\}$.

So, if $s, \theta$ are fixed as $n \to \infty$ and $\mathrm{Re}(\bar\theta) < (s+1)\alpha$, then the left hand side of (2.10) is equal to

(2.11) $n^{\bar\psi_1} \sum_{i,j=0}^{\infty} n^{-i-ja}\, e_i\big(ja - \bar\psi_1\big)\, C_j(s : \psi)$.

If $a$ is rational, say $a = M/N$, then the left hand side of (2.10) is equal to

(2.12) $n^{\bar\psi_1} \sum_{m=0}^{\infty} n^{-m/N}\, d_m(s : \psi)$,

where

$d_m(s : \psi) = \sum\big\{e_i(ja - \bar\psi_1)\, C_j(s : \psi) :\ iN + jM = m\big\} = \sum\big\{e_{m-ja}(ja - \bar\psi_1)\, C_j(s : \psi) :\ 0 \le j \le m/a\big\}$ if $N = 1$;

so, for $d_m$ to depend on $c_1$, and not just $c_0$, we need $m \ge M$.
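Lemma 2.5 (with the signs as reconstructed above) can be checked numerically against the exact gamma ratio; the sketch below (names are ours) uses the first four terms:

```python
import math

def poch(x, i):
    # Rising factorial (x)_i = x (x+1) ... (x+i-1)
    out = 1.0
    for j in range(i):
        out *= x + j
    return out

def e(i, t):
    # e_i(theta) of Lemma 2.5, first four terms
    return [1.0,
            -poch(t, 2) / 2,
            poch(t, 3) * (3*t + 1) / 24,
            -poch(t, 4) * poch(t, 2) / 48][i]

n, theta = 50.0, 0.7
exact = math.exp(math.lgamma(n + 1) - math.lgamma(n + 1 + theta))
approx = n ** (-theta) * sum(e(i, theta) * n ** (-i) for i in range(4))
# The truncation error is O(n^{-4-theta}), far below the tolerance here
assert abs(exact / approx - 1) < 1e-6
```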

The leading term in (2.11) does not involve $c_1$, so it may be deduced from the multivariate extreme value distribution that the law of $X_{n,n-s_i}$, suitably normalized, tends to. The same is true of the leading terms of its cumulants. See Withers and Nadarajah [18] for details.

The leading terms in (2.11) are

$n^{\bar\psi_1}\Big[\big\{1 - n^{-1}(\bar\psi_1)_{(2)}/2\big\} C_0(s : \psi) + n^{-a} C_1(s : \psi) + O\big(n^{-2a_0}\big)\Big]$,

where $a_0 = \min(a, 1)$ and $I_j$ is the vector with $(I_j)_m = I(m = j)$. Here

$C_0(s : \psi) = c_0^{\bar\psi_1} B(s : -\bar\psi)$, $C_1(s : \psi) = c_0^{\bar\psi_1 - a - 1} c_1 \sum_{j=1}^k \psi_j B\big(s : \overline{aI_j - \psi}\big)$.

For $k = 1$,

$C_0(s : \psi) = c_0^{\psi}(s+1)_{-\psi} = c_0^{\psi} / s_{(\psi)}$, $C_1(s : \psi) = \psi c_0^{\psi-a-1} c_1 (s+1)_{a-\psi} = \psi c_0^{\psi-a-1} c_1 / s_{(\psi-a)}$.

Set $\pi_s(\lambda) = b(s_1 - s_2, s_2 + 1 : \lambda) = \prod_{j=s_2+1}^{s_1} (1 + \lambda/j)^{-1}$ for $\lambda$ an integer. For example, $\pi_s(1) = (s_2+1)/(s_1+1)$ and $\pi_s(-1) = s_1/s_2$. Then for $k = 2$,

$C_0(s : \lambda\mathbf{1}) = c_0^{2\lambda}(s_1+1)_{-2\lambda}\,\pi_s(-\lambda) = c_0^2 (s_1-1)^{-1} s_2^{-1}$ for $\lambda = 1$, $= c_0^4 (s_1-2)_{(2)}^{-1}(s_2)_{(2)}^{-1}$ for $\lambda = 2$,

and

$C_1(s : \lambda\mathbf{1}) = \lambda c_0^{2\lambda-a-1} c_1 (s_1+1)_{a-2\lambda}\{\pi_s(-\lambda) + \pi_s(a-\lambda)\}$
$= \lambda c_0^{1-a} c_1 (s_1+1)_{a-2}\{s_1/s_2 + \pi_s(a-1)\}$ for $\lambda = 1$,
$= \lambda c_0^{3-a} c_1 (s_1+1)_{a-4}\{(s_1)_{(2)}/(s_2)_{(2)} + \pi_s(a-2)\}$ for $\lambda = 2$.

Set $\lambda = 1/\alpha$, $Y_{n,s} = X_{n,n-s}/(nc_0)^{\lambda}$ and $E_c = \lambda c_0^{-a-1} c_1$. Then for $s > \lambda - 1$,

(2.13) $E[Y_{n,s}] = \big\{1 - n^{-1}\lambda_{(2)}/2\big\}(s+1)_{-\lambda} + n^{-a}E_c\,(s+1)_{a-\lambda} + O\big(n^{-2a_0}\big)$,

and for $s_1 > 2\lambda - 1$, $s_2 > \lambda - 1$, $s_1 \ge s_2$,

(2.14) $E[Y_{n,s_1} Y_{n,s_2}] = \big\{1 - n^{-1}(2\lambda)_{(2)}/2\big\} B_{2,0} + n^{-a}E_c D_a + O\big(n^{-2a_0}\big)$,

where $B_{2,0} = (s_1+1)_{-2\lambda}\,\pi_s(-\lambda)$, $D_a = (s_1+1)_{a-2\lambda}\{\pi_s(-\lambda) + \pi_s(a-\lambda)\}$, and

(2.15) $\mathrm{Cov}(Y_{n,s_1}, Y_{n,s_2}) = F_0 + F_1/n + n^{-a}E_c F_2 + O\big(n^{-2a_0}\big)$,

where $F_0 = B_{2,0} - (s_1+1)_{-\lambda}(s_2+1)_{-\lambda}$, $F_1 = \lambda_{(2)}(s_1+1)_{-\lambda}(s_2+1)_{-\lambda} - (2\lambda)_{(2)} B_{2,0}/2$ and $F_2 = D_a - (s_1+1)_{-\lambda}(s_2+1)_{a-\lambda} - (s_1+1)_{a-\lambda}(s_2+1)_{-\lambda}$. Similarly, we may use (2.11) to approximate higher order cumulants. If $a = 1$, this gives $E[Y_{n,s}]$ and $\mathrm{Cov}(Y_{n,s_1}, Y_{n,s_2})$ to $O(n^{-2})$.
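In the exactly Pareto case ($c_0 = 1$, all higher $c_i = 0$, so $E_c = 0$), (2.13) can be checked against the exact gamma-ratio moment of a beta-distributed order statistic; the sketch below (variable names are ours, not the paper's) does this for $\lambda = 0.6$:

```python
import math

# Exact Pareto: 1 - F(x) = x^{-1/lambda}, so X_{n,n-s} = (1 - U_{n,n-s})^{-lambda}
# and E[Y_{n,s}] = n^{-lambda} E[(1-U)^{-lambda}] is an exact ratio of gammas.
lam = 0.6
n, s = 100, 3

exact = n ** (-lam) * math.exp(
    math.lgamma(s + 1 - lam) + math.lgamma(n + 1)
    - math.lgamma(n + 1 - lam) - math.lgamma(s + 1))

# (2.13) with E_c = 0: {1 - n^{-1} lambda(lambda-1)/2} (s+1)_{-lambda},
# where (s+1)_{-lambda} = Gamma(s+1-lambda)/Gamma(s+1)
approx = (1 - lam * (lam - 1) / (2 * n)) * math.exp(
    math.lgamma(s + 1 - lam) - math.lgamma(s + 1))

assert abs(exact - approx) < 1e-5   # remaining error is O(n^{-2})
```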

Example 2.1. Suppose $\alpha = 1$. Then $\lambda = 1$, $Y_{n,s} = X_{n,n-s}/(nc_0)$, $E_c = c_0^{-a-1}c_1$, $B_{2,0} = -F_1 = (s_1-1)^{-1}s_2^{-1}$, $F_0 = (s_1)_{(2)}^{-1}s_2^{-1}$, $D_a = (s_1+1)_{a-2}\,G_a$, where $G_a = s_1/s_2 + \pi_s(a-1)$ for $s_1 > s_2$, $G_a = 2$ for $s_1 = s_2$, and $F_2 = D_a - (s_1+1)_{-1}(s_2+1)_{a-1} - (s_1+1)_{a-1}(s_2+1)_{-1}$. So,

(2.16) $E[Y_{n,s}] = s^{-1} + n^{-a}E_c\,(s+1)_{a-1} + O\big(n^{-2a_0}\big)$ for $s > 0$,

and (2.14)-(2.15) hold if

(2.17) $s_1 > 1$, $s_2 > 0$, $s_1 \ge s_2$.

A little calculation shows that $C_0(s : \mathbf{1}) = c_0^k B_{k,0}$, $C_1(s : \mathbf{1}) = c_0^{k-a-1}c_1 B_{k,\Sigma}$, and

$E\Big[\prod_{i=1}^k Y_{n,s_i}\Big] = \big\{1 - n^{-1}k_{(2)}/2\big\} B_{k,0} + n^{-a}E_c B_{k,\Sigma} + O\big(n^{-2a_0}\big) = m_0(s) + n^{-1}m_1(s) + n^{-a}m_a(s) + O\big(n^{-2a_0}\big)$,

say, for $s_i > k - i$, $1 \le i \le k$, and $s_1 \ge \cdots \ge s_k$, where $B_{k,\Sigma} = \sum_{j=1}^k B_{k,j}$,

$B_{k,0} = \prod_{i=1}^k (s_i - k + i)^{-1}$,
$B_{k,j} = \prod_{i=1}^{j-1} (s_i - k + a + i)^{-1} \cdot (s_j - k + j + 1)_{a-1} \cdot \prod_{i=j+1}^{k} (s_i - k + i)^{-1}$,
$B_{k,k} = \prod_{i=1}^{k-1} (s_i - k + a + i)^{-1}\,(s_k + 1)_{a-1}$,

for $s_i > k - i$ and $1 \le j < k$. For example, $B_{1,0} = s_1^{-1}$, $B_{2,0} = (s_1-1)^{-1}s_2^{-1}$ and $B_{3,0} = (s_1-2)^{-1}(s_2-1)^{-1}s_3^{-1}$. So, $\kappa_n(s) = \kappa(Y_{n,s_1}, ..., Y_{n,s_k})$, the joint cumulant of $Y_{n,s_1}, ..., Y_{n,s_k}$, is given by $\kappa_n(s) = \kappa_0(s) + n^{-1}\kappa_1(s) + n^{-a}\kappa_a(s) +$

$O\big(n^{-2a_0}\big)$, where, for example,

$\kappa_0(s_1,s_2,s_3) = m_0(s_1,s_2,s_3) - m_0(s_1)m_0(s_2,s_3) - m_0(s_2)m_0(s_1,s_3) - m_0(s_3)m_0(s_1,s_2) + 2\prod_{i=1}^3 m_0(s_i) = 2(s_1 + s_2 - 2)/D(s_1,s_2,s_3)$,

$\kappa_1(s_1,s_2,s_3) = m_1(s_1,s_2,s_3) - m_0(s_1)m_1(s_2,s_3) - m_0(s_2)m_1(s_1,s_3) - m_0(s_3)m_1(s_1,s_2) = -2\big\{s_1^2 - 2s_1 + (2s_1 - 1)s_2\big\}/D(s_1,s_2,s_3)$

since $m_1(s_1) = 0$, and

$\kappa_a(s_1,s_2,s_3) = m_a(s_1,s_2,s_3) - m_0(s_1)m_a(s_2,s_3) - m_a(s_1)m_0(s_2,s_3) - m_0(s_2)m_a(s_1,s_3) - m_a(s_2)m_0(s_1,s_3) - m_0(s_3)m_a(s_1,s_2) - m_a(s_3)m_0(s_1,s_2) + 2m_0(s_1)m_0(s_2)m_a(s_3) + 2m_0(s_3)m_0(s_1)m_a(s_2) + 2m_0(s_2)m_0(s_3)m_a(s_1)$,

where $D(s_1,s_2,s_3) = (s_1)_{(3)}(s_2)_{(2)}s_3$.

Consider the case $a = 1$. Then $\kappa_a(s_1,s_2,s_3) = 0$, so

(2.18) $\kappa_n(s_1,s_2,s_3) = 2\big\{s_1 + s_2 - 2 - n^{-1}\big(s_1^2 - 2s_1 + (2s_1-1)s_2\big)\big\}\big/D(s_1,s_2,s_3) + O\big(n^{-2}\big)$.

Set $\bar s = \sum_{j=1}^k s_j$. Then, for $a = 1$,

$B_{1,\Sigma} = B_{1,1} = 1$, $B_{2,1} = s_2^{-1}$, $B_{2,2} = s_1^{-1}$, $B_{2,\Sigma} = s_1^{-1} + s_2^{-1} = (s_1 + s_2)/(s_1 s_2)$,
$B_{3,1} = (s_2-1)^{-1}s_3^{-1}$, $B_{3,2} = (s_1-1)^{-1}s_3^{-1}$, $B_{3,3} = (s_1-1)^{-1}s_2^{-1}$,
$B_{3,\Sigma} = \big\{(\bar s - 2)s_2 - s_3\big\}(s_1-1)^{-1}(s_2)_{(2)}^{-1}s_3^{-1}$,
$B_{4,1} = (s_2-2)^{-1}(s_3-1)^{-1}s_4^{-1}$, $B_{4,2} = (s_1-2)^{-1}(s_3-1)^{-1}s_4^{-1}$, $B_{4,3} = (s_1-2)^{-1}(s_2-1)^{-1}s_4^{-1}$, $B_{4,4} = (s_1-2)^{-1}(s_2-1)^{-1}s_3^{-1}$,
$B_{4,\Sigma} = \big\{(s_2-1)(s_1+s_2-4)s_3 + (s_2-2)(s_3-1)(s_3+s_4)\big\}\big\{(s_1-2)(s_2-1)_{(2)}(s_3)_{(2)}s_4\big\}^{-1}$.

Also, $E_c = c_0^{-2}c_1$, $D_a = s_1^{-1} + s_2^{-1}$, $F_2 = 0$, and

(2.19) $E[Y_{n,s}] = s^{-1} + n^{-1}E_c + O\big(n^{-2}\big)$ for $s > 0$,

(2.20) $E[Y_{n,s_1} Y_{n,s_2}] = \big(1 - n^{-1}\big)B_{2,0} + n^{-1}E_c D_a + O\big(n^{-2}\big)$ if (2.17) holds,

(2.21) $\mathrm{Cov}(Y_{n,s_1}, Y_{n,s_2}) = (s_1)_{(2)}^{-1}s_2^{-1}\big(1 - n^{-1}s_1\big) + O\big(n^{-2}\big)$ if (2.17) holds.

In the case $a \ge 2$, (2.19)-(2.21) hold with $E_c$ replaced by 0. In the case $a \le 1$, (2.14)-(2.16) with $a_0 = a$ give terms to $O\big(n^{-2a}\big)$, with the $n^{-1}$ terms disposable if $a \le 1/2$.
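In the exact Pareto case with $\alpha = 1$ ($c_0 = 1$, higher $c_i = 0$), every moment in Example 2.1 reduces to a beta moment, so (2.21) can be checked in closed form; the sketch below (helper `b` is ours) does this, and in this special case the remainder vanishes identically:

```python
import math

def b(alpha, beta, theta):
    # b(alpha, beta : theta) = B(alpha, beta + theta)/B(alpha, beta)
    return math.exp(math.lgamma(beta + theta) + math.lgamma(alpha + beta)
                    - math.lgamma(alpha + beta + theta) - math.lgamma(beta))

# Pareto with alpha = 1: Y_{n,s} = X_{n,n-s}/n and X_{n,r} = (1 - U_{n,r})^{-1},
# so all moments are exact beta moments via Lemma 2.1.
n, s1, s2 = 200, 5, 2
r1, r2 = n - s1, n - s2

m1 = b(r1, s1 + 1, -1.0) / n                                  # E[Y_{n,s1}]
m2 = b(r2, s2 + 1, -1.0) / n                                  # E[Y_{n,s2}]
# Lemma 2.1 with theta = (-1, -1): tail sums are (-2, -1)
m12 = b(r1, s1 + 1, -2.0) * b(r2 - r1, s2 + 1, -1.0) / n**2   # E[Y1 Y2]
cov = m12 - m1 * m2

# (2.21): Cov = (1 - s1/n) / (s1 (s1-1) s2) + O(n^{-2})
approx = (1 - s1 / n) / (s1 * (s1 - 1) * s2)
assert abs(cov - approx) < 1e-10
```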

We now investigate what extra terms are needed to make (2.19)-(2.21) depend on $c$ when $a = 1$ or 2.

Example 2.2. $\alpha = \beta = 1$. Here, we find the coefficients of $n^{-2}$. By (2.12),

$d_2(s : \psi) = \sum_{j=0}^{2} e_{2-j}\big(j - \bar\psi_1\big) C_j(s : \psi) = e_2(-\bar\psi_1)C_0(s : \psi) + e_1(1 - \bar\psi_1)C_1(s : \psi) + C_2(s : \psi) = C_2(s : \psi)$ if $\bar\psi_1 = 1$ or 2.

For $k = 1$, $C_2(s : \psi) = C_{2,\psi}(s+1)_{2-\psi}$, where $C_{2,\psi} = \psi c_0^{\psi-4}\{c_0 c_2 + (\psi-3)c_1^2/2\}$, so $d_2(s : 1) = (s+1)F_c$, where $F_c = c_0^{-3}\big(c_0 c_2 - c_1^2\big)$, so in (2.19) we may replace $O(n^{-2})$ by $n^{-2}(s+1)F_c\, c_0^{-1} + O(n^{-3})$. For $k = 2$,

$C_2(s : \mathbf{1}) = \sum\big\{C_{i,1}C_{j,1}B(s : (i+j-2,\, j-1)) :\ i + j = 2\big\} = C_{0,1}C_{2,1}\big\{B(s : (0,-1)) + B(s : (0,1))\big\} + C_{1,1}^2 B(s : (0,0))$,

where $B(s : (0,\lambda)) = b(s_1 - s_2, s_2 + 1 : \lambda) = \pi_s(\lambda)$, so $d_2(s : \mathbf{1}) = C_2(s : \mathbf{1}) = D_{2,s}H_c + c_0^{-2}c_1^2$, where $D_{2,s} = (s_2+1)(s_1+1)^{-1} + s_1 s_2^{-1}$, $H_c = c_0^{-2}\big(c_0 c_2 - c_1^2\big)$, and in (2.20) we may replace $O(n^{-2})$ by $n^{-2}d_2(s : \mathbf{1})c_0^{-2} + O(n^{-3})$. Upon simplifying, this gives

$\mathrm{Cov}(Y_{n,s_1}, Y_{n,s_2}) = (s_1)_{(2)}^{-1}s_2^{-1}\big(1 - n^{-1}s_1\big) - c_0^{-2}H_c F_{3,s}\, n^{-2} + O\big(n^{-3}\big)$,

where $F_{3,s} = (s_2+1)/(s_1+1)_{(2)} + s_2^{-1}$.

Example 2.3. $\alpha = 1$, $\beta = 2$. So, $a = 2$, $\lambda = 1$, $\psi = \theta$. By (2.12),

$d_2(s : \psi) = \sum_{j=0}^{1} e_{2-2j}\big(2j - \bar\psi_1\big) C_j(s : \psi) = e_2(-\bar\psi_1)C_0(s : \psi) + C_1(s : \psi) = C_1(s : \psi)$ if $\bar\psi_1 = 0$, 1 or 2.

For $k = 1$,

$C_1(s : \psi) = \psi c_0^{\psi-3}c_1(s+1)_{2-\psi} = c_0^{-2}c_1(s+1)$ if $\psi = 1$, $= 2c_0^{-1}c_1$ if $\psi = 2$,

so $E[Y_{n,s}] = s^{-1} + c_0^{-3}c_1(s+1)n^{-2} + O\big(n^{-3}\big)$ for $s > 0$. For $k = 2$, $C_1(s : \mathbf{1}) = c_0^{-1}c_1 D_{2,s}$ for $D_{2,s}$ above, so

$E[Y_{n,s_1}Y_{n,s_2}] = \big(1 - n^{-1}\big)(s_1-1)^{-1}s_2^{-1} + n^{-2}c_0^{-3}c_1 D_{2,s} + O\big(n^{-3}\big)$

and

$\mathrm{Cov}(Y_{n,s_1}, Y_{n,s_2}) = (s_1)_{(2)}^{-1}s_2^{-1}\big(1 - n^{-1}s_1\big) - n^{-2}c_0^{-3}c_1 F_{3,s} + O\big(n^{-3}\big)$.

3. EXAMPLES

Example 3.1. For Student's t distribution, $X = t_N$ has density function

$g_N\big(1 + x^2/N\big)^{-\gamma} = \sum_{i=0}^{\infty} d_i x^{-2\gamma - 2i}$,

where $\gamma = (N+1)/2$, $g_N = \Gamma(\gamma)/\big\{(N\pi)^{1/2}\Gamma(N/2)\big\}$ and $d_i = \binom{-\gamma}{i} N^{\gamma+i} g_N$. So, (1.6) holds with $\alpha = N$, $\beta = 2$ and $c_i = d_i/(N + 2i)$. In particular,

$c_0 = N^{\gamma-1}g_N$, $c_1 = -\gamma N^{\gamma+1}(N+2)^{-1}g_N = -N^{\gamma+1}(N+1)(N+2)^{-1}g_N/2$, $c_2 = (\gamma)_2 N^{\gamma+2}(N+4)^{-1}g_N/2$, $c_3 = -(\gamma)_3 N^{\gamma+3}(N+6)^{-1}g_N/6$,

and so on. So, $a = 2/N$, and (2.12) gives an expression in powers of $n^{-a/2}$ if $N$ is odd or of $n^{-a}$ if $N$ is even. The first term in (2.12) to involve $c_1$, and not just $c_0$, is the coefficient of $n^{-a}$. Putting $N = 1$ we obtain Example 3.2.

Example 3.2. For the Cauchy distribution, (1.6) holds with $\alpha = 1$, $\beta = 2$ and $c_i = (-1)^i(2i+1)^{-1}\pi^{-1}$. So, $a = 2$, $\psi = \theta$, $C_{0,\psi} = \pi^{-\psi}$, $C_{1,\psi} = -\psi\pi^{2-\psi}/3$, $C_{2,\psi} = \psi\pi^{4-\psi}\{1/5 + (\psi-5)/18\}$ and $C_{3,\psi} = \psi\pi^{6-\psi}\{-1/105 + 2\psi/45 - (\psi+1)_2/162\}$. By Example 2.3, $Y_{n,s} = (\pi/n)X_{n,n-s}$ satisfies

(3.1) $E[Y_{n,s}] = s^{-1} - n^{-2}\pi^2(s+1)/3 + O\big(n^{-3}\big)$ for $s > 0$,

and when (2.17) holds,

(3.2) $E[Y_{n,s_1}Y_{n,s_2}] = \big(1 - n^{-1}\big)(s_1-1)^{-1}s_2^{-1} - n^{-2}\pi^2 D_{2,s}/3 + O\big(n^{-3}\big)$

for $D_{2,s} = (s_2+1)/(s_1+1) + s_1/s_2$, and

$\mathrm{Cov}(Y_{n,s_1}, Y_{n,s_2}) = (s_1)_{(2)}^{-1}s_2^{-1}\big(1 - n^{-1}s_1\big) + n^{-2}\pi^2 F_{3,s}/3 + O\big(n^{-3}\big)$

for $F_{3,s} = (s_2+1)/(s_1+1)_{(2)} + s_2^{-1}$. Page 274 of Hall [7] gave the first term in (3.1) and in (3.2) when $s_1 = s_2$, but his version of (3.2) for $s_1 > s_2$ replaces $(s_1-1)^{-1}s_2^{-1}$ and $D_{2,s}$ by complicated expressions, each with $s_1 - s_2$ terms. The joint cumulant of order three for $\{Y_{n,s_i},\ 1 \le i \le 3\}$ is given by (2.18). Hall points out that $F^{-1}(u) = \cot(\pi - \pi u)$, so $F^{-1}(u) = \sum_{i=0}^{\infty} (1-u)^{2i-1} C_{i,1}$, where $C_{i,1} = \big(-4\pi^2\big)^i \pi^{-1} B_{2i}/(2i)!$ with $B_{2i}$ the Bernoulli numbers.
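Expansion (3.1) can be spot-checked by simulation. The rough Monte Carlo sketch below (our code, with $X_{n,n-s}$ read as the $(n-s)$-th order statistic and a deliberately loose tolerance) uses the inverse-CDF draw $\tan(\pi(U - 1/2))$ for Cauchy variates:

```python
import math, random

random.seed(3)
n, s = 50, 3
trials = 40_000
acc = 0.0
for _ in range(trials):
    cauchy = sorted(math.tan(math.pi * (random.random() - 0.5))
                    for _ in range(n))
    acc += math.pi * cauchy[n - s - 1] / n    # Y_{n,s} = (pi/n) X_{n,n-s}
mc = acc / trials

theory = 1 / s - math.pi ** 2 * (s + 1) / (3 * n ** 2)   # (3.1)
assert abs(mc - theory) < 0.01
```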

Example 3.3. Consider the F distribution. For $N, M \ge 1$, set $\nu = M/N$, $\gamma = (M+N)/2$ and $g_{M,N} = \nu^{M/2}/B(M/2, N/2)$. Then $X = F_{M,N}$ has density function

$g_{M,N}\, x^{M/2-1}(1 + \nu x)^{-\gamma} = g_{M,N}\,\nu^{-\gamma} x^{-N/2-1}\big(1 + \nu^{-1}x^{-1}\big)^{-\gamma} = \sum_{i=0}^{\infty} d_i x^{-N/2-1-i}$,

where $d_i = h_{M,N}\binom{-\gamma}{i}\nu^{-i}$ and $h_{M,N} = g_{M,N}\,\nu^{-\gamma} = \nu^{-N/2}\big/ B(M/2, N/2)$. So, for $N > 2$, (1.6) holds with $\alpha = N/2 - 1$, $\beta = 1$ and $c_i = d_i/(N/2 + i - 1)$. If $N = 4$ then $\alpha = 1$ and Examples 2.1-2.2 apply. Otherwise (2.13)-(2.15) give $E[Y_{n,s}]$, $E[Y_{n,s_1}Y_{n,s_2}]$ and $\mathrm{Cov}(Y_{n,s_1}, Y_{n,s_2})$ to $O\big(n^{-2a_0}\big)$, where $Y_{n,s} = X_{n,n-s}/(nc_0)^{\lambda}$, $\lambda = 1/\alpha$, $a = 2/(N-2)$, and $a_0 = \min(a, 1) = a$ if $N \ge 4$, $a_0 = \min(a, 1) = 1$ if $N < 4$.

Example 3.4. Consider the stable laws. Page 549 of Feller [5] proves that the general stable law of index $\alpha \in (0,1)$ has density function

$\sum_{k=1}^{\infty} x^{-1-\alpha k}\, a_k(\alpha, \gamma)$,

where $a_k(\alpha, \gamma) = (1/\pi)\,\Gamma(k\alpha + 1)\big\{(-1)^k/k!\big\}\sin\{k\pi(\gamma - \alpha)/2\}$ and $|\gamma| \le \alpha$. So, for $x > 0$ its distribution function $F$ satisfies (1.6) with $\beta = \alpha$ and $c_i = a_{i+1}(\alpha, \gamma)\,\alpha^{-1}(i+1)^{-1}$. Since $a = 1$, the first two moments of $Y_{n,s} = X_{n,n-s}/(nc_0)^{\lambda}$, where $\lambda = 1/\alpha$, are given to $O\big(n^{-2}\big)$ by (2.13)-(2.15).

Example 3.5. Finally, consider the second extreme value distribution. Suppose $F(x) = \exp(-x^{-\alpha})$ for $x > 0$, where $\alpha > 0$. Then (1.6) holds with $\beta = \alpha$ and $c_i = (-1)^i/(i+1)!$. Since $a = 1$, the first two moments of $Y_{n,s} = X_{n,n-s}/n^{1/\alpha}$ are given to $O\big(n^{-2}\big)$ by (2.13)-(2.15).
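Example 3.5 with $\alpha = 1$ lends itself to a direct check: $X_{n,n-s} = -1/\log U_{n,n-s}$ and the order statistic $U_{n,n-s} \sim \mathrm{Beta}(n-s, s+1)$ can be sampled directly. The sketch below uses this standard Beta representation (not from the paper); here (2.13) gives $E[Y_{n,s}] = 1/s - 1/(2n) + O(n^{-2})$ since $\lambda = a = 1$, $c_0 = 1$ and $c_1 = -1/2$:

```python
import math, random

random.seed(5)
n, s = 100, 3
trials = 200_000
acc = 0.0
for _ in range(trials):
    u = random.betavariate(n - s, s + 1)      # U_{n,n-s}
    acc += (-1 / math.log(u)) / n             # Y_{n,s} = X_{n,n-s}/n
mc = acc / trials

theory = 1 / s - 1 / (2 * n)                  # (2.13) for this case
assert abs(mc - theory) < 0.005
```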

APPENDIX: AN INVERSION THEOREM

Given $x_j = y_j/j!$ for $j \ge 1$, set

(A.1) $S = \hat S(t, x) = \sum_{j=1}^{\infty} x_j t^j = S(t, y) = \sum_{j=1}^{\infty} y_j t^j/j!$.

The partial ordinary and exponential Bell polynomials $\hat B_{r,i}(x)$ and $B_{r,i}(y)$ are defined for $r = 0, 1, ...$ by

$S^i = \sum_{r=i}^{\infty} t^r \hat B_{r,i}(x) = i! \sum_{r=i}^{\infty} t^r B_{r,i}(y)/r!$.

So, $\hat B_{r,0}(x) = B_{r,0}(y) = I(r = 0)$, $\hat B_{r,i}(\lambda x) = \lambda^i \hat B_{r,i}(x)$ and $B_{r,i}(\lambda y) = \lambda^i B_{r,i}(y)$. They are tabled on pages 307-309 of Comtet [2] for $r \le 10$ and $r \le 12$. Note that

(A.2) $(1 + \lambda S)^{\alpha} = \sum_{r=0}^{\infty} t^r \hat C_r = \sum_{r=0}^{\infty} t^r C_r/r!$,

where

(A.3) $\hat C_r = \hat C_r(\alpha, \lambda, x) = \sum_{i=0}^{r} \binom{\alpha}{i}\hat B_{r,i}(x)\lambda^i$ and $C_r = C_r(\alpha, \lambda, y) = \sum_{i=0}^{r} B_{r,i}(y)\,\alpha_{(i)}\lambda^i$.

So, $\hat C_0 = 1$, $\hat C_1 = \alpha\lambda x_1$, $\hat C_2 = \alpha\lambda x_2 + \alpha_{(2)}\lambda^2 x_1^2/2$, $\hat C_3 = \alpha\lambda x_3 + \alpha_{(2)}\lambda^2 x_1 x_2 + \alpha_{(3)}\lambda^3 x_1^3/6$, and $C_0 = 1$, $C_1 = \alpha\lambda y_1$, $C_2 = \alpha\lambda y_2 + \alpha_{(2)}\lambda^2 y_1^2$. Similarly,

$\log(1 + \lambda S) = \sum_{r=1}^{\infty} t^r \hat D_r = \sum_{r=1}^{\infty} t^r D_r/r!$ and $\exp(\lambda S) = 1 + \sum_{r=1}^{\infty} t^r \hat B_r = 1 + \sum_{r=1}^{\infty} t^r B_r/r!$,

where

$\hat D_r = \hat D_r(\lambda, x) = -\sum_{i=1}^{r} \hat B_{r,i}(x)(-\lambda)^i/i$, $D_r = D_r(\lambda, y) = -\sum_{i=1}^{r} B_{r,i}(y)(-\lambda)^i (i-1)!$,

and

$\hat B_r = \hat B_r(\lambda, x) = \sum_{i=1}^{r} \hat B_{r,i}(x)\lambda^i/i!$, $B_r = B_r(\lambda, y) = \sum_{i=1}^{r} B_{r,i}(y)\lambda^i$.

Here, $\hat B_r(1, x)$ and $B_r(1, y)$ are known as the complete ordinary and exponential Bell polynomials. If $x_j = y_j = 0$ for $j$ even, then $S = t^{-1}\sum_{j=1}^{\infty} X_j t^{2j}$, where $X_j = x_{2j-1}$, so

$S^i = t^{-i}\sum_{r=i}^{\infty} t^{2r}\hat B_{r,i}(X)$ and $\exp(\lambda S) = 1 + \sum_{k=1}^{\infty} t^k \hat B_k^*$,

where $\hat B_k^* = \sum\big\{\hat B_{r,i}(X)\lambda^i/i! :\ i = 2r - k,\ k/2 \le r \le k\big\}$.

The following derives from Lagrange's inversion formula.

Theorem A.1. Let $k$ be a positive integer and $a$ any real number. Suppose

$v/u = \sum_{i=0}^{\infty} x_i u^{ia} = \sum_{i=0}^{\infty} y_i u^{ia}/i!$

with $x_0 \ne 0$. Then

$(u/v)^k = \sum_{i=0}^{\infty} x_i^* v^{ia} = \sum_{i=0}^{\infty} y_i^* v^{ia}/i!$,

where $x_i^* = x_i^*(a, k, x)$ and $y_i^* = y_i^*(a, k, y)$ are given by

(A.4) $x_i^* = k n^{-1} x_0^{-n}\,\hat C_i(-n, 1/x_0, x) = k x_0^{-n}\sum_{j=0}^{i} (n+1)_{j-1}\hat B_{i,j}(x)(-x_0)^{-j}/j!$

and

(A.5) $y_i^* = k n^{-1} y_0^{-n}\, C_i(-n, 1/y_0, y) = k y_0^{-n}\sum_{j=0}^{i} (n+1)_{j-1} B_{i,j}(y)(-y_0)^{-j}$,

respectively, where $n = k + ai$.

Proof: $u/v$ has a power series in $v^a$, so that $(u/v)^k$ does also. A little work shows that (A.4)-(A.5) are correct for $i = 0, 1, 2, 3$, and so by induction that $x_i^* x_0^{ia}$ and $y_i^* y_0^{ia}$ are polynomials in $a$ of degree $i - 1$. Hence, (A.4)-(A.5) will hold true for all $a$ if they hold true for all positive integers $a$. Suppose then $a$ is a positive

integer. Since $v/u = x_0\big(1 + x_0^{-1}S\big)$ for $S = \hat S(u^a, x) = S(u^a, y)$, the coefficient of $u^{ai}$ in $(v/u)^{-n}$ is $x_0^{-n}\hat C_i(-n, 1/x_0, x) = y_0^{-n}C_i(-n, 1/y_0, y)/i!$. Now set $n = k + ai$ and apply Theorem A on page 148 of Comtet [2] to $v = f(u) = \sum_{i=0}^{\infty} x_i u^{1+ai}$. Theorem F on page 15 of Comtet [2] proves (A.4) for the case $k = 1$ and $a$ a positive integer.

ACKNOWLEDGMENTS

The authors would like to thank the Editor and the referee for careful reading and comments which greatly improved the paper.

REFERENCES

[1] Abramowitz, M. and Stegun, I.A. (1964). Handbook of Mathematical Functions, National Bureau of Standards, Washington, D.C.
[2] Comtet, L. (1974). Advanced Combinatorics, Reidel, Dordrecht.
[3] David, F.N. and Johnson, N.L. (1954). Statistical treatment of censored data. Part I: Fundamental formulae, Biometrika, 41, 225-231.
[4] Downey, P.J. (1990). Distribution-free bounds on the expectation of the maximum with scheduling applications, Operations Research Letters, 9, 189-201.
[5] Feller, W. (1966). An Introduction to Probability Theory and Its Applications, volume 2, John Wiley and Sons, New York.
[6] Gomes, M.I. and Guillou, A. (2014). Extreme value theory and statistics of univariate extremes: a review, International Statistical Review, doi: 10.1111/insr.12058.
[7] Hall, P. (1978). Some asymptotic expansions of moments of order statistics, Stochastic Processes and Their Applications, 7, 265-275.
[8] Hill, B. (1975). A simple general approach to inference about the tail of a distribution, Annals of Statistics, 3, 1163-1173.
[9] Hill, T.P. and Spruill, M.C. (1994). On the relationship between convergence in distribution and convergence of expected extremes, Proceedings of the American Mathematical Society, 121, 1235-1243.

[10] Hüsler, J., Piterbarg, V. and Seleznjev, O. (2003). On convergence of the uniform norms for Gaussian processes and linear approximation problems, Annals of Applied Probability, 13, 1615-1653.
[11] James, G.S. and Mayne, A.J. (1962). Cumulants of functions of random variables, Sankhyā A, 24, 47-54.
[12] McCord, J.R. (1964). On asymptotic moments of extreme statistics, Annals of Mathematical Statistics, 35, 1738-1745.
[13] Nair, K.A. (1981). Asymptotic distribution and moments of normal extremes, Annals of Probability, 9, 150-153.
[14] Novak, S.Y. and Utev, S.A. (1990). Asymptotics of the distribution of the ratio of sums of random variables, Siberian Mathematical Journal, 31, 781-788.
[15] Pickands, J. (1968). Moment convergence of sample extremes, Annals of Mathematical Statistics, 39, 881-889.
[16] Ramachandran, G. (1984). Approximate values for the moments of extreme order statistics in large samples. In: Statistical Extremes and Applications (Vimeiro, 1983), pp. 563-578, NATO Advanced Science Institutes Series C: Mathematical and Physical Sciences, volume 131, Reidel, Dordrecht.
[17] Stuart, A. and Ord, J.K. (1987). Kendall's Advanced Theory of Statistics, fifth edition, volume 1, Griffin, London.
[18] Withers, C.S. and Nadarajah, S. (2014). Asymptotic multivariate distributions and moments of extremes, Technical Report, Applied Mathematics Group, Industrial Research Ltd., Lower Hutt, New Zealand.