A Skewed Look at Bivariate and Multivariate Order Statistics


A Skewed Look at Bivariate and Multivariate Order Statistics
Prof. N. Balakrishnan, Dept. of Mathematics & Statistics, McMaster University, Canada
bala@mcmaster.ca

Presented with great pleasure as a Plenary Lecture at the Tartu Conference, 2007.

Roadmap
1. Order Statistics
2. Skew-Normal Distribution
3. OS from BVN Distribution
4. Generalized Skew-Normal Distribution
5. OS from TVN Distribution
6. OS Induced by Linear Functions
7. OS from BV and TV t Distributions
8. Bibliography

Order Statistics
Let X_1, ..., X_n be n independent and identically distributed (IID) random variables from a population with cumulative distribution function (cdf) F(x) and an absolutely continuous probability density function (pdf) f(x). If we arrange these X_i's in increasing order of magnitude, we obtain the so-called order statistics, denoted by X_{1:n} ≤ X_{2:n} ≤ ··· ≤ X_{n:n}, which are clearly dependent.

Order Statistics (cont.)
Using a multinomial argument, we readily have, for r = 1, ..., n,
\Pr(x < X_{r:n} \le x + \delta x) = \frac{n!}{(r-1)!\,(n-r)!}\,\{F(x)\}^{r-1}\,\{F(x+\delta x) - F(x)\}\,\{1 - F(x+\delta x)\}^{n-r} + O\big((\delta x)^2\big).
From this, we obtain the pdf of X_{r:n} as (for x \in \mathbb{R})
f_{r:n}(x) = \frac{n!}{(r-1)!\,(n-r)!}\,\{F(x)\}^{r-1}\,\{1 - F(x)\}^{n-r}\,f(x).

Order Statistics (cont.)
Similarly, we obtain the joint pdf of (X_{r:n}, X_{s:n}) as (for 1 ≤ r < s ≤ n and x < y)
f_{r,s:n}(x,y) = \frac{n!}{(r-1)!\,(s-r-1)!\,(n-s)!}\,\{F(x)\}^{r-1}\,f(x)\,\{F(y) - F(x)\}^{s-r-1}\,\{1 - F(y)\}^{n-s}\,f(y).
From the pdf and joint pdf, we can derive, for example, means, variances and covariances of order statistics, and also study their dependence structure. The area of order statistics has a long and rich history, and a very vast literature.
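
The following Python sketch (an addition, not part of the original slides) illustrates the marginal formula numerically: it compares the exact pdf of X_{r:n} for a standard normal sample with a Monte Carlo histogram estimate; the choices of n, r and the evaluation grid are arbitrary.

```python
import numpy as np
from scipy import stats
from math import factorial

def pdf_order_stat(x, r, n, dist=stats.norm):
    """Exact pdf of the r-th order statistic X_{r:n} for an IID sample from `dist`."""
    F, f = dist.cdf(x), dist.pdf(x)
    const = factorial(n) / (factorial(r - 1) * factorial(n - r))
    return const * F**(r - 1) * (1 - F)**(n - r) * f

rng = np.random.default_rng(1)
n, r = 5, 2                                    # arbitrary sample size and rank
samples = np.sort(rng.standard_normal((200_000, n)), axis=1)[:, r - 1]

# Compare the exact density with a histogram-based Monte Carlo estimate.
hist, edges = np.histogram(samples, bins=60, range=(-3, 3), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
for x in np.linspace(-3, 3, 7):
    mc = hist[np.argmin(np.abs(centres - x))]
    print(f"x={x:+.1f}  exact={pdf_order_stat(x, r, n):.4f}  MC≈{mc:.4f}")
```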

Order Statistics (cont.)
Some key references are the books by
H.A. David (1970, 1981)
B. Arnold & N. Balakrishnan (1989)
N. Balakrishnan & A.C. Cohen (1991)
B. Arnold, N. Balakrishnan & H.N. Nagaraja (1992)
N. Balakrishnan & C.R. Rao (1998a,b)
H.A. David & H.N. Nagaraja (2003)
However, most of the literature on order statistics has focused on the independent case, and very little on the dependent case.

Skew-Normal Distribution
The skew-normal distribution has pdf
\varphi(x) = 2\,\Phi(\lambda x)\,\phi(x), \quad x \in \mathbb{R}, \ \lambda \in \mathbb{R},
where φ(·) and Φ(·) are the standard normal pdf and cdf, respectively. Note that:
λ ∈ R is a shape parameter;
λ = 0 corresponds to the standard normal case;
λ → ±∞ corresponds to the half-normal case;
location and scale parameters can be introduced into the model as well.
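
As an added illustration (not from the slides), the sketch below draws from the skew-normal law via the hidden-truncation representation X | (Y < λX), with X and Y independent standard normals, and checks the resulting histogram against 2Φ(λx)φ(x); the value λ = 2 and the sample size are arbitrary choices for the check.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
lam = 2.0                      # arbitrary shape parameter for the check
x, y = rng.standard_normal((2, 500_000))
z = x[y < lam * x]             # hidden-truncation draw: X | (Y < lambda X)

# Exact skew-normal density 2*Phi(lam*x)*phi(x) vs. histogram of the draws
hist, edges = np.histogram(z, bins=60, range=(-3, 3), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
for t in np.linspace(-3, 3, 13):
    exact = 2 * norm.cdf(lam * t) * norm.pdf(t)
    mc = hist[np.argmin(np.abs(centres - t))]
    print(f"x={t:+.2f}  exact={exact:.4f}  MC≈{mc:.4f}")
```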

Skew-Normal Distribution (cont.)
Azzalini's (1985, Scand. J. Statist.) article generated a lot of work on this family. This distribution was, however, present either explicitly or implicitly in the early works of
Birnbaum (1950)
Nelson (1964)
Weinstein (1964)
O'Hagan & Leonard (1976)
The interpretation through hidden truncation / selective reporting is due to Arnold & Beaver (2002, Test) for the univariate/multivariate case.

Skew-Normal Distribution (cont.)
New connection to OS: For n = 0, 1, 2, ..., consider the integral
L_n(\lambda) = \int_{-\infty}^{\infty} \{\Phi(\lambda x)\}^n\,\phi(x)\,dx.
Clearly, L_0(λ) = 1 for all λ ∈ R. Furthermore, since
\int_{-\infty}^{\infty} \left\{\Phi(\lambda x) - \tfrac{1}{2}\right\}^{2n+1}\phi(x)\,dx = 0
(the integrand is an odd function of x), we obtain:

Skew-Normal Distribution (cont.)
L_{2n+1}(\lambda) = \sum_{i=1}^{2n+1} (-1)^{i+1}\,\frac{1}{2^i}\binom{2n+1}{i}\,L_{2n+1-i}(\lambda), \quad \lambda \in \mathbb{R},
for n = 0, 1, 2, .... For n = 0, we simply obtain
L_1(\lambda) = \int_{-\infty}^{\infty} \Phi(\lambda x)\,\phi(x)\,dx = \tfrac{1}{2} L_0(\lambda) = \tfrac{1}{2}, \quad \lambda \in \mathbb{R},
which leads to the skew-normal density
\varphi_1(x;\lambda) = 2\,\Phi(\lambda x)\,\phi(x), \quad x \in \mathbb{R}, \ \lambda \in \mathbb{R}.

Skew-Normal Distribution (cont.)
From
L_2(\lambda) = \int_{-\infty}^{\infty} \{\Phi(\lambda x)\}^2\,\phi(x)\,dx, \quad \lambda \in \mathbb{R},
we find
\frac{dL_2(\lambda)}{d\lambda} = 2\int_{-\infty}^{\infty} x\,\Phi(\lambda x)\,\phi(\lambda x)\,\phi(x)\,dx = \frac{1}{\pi}\int_{-\infty}^{\infty} \Phi(\lambda x)\,x\,\exp\!\left\{-\tfrac{1}{2}x^2(1+\lambda^2)\right\}dx = \frac{\lambda}{\pi(1+\lambda^2)\sqrt{1+2\lambda^2}}.
Solving this differential equation, we obtain

Skew-Normal Distribution (cont.)
L_2(\lambda) = \frac{1}{\pi}\tan^{-1}\sqrt{1+2\lambda^2}, \quad \lambda \in \mathbb{R},
which leads to another skew-normal density
\varphi_2(x;\lambda) = \frac{\pi}{\tan^{-1}\sqrt{1+2\lambda^2}}\,\{\Phi(\lambda x)\}^2\,\phi(x), \quad x \in \mathbb{R}, \ \lambda \in \mathbb{R}.
Remark 1: Interestingly, this family also includes the standard normal (when λ = 0) and the half-normal (when λ → ∞) distributions, just as φ_1(x;λ) does.

Skew-Normal Distribution (cont.)
Next, setting n = 1 in the relation, we get
L_3(\lambda) = \tfrac{3}{2}L_2(\lambda) - \tfrac{3}{4}L_1(\lambda) + \tfrac{1}{8}L_0(\lambda) = \frac{3}{2\pi}\tan^{-1}\sqrt{1+2\lambda^2} - \tfrac{1}{4}, \quad \lambda \in \mathbb{R},
which leads to another skew-normal density
\varphi_3(x;\lambda) = \frac{1}{\frac{3}{2\pi}\tan^{-1}\sqrt{1+2\lambda^2} - \frac{1}{4}}\,\{\Phi(\lambda x)\}^3\,\phi(x), \quad x \in \mathbb{R}.
Remark 2: Interestingly, this family also includes the standard normal (when λ = 0) and the half-normal (when λ → ∞) distributions, just as φ_1(x;λ) and φ_2(x;λ) do.
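
A quick quadrature check of these closed forms (an addition to the slides): it evaluates L_2(λ) and L_3(λ) numerically and compares them with (1/π) tan⁻¹√(1+2λ²) and (3/(2π)) tan⁻¹√(1+2λ²) − 1/4; the grid of λ values is arbitrary.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

def L(n, lam):
    """L_n(lambda) = integral of {Phi(lam x)}^n phi(x) dx, by numerical quadrature."""
    val, _ = integrate.quad(lambda x: norm.cdf(lam * x)**n * norm.pdf(x),
                            -np.inf, np.inf)
    return val

for lam in (0.0, 0.5, 1.0, 3.0):
    closed_L2 = np.arctan(np.sqrt(1 + 2 * lam**2)) / np.pi
    closed_L3 = 1.5 * closed_L2 - 0.25       # (3/2)L2 - (3/4)L1 + (1/8)L0, with L1 = 1/2, L0 = 1
    print(f"lam={lam:3.1f}  L2: quad={L(2, lam):.6f} closed={closed_L2:.6f}"
          f"   L3: quad={L(3, lam):.6f} closed={closed_L3:.6f}")
```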

Skew-Normal Distribution (cont.)
Remark 3: Evidently, in the special case when λ = 1, the densities φ_1(x;λ), φ_2(x;λ) and φ_3(x;λ) become the densities of the largest OS in samples of size 2, 3 and 4, respectively, from the N(0,1) distribution.
Remark 4: In addition, the integral L_n(λ) is also involved in the means of OS from the N(0,1) distribution. For example, with µ_{m:m} denoting the mean of the largest OS in a sample of size m from N(0,1), we have
\mu_{2:2} = \frac{L_0}{\sqrt{\pi}}, \quad \mu_{3:3} = \frac{3\,L_1(1/\sqrt{2})}{\sqrt{\pi}}, \quad \mu_{4:4} = \frac{6\,L_2(1/\sqrt{2})}{\sqrt{\pi}}, \quad \mu_{5:5} = \frac{10\,L_3(1/\sqrt{2})}{\sqrt{\pi}}.
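
Because the argument of L_n in these mean identities is easy to lose in transcription, here is a small numerical check (added, and premised on the reconstruction above): it compares µ_{m:m} computed directly by quadrature with m(m−1) L_{m−2}(1/√2)/(2√π), which reproduces the coefficients 1, 3, 6, 10 listed above.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

def L(n, lam):
    """L_n(lambda) by numerical quadrature."""
    val, _ = integrate.quad(lambda x: norm.cdf(lam * x)**n * norm.pdf(x),
                            -np.inf, np.inf)
    return val

def mu_max(m):
    """E[X_{m:m}] for an IID N(0,1) sample, via its pdf m * phi(x) * Phi(x)^(m-1)."""
    val, _ = integrate.quad(lambda x: m * x * norm.pdf(x) * norm.cdf(x)**(m - 1),
                            -np.inf, np.inf)
    return val

for m in (2, 3, 4, 5):
    via_L = m * (m - 1) / 2 * L(m - 2, 1 / np.sqrt(2)) / np.sqrt(np.pi)
    print(f"m={m}:  direct={mu_max(m):.6f}   via L_(m-2)(1/sqrt(2))={via_L:.6f}")
```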

Skew-Normal Distribution (cont.)
Natural questions that arise: First, we could consider a general skew-normal family
\varphi(x;\lambda,p) = C(\lambda,p)\,\{\Phi(\lambda x)\}^p\,\phi(x), \quad x \in \mathbb{R}, \ \lambda \in \mathbb{R}, \ p > 0.
How about the normalizing constant?
How much flexibility is there in this family?
Can we develop inference for this model?
Next, can we have a similar skewed look at OS from BV and MV normal distributions?
How about other distributions?

OS from BVN Distribution
Let (X_1, X_2) \stackrel{d}{=} BVN(\mu_1, \mu_2, \sigma_1^2, \sigma_2^2, \rho). Let W_{1:2} ≤ W_{2:2} be the OS from (X_1, X_2). This bivariate case has been discussed by many, including
S.S. Gupta and K.C.S. Pillai (1965)
A.P. Basu and J.K. Ghosh (1978)
H.N. Nagaraja (1982)
N. Balakrishnan (1993)
M. Cain (1994)
M. Cain and E. Pan (1995)

Generalized Skew-Normal Distribution
A variable Z_{λ_1,λ_2,ρ} is said to have a generalized skew-normal distribution, denoted by GSN(λ_1, λ_2, ρ), if
Z_{\lambda_1,\lambda_2,\rho} \stackrel{d}{=} X \mid (Y_1 < \lambda_1 X,\ Y_2 < \lambda_2 X), \quad \lambda_1, \lambda_2 \in \mathbb{R}, \ |\rho| < 1,
where X ~ N(0,1) independently of (Y_1, Y_2) ~ BVN(0, 0, 1, 1, ρ). It should be mentioned that Z_{λ_1,λ_2,ρ} belongs to
the SUN_{1,2}(0, 0, 1, Ω) family [Arellano-Valle & Azzalini (2006)]
the CSN_{1,2} family [Gonzalez-Farias, Dominguez-Molina & A.K. Gupta (2004)]

Generalized Skew-Normal Distribution (cont.)
It can be shown that the pdf of Z_{λ_1,λ_2,ρ} is
\varphi(z;\lambda_1,\lambda_2,\rho) = c(\lambda_1,\lambda_2,\rho)\,\phi(z)\,\Phi(\lambda_1 z, \lambda_2 z;\rho), \quad z \in \mathbb{R},
with λ_1, λ_2 ∈ R, |ρ| < 1, and Φ(·, ·; ρ) denoting the cdf of BVN(0, 0, 1, 1, ρ). For determining c(λ_1, λ_2, ρ), we note that
\{c(\lambda_1,\lambda_2,\rho)\}^{-1} \equiv a(\lambda_1,\lambda_2,\rho) = P(Y_1 < \lambda_1 X,\ Y_2 < \lambda_2 X),
where X ~ N(0,1) independently of (Y_1, Y_2) ~ BVN(0, 0, 1, 1, ρ).

Generalized Skew-Normal Distribution (cont.)
Lemma 1: We have
a(\lambda_1,\lambda_2,\rho) = P(Y_1 < \lambda_1 X,\ Y_2 < \lambda_2 X) = \frac{1}{2\pi}\cos^{-1}\!\left(\frac{-(\rho + \lambda_1\lambda_2)}{\sqrt{1+\lambda_1^2}\,\sqrt{1+\lambda_2^2}}\right)
[Kotz, Balakrishnan and Johnson (2000)]. The generalized skew-normal pdf is then
\varphi(z;\lambda_1,\lambda_2,\rho) = \frac{2\pi}{\cos^{-1}\!\left(\frac{-(\rho+\lambda_1\lambda_2)}{\sqrt{1+\lambda_1^2}\,\sqrt{1+\lambda_2^2}}\right)}\,\phi(z)\,\Phi(\lambda_1 z, \lambda_2 z;\rho)
for z, λ_1, λ_2 ∈ R, |ρ| < 1.
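
The sketch below (not in the original slides) checks Lemma 1 by Monte Carlo: it estimates P(Y_1 < λ_1X, Y_2 < λ_2X) from simulated draws and compares it with (1/2π) cos⁻¹(−(ρ+λ_1λ_2)/√((1+λ_1²)(1+λ_2²))), using the sign convention reconstructed above; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
lam1, lam2, rho = 1.5, -0.7, 0.4      # arbitrary test values

n = 1_000_000
x = rng.standard_normal(n)
y = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

# Monte Carlo estimate of a(lam1, lam2, rho) vs. the closed form of Lemma 1
mc = np.mean((y[:, 0] < lam1 * x) & (y[:, 1] < lam2 * x))
closed = np.arccos(-(rho + lam1 * lam2)
                   / np.sqrt((1 + lam1**2) * (1 + lam2**2))) / (2 * np.pi)
print(f"Monte Carlo a = {mc:.5f},  closed form a = {closed:.5f}")
```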

Generalized Skew-Normal Distribution (cont.)
Let Φ(·, ·; δ) denote the cdf of BVN(0, 0, 1, 1, δ), and Φ(·; θ) denote the cdf of SN(θ). Lemma 2: We then have
\Phi(0,0;\delta) = \frac{1}{2\pi}\cos^{-1}(-\delta),
\Phi(\gamma x, 0;\delta) = \Phi(0,\gamma x;\delta) = \frac{1}{2}\,\Phi\!\left(\gamma x;\ \frac{-\delta}{\sqrt{1-\delta^2}}\right),
\Phi(\gamma_1 x, \gamma_2 x;\delta) = \frac{1}{2}\left\{\Phi(\gamma_1 x;\eta_1) + \Phi(\gamma_2 x;\eta_2) - I(\gamma_1\gamma_2)\right\},
where I(a) = 0 if a > 0 and 1 if a < 0, and
\eta_1 = \frac{1}{\sqrt{1-\delta^2}}\left(\frac{\gamma_2}{\gamma_1} - \delta\right), \quad \eta_2 = \frac{1}{\sqrt{1-\delta^2}}\left(\frac{\gamma_1}{\gamma_2} - \delta\right).
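
Lemma 2 is easy to verify numerically. This added sketch compares the bivariate normal cdf Φ(γ_1x, γ_2x; δ) with the skew-normal combination on the right-hand side (with η_1, η_2 and the indicator as reconstructed above); the skew-normal cdf is computed by quadrature, and the test values of γ_1, γ_2, δ and x are arbitrary.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm, multivariate_normal

def sn_cdf(x, theta):
    """cdf of SN(theta): 2 * integral_{-inf}^{x} phi(t) Phi(theta t) dt."""
    val, _ = integrate.quad(lambda t: 2 * norm.pdf(t) * norm.cdf(theta * t),
                            -np.inf, x)
    return val

d, x = 0.5, 0.7                              # arbitrary correlation and evaluation point
bvn = multivariate_normal(mean=[0, 0], cov=[[1, d], [d, 1]])

for g1, g2 in [(0.8, 1.3), (0.8, -1.3)]:     # same-sign and mixed-sign gamma pairs
    eta1 = (g2 / g1 - d) / np.sqrt(1 - d**2)
    eta2 = (g1 / g2 - d) / np.sqrt(1 - d**2)
    indicator = 1.0 if g1 * g2 < 0 else 0.0
    lhs = bvn.cdf([g1 * x, g2 * x])
    rhs = 0.5 * (sn_cdf(g1 * x, eta1) + sn_cdf(g2 * x, eta2) - indicator)
    print(f"g1={g1:+.1f}, g2={g2:+.1f}:  LHS={lhs:.6f}  RHS={rhs:.6f}")
```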

Generalized Skew-Normal Distribution (cont.)
Theorem 1: If M(t; λ_1, λ_2, ρ) is the MGF of Z_{λ_1,λ_2,ρ} ~ GSN(λ_1, λ_2, ρ), then
M(t;\lambda_1,\lambda_2,\rho) = \frac{2\pi\,e^{t^2/2}}{\cos^{-1}\!\left(\frac{-(\rho+\lambda_1\lambda_2)}{\sqrt{1+\lambda_1^2}\,\sqrt{1+\lambda_2^2}}\right)}\;\Phi\!\left(\frac{\lambda_1 t}{\sqrt{1+\lambda_1^2}},\ \frac{\lambda_2 t}{\sqrt{1+\lambda_2^2}};\ \frac{\rho+\lambda_1\lambda_2}{\sqrt{1+\lambda_1^2}\,\sqrt{1+\lambda_2^2}}\right).
Corollary 1: Theorem 1 yields, for example,
E[Z_{\lambda_1,\lambda_2,\rho}] = \frac{\sqrt{\pi/2}}{\cos^{-1}\!\left(\frac{-(\rho+\lambda_1\lambda_2)}{\sqrt{1+\lambda_1^2}\,\sqrt{1+\lambda_2^2}}\right)}\left\{\frac{\lambda_1}{\sqrt{1+\lambda_1^2}} + \frac{\lambda_2}{\sqrt{1+\lambda_2^2}}\right\}.
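
As an added check of Corollary 1 (not from the slides), the following sketch simulates Z_{λ_1,λ_2,ρ} via its conditioning definition and compares the sample mean with the closed-form expectation; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
lam1, lam2, rho = 1.0, 0.5, -0.3        # arbitrary test values

n = 2_000_000
x = rng.standard_normal(n)
y = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
z = x[(y[:, 0] < lam1 * x) & (y[:, 1] < lam2 * x)]   # draws from GSN(lam1, lam2, rho)

delta = (rho + lam1 * lam2) / np.sqrt((1 + lam1**2) * (1 + lam2**2))
closed = (np.sqrt(np.pi / 2) / np.arccos(-delta)
          * (lam1 / np.sqrt(1 + lam1**2) + lam2 / np.sqrt(1 + lam2**2)))
print(f"sample mean = {z.mean():.4f},  closed form = {closed:.4f}")
```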

OS from TVN Distribution
Let (W_1, W_2, W_3) ~ TVN(0, Σ), where
\Sigma = \begin{pmatrix} \sigma_1^2 & \rho_{12}\sigma_1\sigma_2 & \rho_{13}\sigma_1\sigma_3 \\ \rho_{12}\sigma_1\sigma_2 & \sigma_2^2 & \rho_{23}\sigma_2\sigma_3 \\ \rho_{13}\sigma_1\sigma_3 & \rho_{23}\sigma_2\sigma_3 & \sigma_3^2 \end{pmatrix}
is a positive definite matrix. Let W_{1:3} = min(W_1, W_2, W_3) < W_{2:3} < W_{3:3} = max(W_1, W_2, W_3) denote the order statistics from (W_1, W_2, W_3). Let F_i(t; Σ) denote the cdf of W_{i:3}, i = 1, 2, 3.

OS from TVN Distribution (cont.)
Theorem 2: The cdf of W_{3:3} is the mixture
F_3(t;\Sigma) = a(\theta_1)\,\Phi\!\left(\frac{t}{\sigma_1};\theta_1\right) + a(\theta_2)\,\Phi\!\left(\frac{t}{\sigma_2};\theta_2\right) + a(\theta_3)\,\Phi\!\left(\frac{t}{\sigma_3};\theta_3\right),
where Φ(·; θ) denotes the cdf of GSN(θ),
a(\lambda_1,\lambda_2,\rho) = \frac{1}{2\pi}\cos^{-1}\!\left(\frac{-(\rho+\lambda_1\lambda_2)}{\sqrt{1+\lambda_1^2}\,\sqrt{1+\lambda_2^2}}\right),
and
\theta_1 = \left(\frac{\sigma_1 - \rho_{12}\sigma_2}{\sigma_2\sqrt{1-\rho_{12}^2}},\ \frac{\sigma_1 - \rho_{13}\sigma_3}{\sigma_3\sqrt{1-\rho_{13}^2}},\ \frac{\rho_{23} - \rho_{12}\rho_{13}}{\sqrt{1-\rho_{12}^2}\,\sqrt{1-\rho_{13}^2}}\right),
\theta_2 = \left(\frac{\sigma_2 - \rho_{12}\sigma_1}{\sigma_1\sqrt{1-\rho_{12}^2}},\ \frac{\sigma_2 - \rho_{23}\sigma_3}{\sigma_3\sqrt{1-\rho_{23}^2}},\ \frac{\rho_{13} - \rho_{12}\rho_{23}}{\sqrt{1-\rho_{12}^2}\,\sqrt{1-\rho_{23}^2}}\right),
\theta_3 = \left(\frac{\sigma_3 - \rho_{13}\sigma_1}{\sigma_1\sqrt{1-\rho_{13}^2}},\ \frac{\sigma_3 - \rho_{23}\sigma_2}{\sigma_2\sqrt{1-\rho_{23}^2}},\ \frac{\rho_{12} - \rho_{13}\rho_{23}}{\sqrt{1-\rho_{13}^2}\,\sqrt{1-\rho_{23}^2}}\right).
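
Theorem 2 can be checked numerically; the sketch below (an addition, using the θ_i as reconstructed above) evaluates the mixture of GSN cdfs by quadrature and compares it with P(max(W_1, W_2, W_3) ≤ t) computed directly from the trivariate normal cdf; all parameter values and the evaluation point t are arbitrary.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm, multivariate_normal

def a(l1, l2, r):
    """Mixture weight a(lambda1, lambda2, rho) from Lemma 1."""
    return np.arccos(-(r + l1 * l2) / np.sqrt((1 + l1**2) * (1 + l2**2))) / (2 * np.pi)

def gsn_cdf(x, l1, l2, r):
    """cdf of GSN(lambda1, lambda2, rho), i.e. X | (Y1 < lambda1 X, Y2 < lambda2 X)."""
    bvn = multivariate_normal(mean=[0, 0], cov=[[1, r], [r, 1]])
    val, _ = integrate.quad(lambda z: norm.pdf(z) * bvn.cdf([l1 * z, l2 * z]),
                            -np.inf, x)
    return val / a(l1, l2, r)

s1, s2, s3 = 1.0, 1.5, 2.0                     # arbitrary standard deviations
r12, r13, r23 = 0.3, -0.2, 0.5                 # arbitrary correlations
Sigma = np.array([[s1**2,     r12*s1*s2, r13*s1*s3],
                  [r12*s1*s2, s2**2,     r23*s2*s3],
                  [r13*s1*s3, r23*s2*s3, s3**2]])

def theta(si, sj, sk, rij, rik, rjk):
    """Parameter triplet for the mixture component in which W_i is the maximum."""
    return ((si - rij*sj) / (sj*np.sqrt(1 - rij**2)),
            (si - rik*sk) / (sk*np.sqrt(1 - rik**2)),
            (rjk - rij*rik) / np.sqrt((1 - rij**2) * (1 - rik**2)))

thetas = [theta(s1, s2, s3, r12, r13, r23),
          theta(s2, s1, s3, r12, r23, r13),
          theta(s3, s1, s2, r13, r23, r12)]

t = 1.2                                        # arbitrary evaluation point
mixture = sum(a(*th) * gsn_cdf(t / sig, *th)
              for th, sig in zip(thetas, [s1, s2, s3]))
direct = multivariate_normal(mean=np.zeros(3), cov=Sigma).cdf([t, t, t])
print(f"mixture F_3(t) = {mixture:.5f},  P(max(W) <= t) = {direct:.5f}")
```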

OS from TVN Distribution (cont.)
Theorem 3: The mgf of W_{3:3} (in the standardized case σ_1 = σ_2 = σ_3 = 1) is
M_3(s;\Sigma) = e^{s^2/2}\left\{\Phi\!\left(s\sqrt{\tfrac{1-\rho_{12}}{2}};\ \alpha_1\right) + \Phi\!\left(s\sqrt{\tfrac{1-\rho_{13}}{2}};\ \alpha_2\right) + \Phi\!\left(s\sqrt{\tfrac{1-\rho_{23}}{2}};\ \alpha_3\right)\right\},
where Φ(·; θ) is the cdf of SN(θ), and
\alpha_1 = \frac{1 + \rho_{12} - \rho_{13} - \rho_{23}}{A}, \quad \alpha_2 = \frac{1 + \rho_{13} - \rho_{12} - \rho_{23}}{A}, \quad \alpha_3 = \frac{1 + \rho_{23} - \rho_{12} - \rho_{13}}{A},
A = \sqrt{6 - (1+\rho_{12})^2 - (1+\rho_{13})^2 - (1+\rho_{23})^2 + 2(\rho_{12}\rho_{13} + \rho_{12}\rho_{23} + \rho_{13}\rho_{23})}.

OS from TVN Distribution (cont.)
Corollary 2: Theorem 3 yields, for example,
E(W_{3:3}) = \frac{1}{2\sqrt{\pi}}\left\{\sqrt{1-\rho_{12}} + \sqrt{1-\rho_{13}} + \sqrt{1-\rho_{23}}\right\}
and
Var(W_{3:3}) = 1 + \frac{A}{2\pi} - E^2(W_{3:3}).
Remark 5: Similar mixture forms can be derived for the cdfs of W_{1:3} and W_{2:3}, and from them explicit expressions for their mgfs, moments, etc.
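
A Monte Carlo sanity check of Corollary 2 (added, not in the slides), for the standardized case with unit variances and arbitrary correlations; A is computed from the expression reconstructed in Theorem 3.

```python
import numpy as np

rng = np.random.default_rng(3)
r12, r13, r23 = 0.3, -0.2, 0.5          # arbitrary correlations (unit variances)
R = np.array([[1, r12, r13], [r12, 1, r23], [r13, r23, 1]])

w = rng.multivariate_normal(np.zeros(3), R, size=2_000_000)
w33 = w.max(axis=1)                      # W_{3:3} = max(W_1, W_2, W_3)

E = (np.sqrt(1 - r12) + np.sqrt(1 - r13) + np.sqrt(1 - r23)) / (2 * np.sqrt(np.pi))
A = np.sqrt(6 - (1 + r12)**2 - (1 + r13)**2 - (1 + r23)**2
            + 2 * (r12*r13 + r12*r23 + r13*r23))
V = 1 + A / (2 * np.pi) - E**2
print(f"E:   MC={w33.mean():.4f}  formula={E:.4f}")
print(f"Var: MC={w33.var():.4f}  formula={V:.4f}")
```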

OS from TVN Distribution (cont.)
Problems for further study:
Can we make use of these results concerning distributions and moments of OS to develop some efficient inferential methods?
How far can these results be generalized to obtain such explicit expressions for the case of OS from MVN?

OS Induced by Linear Functions
Let X_j = (X_{1j}, ..., X_{pj})^T, j = 1, ..., n, be IID observations from MVN(µ, Σ), where µ = (µ_1, ..., µ_p)^T and Σ = ((σ_{ij})). Let P = {i_1, ..., i_m} (m ≥ 1) be a subset of {1, ..., p}, and let Q denote its complement. Let C = (c_1, ..., c_p)^T be a vector of non-zero constants. Further, let us define, for j = 1, ..., n,
X_j = \sum_{i \in P} c_i X_{ij} = C_P^T \mathbf{X}_j, \qquad Y_j = \sum_{i \in Q} c_i X_{ij} = C_Q^T \mathbf{X}_j.

OS Induced by Linear Functions (cont.)
Lemma 3: Evidently,
\mu_X = E(X_j) = C_P^T \mu, \qquad \mu_Y = E(Y_j) = C_Q^T \mu,
\sigma_X^2 = Var(X_j) = C_P^T \Sigma\, C_P, \qquad \sigma_Y^2 = Var(Y_j) = C_Q^T \Sigma\, C_Q,
\sigma_{X,Y} = Cov(X_j, Y_j) = C_P^T \Sigma\, C_Q, \qquad \rho = \frac{\sigma_{X,Y}}{\sigma_X \sigma_Y} = \frac{C_P^T \Sigma\, C_Q}{\sqrt{(C_P^T \Sigma\, C_P)(C_Q^T \Sigma\, C_Q)}}.

OS Induced by Linear Functions (cont.)
Now, let S_j = X_j + Y_j = C_P^T X_j + C_Q^T X_j = C^T X_j for j = 1, ..., n. Let S_{1:n} ≤ S_{2:n} ≤ ··· ≤ S_{n:n} denote the order statistics of the S_j's. Let X_{[k:n]} be the k-th induced multivariate order statistic; i.e., X_{[k:n]} = X_j whenever S_{k:n} = S_j.

OS Induced by Linear Functions (cont.)
Example 1: While analyzing extreme lake levels in hydrology, the annual maximum level at a location in the lake is a combination of the daily water level averaged over the entire lake and the upsurge in local water levels due to wind effects at that site [Song, Buchberger & Deddens (1992)].
Example 2: While evaluating the performance of students in a course, the final grade may often be a weighted average of the scores in mid-term tests and the final examination.

OS Induced by Linear Functions (cont.)
Theorem 4: We have X_{[k:n]} = C_P^T X_{[k:n]} and Y_{[k:n]} = C_Q^T X_{[k:n]}. Consequently, we readily obtain
E(X_{[k:n]}) = C_P^T \mu + \alpha_{k:n}\left\{\frac{C_P^T \Sigma\, C_P + C_P^T \Sigma\, C_Q}{\sqrt{C_P^T \Sigma\, C_P + C_Q^T \Sigma\, C_Q + 2\,C_P^T \Sigma\, C_Q}}\right\},
where α_{k:n} is the mean of the k-th OS from a sample of size n from N(0,1). Similar expressions can be derived for Var(X_{[k:n]}) and other moments.
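
The following simulation sketch (an addition to the slides) checks the mean formula of Theorem 4 for an arbitrary trivariate normal with partition P = {1, 2}, Q = {3} and k = n; α_{k:n} is itself estimated from simulated standard normal order statistics.

```python
import numpy as np

rng = np.random.default_rng(11)
p, n, k = 3, 5, 5                               # arbitrary dimension, sample size, rank
mu = np.array([1.0, -0.5, 0.3])
Sigma = np.array([[2.0, 0.6, -0.3],
                  [0.6, 1.5, 0.4],
                  [-0.3, 0.4, 1.0]])
c = np.array([1.0, 0.5, -2.0])                  # arbitrary non-zero constants
P, Q = [0, 1], [2]                              # partition (0-based indices)
cP = np.zeros(p); cP[P] = c[P]
cQ = np.zeros(p); cQ[Q] = c[Q]

reps = 200_000
X = rng.multivariate_normal(mu, Sigma, size=(reps, n))    # reps samples of size n
S = X @ c                                                  # S_j = C^T X_j
idx = np.argsort(S, axis=1)[:, k - 1]                      # j such that S_{k:n} = S_j
X_induced = X[np.arange(reps), idx]                        # induced OS vector X_[k:n]
mc = X_induced @ cP                                        # X_[k:n] = C_P^T X_[k:n]

alpha_kn = np.sort(rng.standard_normal((reps, n)), axis=1)[:, k - 1].mean()
formula = (cP @ mu + alpha_kn * (cP @ Sigma @ cP + cP @ Sigma @ cQ)
           / np.sqrt(cP @ Sigma @ cP + cQ @ Sigma @ cQ + 2 * cP @ Sigma @ cQ))
print(f"MC mean = {mc.mean():.4f},  formula = {formula:.4f}")
```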

OS Induced by Linear Functions (cont.)
By choosing the partitions P and Q suitably, we have the following results for the components within a concomitant OS vector. Theorem 5: For k = 1, ..., n, we obtain
E(X_{i[k:n]}) = \mu_i + \alpha_{k:n}\left\{\frac{\sum_{r=1}^{p} c_r \sigma_{ir}}{\sqrt{\sum_{s=1}^{p}\sum_{r=1}^{p} c_r c_s \sigma_{rs}}}\right\}, \quad i = 1, ..., p,
Var(X_{i[k:n]}) = \sigma_{ii} - (1 - \beta_{k,k:n})\left\{\frac{\left(\sum_{r=1}^{p} c_r \sigma_{ir}\right)^2}{\sum_{s=1}^{p}\sum_{r=1}^{p} c_r c_s \sigma_{rs}}\right\}, \quad i = 1, ..., p,
Cov(X_{i[k:n]}, X_{j[k:n]}) = \sigma_{ij} - (1 - \beta_{k,k:n})\left\{\frac{\sum_{s=1}^{p}\sum_{r=1}^{p} c_r c_s \sigma_{ir}\sigma_{js}}{\sum_{s=1}^{p}\sum_{r=1}^{p} c_r c_s \sigma_{rs}}\right\}, \quad 1 \le i < j \le p,
where β_{k,k:n} is the variance of the k-th OS from a sample of size n from N(0,1).
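
A similar simulation check of Theorem 5 (added, not from the slides): it verifies the mean and variance of one component X_{i[k:n]}, with α_{k:n} and β_{k,k:n} estimated from simulated standard normal order statistics; the dimension, weights, rank and component index are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, i = 4, 3, 0                               # arbitrary sample size, rank, component (0-based)
mu = np.array([0.5, -1.0, 2.0])
Sigma = np.array([[1.0, 0.3, -0.2],
                  [0.3, 2.0, 0.5],
                  [-0.2, 0.5, 1.5]])
c = np.array([1.0, -0.5, 2.0])

reps = 300_000
X = rng.multivariate_normal(mu, Sigma, size=(reps, n))
idx = np.argsort(X @ c, axis=1)[:, k - 1]
Xk = X[np.arange(reps), idx]                    # concomitant vector X_[k:n]

# alpha_{k:n} and beta_{k,k:n} from standard normal order statistics
Z = np.sort(rng.standard_normal((reps, n)), axis=1)[:, k - 1]
alpha, beta = Z.mean(), Z.var()

num = Sigma[i] @ c                              # sum_r c_r sigma_ir
den = c @ Sigma @ c                             # sum_r sum_s c_r c_s sigma_rs
E_formula = mu[i] + alpha * num / np.sqrt(den)
V_formula = Sigma[i, i] - (1 - beta) * num**2 / den
print(f"E:   MC={Xk[:, i].mean():.4f}  formula={E_formula:.4f}")
print(f"Var: MC={Xk[:, i].var():.4f}  formula={V_formula:.4f}")
```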

OS Induced by Linear Functions (cont.)
By choosing the partitions P and Q suitably, we have the following results for components between concomitant OS vectors. Theorem 6: For 1 ≤ k < l ≤ n, we obtain
Cov(X_{i[k:n]}, X_{i[l:n]}) = \beta_{k,l:n}\left\{\frac{\left(\sum_{r=1}^{p} c_r \sigma_{ir}\right)^2}{\sum_{s=1}^{p}\sum_{r=1}^{p} c_r c_s \sigma_{rs}}\right\}, \quad i = 1, ..., p,
Cov(X_{i[k:n]}, X_{j[l:n]}) = \beta_{k,l:n}\left\{\frac{\sum_{s=1}^{p}\sum_{r=1}^{p} c_r c_s \sigma_{ir}\sigma_{js}}{\sum_{s=1}^{p}\sum_{r=1}^{p} c_r c_s \sigma_{rs}}\right\}, \quad 1 \le i < j \le p,
where β_{k,l:n} is the covariance between the k-th and l-th OS from a sample of size n from N(0,1).

OS Induced by Linear Functions (cont.)
Corollary 3: Var(X_{[k:n]}) and Var(Y_{[k:n]}) can be rewritten as
Var(X_{[k:n]}) = \sigma_X^2 - (1 - \beta_{k,k:n})\left\{\frac{\left(a\sigma_X^2 + b\rho\sigma_X\sigma_Y\right)^2}{\Delta}\right\},
Var(Y_{[k:n]}) = \sigma_Y^2 - (1 - \beta_{k,k:n})\left\{\frac{\left(b\sigma_Y^2 + a\rho\sigma_X\sigma_Y\right)^2}{\Delta}\right\},
where \Delta = a^2\sigma_X^2 + b^2\sigma_Y^2 + 2ab\rho\sigma_X\sigma_Y. Now, since β_{j,k:n} > 0 [Bickel (1967)] and \sum_{j=1}^{n} \beta_{j,k:n} = 1 for 1 ≤ k ≤ n [Arnold, Balakrishnan & Nagaraja (1992)], we have 0 < β_{k,k:n} < 1.

OS Induced by Linear Functions (cont.)
Consequently, we observe that Var(X_{[k:n]}) < σ_X² and Var(Y_{[k:n]}) < σ_Y² for all k = 1, ..., n. Using a similar argument, it can be shown that Var(X_{i[k:n]}) < σ_{ii} for i = 1, ..., p and all k = 1, ..., n.

OS from BV and TV t Distributions
Arellano-Valle and Azzalini (2006) presented a unified multivariate skew-elliptical distribution through conditional distributions. A special case of this distribution is a generalized skew-t_ν distribution. Now, using skew-t_ν and generalized skew-t_ν distributions, the distributions and properties of OS from BV and TV t_ν distributions can be studied.

OS from BV and TV t Distributions (cont.)
Problems for further study:
Can we make use of these results concerning distributions and moments of OS to develop some efficient inferential methods?
How far can these results be generalized to obtain such explicit expressions for the case of OS from MV t_ν?
Can we do this work more generally in terms of elliptically contoured distributions, for example?

Bibliography
Most pertinent papers are:
Arellano-Valle, R.B. and Azzalini, A. (2006). Scand. J. Statist.
Arnold, B.C. and Beaver, R.J. (2002). Test.
Azzalini, A. (1985). Scand. J. Statist.
Balakrishnan, N. (1993). Statist. Probab. Lett.
Basu, A.P. and Ghosh, J.K. (1978). J. Multivar. Anal.
Bickel, P.J. (1967). Proc. of 5th Berkeley Symposium.
Birnbaum, Z.W. (1950). Ann. Math. Statist.
Cain, M. (1994). Amer. Statist.
Cain, M. and Pan, E. (1995). Math. Scientist.
Gonzalez-Farias, G., Dominguez-Molina, A. and Gupta, A.K. (2004). J. Statist. Plann. Inf.
Gupta, S.S. and Pillai, K.C.S. (1965). Biometrika.
Nagaraja, H.N. (1982). Biometrika.

Bibliography (cont.)
Nelson, L.S. (1964). Technometrics.
O'Hagan, A. and Leonard, T. (1976). Biometrika.
Song, R., Buchberger, S.G. and Deddens, J.A. (1992). Statist. Probab. Lett.
Weinstein, M.A. (1964). Technometrics.

Bibliography (cont.)
Most pertinent books are:
Arnold, B.C. and Balakrishnan, N. (1989). Relations, Bounds and Approximations for Order Statistics, Springer-Verlag, New York.
Arnold, B.C., Balakrishnan, N. and Nagaraja, H.N. (1992). A First Course in Order Statistics, John Wiley & Sons, New York.
Balakrishnan, N. and Cohen, A.C. (1991). Order Statistics and Inference: Estimation Methods, Academic Press, Boston.
Balakrishnan, N. and Rao, C.R. (Eds.) (1998a,b). Handbook of Statistics: Order Statistics, Vols. 16 & 17, North-Holland, Amsterdam.
David, H.A. (1970, 1981). Order Statistics, 1st and 2nd editions, John Wiley & Sons, New York.
David, H.A. and Nagaraja, H.N. (2003). Order Statistics, 3rd edition, John Wiley & Sons, Hoboken, New Jersey.
Kotz, S., Balakrishnan, N. and Johnson, N.L. (2000). Continuous Multivariate Distributions, Vol. 1, 2nd edition, John Wiley & Sons, New York.