Chinese Journal of Applied Probability and Statistics  Dec. 2017, Vol. 33, No. 6, pp. 591-607
doi: 10.3969/j.issn.1001-4268.2017.06.004

Stochastic Comparisons of Order Statistics from Generalized Normal Distributions

DING Ying (Department of Applied Mathematics, Zhejiang University of Technology, Hangzhou 310023, China)
ZHANG XinSheng (Department of Statistics, School of Management, Fudan University, Shanghai 200433, China)

Abstract: In this paper we study the stochastic comparisons of order statistics from generalized normal distributions. We obtain some sufficient conditions for ordering results based on parameter matrix and vector majorization comparisons. These conditions are also necessary in some cases.

Keywords: order statistics; generalized normal distribution; majorization

2010 Mathematics Subject Classification: 60E15; 60E05

Citation: Ding Y, Zhang X S. Stochastic comparisons of order statistics from generalized normal distributions [J]. Chinese J. Appl. Probab. Statist., 2017, 33(6): 591-607.

1. Introduction

Among discussions of stochastic comparisons, order statistics play a prominent role because of their importance in statistical inference. For example, Pledger and Proschan [1], Boland et al. [2], Zhao and Balakrishnan [3-5], Zhao and Zhang [6] and Balakrishnan et al. [7] study order statistics from exponential families. For order statistics comparisons from normal distributions, Slepian's inequality [8] is well known; it provides comparison results for centered normal distributions based on covariance matrix comparisons. Recently, Huang and Zhang [9] and Fang and Zhang [10] gave some comparisons for multivariate normal distributions by vector majorization, in the independent and dependent cases respectively.

In this paper we study the stochastic comparisons of order statistics from the generalized normal distribution. We say a random variable $X$ has the generalized normal distribution if its cumulative distribution function is of the form $F((x-\mu)/\sigma)^{\alpha}$.
(The project was supported by the Zhejiang Provincial Natural Science Foundation (Grant No. LQ16A010008) and the National Natural Science Foundation of China (Grant Nos. 11601483; 11571080). Received April 28, 2016. Revised August 9, 2016.)

Here $F$ is the
distribution function of $N(0,1)$, $\mu \in \mathbb{R}$ is the location parameter, $\sigma \in \mathbb{R}_+$ is the scale parameter and $\alpha \in \mathbb{R}_+$ is the shape parameter. We denote it as $X \sim GN(\mu, \sigma^2, \alpha)$. When $\alpha = 1$ it reduces to the normal distribution $N(\mu, \sigma^2)$.

The generalized normal distribution is commonly used in statistical inference. For example, it is closed under maxima. More precisely, if $X_i$ are independent random variables with distributions $GN(\mu, \sigma^2, \alpha_i)$, $i = 1, 2, \ldots, n$, then $X_{n:n} \sim GN\big(\mu, \sigma^2, \sum_{i=1}^{n}\alpha_i\big)$. In particular, the maximum order statistic of $X_i \sim N(\mu, \sigma^2)$, $i = 1, 2, \ldots, n$, has the generalized normal distribution $GN(\mu, \sigma^2, n)$.

We focus on the stochastic orders of the largest and smallest order statistics from generalized normal distributions. We give some sufficient conditions for the usual stochastic order based on parameter matrix and vector majorization comparisons, inspired by Balakrishnan et al. [7]. It is also proved that these conditions are necessary in some special cases.

The rest of this paper is organized as follows. In Section 2 we introduce some definitions and notations. Section 3 and Section 4 are devoted to the matrix and vector majorization comparisons for the order statistics from generalized normal distributions.

2. Definitions and Notations

We recall some definitions and notations of stochastic orders. Let $X$ and $Y$ be two random variables. Denote by $F_X$ and $F_Y$ their distribution functions, by $\bar F_X$ and $\bar F_Y$ the survival functions, by $r_X$ and $r_Y$ the hazard rate functions, by $\tilde r_X$ and $\tilde r_Y$ the reversed hazard rate functions, and by $f_X$ and $f_Y$ the density functions, respectively. For more details on stochastic orders one may refer to [11] and [12].

Definition 1  Given two random variables $X$ and $Y$:
(i) $X$ is said to be smaller than $Y$ in the usual stochastic order (written $X \le_{\rm st} Y$) if $\bar F_X(x) \le \bar F_Y(x)$ for all $x$;
(ii) $X$ is said to be smaller than $Y$ in the hazard rate order (written $X \le_{\rm hr} Y$) if $r_Y(x) \le r_X(x)$ for all $x$;
(iii) $X$ is said to be smaller than $Y$ in the reversed hazard rate order (written $X \le_{\rm rh} Y$) if $\tilde r_X(x) \le \tilde r_Y(x)$ for all $x$;
(iv) $X$ is said to be smaller than $Y$ in the likelihood ratio order (written $X \le_{\rm lr} Y$) if $f_Y(x)/f_X(x)$ is increasing in $x$.

Let $\{x_{(1)}, x_{(2)}, \ldots, x_{(n)}\}$ be the increasing arrangement of the components of the vector $x = (x_1, x_2, \ldots, x_n)$.
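The closure-under-maxima property stated above is easy to check by simulation. The following sketch is our own illustration, not part of the paper; the helper names `phi` and `gn_cdf` are hypothetical. It compares the empirical distribution of the maximum of five standard normal samples with the $GN(0,1,5)$ distribution function:

```python
import math
import random

def phi(x):
    """Distribution function of N(0, 1), via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gn_cdf(x, mu=0.0, sigma=1.0, alpha=1.0):
    """CDF of the generalized normal GN(mu, sigma^2, alpha): F((x - mu)/sigma)^alpha."""
    return phi((x - mu) / sigma) ** alpha

random.seed(0)
n, trials = 5, 200_000
# Maximum of n i.i.d. N(0, 1) draws in each trial.
samples = [max(random.gauss(0.0, 1.0) for _ in range(n)) for _ in range(trials)]

for x in (-0.5, 0.5, 1.5):
    empirical = sum(s <= x for s in samples) / trials
    theoretical = gn_cdf(x, alpha=n)   # closure under maxima: X_{n:n} ~ GN(0, 1, n)
    assert abs(empirical - theoretical) < 0.01
```

The same check works for any $\mu$ and $\sigma$, since the maximum of i.i.d. $N(\mu, \sigma^2)$ variables keeps the location and scale parameters and only changes the shape parameter.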
Definition 2  Given two vectors $x = (x_1, x_2, \ldots, x_n)$ and $y = (y_1, y_2, \ldots, y_n)$, $x$ is said to vector majorize $y$ (written $x \succeq^{\rm m} y$) if $\sum_{i=1}^{j} x_{(i)} \le \sum_{i=1}^{j} y_{(i)}$ for $j = 1, 2, \ldots, n-1$ and $\sum_{i=1}^{n} x_i = \sum_{i=1}^{n} y_i$.

A square matrix $\Pi$ is said to be a permutation matrix if each row and column has a single entry equal to one and all other entries equal to zero. A square matrix $T_\omega$ is said to be a $T$-transform matrix if $T_\omega = \omega I_n + (1-\omega)\Pi$, where $\omega \in (0,1)$ and $\Pi$ is a permutation matrix that interchanges just two coordinates.

Definition 3  Let $A$ and $B$ be $m \times n$ matrices. $A$ is said to matrix majorize $B$ (written $A \succeq^{\rm m} B$) if there exist finitely many $n \times n$ $T$-transform matrices $T_{\omega_1}, T_{\omega_2}, \ldots, T_{\omega_k}$ such that $B = A T_{\omega_1} \cdots T_{\omega_k}$.

3. Results Based on Matrix Majorization Comparisons

In this section we give some sufficient conditions for stochastic orderings of order statistics based on parameter matrix majorization comparisons. Denote
$$P_n = \left\{ \begin{pmatrix} x_1 & x_2 & \cdots & x_n \\ y_1 & y_2 & \cdots & y_n \end{pmatrix} : (x_i - x_j)(y_i - y_j) \le 0,\ i, j = 1, 2, \ldots, n \right\}$$
and
$$Q_n = \left\{ \begin{pmatrix} x_1 & x_2 & \cdots & x_n \\ y_1 & y_2 & \cdots & y_n \end{pmatrix} : (x_i - x_j)(y_i - y_j) \ge 0,\ i, j = 1, 2, \ldots, n \right\}.$$

We need the following lemma, which is from [7].

Lemma 4  A differentiable function $\phi: \mathbb{R}^4 \to \mathbb{R}_+$ satisfies $\phi(A) \ge \phi(B)$ (respectively $\phi(A) \le \phi(B)$) for any $2 \times 2$ matrices $A$ and $B$ with $A \in P_2$ ($Q_2$) and $A \succeq^{\rm m} B$ if and only if
(i) $\phi(A) = \phi(A\Pi)$ for all permutation matrices $\Pi$;
(ii) $\sum_{i=1}^{2} (a_{ik} - a_{ij})(\phi_{ik}(A) - \phi_{ij}(A)) \ge 0$ (respectively $\le 0$) for all $j, k = 1, 2$, where $\phi_{ij}(A) = \partial\phi(A)/\partial a_{ij}$.

The next lemma is from [9].

Lemma 5  (i) Let $g(x) = e^{x^2/2} \int_x^{+\infty} e^{-t^2/2}\,\mathrm{d}t$, $x \in \mathbb{R}$. Then $g(x)$ is nonincreasing. (ii) Let $l(x) = e^{x^2/2} \int_{-\infty}^{x} e^{-t^2/2}\,\mathrm{d}t$, $x \in \mathbb{R}$. Then $l(x)$ is nondecreasing.

We also need the following lemma.

Lemma 6  Let $h(\alpha, x) = (1-x^{\alpha})/[(1-x)x^{\alpha-1}]$, $x \in (0,1)$, $\alpha \in (0, +\infty)$. When $\alpha \in [1, +\infty)$, $h(\alpha, x)$ is nonincreasing in $x$; when $\alpha \in (0, 1]$, it is nondecreasing in $x$.
Proof  After some basic calculation we know that $\partial h(\alpha, x)/\partial x$ has the same sign as $1 - \alpha + \alpha x - x^{\alpha}$. Then it is easy to check that $h(\alpha, x)$ is nonincreasing in $x$ when $\alpha \in [1, +\infty)$ and nondecreasing in $x$ when $\alpha \in (0, 1]$.

In the rest of this paper we denote by $F$ and $f$ the distribution function and density function of $N(0,1)$ on $\mathbb{R}$, respectively.

First we give some comparison results when the location and scale parameters are different.

Theorem 7  Given $\alpha \ge 1$. Suppose $s: \mathbb{R} \to \mathbb{R}$ is a twice differentiable, strictly increasing and convex function. Let $X_i \sim GN(\mu_i, \sigma_i^2, \alpha)$ and $X_i^* \sim GN(\mu_i^*, \sigma_i^{*2}, \alpha)$, $i = 1, 2$, be independent random variables. For
$$\begin{pmatrix} 1/\sigma_1 & 1/\sigma_2 \\ s(\mu_1) & s(\mu_2) \end{pmatrix} \in P_2, \quad\text{if}\quad \begin{pmatrix} 1/\sigma_1 & 1/\sigma_2 \\ s(\mu_1) & s(\mu_2) \end{pmatrix} \succeq^{\rm m} \begin{pmatrix} 1/\sigma_1^* & 1/\sigma_2^* \\ s(\mu_1^*) & s(\mu_2^*) \end{pmatrix},$$
then $X_{1:2} \le_{\rm st} X_{1:2}^*$.

Proof  For clarity we denote $\lambda_i = 1/\sigma_i$, $\lambda_i^* = 1/\sigma_i^*$, $\theta_i = s(\mu_i)$, $\theta_i^* = s(\mu_i^*)$, $i = 1, 2$. We assume that $\lambda_1, \lambda_2, \theta_1, \theta_2$ satisfy $(\lambda_1 - \lambda_2)(\theta_1 - \theta_2) \le 0$; without loss of generality, let $\lambda_1 \le \lambda_2$ and $\theta_1 \ge \theta_2$. Then $\mu_1 \ge \mu_2$ since $s$ is increasing. We use Lemma 4 to prove the result. The survival function of $X_{1:2}$,
$$\bar F_{1:2}(x) = \big[1 - F(\lambda_1(x-\mu_1))^{\alpha}\big]\big[1 - F(\lambda_2(x-\mu_2))^{\alpha}\big],$$
satisfies condition (i) in Lemma 4. After some calculation we have
$$\frac{\partial \bar F_{1:2}(x)}{\partial \lambda_1} = -\alpha(x-\mu_1)F(\lambda_1(x-\mu_1))^{\alpha-1} f(\lambda_1(x-\mu_1))\big[1 - F(\lambda_2(x-\mu_2))^{\alpha}\big],$$
$$\frac{\partial \bar F_{1:2}(x)}{\partial \lambda_2} = -\alpha(x-\mu_2)F(\lambda_2(x-\mu_2))^{\alpha-1} f(\lambda_2(x-\mu_2))\big[1 - F(\lambda_1(x-\mu_1))^{\alpha}\big],$$
$$\frac{\partial \bar F_{1:2}(x)}{\partial \theta_1} = \frac{\alpha\lambda_1}{s'(\mu_1)} F(\lambda_1(x-\mu_1))^{\alpha-1} f(\lambda_1(x-\mu_1))\big[1 - F(\lambda_2(x-\mu_2))^{\alpha}\big],$$
$$\frac{\partial \bar F_{1:2}(x)}{\partial \theta_2} = \frac{\alpha\lambda_2}{s'(\mu_2)} F(\lambda_2(x-\mu_2))^{\alpha-1} f(\lambda_2(x-\mu_2))\big[1 - F(\lambda_1(x-\mu_1))^{\alpha}\big].$$
Write $g_i = g(\lambda_i(x-\mu_i))$ and $h_i = h(\alpha, F(\lambda_i(x-\mu_i)))$ for short. Then
$$(\lambda_1-\lambda_2)\Big(\frac{\partial \bar F_{1:2}(x)}{\partial \lambda_1} - \frac{\partial \bar F_{1:2}(x)}{\partial \lambda_2}\Big) + (\theta_1-\theta_2)\Big(\frac{\partial \bar F_{1:2}(x)}{\partial \theta_1} - \frac{\partial \bar F_{1:2}(x)}{\partial \theta_2}\Big)$$
$$\stackrel{\rm sgn}{=} (\lambda_1-\lambda_2)\big[(x-\mu_2)g_1h_1 - (x-\mu_1)g_2h_2\big] + (\theta_1-\theta_2)\Big[\frac{\lambda_1}{s'(\mu_1)}g_2h_2 - \frac{\lambda_2}{s'(\mu_2)}g_1h_1\Big]. \quad (1)$$
Here $a \stackrel{\rm sgn}{=} b$ means that $a$ and $b$ have the same sign, $g(x) = e^{x^2/2}\int_x^{+\infty} e^{-t^2/2}\,\mathrm{d}t$ and $h(\alpha, x) = (1-x^{\alpha})/[(1-x)x^{\alpha-1}]$.

Using the mean value theorem, we know there exists some $\zeta \in (\mu_2, \mu_1)$ satisfying $s'(\mu_2) \le s'(\zeta) \le s'(\mu_1)$ such that
$$(\theta_1-\theta_2)\frac{\lambda_1}{s'(\mu_1)} = (\mu_1-\mu_2)\lambda_1\frac{s'(\zeta)}{s'(\mu_1)} \le (\mu_1-\mu_2)\lambda_1 \quad\text{and}\quad (\theta_1-\theta_2)\frac{\lambda_2}{s'(\mu_2)} = (\mu_1-\mu_2)\lambda_2\frac{s'(\zeta)}{s'(\mu_2)} \ge (\mu_1-\mu_2)\lambda_2,$$
since $s$ is twice differentiable, strictly increasing and convex on $\mathbb{R}$. Combining the assumptions $\lambda_1 \le \lambda_2$ and $\mu_1 \ge \mu_2$, we have
$$(1) \le (\lambda_1-\lambda_2)\big[(x-\mu_2)g_1h_1 - (x-\mu_1)g_2h_2\big] + (\mu_1-\mu_2)\big[\lambda_1 g_2h_2 - \lambda_2 g_1h_1\big]$$
$$\le \big[\lambda_2(x-\mu_2) - \lambda_1(x-\mu_1)\big]\big[g_2h_2 - g_1h_1\big] \le 0.$$
Here the last inequality comes from Lemmas 5 and 6: for $\alpha \ge 1$ the map $t \mapsto g(t)h(\alpha, F(t))$ is nonincreasing, so the two factors always have opposite signs. Thus condition (ii) in Lemma 4 is satisfied and we have $X_{1:2} \le_{\rm st} X_{1:2}^*$.

Theorem 8  Given $\alpha > 0$. Suppose $s: \mathbb{R} \to \mathbb{R}$ is a twice differentiable, strictly increasing and concave function. Let $X_i \sim GN(\mu_i, \sigma_i^2, \alpha)$ and $X_i^* \sim GN(\mu_i^*, \sigma_i^{*2}, \alpha)$, $i = 1, 2$, be independent random variables. For
$$\begin{pmatrix} 1/\sigma_1 & 1/\sigma_2 \\ s(\mu_1) & s(\mu_2) \end{pmatrix} \in Q_2, \quad\text{if}\quad \begin{pmatrix} 1/\sigma_1 & 1/\sigma_2 \\ s(\mu_1) & s(\mu_2) \end{pmatrix} \succeq^{\rm m} \begin{pmatrix} 1/\sigma_1^* & 1/\sigma_2^* \\ s(\mu_1^*) & s(\mu_2^*) \end{pmatrix},$$
then $X_{2:2} \ge_{\rm st} X_{2:2}^*$.

Proof  We denote $\lambda_i = 1/\sigma_i$, $\lambda_i^* = 1/\sigma_i^*$, $\theta_i = s(\mu_i)$, $\theta_i^* = s(\mu_i^*)$, $i = 1, 2$. We assume that $\lambda_1, \lambda_2, \theta_1, \theta_2$ satisfy $(\lambda_1 - \lambda_2)(\theta_1 - \theta_2) \ge 0$; without loss of generality, let $\lambda_1 \le \lambda_2$ and $\theta_1 \le \theta_2$. Then $\mu_1 \le \mu_2$ since $s$ is increasing. We use Lemma 4 to prove the result. The cumulative distribution function of $X_{2:2}$,
$$F_{2:2}(x) = F(\lambda_1(x-\mu_1))^{\alpha} F(\lambda_2(x-\mu_2))^{\alpha},$$
satisfies condition (i) in Lemma 4. After some calculation we have
$$\frac{\partial F_{2:2}(x)}{\partial \lambda_1} = \alpha(x-\mu_1)F(\lambda_1(x-\mu_1))^{\alpha-1} f(\lambda_1(x-\mu_1))F(\lambda_2(x-\mu_2))^{\alpha},$$
$$\frac{\partial F_{2:2}(x)}{\partial \lambda_2} = \alpha(x-\mu_2)F(\lambda_2(x-\mu_2))^{\alpha-1} f(\lambda_2(x-\mu_2))F(\lambda_1(x-\mu_1))^{\alpha},$$
$$\frac{\partial F_{2:2}(x)}{\partial \theta_1} = -\frac{\alpha\lambda_1}{s'(\mu_1)} F(\lambda_1(x-\mu_1))^{\alpha-1} f(\lambda_1(x-\mu_1))F(\lambda_2(x-\mu_2))^{\alpha},$$
$$\frac{\partial F_{2:2}(x)}{\partial \theta_2} = -\frac{\alpha\lambda_2}{s'(\mu_2)} F(\lambda_2(x-\mu_2))^{\alpha-1} f(\lambda_2(x-\mu_2))F(\lambda_1(x-\mu_1))^{\alpha}.$$
Write $l_i = l(\lambda_i(x-\mu_i))$, where $l(x) = e^{x^2/2}\int_{-\infty}^x e^{-t^2/2}\,\mathrm{d}t$. Then
$$(\lambda_1-\lambda_2)\Big(\frac{\partial F_{2:2}(x)}{\partial \lambda_1} - \frac{\partial F_{2:2}(x)}{\partial \lambda_2}\Big) + (\theta_1-\theta_2)\Big(\frac{\partial F_{2:2}(x)}{\partial \theta_1} - \frac{\partial F_{2:2}(x)}{\partial \theta_2}\Big)$$
$$\stackrel{\rm sgn}{=} (\lambda_1-\lambda_2)\big[(x-\mu_1)l_2 - (x-\mu_2)l_1\big] + (\theta_1-\theta_2)\Big[\frac{\lambda_2}{s'(\mu_2)}l_1 - \frac{\lambda_1}{s'(\mu_1)}l_2\Big]. \quad (2)$$
Using the mean value theorem, we know there exists some $\zeta \in (\mu_1, \mu_2)$ satisfying $s'(\mu_2) \le s'(\zeta) \le s'(\mu_1)$ such that
$$(\theta_1-\theta_2)\frac{\lambda_1}{s'(\mu_1)} = (\mu_1-\mu_2)\lambda_1\frac{s'(\zeta)}{s'(\mu_1)} \ge (\mu_1-\mu_2)\lambda_1 \quad\text{and}\quad (\theta_1-\theta_2)\frac{\lambda_2}{s'(\mu_2)} \le (\mu_1-\mu_2)\lambda_2,$$
by the assumptions on $s$. Combining the assumptions $\lambda_1 \le \lambda_2$ and $\mu_1 \le \mu_2$, we know
$$(2) \le (\lambda_1-\lambda_2)\big[(x-\mu_1)l_2 - (x-\mu_2)l_1\big] + (\mu_1-\mu_2)\big[\lambda_2 l_1 - \lambda_1 l_2\big]$$
$$\le (\lambda_1-\lambda_2)\big[(x-\mu_2)l_2 - (x-\mu_2)l_1\big] + (\mu_1-\mu_2)\lambda_1\big[l_1 - l_2\big]$$
$$= \big[\lambda_1(x-\mu_1) - \lambda_2(x-\mu_2)\big]\big[l_2 - l_1\big] \le 0.$$
Here the last inequality is from Lemma 5: $l$ is nondecreasing, so the two factors always have opposite signs. So Lemma 4 is satisfied. Thus the conclusion.
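In the normal special case $\alpha = 1$ with $s(x) = x$, both theorems can be probed with explicit distribution functions. The sketch below is our own illustration with our own parameter choices: it applies a single $T$-transform $T_{0.7}$ to the parameter matrix and verifies the two stochastic-order conclusions on a grid.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def t_transform(row, w):
    """Apply T_w = w*I + (1-w)*Pi (Pi swaps the two coordinates) to a 2-vector."""
    a, b = row
    return (w * a + (1 - w) * b, (1 - w) * a + w * b)

w = 0.7
xs = [i / 10.0 for i in range(-60, 61)]

# Theorem 7 (minimum): scale row and location row oppositely ordered.
lam, mu = (2.0, 1.0), (0.0, 1.0)
lam_s, mu_s = t_transform(lam, w), t_transform(mu, w)
for x in xs:
    surv = (1 - phi(lam[0] * (x - mu[0]))) * (1 - phi(lam[1] * (x - mu[1])))
    surv_s = (1 - phi(lam_s[0] * (x - mu_s[0]))) * (1 - phi(lam_s[1] * (x - mu_s[1])))
    assert surv <= surv_s + 1e-12     # X_{1:2} <=_st X*_{1:2}

# Theorem 8 (maximum): scale row and location row similarly ordered.
lam, mu = (2.0, 1.0), (1.0, 0.0)
lam_s, mu_s = t_transform(lam, w), t_transform(mu, w)
for x in xs:
    cdf = phi(lam[0] * (x - mu[0])) * phi(lam[1] * (x - mu[1]))
    cdf_s = phi(lam_s[0] * (x - mu_s[0])) * phi(lam_s[1] * (x - mu_s[1]))
    assert cdf <= cdf_s + 1e-12       # X_{2:2} >=_st X*_{2:2}
```

The check illustrates the general pattern of both theorems: a $T$-transform makes the parameters less dispersed, which stochastically increases the minimum and decreases the maximum.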
Remark 9  It can be proved that Theorems 7 and 8 also hold for bivariate non-independent normal distributions. More precisely, if $(X_1, X_2) \sim N(\mu_1, \mu_2, \sigma_1^2, \sigma_2^2, \rho)$ and $(X_1^*, X_2^*) \sim N(\mu_1^*, \mu_2^*, \sigma_1^{*2}, \sigma_2^{*2}, \rho)$, then we have conclusions similar to Theorems 7 and 8. Since the proofs are similar, we omit them here.

Next we generalize Theorems 7 and 8 to the multivariate case.

Theorem 10  Given $\alpha \ge 1$ ($\alpha > 0$). Suppose $s: \mathbb{R} \to \mathbb{R}$ is a twice differentiable, strictly increasing and convex (concave) function. Let $X_i \sim GN(\mu_i, \sigma_i^2, \alpha)$ and $X_i^* \sim GN(\mu_i^*, \sigma_i^{*2}, \alpha)$, $i = 1, 2, \ldots, n$, be independent random variables. For
$$A = \begin{pmatrix} 1/\sigma_1 & 1/\sigma_2 & \cdots & 1/\sigma_n \\ s(\mu_1) & s(\mu_2) & \cdots & s(\mu_n) \end{pmatrix} \in P_n \ (Q_n), \qquad A^* = \begin{pmatrix} 1/\sigma_1^* & 1/\sigma_2^* & \cdots & 1/\sigma_n^* \\ s(\mu_1^*) & s(\mu_2^*) & \cdots & s(\mu_n^*) \end{pmatrix}:$$
(i) if there exists a $T$-transform matrix $T_\omega$ such that $A^* = A T_\omega$, then $X_{1:n} \le_{\rm st} X_{1:n}^*$ ($X_{n:n} \ge_{\rm st} X_{n:n}^*$);
(ii) if there exist $T$-transform matrices $T_{\omega_1}, T_{\omega_2}, \ldots, T_{\omega_k}$, $k \ge 2$, with the same structure such that $A^* = A T_{\omega_1} T_{\omega_2} \cdots T_{\omega_k}$, then $X_{1:n} \le_{\rm st} X_{1:n}^*$ ($X_{n:n} \ge_{\rm st} X_{n:n}^*$);
(iii) if there exist $T$-transform matrices $T_{\omega_1}, T_{\omega_2}, \ldots, T_{\omega_k}$, $k \ge 2$, such that $A T_{\omega_1} T_{\omega_2} \cdots T_{\omega_i} \in P_n$ ($Q_n$) for $i = 1, 2, \ldots, k-1$ and $A^* = A T_{\omega_1} T_{\omega_2} \cdots T_{\omega_k}$, then $X_{1:n} \le_{\rm st} X_{1:n}^*$ ($X_{n:n} \ge_{\rm st} X_{n:n}^*$).

Proof  We focus on $X_{1:n} \le_{\rm st} X_{1:n}^*$, since the case of $X_{n:n} \ge_{\rm st} X_{n:n}^*$ is similar.

(i) Set $T_\omega = \omega I_n + (1-\omega)\Pi$, where $\Pi$ is a permutation matrix that interchanges the $i$th and $j$th coordinates, $i \ne j$. Denote the survival functions of $X_{1:n}$ and $X_{1:n}^*$ by $\bar F_{1:n}(x)$ and $\bar F_{1:n}^*(x)$ respectively. Then
$$\bar F_{1:n}(x) = \prod_{k=i,j}\big[1-F(\lambda_k(x-\mu_k))^{\alpha}\big]\prod_{k\ne i,j}\big[1-F(\lambda_k(x-\mu_k))^{\alpha}\big] \le \prod_{k=i,j}\big[1-F(\lambda_k^*(x-\mu_k^*))^{\alpha}\big]\prod_{k\ne i,j}\big[1-F(\lambda_k(x-\mu_k))^{\alpha}\big] = \bar F_{1:n}^*(x),$$
where the inequality and the second equality are due to Theorem 7 and the fact that the single transformation $T_\omega$ changes only the $i$th and $j$th columns.

(ii) Since the $T_{\omega_i}$, $i = 1, 2, \ldots, k$, have the same structure, the product $T_{\omega_1} T_{\omega_2} \cdots T_{\omega_k}$ is still a $T$-transform matrix with the same structure as $T_{\omega_1}$. Then we have the conclusion according to (i).

(iii) For $j = 1, 2, \ldots, k-1$, let $Z^j = (Z_1^j, Z_2^j, \ldots, Z_n^j)$ be independent random variables with $Z_i^j \sim GN(\mu_i^j, (\sigma_i^j)^2, \alpha)$, $i = 1, 2, \ldots, n$, where
$$\begin{pmatrix} 1/\sigma_1^j & 1/\sigma_2^j & \cdots & 1/\sigma_n^j \\ s(\mu_1^j) & s(\mu_2^j) & \cdots & s(\mu_n^j) \end{pmatrix} = A\, T_{\omega_1} T_{\omega_2} \cdots T_{\omega_j}, \quad j = 1, 2, \ldots, k-1.$$
Then $X_{1:n} \le_{\rm st} Z_{1:n}^1 \le_{\rm st} \cdots \le_{\rm st} Z_{1:n}^{k-1} \le_{\rm st} X_{1:n}^*$, using (i) $k$ times.

Corollary 11  Given $\alpha \ge 1$ ($\alpha > 0$). Suppose $s: \mathbb{R} \to \mathbb{R}$ is a twice differentiable, strictly increasing and convex (concave) function. Let $X_i \sim GN(\mu_i, \sigma_i^2, \alpha)$ and $X_i^* \sim GN(\mu_i^*, \sigma_i^{*2}, \alpha)$, $i = 1, 2, \ldots, n$, be independent random variables whose parameter matrices satisfy the majorization of Theorem 10. If the points $(1/\sigma_1, s(\mu_1)), (1/\sigma_2, s(\mu_2)), \ldots, (1/\sigma_n, s(\mu_n))$ lie on a line with nonpositive (nonnegative) slope, then $X_{1:n} \le_{\rm st} X_{1:n}^*$ ($X_{n:n} \ge_{\rm st} X_{n:n}^*$).

Proof  We have the conclusions according to (iii) in Theorem 10.

Example  Let independent random variables $X_i \sim GN(\mu_i, \sigma_i^2, \alpha)$ and $X_i^* \sim GN(\mu_i^*, \sigma_i^{*2}, \alpha)$, $i = 1, 2, 3$, with $\alpha \ge 1$. Take $s(x) = x$,
$$\begin{pmatrix} 1/\sigma_1 & 1/\sigma_2 & 1/\sigma_3 \\ \mu_1 & \mu_2 & \mu_3 \end{pmatrix} = \begin{pmatrix} 6 & 4 & 2 \\ 1 & 2 & 3 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1/\sigma_1^* & 1/\sigma_2^* & 1/\sigma_3^* \\ \mu_1^* & \mu_2^* & \mu_3^* \end{pmatrix} = \begin{pmatrix} 5.28 & 3.744 & 2.976 \\ 1.36 & 2.128 & 2.512 \end{pmatrix}.$$
If we choose
$$T_{0.9} = \begin{pmatrix} 0.9 & 0 & 0.1 \\ 0 & 1 & 0 \\ 0.1 & 0 & 0.9 \end{pmatrix}, \quad T_{0.8} = \begin{pmatrix} 0.8 & 0.2 & 0 \\ 0.2 & 0.8 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad\text{and}\quad T_{0.7} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0.7 & 0.3 \\ 0 & 0.3 & 0.7 \end{pmatrix},$$
then we have
$$\begin{pmatrix} 6 & 4 & 2 \\ 1 & 2 & 3 \end{pmatrix} T_{0.9} = \begin{pmatrix} 5.6 & 4 & 2.4 \\ 1.2 & 2 & 2.8 \end{pmatrix} \in P_3, \qquad \begin{pmatrix} 6 & 4 & 2 \\ 1 & 2 & 3 \end{pmatrix} T_{0.9} T_{0.8} = \begin{pmatrix} 5.28 & 4.32 & 2.4 \\ 1.36 & 1.84 & 2.8 \end{pmatrix} \in P_3,$$
and
$$\begin{pmatrix} 6 & 4 & 2 \\ 1 & 2 & 3 \end{pmatrix} T_{0.9} T_{0.8} T_{0.7} = \begin{pmatrix} 1/\sigma_1^* & 1/\sigma_2^* & 1/\sigma_3^* \\ \mu_1^* & \mu_2^* & \mu_3^* \end{pmatrix}.$$
Hence $X_{1:3} \le_{\rm st} X_{1:3}^*$.

Second, we give some comparisons based on the location and shape parameters.

Lemma 12  Let $u(\alpha, x) = (1-x^{\alpha})/[\alpha(1-x)x^{\alpha-1}]$, $x \in (0,1)$, $\alpha \in (0, +\infty)$.
(i) For each fixed $x \in (0,1)$, $u(\alpha, x)$ is nondecreasing in $\alpha$;
(ii) for each fixed $\alpha \in (0,1]$, $u(\alpha, x)$ is nondecreasing in $x$, and for each fixed $\alpha \in [1, +\infty)$, $u(\alpha, x)$ is nonincreasing in $x$.

Proof  (i) After some basic calculation we know $\partial u(\alpha, x)/\partial \alpha \stackrel{\rm sgn}{=} -\alpha x^{1-\alpha}\ln x - x^{1-\alpha} + x$, which is nondecreasing in $\alpha$ when $x$ is fixed (its derivative in $\alpha$ is $\alpha x^{1-\alpha}(\ln x)^2 \ge 0$) and vanishes at $\alpha = 0$; hence it is nonnegative. (ii) For fixed $\alpha$, the monotonicity of $u(\alpha, x)$ in $x$ is the same as that of $h(\alpha, x)$ in Lemma 6.

Lemma 13  Let $v(\alpha, x) = x^{\alpha}\ln x/(1-x^{\alpha})$, $x \in (0,1)$, $\alpha \in (0, +\infty)$.
(i) For each fixed $x \in (0,1)$, $v(\alpha, x)$ is nondecreasing in $\alpha$;
(ii) for each fixed $\alpha \in (0, +\infty)$, $v(\alpha, x)$ is nonincreasing in $x$.

Proof  (i) After some calculation we know $\partial v(\alpha, x)/\partial \alpha \ge 0$ for each fixed $x \in (0,1)$. (ii) For fixed $\alpha$, $\partial v(\alpha, x)/\partial x \stackrel{\rm sgn}{=} \alpha\ln x + 1 - x^{\alpha}$, which is nondecreasing in $x$ and vanishes at $x = 1$, hence nonpositive on $(0,1)$.
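The monotonicity claims of Lemmas 6, 12 and 13 are straightforward to probe numerically. The sketch below (our own illustration; the helper names are ours) checks them on a grid:

```python
import math

def h(a, x):
    """Lemma 6: h(alpha, x) = (1 - x**alpha) / ((1 - x) * x**(alpha - 1))."""
    return (1 - x**a) / ((1 - x) * x**(a - 1))

def u(a, x):
    """Lemma 12: u(alpha, x) = h(alpha, x) / alpha."""
    return h(a, x) / a

def v(a, x):
    """Lemma 13: v(alpha, x) = x**alpha * ln(x) / (1 - x**alpha)."""
    return x**a * math.log(x) / (1 - x**a)

grid = [i / 100.0 for i in range(1, 100)]      # x in (0, 1)
alphas = [0.2, 0.5, 1.0, 1.5, 2.0, 5.0]

for a in alphas:
    vals_h = [h(a, x) for x in grid]
    vals_v = [v(a, x) for x in grid]
    if a >= 1:   # h nonincreasing in x when alpha >= 1
        assert all(p >= q - 1e-9 for p, q in zip(vals_h, vals_h[1:]))
    if a <= 1:   # h nondecreasing in x when alpha <= 1
        assert all(p <= q + 1e-9 for p, q in zip(vals_h, vals_h[1:]))
    # v nonincreasing in x for every alpha > 0
    assert all(p >= q - 1e-9 for p, q in zip(vals_v, vals_v[1:]))

for x in (0.1, 0.5, 0.9):
    # u and v nondecreasing in alpha for fixed x
    vals_u = [u(a, x) for a in alphas]
    vals_v = [v(a, x) for a in alphas]
    assert all(p <= q + 1e-9 for p, q in zip(vals_u, vals_u[1:]))
    assert all(p <= q + 1e-9 for p, q in zip(vals_v, vals_v[1:]))
```

Such a grid check is, of course, only a sanity test of the lemmas, not a proof.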
Theorem 14  Suppose $s: \mathbb{R} \to \mathbb{R}$ is a twice differentiable, strictly increasing and convex function, and $t: [1, +\infty) \to \mathbb{R}$ is a twice differentiable, strictly increasing and convex function. Let $X_i \sim GN(\mu_i, \sigma^2, \alpha_i)$ and $X_i^* \sim GN(\mu_i^*, \sigma^2, \alpha_i^*)$, $i = 1, 2$, be independent random variables, where $\mu_i, \mu_i^* \in \mathbb{R}$ and $\alpha_i, \alpha_i^* \in [1, +\infty)$. For
$$\begin{pmatrix} t(\alpha_1) & t(\alpha_2) \\ s(\mu_1) & s(\mu_2) \end{pmatrix} \in P_2, \quad\text{if}\quad \begin{pmatrix} t(\alpha_1) & t(\alpha_2) \\ s(\mu_1) & s(\mu_2) \end{pmatrix} \succeq^{\rm m} \begin{pmatrix} t(\alpha_1^*) & t(\alpha_2^*) \\ s(\mu_1^*) & s(\mu_2^*) \end{pmatrix},$$
then $X_{1:2} \le_{\rm st} X_{1:2}^*$.

Proof  For clarity we denote $\theta_i = s(\mu_i)$, $\theta_i^* = s(\mu_i^*)$, $\eta_i = t(\alpha_i)$, $\eta_i^* = t(\alpha_i^*)$, $i = 1, 2$. Without loss of generality we assume $\theta_1 \le \theta_2$ and $\eta_1 \le \eta_2$. Then $\mu_1 \le \mu_2$ and $\alpha_1 \le \alpha_2$ by the monotonicity of $s$ and $t$. We also assume $\sigma = 1$. We use Lemma 4 to prove the result. The survival function of $X_{1:2}$,
$$\bar F_{1:2}(x) = \big[1 - F(x-\mu_1)^{\alpha_1}\big]\big[1 - F(x-\mu_2)^{\alpha_2}\big],$$
satisfies condition (i) in Lemma 4. After some calculation we know
$$\frac{\partial \bar F_{1:2}(x)}{\partial \theta_1} = \frac{\alpha_1}{s'(\mu_1)} F(x-\mu_1)^{\alpha_1-1} f(x-\mu_1)\big[1 - F(x-\mu_2)^{\alpha_2}\big],$$
$$\frac{\partial \bar F_{1:2}(x)}{\partial \theta_2} = \frac{\alpha_2}{s'(\mu_2)} F(x-\mu_2)^{\alpha_2-1} f(x-\mu_2)\big[1 - F(x-\mu_1)^{\alpha_1}\big].$$
Then
$$(\theta_1-\theta_2)\Big(\frac{\partial \bar F_{1:2}(x)}{\partial \theta_1} - \frac{\partial \bar F_{1:2}(x)}{\partial \theta_2}\Big) \stackrel{\rm sgn}{=} (\theta_1-\theta_2)\Big[\frac{1}{s'(\mu_1)}g(x-\mu_2)u(\alpha_2, F(x-\mu_2)) - \frac{1}{s'(\mu_2)}g(x-\mu_1)u(\alpha_1, F(x-\mu_1))\Big], \quad (3)$$
where $g(x) = e^{x^2/2}\int_x^{+\infty} e^{-t^2/2}\,\mathrm{d}t$ and $u(\alpha, x) = (1-x^{\alpha})/[\alpha(1-x)x^{\alpha-1}]$. Using the mean value theorem and the fact that $s$ is twice differentiable, strictly increasing and convex on $\mathbb{R}$, we know
$$(\theta_1-\theta_2)/s'(\mu_1) \le \mu_1-\mu_2, \qquad (\theta_1-\theta_2)/s'(\mu_2) \ge \mu_1-\mu_2,$$
and hence
$$(3) \le (\mu_1-\mu_2)\big[g(x-\mu_2)u(\alpha_2, F(x-\mu_2)) - g(x-\mu_1)u(\alpha_1, F(x-\mu_1))\big].$$
Due to Lemma 5, Lemma 12 and the assumptions on $\alpha_1 \le \alpha_2$ and $\mu_1 \le \mu_2$, we have $g(x-\mu_2) \ge g(x-\mu_1)$ and $u(\alpha_2, F(x-\mu_2)) \ge u(\alpha_1, F(x-\mu_2)) \ge u(\alpha_1, F(x-\mu_1))$. So
$$(\mu_1-\mu_2)\big[g(x-\mu_2)u(\alpha_2, F(x-\mu_2)) - g(x-\mu_1)u(\alpha_1, F(x-\mu_1))\big] \le 0.$$

Similarly, after some calculation we know
$$\frac{\partial \bar F_{1:2}(x)}{\partial \eta_1} = -\frac{1}{t'(\alpha_1)} F(x-\mu_1)^{\alpha_1}\ln F(x-\mu_1)\big[1 - F(x-\mu_2)^{\alpha_2}\big],$$
$$\frac{\partial \bar F_{1:2}(x)}{\partial \eta_2} = -\frac{1}{t'(\alpha_2)} F(x-\mu_2)^{\alpha_2}\ln F(x-\mu_2)\big[1 - F(x-\mu_1)^{\alpha_1}\big].$$
Then
$$(\eta_1-\eta_2)\Big(\frac{\partial \bar F_{1:2}(x)}{\partial \eta_1} - \frac{\partial \bar F_{1:2}(x)}{\partial \eta_2}\Big) \stackrel{\rm sgn}{=} (\eta_1-\eta_2)\Big[-\frac{1}{t'(\alpha_1)}v(\alpha_1, F(x-\mu_1)) + \frac{1}{t'(\alpha_2)}v(\alpha_2, F(x-\mu_2))\Big], \quad (4)$$
where $v(\alpha, x) = x^{\alpha}\ln x/(1-x^{\alpha})$. Using the mean value theorem, we know $(\eta_1-\eta_2)/t'(\alpha_1) \le \alpha_1-\alpha_2$ and $(\eta_1-\eta_2)/t'(\alpha_2) \ge \alpha_1-\alpha_2$, since $t$ is strictly increasing and convex. Then
$$(4) \le (\alpha_1-\alpha_2)\big[-v(\alpha_1, F(x-\mu_1)) + v(\alpha_2, F(x-\mu_2))\big].$$
Due to Lemma 13 and the assumptions $\alpha_1 \le \alpha_2$ and $\mu_1 \le \mu_2$, we know $v(\alpha_1, F(x-\mu_1)) \le v(\alpha_2, F(x-\mu_1)) \le v(\alpha_2, F(x-\mu_2))$. So
$$(\alpha_1-\alpha_2)\big[-v(\alpha_1, F(x-\mu_1)) + v(\alpha_2, F(x-\mu_2))\big] \le 0.$$
Thus
$$(\eta_1-\eta_2)\Big(\frac{\partial \bar F_{1:2}(x)}{\partial \eta_1} - \frac{\partial \bar F_{1:2}(x)}{\partial \eta_2}\Big) + (\theta_1-\theta_2)\Big(\frac{\partial \bar F_{1:2}(x)}{\partial \theta_1} - \frac{\partial \bar F_{1:2}(x)}{\partial \theta_2}\Big) \le 0.$$
According to Lemma 4, we have $X_{1:2} \le_{\rm st} X_{1:2}^*$.

Theorem 15  Suppose $s: \mathbb{R} \to \mathbb{R}$ is a twice differentiable, strictly increasing and concave function, and $t: (0, +\infty) \to \mathbb{R}$ is a twice differentiable, strictly increasing and concave function. Let $X_i \sim GN(\mu_i, \sigma^2, \alpha_i)$ and $X_i^* \sim GN(\mu_i^*, \sigma^2, \alpha_i^*)$, $i = 1, 2$, be independent random variables. For
$$\begin{pmatrix} t(\alpha_1) & t(\alpha_2) \\ s(\mu_1) & s(\mu_2) \end{pmatrix} \in Q_2, \quad\text{if}\quad \begin{pmatrix} t(\alpha_1) & t(\alpha_2) \\ s(\mu_1) & s(\mu_2) \end{pmatrix} \succeq^{\rm m} \begin{pmatrix} t(\alpha_1^*) & t(\alpha_2^*) \\ s(\mu_1^*) & s(\mu_2^*) \end{pmatrix},$$
then $X_{2:2} \ge_{\rm st} X_{2:2}^*$.

Proof  We denote $\theta_i = s(\mu_i)$, $\theta_i^* = s(\mu_i^*)$, $\eta_i = t(\alpha_i)$, $\eta_i^* = t(\alpha_i^*)$, $i = 1, 2$. Without loss of generality we assume $\theta_1 \ge \theta_2$ and $\eta_1 \ge \eta_2$. Then $\mu_1 \ge \mu_2$ and $\alpha_1 \ge \alpha_2$ by the monotonicity of $s$ and $t$. We also assume $\sigma = 1$. We use Lemma 4 to prove the result. The cumulative distribution function of $X_{2:2}$,
$$F_{2:2}(x) = F(x-\mu_1)^{\alpha_1} F(x-\mu_2)^{\alpha_2},$$
satisfies condition (i) in Lemma 4. After some calculation we know
$$\frac{\partial F_{2:2}(x)}{\partial \theta_1} = -\frac{\alpha_1}{s'(\mu_1)} F(x-\mu_1)^{\alpha_1-1} f(x-\mu_1) F(x-\mu_2)^{\alpha_2},$$
$$\frac{\partial F_{2:2}(x)}{\partial \theta_2} = -\frac{\alpha_2}{s'(\mu_2)} F(x-\mu_2)^{\alpha_2-1} f(x-\mu_2) F(x-\mu_1)^{\alpha_1}.$$
Then
$$(\theta_1-\theta_2)\Big(\frac{\partial F_{2:2}(x)}{\partial \theta_1} - \frac{\partial F_{2:2}(x)}{\partial \theta_2}\Big) \stackrel{\rm sgn}{=} (\theta_1-\theta_2)\Big[-\frac{\alpha_1}{s'(\mu_1)\,l(x-\mu_1)} + \frac{\alpha_2}{s'(\mu_2)\,l(x-\mu_2)}\Big], \quad (5)$$
where $l(x) = e^{x^2/2}\int_{-\infty}^x e^{-t^2/2}\,\mathrm{d}t$, so that $f(x)/F(x) = 1/l(x)$. Using the mean value theorem and the fact that $s$ is strictly increasing and concave, we know $(\theta_1-\theta_2)/s'(\mu_1) \ge \mu_1-\mu_2$ and $(\theta_1-\theta_2)/s'(\mu_2) \le \mu_1-\mu_2$. So
$$(5) \le (\mu_1-\mu_2)\Big[-\frac{\alpha_1}{l(x-\mu_1)} + \frac{\alpha_2}{l(x-\mu_2)}\Big] \le (\mu_1-\mu_2)\Big[-\frac{\alpha_2}{l(x-\mu_1)} + \frac{\alpha_2}{l(x-\mu_2)}\Big] \le 0,$$
using the assumption $\alpha_1 \ge \alpha_2$ and Lemma 5.

Similarly, after some calculation we know
$$\frac{\partial F_{2:2}(x)}{\partial \eta_1} = \frac{1}{t'(\alpha_1)} \ln F(x-\mu_1)\, F(x-\mu_1)^{\alpha_1} F(x-\mu_2)^{\alpha_2},$$
$$\frac{\partial F_{2:2}(x)}{\partial \eta_2} = \frac{1}{t'(\alpha_2)} \ln F(x-\mu_2)\, F(x-\mu_2)^{\alpha_2} F(x-\mu_1)^{\alpha_1}.$$
Then
$$(\eta_1-\eta_2)\Big(\frac{\partial F_{2:2}(x)}{\partial \eta_1} - \frac{\partial F_{2:2}(x)}{\partial \eta_2}\Big) \stackrel{\rm sgn}{=} (\eta_1-\eta_2)\Big[\frac{1}{t'(\alpha_1)}\ln F(x-\mu_1) - \frac{1}{t'(\alpha_2)}\ln F(x-\mu_2)\Big].$$
Using the mean value theorem, we know $(\eta_1-\eta_2)/t'(\alpha_1) \ge \alpha_1-\alpha_2$ and $(\eta_1-\eta_2)/t'(\alpha_2) \le \alpha_1-\alpha_2$, since $t$ is twice differentiable, strictly increasing and concave. So
$$(\eta_1-\eta_2)\Big(\frac{\partial F_{2:2}(x)}{\partial \eta_1} - \frac{\partial F_{2:2}(x)}{\partial \eta_2}\Big) \le (\alpha_1-\alpha_2)\big[\ln F(x-\mu_1) - \ln F(x-\mu_2)\big] \le 0,$$
using the assumptions $\alpha_1 \ge \alpha_2$ and $\mu_1 \ge \mu_2$. Combining the two parts, we know Lemma 4 is satisfied. Thus the conclusion.
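Theorem 14 can be probed in the same closed-form fashion as Theorems 7 and 8. The sketch below (our own parameter choice, not from the paper) takes $s$ and $t$ to be identity maps, applies one $T$-transform to the matrix whose rows are the shape and location parameters, and compares the survival functions of the two minima on a grid:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def surv_min(x, mus, alphas):
    """Survival function of the minimum of independent X_i ~ GN(mu_i, 1, alpha_i)."""
    s = 1.0
    for m, a in zip(mus, alphas):
        s *= 1.0 - phi(x - m) ** a
    return s

w = 0.7
alphas, mus = (1.0, 2.0), (0.0, 1.0)   # shape row and location row, arranged in the same order
alphas_s = (w * alphas[0] + (1 - w) * alphas[1], (1 - w) * alphas[0] + w * alphas[1])
mus_s = (w * mus[0] + (1 - w) * mus[1], (1 - w) * mus[0] + w * mus[1])

for i in range(-60, 61):
    x = i / 10.0
    # X_{1:2} <=_st X*_{1:2}: the transformed (less dispersed) parameters give the larger survival function
    assert surv_min(x, mus, alphas) <= surv_min(x, mus_s, alphas_s) + 1e-12
```

As before, this is only a numerical illustration of one instance of the theorem.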
Remark 16  Theorems 14 and 15 can also be generalized to the multivariate case as in Theorem 10. We omit the details for clarity.

4. Results Based on Vector Majorization Comparisons

In this section we give some conditions for stochastic ordering results based on parameter vector majorization comparisons. In some cases these conditions are both sufficient and necessary.

Theorem 17  Suppose $s: \mathbb{R} \to \mathbb{R}$ is a twice differentiable and strictly increasing function. Let $X_i \sim GN(\mu_i, \sigma^2, \alpha)$ and $X_i^* \sim GN(\mu_i^*, \sigma^2, \alpha)$, $i = 1, 2, \ldots, n$, be independent random variables. If
$$(s(\mu_1), s(\mu_2), \ldots, s(\mu_n)) \succeq^{\rm m} (s(\mu_1^*), s(\mu_2^*), \ldots, s(\mu_n^*)),$$
then
(i) $X_{1:n} \le_{\rm st} X_{1:n}^*$ when $\alpha \ge 1$ and $s$ is convex;
(ii) $X_{n:n} \ge_{\rm st} X_{n:n}^*$ when $\alpha > 0$ and $s$ is concave.

Theorem 18  Let $X_i \sim GN(\mu, \sigma_i^2, \alpha)$ and $X_i^* \sim GN(\mu, \sigma_i^{*2}, \alpha)$, $i = 1, 2, \ldots, n$, be independent random variables. If
$$(1/\sigma_1, 1/\sigma_2, \ldots, 1/\sigma_n) \succeq^{\rm m} (1/\sigma_1^*, 1/\sigma_2^*, \ldots, 1/\sigma_n^*),$$
then
(i) $X_{1:n} \le_{\rm st} X_{1:n}^*$ when $\alpha \ge 1$;
(ii) $X_{n:n} \ge_{\rm st} X_{n:n}^*$ when $\alpha > 0$.

Theorems 17 and 18 can be proved by using (iii) in Theorem 10. For example, if we take $\sigma_1 = \sigma_2 = \cdots = \sigma_n = \sigma$ in Theorem 10, then we get the conclusions in Theorem 17. Theorems 17 and 18 generalize [8]. In the following special case we show that the sufficient condition in Theorem 18 can be strengthened to a necessary one.

Theorem 19  Given $\alpha \ge 1$ ($\alpha > 0$). Let $X_1, X_2, \ldots, X_n$ be independent random variables with $X_i \sim GN(\mu, \sigma_1^2, \alpha)$, $i = 1, 2, \ldots, p$, and $X_j \sim GN(\mu, \sigma_2^2, \alpha)$, $j = p+1, p+2, \ldots, n$. Also let $X_1^*, X_2^*, \ldots, X_n^*$ be independent random variables with $X_i^* \sim GN(\mu, \sigma_1^{*2}, \alpha)$, $i = 1, 2, \ldots, p$, and $X_j^* \sim GN(\mu, \sigma_2^{*2}, \alpha)$, $j = p+1, p+2, \ldots, n$. Suppose $\sigma_1 \le \sigma_1^*$ and $\sigma_2^* \le \sigma_2$. Then the following two statements are equivalent:
(i) $\big(\underbrace{1/\sigma_1, \ldots, 1/\sigma_1}_{p}, \underbrace{1/\sigma_2, \ldots, 1/\sigma_2}_{n-p}\big) \succeq^{\rm m} \big(\underbrace{1/\sigma_1^*, \ldots, 1/\sigma_1^*}_{p}, \underbrace{1/\sigma_2^*, \ldots, 1/\sigma_2^*}_{n-p}\big)$;
(ii) $X_{1:n} \le_{\rm st} X_{1:n}^*$ ($X_{n:n} \ge_{\rm st} X_{n:n}^*$).

Proof  We only need to prove (ii) $\Rightarrow$ (i), since (i) $\Rightarrow$ (ii) has been proved in Theorem 18. For clarity we denote $\lambda_i = 1/\sigma_i$, $\lambda_i^* = 1/\sigma_i^*$, $i = 1, 2$, write $q = n - p$, and assume $\mu = 0$. If $X_{1:n} \le_{\rm st} X_{1:n}^*$, then for any $x \in \mathbb{R}$ we have
$$\big[1 - F(\lambda_1 x)^{\alpha}\big]^p \big[1 - F(\lambda_2 x)^{\alpha}\big]^q \le \big[1 - F(\lambda_1^* x)^{\alpha}\big]^p \big[1 - F(\lambda_2^* x)^{\alpha}\big]^q.$$
By using Taylor's expansion at $x = 0$ and some rearrangement, we have for any $x \in \mathbb{R}$,
$$(p\lambda_1 + q\lambda_2)x + o(x) \ge (p\lambda_1^* + q\lambda_2^*)x + o(x).$$
Since $x$ is arbitrary (taking $x > 0$ and $x < 0$), we then get
$$p\lambda_1 + q\lambda_2 = p\lambda_1^* + q\lambda_2^*.$$
Combining the assumption $\lambda_1 \ge \lambda_1^*$, $\lambda_2 \le \lambda_2^*$, we have the conclusion. If $X_{n:n} \ge_{\rm st} X_{n:n}^*$, we can get the conclusion similarly; we omit the proof here.

Especially, when $n = 2$ the assumption $\sigma_1 \le \sigma_1^*$, $\sigma_2^* \le \sigma_2$ in Theorem 19 can be removed. This is stated as follows.

Theorem 20  Given $\alpha \ge 1$ ($\alpha > 0$). Let $X_i \sim GN(\mu, \sigma_i^2, \alpha)$ and $X_i^* \sim GN(\mu, \sigma_i^{*2}, \alpha)$, $i = 1, 2$, be independent random variables. Then the following two statements are equivalent:
(i) $(1/\sigma_1, 1/\sigma_2) \succeq^{\rm m} (1/\sigma_1^*, 1/\sigma_2^*)$;
(ii) $X_{1:2} \le_{\rm st} X_{1:2}^*$ ($X_{2:2} \ge_{\rm st} X_{2:2}^*$).

Proof  The (i) $\Rightarrow$ (ii) part has been proved in Theorem 19. We prove the (ii) $\Rightarrow$ (i) part. Along with the proof and notations of Theorem 19, we assume $\lambda_1 \ge \lambda_2$ and $\lambda_1^* \ge \lambda_2^*$ for convenience, and we only need to prove $\lambda_1 \ge \lambda_1^*$. We use proof by contradiction.

When $\alpha \ge 1$, if $\lambda_1 < \lambda_1^*$, it implies $(\lambda_1^*, \lambda_2^*) \succeq^{\rm m} (\lambda_1, \lambda_2)$, since $\lambda_1 + \lambda_2 = \lambda_1^* + \lambda_2^*$ by the Taylor argument of Theorem 19. Then we have $X_{1:2}^* \le_{\rm st} X_{1:2}$ and $X_{2:2}^* \ge_{\rm st} X_{2:2}$ according to the (i) $\Rightarrow$ (ii) part. Combining the assumption $X_{1:2} \le_{\rm st} X_{1:2}^*$, we have $X_{1:2} \stackrel{\rm d}{=} X_{1:2}^*$, where $X \stackrel{\rm d}{=} Y$ means $X$ and $Y$ have the same distribution. Then for any $x \in \mathbb{R}$,
$$\big[1 - F(\lambda_1 x)^{\alpha}\big]\big[1 - F(\lambda_2 x)^{\alpha}\big] = \big[1 - F(\lambda_1^* x)^{\alpha}\big]\big[1 - F(\lambda_2^* x)^{\alpha}\big]$$
and
$$F(\lambda_1 x)^{\alpha} F(\lambda_2 x)^{\alpha} \ge F(\lambda_1^* x)^{\alpha} F(\lambda_2^* x)^{\alpha},$$
which lead to, for any $x \in \mathbb{R}$,
$$F(\lambda_1 x)^{\alpha} + F(\lambda_2 x)^{\alpha} \ge F(\lambda_1^* x)^{\alpha} + F(\lambda_2^* x)^{\alpha}.$$
According to the mean value theorem, there exist some $\xi$ and $\eta$ satisfying $\lambda_1 < \xi < \lambda_1^*$, $\lambda_2^* < \eta < \lambda_2$ such that for any $x \in \mathbb{R}$,
$$\alpha F(\xi x)^{\alpha-1} f(\xi x)(\lambda_1 - \lambda_1^*)x \ge \alpha F(\eta x)^{\alpha-1} f(\eta x)(\lambda_2^* - \lambda_2)x.$$
Since $\lambda_1 + \lambda_2 = \lambda_1^* + \lambda_2^*$ and $x$ is arbitrary, taking $x = 1, -1$ and using the symmetry of the density function $f$, we have
$$F(\xi)^{\alpha-1} f(\xi) \le F(\eta)^{\alpha-1} f(\eta) \quad\text{and}\quad \big[1 - F(\xi)\big]^{\alpha-1} f(\xi) \ge \big[1 - F(\eta)\big]^{\alpha-1} f(\eta).$$
Then we get $\big[F(\xi)/(1 - F(\xi))\big]^{\alpha-1} \le \big[F(\eta)/(1 - F(\eta))\big]^{\alpha-1}$. Due to the monotone increase of the function $[x/(1-x)]^{\alpha-1}$ on $x \in (0,1)$ when $\alpha \ge 1$, we then have $\xi \le \eta$. But $\xi > \lambda_1 \ge \lambda_2 > \eta$, a contradiction. Hence $\lambda_1 \ge \lambda_1^*$ and $(\lambda_1, \lambda_2) \succeq^{\rm m} (\lambda_1^*, \lambda_2^*)$.

Similarly, when $\alpha > 0$, if $X_{2:2} \ge_{\rm st} X_{2:2}^*$, we then have $X_{2:2} \stackrel{\rm d}{=} X_{2:2}^*$ under the assumption $\lambda_1 < \lambda_1^*$. This means for any $x \in \mathbb{R}$, $F(\lambda_1 x) F(\lambda_2 x) = F(\lambda_1^* x) F(\lambda_2^* x)$. Taking $x = 1$ and $-1$, we have
$$F(\lambda_1) F(\lambda_2) = F(\lambda_1^*) F(\lambda_2^*) \quad\text{and}\quad \big[1 - F(\lambda_1)\big]\big[1 - F(\lambda_2)\big] = \big[1 - F(\lambda_1^*)\big]\big[1 - F(\lambda_2^*)\big],$$
which lead to $\lambda_1 = \lambda_1^*$ or $\lambda_1 = \lambda_2^*$. This is a contradiction.

Remark 21  When $\alpha \ge 1$, under the setting of Theorem 20, it is impossible to improve the componentwise comparisons to the multivariate case $(X_{1:2}, X_{2:2}) \le_{\rm st} (X_{1:2}^*, X_{2:2}^*)$ (or the converse) unless $(X_1, X_2)$ and $(X_1^*, X_2^*)$ have the same distribution. In fact, if $(X_{1:2}, X_{2:2}) \le_{\rm st} (X_{1:2}^*, X_{2:2}^*)$, then $X_{1:2} \le_{\rm st} X_{1:2}^*$ and $X_{2:2} \le_{\rm st} X_{2:2}^*$ according to [12]. But these lead to $(1/\sigma_1, 1/\sigma_2) = (1/\sigma_1^*, 1/\sigma_2^*)$ due to Theorem 20. Hence $(X_{1:2}, X_{2:2}) \le_{\rm st} (X_{1:2}^*, X_{2:2}^*)$ if and only if $(X_1, X_2) \stackrel{\rm d}{=} (X_1^*, X_2^*)$.

Remark 22  Under the setting of Theorem 20, it is impossible to improve the usual stochastic orders to results such as $X_{1:2} \le_{\rm hr} X_{1:2}^*$, $X_{1:2} \le_{\rm rh} X_{1:2}^*$, $X_{2:2} \ge_{\rm hr} X_{2:2}^*$ or $X_{2:2} \ge_{\rm rh} X_{2:2}^*$, unless $(X_1, X_2)$ and $(X_1^*, X_2^*)$ have the same distributions. In fact, if $X_{2:2} \ge_{\rm rh} X_{2:2}^*$, then we have $X_{2:2} \ge_{\rm st} X_{2:2}^*$, which implies $1/\sigma_1 + 1/\sigma_2 = 1/\sigma_1^* + 1/\sigma_2^*$ due to the proof of Theorem 19. Meanwhile, we have $\tilde r_{2:2}(x) \ge \tilde r_{2:2}^*(x)$ for any $x \in \mathbb{R}$, where $\tilde r_{2:2}(x)$ and $\tilde r_{2:2}^*(x)$ are the reversed hazard rate functions of $X_{2:2}$ and $X_{2:2}^*$ respectively. Using Taylor's expansion of the reversed hazard rate functions at $x = 0$ and the fact that $1/\sigma_1 + 1/\sigma_2 = 1/\sigma_1^* + 1/\sigma_2^*$, we have for any $x \in \mathbb{R}$,
$$-\big(1/\sigma_1^2 + 1/\sigma_2^2\big)x + o(x) \ge -\big(1/\sigma_1^{*2} + 1/\sigma_2^{*2}\big)x + o(x).$$
Since $x$ is arbitrary, we get $1/\sigma_1^2 + 1/\sigma_2^2 = 1/\sigma_1^{*2} + 1/\sigma_2^{*2}$, which leads to $\sigma_1 = \sigma_1^*$ or $\sigma_1 = \sigma_2^*$. Hence $X_{2:2} \ge_{\rm rh} X_{2:2}^*$ if and only if $(X_1, X_2) \stackrel{\rm d}{=} (X_1^*, X_2^*)$. The other cases are similar and we omit the proofs here.

Remark 23  The conclusion of Theorem 20 also holds in the non-independent bivariate normal case. Let random vectors $(X_1, X_2) \sim N(\mu, \mu, \sigma_1^2, \sigma_2^2, \rho)$ and $(X_1^*, X_2^*) \sim N(\mu, \mu, \sigma_1^{*2}, \sigma_2^{*2}, \rho)$, where $\mu \in \mathbb{R}$, $\sigma_i, \sigma_i^* \in \mathbb{R}_+$ and $\rho \in (-1, 1)$. Then the following three statements are equivalent: (i) $(1/\sigma_1, 1/\sigma_2) \succeq^{\rm m} (1/\sigma_1^*, 1/\sigma_2^*)$; (ii) $X_{1:2} \le_{\rm st} X_{1:2}^*$; (iii) $X_{2:2} \ge_{\rm st} X_{2:2}^*$. These partially strengthen [12]. Since the proof is similar to that of Theorem 20, we omit it here.

When the $\sigma_i^*$, $i = 1, 2, \ldots, n$, in Theorem 18 are all equal, say $\sigma^*$, the condition $(1/\sigma_1, 1/\sigma_2, \ldots, 1/\sigma_n) \succeq^{\rm m} (1/\sigma_1^*, 1/\sigma_2^*, \ldots, 1/\sigma_n^*)$ reduces to $\sigma^* = n\big/\sum_{i=1}^n (1/\sigma_i)$. In this case the condition is both sufficient and necessary.
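The harmonic-mean reduction just described, together with the two ordering conclusions it yields, can be checked numerically. In the sketch below (parameter values are our own choice), $\sigma^*$ is the harmonic mean of the $\sigma_i$, so that $1/\sigma^*$ is the arithmetic mean of the $\lambda_i = 1/\sigma_i$:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

sigmas = [0.5, 1.0, 2.0]
n = len(sigmas)
lams = [1.0 / s for s in sigmas]        # lambda_i = 1/sigma_i
sigma_star = n / sum(lams)              # harmonic mean of the sigma_i
lam_star = 1.0 / sigma_star             # arithmetic mean of the lambda_i

alpha = 2.0                             # any alpha >= 1 covers both claims below
for i in range(-60, 61):
    x = i / 10.0
    cdf_max = math.prod(phi(l * x) ** alpha for l in lams)
    surv_min = math.prod(1.0 - phi(l * x) ** alpha for l in lams)
    # X_{n:n} >=_st X*_{n:n}: the heterogeneous maximum has the smaller CDF
    assert cdf_max <= phi(lam_star * x) ** (n * alpha) + 1e-12
    # X_{1:n} <=_st X*_{1:n}: the heterogeneous minimum has the smaller survival function
    assert surv_min <= (1.0 - phi(lam_star * x) ** alpha) ** n + 1e-12
```

The first assertion is exactly Jensen's inequality applied to the concave function $\ln F$, which is also the key step in the proof of Theorem 25 below.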
Theorem 24  Given $\alpha \ge 1$ ($\alpha > 0$). Let $X_i \sim GN(\mu, \sigma_i^2, \alpha)$ and $X_i^* \sim GN(\mu, \sigma^{*2}, \alpha)$, $i = 1, 2, \ldots, n$, be independent random variables. Then the following statements are equivalent:
(i) $\sigma^* = n\big/\sum_{i=1}^n (1/\sigma_i)$;
(ii) $X_{1:n} \le_{\rm st} X_{1:n}^*$ ($X_{n:n} \ge_{\rm st} X_{n:n}^*$).

Proof  The part (i) $\Rightarrow$ (ii) has been proved in Theorem 18. We only prove (ii) $\Rightarrow$ (i). For clarity we denote $\lambda^* = 1/\sigma^*$, $\lambda_i = 1/\sigma_i$, $i = 1, 2, \ldots, n$, and assume $\mu = 0$. If $X_{1:n} \le_{\rm st} X_{1:n}^*$, then for any $x \in \mathbb{R}$,
$$\prod_{i=1}^n \big[1 - F(\lambda_i x)^{\alpha}\big] \le \big[1 - F(\lambda^* x)^{\alpha}\big]^n.$$
By using Taylor's expansion at zero we have
$$(\lambda_1 + \lambda_2 + \cdots + \lambda_n)x + o(x) \ge n\lambda^* x + o(x).$$
Since $x$ is arbitrary, we have $\lambda^* = \sum_{i=1}^n \lambda_i/n$. If $X_{n:n} \ge_{\rm st} X_{n:n}^*$, we can get the conclusion similarly.

Theorem 25  Given $\alpha_i, \alpha_i^* \ge 1$. Let $X_i \sim GN(\mu, \sigma_i^2, \alpha_i)$ and $X_i^* \sim GN(\mu, \sigma^{*2}, \alpha_i^*)$, $i = 1, 2, \ldots, n$, be independent random variables. Suppose $\beta = \sum_{i=1}^n \alpha_i = \sum_{i=1}^n \alpha_i^*$. Then the following statements are equivalent:
(i) $\sigma^* = \beta\big/\sum_{i=1}^n (\alpha_i/\sigma_i)$;
(ii) $X_{n:n} \ge_{\rm st} X_{n:n}^*$.

Proof  For clarity we denote $\lambda^* = 1/\sigma^*$, $\lambda_i = 1/\sigma_i$, $i = 1, 2, \ldots, n$, and assume $\mu = 0$.

(i) $\Rightarrow$ (ii): If $\lambda^* = \sum_{i=1}^n \alpha_i\lambda_i/\beta$, we need to prove for any $x \in \mathbb{R}$,
$$F(\lambda^* x)^{\beta} \ge \prod_{i=1}^n F(\lambda_i x)^{\alpha_i}.$$
Taking logarithms and making use of the assumption $\beta = \sum_{i=1}^n \alpha_i = \sum_{i=1}^n \alpha_i^*$, it is equivalent to prove for any $x \in \mathbb{R}$,
$$\beta \ln F\Big(\sum_{j=1}^n \frac{\alpha_j}{\beta}\lambda_j x\Big) \ge \sum_{i=1}^n \alpha_i \ln F(\lambda_i x).$$
Due to Lemma 5, $\ln F$ is concave. Then we have the conclusion using Jensen's inequality.

(ii) $\Rightarrow$ (i): If $X_{n:n} \ge_{\rm st} X_{n:n}^*$, then $F(\lambda^* x)^{\beta} \ge \prod_{i=1}^n F(\lambda_i x)^{\alpha_i}$. By Taylor's expansion at zero we have
$$\beta\lambda^* x + o(x) \ge \sum_{i=1}^n \alpha_i\lambda_i x + o(x), \quad x \in \mathbb{R}.$$
Thus the conclusion.

Remark 26  Theorem 25 supplies a lower bound for the survival function of $X_{n:n}$. More precisely, if $X_i \sim GN(\mu, \sigma_i^2, \alpha_i)$, $i = 1, 2, \ldots, n$, are independent random variables, then for any $x \in \mathbb{R}$,
$$\bar F_{n:n}(x) \ge 1 - F\bigg(\frac{\sum\limits_{i=1}^n \alpha_i/\sigma_i}{\sum\limits_{j=1}^n \alpha_j}\,(x-\mu)\bigg)^{\beta}.$$

Remark 27  Under the setting of Theorem 25, it is impossible to improve the usual stochastic order to the hazard rate order. In fact, if $X_{n:n} \ge_{\rm hr} X_{n:n}^*$, then using Taylor's expansion we have $\sigma^{*2} = \beta\big/\sum_{i=1}^n (\alpha_i/\sigma_i^2)$. However, this is a contradiction, since the hazard rate order is stronger than the usual stochastic order, which implies $\sigma^* = \beta\big/\sum_{i=1}^n (\alpha_i/\sigma_i)$ due to Theorem 25.

References

[1] Pledger G, Proschan F. Comparisons of order statistics and of spacings from heterogeneous distributions[M]// Rustagi J S. Optimizing Methods in Statistics. New York: Academic Press, 1971: 89-113.
[2] Boland P J, El-Neweihi E, Proschan F. Applications of the hazard rate ordering in reliability and order statistics[J]. J. Appl. Probab., 1994, 31(1): 180-192.
[3] Zhao P, Balakrishnan N. Some characterization results for parallel systems with two heterogeneous exponential components[J]. Statistics, 2011, 45(6): 593-604.
[4] Zhao P, Balakrishnan N. New results on comparisons of parallel systems with heterogeneous gamma components[J]. Statist. Probab. Lett., 2011, 81(1): 36-44.
[5] Zhao P, Balakrishnan N. Stochastic comparisons of largest order statistics from multiple-outlier exponential models[J]. Probab. Engrg. Inform. Sci., 2012, 26(2): 159-182.
[6] Zhao P, Zhang Y Y. On the maxima of heterogeneous gamma variables with different shape and scale parameters[J]. Metrika, 2014, 77(6): 811-836.
[7] Balakrishnan N, Haidari A, Masoumifard K. Stochastic comparisons of series and parallel systems with generalized exponential components[J]. IEEE Trans. Reliab., 2015, 64(1): 333-348.
[8] Slepian D. The one-sided barrier problem for Gaussian noise[J]. Bell System Tech. J., 1962, 41(2): 463-501.
[9] Huang Y J, Zhang X S. On stochastic orders for order statistics from normal distributions[J]. Chinese J. Appl. Probab. Statist., 2009, 25(4): 381-388. (in Chinese)
[10] Fang L X, Zhang X S. Slepian's inequality with respect to majorization[J]. Linear Algebra Appl., 2011, 434(4): 1107-1118.
[11] Marshall A W, Olkin I. Inequalities: Theory of Majorization and Its Applications[M]. New York: Academic Press, 1979.
[12] Müller A, Stoyan D. Comparison Methods for Stochastic Models and Risks[M]. New York: Wiley, 2002.