UNIVERSITY OF NORTH CAROLINA
Department of Statistics
Chapel Hill, N. C.

ON A DISTRIBUTION OF A SUM OF IDENTICALLY DISTRIBUTED CORRELATED GAMMA VARIABLES

by

S. Kotz and John W. Adams

June 1963

Grant No. AF-AFOSR-62-169

The distribution of a sum of identically distributed Gamma variables, correlated according to an "exponential" autocorrelation law $\rho_{kj} = \rho^{|k-j|}$ ($k, j = 1, \dots, n$), where $\rho_{ij}$ is the correlation coefficient between the $i$-th and $j$-th random variables and $0 < \rho < 1$ is a given number, is derived. This is performed by determining the characteristic roots of the appropriate variance-covariance matrix, using a special method, and by applying Robbins' and Pitman's result on mixtures of distributions to the case of Gamma variables. An "approximate" distribution of the sum of these variables, under the assumption that the sum itself is a Gamma variable, is given. A comparison between the "exact" and the "approximate" distributions for certain values of the correlation coefficient, the number of variables in the sum, and the values of the parameters of the initial distributions is presented.

This research was supported by the Mathematics Division of the Air Force Office of Scientific Research.

Institute of Statistics Mimeo Series No. 367

ON A DISTRIBUTION OF A SUM OF IDENTICALLY DISTRIBUTED CORRELATED GAMMA VARIABLES

by Samuel Kotz and John W. Adams
University of North Carolina

1. Introduction and general remarks.

The distribution of a sum of correlated Gamma variables has significant interest and many applications in engineering, insurance and other areas. Besides the applications mentioned in [1] and [4], we should like to note the usefulness of this distribution in problems connected with the representation of precipitation amounts. Weekly or monthly, etc., precipitation amounts are regarded as sums of daily amounts, the latter being well fitted by a Gamma distribution [2]. A particular solution for the case of a constant correlation between each pair of variables in the sum is presented in [4]. In this note we shall extend this result to the case of an "exponential autocorrelation scheme" between the variables, where each one of the variables has the marginal density function

$$ f(x) = \frac{1}{\Gamma(r)\,Q^r}\, e^{-x/Q}\, x^{r-1}, \quad x > 0; \qquad f(x) = 0, \quad x \le 0. \tag{1.1} $$

In meteorological applications it is sometimes assumed that the sum random variable is itself a Gamma variable, and an "approximate" Gamma distribution whose first two moments are identical with those of the "exact" distribution of the sum is used [2] (see also, for example, [3] for an earlier application in another field). In the case of identically distributed random variables and of the stationary exponential correlation model (namely, when the correlation coefficient between the $i$-th and the $j$-th random variable in the sum is given by $\rho^{|i-j|}$ for all $i$ and $j$, $\rho$ being a given positive constant), it is easy to verify [2] that the appropriate parameters $r_n$ and $Q_n$ of the "approximate" Gamma variable representing the sum are given by

$$ r_n = \frac{n^2 r}{\,n + \dfrac{2\rho}{1-\rho}\left(n - \dfrac{1-\rho^n}{1-\rho}\right)} \tag{1.2} $$

$$ Q_n = \frac{Q}{n}\left[\,n + \frac{2\rho}{1-\rho}\left(n - \frac{1-\rho^n}{1-\rho}\right)\right] \tag{1.3} $$

where $n$ is the number of variables in the sum, and $r$ and $Q$ are the parameters of the initial random variables distributed according to (1.1). Since the first two moments completely determine a Gamma distribution, the "approximate" distribution is unique. The purpose of this note is to compare this "approximate" distribution (which is quite easily computed and applied) with an "exact" distribution of the sum (which is much more difficult to determine). We also note that, similarly to the case treated in [4], the "natural" exact distribution obtained may not be unique, since the correlation scheme, in general, does not determine a multivariate Gamma distribution uniquely, as will be noted in some more detail in the following section.

2. A Joint Characteristic Function of Exponentially Correlated Gamma Variables

Consider the characteristic function $\phi(t_1, t_2, \dots, t_n)$,

$$ \phi(t_1, \dots, t_n) = |I - iQTV|^{-r} \tag{2.1} $$

where $Q$ and $r$ are positive constants, $I$ is the $n \times n$ identity matrix, $T$ is the $n \times n$ diagonal matrix with elements $t_{jj} = t_j$, and $V$ is an arbitrary positive definite matrix. This characteristic function leads to a joint probability density function whose marginals are given by (1.1) and whose matrix of second moments is some positive definite matrix $V^*$, say. In general, a joint density function with marginals given by (1.1) and a given covariance matrix $V^*$ is not uniquely determined (for example, as is known, a multivariate Gamma distribution with constant correlation between the components is not unique [5]). An explicit expression for a joint distribution with a general matrix $V$ is rather complicated, but can be obtained in terms of a series of Laguerre polynomials by applying the derivations given in [5]. We will not present these expressions here, since the main concern of this paper is the validity and applicability of approximations to the distribution of the sum of the component random variables. Following Gurland [4], we shall regard (2.1) as the characteristic function of a natural multivariate analogue of the Gamma distribution.
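As an aside on Section 1, the moment matching behind (1.2) and (1.3) can be checked numerically. The following sketch in modern Python (the function name is ours, not the report's) verifies that the approximating Gamma variable reproduces the exact mean and variance of the sum; the bracketed factor in (1.2)-(1.3) is the sum of all entries of the correlation matrix $\rho^{|i-j|}$.

```python
import numpy as np

def approx_gamma_params(n, rho, r, Q):
    # Moment-matched "approximate" Gamma parameters (1.2)-(1.3) for the
    # sum of n exponentially correlated Gamma(r, Q) variables.
    C = n + (2 * rho / (1 - rho)) * (n - (1 - rho ** n) / (1 - rho))
    return n ** 2 * r / C, Q * C / n          # (r_n, Q_n)

# Check against the exact first two moments of the sum: the mean is n*r*Q,
# and the variance is r*Q**2 times the sum of all correlations rho^{|i-j|}.
n, rho, r, Q = 5, 0.2, 2.0, 0.5
r_n, Q_n = approx_gamma_params(n, rho, r, Q)
i = np.arange(n)
R = rho ** np.abs(np.subtract.outer(i, i))    # correlation matrix rho^{|i-j|}
assert np.isclose(r_n * Q_n, n * r * Q)                    # means agree
assert np.isclose(r_n * Q_n ** 2, r * Q ** 2 * R.sum())    # variances agree
```

Both checks hold identically in $n$, $\rho$, $r$ and $Q$, since $r_n Q_n = nrQ$ and $r_n Q_n^2 = rQ^2 C$ by construction.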

In particular, if the elements of $V$ are given by $v_{ij} = \rho^{|i-j|}$, $i, j = 1, \dots, n$ ($0 < \rho < 1$ a given constant), it is easy to verify by differentiation of (2.1) that the corresponding elements of $V^*$ will be given by

$$ v^*_{ij} = r Q^2 \rho^{2|i-j|}, \quad i, j = 1, \dots, n. \tag{2.2} $$

The characteristic function of the distribution of the sum of the random variables whose joint distribution has the characteristic function given in (2.1) is

$$ \phi(t) = |I - iQtV|^{-r} \tag{2.3} $$

and (2.3) may be expressed in the form

$$ \phi(t) = \prod_{j=1}^{n} (1 - iQ\lambda_j t)^{-r} \tag{2.4} $$

where the $\lambda_j$ are the characteristic roots of the matrix $V$. In Section 4 we shall describe a procedure for the explicit determination of these characteristic roots.

3. The Distribution of the Random Variable Whose Characteristic Function is $\phi(t)$

Robbins proved in [6] that if $\{c_k\}$ is a sequence of constants such that $c_k \ge 0$, $k = 0, 1, \dots$, and $\sum_{k=0}^{\infty} c_k = 1$, and if $\{F_k(\cdot)\}$ is a sequence of distribution functions with corresponding sequence of characteristic functions $\{\phi_k(\cdot)\}$, then (i) $F(\cdot) = \sum_k c_k F_k(\cdot)$ is a distribution function, (ii) the characteristic function $\phi(\cdot)$ of $F(\cdot)$ is given by $\phi(\cdot) = \sum_k c_k \phi_k(\cdot)$, and (iii) for any Borel measurable function $g(\cdot)$ the equality $\int_{\mathcal{X}} g(x)\,dF(x) = \sum_k c_k \int_{\mathcal{X}} g(x)\,dF_k(x)$ holds, where $\mathcal{X}$ is the appropriate space. These results are valid for a general class of spaces, and in particular for any finite dimensional Euclidean space.

The distribution function of the Gamma variable with positive parameters $r$ and $Q$, denoted here by $F_r(\cdot)$, is given by

$$ F_r(x) = \frac{1}{\Gamma(r)\,Q^r} \int_0^x e^{-u/Q}\, u^{r-1}\, du, \quad x > 0; \qquad F_r(x) = 0, \quad x \le 0. \tag{3.1} $$

The characteristic function of this distribution is

$$ \phi^*(t) = (1 - iQt)^{-r}. \tag{3.2} $$

Let $Y$ be the random variable whose characteristic function is given by (2.4). Then $Y$ has the same distribution as the random variable $X$,

$$ X = \sum_{j=1}^{n} \lambda_j X_j \tag{3.3} $$

where $X_j$, $j = 1, 2, \dots, n$, are independent identically distributed random variables, each following a Gamma distribution with parameters $r$ and $Q$ as given in (3.1). Let $\lambda^* = \min_j \lambda_j$. Then

$$ \Pr(Y < y) = \Pr(X/\lambda^* < y/\lambda^*) \tag{3.4} $$

and the characteristic function of the random variable $X/\lambda^*$ is

$$ \psi(t) = \prod_{j=1}^{n} \left(1 - iQ\frac{\lambda_j}{\lambda^*}\, t\right)^{-r}. \tag{3.5} $$

Using the method developed by Pitman and Robbins in [7], we express the right-hand side of (3.5) in the form

$$ \prod_{j=1}^{n} \left(1 - iQ\frac{\lambda_j}{\lambda^*}\, t\right)^{-r} = (1 - iQt)^{-nr} \prod_{j=1}^{n} \left(\frac{\lambda^*}{\lambda_j}\right)^{r} \left[1 - \left(1 - \frac{\lambda^*}{\lambda_j}\right)(1 - iQt)^{-1}\right]^{-r}. \tag{3.6} $$

Since

$$ \left|\left(1 - \frac{\lambda^*}{\lambda_j}\right)(1 - iQt)^{-1}\right| < 1 \tag{3.7} $$

for all real $Q$ and $t$, the binomial theorem may be applied to expand the right-hand side of (3.6). We thus obtain

$$ \prod_{j=1}^{n} \left(1 - iQ\frac{\lambda_j}{\lambda^*}\, t\right)^{-r} = (1 - iQt)^{-nr} \sum_{k=0}^{\infty} c_k (1 - iQt)^{-k} \tag{3.8} $$

where the coefficients $c_k$ are determined by the identity

$$ \prod_{j=1}^{n} \left(\frac{\lambda_j}{\lambda^*}\right)^{-r} \left[1 - \left(1 - \frac{\lambda^*}{\lambda_j}\right) z\right]^{-r} = \sum_{k=0}^{\infty} c_k z^k. \tag{3.9} $$

It follows from (3.6) that all $c_k \ge 0$, and putting $(1 - iQt) = 1$ in (3.6) and (3.8) shows that $\sum_k c_k = 1$. Thus the results of Robbins [6] apply for the $c_k$ determined by (3.9), and hence the right-hand side of (3.8) is the characteristic function of the distribution $F(\cdot)$ given by

$$ F(x) = \sum_{k=0}^{\infty} c_k F_{nr+k}(x) \tag{3.10} $$

where $F_{nr+k}(\cdot)$ ($k = 0, 1, \dots$) are defined by (3.1). Hence

$$ \Pr(Y < y) = \sum_{k=0}^{\infty} c_k F_{nr+k}(y/\lambda^*) = \sum_{k=0}^{\infty} c_k \frac{1}{\Gamma(nr+k)\,Q^{nr+k}} \int_0^{y/\lambda^*} e^{-u/Q}\, u^{nr+k-1}\, du \tag{3.11} $$

and the corresponding probability density function is given by

$$ f_Y(x) = \sum_{k=0}^{\infty} \frac{c_k}{\Gamma(nr+k)\,\lambda^* Q} \left(\frac{x}{\lambda^* Q}\right)^{nr+k-1} e^{-x/(\lambda^* Q)}, \quad x > 0; \qquad f_Y(x) = 0, \quad x \le 0. \tag{3.12} $$

(The term-by-term differentiation is justified by the uniform convergence of the series.) An upper bound on the error of truncation follows at once from (3.10): since $0 \le F_{nr+k}(\cdot) \le 1$ and $c_k \ge 0$, for any integer $p \ge 0$ and any $y$

$$ 0 \le \Pr(Y < y) - \sum_{k=0}^{p} c_k F_{nr+k}(y/\lambda^*) \le 1 - \sum_{k=0}^{p} c_k. \tag{3.13} $$

We should like to point out that the expansions (3.8) and (3.9) are closely related to the general results on the expansion of hypergeometric functions of matrix variables into series of so-called "zonal polynomials" [9], [10].

4. The Characteristic Roots of the Matrix $V$

The inverse of the matrix $V$, whose elements are $v_{ij} = \rho^{|i-j|}$, $i, j = 1, \dots, n$, is given by

$$ V^{-1} = \left((1+\rho^2)I - \rho^2 A - \rho B\right)(1-\rho^2)^{-1} \tag{4.1} $$

where $I$ is the $n \times n$ identity matrix, $A$ is the $n \times n$ matrix with elements $a_{11} = a_{nn} = 1$ and all other elements $0$, and $B$ is the $n \times n$ matrix with $b_{ij} = 1$ for $|i-j| = 1$ and all other elements $0$. Hence the characteristic roots of $V^{-1}$ are $(1-\rho^2)^{-1}\gamma_j$, $j = 1, 2, \dots, n$, where the $\gamma_j$ are those values of $\gamma$ which satisfy the equation

$$ \det\left((1 + \rho^2 - \gamma)I - \rho^2 A - \rho B\right) = 0. \tag{4.2} $$

Let $\Delta_n(\gamma,\rho)$ be the left-hand side of (4.2). Expanding $\Delta_n(\gamma,\rho)$ by its minors we obtain

$$ \Delta_n(\gamma,\rho) = (1-\gamma)^2 D_{n-2}(\gamma,\rho) - 2\rho^2 (1-\gamma) D_{n-3}(\gamma,\rho) + \rho^4 D_{n-4}(\gamma,\rho) \tag{4.3} $$

where $D_n(\gamma,\rho)$ is the determinant of the $n \times n$ matrix $U$,

$$ U = (1 + \rho^2 - \gamma)I - \rho B. \tag{4.4} $$

Expanding $D_n(\gamma,\rho)$ by its minors we get the difference equation

$$ D_n(\gamma,\rho) = (1 + \rho^2 - \gamma) D_{n-1}(\gamma,\rho) - \rho^2 D_{n-2}(\gamma,\rho) \tag{4.5} $$

and, as is well known (see e.g. [8]), this difference equation has the solution

$$ D_n(\gamma,\rho) = \rho^n\, \frac{\sin(n+1)\Theta}{\sin\Theta}, \qquad \text{where } 1 + \rho^2 - \gamma = 2\rho\cos\Theta. \tag{4.6} $$

Substituting (4.6) in (4.3) and equating $\Delta_n(\gamma,\rho)$ to zero we obtain the equation

$$ (1-\gamma)^2 \rho^{n-2}\, \frac{\sin(n-1)\Theta}{\sin\Theta} - 2(1-\gamma)\rho^{n-1}\, \frac{\sin(n-2)\Theta}{\sin\Theta} + \rho^n\, \frac{\sin(n-3)\Theta}{\sin\Theta} = 0. \tag{4.7} $$

Using the complex representation of the sine function we get, after multiplying (4.7) by $\sin\Theta$ (excluding the values of $\Theta$ for which $\sin\Theta = 0$),

$$ e^{i(n-3)\Theta} \left((1-\gamma)e^{i\Theta} - \rho\right)^2 = e^{-i(n-3)\Theta} \left((1-\gamma)e^{-i\Theta} - \rho\right)^2 \tag{4.8} $$

and after a few algebraic manipulations the equation (4.8) reduces to

$$ \frac{\sin\frac{(n+1)\Theta}{2}}{\sin\frac{(n-1)\Theta}{2}} = \rho \tag{4.9} $$

and

$$ \frac{\cos\frac{(n+1)\Theta}{2}}{\cos\frac{(n-1)\Theta}{2}} = \rho. \tag{4.10} $$

If $\Theta_j$, $j = 1, 2, \dots, n$, are the values of $\Theta$ which satisfy one or the other of the equations (4.9) and (4.10), then the characteristic roots of the matrix $V^{-1}$ are given by $\gamma'_j = (1 - 2\rho\cos\Theta_j + \rho^2)(1-\rho^2)^{-1}$, and the characteristic roots of $V$ are $\lambda_j = (\gamma'_j)^{-1} = (1-\rho^2)(1 - 2\rho\cos\Theta_j + \rho^2)^{-1}$, $j = 1, 2, \dots, n$.

Apparently it is not possible to express the solutions of (4.9) and (4.10) as explicit functions of $n$ and $\rho$. However, for $0 \le \rho \le 1$ it can be verified that the solutions of (4.9) are located one in each of the intervals $\left(\frac{(2k-1)\pi}{n}, \frac{2k\pi}{n+1}\right)$, $k = 1, 2, \dots, \frac{n}{2}$ or $\frac{n-1}{2}$ depending on whether $n$ is even or odd, and that the solutions of (4.10) are located one in each of the intervals $\left(\frac{(2k-2)\pi}{n}, \frac{(2k-1)\pi}{n+1}\right)$, $k = 1, 2, \dots, \frac{n}{2}$ or $\frac{n+1}{2}$ depending on whether $n$ is even or odd. Using this observation we can obtain the solutions of these equations. Consider the iterative equations:

$$ \Theta_{j,k} = \frac{2}{n+1}\left(k\pi - \sin^{-1}\!\big(\rho \sin\tfrac{n-1}{2}\Theta_{j-1,k}\big)\right) \ (k \text{ odd}), \qquad \Theta_{j,k} = \frac{2}{n+1}\left(k\pi + \sin^{-1}\!\big(\rho \sin\tfrac{n-1}{2}\Theta_{j-1,k}\big)\right) \ (k \text{ even}) \tag{4.11} $$

where $\Theta_{0,k} \in \left(\frac{(2k-1)\pi}{n}, \frac{2k\pi}{n+1}\right)$, $-\frac{\pi}{2} \le \sin^{-1}\!\big(\rho \sin\tfrac{n-1}{2}\Theta_{j-1,k}\big) \le \frac{\pi}{2}$, $j = 1, 2, \dots$, and $k = 1, 2, \dots, \frac{n}{2}$ $\left(\frac{n-1}{2}\right)$ for $n$ even (odd); and

$$ \Theta_{j,k} = \frac{2}{n+1}\left((k-1)\pi + \cos^{-1}\!\big(\rho \cos\tfrac{n-1}{2}\Theta_{j-1,k}\big)\right) \ (k \text{ odd}), \qquad \Theta_{j,k} = \frac{2}{n+1}\left(k\pi - \cos^{-1}\!\big(\rho \cos\tfrac{n-1}{2}\Theta_{j-1,k}\big)\right) \ (k \text{ even}) \tag{4.12} $$

where $\Theta_{0,k} \in \left(\frac{(2k-2)\pi}{n}, \frac{(2k-1)\pi}{n+1}\right)$, $0 \le \cos^{-1}\!\big(\rho \cos\tfrac{n-1}{2}\Theta_{j-1,k}\big) \le \pi$, $j = 1, 2, \dots$, and $k = 1, 2, \dots, \frac{n}{2}$ $\left(\frac{n+1}{2}\right)$ for $n$ even (odd).

The sequence of solutions obtained using this iterative procedure actually converges to the solutions of equations (4.9) and (4.10). This can easily be seen by means of the following theorem of differential calculus (see e.g. [11]): if $\Theta_0 \in I_0$, where $I_0$ is an interval containing a root of the equation $\Theta = g(\Theta)$, if $g(\Theta)$ is a differentiable function of $\Theta$ throughout $I_0$ and, in addition, $|g'(\Theta)| \le M < 1$ for all $\Theta \in I_0$, then the sequence of iterative solutions of the equation $\Theta_j = g(\Theta_{j-1})$, starting with the point $\Theta_0$, converges to a root of the equation. In the case of the equations (4.11) and (4.12) the conditions of this theorem are clearly satisfied. In fact, $\Theta_{0,k}$ belongs, in each case, to an interval containing exactly one solution of (4.11) or (4.12) respectively. Moreover, the absolute values of the derivatives in (4.11) and (4.12) are

$$ \frac{n-1}{n+1}\, \frac{\rho\,\big|\cos\tfrac{n-1}{2}\Theta\big|}{\sqrt{1 - \rho^2 \sin^2\tfrac{n-1}{2}\Theta}} \tag{4.13} $$

and

$$ \frac{n-1}{n+1}\, \frac{\rho\,\big|\sin\tfrac{n-1}{2}\Theta\big|}{\sqrt{1 - \rho^2 \cos^2\tfrac{n-1}{2}\Theta}} \tag{4.14} $$

which for all $\rho$, $|\rho| < 1$, and for all real $\Theta$ are, in each case, less than $\frac{n-1}{n+1} < 1$.

5. Comparison between the "approximate" and "exact" distributions of sums of identically distributed exponentially correlated Gamma variables

In order to obtain the distribution of a sum of identically distributed Gamma variables correlated according to the stationary exponential law ($\mathrm{corr}(X_i, X_j) = \rho^{|i-j|}$, $i, j = 1, \dots, n$), we replace in (2.1) the matrix $V = \{v_{ij}\} = \{\rho^{|i-j|}\}$ by the matrix $W = \{w_{ij}\} = \{\rho^{|i-j|/2}\}$, so that the matrix $V^*$ which corresponds to $W$ will have the elements $v^*_{ij} = r Q^2 \rho^{|i-j|}$, where $r Q^2$ is the common variance of each of the Gamma variables $X_i$, $i = 1, \dots, n$.

Using the UNIVAC 1105 of the Computation Center of the University of North Carolina, we computed the parameters $\{c_k\}$ and $\lambda^*$ of the distribution given by (3.11) for several different values of $n$, $\rho$, $Q$, $r$ and compared several percentiles of this distribution with the corresponding percentiles of the "approximate" Gamma distribution with the parameters given by (1.2) and (1.3). The results of the computations are presented in Table I. We recall that the first two moments of these two distributions are identical (this fact has been used for checking the correctness of the numerical calculations). Owing to computational difficulties and complications, especially in computing the sequences $\{c_k\}$, we cannot claim higher accuracy than the second significant figure.

Comparing these two types of distributions, we observe that the functional dependence of the parameters on $n$ follows the same pattern in both cases. The parameter $r$ is asymptotically linear in $n$, while $Q$ is independent of $n$. The discrepancies between the corresponding distributions are relatively small and may be considered insignificant for most practical purposes in the cases of small values of $\rho$ ($\rho \le 0.3$), even for values of $n$ as small as 5. The computations also indicate that for higher values of $\rho$ ($\rho = 0.5$ and greater) the relatively large deviations at the lower tail decline sharply with the increase of the values of $n$. The number of the $c_k$ necessary to obtain a cumulative sum equal to 1 to this accuracy also depends on $n$ and $\rho$.

The number of iterative steps necessary to obtain the values of the characteristic roots $\lambda_j$ with accuracy up to the 6-th significant digit also depends strongly on $n$ and $\rho$, and varies for the cases presented below between 5 and 18.

TABLE I

Comparison between the "approximate" and "exact" distributions of sums of identically distributed exponentially correlated Gamma variables

Percentiles of the "exact" distribution corresponding to the following percentiles of the "approximate" distribution:

 n    rho    r      Q       1      5      25     75     95     99
 5    .2     1.92   .5     .62    4.14   24.81  75.73  94.80  98.75
 10   .2     2.04   .5     .70    4.39   25.03  75.61  94.87  98.87
 5    .3     1.94   .5     .47    3.75   24.67  75.93  94.68  98.63
 10   .3     2.00   .5     .56    4.06   25.00  75.81  94.75  98.75
 5    .5     2.00   .5     .28    3.18   24.60  76.24  94.77  98.72
 10   .5     2.04   .5     .35    3.50   24.81  76.2   —      98.76
 15   .5     2.00   .5     .42    3.74   25.10  76.0   —      98.8
 5    .75    3.00   —      .18    3.01   23.8   76.6   —      98.6
 15   .75    3.00   —      .34    3.66   24.6   76.2   —      98.7
 15   .332   .151   28.1   .10    1.40   20.1   —      —      98.70 *

*This case corresponds to a distribution of precipitation amounts at a certain station in ...
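The computation summarized in Table I can be re-created with modern tools. The sketch below is ours, not the report's: function names are hypothetical, a library eigensolver stands in for the iterative scheme of Section 4, and SciPy's Gamma distribution is used for (3.11) and (1.2)-(1.3). Since the initial parameters $r$ and $Q$ behind each table row are not unambiguously recoverable from the table, exact reproduction of its entries is not claimed; the sketch is checked instead against direct simulation of the representation (3.3).

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

def exact_cdf_of_sum(y, n, rho, r, Q, K=500):
    # "Exact" CDF (3.11) of the sum of n Gamma(r, Q) variables with
    # correlations rho^{|i-j|}.  Per Section 5, the relevant matrix is
    # W_ij = rho^{|i-j|/2}; its characteristic roots are obtained here
    # with a library eigensolver instead of the iteration (4.11)-(4.12).
    i = np.arange(n)
    W = np.sqrt(rho) ** np.abs(np.subtract.outer(i, i))
    lam = np.linalg.eigvalsh(W)
    lam_star = lam.min()
    a = 1 - lam_star / lam                 # quantities 1 - lambda*/lambda_j in (3.9)
    c = np.zeros(K)
    c[0] = np.prod((lam_star / lam) ** r)  # c_0
    s = np.array([r * (a ** m).sum() for m in range(1, K)])
    for k in range(1, K):                  # power-series recursion for the c_k
        c[k] = np.dot(s[:k], c[:k][::-1]) / k
    shapes = n * r + np.arange(K)          # mixture of Gamma(nr + k, Q) terms
    return float(np.sum(c * gamma_dist.cdf(y / lam_star, a=shapes, scale=Q)))

def table_entry(n, rho, r, Q, pct):
    # "Exact" percentile corresponding to the given percentile of the
    # moment-matched "approximate" Gamma with parameters (1.2)-(1.3).
    C = n + (2 * rho / (1 - rho)) * (n - (1 - rho ** n) / (1 - rho))
    r_n, Q_n = n ** 2 * r / C, Q * C / n
    y = gamma_dist.ppf(pct / 100, a=r_n, scale=Q_n)
    return 100 * exact_cdf_of_sum(y, n, rho, r, Q)

# Sanity check of (3.11) against direct simulation of (3.3):
# Y has the same law as sum_j lambda_j X_j with X_j iid Gamma(r, Q).
rng = np.random.default_rng(1)
n, rho, r, Q = 5, 0.2, 2.0, 0.5
i = np.arange(n)
lam = np.linalg.eigvalsh(np.sqrt(rho) ** np.abs(np.subtract.outer(i, i)))
sims = (lam * rng.gamma(r, Q, size=(200_000, n))).sum(axis=1)
y = np.quantile(sims, 0.25)
assert abs(exact_cdf_of_sum(y, n, rho, r, Q) - 0.25) < 0.01
```

For small $\rho$, `table_entry(n, rho, r, Q, 25)` lands close to the nominal 25, in line with the small discrepancies the report finds for $\rho \le 0.3$.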

ACKNOWLEDGEMENT

The authors wish to express their thanks to Professor W. Hoeffding and especially to Professor N. L. Johnson for their valuable comments. We are also much indebted to Mr. P. J. Brown, who carried out the programming of all the computations on the UNIVAC computer.

REFERENCES

[1] ..., Studien, 3 (1961), pp. 83-92.

[2] S. Kotz and J. Neumann, "On distributions of precipitation amounts for the periods of increasing lengths," to be published in J. Geoph. Research (1963).

[3] B. L. Welch, "The significance of the difference between two means when the population variances are unequal," Biometrika, Vol. 29 (1938), pp. 350-362.

[4] J. Gurland, Correction to "Distribution of the maximum of the arithmetic mean of correlated random variables," Ann. Math. Stat., Vol. 30 (1959), pp. 1265-1266.

[5] A. S. Krishnamoorthy and M. Parthasarathy, "A multivariate Gamma-type distribution," Ann. Math. Stat., Vol. 22 (1951), pp. 549-557.

[6] H. Robbins, "Mixture of distributions," Ann. Math. Stat., Vol. 19 (1948), pp. 360-369.

[7] H. Robbins and E. J. G. Pitman, "Application of the method of mixtures to quadratic forms in normal variates," Ann. Math. Stat., Vol. 20 (1949), p. 552.

[8] F. B. Hildebrand, Introduction to Numerical Analysis, McGraw-Hill, 1956.

[9] A. T. James, "The distribution of the latent roots of the covariance matrix," Ann. Math. Stat., Vol. 31 (1960), pp. 151-158.

[10] A. T. James, "A comparison of univariate and multivariate distributions," special invited paper presented at the 94th Eastern Regional Meeting of the IMS (May 6, 1963).

[11] L. Brand, Advanced Calculus, Wiley and Sons, 1955.