K-ANTITHETIC VARIATES IN MONTE CARLO SIMULATION

ABDELAZIZ NASROALLAH

Abstract. Standard Monte Carlo simulation needs prohibitive time to achieve reasonable estimates of intractable integrals (i.e. multidimensional integrals and/or integrals with complex integrand forms). Several statistical techniques, called variance reduction methods, are used to reduce the simulation time. In this note, we propose a generalization of the well known antithetic variate method. Principally, we propose a K-antithetic variate estimator (KAVE) based on the generation of K correlated uniform variates. Some numerical examples are presented to show the improvement of our proposition.

1. Introduction

The approximation of integrals is one of the major classes of problems that arise in statistical inference. Many situations can be formulated as integration problems. In a statistical context, integration is generally associated with the Bayesian approach [9]. Note that it is not always possible to derive explicit probabilistic models, and even less possible to analytically compute the estimators associated with a given paradigm (maximum likelihood, Bayes, method of moments, etc.). Moreover, other statistical methods, such as bootstrap methods [3], although unrelated to the Bayesian approach, may involve the integration of the empirical cumulative distribution function (cdf). Similarly, alternatives to the standard likelihood, such as the marginal likelihood in [1] for example, may require integrating out the nuisance parameters. Also, several optimization problems in statistical inference use numerical integration steps, such as the EM algorithm in [6] and [2]. Here we are concerned with simulation-based integration methods that use the generation of random variables. Such methods are generally known as Monte Carlo simulation methods.

Received: 27 December 2008. Revised version: 16 January 2009.

Basically, we are interested in the numerical approximation of intractable integrals (i.e. integrals in the high-dimensional case and/or with complex integrand form). It is well known that a standard Monte Carlo simulation of such integrals needs prohibitive time to achieve reasonable estimates. To reduce the simulation time, there are many statistical techniques, called variance reduction methods [8], [10], [5], [4], [9], [11], [7]. The variance is linked to the simulation time via the sample size (the relation can be seen in the confidence interval expression). Our contribution, in numerical integration, is a generalized form of the known antithetic variate estimator [4]. Principally, we propose a K-antithetic variate estimator based on K negatively correlated uniform variates. To generate such variates, the proposed algorithm, namely KAVE, uses normal vectors on $\mathbb{R}^K$. The improvement and efficiency of KAVE are tested on four numerical examples.

The paper is organized as follows. In Section 2, we present standard Monte Carlo integration. In Section 3, we present the antithetic variate method. In Section 4, we present the proposed estimator, which generalizes the antithetic case; in the same section we give a proof of the variance reduction achieved with respect to the standard case, and we end the section by giving our generation algorithm. Finally, in Section 5, we give simulation results to confirm the improvement of our proposition.

2. Standard Monte Carlo integration

Let $X$ be a real random variable with density function $f(x)\,\mathbf{1}_{[a,b]}(x)$ ($\mathbf{1}_A$ is the indicator function of the set $A$). We denote the expectation and variance operators by $\mathbb{E}[\cdot]$ and $\mathrm{var}(\cdot)$ respectively. We consider a function $g$, integrable with respect to the measure $f(x)\,dx$, and we suppose that $g$ is monotone.

A presentation of standard Monte Carlo integration is easily accomplished by looking at the generic problem of evaluating the integral

(1)  $\theta = \mathbb{E}[g(X)] = \int_a^b g(x)\,f(x)\,dx.$

Using an independent and identically distributed (i.i.d.) sample $(X_1, \dots, X_n)$ generated from the density $f$, it is natural to propose the following empirical average to approximate $\theta$:

$\theta_n = \frac{1}{n}\sum_{i=1}^{n} g(X_i),$

since $\theta_n$ converges almost surely to $\mathbb{E}[g(X)]$ by the Strong Law of Large Numbers. Moreover, the speed of convergence of $\theta_n$ can be assessed, and convergence tests and confidence bounds on the approximation of $\theta$ can be constructed.

Remarks 1. Standard Monte Carlo integration in the multivariate case can be accomplished in a similar way. The use of the Monte Carlo method for approximating any quantity in integral form is possible, since the integral can be transformed into the expectation of a random variable. For simplicity and without loss of generality, we focus on the basic example $\theta = \int_0^1 g(x) f(x)\,dx$, where $f$ is the density function of an absolutely continuous real random variable $X$.

3. Standard antithetic variates

Although standard Monte Carlo simulation leads to i.i.d. samples, it may be preferable to generate samples from correlated variables when estimating an integral, as they may reduce the variance of the corresponding estimator. A first setting where the generation of an independent sample is less desirable corresponds to the estimation of an integral $\theta$ written as a sum (resp. difference) of two quantities which are close in value.

Let $\theta_1 = \int_0^1 g_1(x) f_1(x)\,dx$ and $\theta_2 = \int_0^1 g_2(x) f_2(x)\,dx$ be two such quantities, and let $\theta_{1n}$ and $\theta_{2n}$ be independent estimators of $\theta_1$ and $\theta_2$ respectively. The variance of $\theta_n = \theta_{1n} + \theta_{2n}$ (resp. $\theta_{1n} - \theta_{2n}$) is $\mathrm{var}(\theta_{1n}) + \mathrm{var}(\theta_{2n})$, which may be too large to support a fine enough analysis of the sum (resp. difference). However, if $\theta_{1n}$ and $\theta_{2n}$ are negatively (resp. positively) correlated, the variance becomes $\mathrm{var}(\theta_{1n}) + \mathrm{var}(\theta_{2n}) + 2\,\mathrm{cov}(\theta_{1n},\theta_{2n})$ (resp. $\mathrm{var}(\theta_{1n}) + \mathrm{var}(\theta_{2n}) - 2\,\mathrm{cov}(\theta_{1n},\theta_{2n})$), which is smaller and may greatly improve the analysis of the sum (resp. difference).

The philosophy of the antithetic variates algorithm, as generally presented in the literature, is as follows:

$\theta = \int_0^1 g(x)\,dx = \frac{1}{2}\int_0^1 g(x)\,dx + \frac{1}{2}\int_0^1 g(1-x)\,dx.$

If $(U_1, \dots, U_n)$ is an i.i.d. sample of uniform variables on $[0,1]$, then a standard estimator of $\theta$ related to the antithetic variates is $\theta_n^{av} = \theta_{1n} + \theta_{2n}$, where

$\theta_{1n} = \frac{1}{2n}\sum_{i=1}^{n} g(U_i) \quad\text{and}\quad \theta_{2n} = \frac{1}{2n}\sum_{i=1}^{n} g(1-U_i).$

We have

$\mathrm{var}(\theta_n^{av}) = \mathrm{var}(\theta_{1n}) + \mathrm{var}(\theta_{2n}) + 2\,\mathrm{cov}(\theta_{1n},\theta_{2n}).$

Since $\mathrm{cov}(U_i, 1-U_i) < 0$, $g$ is monotone and $\mathrm{var}(\theta_{1n}) = \mathrm{var}(\theta_{2n}) = \frac{1}{4}\mathrm{var}(\theta_n)$, we get $\mathrm{var}(\theta_n^{av}) < \mathrm{var}(\theta_n)$. So the antithetic variates technique reduces the variance of the standard estimator of $\theta$.

Remark 1. $\theta_n^{av}$ and $\theta_n$ are both unbiased and converge in probability to $\theta$.

In the following section, we propose a generalization of the antithetic variate method.

4. K-antithetic variates

Let $n$ and $K$ be positive integers and let $S_n(K) := \bigcup_{i=1}^{n} \{U_1^i, \dots, U_K^i\}$ be a sample of $nK$ uniform random variables on $]0,1[$ such that, for all $i \in \{1, 2, \dots, n\}$, $\mathrm{cov}(U_k^i, U_j^i) < 0$ for $j \neq k$ and $\mathrm{cov}(U_k^i, U_j^l) = 0$ for $i \neq l$. Now, for a fixed $n$, let $(S_n(K))_{K \in \mathbb{N}}$ be the sequence of samples defined by $S_n(K+1) = S_n(K) \cup \{U_{K+1}^i,\ i = 1, \dots, n\}$, and consider the estimator $\theta_n(K)$ defined on the sample $S_n(K)$ by

(2)  $\theta_n(K) = \frac{1}{nK}\sum_{i=1}^{n}\sum_{j=1}^{K} g(U_j^i).$

This estimator is a generalized form of the standard estimator $\theta_n^{av}$, obtained by writing $\theta$ in the form

$\theta = \int_0^1 g(x)\,dx = \frac{1}{K}\sum_{k=1}^{K}\left[\int_0^1 g(x)\,dx\right].$

Proposition 4.1. For fixed $n$, we have $\mathrm{var}(\theta_n(K+1)) \le \mathrm{var}(\theta_n(K))$, $K = 1, 2, 3, \dots$, where $K = 1$ and $K = 2$ correspond to the standard Monte Carlo and the standard antithetic variates cases respectively.

Proof. Let us define $a = \mathrm{var}(g(U_1^1))$ and $\alpha_K = 2\sum_{i=1}^{n}\sum_{j=1}^{K} \mathrm{cov}(g(U_j^i), g(U_{K+1}^i))$. By a straightforward calculation, we obtain

$\mathrm{var}(\theta_n(K)) = \frac{a}{nK} + \frac{1}{(nK)^2}\sum_{k=1}^{K-1}\alpha_k, \qquad K \ge 2.$

Let us write $V_n(K) = \mathrm{var}(\theta_n(K+1)) - \mathrm{var}(\theta_n(K))$. We have

$(nK(K+1))^2\, V_n(K) = -nK(K+1)\,a + K^2\alpha_K - (2K+1)\sum_{k=1}^{K-1}\alpha_k.$

Using Schwarz's inequality, we have $|\alpha_k| \le 2nka$; combining this bound with the identity above gives the announced inequality.

To simulate $\theta_n(K)$, we need to generate $K$ correlated uniform random variables. Such a procedure is based on the following lemma.

Lemma 4.2. Let $Z_1$ and $Z_2$ be two standard Gaussian random variables. Then

(3)  $\mathrm{cov}(\Phi(Z_1), \Phi(Z_2)) = \frac{1}{2\pi}\arcsin\!\left(\frac{\mathrm{cov}(Z_1, Z_2)}{2}\right),$

where $\Phi$ is the cdf of the standard Gaussian law $N(0,1)$. The proof of this lemma is given in the appendix.

Now, for $i = 1, \dots, n$, let $(X_1^{(i)}, \dots, X_K^{(i)})$ be a Gaussian random vector $N_K(0, \Sigma^{(i)})$, where $\Sigma^{(i)} = (\sigma_{kj}^{(i)})_{1 \le k, j \le K}$ is a covariance matrix such that $\sigma_{kj}^{(i)} < 0$ for all $1 \le k \neq j \le K$. For $1 \le k \le K$, $Z_k^{(i)} := X_k^{(i)}/(\sigma_{kk}^{(i)})^{1/2}$ is a standard Gaussian variable $N(0,1)$, for all $i = 1, \dots, n$. Now, for each $i = 1, \dots, n$, applying Lemma 4.2 to each pair $(Z_k^{(i)}, Z_j^{(i)})$, $1 \le k, j \le K$, we get a sample $(\Phi(Z_1^{(i)}), \dots, \Phi(Z_K^{(i)}))$ of $K$ negatively correlated uniform random variables.

4.1. K-Antithetic Variables Algorithm (KAVE).
step 0: $i = 0$; $\theta = 0$.
step 1: $i := i + 1$; choose $\Sigma^{(i)}$ such that $\sigma_{kj}^{(i)} < 0$ for all $1 \le k \neq j \le K$.
step 2: generate a random vector $(X_1^{(i)}, \dots, X_K^{(i)})$ from $N_K(0, \Sigma^{(i)})$ and take $(U_1^{(i)}, \dots, U_K^{(i)}) := (\Phi(Z_1^{(i)}), \dots, \Phi(Z_K^{(i)}))$, where $Z_k^{(i)} = X_k^{(i)}/(\sigma_{kk}^{(i)})^{1/2}$.
step 3: compute $a = \sum_{k=1}^{K} g(U_k^{(i)})$.
step 4: $\theta = \theta + a$; if $i < n$ then go to step 1.
step 5: $\theta_n(K) = \theta/(nK)$ is the estimator.

Remark 2. The statistical table of $N(0,1)$ is used to get $\Phi(\cdot)$. (An illustrative code sketch of this procedure is given below, just before the numerical examples.)

5. Numerical examples

In the present section, we present four numerical examples to show the effectiveness of our algorithm.

Since the number of arithmetic operations and the number of generated random variables differ from one situation to another, we compare the KAVE algorithm and the standard antithetic variates algorithm using CPU time. For each example, we give the estimate, accompanied by the estimated standard deviation between brackets, for different CPU times and different values of $K$. For each example, the Monte Carlo simulation result is summarized in a table. We denote by $t_n(K)$ the estimate of $\theta_n(K)$, where $n$ is fixed by the CPU time, and by $s_K$ the standard deviation corresponding to $t_n(K)$. For all examples, we consider $K = 1, 2, 3, 5$ and $7$. The case $K = 1$ corresponds to standard Monte Carlo simulation and $K = 2$ is similar to the standard antithetic variates. Simulations are carried out on a Pentium 4 computer.

In all these examples, KAVE works well and outperforms the standard cases. It gives good estimates with reduced standard deviation. Its performance increases with $K$ for the same CPU time. So the proposed generalization of the standard antithetic variates method, KAVE, can be a competitive algorithm in the Monte Carlo integration context.
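To make the procedure of Section 4.1 concrete, the following minimal sketch implements a KAVE-type estimator in Python, assuming NumPy and SciPy are available. The single equicorrelated covariance matrix with off-diagonal entry $\rho = -0.95/(K-1)$, and the names kave and g, are illustrative choices of ours (the paper allows any $\Sigma^{(i)}$ with negative off-diagonal entries, possibly varying with $i$); this is a sketch, not the author's original code.

import numpy as np
from scipy.stats import norm

def kave(g, n, K, rho=None, rng=None):
    """K-antithetic variate estimate of theta = int_0^1 g(u) du (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    # The most negative admissible equicorrelation is -1/(K-1); stay slightly inside it.
    if rho is None:
        rho = -0.95 / (K - 1) if K > 1 else 0.0
    # Equicorrelated covariance matrix: sigma_kk = 1, sigma_kj = rho < 0 for k != j.
    Sigma = (1.0 - rho) * np.eye(K) + rho * np.ones((K, K))
    # n Gaussian vectors in R^K; since sigma_kk = 1, no standardization step is needed.
    X = rng.multivariate_normal(np.zeros(K), Sigma, size=n)
    # Phi maps each coordinate to a Uniform(0,1) variate; by Lemma 4.2 the
    # covariance of two such coordinates is arcsin(rho/2)/(2*pi) < 0.
    U = norm.cdf(X)
    # theta_n(K) = (1/(nK)) * sum_i sum_k g(U_k^(i)).
    return float(np.mean(g(U)))

# Example 1 of Section 5: g(u) = u, exact value 0.5.
print(kave(lambda u: u, n=10_000, K=5))

For $K = 1$ this reduces to standard Monte Carlo, and for $K = 2$ with $\rho$ close to $-1$ one has $\Phi(Z_2^{(i)}) \approx 1 - \Phi(Z_1^{(i)})$, which essentially recovers the classical antithetic pair.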

Example 1: $\theta = \int_0^1 u\,du$ (the exact value is $0.5$)

CPU | t_n(1) (s_1)  | t_n(2) (s_2) | t_n(3) (s_3)  | t_n(5) (s_5)  | t_n(7) (s_7)
 8  | .53 (.2889)   | .5 (.237)    | .57 (.1661)   | .4999 (.128)  | .56 (.181)
16  | .55 (.2886)   | .5 (.246)    | .4999 (.1643) | .4999 (.1276) | .55 (.177)
24  | .4971 (.2882) | .5 (.239)    | .5 (.1661)    | .54 (.13)     | .5 (.177)

Table 1: estimate of $\theta_n(K)$ with its standard deviation between brackets, for different values of $K$ and different CPU times.

Example 2: $\theta = \int_0^1 \log(y/2)\,dy$ (the exact value is $\theta = -1.6931$)

CPU | t_n(1) (s_1)    | t_n(2) (s_2)    | t_n(3) (s_3)    | t_n(5) (s_5)    | t_n(7) (s_7)
 8  | -1.6952 (1.119) | -1.6931 (.756)  | -1.6996 (.5842) | -1.6962 (.441)  | -1.716 (.3784)
16  | -1.691 (.9982)  | -1.6919 (.78)   | -1.718 (.5878)  | -1.715 (.45)    | -1.6928 (.3698)
24  | -1.6957 (1.62)  | -1.6932 (.7165) | -1.6837 (.5851) | -1.6912 (.4511) | -1.6953 (.3754)

Table 2: estimate of $\theta_n(K)$ with its standard deviation between brackets, for different values of $K$ and different CPU times.

Example 3: $\theta = \int_0^1\!\int_0^1\!\int_0^1 u_1 u_2 u_3\,du_1\,du_2\,du_3$ (the exact value is $\theta = 0.125$)

CPU | t_n(1) (s_1)  | t_n(2) (s_2) | t_n(3) (s_3)  | t_n(5) (s_5) | t_n(7) (s_7)
 3  | .126 (.1479)  | .125 (.59)   | .1251 (.266)  | .1249 (.146) | .1251 (.132)
 4  | .1241 (.1452) | .125 (.59)   | .1257 (.264)  | .1252 (.135) | .1291 (.19)
 5  | .1256 (.1473) | .125 (.59)   | .1255 (.293)  | .127 (.125)  | .1276 (.56)

Table 3: estimate of $\theta_n(K)$ with its standard deviation between brackets, for different values of $K$ and different CPU times.

Example 4: $\theta = \int_0^1\!\cdots\!\int_0^1 u_1 u_2 u_3 u_4 u_5\,du_1\cdots du_5$ (the exact value is $\theta = 0.0312$)

CPU | t_n(1) (s_1 x 10^-1) | t_n(2) (s_2 x 10^-2) | t_n(3) (s_3 x 10^-2) | t_n(5) (s_5 x 10^-3) | t_n(7) (s_7 x 10^-3)
 4  | .314 (.56)           | .313 (.98)           | .314 (.39)           | .313 (.71)           | .319 (.45)
 5  | .315 (.57)           | .313 (.99)           | .311 (.31)           | .312 (1.22)          | .38 (.54)
 6  | .313 (.56)           | .313 (.98)           | .313 (.35)           | .314 (.97)           | .311 (.36)

Table 4: estimate of $\theta_n(K)$ with its standard deviation between brackets, for different values of $K$ and different CPU times.
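For reference, the exact values quoted in the four examples follow from elementary calculus; the short derivation below is a check we add for completeness, not part of the original tables:

\[
\int_0^1 u\,du = \tfrac{1}{2} = 0.5, \qquad
\int_0^1 \log\!\frac{y}{2}\,dy = \log\tfrac{1}{2} + \int_0^1 \log y\,dy = -0.6931 - 1 = -1.6931,
\]
\[
\int_{[0,1]^3} u_1 u_2 u_3\,du_1\,du_2\,du_3 = \Big(\tfrac{1}{2}\Big)^{3} = 0.125, \qquad
\int_{[0,1]^5} u_1 u_2 u_3 u_4 u_5\,du_1\cdots du_5 = \Big(\tfrac{1}{2}\Big)^{5} = 0.03125 \approx 0.0312.
\]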

References

[1] Barndorff-Nielsen, O. and Cox, D. R. (1994). Inference and Asymptotics. Chapman and Hall, London.
[2] Dempster, A. P., Laird, N. M. and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm (with discussion). J. Roy. Statist. Soc. Ser. B 39, 1-38.
[3] Efron, B. and Tibshirani, R. J. (1994). An Introduction to the Bootstrap. Chapman and Hall, London.
[4] Hammersley, J. M. and Handscomb, D. C. (1964). Monte Carlo Methods. John Wiley, New York.
[5] Law, A. M. and Kelton, W. D. (1991). Simulation Modeling and Analysis. McGraw-Hill, New York.
[6] McLachlan, G. J. and Krishnan, T. (1997). The EM Algorithm and Extensions. John Wiley and Sons, New York.
[7] M. S. (2001). On the use of antithetic variates in particle transport problems. Annals of Nuclear Energy, Vol. 28, Num. 4, pp. 297-332.
[8] Nasroallah, A. (1991). Simulation de chaînes de Markov et techniques de réduction de la variance. Thèse de Doctorat, IRISA, Rennes.
[9] Robert, C. P. and Casella, G. (1999). Monte Carlo Statistical Methods. Springer-Verlag, New York.
[10] Rubinstein, R. Y. (1981). Simulation and the Monte Carlo Method. John Wiley and Sons, New York.
[11] Tuffin, B. (1996). Variance Reductions Applied to Product-Form Multi-Class Networks; Antithetic Variates and Low Discrepancy Sequences. Pub. Int. 15 (Model), IRISA, Rennes.

Appendix A. Proof of Lemma 4.2

Let $X$, $Y$ and $Z$ be three independent $d$-dimensional centered Gaussian vectors with covariance matrix $\Sigma = (\sigma_{ij})_{1 \le i, j \le d}$. The proof needs the following auxiliary result: if $(X_1, Y_1)$, $(X_2, Y_2)$ and $(X_3, Y_3)$ are i.i.d. Gaussian couples, then $(X_1, Y_1)$ and $(X_2, Y_3)$ are independent.

Now return to the proof of our principal lemma. We denote by $\Phi_i$ the cdf of $X_i$ and we compute $\mathrm{cov}(\Phi_i(X_i), \Phi_j(X_j))$ for $i \neq j$. Let $\alpha_{ij} = \mathbb{E}[\Phi_i(X_i)\Phi_j(X_j)]$. We have

$\alpha_{ij} = \int\!\!\int \Phi_i(s)\,\Phi_j(t)\,dH_{X_iX_j}(s,t),$

where $H_{X_iX_j}(s,t)$ is the cdf of $(X_i, X_j)$. Now $Y_i$ has the same distribution as $X_i$ and $Z_j$ has the same distribution as $X_j$, for $i, j = 1, \dots, d$. So

$\alpha_{ij} = \int\!\!\int \mathbb{P}(Y_i < s)\,\mathbb{P}(Z_j < t)\,dH_{X_iX_j}(s,t).$

Since $Y$ and $Z$ are independent, one can easily prove that $Y_i$ and $Z_j$, for $i \neq j$, are independent. So

$\alpha_{ij} = \int\!\!\int \mathbb{P}(Y_i < s,\, Z_j < t)\,dH_{X_iX_j}(s,t).$

Now, by the above auxiliary result, we get

$\alpha_{ij} = \int\!\!\int \mathbb{P}(Y_i < x,\, Z_j < y \mid X_i = x,\, X_j = y)\,dH_{X_iX_j}(x,y) = \mathbb{P}(Y_i < X_i,\, Z_j < X_j).$

Let $V = X - Y$ and $W = X - Z$; one can easily verify that $(V_i, W_j)$ is $N_2(0, \Sigma^{(ij)})$, where

$\Sigma^{(ij)} = \begin{pmatrix} 2\sigma_{ii} & \sqrt{\sigma_{ii}\sigma_{jj}}\,\rho_{ij} \\ \sqrt{\sigma_{ii}\sigma_{jj}}\,\rho_{ij} & 2\sigma_{jj} \end{pmatrix}
\quad\text{and}\quad \sqrt{\sigma_{ii}\sigma_{jj}}\,\rho_{ij} = \sigma_{ij} = \mathrm{cov}(X_i, X_j).$

So $\alpha_{ij} = \mathbb{P}(V_i > 0,\, W_j > 0)$.

Now, let $A$ and $B$ be two independent centered Gaussian random variables with the same variance 1; then the vector $(A, B)$ is $N_2(0, I_2)$. Let

$N = \begin{pmatrix} \sqrt{2\sigma_{ii}}\,\sqrt{1 - \rho_{ij}^2/4} & \sqrt{\sigma_{ii}/2}\,\rho_{ij} \\ 0 & \sqrt{2\sigma_{jj}} \end{pmatrix}.$

We have $NN' = \Sigma^{(ij)}$, where the prime stands for transpose. It is easy to see that $(V_i, W_j)$ and $N(A, B)'$ have the same law. So

$\alpha_{ij} = \mathbb{P}\!\left(\sqrt{2\sigma_{ii}}\,\sqrt{1 - \rho_{ij}^2/4}\,A + \sqrt{\sigma_{ii}/2}\,\rho_{ij}\,B > 0,\ \sqrt{2\sigma_{jj}}\,B > 0\right)
= \mathbb{P}\!\left(\sqrt{1 - \rho_{ij}^2/4}\,A + \frac{\rho_{ij}}{2}\,B > 0,\ B > 0\right).$

Now consider the transformation $T$ given by $T: (A, B) \mapsto (R\cos\Theta,\, R\sin\Theta)$, where $R = \sqrt{A^2 + B^2}$ and $\Theta$ is uniform on $[0, 2\pi]$. Using this transformation, we get

$\alpha_{ij} = \mathbb{P}\!\left(\sqrt{1 - \rho_{ij}^2/4}\,R\cos\Theta + \frac{\rho_{ij}}{2}\,R\sin\Theta > 0,\ R\sin\Theta > 0\right).$

Since $R > 0$, then

$\alpha_{ij} = \mathbb{P}\!\left(\sqrt{1 - \rho_{ij}^2/4}\,\cos\Theta + \frac{\rho_{ij}}{2}\,\sin\Theta > 0,\ \sin\Theta > 0\right)
= \mathbb{P}\!\left(\cos\!\left[\Theta - \arcsin\!\left(\frac{\rho_{ij}}{2}\right)\right] > 0,\ \sin\Theta > 0\right)$

$= \mathbb{P}\!\left(\Theta \in \left]\arcsin\!\left(\frac{\rho_{ij}}{2}\right) - \frac{\pi}{2},\ \arcsin\!\left(\frac{\rho_{ij}}{2}\right) + \frac{\pi}{2}\right[\ \cap\ ]0, \pi[\right)
= \mathbb{P}\!\left(\Theta \in \left]0,\ \arcsin\!\left(\frac{\rho_{ij}}{2}\right) + \frac{\pi}{2}\right[\right).$

Since $\Theta$ is uniform on $[0, 2\pi]$, then

$\alpha_{ij} = \frac{\arcsin(\rho_{ij}/2) + \pi/2}{2\pi}.$

Finally, since the $\Phi_i(X_i)$, $i = 1, 2, \dots, d$, are uniform, then

$\mathrm{cov}(\Phi_i(X_i), \Phi_j(X_j)) = \alpha_{ij} - \frac{1}{4} = \frac{\arcsin(\rho_{ij}/2)}{2\pi}.$

E-mail address: nasroallah@ucam.ac.ma
Cadi Ayyad University, Faculty of Sciences Semlalia, Marrakesh.
This research is supported by Laboratoire Ibn-al-Banna de Mathématiques et Applications (LIBMA).
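As a closing illustration (ours, not part of the paper), the identity of Lemma 4.2 can also be checked numerically. The sketch below assumes NumPy and SciPy, and the correlation value $\rho = -0.7$ is an arbitrary choice.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
rho = -0.7                                     # arbitrary correlation for the check
cov = np.array([[1.0, rho], [rho, 1.0]])
Z = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000)
U = norm.cdf(Z)                                # Phi(Z1), Phi(Z2) are Uniform(0,1)
empirical = np.cov(U[:, 0], U[:, 1])[0, 1]     # Monte Carlo estimate of cov(Phi(Z1), Phi(Z2))
theoretical = np.arcsin(rho / 2.0) / (2.0 * np.pi)
print(empirical, theoretical)                  # the two values should agree closely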