Near-exact approximations for the likelihood ratio test statistic for testing equality of several variance-covariance matrices


Carlos A. Coelho 1, Filipe J. Marques 1

1 Mathematics Department, Faculty of Sciences and Technology, The New University of Lisbon, Portugal

Abstract

While, on the one hand, the exact distribution of the likelihood ratio test statistic for testing the equality of several variance-covariance matrices, as also happens with several other likelihood ratio test statistics used in Multivariate Statistics, has a non-manageable form, which does not allow for the computation of quantiles even for a small number of variables, on the other hand the asymptotic approximations available do not have the necessary quality for small sample sizes. This makes the development of near-exact approximations to the distribution of this statistic a worthwhile goal. Starting from a factorization of the exact characteristic function of the statistic under study, and by adequately replacing some of the factors, we obtain a near-exact characteristic function which determines the near-exact distribution of the statistic. This near-exact distribution takes the form of either a GNIG (Generalized Near-Integer Gamma) distribution or a mixture of GNIG distributions. The evaluation of the performance of the near-exact and asymptotic distributions developed is done through the use of two measures based on the characteristic function, with which we are able to obtain good upper bounds on the absolute value of the difference between the exact and approximate probability density or cumulative distribution functions. As a reference we use the asymptotic distribution proposed by Box.

Key words: Mixtures, Gamma distribution, Generalized Near-Integer Gamma distribution.

1 This research was financially supported by the Portuguese Foundation for Science and Technology (FCT).

Preprint submitted to Elsevier Science, 1 June 2007

1 Introduction

The results presented in this paper, together with the ones already published on the Wilks Λ statistic (Coelho, 2003, 2004; Alberto and Coelho, 2007; Grilo and Coelho, 2007) and the ones on the sphericity likelihood ratio test statistic (Marques and Coelho, 2007), are intended to be used as the basis for two future works: one on a common approach to the more common likelihood ratio test statistics used in Multivariate Analysis (the Wilks Λ statistic, the statistic to test the equality of several variance-covariance matrices, the statistic to test the equality of several multivariate Normal distributions, and the sphericity test statistic), which will bring out the common traits of these statistics both in terms of their exact and near-exact distributions, and the other on a general approach to two families of generalized sphericity tests, which we may call multi-sample block-scalar and multi-sample block-matrix sphericity tests, their common links and particular cases. In this paper we deal with the likelihood ratio test statistic to test the equality of several variance-covariance matrices, under the assumption of underlying multivariate Normal distributions. We will show how the exact distribution of this statistic may be seen as the distribution of a product of independent Gamma random variables, and then how the characteristic function of the logarithm of this statistic may be factorized into a term that is the characteristic function of a Generalized Integer Gamma distribution (Coelho, 1998) and another term that is the characteristic function of a sum of independent random variables whose exponentials have Beta distributions. From this decomposition we will be able to build near-exact distributions for the logarithm of the likelihood ratio test statistic as well as for the statistic itself.
The closeness of these approximate distributions to the exact distribution will be assessed and measured through the use of two measures, presented in Section 5, derived from inversion formulas, one of which has a link with the Berry-Esseen bound. Let us suppose that we have q independent samples from multivariate Normal distributions N_p(μ_j, Σ_j) (j = 1, ..., q), each sample having size n + 1, and that we want to test the null hypothesis

H0 : Σ_1 = Σ_2 = ... = Σ_q ( = Σ ) (with Σ unspecified). (1)

By using the modified likelihood ratio test statistic we obtain an unbiased test (Bartlett, 1937; Muirhead, 1982, Sec. 8.2). The modified likelihood ratio test statistic (where the sample sizes are replaced by the numbers of degrees of freedom of the Wishart distributions) may be written as (Bartlett, 1937;

Anderson, 1958; Muirhead, 1982)

λ* = ( (qn)^{pqn/2} / n^{pqn/2} ) ∏_{j=1}^{q} |A_j|^{n/2} / |A|^{qn/2} = q^{pqn/2} ∏_{j=1}^{q} |A_j|^{n/2} / |A|^{qn/2} , (2)

where A_j is the matrix of corrected sums of squares and products formed from the j-th sample and A = A_1 + ... + A_q. The h-th moment of λ* in (2) is (Muirhead, 1982)

E[(λ*)^h] = q^{pqnh/2} ∏_{j=1}^{p} [ Γ((n+1-j)/2 + (n/2)h) / Γ((n+1-j)/2) ]^q  Γ((qn+1-j)/2) / Γ((qn+1-j)/2 + (qn/2)h) , (3)

( h > (p - n - 1)/n ).

From (3) we may write the c.f. (characteristic function) of the r.v. (random variable) W = -log(λ*) as

Φ_W(t) = E[e^{itW}] = E[e^{-it log(λ*)}] = E[(λ*)^{-it}]
       = q^{-pqnit/2} ∏_{j=1}^{p} [ Γ((n+1-j)/2 - (n/2)it) / Γ((n+1-j)/2) ]^q  Γ((qn+1-j)/2) / Γ((qn+1-j)/2 - (qn/2)it) . (4)

It will be based on this expression that we will obtain, in Section 3, decompositions of the c.f. of W that will be used to build near-exact distributions for W and λ*.

2 Asymptotic distributions

Box (1949) proposes for the statistic W = -log(λ*) an asymptotic distribution based on an expansion of the form

P(ρW ≤ z) = (1 - ω) P(χ²_g ≤ z) + ω P(χ²_{g+4} ≤ z) + O(n^{-3})

where

g = (1/2) (q - 1) p (p + 1) ,  ρ = 1 - (2p² + 3p - 1)(q + 1) / (6 (p + 1) q n)

and

ω = ( p (p + 1) / (48 ρ²) ) [ (p - 1)(p + 2)(q³ - 1) / (q² n²) - 6 (q - 1)(1 - ρ)² ]

and where P(χ²_g ≤ z) stands for the value of the c.d.f. (cumulative distribution function) of a chi-square r.v. with g degrees of freedom evaluated at z (> 0). However, taking into account that if

X ~ χ²_g ≡ Γ(g/2, 1/2)  then  X/ρ ~ Γ(g/2, ρ/2) ,

where we use the notation X ~ Γ(r, λ) to denote the fact that the r.v. X has p.d.f. (probability density function)

f_X(x) = (λ^r / Γ(r)) e^{-λx} x^{r-1} ,  (x > 0; r, λ > 0)

we may write

P(W ≤ z) ≈ (1 - ω) P( Γ(g/2, ρ/2) ≤ z ) + ω P( Γ(g/2 + 2, ρ/2) ≤ z ) ,

where P(Γ(ν, λ) ≤ z) stands for the value of the c.d.f. of a Γ(ν, λ) distributed r.v. evaluated at z (> 0). We may thus write, for the c.f. of W,

Φ_W(t) ≈ Φ_Box(t) = (1 - ω) (ρ/2)^{g/2} (ρ/2 - it)^{-g/2} + ω (ρ/2)^{2+g/2} (ρ/2 - it)^{-2-g/2} . (5)

Somehow inspired by this asymptotic approximation due to Box, which ultimately approximates the exact c.f. of W by the c.f. of a mixture of Gamma distributions, and also by the introduction of Box's 1949 paper (Box, 1949), where he states that "Although in many cases the exact distributions cannot

be obtained in a form which is of practical use, it is usually possible to obtain the moments, and these may be used to obtain approximations. In some cases, for instance, a suitable power of the likelihood statistic has been found to be distributed approximately in the type I form, and good approximations have been obtained by equating the moments of the likelihood statistic to this curve.", we propose two other mixtures of Gamma distributions, all components with a common rate parameter, which match the first four or six exact moments, to approximate asymptotically the c.f. of W = -log(λ*), for increasing values of n, the number of degrees of freedom of the Wishart distribution of each matrix A_j. These distributions are: the mixture of two Gamma distributions, both with the same rate parameter, with characteristic function

Φ_M2G(t) = Σ_{j=1}^{2} p_{2,j} λ_2^{r_{2,j}} (λ_2 - it)^{-r_{2,j}} , (6)

where p_{2,2} = 1 - p_{2,1}, with p_{2,j}, r_{2,j}, λ_2 > 0, and the mixture of three Gamma distributions, all with the same rate parameter, with characteristic function

Φ_M3G(t) = Σ_{j=1}^{3} p_{3,j} λ_3^{r_{3,j}} (λ_3 - it)^{-r_{3,j}} , (7)

where p_{3,3} = 1 - p_{3,1} - p_{3,2}, with p_{3,j}, r_{3,j}, λ_3 > 0. The parameters in (6) and (7) are respectively obtained by solving the system of equations

Σ_{j=1}^{k} p_{k,j} Γ(r_{k,j} + h) / ( Γ(r_{k,j}) λ_k^h ) = i^{-h} [ d^h Φ_W(t) / dt^h ]_{t=0} ,  h = 1, ..., 2k , (8)

making k = 2 for the parameters in (6) and k = 3 for the parameters in (7). Actually, since, as shown in Lemma 1 ahead, Φ_W(t) is the characteristic function of a sum of independent log Beta random variables, the approximation of the distribution of W by a mixture of Gamma distributions is a well justified procedure since, as Coelho et al. (2006) proved, a log Beta distribution may be represented as an infinite mixture of Exponential distributions, and as such a sum of independent log Beta random variables may be represented as an infinite mixture of sums of independent Exponential distributions, which are particular Generalized Integer Gamma distributions (Coelho, 1998).
Thus, the use of a finite mixture of Gamma distributions to replace a log Beta distribution seems to be quite an adequate simplification.
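Both proposals in this section can be made concrete numerically. The sketch below, which assumes scipy is available, evaluates Box's approximation to P(W ≤ z) in its mixture-of-two-Gammas reading (5), and solves the moment system (8) for a two-component Gamma mixture. The constants g, ρ and ω follow the equal-sample-size reconstruction given above and should be checked against the original source; the starting point and target values in the usage note are purely illustrative.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import poch
from scipy.stats import gamma

def box_cdf(z, p, q, n):
    """Box's asymptotic approximation to P(W <= z), read as the mixture
    of two Gamma laws in (5): common rate rho/2, shapes g/2 and g/2 + 2.
    The constants are the reconstructed equal-sample-size expressions."""
    g = 0.5 * (q - 1) * p * (p + 1)
    rho = 1 - (2 * p**2 + 3 * p - 1) * (q + 1) / (6 * (p + 1) * q * n)
    omega = (p * (p + 1) / (48 * rho**2)) * (
        (p - 1) * (p + 2) * (q**3 - 1) / (q**2 * n**2)
        - 6 * (q - 1) * (1 - rho) ** 2)
    scale = 2 / rho                      # rate rho/2  <->  scale 2/rho
    return ((1 - omega) * gamma.cdf(z, g / 2, scale=scale)
            + omega * gamma.cdf(z, g / 2 + 2, scale=scale))

def fit_two_gamma_mixture(target, x0):
    """Solve the moment system (8) with k = 2: match the first four raw
    moments with p1*Gamma(r1, lam) + (1 - p1)*Gamma(r2, lam)."""
    def eqs(x):
        p1, r1, r2, lam = x
        # poch(r, h) = Gamma(r + h) / Gamma(r)
        return [p1 * poch(r1, h) / lam**h
                + (1 - p1) * poch(r2, h) / lam**h - target[h - 1]
                for h in (1, 2, 3, 4)]
    return fsolve(eqs, x0), eqs
```

For instance, feeding `fit_two_gamma_mixture` the first four moments of a known two-component mixture returns parameters reproducing those moments; since the system (8) may admit several solutions, a sensible starting point matters.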

3 The characteristic function of W = -log(λ*)

In this section we present several results that will enable us to obtain a decomposition of the c.f. of W that will be used to build near-exact distributions for W and λ*. In a first step we show how the characteristic function of W may be split into two parts, one of them being the c.f. of a sum of independent Logbeta r.v.'s and the other the c.f. of a sum of independent Exponential r.v.'s. In a second step we identify the Exponential distributions involved and devise a method to count them, obtaining analytic expressions for the numbers of different Exponential distributions involved.

Lemma 1: The characteristic function of W = -log λ* may be written as

Φ_W(t) = Φ_1(t) Φ_2(t) , (9)

with

Φ_1(t) = ∏_{j=1}^{⌊p/2⌋} ∏_{k=1}^{q} [ Γ(a_j + b*_jk) / Γ(a_j) ] [ Γ(a_j - nit) / Γ(a_j + b*_jk - nit) ]
         × { ∏_{k=1}^{q} [ Γ(a_p + b*_pk) / Γ(a_p) ] [ Γ(a_p - (n/2)it) / Γ(a_p + b*_pk - (n/2)it) ] }^{p*}

and

Φ_2(t) = ∏_{j=1}^{⌊p/2⌋} ∏_{k=1}^{q} [ Γ(a_j + b_jk) / Γ(a_j + b*_jk) ] [ Γ(a_j + b*_jk - nit) / Γ(a_j + b_jk - nit) ]
         × { ∏_{k=1}^{q} [ Γ(a_p + b_pk) / Γ(a_p + b*_pk) ] [ Γ(a_p + b*_pk - (n/2)it) / Γ(a_p + b_pk - (n/2)it) ] }^{p*} ,

where p* represents the remainder of the integer division of p by 2,

a_j = n + 1 - 2j ,  b_jk = 2j - 1 + (k - 2j)/q , (10)

a_p = (n + 1 - p)/2 ,  b_pk = ( (p - 1)(q - 1)/2 + k - 1 ) / q , (11)

b*_jk = ⌊b_jk⌋  and  b*_pk = ⌊b_pk⌋ . (12)

Proof: Using the duplication formula

Γ(2z) = π^{-1/2} 2^{2z-1} Γ(z) Γ(z + 1/2)

we may write the c.f. of W = -log λ* as

Φ_W(t) = q^{-pqnit/2} ∏_{j=1}^{⌊p/2⌋} [ Γ((n+2-2j)/2 - (n/2)it) Γ((n+1-2j)/2 - (n/2)it) / ( Γ((n+2-2j)/2) Γ((n+1-2j)/2) ) ]^q
         × [ Γ((n+1-p)/2 - (n/2)it) / Γ((n+1-p)/2) ]^{q p*} × ∏_{j=1}^{p} Γ((qn+1-j)/2) / Γ((qn+1-j)/2 - (qn/2)it) ,

where the terms of the first product in (4) have been grouped in consecutive pairs, the single unpaired term (present only for odd p) being written out separately. Applying the duplication formula to each pair, the first product becomes ∏_{j=1}^{⌊p/2⌋} [ Γ(n+1-2j - nit) / Γ(n+1-2j) ]^q, the powers of 2 and of π cancelling between numerator and denominator. Then, using the multiplication formula

Γ(mz) = (2π)^{(1-m)/2} m^{mz-1/2} ∏_{k=1}^{m} Γ( z + (k-1)/m )

with m = q on each ratio Γ((qn+1-j)/2) / Γ((qn+1-j)/2 - (qn/2)it) of the last product (pairing these factors in the same way), each such ratio becomes a product of q ratios of Gamma functions with arguments shifted by (k-1)/q (k = 1, ..., q), again with all powers of q and of (2π) cancelling between numerator and denominator; in particular the remaining power of q cancels exactly with q^{-pqnit/2}, considering that, for any p ∈ N, ⌊p/2⌋ + ⌊(p+1)/2⌋ = p.

Taking then a_j, b_jk, a_p and b_pk defined as in (10) and (11), we may write

Φ_W(t) = ∏_{j=1}^{⌊p/2⌋} ∏_{k=1}^{q} [ Γ(a_j + b_jk) / Γ(a_j) ] [ Γ(a_j - nit) / Γ(a_j + b_jk - nit) ]
         × { ∏_{k=1}^{q} [ Γ(a_p + b_pk) / Γ(a_p) ] [ Γ(a_p - (n/2)it) / Γ(a_p + b_pk - (n/2)it) ] }^{p*} ,

which, taking b*_jk and b*_pk given by (12), may be written as

Φ_W(t) = ∏_{j=1}^{⌊p/2⌋} ∏_{k=1}^{q} { [ Γ(a_j + b*_jk) / Γ(a_j) ] [ Γ(a_j - nit) / Γ(a_j + b*_jk - nit) ] [ Γ(a_j + b_jk) / Γ(a_j + b*_jk) ] [ Γ(a_j + b*_jk - nit) / Γ(a_j + b_jk - nit) ] }
         × { ∏_{k=1}^{q} [ Γ(a_p + b*_pk) / Γ(a_p) ] [ Γ(a_p - (n/2)it) / Γ(a_p + b*_pk - (n/2)it) ] [ Γ(a_p + b_pk) / Γ(a_p + b*_pk) ] [ Γ(a_p + b*_pk - (n/2)it) / Γ(a_p + b_pk - (n/2)it) ] }^{p*} ,

which, after some small rearrangements, yields (9).

Let us now take

Φ_1(t) = ∏_{j=1}^{⌊p/2⌋} ∏_{k=1}^{q} [ Γ(a_j + b*_jk) / Γ(a_j) ] [ Γ(a_j - nit) / Γ(a_j + b*_jk - nit) ]  [ = Φ_{1,1}(t) ]
         × { ∏_{k=1}^{q} [ Γ(a_p + b*_pk) / Γ(a_p) ] [ Γ(a_p - (n/2)it) / Γ(a_p + b*_pk - (n/2)it) ] }^{p*}  [ = Φ_{1,2}(t) ] . (13)

We will now show that Φ_1(t) is indeed the c.f. of a sum of independent Exponential r.v.'s and we will identify the different Exponential distributions involved, by adequately decomposing first Φ_{1,1}(t) and then Φ_{1,2}(t).
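Before identifying the Exponential factors, it is useful to be able to evaluate the exact c.f. of W directly; the sketch below does this via log-Gamma functions to avoid overflow. It assumes scipy, and it uses the equal-sample-size moment structure reconstructed in (3)-(4), which should be checked against the original source before any serious use.

```python
import numpy as np
from scipy.special import loggamma

def phi_W(t, p, q, n):
    """Exact c.f. of W = -log(lambda*), obtained by setting h = -i t in
    the h-th moment of lambda* (Eq. (4)); requires n >= p."""
    h = -1j * t
    log_phi = (p * q * n * h / 2) * np.log(q)
    for j in range(1, p + 1):
        a = (n + 1 - j) / 2        # arguments from the single-sample Wishart terms
        b = (q * n + 1 - j) / 2    # arguments from the pooled-sample terms
        log_phi += q * (loggamma(a + n * h / 2) - loggamma(a))
        log_phi += loggamma(b) - loggamma(b + q * n * h / 2)
    return complex(np.exp(log_phi))
```

Since Φ_W is a genuine characteristic function, Φ_W(0) = 1, |Φ_W(t)| ≤ 1 and Φ_W(-t) is the complex conjugate of Φ_W(t); these properties make convenient sanity checks for any reconstruction of (4).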

Lemma 2: We may write

Φ_{1,1}(t) = ∏_{j=1}^{⌊p/2⌋} ∏_{k=1}^{q} ∏_{l = 1 + ⌈(2j-k)/q⌉}^{2j-1} (n - l) (n - l - nit)^{-1} .

Proof: Applying now

Γ(a + n) = Γ(a) ∏_{l=0}^{n-1} (a + l) (14)

and noticing that b*_jk = ⌊2j - 1 + (k - 2j)/q⌋ = 2j - 1 + ⌊(k - 2j)/q⌋, we may write

Φ_{1,1}(t) = ∏_{j=1}^{⌊p/2⌋} ∏_{k=1}^{q} ∏_{l=0}^{b*_jk - 1} (a_j + l) (a_j + l - nit)^{-1}
           = ∏_{j=1}^{⌊p/2⌋} ∏_{k=1}^{q} ∏_{l=0}^{b*_jk - 1} (n + 1 - 2j + l) (n + 1 - 2j + l - nit)^{-1} , (15)

which, by a simple change in the limits of the last product, yields the desired result.

In the next two Lemmas we identify the different Exponential distributions involved in Φ_{1,1}(t) and obtain analytic expressions for their counts. The first Lemma refers to even q and the second to odd q.
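Each factor (n - l)(n - l - nit)^{-1}, once numerator and denominator are divided by n, reads ((n-l)/n) / ((n-l)/n - it), which is the c.f. of an Exponential random variable with rate (n-l)/n; a product of such factors is therefore the c.f. of a sum of independent Exponentials (a Generalized Integer Gamma law when rates repeat). A small simulation makes the point; the rates used are purely illustrative, not the actual counts derived in Lemmas 3 and 4 below.

```python
import numpy as np

rng = np.random.default_rng(0)

# lam/(lam - i t) is the c.f. of an Exponential(lam) random variable,
# so a product of such factors is the c.f. of a sum of independent
# Exponentials.
rates = [0.6, 0.8, 0.9]   # illustrative values of (n - l)/n, not the paper's counts

def product_cf(t, rates):
    out = np.ones_like(t, dtype=complex)
    for lam in rates:
        out = out * (lam / (lam - 1j * t))
    return out

# Empirical c.f. of a simulated sum of independent Exponentials
samples = sum(rng.exponential(1.0 / lam, size=200_000) for lam in rates)
t = np.array([0.3, 1.0, 2.5])
empirical = np.exp(1j * t[:, None] * samples[None, :]).mean(axis=1)
err = np.max(np.abs(empirical - product_cf(t, rates)))  # Monte Carlo error, ~1/sqrt(200000)
```

The agreement between the empirical c.f. of the simulated sum and the product form is limited only by Monte Carlo error, which illustrates the claim preceding Lemma 2.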

10 Lemma 3: For even we may write Φ 1,1 (t) p/ 1 jα+ α+1 (n j) ( p/ j/ ) (n j nit) ( p/ j/ ) (n k) a k+γ k (n k nit) (a k+γ k ) (16) where α p 1, γ k / ((k 1) k/ ) (k 1,..., α + 1) (17) and /4 k 1,..., α a k ( ( p/ α / )) ( p/ α / ) k α + 1. (18) Proof : Since for k 1,..., and j 1,..., p/, ν k j takes the values 0, 1,,..., α, with α p 1, we may write, from the result in Lemma, where p/ Φ 1,1 (t) j 1 l1 k j p/ min(α+1,j 1) l1 k j p/ j 1 lα+ p/ (n l) (n l nit) 1 (n l) (n l nit) 1 j 1 lα+ (n l) (n l nit) min(α+1,j 1) l1 k j (n l) (n l nit) 1 (19) (n l) (n l nit) 1 10

11 p/ j 1 lα+ (n l) (n l nit) p/ 1 jα+ (n j) ( p/ j/ ) (n j nit) ( p/ j/ ) (0) and, for even (for which although / /, we will use the notation / to make it more uniform with the notation for odd ) p/ min(α+1,j 1) l1 k j α 1 (ν+1) / ν0 +ν / (n l) (n l nit) 1 min(α+1,j 1) (n l) ((j ν / ) 1) l1+ν (n l nit) ( ((j ν / ) 1)) min(α+1,j 1) l+ν (n l) (j ν / ) 1 (n l nit) ((j ν / ) 1) p/ +α / min(α+1,j 1) (n l) ((j α / ) 1) l1+α (n l nit) ( ((j α / ) 1)) min(α+1,j 1) l+α (n l) (j α / ) 1 (n l nit) ((j α / ) 1) α 1 ν0 / min(α+1,(j+ν / ) 1) (n l) (j 1) l1+ν (n l nit) ( (j 1)) min(α+1,(j+ν / ) 1) (n l) j 1 l+ν (n l nit) (j 1) p/ α / (n (α+1)) (j 1) (n (α+1) nit) ( (j 1)) 11

12 / / α 1 ν0 α 1 ν0 p/ α / min(α+1,j+ν 1) l+ν (n l) (n l nit) (n (ν+1)) (j 1) (n (ν+1) nit) ( (j 1)) (n (α+1)) (j 1) (n (α+1) nit) ( (j 1)) where, for even, i) / α 1 ν0 min(α+1,j+ν 1) l+ν α+1 k (n l) (n l nit) (n k) γ k (n k nit) γ k, for γ k / ((k 1) k/ ) (k,..., α + 1), as defined in (17), ii) / α 1 ν0 (n (ν+1)) (j 1) (n (ν+1) nit) ( (j 1)) α (n k) a k (n k nit) a k, for a k given in (18), that is, for a k /4 (k 1,..., α), iii) p/ α / (n (α+1)) (j 1) (n (α+1) nit) ( (j 1)) (n (α + 1)) a α+1 (n (α + 1) nit) a α+1, for a α+1 ( ( p/ α / )) ( p/ α / ), as given in (18), so that we may write Φ 1,1 (t) as in (16). Lemma 4: For odd, and once again for α p 1, we may write Φ 1,1 (t) p/ 1 jα+ α+1 (n j) ( p/ j/ ) (n j nit) ( p/ j/ ) (n k) a k+γ k (n k nit) (a k+γ k ) (1) 1

13 where γ k / (k 1) (k 1,..., α + 1) () and +k k 1,..., α a k ( ( )) ( ) (3) p α α p α α+1 k α + 1. Proof : Following the same lines used in the proof of Lemma 3, we obtain once again expressions (19) and (0). Then, taking into account that 1 + ν ν+1 (ν + 1) and that for odd, 1, we have, p/ min(α+1,j 1) l1 k j α 1 (n l) (n l nit) 1 (ν+1) / + 1+ν/ ν0 +ν / + (1+ν)/ min(α+1,j 1) (n l) ((j ν / ) (ν+1)) l1+ν (n l nit) ( ((j ν / ) (ν+1))) min(α+1,j 1) l+ν (n l) (j ν / ) (ν+1) (n l nit) ((j ν / ) (ν+1)) p/ +α / + (1+α)/ min(α+1,j 1) (n l) ((j α / ) (α+1)) l1+α (n l nit) ( ((j α / ) (α+1))) min(α+1,j 1) l+α (n l) (j α / ) (α+1) (n l nit) ((j α / ) (α+1)) α 1 / + 1+ν/ ν0 + (1+ν)/ min(α+1,(j+ν / ) 1) (n l) (j (ν+1)) l1+ν (n l nit) ( (j (ν+1))) 13

14 min(α+1,(j+ν / ) 1) (n l) j (ν+1) l+ν (n l nit) (j (ν+1)) p/ α / ν0 + (1+ν)/ (n (α+1)) (j (α+1)) (n (α+1) nit) ( (j (α+1))) + (1+α)/ α 1 / + 1+ν/ min(α+1,j+ν ν 1) α 1 / + 1+ν/ ν0 + (1+ν)/ p/ α / + (1+α)/ l+ν (n l) (n l nit) (n (ν+1)) j+ν+1 (n (ν+1) nit) ( j+ν+1) (n (α+1)) j+α+1 (n (α+1) nit) ( j+α+1) where, for odd, i) α 1 / + 1+ν/ ν0 + (1+ν)/ min(α+1,j+ν ν 1) l+ν (n l) (n l nit) α 1 ν0 α+1 k / +((ν+1) ) min(α+1,j+ν (ν+1) ) l+ν (n k) γ k (n k nit) γ k, (n l) (n l nit) for γ k / (k 1) (k,..., α + 1), as defined in (), ii) α 1 / + 1+ν/ ν0 + (1+ν)/ (n (ν+1)) j+ν+1 (n (ν+1) nit) ( j+ν+1) α 1 ν0 / +(ν+1) (n (ν+1)) j (ν )+1 ( j (ν )+1) (n (ν+1) nit) α (n k) a k (n k nit) a k, for a k given by (3), that is, for a k +k (k 1,..., α), iii) p/ α / + α+1 (n (α+1)) j+α+1 (n (α+1) nit) ( j+α+1) 14

p/ α / α+1 j α +1 (n (α + 1)) (n (α + 1)) a α+1 (n (α + 1) nit) a α+1, ( j α +1) (n (α + 1) nit) for a α+1 ( ( )) ( ) p α α p α α+1, as given by (23), so that we may write Φ_{1,1}(t) as in (21).

Equalities i), ii) and iii) in the proofs of Lemmas 3 and 4 are quite straightforward to verify and may be proven by induction, but the proofs are not shown here because they are somewhat long and tedious. In the next Corollary we show how we can put together, in one single expression, the results from Lemmas 3 and 4, for both even and odd q.

Corollary 1: For both even and odd q we may write

Φ_{1,1}(t) = ∏_{j=α+2}^{p-1} (n - j)^{q(⌊p/2⌋ - ⌊j/2⌋)} (n - j - nit)^{-q(⌊p/2⌋ - ⌊j/2⌋)} × ∏_{k=1}^{α+1} (n - k)^{a_k + γ_k} (n - k - nit)^{-(a_k + γ_k)} , (24)

where ( ) k γ k (k 1) ((+1) ) (k 1,..., α + 1) (25) and +k k 1,..., α ( ( )) ( a k p α p α (26) ) +( ) ( α p α α + α ) α k α + 1.

from (18) and (23), for any q, even or odd, we may write + k a k, k 1,..., α, while from (17) and (22) we may write (25). Finally, since α + 1 α α and α α + 1 α 4 α even α 1 4 α odd we have ( ( )) ( ) p α p α + 1 α α ( ( )) ( p p α+1 α α ) α+1 α α+1 p α α + ( ( p α )) ( p α + 1 α + ) p α+1 + α α p α α α 4 + α. 4

In the next Lemma we show how Φ_{1,2}(t) may also be seen as the c.f. of a sum of independent Exponential r.v.'s. We identify the different Exponential distributions involved and obtain analytic expressions for their counts.

Lemma 5: We may write, for a_p and b_pk defined in (11) and (12),

Φ 1,(t) Γ(a p + b pk) Γ (a p ) Γ ( a p n it) Γ(a p + b pk n it) ( ) n k γ(α α 1 ) ( n k ) γ(α α 1 ) it n n n p 1 ( ) ( n l n l n n l1+p α 1 step ) it, (27)

17 where ( p 1 γ ) p β, with β, (8) and k 1 + p α, α 1 1 p 1 1, α p + 1. (9) Proof : Using (14) we may always write, Φ 1,(t) Γ(a p + b pk) Γ ( a p n it) Γ(a p ) Γ ( a p + b pk n it) b pk (a p + l 1) l1 where b pk b pk, with ( a p + l 1 n it ) 1, b pk p p + k 1 1 p 1 + k 1 so that, given that here p is odd and thus p 1 is an integer, b pk 1 p 1 + k 1 α 1 for k 1,..., p 1 β α for k p+1 β,..., with β given by (8) and α 1 and α given by (9). We may thus write, taking into account the definitions of a p in (11), γ in (8) and α 1 and α in (9), p 1 Φ β α 1 1(t) (a p + l 1) (a p + l 1 n ) 1 l1 it α (a p + l 1) (a p + l 1 n ) 1 k p+1 β l1 it α 1 ( (a p + l 1) p 1 β a p + l 1 n ) ( p 1 l1 it β) 17

α 1 l1 α l1 (a p + l 1) ( p+1 β ) ( a p + l 1 n ) +( p+1 it ( ) n + 1 p ( n + 1 p + l 1 + l 1 n ) it ( ) n + 1 p γ(α α 1 ) + α 1 ( n + 1 p + α 1 n ) γ(α α it 1 ) ( ) ( ) n p 1 + l n p 1 + l it l1 n n ( ) n p 1 + γ(α α α 1 ) ( n p 1 + α α 1 n n ( ) ( n (p + 1 l) n (p + 1 l) it l1 n n ( ) γ(α α n (p + 1 α ) 1 ) α 1 p 1 l p+1 α 1 step n ( n (p + 1 α ) it n ( ) n l ( n l it n n ) ) γ(α α 1 ) ) ( ) n k γ(α α 1 ) ( n k it n n ) ) γ(α α 1 ) it where the last equality is obtained by taking l* = p + 1 - l and taking into account the definition of k* in (29).

Corollary 2: Using the results in Lemmas 1 through 5, we may finally write

Φ_W(t) = ∏_{k=1}^{p-1} ((n-k)/n)^{r*_k} ((n-k)/n - it)^{-r*_k}  [ = Φ_1(t) ]
         × ∏_{j=1}^{⌊p/2⌋} ∏_{k=1}^{q} [ Γ(a_j + b_jk) Γ(a_j + b*_jk - nit) ] / [ Γ(a_j + b*_jk) Γ(a_j + b_jk - nit) ]
         × { ∏_{k=1}^{q} [ Γ(a_p + b_pk) Γ(a_p + b*_pk - (n/2)it) ] / [ Γ(a_p + b*_pk) Γ(a_p + b_pk - (n/2)it) ] }^{p*}  [ = Φ_2(t) ] , (30)

19 where rk for k 1,..., p 1, r k rk+(p )(α α 1 ) ( p 1 + ) p except for k p 1 α 1 for k p 1 α 1 (31) with a k + γ k for k 1,..., α + 1 ( ) p k for k α +,..., min(p α rk 1, p 1) and k +p α 1,..., p 1, step ( ) p+1 k for k 1+p α1,..., p 1, step, (3) and p 1 1 α, α 1 p 1 1, α p + 1, where, from (5) and (6), ( ) k a k + γ k (k 1) ((+1) ) + k + for k 1,..., α and ( p a α+1 + γ α+1 ) ( p α + ( +( ) α ) α + 1 p + α α 4 4 α ). (33) Proof : First of all we have to notice that (n k) r k (n k nit) r k ( ) rk ( n k n k n n it) rk. 19

20 Then, while for k 1,..., α, r k is obtained just by adding γ k in (5) and a k in the first row of (6) in Corollary 1, for k α + 1 we have, from (6), ( ( )) ( p p a α+1 α α ) ( p +( ) α α α ( ) p p α α ( ) ( p +( ) α 4 + α ) α α + 1 α α 4 + α 4 ) while, from (5), γ α+1 α α + 1 (( + 1) ), where α + 1 0, odd (( + 1) ) α+1, even α + 1 (( + 1) ) so that a α+1 + γ α+1 comes given by (33), since α + 1 α + 1 α + 1 ( ) + (( + 1) ). For k α +,..., min(p α 1, p 1) and k + p α 1,..., p 1, with step, we have to consider the result in the first row in (4) in Corollary 1, while for k 1 + p α 1,..., p 1, with step, we have to consider this same result together with the result in Lemma 5 and notice that Φ 1, (t) ( Φ 1,(t) ) p and that, as such, the exponent in (7) in Lemma 5 only appears for odd p 0

and that ( p ) k + (p ) p k ( p ) k ( p ) + 1 k. for even q, for odd q. Finally, from (27) and (29) in Lemma 5, we see that for k = 1 + p - α_2 we have to add, for odd p, γ(α_2 - α_1), where γ is given in (28), to the value of r_k in order to obtain r*_k. It happens that the only possible values of α_2 - α_1 are zero and 1, so that it is only for α_2 - α_1 = 1, in which case 1 + p - α_2 = p - α_1, that we actually have to add γ(α_2 - α_1) to r_k.

4 Near-exact distributions for W and λ*

Whenever we are able to factorize a c.f. into two factors, one of which corresponds to a manageable well-known distribution while the other, although creating some problems when convoluted with the first, may be adequately asymptotically replaced by another c.f., we may leave the first factor unchanged and adequately replace the second one, so that the overall c.f. obtained corresponds to a known manageable distribution. In this way we obtain what we call a near-exact c.f. for the random variable under study. This is exactly what happens with the c.f. of W. In this Section we show how, by keeping Φ_1(t) in the characteristic function of W in (30) unchanged and replacing Φ_2(t) by the c.f. of a Gamma distribution or of a mixture of two or three Gamma distributions, matching the first two, four or six derivatives of Φ_2(t) with respect to t at t = 0, we are able to obtain high quality near-exact distributions for W in the form of a GNIG (Generalized Near-Integer Gamma) distribution or of mixtures of GNIG distributions. From these distributions we may then easily obtain near-exact distributions for λ* = e^{-W}, as shown in the next section.

The GNIG distribution of depth g + 1 (Coelho, 2004) is the distribution of the r.v.

Z = Y + Σ_{i=1}^{g} X_i

where the g + 1 random variables Y and X_i (i = 1, ..., g) are all independent

with Gamma distributions, Y with shape parameter r, a positive non-integer, and rate parameter λ, and each X_i (i = 1, ..., g) with an integer shape parameter r_i and rate parameter λ_i, all the g + 1 rate parameters being different. The p.d.f. (probability density function) of Z is given by

f_Z(z | r_1, ..., r_g, r; λ_1, ..., λ_g, λ) = K λ^r Σ_{j=1}^{g} e^{-λ_j z} Σ_{k=1}^{r_j} { c_{j,k} [ Γ(k) / Γ(k+r) ] z^{k+r-1} 1F1( r, k+r; -(λ - λ_j) z ) } ,  (z > 0) (34)

and the c.d.f. (cumulative distribution function) given by

F_Z(z | r_1, ..., r_g, r; λ_1, ..., λ_g, λ) = [ λ^r z^r / Γ(r+1) ] 1F1( r, r+1; -λz )
    - K λ^r Σ_{j=1}^{g} e^{-λ_j z} Σ_{k=1}^{r_j} c*_{j,k} Σ_{i=0}^{k-1} [ z^{r+i} λ_j^i / Γ(r+1+i) ] 1F1( r, r+1+i; -(λ - λ_j) z ) ,  (z > 0) (35)

where

K = ∏_{j=1}^{g} λ_j^{r_j}  and  c*_{j,k} = [ c_{j,k} / λ_j^k ] Γ(k) ,

with c_{j,k} given by (11) through (13) in Coelho (1998). In the above expressions, 1F1(a, b; z) is the Kummer confluent hypergeometric function. This function usually has very good convergence properties and is nowadays easily handled by a number of software packages.

In the next Theorem we develop near-exact distributions for W.

Theorem 1: Using for Φ_2(t), in the characteristic function of W = -log λ*, the approximations:

i) λ^s (λ - it)^{-s} with s, λ > 0, such that

[ d^h/dt^h λ^s (λ - it)^{-s} ]_{t=0} = [ d^h Φ_2(t)/dt^h ]_{t=0}  for h = 1, 2 ; (36)

ii) Σ_{k=1}^{2} θ_k µ^{s_k} (µ - it)^{-s_k}, where θ_2 = 1 - θ_1, with θ_k, s_k, µ > 0, such that

[ d^h/dt^h Σ_{k=1}^{2} θ_k µ^{s_k} (µ - it)^{-s_k} ]_{t=0} = [ d^h Φ_2(t)/dt^h ]_{t=0}  for h = 1, ..., 4 ; (37)

iii) Σ_{k=1}^{3} θ*_k ν^{s*_k} (ν - it)^{-s*_k}, where θ*_3 = 1 - θ*_1 - θ*_2, with θ*_k, s*_k, ν > 0, such that

[ d^h/dt^h Σ_{k=1}^{3} θ*_k ν^{s*_k} (ν - it)^{-s*_k} ]_{t=0} = [ d^h Φ_2(t)/dt^h ]_{t=0}  for h = 1, ..., 6 ; (38)

we obtain as near-exact distributions for W, respectively:

i) a GNIG distribution of depth p with c.d.f.

F(w | r*_1, ..., r*_{p-1}, s; λ_1, ..., λ_{p-1}, λ) , (39)

where the r*_j (j = 1, ..., p-1) are given in (31) and

λ_j = (n - j)/n ,  (j = 1, ..., p-1) , (40)

and

λ = m_1 / (m_2 - m_1²)  and  s = m_1² / (m_2 - m_1²) (41)

with

m_h = i^{-h} [ d^h Φ_2(t)/dt^h ]_{t=0} ,  h = 1, 2 ;

ii) a mixture of two GNIG distributions of depth p, with c.d.f.

Σ_{k=1}^{2} θ_k F(w | r*_1, ..., r*_{p-1}, s_k; λ_1, ..., λ_{p-1}, µ) , (42)

where the r*_j and λ_j (j = 1, ..., p-1) are given in (31) and (40) and θ_1, µ, s_1 and s_2 are obtained from the numerical solution of the system of four equations

Σ_{k=1}^{2} θ_k Γ(s_k + h) / ( Γ(s_k) µ^h ) = i^{-h} [ d^h Φ_2(t)/dt^h ]_{t=0}  (h = 1, ..., 4) (43)

for these parameters, with θ_2 = 1 - θ_1;

iii) or a mixture of three GNIG distributions of depth p, with c.d.f.

Σ_{k=1}^{3} θ*_k F(w | r*_1, ..., r*_{p-1}, s*_k; λ_1, ..., λ_{p-1}, ν) , (44)

with the r*_j and λ_j (j = 1, ..., p-1) given by (31) and (40) and θ*_1, θ*_2, ν, s*_1, s*_2 and s*_3 obtained from the numerical solution of the system of six equations

Σ_{k=1}^{3} θ*_k Γ(s*_k + h) / ( Γ(s*_k) ν^h ) = i^{-h} [ d^h Φ_2(t)/dt^h ]_{t=0}  (h = 1, ..., 6) (45)

for these parameters, with θ*_3 = 1 - θ*_1 - θ*_2.

Proof: If in the c.f. of W we replace Φ_2(t) by λ^s (λ - it)^{-s} we obtain

λ^s (λ - it)^{-s} ∏_{k=1}^{p-1} ((n-k)/n)^{r*_k} ((n-k)/n - it)^{-r*_k}  [ = Φ_1(t) ] ,

that is, the c.f. of the sum of p independent Gamma random variables, p - 1 of which with integer shape parameters r*_j and rate parameters λ_j given by (31) and (40), and a further Gamma random variable with shape parameter s > 0 and rate parameter λ. This c.f. is thus the c.f. of the GNIG distribution of depth p with distribution function given in (39). The parameters s and λ are determined in such a way that (36) holds. This compels s and λ to be given by (41) and makes the first two moments of this near-exact distribution for W the same as the first two exact moments of W.

If in the c.f. of W we replace Φ_2(t) by Σ_{k=1}^{2} θ_k µ^{s_k} (µ - it)^{-s_k} we obtain

Σ_{k=1}^{2} θ_k µ^{s_k} (µ - it)^{-s_k} ∏_{k=1}^{p-1} ((n-k)/n)^{r*_k} ((n-k)/n - it)^{-r*_k}  [ = Φ_1(t) ] ,

that is, the c.f. of the mixture of two GNIG distributions of depth p with distribution function given in (42). The parameters θ_1, µ, s_1 and s_2 are defined in such a way that (37) holds, giving rise to the evaluation of these parameters as the numerical solution of the system of equations in (43) and to a near-exact distribution that matches the first four exact moments of W.

If in the c.f. of W we replace Φ_2(t) by Σ_{k=1}^{3} θ*_k ν^{s*_k} (ν - it)^{-s*_k} we obtain

Σ_{k=1}^{3} θ*_k ν^{s*_k} (ν - it)^{-s*_k} ∏_{k=1}^{p-1} ((n-k)/n)^{r*_k} ((n-k)/n - it)^{-r*_k}  [ = Φ_1(t) ] ,

that is, the characteristic function of the mixture of three GNIG distributions of depth p with distribution function given in (44). The parameters θ*_1, θ*_2, ν, s*_1, s*_2 and s*_3 are defined in such a way that (38) holds, which gives rise to the evaluation of these parameters as the numerical solution of the system of equations in (45), giving rise to a near-exact distribution that matches the first six exact moments of W.

We should note here that the replacement of the characteristic function of a sum of Logbeta random variables by the characteristic function of a single Gamma random variable, or by that of a mixture of two or three such random variables, has already been well justified at the end of Section 2.

Corollary 3: Distributions with c.d.f.'s given by

i) 1 - F(-log z | r*_1, ..., r*_{p-1}, s; λ_1, ..., λ_{p-1}, λ) ,

ii) 1 - Σ_{k=1}^{2} θ_k F(-log z | r*_1, ..., r*_{p-1}, s_k; λ_1, ..., λ_{p-1}, µ) , or

iii) 1 - Σ_{k=1}^{3} θ*_k F(-log z | r*_1, ..., r*_{p-1}, s*_k; λ_1, ..., λ_{p-1}, ν) ,

where the parameters are the same as in Theorem 1, and 0 < z < 1 represents the running value of the statistic λ* = e^{-W}, may be used as near-exact distributions for this statistic.

Proof: Since the near-exact distributions developed in Theorem 1 were for the random variable W = -log(λ*), we only need to mind the relation

F_{λ*}(z) = 1 - F_W(-log z) ,

where F_{λ*}(·) is the cumulative distribution function of λ* and F_W(·) is the cumulative distribution function of W, in order to obtain the corresponding near-exact distributions for λ*.

We should also stress that, although we advocate the numerical solution of the systems of equations (8), (43) and (45), the remarks in Marques and Coelho

(2007), at the end of Section 3, also apply here.

5 Numerical studies

In order to evaluate the quality of the approximations developed we use two measures of proximity between characteristic functions which are also measures of proximity between distribution functions or densities. Let Y be a continuous random variable defined on S with distribution function F_Y(y), density function f_Y(y) and characteristic function φ_Y(t), and let φ_n(t), F_n(y) and f_n(y) be, respectively, the characteristic, distribution and density functions of a random variable X_n. The two measures are

Δ_1 = (1/2π) ∫ |φ_Y(t) - φ_n(t)| dt  and  Δ_2 = (1/2π) ∫ |φ_Y(t) - φ_n(t)| / |t| dt ,

with

max_{y∈S} |f_Y(y) - f_n(y)| ≤ Δ_1 ;  max_{y∈S} |F_Y(y) - F_n(y)| ≤ Δ_2 . (46)

We should note that, for continuous random variables,

lim_{n→∞} Δ_1 = 0 ,  lim_{n→∞} Δ_2 = 0 (47)

and either one of the limits above implies that

X_n →_d Y . (48)

Indeed both measures and both relations in (46) may be derived directly from inversion formulas, and Δ_2 may be seen as based on the Berry-Esseen upper bound on |F_Y(y) - F_n(y)| (Berry, 1941; Esseen, 1945; Loève, 1977, Chap. VI, Sec. 1; Hwang, 1998), which may, for any b > 1/(2π) and any T > 0, be written as

max_{y∈S} |F_Y(y) - F_n(y)| ≤ b ∫_{-T}^{T} |φ_Y(t) - φ_n(t)| / |t| dt + C(b) M / T , (49)

where M = max_{y∈S} f_n(y) and C(b) is a positive constant that depends only on b. If in (49) above we take T → ∞ we obtain Δ_2, since then we may take b = 1/(2π).
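For a concrete illustration of (46), the sketch below computes Δ_1 and Δ_2 by numerical quadrature for a toy pair, a Gamma(4, 1) law against the Normal with matched mean and variance, and then checks that the p.d.f. and c.d.f. distances are indeed bounded by Δ_1 and Δ_2 (scipy is assumed; the check is limited only by quadrature and grid resolution):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma, norm

a = 4.0                                    # Gamma shape (rate 1)
phi_Y = lambda t: (1 - 1j * t) ** (-a)     # c.f. of Gamma(4, 1)
phi_n = lambda t: np.exp(1j * a * t - 0.5 * a * t * t)   # c.f. of N(4, 4)

diff = lambda t: abs(phi_Y(t) - phi_n(t))
# The integrands are even in t, so integrate over (0, inf) and use 1/pi
delta1 = (1 / np.pi) * quad(diff, 0, np.inf, limit=200)[0]
delta2 = (1 / np.pi) * quad(lambda t: diff(t) / t if t > 0 else 0.0,
                            0, np.inf, limit=200)[0]

# Check the bounds in (46) on a grid
y = np.linspace(0.01, 25, 4000)
cdf_gap = np.max(np.abs(gamma.cdf(y, a) - norm.cdf(y, loc=a, scale=np.sqrt(a))))
pdf_gap = np.max(np.abs(gamma.pdf(y, a) - norm.pdf(y, loc=a, scale=np.sqrt(a))))
```

Since the moment-matched pairs compared in this paper have equal low-order moments, the integrand of Δ_2 vanishes at t = 0 and both integrals converge; this is what makes the two measures practical to compute for the near-exact and asymptotic approximations discussed below.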

These measures were used by Grilo and Coelho (2007) to study near-exact approximations to the distribution of the product of independent Beta random variables and by Marques and Coelho (2007) in the study of near-exact distributions for the sphericity likelihood ratio test statistic. In this section we will denote Box's asymptotic distribution by Box, by M2G and M3G respectively the asymptotic mixtures of two and three Gamma distributions proposed in Section 2, and by GNIG, M2GNIG and M3GNIG the near-exact single GNIG distribution and the mixtures of two and three GNIG distributions. We show in this section the tables for the values of Δ_2, while the corresponding tables for the values of Δ_1 are in Appendix A. In Table 1 we may see, for increasing p (number of variables), with the sample size remaining close to p, the continuous degradation (increase) of the values of the measure Δ_2 for Box's asymptotic distribution, even with a value which does not make much sense for p = 50 (since Δ_2 is an upper bound on the absolute value of the difference between the approximate and the exact c.d.f.). Actually, in many cases where n is close to p, Box's asymptotic distribution does not even correspond to a true distribution (see Appendix B). Also the two asymptotic distributions M2G and M3G show slightly increasing values of Δ_2, although remaining at much lower values. For the distribution M3G it was not possible to obtain convergence of the solutions for p = 20 and p = 50, but this distribution, for small values of p, even shows lower values of Δ_2 than the GNIG near-exact distribution. Opposite to this behavior, all three near-exact distributions show a sharp improvement (decrease) in their values of Δ_2 for increasing values of p, with only some minor fluctuations for consecutive even and odd values of p, showing the asymptotic character of these distributions for increasing values of p.

Table 1
Values of the measure Δ_2 for increasing values of p, with small sample sizes

p n Box M2G M3G GNIG M2GNIG M3GNIG

Table A.1 in Appendix A has the corresponding values for Δ_1 and would lead us to draw similar conclusions. In Tables 2 and A.2 we may see the clear asymptotic character of the near-exact distributions for increasing values of q (the number of matrices being tested), with a similar but less marked behavior of the M2G and M3G asymptotic distributions, opposite to what happens with Box's asymptotic distribution. In Tables 3 and A.3 we may see how, for increasing sample sizes, the asymptotic character is stronger for the near-exact distributions than for the asymptotic distributions, this asymptotic character being more marked for the near-exact distributions based on mixtures. However, the M3G asymptotic distribution for p = 7 and n = 50 even beats the M2GNIG near-exact distribution, and the M2G asymptotic distribution performs better than the GNIG near-exact distribution in most of the cases, namely those with larger sample sizes.

Table 2
Values of the measure Δ_2 for increasing values of q, with small sample sizes

p n Box M2G M3G GNIG M2GNIG M3GNIG

Table 3
Values of the measure Δ_2 for increasing sample sizes

p n Box M2G M3G GNIG M2GNIG M3GNIG

In Tables 4 and A.4, if we compare the values of ∆₂ for different values of p and the same sample size, we may see how, opposite to the asymptotic distributions, the near-exact distributions show a clear asymptotic character for increasing values of p.

Table 4 Values of the measure ∆₂ for large q and large sample sizes p n Box M2G M3G GNIG M2GNIG M3GNIG

Conclusions

All the near-exact distributions show a very good performance, with the ones based on mixtures showing an outstanding behaviour. Among the approximate distributions developed in this paper, the near-exact distributions, which also have a more elaborate structure, clearly outperform their asymptotic counterparts for a given number of exact moments matched. Moreover, opposite to the usual asymptotic distributions, the near-exact distributions developed show a marked asymptotic behavior not only for increasing sample sizes but also for increasing values of p (the number of variables) and for increasing values of q (the number of matrices being tested). Also, all the near-exact distributions proposed may easily be used to compute near-exact quantiles. We should stress here that the two new asymptotic distributions proposed also show an asymptotic behavior for increasing values of q. Thus, as a final comment, and given the values of the measures ∆₁ and ∆₂ obtained for the distributions, we would say that the asymptotic distributions proposed in this paper may be used in practical applications that do not require a very high degree of precision, although one higher than the usual asymptotic distributions deliver. For applications that require a high degree of precision in the computation of quantiles we may then use the more elaborate near-exact distributions, mainly those based on mixtures of GNIG distributions, which anyway allow for an easy computation of quantiles.
The M3GNIG distribution, given its excellent performance and manageability, may even be used as a replacement for the exact distribution.
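Quantile computation from a mixture-based near-exact distribution amounts to numerically inverting a c.d.f. that is itself a weighted sum of component c.d.f.s. The sketch below uses a two-component Gamma mixture as a stand-in for a mixture of GNIG distributions (the weights and parameters are illustrative, not taken from the paper):

```python
from scipy.optimize import brentq
from scipy.stats import gamma

# Illustrative two-component Gamma mixture standing in for a mixture
# of GNIG distributions; the weights must sum to 1
weights = [0.6, 0.4]
shapes = [4.0, 7.0]
scales = [1.0, 0.8]

def mixture_cdf(x):
    return sum(w * gamma.cdf(x, a, scale=s)
               for w, a, s in zip(weights, shapes, scales))

def mixture_quantile(p, lo=1e-9, hi=100.0):
    # Invert the monotone c.d.f. by bracketing root-finding
    return brentq(lambda x: mixture_cdf(x) - p, lo, hi)

print(mixture_quantile(0.95))
```

Because the mixture c.d.f. is a monotone and cheaply evaluated function, any bracketing root-finder recovers quantiles essentially to machine precision.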

Appendix A Tables with the values of ∆₁

Table A.1 Values of the measure ∆₁ for increasing values of p, with small sample sizes p n Box M2G M3G GNIG M2GNIG M3GNIG

Table A.2 Values of the measure ∆₁ for increasing values of q, with small sample sizes p n Box M2G M3G GNIG M2GNIG M3GNIG

Table A.3 Values of the measure ∆₁ for increasing sample sizes p n Box M2G M3G GNIG M2GNIG M3GNIG

Table A.4 Values of the measure ∆₁ for large q and large sample sizes p n Box M2G M3G GNIG M2GNIG M3GNIG

Appendix B Plots of p.d.f.s and c.d.f.s of Box's asymptotic distribution for W = −log(λ)

In this Appendix we show some plots of p.d.f.s and c.d.f.s of Box's asymptotic distribution corresponding to the c.f. in (5) for W = −log(λ), for a few combinations of p, q and n for which this distribution is not a proper distribution. In all four cases presented the p.d.f. goes below zero; for p = 5, 7 and 10 the c.d.f. goes above 1, while for p = 50 it takes negative values for smaller values of the argument, as shown in Figures B.1 and B.2.

Figure B.1 Plots of p.d.f.s of Box's asymptotic distribution for W = −log(λ), for q = 2 and n = p + 2 (panels for p = 5, 7, 10 and 50).
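Improperness of this kind can be checked numerically by inverting the characteristic function and inspecting the resulting c.d.f. A sketch using the Gil-Pelaez inversion formula, with an arbitrary Gamma c.f. standing in for Box's expression (which is not reproduced in this excerpt):

```python
import numpy as np
from scipy.integrate import quad

def cdf_from_cf(cf, x, upper=50.0):
    # Gil-Pelaez inversion:
    # F(x) = 1/2 - (1/pi) * integral_0^inf Im(exp(-i*t*x) * cf(t)) / t dt
    integrand = lambda t: np.imag(np.exp(-1j * t * x) * cf(t)) / t
    value, _ = quad(integrand, 1e-12, upper, limit=1000)
    return 0.5 - value / np.pi

# Stand-in: c.f. of a Gamma(3, 1) random variable
cf = lambda t: (1.0 - 1j * t) ** (-3.0)

# A proper c.d.f. must be nondecreasing and stay within [0, 1];
# a pseudo-density that goes negative shows up here as a decreasing stretch
grid = np.linspace(0.1, 15.0, 40)
F = np.array([cdf_from_cf(cf, x) for x in grid])
print(np.all(np.diff(F) >= -1e-6), F.min() >= -1e-6, F.max() <= 1.0 + 1e-6)
```

For a genuine c.f. such as the Gamma stand-in all three checks pass; applying the same routine to an approximation's c.f. reveals the kind of violations shown in Figures B.1 and B.2.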

Figure B.2 Plots of c.d.f.s of Box's asymptotic distribution for W = −log(λ), for q = 2 and n = p + 2 (panels for p = 5, 7, 10 and 50).

References

Alberto, R. P., Coelho, C. A., 2007. Study of the quality of several asymptotic and near-exact approximations based on moments for the distribution of the Wilks Lambda statistic. J. Statist. Plann. Inference 137.
Anderson, T. W., 1958. An Introduction to Multivariate Statistical Analysis, first ed. Wiley, New York.
Bartlett, M. S., 1937. Properties of sufficiency and statistical tests. Proc. Roy. Soc., Ser. A 160.
Berry, A., 1941. The accuracy of the Gaussian approximation to the sum of independent variates. Trans. Amer. Math. Soc. 49.
Box, G. E. P., 1949. A general distribution theory for a class of likelihood criteria. Biometrika 36.
Coelho, C. A., 1998. The Generalized Integer Gamma distribution - a basis for distributions in Multivariate Statistics. J. Multivariate Anal. 64.
Coelho, C. A., 2003. A Generalized Integer Gamma distribution as an asymptotic replacement for a Logbeta distribution - applications. Amer. J. Math. Managem. Sciences 23.
Coelho, C. A., 2004. The Generalized Near-Integer Gamma distribution: a basis for near-exact approximations to the distribution of statistics which are the product of an odd number of independent Beta random variables. J. Multivariate Anal. 89.
Coelho, C. A., Alberto, R. P., Grilo, L. M., 2006. A mixture of Generalized

Integer Gamma distributions as the exact distribution of the product of an odd number of independent Beta random variables. J. Interdiscip. Math. 9.
Esseen, C.-G., 1945. Fourier analysis of distribution functions. A mathematical study of the Laplace-Gaussian law. Acta Math. 77.
Grilo, L. M., Coelho, C. A., 2007. Development and study of two near-exact approximations to the distribution of the product of an odd number of independent Beta random variables. J. Statist. Plann. Inference 137.
Hwang, H.-K., 1998. On convergence rates in the central limit theorems for combinatorial structures. European J. Combin. 19.
Loève, M., 1977. Probability Theory, vol. I, fourth ed. Springer, New York.
Marques, F. J., Coelho, C. A., 2007. Near-exact distributions for the sphericity likelihood ratio test statistic. J. Statist. Plann. Inference (in print).
Muirhead, R. J., 1982. Aspects of Multivariate Statistical Theory. Wiley, New York.


More information

1 Uniform Distribution. 2 Gamma Distribution. 3 Inverse Gamma Distribution. 4 Multivariate Normal Distribution. 5 Multivariate Student-t Distribution

1 Uniform Distribution. 2 Gamma Distribution. 3 Inverse Gamma Distribution. 4 Multivariate Normal Distribution. 5 Multivariate Student-t Distribution A Few Special Distributions Their Properties Econ 675 Iowa State University November 1 006 Justin L Tobias (ISU Distributional Catalog November 1 006 1 / 0 Special Distributions Their Associated Properties

More information

Eco517 Fall 2004 C. Sims MIDTERM EXAM

Eco517 Fall 2004 C. Sims MIDTERM EXAM Eco517 Fall 2004 C. Sims MIDTERM EXAM Answer all four questions. Each is worth 23 points. Do not devote disproportionate time to any one question unless you have answered all the others. (1) We are considering

More information

Statistical Inference: Estimation and Confidence Intervals Hypothesis Testing

Statistical Inference: Estimation and Confidence Intervals Hypothesis Testing Statistical Inference: Estimation and Confidence Intervals Hypothesis Testing 1 In most statistics problems, we assume that the data have been generated from some unknown probability distribution. We desire

More information

Practice Problems Section Problems

Practice Problems Section Problems Practice Problems Section 4-4-3 4-4 4-5 4-6 4-7 4-8 4-10 Supplemental Problems 4-1 to 4-9 4-13, 14, 15, 17, 19, 0 4-3, 34, 36, 38 4-47, 49, 5, 54, 55 4-59, 60, 63 4-66, 68, 69, 70, 74 4-79, 81, 84 4-85,

More information

Regression and Statistical Inference

Regression and Statistical Inference Regression and Statistical Inference Walid Mnif wmnif@uwo.ca Department of Applied Mathematics The University of Western Ontario, London, Canada 1 Elements of Probability 2 Elements of Probability CDF&PDF

More information

Probability. Machine Learning and Pattern Recognition. Chris Williams. School of Informatics, University of Edinburgh. August 2014

Probability. Machine Learning and Pattern Recognition. Chris Williams. School of Informatics, University of Edinburgh. August 2014 Probability Machine Learning and Pattern Recognition Chris Williams School of Informatics, University of Edinburgh August 2014 (All of the slides in this course have been adapted from previous versions

More information

Stat 5101 Notes: Brand Name Distributions

Stat 5101 Notes: Brand Name Distributions Stat 5101 Notes: Brand Name Distributions Charles J. Geyer September 5, 2012 Contents 1 Discrete Uniform Distribution 2 2 General Discrete Uniform Distribution 2 3 Uniform Distribution 3 4 General Uniform

More information

Poisson random measure: motivation

Poisson random measure: motivation : motivation The Lévy measure provides the expected number of jumps by time unit, i.e. in a time interval of the form: [t, t + 1], and of a certain size Example: ν([1, )) is the expected number of jumps

More information

Chapter 2 Continuous Distributions

Chapter 2 Continuous Distributions Chapter Continuous Distributions Continuous random variables For a continuous random variable X the probability distribution is described by the probability density function f(x), which has the following

More information

Department of Mathematics

Department of Mathematics Department of Mathematics Ma 3/103 KC Border Introduction to Probability and Statistics Winter 2018 Supplement 2: Review Your Distributions Relevant textbook passages: Pitman [10]: pages 476 487. Larsen

More information

ARIMA Modelling and Forecasting

ARIMA Modelling and Forecasting ARIMA Modelling and Forecasting Economic time series often appear nonstationary, because of trends, seasonal patterns, cycles, etc. However, the differences may appear stationary. Δx t x t x t 1 (first

More information

Probability Theory and Statistics. Peter Jochumzen

Probability Theory and Statistics. Peter Jochumzen Probability Theory and Statistics Peter Jochumzen April 18, 2016 Contents 1 Probability Theory And Statistics 3 1.1 Experiment, Outcome and Event................................ 3 1.2 Probability............................................

More information

Exercises and Answers to Chapter 1

Exercises and Answers to Chapter 1 Exercises and Answers to Chapter The continuous type of random variable X has the following density function: a x, if < x < a, f (x), otherwise. Answer the following questions. () Find a. () Obtain mean

More information

Multivariate Statistics

Multivariate Statistics Multivariate Statistics Chapter 2: Multivariate distributions and inference Pedro Galeano Departamento de Estadística Universidad Carlos III de Madrid pedro.galeano@uc3m.es Course 2016/2017 Master in Mathematical

More information

Test Code: STA/STB (Short Answer Type) 2013 Junior Research Fellowship for Research Course in Statistics

Test Code: STA/STB (Short Answer Type) 2013 Junior Research Fellowship for Research Course in Statistics Test Code: STA/STB (Short Answer Type) 2013 Junior Research Fellowship for Research Course in Statistics The candidates for the research course in Statistics will have to take two shortanswer type tests

More information

Department of Statistics

Department of Statistics Research Report Department of Statistics Research Report Department of Statistics No. 05: Testing in multivariate normal models with block circular covariance structures Yuli Liang Dietrich von Rosen Tatjana

More information

Probability Density Functions

Probability Density Functions Probability Density Functions Probability Density Functions Definition Let X be a continuous rv. Then a probability distribution or probability density function (pdf) of X is a function f (x) such that

More information

Characteristic Functions and the Central Limit Theorem

Characteristic Functions and the Central Limit Theorem Chapter 6 Characteristic Functions and the Central Limit Theorem 6.1 Characteristic Functions 6.1.1 Transforms and Characteristic Functions. There are several transforms or generating functions used in

More information

Gaussian vectors and central limit theorem

Gaussian vectors and central limit theorem Gaussian vectors and central limit theorem Samy Tindel Purdue University Probability Theory 2 - MA 539 Samy T. Gaussian vectors & CLT Probability Theory 1 / 86 Outline 1 Real Gaussian random variables

More information

MAS223 Statistical Inference and Modelling Exercises

MAS223 Statistical Inference and Modelling Exercises MAS223 Statistical Inference and Modelling Exercises The exercises are grouped into sections, corresponding to chapters of the lecture notes Within each section exercises are divided into warm-up questions,

More information