Research Report 2015:2

Testing in multivariate normal models with block circular covariance structures

Yuli Liang, Dietrich von Rosen, Tatjana von Rosen

Department of Statistics, Stockholm University, SE-106 91 Stockholm, Sweden

Testing in multivariate normal models with block circular covariance structures

Yuli Liang$^a$, Dietrich von Rosen$^{b,c}$, Tatjana von Rosen$^a$

$^a$ Department of Statistics, Stockholm University, SE-106 91 Stockholm
$^b$ Department of Energy and Technology, Swedish University of Agricultural Sciences, Uppsala
$^c$ Department of Mathematics, Linköping University, Linköping

Abstract

In this article, likelihood ratio tests concerning multivariate normal models with block circular covariance structures are obtained. The main interest lies in hypotheses about general block structures of the covariance matrix and about specific covariance parameters of the block circular covariance structure. In addition, tests about the mean vector are considered.

Keywords: Beta random variables, Canonical reduction, Covariance parameters, Likelihood ratio test, Restricted model.

1. Introduction

The main goal of this paper is to develop likelihood ratio tests (LRTs) for multivariate normal models with block circular covariance structures. We consider both testing a block structure (external test) and testing specific (co)variance parameters inside the block circular structure (internal test). Testing specific (co)variance parameters in the presence of block structures is a challenging problem: it demands a solution to an overparametrization problem and the formulation of meaningful hypotheses. For related work on the external test we refer to Wilks (1946), Votaw (1948), Olkin and Press (1969), Olkin (1973) and Srivastava et al. (2008), who

all consider estimation and testing problems for patterned covariance matrices. For example, Votaw (1948) studied a test for a block compound symmetry (CS) covariance matrix, extending the testing problem for the CS structure of Wilks (1946) to a block version and developing an LRT criterion for testing hypotheses relevant to certain psychometric and medical research problems. Olkin (1973) considered the problem of testing a circular Toeplitz (CT) covariance matrix in blocks, an extension of his previous work Olkin and Press (1969). In contrast to the works mentioned above, this paper treats a different covariance structure, which has only been studied by Liang et al. (2014). In this work, we also take the mean structure into account and test mean and covariance structures simultaneously. Furthermore, we consider testing hypotheses about the mean given a specific covariance matrix.

The organization of the paper is as follows. In Section 2, we present a model with three hypothetical block covariance structures and consider various hypotheses concerning the mean and the covariance matrix. Moreover, the model with a block circular Toeplitz covariance structure is studied further by introducing two nested random effects. The LRT statistics for testing block structures, together with the corresponding null distributions, are given in Section 3. In Section 4, we test hypotheses concerning (co)variance parameters within the block circular Toeplitz covariance structure.

2. Models

Let $y_1, y_2, \ldots, y_n$ be independent samples from $N_p(1_{n_2} \otimes \mu, \Sigma)$, where $\mu$ is an $n_1$-variate unknown vector, $p = n_2 n_1$, and let $Y = (y_1, y_2, \ldots, y_n)$. Then we may write

Y ~ N_{p,n}((1_{n_2} ⊗ μ)1'_n, Σ, I_n),   (1)

where $N_{p,n}((1_{n_2}\otimes\mu)1'_n, \Sigma, I_n)$ denotes the $p\times n$ matrix normal distribution with mean matrix $(1_{n_2}\otimes\mu)1'_n$, $p\times p$ covariance matrix $\Sigma$ within the elements of each column, and $n$ independent columns. Throughout this paper, $1_s$ is a column vector of size $s$ with all elements equal to one, $J_s = 1_s1'_s$, $I_s$ is the $s\times s$ identity matrix, and $\otimes$ denotes the Kronecker product. In addition, we denote by $P_s = \frac{1}{s}J_s$ and $Q_s = I_s - P_s$ two mutually orthogonal projectors of size $s\times s$. The notation $\mathrm{vec}(\cdot)$ denotes the vectorization operator and $\mathrm{tr}(\cdot)$ the trace function.

The following three specific structures of $\Sigma$, namely $\Sigma_I$, $\Sigma_{II}$ and $\Sigma_{III}$, are of interest when constructing the LRT statistics in this article.

(i) $\Sigma_I = I_{n_2}\otimes\Sigma^{(1)} + (J_{n_2}-I_{n_2})\otimes\Sigma^{(2)}$, where $\Sigma^{(h)}: n_1\times n_1$ is an unstructured symmetric matrix, $h = 1, 2$;

(ii) $\Sigma_{II} = I_{n_2}\otimes\Sigma^{(1)} + (J_{n_2}-I_{n_2})\otimes\Sigma^{(2)}$, where $\Sigma^{(h)}$, $h = 1, 2$, is a CT matrix which depends on $r$ parameters, $r = [n_1/2]+1$, and the symbol $[\cdot]$ stands for the integer part;

(iii) $\Sigma_{III} = I_{n_2}\otimes\Sigma^{(1)} + (J_{n_2}-I_{n_2})\otimes\Sigma^{(2)}$, where $\Sigma^{(h)}$, $h = 1, 2$, is a CS matrix and can be written as $\Sigma^{(h)} = \sigma_{h1}I_{n_1} + \sigma_{h2}(J_{n_1}-I_{n_1})$.

Throughout this paper, the notation CT matrix stands for a symmetric circular Toeplitz matrix. Note that the matrix $\Sigma^{(h)} = (\sigma^{(h)}_{ij})$ depends on $r$, $r = [n_1/2]+1$, parameters, and for $i, j = 1, \ldots, n_1$, $h = 1, 2$,

σ^{(h)}_{ij} = τ_{|j−i|+1+(h−1)r}, if |j−i| ≤ r−1; τ_{n_1−|j−i|+1+(h−1)r}, otherwise,

where the $\tau_q$'s are unknown parameters and, taking into account that $h = 1, 2$, the index $q = 1, \ldots, 2r$. For spectral properties of CT matrices, see Basilevsky (1983), for example. The numbers of unknown parameters in $\Sigma_I$, $\Sigma_{II}$ and $\Sigma_{III}$ are $n_1(n_1+1)$, $2r$ and $4$, respectively. That is, more and more restrictions are imposed on $\Sigma$ when going from $\Sigma_I$ to $\Sigma_{III}$.
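The three structures differ only in how the blocks $\Sigma^{(h)}$ are restricted, which is easy to see numerically. The following sketch (not part of the paper; sizes and parameter values are illustrative) builds a CT matrix from its $r = [n_1/2]+1$ parameters and assembles the common block form $I_{n_2}\otimes\Sigma^{(1)} + (J_{n_2}-I_{n_2})\otimes\Sigma^{(2)}$:

```python
import numpy as np

def ct_matrix(n1, tau):
    """Symmetric circular Toeplitz (CT) matrix built from its
    r = n1 // 2 + 1 free parameters tau[0], ..., tau[r - 1]:
    entry (i, j) depends only on the circular distance min(|i-j|, n1-|i-j|),
    which is the 0-indexed version of the tau_{|j-i|+1} rule in the text."""
    r = n1 // 2 + 1
    assert len(tau) == r
    S = np.empty((n1, n1))
    for i in range(n1):
        for j in range(n1):
            d = min(abs(i - j), n1 - abs(i - j))   # circular distance <= r - 1
            S[i, j] = tau[d]
    return S

def block_cov(n2, S1, S2):
    """Sigma = I_{n2} (x) S1 + (J_{n2} - I_{n2}) (x) S2."""
    I, J = np.eye(n2), np.ones((n2, n2))
    return np.kron(I, S1) + np.kron(J - I, S2)

n1, n2 = 4, 3                                      # illustrative dimensions
Sigma_II = block_cov(n2, ct_matrix(n1, [2.0, 0.5, 0.2]),
                         ct_matrix(n1, [1.0, 0.3, 0.1]))
print(Sigma_II.shape)                              # (12, 12): p = n2 * n1
print(np.allclose(Sigma_II, Sigma_II.T))           # True: symmetric
```

Each CT block here uses $r = 3$ parameters, so this $\Sigma_{II}$ has $2r = 6$ covariance parameters, compared with $n_1(n_1+1) = 20$ for $\Sigma_I$ and $4$ for $\Sigma_{III}$.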

Olkin (1973) considered the problems of testing the block versions of the hypotheses sphericity versus intraclass correlation, sphericity versus circular symmetry, intraclass correlation versus circular symmetry, and circular symmetry versus a general structure, assuming an unstructured mean. Olkin's block version of intraclass correlation is the same as $\Sigma_I$. Differently from Olkin (1973), this paper starts from the null hypothesis of a mixed block structure $\Sigma_{II}$, following Barton and Fuhrmann (1993), and tests it against $\Sigma_I$; the structure $\Sigma_{II}$ can be transformed into Olkin's block circular symmetry model with intraclass blocks inside by using the so-called commutation matrix (Liang et al., 2012). Moreover, it is useful to test $\Sigma_I$ or $\Sigma_{II}$ further against a more parsimonious structure $\Sigma_{III}$.

It is of interest to test both the mean $\mu$ and the block structure of $\Sigma$ simultaneously:

H^0_1: μ = μ1_{n_1}, Σ = Σ_{II} versus H^a_1: μ ∈ R^{n_1}, Σ = Σ_I,
H^0_2: μ = μ1_{n_1}, Σ = Σ_{III} versus H^a_2: μ ∈ R^{n_1}, Σ = Σ_I.

Furthermore, the following hypotheses about the pattern of $\Sigma$ will be tested, i.e.,

H^0_3: Σ = Σ_{III} versus H^a_3: Σ = Σ_{II}, given μ = μ1_{n_1},
H^0_4: Σ = Σ_{II} versus H^a_4: Σ = Σ_I, given μ ∈ R^{n_1},
H^0_5: Σ = Σ_{III} versus H^a_5: Σ = Σ_I, given μ ∈ R^{n_1},

or hypotheses about the mean, i.e.,

H^0_6: μ = μ1_{n_1} versus H^a_6: μ ∈ R^{n_1}, given Σ = Σ_{II},
H^0_7: μ = μ1_{n_1} versus H^a_7: μ ∈ R^{n_1}, given Σ = Σ_{III}.

In this paper we will also pay particular attention to the case $\Sigma = \Sigma_{II}$. In Liang et al. (2012, 2014), a balanced hierarchical mixed linear model with the covariance structure $\Sigma_{II}$ is considered; the model can be applied to situations where there is a spatial circular layout on one factor and another factor satisfies the

property of exchangeability. Liang et al. (2014) gave a real-data example motivating the pattern of $\Sigma_{II}$. Moreover, $\Sigma_{II}$ characterizes the dependency when both compound symmetry and circular symmetry appear hierarchically (Liang et al., 2012).

Now we introduce two random effects $\gamma_1$ and $\gamma_2$, where $\gamma_2$ is nested within $\gamma_1$, and the response vector $y_i$ is given by

y_i = μ1_p + Z_1γ_1 + Z_2γ_2 + ε_i,  i = 1, …, n,   (2)

where $y_i$ is a $p\times 1$ vector of observations, $p = n_2n_1$, $\mu$ is an unknown constant, and $\gamma_1: n_2\times 1$, $\gamma_2: p\times 1$ and $\varepsilon_i$ are independently normally distributed random vectors with zero means and covariance matrices $\Sigma_1$, $\Sigma_2$ and $\sigma^2I_p$, respectively. Here $Z_1 = I_{n_2}\otimes 1_{n_1}$ and $Z_2 = I_{n_2}\otimes I_{n_1}$. Hence,

y_i ~ N_p(μ1_p, V(θ)),  i = 1, …, n,   V(θ) = Z_1Σ_1Z_1' + Σ_2 + σ²I_p,   (3)

where $V(\theta)$ is the covariance matrix with vector of unknown parameters $\theta$. The covariance matrix $V(\theta)$ in (3) may have different structures depending on $\Sigma_1$ and $\Sigma_2$. Here we consider $V(\theta)$ when the covariance matrix $\Sigma_1: n_2\times n_2$ is compound symmetric, i.e.,

Σ_1 = σ_1I_{n_2} + σ_2(J_{n_2}−I_{n_2}),   (4)

where $\sigma_1$ and $\sigma_2$ are unknown parameters. Furthermore, the covariance matrix $\Sigma_2: p\times p$ in (3) is assumed to have the following block compound symmetric pattern:

Σ_2 = I_{n_2}⊗Σ_2^{(1)} + (J_{n_2}−I_{n_2})⊗Σ_2^{(2)},   (5)

where $\Sigma_2^{(h)}$ is a CT matrix, $h = 1, 2$. Spectral properties of $\Sigma_1$, $Z_1\Sigma_1Z_1'$ and $\Sigma_2$ have been derived in Liang et al. (2012); they will be used in Section 4 when discussing tests for the parameters $\theta$ of $V(\theta)$ in (3).
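The covariance $V(\theta)$ in (3) can be assembled directly from its definition. The sketch below (parameter values are illustrative, not from the paper) also confirms numerically that $V(\theta)$ again has the $\Sigma_{II}$ pattern: $Z_1\Sigma_1Z_1' = \Sigma_1\otimes J_{n_1}$, $J_{n_1}$ is itself a CT matrix, and the CT class is closed under addition.

```python
import numpy as np

n1, n2 = 4, 3
p = n1 * n2

def ct(n, tau):  # symmetric circular Toeplitz from n//2 + 1 parameters
    return np.array([[tau[min(abs(i - j), n - abs(i - j))] for j in range(n)]
                     for i in range(n)])

I2, J2 = np.eye(n2), np.ones((n2, n2))
s1, s2, sig2 = 1.5, 0.4, 0.25                      # illustrative values
Sigma1 = s1 * I2 + s2 * (J2 - I2)                  # compound symmetric, eq. (4)
Sigma2 = np.kron(I2, ct(n1, [1.0, 0.3, 0.1])) + \
         np.kron(J2 - I2, ct(n1, [0.6, 0.2, 0.05]))  # block CT, eq. (5)
Z1 = np.kron(I2, np.ones((n1, 1)))                 # Z_1 = I_{n2} (x) 1_{n1}
V = Z1 @ Sigma1 @ Z1.T + Sigma2 + sig2 * np.eye(p)  # V(theta), eq. (3)

# Every n1 x n1 block of V(theta) is CT, and all off-diagonal blocks coincide.
D = V[:n1, :n1]                                     # a diagonal block
offs = [V[:n1, n1:2*n1], V[n1:2*n1, 2*n1:3*n1]]     # two off-diagonal blocks
print(np.allclose(offs[0], offs[1]))                # True: exchangeable factor
print(np.allclose(D, ct(n1, [D[0, 0], D[0, 1], D[0, 2]])))  # True: CT block
```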

3. Testing block covariance structures

In this section we derive the LRT statistics for testing the null hypotheses $H^0_i$ versus the alternative hypotheses $H^a_i$, $i = 1, \ldots, 7$, defined in Section 2.

3.1. Testing simultaneous hypotheses

The LRT statistics corresponding to testing $H^0_i$ versus $H^a_i$, $i = 1, 2$, are defined as

Λ_1 = max L(μ, Σ_{II}) / max L(μ, Σ_I),   Λ_2 = max L(μ, Σ_{III}) / max L(μ, Σ_I).

We start by defining an orthogonal matrix (known as a Helmert matrix; see Lancaster, 1965) $K: n_2\times n_2$ such that

K = (n_2^{−1/2}1_{n_2} : K_2),   (6)

where $K_2'1_{n_2} = 0$ and $K_2'K_2 = I_{n_2−1}$. Moreover, the orthogonal matrix which contains the known orthonormal eigenvectors $v_j$, $j = 1, \ldots, n_1$, of any $n_1\times n_1$ CT matrix is denoted by $V$, i.e., $V = (v_1, \ldots, v_{n_1})$. We also define the following matrices:

X_1 = (n_2^{−1/2}1'_{n_2} ⊗ I_{n_1})Y,   X_2 = (K_2' ⊗ I_{n_1})Y,   X_2 = (X_{21}', …, X_{2(n_2−1)}')',
S_1 = X_1Q_nX_1',   S_2 = ∑_{i=1}^{n_2−1} X_{2i}X_{2i}',   S_3 = X_1P_nX_1'.   (7)

The next theorem establishes $\Lambda_1$.

Theorem 3.1. The LRT statistic for testing $H^0_1$ versus $H^a_1$, defined in Section 2, is given by

Λ_1^{2/n} = 2^{(n_1−1−1_{\{n_1 even\}})n_2} |S_1| |S_2|^{n_2−1} / ∏_{l=1}^{2r} t_{1l}^{m_l},   (8)

where $1_{\{n_1\,even\}}$ is the indicator that $n_1$ is even, the matrices $S_h$, $h = 1, 2$, are given in (7), the multiplicities are $m_1 = 1$; $m_i = 2$, $i = 2, \ldots, r−1$; $m_r = 2$ if $n_1$ is odd and $1$ if $n_1$ is even; $m_{r+1} = n_2−1$; $m_{r+i} = 2(n_2−1)$, $i = 2, \ldots, r−1$; $m_{2r} = 2(n_2−1)$ if $n_1$ is odd and $n_2−1$ if $n_1$ is even; and $t_{1i} = \mathrm{tr}((X_1X_1')(v_iv_i' + v_{n_1−i+2}v'_{n_1−i+2}))$, $i = 2, \ldots, r−1$, $t_{11} = \mathrm{tr}(S_1P_{n_1})$, $t_{1,r+1} = \mathrm{tr}(S_2P_{n_1})$,

8 tr(s (v i v i + v n i+v n i+)), i =,..., r, and for i = r, tr((x X )(v r v r + v n r+v n r+)), if n is odd, t r = tr((x X )(v r v r)), if n is even, tr(s (v r v r + v n r+v n r+)), if n is odd, t,r = tr(s v r v r), if n is even. Proof. The likelihood function under the alternative hypothesis H a is given by L(µ, Σ I ) = (π) pn ΣI n e tr{σ I (Y ( n µ) n)(y ( n µ) n) }. (9) Now it is convenient to use the reduction to a canonical form. Observe that (X.X ) = (K I n )Y, where K has been defined in (6). The expectations of X and X equal (n / n I n )( n µ) n = n µ n, (K I n )( n µ) n = 0, respectively. For the dispersion matrices of X and X, first observe that K J n K = n 0, 0 0 n and then using the properties of Kronecker product yields (K I n )Σ I (K I n ) = Σ() + (n )Σ () 0. 0 I n (Σ () Σ () ) Thus, we have the following canonical form: X n µ N p,n n, 0, (0) 0 0 I n X where = Σ () + (n )Σ () and = Σ () Σ (). Moreover, since the covariance between X and X equals 0, X and X are independently distributed. 7

Let $L_1(\mu, \Delta_1)$ and $L_2(\Delta_2)$ be the two likelihood functions corresponding to $X_1$ and $X_2$, respectively. Using (10), equation (9) can be rewritten as

L(μ, Σ_I) = L_1(μ, Δ_1) L_2(Δ_2)
= (2π)^{−n_1n/2} |Δ_1|^{−n/2} e^{−(1/2)tr{Δ_1^{−1}(X_1−√n_2 μ1'_n)(X_1−√n_2 μ1'_n)'}} × (2π)^{−n_1(n_2−1)n/2} |Δ_2|^{−n(n_2−1)/2} e^{−(1/2)tr{Δ_2^{−1}∑_{i=1}^{n_2−1}X_{2i}X_{2i}'}}.   (11)

From (11) it can be observed that the likelihood function in (9) is identical to the likelihood functions of two independent MANOVA (multivariate analysis of variance) models, one of which has mean 0. Hence, the MLEs of $\mu$ and $\Delta_1$ equal

√n_2 μ̂ = (1/n)X_1 1_n,   Δ̂_1 = (1/n)X_1Q_nX_1' = (1/n)S_1.   (12)

The MLE of $\Delta_2$ is given by

Δ̂_2 = (1/(n(n_2−1))) ∑_{i=1}^{n_2−1} X_{2i}X_{2i}' = (1/(n(n_2−1))) S_2.

Hence, the likelihood function in (11) is maximized by replacing $\mu$, $\Delta_1$ and $\Delta_2$ with their corresponding MLEs, i.e.,

max L(μ, Δ_1, Δ_2) = L(μ̂, Δ̂_1, Δ̂_2) = (2π)^{−pn/2} |S_1/n|^{−n/2} |S_2/(n(n_2−1))|^{−n(n_2−1)/2} e^{−pn/2}.   (13)

The likelihood function under $H^0_1$, i.e., $Y \sim N_{p,n}(\mu 1_p1'_n, \Sigma_{II}, I_n)$, can be written as

L(μ, Σ_{II}) = (2π)^{−pn/2} |Σ_{II}|^{−n/2} e^{−(1/2)tr{Σ_{II}^{−1}(Y−μ1_p1'_n)(Y−μ1_p1'_n)'}}.   (14)

Furthermore,

(I_n⊗(K⊗V)') vecY ~ N_{pn}[ μ1_n⊗(K'1_{n_2})⊗(V'1_{n_1}), I_n⊗D(η) ],

where $K'1_{n_2} = (\sqrt{n_2}, 0, \ldots, 0)'$ and $V'1_{n_1} = (\sqrt{n_1}, 0, \ldots, 0)'$. The last equality follows because $K\otimes V$ is the matrix of eigenvectors of $\Sigma_{II}$, i.e.,

Σ_{II} = (K⊗V) D(η) (K⊗V)',
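The matrix $V$ of common orthonormal eigenvectors of all $n_1\times n_1$ CT matrices can be taken to be the real Fourier basis. The sketch below (the column ordering is one possible choice, not necessarily the paper's) verifies that this $V$ diagonalizes an arbitrary CT matrix for both odd and even $n_1$:

```python
import numpy as np

def ct(n, tau):
    return np.array([[tau[min(abs(i - j), n - abs(i - j))] for j in range(n)]
                     for i in range(n)])

def fourier_eigvecs(n):
    """Real orthonormal eigenvectors shared by every n x n symmetric
    circular Toeplitz matrix: the constant vector, cos/sin pairs, and
    (for even n) the alternating vector."""
    t = np.arange(n)
    cols = [np.ones(n) / np.sqrt(n)]
    for k in range(1, (n - 1) // 2 + 1):
        cols.append(np.sqrt(2.0 / n) * np.cos(2 * np.pi * k * t / n))
        cols.append(np.sqrt(2.0 / n) * np.sin(2 * np.pi * k * t / n))
    if n % 2 == 0:
        cols.append((-1.0) ** t / np.sqrt(n))
    return np.column_stack(cols)

for n in (5, 6):
    V = fourier_eigvecs(n)
    S = ct(n, [float(2 * k + 3) for k in range(n // 2 + 1)])
    assert np.allclose(V.T @ V, np.eye(n))          # orthonormal columns
    D = V.T @ S @ V
    assert np.allclose(D, np.diag(np.diag(D)))      # V'SV is diagonal
```

The cosine/sine pair at each frequency shares an eigenvalue, which is the source of the multiplicity-2 eigenvalues (the pairing $v_i$, $v_{n_1-i+2}$) used throughout the proofs.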

where $D(\eta)$ is a $p\times p$ diagonal matrix with the eigenvalues of $\Sigma_{II}$ (Liang et al., 2012). Put $w_i = (I_n\otimes u_i')\mathrm{vec}Y$, where $u_i$ is the $i$th column of $K\otimes V$, $i = 1, \ldots, p$. It follows that the model can be split into $2r$ independent models (Liang et al., 2014, proof of Proposition 1), where $r = [n_1/2]+1$ and $[\cdot]$ denotes the integer part. Each model has the following response vector:

ỹ_1 = w_1,
ỹ_i = vec(w_i, w_{n_1−i+2}), i = 2, …, r−1;
ỹ_r = vec(w_r, w_{n_1−r+2}) if n_1 is odd; w_r if n_1 is even;
ỹ_{r+1} = vec(w_{n_1+1}, w_{2n_1+1}, …, w_{(n_2−1)n_1+1});   (15)
ỹ_{r+i} = vec(w_{n_1+i}, w_{2n_1−i+2}, …, w_{(n_2−1)n_1+i}, w_{n_2n_1−i+2}), i = 2, …, r−1;
ỹ_{2r} = vec(w_{n_1+r}, w_{2n_1−r+2}, …, w_{(n_2−1)n_1+r}, w_{n_2n_1−r+2}) if n_1 is odd; vec(w_{n_1+r}, w_{2n_1+r}, …, w_{(n_2−1)n_1+r}) if n_1 is even.

It can be shown that $\tilde y_1 \sim N_n(\sqrt{p}\,\mu 1_n, \eta_1 I_n)$ and $\tilde y_l \sim N_{nm_l}(0, \eta_l I_{nm_l})$, $l = 2, \ldots, 2r$, where the $\eta_l$ are the distinct eigenvalues of $\Sigma_{II}$, of multiplicity $m_l$, $l = 1, \ldots, 2r$. Thus, the MLEs of $\mu$ and $\eta$ are given by

√p μ̂ = (1/n)ỹ_1'1_n,   η̂_1 = (1/n)ỹ_1'Q_nỹ_1,   η̂_l = (1/(nm_l))ỹ_l'ỹ_l,  l = 2, …, 2r.   (16)

In order to relate the maxima of the likelihoods under $H^0_1$ and $H^a_1$, we rewrite the MLEs of the $\eta_l$ in terms of the matrices $S_h$, $h = 1, 2, 3$. Since $w_i = (I_n\otimes u_i')\mathrm{vec}Y = Y'u_i$, i.e., $w_i = Y'((1/\sqrt{n_2})1_{n_2}\otimes v_i)$ and $w_{jn_1+i} = Y'(k_j\otimes v_i)$ (Liang et al., 2012, Theorem 2.4), where $k_j$ is the $j$th column of the matrix $K_2$, $i = 1, \ldots, n_1$, $j = 1, \ldots, n_2-1$, after some manipulations we get the following expressions for the $\hat\eta_l$:

η̂_1 = (1/n)ỹ_1'Q_nỹ_1 = (1/n)u_1'Y Q_nY'u_1 = (1/n)v_1'X_1Q_nX_1'v_1 = (1/n) tr(S_1P_{n_1}),   (17)

with multiplicity $m_1 = 1$;

η̂_{r+1} = (1/(n(n_2−1)))ỹ'_{r+1}ỹ_{r+1} = (1/(n(n_2−1))) ∑_{j=1}^{n_2−1} w'_{jn_1+1}w_{jn_1+1} = (1/(n(n_2−1))) ∑_{j=1}^{n_2−1} v_1'X_{2j}X_{2j}'v_1 = (1/(n(n_2−1))) tr(S_2P_{n_1}),   (18)

with $m_{r+1} = n_2−1$. For $i = 2, \ldots, r−1$ we have $m_i = 2$ and

η̂_i = (1/(2n))ỹ_i'ỹ_i = (1/(2n))(w_i'w_i + w'_{n_1−i+2}w_{n_1−i+2}) = (1/(2n))(v_i'X_1X_1'v_i + v'_{n_1−i+2}X_1X_1'v_{n_1−i+2}) = (1/(2n)) tr((S_1+S_3)(v_iv_i' + v_{n_1−i+2}v'_{n_1−i+2}));   (19)

η̂_{r+i} = (1/(2n(n_2−1)))ỹ'_{r+i}ỹ_{r+i} = (1/(2n(n_2−1))) ∑_{j=1}^{n_2−1} (w'_{jn_1+i}w_{jn_1+i} + w'_{(j+1)n_1−i+2}w_{(j+1)n_1−i+2}) = (1/(2n(n_2−1))) ∑_{j=1}^{n_2−1} [ (k_j⊗v_i)'Y Y'(k_j⊗v_i) + (k_j⊗v_{n_1−i+2})'Y Y'(k_j⊗v_{n_1−i+2}) ] = (1/(2n(n_2−1))) tr(S_2(v_iv_i' + v_{n_1−i+2}v'_{n_1−i+2})),   (20)

with $m_{r+i} = 2(n_2−1)$. Similarly, when $n_1$ is odd we have $m_r = 2$ and $m_{2r} = 2(n_2−1)$; when $n_1$ is even, $m_r = 1$ and $m_{2r} = n_2−1$. Then,

η̂_r = (1/(2n)) tr((S_1+S_3)(v_rv_r' + v_{n_1−r+2}v'_{n_1−r+2})) if n_1 is odd; (1/n) tr((S_1+S_3)v_rv_r') if n_1 is even,
η̂_{2r} = (1/(2n(n_2−1))) tr(S_2(v_rv_r' + v_{n_1−r+2}v'_{n_1−r+2})) if n_1 is odd; (1/(n(n_2−1))) tr(S_2v_rv_r') if n_1 is even.   (21)

Therefore, the likelihood function in (14) is maximized by

max L(μ, Σ_{II}) = (2π)^{−pn/2} ∏_{i=1}^{2r} η̂_i^{−nm_i/2} e^{−pn/2},   (22)

where the $\hat\eta_i$, $i = 1, \ldots, 2r$, are given in (17), (18), (19), (20) and (21). Using the expressions given in (13) and (22), $\Lambda_1$ has the form given in (8), and this completes the proof.

Next we show that under the null hypothesis $H^0_1$ the distribution of $\Lambda_1$ is the same as that of a product of independent Beta random variables. The result is given in Theorem 3.4. We use the notation $=^d$ to indicate equality in distribution, and $\beta(a, b)$ denotes a Beta random variable with parameters $a$ and $b$. We first present two auxiliary lemmas which will be used in the subsequent proof.

Lemma 3.2. (Muirhead, 1982, Theorem 3.2.15, p. 100) If $A$ is $W_m(n, \Sigma)$, where $n \geq m$ is an integer, then $|A|/|\Sigma|$ has the same distribution as the product of $m$ independent $\chi^2$ random variables, $\prod_{i=1}^m \chi^2_{n-i+1}$.

Lemma 3.3. (Olkin and Press, 1969) Let $W_0, W_1, \ldots, W_m$ be independently distributed, $W_j \sim \chi^2_{a_j}$, $j = 0, 1, \ldots, m$. If

L = m^m ∏_{j=1}^m W_j / (W_0 + ∑_{j=1}^m W_j)^m,

then $L =^d \prod_{j=1}^m X_j$, where $X_1, \ldots, X_m$ are independently distributed, $X_j \sim \beta(a_j/2, b_j)$, with $a = \sum_{j=0}^m a_j$ and $b_j = (a + 2(j−1))/(2m) − a_j/2$.

Theorem 3.4. Under the null hypothesis $H^0_1$, the distribution of $\Lambda_1$, given in (8), follows

Λ_1^{2/n} =^d ∏_{i=1}^{n_1−1} B_{1i} B_{2i}^{n_2−1},   (23)

where the $B_{1i}$ and $B_{2i}$ are independently distributed, $i = 1, \ldots, n_1−1$,

B_{1i} ~ β((n−i−1)/2, (i+1)/2) for i = 1, …, [n_1/2];  β((n−i−1)/2, (i+2)/2) for i = [n_1/2]+1, …, n_1−1,
B_{2i} ~ β((n(n_2−1)−i)/2, i/2) for i = 1, …, [n_1/2];  β((n(n_2−1)−i)/2, (i+1)/2) for i = [n_1/2]+1, …, n_1−1.

Proof. To derive the distribution of $\Lambda_1$, we first need the distributions of $S_1$, $S_2$ and $S_3$. Since $X_1 \sim N_{n_1,n}(\sqrt{n_2}\mu 1'_n, \Delta_1, I_n)$ and $Q_n = I_n − \frac{1}{n}J_n$ is idempotent of rank $n−1$, we have (Kollo and von Rosen, 2005, Corollary 2.4.3.1) $S_1 \sim W_{n_1}(\Delta_1, n−1)$. By the definition of the Wishart distribution in Kollo and von Rosen (2005), $S_3$ has a non-central Wishart distribution with non-centrality parameter $nn_2\mu\mu'$, i.e., $S_3 \sim W_{n_1}(\Delta_1, 1, nn_2\mu\mu')$, and since $X_{2i} \sim N_{n_1,n}(0, \Delta_2, I_n)$, we have $X_{2i}X_{2i}' \sim W_{n_1}(\Delta_2, n)$, $i = 1, \ldots, n_2−1$. By the property of sums of independent Wishart matrices (Anderson, 2003, Theorem 7.3.2), $S_2 \sim W_{n_1}(\Delta_2, n(n_2−1))$. Using the results of Theorem 2.4 in Vaish and Chaganty (2004), it can be seen that $S_1$ and $S_3$ are independent. Moreover, $S_1$ and $S_3$ are independent of $S_2$, since they are statistics from two separate MANOVA models; see equation (11).

Now we derive the distribution of $\Lambda_1$. Recalling that the orthogonal matrix $V$ contains the orthonormal eigenvectors of any CT matrix of size $n_1\times n_1$, we have $|V'S_kV| = |S_k|$, $k = 1, 2$. The statistics $t_{11}$, $t_{1i}$ ($i = 2, \ldots, r$) and $t_{1,r+i}$ ($i = 1, \ldots, r$) are simply functions of the diagonal elements of the matrices $V'S_hV$, $h = 1, 2, 3$. When $n_1$ is odd, based on Theorem 3.1, $\Lambda_1^{2/n}$ can be written as follows:

2^{(n_1−1)n_2} |V'S_1V| / { (V'S_1V)_{11} ∏_{i=2}^r [ (V'S_1V)_{ii} + (V'S_1V)_{n_1−i+2,n_1−i+2} + (V'S_3V)_{ii} + (V'S_3V)_{n_1−i+2,n_1−i+2} ]² } × ( |V'S_2V| / { (V'S_2V)_{11} ∏_{i=2}^r [ (V'S_2V)_{ii} + (V'S_2V)_{n_1−i+2,n_1−i+2} ]² } )^{n_2−1},   (24)

where $(V'S_hV)_{ii}$ is the $i$th diagonal element of the matrix $V'S_hV$. Due to the mutual independence of $S_1$, $S_2$ and $S_3$, equation (24) contains two independent components,

2^{n_1−1} |V'S_1V| / { (V'S_1V)_{11} ∏_{i=2}^r [ (V'S_1V)_{ii} + (V'S_1V)_{n_1−i+2,n_1−i+2} + (V'S_3V)_{ii} + (V'S_3V)_{n_1−i+2,n_1−i+2} ]² }

and

( 2^{n_1−1} |V'S_2V| / { (V'S_2V)_{11} ∏_{i=2}^r [ (V'S_2V)_{ii} + (V'S_2V)_{n_1−i+2,n_1−i+2} ]² } )^{n_2−1}.

We first apply the Bartlett decomposition (Muirhead, 1982, Theorem 3.2.14, p. 99). Decompose $V'S_1V = UU'$ and $V'S_2V = LL'$, where $U$ and $L$ are two lower triangular matrices with positive diagonal elements. Under the null hypothesis we have $u_{11}^2 \sim \eta_1\chi^2_{n-1}$, $u_{ii}^2 \sim \eta_i\chi^2_{n-i}$, $u_{n_1-i+2,n_1-i+2}^2 \sim \eta_i\chi^2_{n-(n_1-i+2)}$, $\sum_{\alpha=1}^{i-1}u_{i\alpha}^2 \sim \eta_i\chi^2_{i-1}$ and $\sum_{\alpha=1}^{n_1-i+1}u_{n_1-i+2,\alpha}^2 \sim \eta_i\chi^2_{n_1-i+1}$, for $i = 2, \ldots, r$, and they are all independent. Similarly, $l_{11}^2 \sim \eta_{r+1}\chi^2_{n(n_2-1)}$, $l_{ii}^2 \sim \eta_{r+i}\chi^2_{n(n_2-1)-i+1}$, $l_{n_1-i+2,n_1-i+2}^2 \sim \eta_{r+i}\chi^2_{n(n_2-1)-(n_1-i+2)+1}$, $\sum_{\alpha=1}^{i-1}l_{i\alpha}^2 \sim \eta_{r+i}\chi^2_{i-1}$ and $\sum_{\alpha=1}^{n_1-i+1}l_{n_1-i+2,\alpha}^2 \sim \eta_{r+i}\chi^2_{n_1-i+1}$, and mutual independence also holds. Then,

|V'S_1V| = ∏_{i=1}^{n_1} u_{ii}² = u_{11}² ∏_{i=2}^r u_{ii}² u²_{n_1−i+2,n_1−i+2},   |V'S_2V| = ∏_{i=1}^{n_1} l_{ii}² = l_{11}² ∏_{i=2}^r l_{ii}² l²_{n_1−i+2,n_1−i+2},

and under the null hypothesis $H^0_1$,

∏_{i=1}^{n_1} u_{ii}² =^d η_1 ∏_{i=2}^{r} η_i² ∏_{i=1}^{n_1} Z_{1i},   ∏_{i=1}^{n_1} l_{ii}² =^d η_{r+1} ∏_{i=2}^{r} η_{r+i}² ∏_{i=1}^{n_1} Z_{2i},

where $Z_{1i} \sim \chi^2_{n-i}$ and $Z_{2i} \sim \chi^2_{n(n_2-1)-i+1}$ are independently distributed, $i = 1, \ldots, n_1$. Moreover, $(V'S_1V)_{11} = u_{11}^2$, $(V'S_1V)_{ii} = u_{ii}^2 + \sum_{\alpha=1}^{i-1}u_{i\alpha}^2$, $(V'S_2V)_{11} = l_{11}^2$,

$(V'S_2V)_{ii} = l_{ii}^2 + \sum_{\alpha=1}^{i-1}l_{i\alpha}^2$, and all $u_{ij}$ and $l_{ij}$ ($1 \leq j \leq i \leq n_1$) are independently distributed. Moreover, under $H^0_1$, $v_i'S_3v_i$ and $v'_{n_1-i+2}S_3v_{n_1-i+2}$ are both independently $\chi^2_1$-distributed with scale parameter $\eta_i$, $i = 2, \ldots, r$. Since they are also independent of $\sum_{\alpha=1}^{i-1}u_{i\alpha}^2$ and $\sum_{\alpha=1}^{n_1-i+1}u_{n_1-i+2,\alpha}^2$, we have

∑_{α=1}^{i−1} u²_{iα} + ∑_{α=1}^{n_1−i+1} u²_{n_1−i+2,α} + v_i'S_3v_i + v'_{n_1−i+2}S_3v_{n_1−i+2} ~ η_i χ²_{n_1+2}.

Similarly, $\sum_{\alpha=1}^{i-1}l_{i\alpha}^2 + \sum_{\alpha=1}^{n_1-i+1}l_{n_1-i+2,\alpha}^2 \sim \eta_{r+i}\chi^2_{n_1}$. Hence, equation (24) can be written as

∏_{i=2}^r 2² u_{ii}² u²_{n_1−i+2,n_1−i+2} / ( u_{ii}² + u²_{n_1−i+2,n_1−i+2} + ∑_α u²_{iα} + ∑_α u²_{n_1−i+2,α} + v_i'S_3v_i + v'_{n_1−i+2}S_3v_{n_1−i+2} )² × [ ∏_{i=2}^r 2² l_{ii}² l²_{n_1−i+2,n_1−i+2} / ( l_{ii}² + l²_{n_1−i+2,n_1−i+2} + ∑_α l²_{iα} + ∑_α l²_{n_1−i+2,α} )² ]^{n_2−1},   (25)

and its distribution can be displayed as

∏_{i=2}^r 2² χ²_{n−i} χ²_{n−(n_1−i+2)} / ( χ²_{n_1+2} + χ²_{n−i} + χ²_{n−(n_1−i+2)} )² × [ ∏_{i=2}^r 2² χ²_{n(n_2−1)−i+1} χ²_{n(n_2−1)−(n_1−i+2)+1} / ( χ²_{n_1} + χ²_{n(n_2−1)−i+1} + χ²_{n(n_2−1)−(n_1−i+2)+1} )² ]^{n_2−1},

where the scale parameters $\eta_i$ and $\eta_{r+i}$ cancel. Then we can use Lemma 3.3 to obtain the distribution of $\Lambda_1$. Put $W_{0u} = \sum_{\alpha}u_{i\alpha}^2 + \sum_{\alpha}u_{n_1-i+2,\alpha}^2 + v_i'S_3v_i + v'_{n_1-i+2}S_3v_{n_1-i+2}$, $W_{1u} = u_{ii}^2$ and $W_{2u} = u_{n_1-i+2,n_1-i+2}^2$, and in a similar way let $W_{0l} = \sum_{\alpha}l_{i\alpha}^2 + \sum_{\alpha}l_{n_1-i+2,\alpha}^2$, $W_{1l} = l_{ii}^2$ and $W_{2l} = l_{n_1-i+2,n_1-i+2}^2$. Then (25) has the following form:

∏_{i=2}^r 2² W_{1u}W_{2u} / (W_{0u}+W_{1u}+W_{2u})² × [ ∏_{i=2}^r 2² W_{1l}W_{2l} / (W_{0l}+W_{1l}+W_{2l})² ]^{n_2−1}.

Thus Lemma 3.3 yields that (25) is distributed as

∏_{i=2}^r β((n−i)/2, i/2) β((n−(n_1−i+2))/2, (n_1−i+3)/2) × [ ∏_{i=2}^r β((n(n_2−1)−i+1)/2, (i−1)/2) β((n(n_2−1)−(n_1−i+2)+1)/2, (n_1−i+2)/2) ]^{n_2−1},

which gives the result (23). When $n_1$ is even, the distribution of $\Lambda_1^{2/n}$ can be obtained using exactly the same ideas and manipulations.
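Null distributions of the form (23) are products of independent Beta variables, so critical values are most conveniently obtained by simulation. A minimal sketch, using a Theorem 3.4-style Beta parametrization that should be checked against the theorem (parameters and sample sizes are illustrative, and $n_1$ is taken odd):

```python
import numpy as np

rng = np.random.default_rng(2024)

def lambda1_null_samples(n, n1, n2, size=200_000):
    """Draw from a product-of-Betas null law for Lambda_1^{2/n}:
    prod_{i=1}^{n1-1} B_{1i} * B_{2i}^{n2-1}, with Beta parameters of the
    Theorem 3.4 type (treated here as an assumption; n1 odd)."""
    h = n1 // 2
    out = np.ones(size)
    for i in range(1, n1):
        a1 = (n - i - 1) / 2.0
        b1 = (i + 1) / 2.0 if i <= h else (i + 2) / 2.0
        a2 = (n * (n2 - 1) - i) / 2.0
        b2 = i / 2.0 if i <= h else (i + 1) / 2.0
        out *= rng.beta(a1, b1, size) * rng.beta(a2, b2, size) ** (n2 - 1)
    return out

samples = lambda1_null_samples(n=20, n1=5, n2=3)
crit = np.quantile(samples, 0.05)   # reject H_1^0 when Lambda_1^{2/n} < crit
print(0.0 < crit < 1.0)             # True
```

The same recipe applies to the other theorems of this section, since all the null laws below are products of independent Betas with different parameters.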

In the next theorem we derive the expression of $\Lambda_2$. Let us first give a lemma which will be used in the proof (for the proof of the lemma we refer to Nahtman, 2006).

Lemma 3.5. The matrix $\Sigma_{III}: p\times p$, $p = n_2n_1$,

Σ_{III} = I_{n_2}⊗[σ_{11}I_{n_1} + σ_{12}(J_{n_1}−I_{n_1})] + (J_{n_2}−I_{n_2})⊗[σ_{21}I_{n_1} + σ_{22}(J_{n_1}−I_{n_1})],

where the $\sigma_{hi}$, $h, i = 1, 2$, are constants, has the following four distinct eigenvalues:

λ_1 = σ_{11} + (n_1−1)σ_{12} + (n_2−1)[σ_{21} + (n_1−1)σ_{22}],
λ_2 = σ_{11} − σ_{12} + (n_2−1)(σ_{21} − σ_{22}),
λ_3 = σ_{11} + (n_1−1)σ_{12} − [σ_{21} + (n_1−1)σ_{22}],
λ_4 = σ_{11} − σ_{12} − (σ_{21} − σ_{22}),

of multiplicities $m_1 = 1$, $m_2 = n_1−1$, $m_3 = n_2−1$ and $m_4 = (n_2−1)(n_1−1)$, respectively.

Theorem 3.6. The LRT statistic for testing $H^0_2$ versus $H^a_2$, defined in Section 2, is given by

Λ_2^{2/n} = (n_1−1)^{(n_1−1)n_2} |S_1| |S_2|^{n_2−1} / [ t_{31} ( t_{32} + ∑_{i=2}^{n_1} v_i'S_3v_i )^{n_1−1} t_{33}^{n_2−1} t_{34}^{(n_1−1)(n_2−1)} ],   (26)

where the $S_h$ are given in (7), $h = 1, 2, 3$, the multiplicities are $m_j = 1, n_1−1, n_2−1, (n_1−1)(n_2−1)$ for $j = 1, \ldots, 4$, respectively, and $t_{31} = \mathrm{tr}(S_1P_{n_1})$, $t_{32} = \mathrm{tr}(S_1Q_{n_1})$, $t_{33} = \mathrm{tr}(S_2P_{n_1})$, $t_{34} = \mathrm{tr}(S_2Q_{n_1})$.

Proof. Since the maximum of the likelihood function under the alternative hypothesis $H^a_2$, i.e., $\max L(\mu, \Sigma_I)$, is given in (13), let us consider the derivation of $\max L(\mu, \Sigma_{III})$. The likelihood function under the null hypothesis $H^0_2$ is given by

L(μ, Σ_{III}) = (2π)^{−pn/2} |Σ_{III}|^{−n/2} e^{−(1/2)tr{Σ_{III}^{−1}(Y−μ1_p1'_n)(Y−μ1_p1'_n)'}}.   (27)
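Lemma 3.5 can be checked numerically before being used; a minimal sketch with illustrative $\sigma_{hi}$ values (not from the paper):

```python
import numpy as np

n1, n2 = 4, 3
s11, s12, s21, s22 = 2.0, 0.5, 0.7, 0.2             # illustrative sigma_{hi}
I1, J1 = np.eye(n1), np.ones((n1, n1))
I2, J2 = np.eye(n2), np.ones((n2, n2))
A = s11 * I1 + s12 * (J1 - I1)                      # CS block Sigma^{(1)}
B = s21 * I1 + s22 * (J1 - I1)                      # CS block Sigma^{(2)}
Sigma_III = np.kron(I2, A) + np.kron(J2 - I2, B)

# The four eigenvalues and multiplicities of Lemma 3.5:
lam = [s11 + (n1 - 1) * s12 + (n2 - 1) * (s21 + (n1 - 1) * s22),
       s11 - s12 + (n2 - 1) * (s21 - s22),
       s11 + (n1 - 1) * s12 - (s21 + (n1 - 1) * s22),
       s11 - s12 - (s21 - s22)]
mult = [1, n1 - 1, n2 - 1, (n2 - 1) * (n1 - 1)]     # sums to p = n1 * n2
expected = np.sort(np.repeat(lam, mult))
print(np.allclose(np.sort(np.linalg.eigvalsh(Sigma_III)), expected))  # True
```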

Here it is convenient to use the canonical form with the help of Lemma 3.5. First,

(I_n⊗(K⊗V)') vecY ~ N_{pn}[ μ1_n⊗(K'1_{n_2})⊗(V'1_{n_1}), I_n⊗(K⊗V)'Σ_{III}(K⊗V) ],

which becomes $N_{pn}[\mu 1_n\otimes(\sqrt{n_2},0,\ldots,0)'\otimes(\sqrt{n_1},0,\ldots,0)',\ I_n\otimes D(\lambda)]$, where $D(\lambda)$ is the $p\times p$ diagonal matrix given in Lemma 3.5. It follows that the model can be split into 4 independent models, each with the following response vector:

ỹ_1 = w_1,
ỹ_2 = vec(w_2, …, w_{n_1}),
ỹ_3 = vec(w_{n_1+1}, w_{2n_1+1}, …, w_{(n_2−1)n_1+1}),
ỹ_4 = vec(w_{n_1+2}, …, w_{2n_1}, w_{2n_1+2}, …, w_{3n_1}, …, w_{(n_2−1)n_1+2}, …, w_{n_2n_1}),   (28)

where $w_i = (I_n\otimes u_i')\mathrm{vec}Y$ and $u_i$ is the $i$th column of $K\otimes V$, $i = 1, \ldots, p$. It can be shown that $\tilde y_1 \sim N_n(\sqrt{p}\mu 1_n, \lambda_1I_n)$ and $\tilde y_j \sim N_{nm_j}(0, \lambda_jI_{nm_j})$ for $j = 2, 3, 4$, where the $\lambda_j$ and $m_j$, $j = 1, \ldots, 4$, are given in Lemma 3.5. Thus, the MLEs of $\mu$ and $\lambda$ are given by

√p μ̂ = (1/n)ỹ_1'1_n,   λ̂_1 = (1/n)ỹ_1'Q_nỹ_1,   λ̂_j = (1/(nm_j))ỹ_j'ỹ_j,  j = 2, 3, 4.   (29)

Recall the reformulation of the MLEs of $\eta$ in the proof of Theorem 3.1; the expressions of the $\lambda$'s in terms of the $S_h$, $h = 1, 2, 3$, are analogous. Observe that $\hat\lambda_1 = \hat\eta_1$ and $\hat\lambda_3 = \hat\eta_{r+1}$, where $\hat\eta_1$ and $\hat\eta_{r+1}$ are given in (16). As a result, $\hat\lambda_1$ and $\hat\lambda_3$ can be re-expressed as given in (17) and (18), respectively. Moreover, since

ỹ_2'ỹ_2 = ∑_{i=2}^{n_1} w_i'w_i = ∑_{i=2}^{n_1} ((1/√n_2)1_{n_2}⊗v_i)'Y Y'((1/√n_2)1_{n_2}⊗v_i) = ∑_{i=2}^{n_1} v_i'X_1X_1'v_i = tr((S_1+S_3)Q_{n_1}),
ỹ_4'ỹ_4 = ∑_{j=1}^{n_2−1} ∑_{i=2}^{n_1} w'_{jn_1+i}w_{jn_1+i} = ∑_{i=2}^{n_1} v_i'( ∑_{j=1}^{n_2−1} X_{2j}X_{2j}' )v_i = tr(S_2Q_{n_1}),

the MLEs $\hat\lambda_2$ and $\hat\lambda_4$ can be rewritten as follows:

λ̂_2 = (1/(n(n_1−1)))ỹ_2'ỹ_2 = (1/(n(n_1−1))) tr((S_1+S_3)Q_{n_1}),   (30)
λ̂_4 = (1/(n(n_2−1)(n_1−1)))ỹ_4'ỹ_4 = (1/(n(n_2−1)(n_1−1))) tr(S_2Q_{n_1}).   (31)

Hence, the maximum of the likelihood function in (27) is given by

max L(μ, Σ_{III}) = (2π)^{−pn/2} ∏_{j=1}^4 λ̂_j^{−nm_j/2} e^{−pn/2},   (32)

where the $\hat\lambda_j$, $j = 1, \ldots, 4$, are given in (17), (30), (18) and (31). Using the expressions given in (13) and (32), $\Lambda_2$ has the form given in (26). This completes the proof.

Theorem 3.7. Under the null hypothesis $H^0_2$, the distribution of $\Lambda_2$, given in (26), equals

Λ_2^{2/n} =^d ∏_{i=1}^{n_1−1} B_{1i} B_{2i}^{n_2−1},   (33)

where the $B_{1i}$ and $B_{2i}$ are independently distributed, $i = 1, \ldots, n_1−1$,

B_{1i} ~ β((n−i−1)/2, (i+1)/2 + (i−1)/(n_1−1)),
B_{2i} ~ β((n(n_2−1)−i)/2, i/2 + (i−1)/(n_1−1)).

Proof. The proof is similar to that of Theorem 3.4. Based on Theorem 3.6, $\Lambda_2^{2/n}$ can be written as follows:

Λ_2^{2/n} = (n_1−1)^{(n_1−1)n_2} |V'S_1V| / { (V'S_1V)_{11} [ ∑_{i=2}^{n_1} ((V'S_1V)_{ii} + (V'S_3V)_{ii}) ]^{n_1−1} } × ( |V'S_2V| / { (V'S_2V)_{11} [ ∑_{i=2}^{n_1} (V'S_2V)_{ii} ]^{n_1−1} } )^{n_2−1},   (34)

which contains two independent components,

(n_1−1)^{n_1−1} |V'S_1V| / { (V'S_1V)_{11} [ ∑_{i=2}^{n_1} ((V'S_1V)_{ii} + (V'S_3V)_{ii}) ]^{n_1−1} }

and

( (n_1−1)^{n_1−1} |V'S_2V| / { (V'S_2V)_{11} [ ∑_{i=2}^{n_1} (V'S_2V)_{ii} ]^{n_1−1} } )^{n_2−1},

because of the mutual independence of the $S_h$,

$h = 1, 2, 3$. We apply the same Bartlett decompositions as in the proof of Theorem 3.4, i.e., $V'S_1V = UU'$ and $V'S_2V = LL'$, where $U$ and $L$ are two lower triangular matrices with positive diagonal elements. Under the null hypothesis we have $u_{ii}^2 \sim \lambda_2\chi^2_{n-i}$, $\sum_{\alpha=1}^{i-1}u_{i\alpha}^2 \sim \lambda_2\chi^2_{i-1}$, $l_{ii}^2 \sim \lambda_4\chi^2_{n(n_2-1)-i+1}$ and $\sum_{\alpha=1}^{i-1}l_{i\alpha}^2 \sim \lambda_4\chi^2_{i-1}$, $i = 2, \ldots, n_1$, which are mutually independent. Then $|V'S_1V| = \prod_{i=1}^{n_1}u_{ii}^2$ and $|V'S_2V| = \prod_{i=1}^{n_1}l_{ii}^2$. Moreover, under the null hypothesis, based on Lemma 3.2, we have

∏_{i=2}^{n_1} u_{ii}² =^d λ_2^{n_1−1} ∏_{i=2}^{n_1} Z_{1i},   ∏_{i=2}^{n_1} l_{ii}² =^d λ_4^{n_1−1} ∏_{i=2}^{n_1} Z_{2i},

where $Z_{1i} \sim \chi^2_{n-i}$ and $Z_{2i} \sim \chi^2_{n(n_2-1)-i+1}$ are independently distributed, $i = 2, \ldots, n_1$. Equation (34) can then be written as

(n_1−1)^{n_1−1} ∏_{i=2}^{n_1} χ²_{n−i} / ( χ²_{(n_1−1)(n_1+2)/2} + ∑_{i=2}^{n_1} χ²_{n−i} )^{n_1−1} × [ (n_1−1)^{n_1−1} ∏_{i=2}^{n_1} χ²_{n(n_2−1)−i+1} / ( χ²_{n_1(n_1−1)/2} + ∑_{i=2}^{n_1} χ²_{n(n_2−1)−i+1} )^{n_1−1} ]^{n_2−1}.   (35)

Moreover, $\sum_{i=2}^{n_1}(\sum_{\alpha=1}^{i-1}u_{i\alpha}^2 + v_i'S_3v_i) \sim \lambda_2\chi^2_{(n_1-1)(n_1+2)/2}$ and $\sum_{i=2}^{n_1}\sum_{\alpha=1}^{i-1}l_{i\alpha}^2 \sim \lambda_4\chi^2_{n_1(n_1-1)/2}$ are independent. Defining $W_{0u} = \sum_{i=2}^{n_1}(\sum_{\alpha=1}^{i-1}u_{i\alpha}^2 + v_i'S_3v_i)$ and $W_{iu} = u_{i+1,i+1}^2$, $i = 1, \ldots, n_1-1$, and similarly $W_{0l} = \sum_{i=2}^{n_1}\sum_{\alpha=1}^{i-1}l_{i\alpha}^2$ and $W_{il} = l_{i+1,i+1}^2$, $i = 1, \ldots, n_1-1$, the distribution of $\Lambda_2$ is obtained from Lemma 3.3.

3.2. Testing hypotheses about means or patterned covariance matrices

In this section we derive the LRT statistics for testing patterned covariance matrices given a specific mean structure:

Λ_3 = max L(μ1_{n_1}, Σ_{III}) / max L(μ1_{n_1}, Σ_{II}),   Λ_4 = max L(μ, Σ_{II}) / max L(μ, Σ_I),   Λ_5 = max L(μ, Σ_{III}) / max L(μ, Σ_I).

Furthermore, we derive the LRT statistics for testing means given a specific covariance structure:

Λ_6 = max L(μ1_{n_1}, Σ_{II}) / max L(μ, Σ_{II}),   Λ_7 = max L(μ1_{n_1}, Σ_{III}) / max L(μ, Σ_{III}).

Theorem 3.8. The LRT statistic for testing $H^0_3$ versus $H^a_3$, defined in Section 2, is given by

Λ_3^{2/n} = ((n_1−1)/2)^{(n_1−1)n_2} [ ∏_{i=2}^r t_{1i}² / (∑_{i=2}^r t_{1i})^{n_1−1} ] [ ∏_{i=2}^r t²_{1,r+i} / (∑_{i=2}^r t_{1,r+i})^{n_1−1} ]^{n_2−1}, if n_1 is odd;
Λ_3^{2/n} = (n_1−1)^{(n_1−1)n_2} 2^{−(n_1−2)n_2} [ t_{1r} ∏_{i=2}^{r−1} t_{1i}² / (∑_{i=2}^r t_{1i})^{n_1−1} ] [ t_{1,2r} ∏_{i=2}^{r−1} t²_{1,r+i} / (∑_{i=2}^r t_{1,r+i})^{n_1−1} ]^{n_2−1}, if n_1 is even,   (36)

where, for $i = 2, \ldots, r−1$,

t_{1i} = tr((X_1X_1')(v_iv_i' + v_{n_1−i+2}v'_{n_1−i+2})),   t_{1,r+i} = tr(S_2(v_iv_i' + v_{n_1−i+2}v'_{n_1−i+2})),

with $X_1$ and $S_2$ given in (7), and for $i = r$,

t_{1r} = tr((X_1X_1')(v_rv_r' + v_{n_1−r+2}v'_{n_1−r+2})) if n_1 is odd;  tr((X_1X_1')v_rv_r') if n_1 is even,
t_{1,2r} = tr(S_2(v_rv_r' + v_{n_1−r+2}v'_{n_1−r+2})) if n_1 is odd;  tr(S_2v_rv_r') if n_1 is even.

Proof. The expressions of $\max L(\mu 1_{n_1}, \Sigma_{III})$ and $\max L(\mu 1_{n_1}, \Sigma_{II})$ are given in (32) and (22), respectively. Observe that $t_{11} = t_{31}$ and $t_{1,r+1} = t_{33}$, with corresponding multiplicities 1 and $n_2−1$; these factors therefore cancel when calculating $\max L(\mu 1_{n_1}, \Sigma_{III}) / \max L(\mu 1_{n_1}, \Sigma_{II})$. Hence, $\Lambda_3$ has the form given in (36).

The result stated in the next lemma will be used when we derive the distribution of $\Lambda_3$ in Theorem 3.10.

Lemma 3.9. (Olkin and Press, 1969) Let $U_1, \ldots, U_{m_1}, V_1, \ldots, V_{m_2}$ be independently distributed, $U_i \sim \chi^2_{n'}$, $V_j \sim \chi^2_{2n'}$. If

L = M^M ∏_{i=1}^{m_1} U_i ∏_{j=1}^{m_2} (V_j/2)² / ( ∑_{i=1}^{m_1} U_i + ∑_{j=1}^{m_2} V_j )^M,   M = m_1 + 2m_2,

then $L =^d \prod_{j=1}^M X_j$, where $X_1, \ldots, X_M$ are independently distributed,

X_j ~ β(n'/2, j/M), j = 1, …, M−m_2;   X_j ~ β((n'+1)/2, j/M − 1/2), j = M−m_2+1, …, M.   (37)

Theorem 3.10. Under the null hypothesis $H^0_3$, the distribution of $\Lambda_3$, given in (36), equals

Λ_3^{2/n} =^d ∏_{i=1}^{n_1−1} B_{1i} B_{2i}^{n_2−1},   (38)

where the $B_{1i}$ and $B_{2i}$ are independently distributed, $i = 1, \ldots, n_1−1$,

B_{1i} ~ β(n/2, i/(n_1−1)) for i = 1, …, [n_1/2];  β((n+1)/2, i/(n_1−1) − 1/2) for i = [n_1/2]+1, …, n_1−1,
B_{2i} ~ β(n(n_2−1)/2, i/(n_1−1)) for i = 1, …, [n_1/2];  β((n(n_2−1)+1)/2, i/(n_1−1) − 1/2) for i = [n_1/2]+1, …, n_1−1.

Proof. When $n_1$ is odd, (36) can be represented as

Λ_3^{2/n} =^d (n_1−1)^{n_1−1} ∏_{i=2}^r (t_{1i}/2)² / (∑_{i=2}^r t_{1i})^{n_1−1} × [ (n_1−1)^{n_1−1} ∏_{i=2}^r (t_{1,r+i}/2)² / (∑_{i=2}^r t_{1,r+i})^{n_1−1} ]^{n_2−1},   (39)

where $t_{1i} \sim \lambda_2\chi^2_{2n}$ and $t_{1,r+i} \sim \lambda_4\chi^2_{2n(n_2-1)}$, $i = 2, \ldots, r$, and they are independent. Define $M = n_1−1$ and $m_2 = r−1$. We then apply Lemma 3.9 (with $m_1 = 0$) to the two components in (39) separately, putting $V_i = t_{1i}$ for the first component and $V_i = t_{1,r+i}$ for the second, $i = 2, \ldots, r$. Hence we obtain the distributional result given in (38). When $n_1$ is even, (36) can be represented as

Λ_3^{2/n} =^d (n_1−1)^{n_1−1} 2^{−(n_1−2)} t_{1r} ∏_{i=2}^{r−1} t_{1i}² / ( ∑_{i=2}^{r−1} t_{1i} + t_{1r} )^{n_1−1} × [ (n_1−1)^{n_1−1} 2^{−(n_1−2)} t_{1,2r} ∏_{i=2}^{r−1} t²_{1,r+i} / ( ∑_{i=2}^{r−1} t_{1,r+i} + t_{1,2r} )^{n_1−1} ]^{n_2−1},   (40)

where $t_{1i} \sim \lambda_2\chi^2_{2n}$, $t_{1,r+i} \sim \lambda_4\chi^2_{2n(n_2-1)}$, $i = 2, \ldots, r−1$, $t_{1r} \sim \lambda_2\chi^2_n$ and $t_{1,2r} \sim \lambda_4\chi^2_{n(n_2-1)}$, and they are independent. Again we apply Lemma 3.9, now with $M = n_1−1$, $m_1 = 1$ and $m_2 = r−2$: for the first component put $U_1 = t_{1r}$ and $V_i = t_{1i}$, and for the second put $U_1 = t_{1,2r}$ and $V_i = t_{1,r+i}$, $i = 2, \ldots, r−1$. This gives the distributional result in (38).
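Ratios of the Lemma 3.9 type can also be simulated directly, which gives a practical way to tabulate critical values without the Beta-product form; the exact normalization used below follows the reconstruction given here and should be treated as an assumption (degrees of freedom are illustrative).

```python
import numpy as np

rng = np.random.default_rng(7)

def lemma39_ratio(n, m1, m2, size=200_000):
    """Simulate L = M^M * prod U_i * prod (V_j/2)^2 / (sum U + sum V)^M
    with U_i ~ chi2_n, V_j ~ chi2_{2n}, M = m1 + 2*m2 (a Lemma 3.9-style
    ratio; the normalization is an assumption of this sketch)."""
    M = m1 + 2 * m2
    U = rng.chisquare(n, (m1, size))
    V = rng.chisquare(2 * n, (m2, size))
    num = M ** M * U.prod(axis=0) * ((V / 2.0) ** 2).prod(axis=0)
    den = (U.sum(axis=0) + V.sum(axis=0)) ** M
    return num / den

L = lemma39_ratio(n=10, m1=1, m2=2)
print(np.all((L > 0) & (L <= 1)))   # True: by the AM-GM inequality, L <= 1
print(float(np.quantile(L, 0.05)))  # simulated 5% critical value
```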

Theorem 3.11. The LRT statistic for testing $H^0_4$ versus $H^a_4$, defined in Section 2, is given by

Λ_4^{2/n} = 2^{(n_1−1−1_{\{n_1 even\}})n_2} |S_1| |S_2|^{n_2−1} / ∏_{l=1}^{2r} t_{2l}^{m_l},   (41)

where $1_{\{n_1\,even\}}$ is the indicator that $n_1$ is even, the $S_k$ are given in (7), $t_{2,(k−1)r+i} = \mathrm{tr}(S_k(v_iv_i' + v_{n_1−i+2}v'_{n_1−i+2}))$, $i = 1, \ldots, r$, $k = 1, 2$ (with the terms in $v_{n_1-i+2}$ omitted for $i = 1$ and, when $n_1$ is even, for $i = r$), and the multiplicities $m_l$ are as in Theorem 3.1.

Proof. Since the maximum of the likelihood function under the alternative hypothesis $H^a_4$, i.e., $\max L(\mu, \Sigma_I)$, is given in (13), let us consider the derivation of $\max L(\mu, \Sigma_{II})$. The likelihood function under the null hypothesis $H^0_4$ is given by

L(μ, Σ_{II}) = (2π)^{−pn/2} |Σ_{II}|^{−n/2} e^{−(1/2)tr{Σ_{II}^{−1}[Y−(1_{n_2}⊗μ)1'_n][Y−(1_{n_2}⊗μ)1'_n]'}}.   (42)

We first proceed with the following canonical reduction:

(I_n⊗(K⊗V)') vecY ~ N_{pn}( 1_n⊗(K'1_{n_2})⊗(V'μ), I_n⊗D(η) ),

where $K'1_{n_2} = (\sqrt{n_2}, 0, \ldots, 0)'$ and $V'\mu = (v_1'\mu, \ldots, v_{n_1}'\mu)'$. Put $w_i = (I_n\otimes u_i')\mathrm{vec}Y$, where $u_i$ is the $i$th column of $K\otimes V$, $i = 1, \ldots, p$. It follows that the model can be split into $n_1 + r$ independent models with responses $w_l$, $l = 1, \ldots, n_1$, and $\tilde y_l$, $l = r+1, \ldots, 2r$, respectively, where $r = [n_1/2]+1$ and the latter $r$ models are given in (15). It can be shown that the $w_i$ and $\tilde y_i$ are mutually independent and

w_1 ~ N_n(√n_2 v_1'μ 1_n, η_1I_n),
w_i ~ N_n(√n_2 v_i'μ 1_n, η_iI_n),   w_{n_1−i+2} ~ N_n(√n_2 v'_{n_1−i+2}μ 1_n, η_iI_n), i = 2, …, r,   (43)
ỹ_{r+i} ~ N_{nm_{r+i}}(0, η_{r+i}I_{nm_{r+i}}), i = 1, …, r,

where the $\eta_l$ are the distinct eigenvalues of $\Sigma_{II}$ with multiplicities $m_l$, $l = 1, \ldots, 2r$. This implies that the likelihood function in (42) is identical to the product of the

likelihood functions of the $n_1 + r$ separate models, i.e.,

L(μ, Σ_{II}) = L_1(v_1'μ, η_1 | w_1) ∏_{i=2}^r L_i(v_i'μ, η_i | w_i) L'_{n_1−i+2}(v'_{n_1−i+2}μ, η_i | w_{n_1−i+2}) ∏_{i=1}^r L_{r+i}(η_{r+i} | ỹ_{r+i}).   (44)

Since there is a one-to-one transformation between the parameter sets $\{\mu, \Sigma_{II}\}$ and $\{V'\mu, \eta\}$, the MLEs of $\theta_i = v_i'\mu$ and $\eta$ are given by

√n_2 θ̂_i = (1/n)w_i'1_n, i = 1, …, n_1,
η̂_1 = (1/n)w_1'Q_nw_1,
η̂_i = (1/(2n)) tr[ Q_n(w_iw_i' + w_{n_1−i+2}w'_{n_1−i+2}) ], i = 2, …, r−1,   (45)
η̂_r = (1/(2n)) tr[ Q_n(w_rw_r' + w_{n_1−r+2}w'_{n_1−r+2}) ] if n_1 is odd;  (1/n)w_r'Q_nw_r if n_1 is even,

and the MLEs of $\eta_l$, $l = r+1, \ldots, 2r$, are given in (16). Recall that $w_i = Y'((1/\sqrt{n_2})1_{n_2}\otimes v_i)$ and $w_{jn_1+i} = Y'(k_j\otimes v_i)$, where $k_j$ is the $j$th column of the matrix $K_2$, $i = 1, \ldots, n_1$, $j = 1, \ldots, n_2−1$. The MLEs of the $\eta_i$ can be re-expressed as follows:

η̂_1 = (1/n) tr(S_1v_1v_1'),   (46)

and, for $i = 2, \ldots, r−1$,

η̂_i = (1/(2n)) tr(S_1(v_iv_i' + v_{n_1−i+2}v'_{n_1−i+2})),   (47)
η̂_r = (1/(2n)) tr(S_1(v_rv_r' + v_{n_1−r+2}v'_{n_1−r+2})) if n_1 is odd;  (1/n) tr(S_1v_rv_r') if n_1 is even,   (48)

and the MLEs of $\eta_l$, $l = r+1, \ldots, 2r$, have the expressions given in (18), (20) and (21). Therefore, the likelihood function in (42) is maximized by

max L(μ, Σ_{II}) = (2π)^{−pn/2} ∏_{i=1}^{2r} ( t_{2i}/(nm_i) )^{−nm_i/2} e^{−pn/2},   (49)

where $t_{2,(k−1)r+i} = \mathrm{tr}(S_k(v_iv_i' + v_{n_1−i+2}v'_{n_1−i+2}))$, $i = 1, \ldots, r$, $k = 1, 2$, with the conventions of Theorem 3.11. Using the expressions given in (13) and (49), $\Lambda_4$ has the form given in (41), which completes the proof.

Theorem 3.12. Under the null hypothesis $H^0_4$, the distribution of $\Lambda_4$, given in (41), equals

Λ_4^{2/n} =^d ∏_{i=1}^{n_1−1} B_{1i} B_{2i}^{n_2−1},   (50)

where the $B_{1i}$ and $B_{2i}$ are independently distributed, $i = 1, \ldots, n_1−1$,

B_{1i} ~ β((n−i−1)/2, i/2) for i = 1, …, [n_1/2];  β((n−i−1)/2, (i+1)/2) for i = [n_1/2]+1, …, n_1−1,
B_{2i} ~ β((n(n_2−1)−i)/2, i/2) for i = 1, …, [n_1/2];  β((n(n_2−1)−i)/2, (i+1)/2) for i = [n_1/2]+1, …, n_1−1.

Proof. The proof follows the same lines as the proof of Theorem 3.4; we therefore only briefly sketch the main steps. We begin by writing $\Lambda_4^{2/n}$ as

Λ_4^{2/n} = 2^{(n_1−1−1_{\{n_1 even\}})n_2} |V'S_1V| / { (V'S_1V)_{11} ∏_{i=2}^r [ (V'S_1V)_{ii} + (V'S_1V)_{n_1−i+2,n_1−i+2} ]^{m_i} } × ( |V'S_2V| / { (V'S_2V)_{11} ∏_{i=2}^r [ (V'S_2V)_{ii} + (V'S_2V)_{n_1−i+2,n_1−i+2} ]^{m_i} } )^{n_2−1},   (51)

where $(V'S_kV)_{ii}$ is the $i$th diagonal element of the matrix $V'S_kV$. After applying the same Bartlett decomposition as in the proof of Theorem 3.4, under the null hypothesis equation (51) has the same distribution (for $n_1$ odd) as

∏_{i=2}^r 2² χ²_{n−i} χ²_{n−(n_1−i+2)} / ( χ²_{n_1} + χ²_{n−i} + χ²_{n−(n_1−i+2)} )² × [ ∏_{i=2}^r 2² χ²_{n(n_2−1)−i+1} χ²_{n(n_2−1)−(n_1−i+2)+1} / ( χ²_{n_1} + χ²_{n(n_2−1)−i+1} + χ²_{n(n_2−1)−(n_1−i+2)+1} )² ]^{n_2−1}.

We then use Lemma 3.3 to obtain the distribution of $\Lambda_4^{2/n}$, and it turns out that $\Lambda_4$ is distributed as the expression given in (50).

Theorem 3.13. The LRT statistic for testing $H^0_5$ versus $H^a_5$, defined in Section 2, is given by

Λ_5^{2/n} = (n_1−1)^{(n_1−1)n_2} |S_1| |S_2|^{n_2−1} / ∏_{j=1}^4 t_{3j}^{m_j},   (52)

where the $S_h$, $h = 1, 2$, are given in (7), and the $t_{3j}$ and $m_j$, $j = 1, \ldots, 4$, are given in Theorem 3.6.

Proof. Since the maximum of the likelihood function under the alternative hypothesis $H^a_5$, i.e., $\max L(\mu, \Sigma_I)$, is given in (13), let us consider the derivation of $\max L(\mu, \Sigma_{III})$. The likelihood function under the null hypothesis $H^0_5$ is given by

L(μ, Σ_{III}) = (2π)^{−pn/2} |Σ_{III}|^{−n/2} e^{−(1/2)tr{Σ_{III}^{−1}[Y−(1_{n_2}⊗μ)1'_n][Y−(1_{n_2}⊗μ)1'_n]'}}.

Furthermore,

(I_n⊗(K⊗V)') vecY ~ N_{pn}( 1_n⊗(K'1_{n_2})⊗(V'μ), I_n⊗(K⊗V)'Σ_{III}(K⊗V) ),

which becomes $N_{pn}(1_n\otimes(\sqrt{n_2},0,\ldots,0)'\otimes(v_1'\mu,\ldots,v_{n_1}'\mu)',\ I_n\otimes D(\lambda))$, where $D(\lambda)$ is the $p\times p$ diagonal matrix given in Lemma 3.5. It follows that the model can be split into $n_1 + 2$ independent models with responses $w_i$, $i = 1, \ldots, n_1$, and $\tilde y_j$, $j = 3, 4$, where the $\tilde y_j$ are given in (28), $w_i = (I_n\otimes u_i')\mathrm{vec}Y$ and $u_i$ is the $i$th column of $K\otimes V$, $i = 1, \ldots, p$. It can be shown that $w_1 \sim N_n(\sqrt{n_2}v_1'\mu 1_n, \lambda_1I_n)$, $w_i \sim N_n(\sqrt{n_2}v_i'\mu 1_n, \lambda_2I_n)$, $i = 2, \ldots, n_1$, and $\tilde y_j \sim N_{nm_j}(0, \lambda_jI_{nm_j})$ for $j = 3, 4$, where the $\lambda_j$ and $m_j$, $j = 1, \ldots, 4$, are given in Lemma 3.5. Thus, the MLEs of $\theta_i = v_i'\mu$ and $\lambda$ are given by

√n_2 θ̂_i = (1/n)w_i'1_n, i = 1, …, n_1,   λ̂_1 = (1/n)w_1'Q_nw_1,   λ̂_2 = (1/(n(n_1−1))) ∑_{i=2}^{n_1} w_i'Q_nw_i,

and the MLEs of $\lambda_3$ and $\lambda_4$ are given in (9). Moreover, the estimators $\hat\lambda_j$, $j = 2, 3, 4$, can be expressed as given in (7), (8) and (3), respectively, and the MLE of $\lambda_2$ also has the following expression:

$$\hat\lambda_2 = \frac{1}{n(n_1-1)}\,\mathrm{tr}\Big(S\sum_{i=2}^{n_1}v_iv_i'\Big) = \frac{1}{n(n_1-1)}\,\mathrm{tr}(SQ_{n_1}).$$

Therefore,

$$\max L(\mu, \Sigma_{III}) = (2\pi)^{-pn/2}\prod_{j=1}^{4}\Big(\frac{t_{3j}}{nm_j}\Big)^{-nm_j/2}e^{-pn/2}, \qquad (53)$$

and $\Lambda_5$ has the form given in (52), which completes the proof.

Theorem 3.14. Under the null hypothesis $H_{05}$, the distribution of $\Lambda_5$, given in (52), equals

$$\Lambda_5^{2/n} \overset{d}{=} \prod_{i=1}^{n_1} B_i\,\tilde B_i^{\,n_2-1}, \qquad (54)$$

where $B_i$ and $\tilde B_i$ are independently distributed, $i = 1, \ldots, n_1$,

$$B_i \sim \beta\Big(\frac{n-i}{2}, \frac{n+i}{2}\Big), \qquad \tilde B_i \sim \beta\Big(\frac{n(n_2-1)-i}{2}, \frac{n+i}{2}\Big).$$

Proof. The proof follows the same lines as the proof of Theorem 3.7; we therefore only briefly sketch the main steps. We begin by writing $\Lambda_5^{2/n}$ as

$$\Lambda_5^{2/n} = \frac{(n_1-1)^{n_1-1}\,|V'S_1V|}{(V'S_1V)_{11}\big[\sum_{i=2}^{n_1}(V'S_1V)_{ii}\big]^{n_1-1}}\left(\frac{(n_1-1)^{n_1-1}\,|V'S_2V|}{(V'S_2V)_{11}\big[\sum_{i=2}^{n_1}(V'S_2V)_{ii}\big]^{n_1-1}}\right)^{n_2-1}, \qquad (55)$$

where $(V'S_kV)_{ii}$ is the $i$th diagonal element of the matrix $V'S_kV$. After applying the same Bartlett decomposition to $V'S_kV$ as in the proof of Theorem 3.7,

under the null hypothesis $H_{05}$, equation (55) has the same distribution as

$$\frac{(n_1-1)^{n_1-1}\prod_{i=2}^{n_1}\chi^2_{n-i}}{\Big(\chi^2_{(n-1)(n_1-1)}+\sum_{i=2}^{n_1}\chi^2_{n-i}\Big)^{n_1-1}}\left[\frac{(n_1-1)^{n_1-1}\prod_{i=2}^{n_1}\chi^2_{n(n_2-1)-i+1}}{\Big(\chi^2_{(n-1)(n_1-1)}+\sum_{i=2}^{n_1}\chi^2_{n(n_2-1)-i+1}\Big)^{n_1-1}}\right]^{n_2-1}. \qquad (56)$$

We then use Lemma 3.3 to obtain the distribution of $\Lambda_5^{2/n}$, and it comes out that $\Lambda_5$ is distributed as the expression given in (54).

Theorem 3.15. The LRT statistic for testing $H_{06}$ versus $H_{a6}$, defined earlier, is given by

$$\Lambda_6^{2/n} = \begin{cases}\displaystyle\prod_{i=1}^{r}G_i, & \text{if } n_1 \text{ is odd},\\[6pt] \displaystyle\Big(\prod_{i=1}^{r}G_i\Big)\,\frac{\mathrm{tr}(S_1v_rv_r')}{\mathrm{tr}\big((XX')(v_rv_r')\big)}, & \text{if } n_1 \text{ is even},\end{cases} \qquad (57)$$

where

$$G_i = \frac{\mathrm{tr}\big(S_1(v_iv_i'+v_{n_1-i+1}v_{n_1-i+1}')\big)}{\mathrm{tr}\big((XX')(v_iv_i'+v_{n_1-i+1}v_{n_1-i+1}')\big)},$$

and $S_1$ and $X$ are given in (7).

Proof. Since the maximum of the likelihood function under the null hypothesis $H_{06}$, i.e. $\max L(\mu, \Sigma_{II})$, has been derived earlier, and the maximum of the likelihood function under the alternative hypothesis $H_{a6}$ is given in (49), the likelihood ratio $\Lambda_6$ has the form presented in (57).

Theorem 3.16. Under the null hypothesis $H_{06}$, the distribution of $\Lambda_6$, given in (57), equals

$$\Lambda_6^{2/n} \overset{d}{=} \begin{cases}\displaystyle\prod_{i=1}^{r}\beta(n-1, 1), & \text{if } n_1 \text{ is odd},\\[6pt] \displaystyle\Big(\prod_{i=1}^{r}\beta(n-1, 1)\Big)\,\beta\Big(\frac{n-1}{2}, \frac{1}{2}\Big), & \text{if } n_1 \text{ is even}.\end{cases} \qquad (58)$$

Proof. We have already shown in the proof of Theorem 3.4 that $v_i'S_1v_i \sim \eta_i\chi^2_{n-1}$ and $v_{n_1-i+1}'S_1v_{n_1-i+1} \sim \eta_i\chi^2_{n-1}$. Therefore, $\mathrm{tr}\big(S_1(v_iv_i'+v_{n_1-i+1}v_{n_1-i+1}')\big) \sim \eta_i\chi^2_{2(n-1)}$. It has also been shown in the proof of Theorem 3.4 that $v_i'S_3v_i \sim$

$\eta_i\chi^2_2$ and $v_{n_1-i+1}'S_3v_{n_1-i+1} \sim \eta_i\chi^2_2$. Moreover, $S_1$ and $S_3$ are independent. Hence, when $n_1$ is odd,

$$G_i \overset{d}{=} \frac{\chi^2_{2(n-1)}}{\chi^2_{2(n-1)}+\chi^2_2} \overset{d}{=} \beta(n-1, 1), \qquad i = 1, \ldots, r,$$

and using the property that if $Z \sim \beta(a, 1)$ then $Z^2 \sim \beta(a/2, 1)$, we obtain $\Lambda_6 \overset{d}{=} \big[\prod_{i=1}^{r}\beta(n-1, 1)\big]^{n/2}$. The situation when $n_1$ is even is analogous to the odd case. Hence, the distribution of $\Lambda_6$ has the expression in (58).

Theorem 3.17. The LRT statistic for testing $H_{07}$ versus $H_{a7}$, defined earlier, is given by

$$\Lambda_7^{2/n} = \left(\frac{\mathrm{tr}(S_1Q_{n_1})}{\mathrm{tr}\big((XX')Q_{n_1}\big)}\right)^{n_1-1}, \qquad (59)$$

where $S_1$ and $X$ are given in (7).

Proof. Since the maximum of the likelihood function under the null hypothesis $H_{07}$ has been derived in (3), and the maximum of the likelihood function under the alternative hypothesis $H_{a7}$, i.e. $\max L(\mu, \Sigma_{III})$, is given in (53), the likelihood ratio $\Lambda_7$ has the form in (59).

Theorem 3.18. Under the null hypothesis $H_{07}$, the distribution of $\Lambda_7$, given in (59), equals

$$\Lambda_7^{2/(n(n_1-1))} \overset{d}{=} B, \qquad (60)$$

where $B \sim \beta\big(\frac{(n-1)(n_1-1)}{2}, \frac{n_1-1}{2}\big)$.

Proof. Recalling that $XX' = S_1 + S_3$ in (7), we have $\mathrm{tr}((XX')Q_{n_1}) = \mathrm{tr}(S_1Q_{n_1}) + \mathrm{tr}(S_3Q_{n_1})$. Since $\mathrm{tr}(S_1Q_{n_1}) \sim \chi^2_{(n-1)(n_1-1)}$ and $\mathrm{tr}(S_3Q_{n_1}) \sim \chi^2_{n_1-1}$, and since $S_1$ and $S_3$ are known to be independent, we have

$$\Lambda_7^{2/n} \overset{d}{=} \left[\frac{\chi^2_{(n-1)(n_1-1)}}{\chi^2_{(n-1)(n_1-1)}+\chi^2_{n_1-1}}\right]^{n_1-1} \overset{d}{=} \left[\beta\Big(\frac{(n-1)(n_1-1)}{2}, \frac{n_1-1}{2}\Big)\right]^{n_1-1},$$

which yields the distribution of $\Lambda_7$ in (60).
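The beta distributions appearing in the theorems above all stem from the standard fact used in the proofs: if $U \sim \chi^2_a$ and $V \sim \chi^2_b$ are independent, then $U/(U+V) \sim \beta(a/2, b/2)$, whose mean is $a/(a+b)$. The following Monte Carlo sketch (plain Python, standard library only; the function names are ours and not from the paper) checks this numerically, using the representation of a $\chi^2_k$ variable as a Gamma$(k/2, \text{scale}=2)$ variable:

```python
import random

def chi2_draw(df, rng):
    # A chi-square draw via its Gamma(df/2, scale=2) representation.
    return rng.gammavariate(df / 2.0, 2.0)

def beta_ratio_mean(a, b, n_draws=50000, seed=1):
    # Empirical mean of U/(U+V) with U ~ chi2_a and V ~ chi2_b independent;
    # theory says U/(U+V) ~ Beta(a/2, b/2), so the mean should be a/(a+b).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        u = chi2_draw(a, rng)
        v = chi2_draw(b, rng)
        total += u / (u + v)
    return total / n_draws

# Illustration with the degrees of freedom of the last proof: taking, say,
# n = 10 and n_1 = 4 gives a = (n-1)(n_1-1) = 27 and b = n_1-1 = 3,
# so the exact mean is 27/(27+3) = 0.9.
est = beta_ratio_mean(27, 3)
```

The sample sizes here are illustrative only; any $a, b > 0$ reproduce the same agreement between the empirical mean and $a/(a+b)$.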

4. Testing parameters under a block circular Toeplitz structure

In this section we consider testing hypotheses about the parameters in model (1). Let $Y = (y_1, y_2, \ldots, y_n)$ be a random sample from model (1), i.e.,

$$Y \sim N_{p,n}(\mu 1_p1_n', V(\theta), I_n). \qquad (61)$$

After some manipulation, the covariance matrix $V(\theta)$ in (3) can be rewritten as follows:

$$V(\theta) = I_{n_2}\otimes V(1) + (J_{n_2}-I_{n_2})\otimes V(2), \qquad (62)$$

where $V(h)$, $h = 1, 2$, is a CT matrix and, for $i, j = 1, \ldots, n_1$,

$$v^{(h)}_{ij} = \begin{cases}\sigma^2 1_{(h=1)} + \sigma_h + \tau_{|j-i|+1+(h-1)r}, & \text{if } |j-i|+1 \le r,\\[2pt] \sigma_h + \tau_{n_1-|j-i|+1+(h-1)r}, & \text{otherwise},\end{cases}$$

where $1_{(\cdot)}$ is the indicator function, and $\sigma_1$, $\sigma_2$, $\tau_q$ and $\sigma^2$ are the parameters of the matrices $\Sigma_1$, $\Sigma_2$ and $\sigma^2I$ in (3), respectively, $q = 1, \ldots, 2r$.

Let $\theta = (\sigma^2, \sigma_1, \sigma_2, \tau_1, \ldots, \tau_{2r})'$ be the vector containing all unknown (co)variance parameters. It has been observed by Liang et al. (2012, 2014) that model (1) is overparametrized, and at least three restrictions must be put on the parameter space of $\theta$ in order to estimate $\theta$ uniquely. The three restricted models $M_i$ considered by Liang et al. (2014) are introduced in the following lemma. The matrices $K_i$ and $L$ in the lemma are stated without explicit expressions, $i = 1, 2, 3$, and the proof is omitted; for details we refer to Liang et al. (2014).

Lemma 4.1. Let $\eta$ be the vector of the $2r$ distinct eigenvalues of $V(\theta)$. Then the restricted model $M_i$ equals (61) but with the restriction $K_i\theta = 0$, $i = 1, 2, 3$. Equivalently, $M_i$ can be defined via the free parameters

$$\theta_i = \big(L(K_i)^{o}\big)^{-1}\eta,$$

where

$$\theta_1 = (\sigma^2, \sigma_2, \tau_1, \ldots, \tau_r, \tau_{r+2}, \ldots, \tau_{2r})', \quad \theta_2 = (\sigma^2, \sigma_2, \tau_1, \ldots, \tau_r, \tau_{r+3}, \ldots, \tau_{2r})', \quad \theta_3 = (\sigma^2, \sigma_1, \sigma_2, \tau_1, \ldots, \tau_r, \tau_{r+3}, \ldots, \tau_{2r})'.$$

The matrix $(K_i)^{o}: (2r+3)\times 2r$ is a matrix whose columns generate the orthogonal complement to the column space of $K_i$, and the explicit expressions of $K_i$, $(K_i)^{o}$ and $L$ are given in Liang et al. (2014), $i = 1, 2, 3$.

By using $L$ and $(K_i)^{o}$, the following important relations for the three models $M_i$, $i = 1, 2, 3$, can be obtained (Liang et al., 2014):

$$\begin{aligned}
M_1:\ & \eta_1 = \sigma^2, \quad \eta_{r+1} = \sigma^2 - n_1n_2\sigma_2,\\
M_2:\ & \eta_l = \sigma^2, \quad \eta_{r+1} = \sigma^2 - n_1n_2\sigma_2, \quad \text{for some } l \in \{2, \ldots, r, r+2, \ldots, 2r\},\\
M_3:\ & \eta_1 = \sigma^2 + n_1[\sigma_1 + (n_2-1)\sigma_2], \quad \eta_{r+1} = \sigma^2 + n_1(\sigma_1 - \sigma_2),\\
& \eta_l = \sigma^2, \quad \text{for some } l \in \{2, \ldots, r, r+2, \ldots, 2r\},
\end{aligned} \qquad (63)$$

where $\eta = (\eta_l)$ are the distinct eigenvalues of $V(\theta)$, given in (62).

Lemma 4.2. Let $\eta = (\eta_l)$, $l = 1, \ldots, 2r$, be as in Lemma 4.1. Without loss of information, any of the models $M_i$, $i = 1, 2, 3$, can be formulated equivalently as

$$\tilde y_1 \sim N_n(\sqrt{p}\,\mu 1_n, \eta_1I_n), \qquad \tilde y_l \sim N_{nm_l}(0, \eta_lI_{nm_l}), \quad l = 2, \ldots, 2r.$$

Depending on which $M_i$ is considered, $\eta_l$ is defined in Lemma 4.1, and it has multiplicity $m_l$, which was given in Liang et al. (2012, Table 1). Moreover, $\tilde y_1$ can be further decomposed as

$$\frac{1}{\sqrt{n}}\,1_n'\tilde y_1 \sim N(\sqrt{pn}\,\mu, \eta_1), \qquad z \sim N_{n-1}(0, \eta_1I_{n-1}),$$

where $z = T\tilde y_1$ and $T$ is an $(n-1)\times n$ semiorthogonal matrix such that $TT' = I_{n-1}$ and $T1_n = 0$.
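A concrete choice of the matrix $T$ in the lemma above is the lower $(n-1)\times n$ part of a Helmert matrix (Lancaster, 1965): row $k$ has $k$ entries equal to $1/\sqrt{k(k+1)}$, one entry $-k/\sqrt{k(k+1)}$, and zeros thereafter. A minimal construction in plain Python (the helper name is ours, not notation from the paper):

```python
import math

def helmert_sub(n):
    # The (n-1) x n lower part of the Helmert matrix: a semiorthogonal T
    # with T T' = I_{n-1} and T 1_n = 0.
    T = []
    for k in range(1, n):
        c = 1.0 / math.sqrt(k * (k + 1))
        # k entries c, one entry -k*c, then zeros; the row has unit norm
        # since k*c^2 + k^2*c^2 = c^2 * k * (k+1) = 1.
        T.append([c] * k + [-k * c] + [0.0] * (n - k - 1))
    return T
```

Because the rows are orthonormal and each sums to zero, $z = T\tilde y_1$ removes the mean direction $1_n$ while keeping the covariance $\eta_1 I_{n-1}$, exactly as required in Lemma 4.2.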

For each restricted model given in (63), we will consider the following hypothesis-testing problems for the parameters of $V(\theta)$. In models $M_1$ and $M_2$ we will test

$$H_{01}: \sigma_2 = 0 \quad \text{versus} \quad H_{a1}: \sigma_2 < 0,$$

and in $M_3$ we will test

$$H_{02}: \sigma_2 = 0 \quad \text{versus} \quad H_{a2}: -\frac{\sigma_1}{n_2-1} \le \sigma_2 \le \sigma_1, \ \sigma_2 \neq 0.$$

The hypothesis $H_{01}$ implies that there is no random effect $\gamma_1$ in models $M_1$ and $M_2$, while hypothesis $H_{02}$ means that the factor levels of $\gamma_1$ are uncorrelated.

Remark: The alternative hypothesis $H_{a1}$ is due to the fact that $\sigma_1 = -(n_2-1)\sigma_2$, so that $\sigma_2$ must be negative in order to make sure that $\Sigma_1$ is positive semidefinite. In model $M_3$ there is no restriction imposed on the eigenvalues of $\Sigma_1$; hence the alternative hypothesis $H_{a2}$ is $-\sigma_1/(n_2-1) \le \sigma_2 \le \sigma_1$ in order to preserve the positive semidefiniteness of $\Sigma_1$.

From (63) it can be directly observed that testing $H_{01}$ versus $H_{a1}$ in model $M_1$ is equivalent to testing

$$H_{01,M_1}: \eta_1 = \eta_{r+1} \quad \text{versus} \quad H_{a1,M_1}: \eta_1 < \eta_{r+1}, \qquad (64)$$

and testing $H_{01}$ versus $H_{a1}$ in model $M_2$ is equivalent to testing

$$H_{01,M_2}: \eta_l = \eta_{r+1} \quad \text{versus} \quad H_{a1,M_2}: \eta_l < \eta_{r+1}, \qquad (65)$$

for some $l \in \{2, \ldots, r, r+2, \ldots, 2r\}$. In model $M_3$, testing $H_{02}$ versus $H_{a2}$ is equivalent to testing

$$H_{02}: \eta_1 = \eta_{r+1} \quad \text{versus} \quad H_{a2}: \eta_1 \neq \eta_{r+1}. \qquad (66)$$

Next, concerning the covariance matrix $\Sigma_2$ in (5), we will test, for $M_1$, $M_2$ and $M_3$,

$$H_{03}: \tau_1 = 0 \quad \text{versus} \quad H_{a3}: \tau_1 \neq 0.$$

The hypothesis $H_{03}$ means that there is no random effect $\gamma_2$ in any of the restricted models, since under the null hypothesis $\tau_1 = 0$ the only possibility to preserve the positive semidefiniteness of $\Sigma_2$ is that $\Sigma_2$ is the zero matrix. Then the linear function $\eta_l(\sigma^2, \tau_q)$ (see Liang et al., 2014, for all coefficients) becomes $\eta_l(\sigma^2, 0) = \sigma^2$ for some $l$ and $q$, depending on which $\theta_i$ in Lemma 4.1 is considered. Consequently, testing $H_{03}$ versus $H_{a3}$ in the restricted models $M_1$ and $M_2$ is equivalent to testing

$$H_{03,M_i}: \eta_l = \sigma^2 \quad \text{versus} \quad H_{a3,M_i}: \text{not all } \eta_l \text{ are equal}, \quad i \in \{1, 2\}, \ l \neq r+1, \qquad (67)$$

and in model $M_3$ the testing problem can be formulated as

$$H_{03,M_3}: \eta_l = \sigma^2 \quad \text{versus} \quad H_{a3,M_3}: \text{not all } \eta_l \text{ are equal}, \quad l \neq 1, r+1. \qquad (68)$$

In the subsequent two subsections we propose test procedures for the null hypotheses $H_{0i}$, $i = 1, 2, 3$. It is seen from Lemma 4.2 and (64)-(68) that testing each hypothesis is nothing but testing the equality of several variances in some of the $2r$ independent models of Lemma 4.2. Bartlett's test is a procedure that is often used to test the null hypothesis that population variances are equal under a normality assumption (Bartlett, 1937). In the context of zero means for $k$ populations, the test statistic has the following expression (Bartlett, 1937, p. 274):

$$B = \frac{N\ln\big(\sum_{i=1}^{k}n_iS_i/N\big) - \sum_{i=1}^{k}n_i\ln S_i}{1 + \frac{1}{3(k-1)}\big(\sum_{i=1}^{k}\frac{1}{n_i} - \frac{1}{N}\big)}, \qquad (69)$$

where $S_i$ is the sample variance of the $i$th population with sample size $n_i$, $i = 1, \ldots, k$, and $N$ is the total sample size, i.e. $N = \sum_{i=1}^{k}n_i$. The test statistic in (69) is a modification of the LRT statistic, which is here denoted by $W$, and with zero means we have $B = -\frac{2}{c}\ln W$, where

$$c = 1 + \frac{1}{3(k-1)}\Big(\sum_{i=1}^{k}\frac{1}{n_i} - \frac{1}{N}\Big).$$
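To make (69) concrete, here is a minimal sketch in plain Python (the function name and layout are ours, not from the paper), assuming each sample has a known zero mean so that $S_i$ is simply the average of the squared observations:

```python
import math

def bartlett_zero_mean(samples):
    # Bartlett's statistic (69) for k zero-mean normal populations:
    # S_i = (sum of squares) / n_i, pooled S = sum(n_i S_i) / N,
    # B = [N ln S - sum n_i ln S_i] / c; approximately chi2_{k-1} under H0.
    k = len(samples)
    n = [len(s) for s in samples]
    N = sum(n)
    S = [sum(x * x for x in s) / len(s) for s in samples]
    pooled = sum(ni * Si for ni, Si in zip(n, S)) / N
    c = 1.0 + (sum(1.0 / ni for ni in n) - 1.0 / N) / (3.0 * (k - 1))
    return (N * math.log(pooled) - sum(ni * math.log(Si) for ni, Si in zip(n, S))) / c
```

By the weighted arithmetic-geometric mean inequality the numerator is nonnegative, so $B = 0$ exactly when all sample variances coincide, and $B$ grows as they separate.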

The asymptotic null distribution of $B$ in (69) is the $\chi^2$ distribution with $k-1$ degrees of freedom.

4.1. Tests of the hypotheses about the parameters of $\Sigma_1$ under different restricted models

In this subsection we consider testing the null hypotheses $H_{01}$ and $H_{02}$, respectively, and construct the corresponding Bartlett test statistics. According to the equivalent hypothesis (64), constructing a test statistic for (64) actually amounts to testing the equality of variances in the following two populations of Lemma 4.2:

$$z \sim N_{n-1}(0, \eta_1I_{n-1}), \qquad \tilde y_{r+1} \sim N_{n(n_2-1)}(0, \eta_{r+1}I_{n(n_2-1)}).$$

Thus, it is straightforward to construct an LRT, or the corresponding Bartlett test, for the equality of two variances based on the two populations $z$ and $\tilde y_{r+1}$, with zero means and sample sizes $n-1$ and $n(n_2-1)$. The result is presented in the following theorem. The proof is omitted since it is a direct application of the results from the previous section.

Theorem 4.3. In model $M_1$, the LRT statistic for testing $H_{01,M_1}: \sigma_2 = 0$, or equivalently $\eta_1 = \eta_{r+1}$, against $H_{a1,M_1}: \sigma_2 < 0$, or equivalently $\eta_1 < \eta_{r+1}$, is given by

$$W_1 = \frac{(nn_2-1)^{(nn_2-1)/2}}{(n-1)^{(n-1)/2}\,[n(n_2-1)]^{n(n_2-1)/2}}\cdot\frac{(z'z)^{(n-1)/2}\,(\tilde y_{r+1}'\tilde y_{r+1})^{n(n_2-1)/2}}{(z'z + \tilde y_{r+1}'\tilde y_{r+1})^{(nn_2-1)/2}}, \qquad (70)$$

and the corresponding Bartlett test statistic, with its asymptotic null distribution, is given by

$$B_1 = -\frac{2}{c}\ln W_1 \overset{H_{01,M_1}}{\sim} \chi^2_1, \qquad (71)$$

where $c = 1 + \frac{1}{3}\Big[\frac{1}{n-1} + \frac{1}{n(n_2-1)} - \frac{1}{nn_2-1}\Big]$.

Theorem 4.3 suggests rejecting the null hypothesis $H_{01,M_1}$ for small values of $W_1$, or equivalently for large values of $B_1$, which implies rejecting the null hypothesis of no random effect $\gamma_1$ in (1).

Next, we construct a test statistic for the testing problem given in (65), which can be considered as testing the equality of the variances of two populations with respective sample sizes $n(n_2-1)$ and $nm_l$:

$$\tilde y_{r+1} \sim N_{n(n_2-1)}(0, \eta_{r+1}I_{n(n_2-1)}), \qquad \tilde y_l \sim N_{nm_l}(0, \eta_lI_{nm_l}), \qquad (72)$$

for some $l \in \{2, \ldots, r, r+2, \ldots, 2r\}$.

Theorem 4.4. In model $M_2$, the LRT statistic for testing $H_{01,M_2}: \sigma_2 = 0$, or equivalently $\eta_l = \eta_{r+1}$, against $H_{a1,M_2}: \sigma_2 < 0$, or equivalently $\eta_l < \eta_{r+1}$, $l \in \{2, \ldots, r, r+2, \ldots, 2r\}$, is given by

$$W_2^{2/n} = \frac{(m_l+n_2-1)^{m_l+n_2-1}}{m_l^{m_l}\,(n_2-1)^{n_2-1}}\cdot\frac{(\tilde y_l'\tilde y_l)^{m_l}\,(\tilde y_{r+1}'\tilde y_{r+1})^{n_2-1}}{(\tilde y_l'\tilde y_l + \tilde y_{r+1}'\tilde y_{r+1})^{m_l+n_2-1}}, \qquad (73)$$

and the corresponding Bartlett test statistic, with its asymptotic null distribution, is given by

$$B_2 = -\frac{2}{c}\ln W_2 \overset{H_{01,M_2}}{\sim} \chi^2_1, \qquad (74)$$

where $c = 1 + \frac{1}{3}\Big[\frac{1}{nm_l} + \frac{1}{n(n_2-1)} - \frac{1}{n(m_l+n_2-1)}\Big]$.

Similarly to Theorem 4.3, in Theorem 4.4 we reject the null hypothesis for small values of $W_2$ or for large values of $B_2$, which implies rejection of the null hypothesis of no random effect $\gamma_1$. Differently from $M_1$ and $M_2$, it can be seen from (63) that in model $M_3$ there is no restriction imposed on the parameters (or, equivalently, the eigenvalues) of $\Sigma_1$, so that rejecting the null hypothesis means rejecting that the factor levels of $\gamma_1$ are uncorrelated. However, the testing problem in (66) is in fact a test for the equality of two variances based on the two populations $z$ and $\tilde y_{r+1}$, which leads to exactly the same test statistic as given in Theorem 4.3.
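The statistics in (70), (73) and (75) are all instances of one generic zero-mean two-population variance LRT: with sums of squares $q_i$, sample sizes $N_i$, per-population variances $S_i = q_i/N_i$ and pooled variance $S_p = (q_1+q_2)/(N_1+N_2)$, the ratio is $W = (S_1/S_p)^{N_1/2}(S_2/S_p)^{N_2/2}$; expanding the powers reproduces the constants such as $(nn_2-1)^{(nn_2-1)/2}/[(n-1)^{(n-1)/2}(n(n_2-1))^{n(n_2-1)/2}]$. A hedged sketch in plain Python (the function name is ours):

```python
def two_sample_var_lrt(x, y):
    # Zero-mean LRT for equal variances of two normal samples:
    # W = (S1/Sp)^(N1/2) * (S2/Sp)^(N2/2), S_i = q_i/N_i, q_i = sum of squares.
    # Always W <= 1, with W = 1 iff the two sample variances coincide.
    q1 = sum(v * v for v in x)
    q2 = sum(v * v for v in y)
    n1, n2 = len(x), len(y)
    s1, s2 = q1 / n1, q2 / n2
    sp = (q1 + q2) / (n1 + n2)
    return (s1 / sp) ** (n1 / 2.0) * (s2 / sp) ** (n2 / 2.0)
```

In the notation of Theorem 4.3 one would take $q_1 = z'z$ with $N_1 = n-1$ and $q_2 = \tilde y_{r+1}'\tilde y_{r+1}$ with $N_2 = n(n_2-1)$; small values of $W$ speak against equality of the two variances.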

Theorem 4.5. In model $M_3$, the LRT statistic for testing $H_{02}: \sigma_2 = 0$, or equivalently $\eta_1 = \eta_{r+1}$, against $H_{a2}: -\sigma_1/(n_2-1) \le \sigma_2 \le \sigma_1$, $\sigma_2 \neq 0$, or equivalently $\eta_1 \neq \eta_{r+1}$, is given by

$$W_3 = \frac{(nn_2-1)^{(nn_2-1)/2}}{(n-1)^{(n-1)/2}\,[n(n_2-1)]^{n(n_2-1)/2}}\cdot\frac{(z'z)^{(n-1)/2}\,(\tilde y_{r+1}'\tilde y_{r+1})^{n(n_2-1)/2}}{(z'z + \tilde y_{r+1}'\tilde y_{r+1})^{(nn_2-1)/2}}, \qquad (75)$$

and the corresponding Bartlett test statistic, with its asymptotic null distribution, is given by

$$B_3 = -\frac{2}{c}\ln W_3 \overset{H_{02}}{\sim} \chi^2_1, \qquad (76)$$

where $c = 1 + \frac{1}{3}\Big[\frac{1}{n-1} + \frac{1}{n(n_2-1)} - \frac{1}{nn_2-1}\Big]$.

4.2. Tests of the hypotheses about the parameters of $\Sigma_2$ under different restricted models

In this subsection we consider testing the null hypothesis $H_{03}$ under the different restricted models, i.e. $H_{03,M_i}$, $i = 1, 2, 3$, respectively. The testing problem in (67) can be considered as testing the equality of the variances coming from the following $2r-1$ populations:

$$z \sim N_{n-1}(0, \eta_1I_{n-1}), \qquad \tilde y_l \sim N_{nm_l}(0, \eta_lI_{nm_l}),$$

for $l = 2, \ldots, 2r$ but $l \neq r+1$.

Theorem 4.6. In models $M_1$ and $M_2$, the LRT statistic for testing $H_{03,M_i}: \eta_l = \sigma^2$ against $H_{a3,M_i}:$ not all $\eta_l$ are equal, $i \in \{1, 2\}$, $l \neq r+1$, is given by

$$W_4 = \frac{[n(p-n_2+1)-1]^{[n(p-n_2+1)-1]/2}}{(n-1)^{(n-1)/2}\prod_{\substack{l=2\\ l\neq r+1}}^{2r}(nm_l)^{nm_l/2}}\cdot\frac{(z'z)^{(n-1)/2}\prod_{\substack{l=2\\ l\neq r+1}}^{2r}(\tilde y_l'\tilde y_l)^{nm_l/2}}{\Big(z'z + \sum_{\substack{l=2\\ l\neq r+1}}^{2r}\tilde y_l'\tilde y_l\Big)^{[n(p-n_2+1)-1]/2}}, \qquad (77)$$

and the corresponding Bartlett test statistic, with its asymptotic null distribution, is given by

$$B_4 = -\frac{2}{c}\ln W_4 \overset{H_{03,M_i}}{\sim} \chi^2_{2r-2}, \quad i \in \{1, 2\}, \qquad (78)$$

where $r = [n_1/2]+1$ and

$$c = 1 + \frac{1}{3(2r-2)}\Bigg(\frac{1}{n-1} + \sum_{\substack{l=2\\ l\neq r+1}}^{2r}\frac{1}{nm_l} - \frac{1}{n(p-n_2+1)-1}\Bigg).$$

In Theorem 4.6, we reject the null hypothesis $H_{03,M_i}$ for small values of $W_4$ or for large values of $B_4$, which means that we reject the hypothesis that no random effect $\gamma_2$ exists under $M_1$ or $M_2$.

The testing problem in (68) can be considered as a test of the equality of the variances of the following $2r-2$ populations:

$$\tilde y_l \sim N_{nm_l}(0, \eta_lI_{nm_l}),$$

where $l = 2, \ldots, 2r$ but $l \neq r+1$.

Theorem 4.7. In model $M_3$, the LRT statistic for testing $H_{03,M_3}: \eta_l = \sigma^2$ against $H_{a3,M_3}:$ not all $\eta_l$ are equal, $l \neq 1, r+1$, is given by

$$W_5^{2/n} = (p-n_2)^{p-n_2}\cdot\frac{\prod_{\substack{l=2\\ l\neq r+1}}^{2r}(\tilde y_l'\tilde y_l)^{m_l}}{\prod_{\substack{l=2\\ l\neq r+1}}^{2r}m_l^{m_l}\,\Big(\sum_{\substack{l=2\\ l\neq r+1}}^{2r}\tilde y_l'\tilde y_l\Big)^{p-n_2}}, \qquad (79)$$

and the corresponding Bartlett test statistic, with its asymptotic null distribution, is given by

$$B_5 = -\frac{2}{c}\ln W_5 \overset{H_{03,M_3}}{\sim} \chi^2_{2r-3}, \qquad (80)$$

where $r = [n_1/2]+1$ and

$$c = 1 + \frac{1}{3(2r-3)}\Bigg(\sum_{\substack{l=2\\ l\neq r+1}}^{2r}\frac{1}{nm_l} - \frac{1}{n(p-n_2)}\Bigg).$$

References

Anderson, T. W. (2003). An introduction to multivariate statistical analysis, 3rd edition. Wiley, New Jersey.

Bartlett, M. S. (1937). Properties of sufficiency and statistical tests. Proceedings of the Royal Society of London, Series A, 160, 268-282.

Barton, T. A. and Fuhrmann, D. R. (1993). Covariance structures for multidimensional data. Multidimensional Systems and Signal Processing, 4.

Basilevsky, A. (1983). Applied matrix algebra in the statistical sciences. North-Holland, New York.

Kollo, T. and von Rosen, D. (2005). Advanced multivariate statistics with matrices. Springer, New York.

Lancaster, H. (1965). The Helmert matrices. American Mathematical Monthly, 72, 4-12.

Liang, Y., von Rosen, D. and von Rosen, T. (2012). On estimation in multilevel models with block circular symmetric covariance structures. Acta et Commentationes Universitatis Tartuensis de Mathematica, 16.

Liang, Y., von Rosen, D. and von Rosen, T. (2014). On estimation in hierarchical models with block circular covariance structures. Annals of the Institute of Statistical Mathematics, DOI: 10.1007/s.

Liang, Y., von Rosen, T. and von Rosen, D. (2011). Circular block symmetry in multilevel models. Tech. Rep. RR 2011:3, Department of Statistics, Stockholm University.

Muirhead, R. J. (1982). Aspects of multivariate statistical theory. Wiley, New York.

Nahtman, T. (2006). Marginal permutation invariant covariance matrices with applications to linear models. Linear Algebra and its Applications, 417, 183-210.

Olkin, I. (1973). Testing and estimation for structures which are circularly symmetric in blocks. In D. G. Kabe and R. P. Gupta, eds., Multivariate statistical inference. North-Holland, Amsterdam.

Olkin, I. and Press, S. (1969). Testing and estimation for a circular stationary model. The Annals of Mathematical Statistics, 40, 1358-1373.

Srivastava, M. S., von Rosen, T. and von Rosen, D. (2008). Models with a Kronecker product covariance structure: estimation and testing. Mathematical Methods of Statistics, 17, 357-370.

Vaish, A. and Chaganty, N. R. (2004). Wishartness and independence of matrix quadratic forms for Kronecker product covariance structures. Linear Algebra and its Applications, 388, 379-396.

Votaw, D. F. (1948). Testing compound symmetry in a normal multivariate distribution. The Annals of Mathematical Statistics, 19, 447-473.

Wilks, S. S. (1946). Sample criteria for testing equality of means, equality of variances, and equality of covariances in a normal multivariate distribution. The Annals of Mathematical Statistics, 17, 257-281.


More information

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation

More information

Central Limit Theorems for Classical Likelihood Ratio Tests for High-Dimensional Normal Distributions

Central Limit Theorems for Classical Likelihood Ratio Tests for High-Dimensional Normal Distributions Central Limit Theorems for Classical Likelihood Ratio Tests for High-Dimensional Normal Distributions Tiefeng Jiang 1 and Fan Yang 1, University of Minnesota Abstract For random samples of size n obtained

More information

Random Matrices and Multivariate Statistical Analysis

Random Matrices and Multivariate Statistical Analysis Random Matrices and Multivariate Statistical Analysis Iain Johnstone, Statistics, Stanford imj@stanford.edu SEA 06@MIT p.1 Agenda Classical multivariate techniques Principal Component Analysis Canonical

More information

Statement: With my signature I confirm that the solutions are the product of my own work. Name: Signature:.

Statement: With my signature I confirm that the solutions are the product of my own work. Name: Signature:. MATHEMATICAL STATISTICS Homework assignment Instructions Please turn in the homework with this cover page. You do not need to edit the solutions. Just make sure the handwriting is legible. You may discuss

More information

Y t = ΦD t + Π 1 Y t Π p Y t p + ε t, D t = deterministic terms

Y t = ΦD t + Π 1 Y t Π p Y t p + ε t, D t = deterministic terms VAR Models and Cointegration The Granger representation theorem links cointegration to error correction models. In a series of important papers and in a marvelous textbook, Soren Johansen firmly roots

More information

4.1 Order Specification

4.1 Order Specification THE UNIVERSITY OF CHICAGO Booth School of Business Business 41914, Spring Quarter 2009, Mr Ruey S Tsay Lecture 7: Structural Specification of VARMA Models continued 41 Order Specification Turn to data

More information

Multivariate Time Series: VAR(p) Processes and Models

Multivariate Time Series: VAR(p) Processes and Models Multivariate Time Series: VAR(p) Processes and Models A VAR(p) model, for p > 0 is X t = φ 0 + Φ 1 X t 1 + + Φ p X t p + A t, where X t, φ 0, and X t i are k-vectors, Φ 1,..., Φ p are k k matrices, with

More information

Variational Principal Components

Variational Principal Components Variational Principal Components Christopher M. Bishop Microsoft Research 7 J. J. Thomson Avenue, Cambridge, CB3 0FB, U.K. cmbishop@microsoft.com http://research.microsoft.com/ cmbishop In Proceedings

More information

On V-orthogonal projectors associated with a semi-norm

On V-orthogonal projectors associated with a semi-norm On V-orthogonal projectors associated with a semi-norm Short Title: V-orthogonal projectors Yongge Tian a, Yoshio Takane b a School of Economics, Shanghai University of Finance and Economics, Shanghai

More information

MULTIVARIATE POPULATIONS

MULTIVARIATE POPULATIONS CHAPTER 5 MULTIVARIATE POPULATIONS 5. INTRODUCTION In the following chapters we will be dealing with a variety of problems concerning multivariate populations. The purpose of this chapter is to provide

More information

Estimation and Testing for Common Cycles

Estimation and Testing for Common Cycles Estimation and esting for Common Cycles Anders Warne February 27, 2008 Abstract: his note discusses estimation and testing for the presence of common cycles in cointegrated vector autoregressions A simple

More information

Measures and Jacobians of Singular Random Matrices. José A. Díaz-Garcia. Comunicación de CIMAT No. I-07-12/ (PE/CIMAT)

Measures and Jacobians of Singular Random Matrices. José A. Díaz-Garcia. Comunicación de CIMAT No. I-07-12/ (PE/CIMAT) Measures and Jacobians of Singular Random Matrices José A. Díaz-Garcia Comunicación de CIMAT No. I-07-12/21.08.2007 (PE/CIMAT) Measures and Jacobians of singular random matrices José A. Díaz-García Universidad

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

Testing linear hypotheses of mean vectors for high-dimension data with unequal covariance matrices

Testing linear hypotheses of mean vectors for high-dimension data with unequal covariance matrices Testing linear hypotheses of mean vectors for high-dimension data with unequal covariance matrices Takahiro Nishiyama a,, Masashi Hyodo b, Takashi Seo a, Tatjana Pavlenko c a Department of Mathematical

More information

Stat260: Bayesian Modeling and Inference Lecture Date: February 10th, Jeffreys priors. exp 1 ) p 2

Stat260: Bayesian Modeling and Inference Lecture Date: February 10th, Jeffreys priors. exp 1 ) p 2 Stat260: Bayesian Modeling and Inference Lecture Date: February 10th, 2010 Jeffreys priors Lecturer: Michael I. Jordan Scribe: Timothy Hunter 1 Priors for the multivariate Gaussian Consider a multivariate

More information

On corrections of classical multivariate tests for high-dimensional data

On corrections of classical multivariate tests for high-dimensional data On corrections of classical multivariate tests for high-dimensional data Jian-feng Yao with Zhidong Bai, Dandan Jiang, Shurong Zheng Overview Introduction High-dimensional data and new challenge in statistics

More information

On testing the equality of mean vectors in high dimension

On testing the equality of mean vectors in high dimension ACTA ET COMMENTATIONES UNIVERSITATIS TARTUENSIS DE MATHEMATICA Volume 17, Number 1, June 2013 Available online at www.math.ut.ee/acta/ On testing the equality of mean vectors in high dimension Muni S.

More information

I L L I N O I S UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN

I L L I N O I S UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN Canonical Edps/Soc 584 and Psych 594 Applied Multivariate Statistics Carolyn J. Anderson Department of Educational Psychology I L L I N O I S UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN Canonical Slide

More information

TUTORIAL 8 SOLUTIONS #

TUTORIAL 8 SOLUTIONS # TUTORIAL 8 SOLUTIONS #9.11.21 Suppose that a single observation X is taken from a uniform density on [0,θ], and consider testing H 0 : θ = 1 versus H 1 : θ =2. (a) Find a test that has significance level

More information

Mean. Pranab K. Mitra and Bimal K. Sinha. Department of Mathematics and Statistics, University Of Maryland, Baltimore County

Mean. Pranab K. Mitra and Bimal K. Sinha. Department of Mathematics and Statistics, University Of Maryland, Baltimore County A Generalized p-value Approach to Inference on Common Mean Pranab K. Mitra and Bimal K. Sinha Department of Mathematics and Statistics, University Of Maryland, Baltimore County 1000 Hilltop Circle, Baltimore,

More information

Linear Algebra: Characteristic Value Problem

Linear Algebra: Characteristic Value Problem Linear Algebra: Characteristic Value Problem . The Characteristic Value Problem Let < be the set of real numbers and { be the set of complex numbers. Given an n n real matrix A; does there exist a number

More information

We describe the generalization of Hazan s algorithm for symmetric programming

We describe the generalization of Hazan s algorithm for symmetric programming ON HAZAN S ALGORITHM FOR SYMMETRIC PROGRAMMING PROBLEMS L. FAYBUSOVICH Abstract. problems We describe the generalization of Hazan s algorithm for symmetric programming Key words. Symmetric programming,

More information

You can compute the maximum likelihood estimate for the correlation

You can compute the maximum likelihood estimate for the correlation Stat 50 Solutions Comments on Assignment Spring 005. (a) _ 37.6 X = 6.5 5.8 97.84 Σ = 9.70 4.9 9.70 75.05 7.80 4.9 7.80 4.96 (b) 08.7 0 S = Σ = 03 9 6.58 03 305.6 30.89 6.58 30.89 5.5 (c) You can compute

More information

Inverse of a Square Matrix. For an N N square matrix A, the inverse of A, 1

Inverse of a Square Matrix. For an N N square matrix A, the inverse of A, 1 Inverse of a Square Matrix For an N N square matrix A, the inverse of A, 1 A, exists if and only if A is of full rank, i.e., if and only if no column of A is a linear combination 1 of the others. A is

More information

MULTIVARIATE ANALYSIS OF VARIANCE

MULTIVARIATE ANALYSIS OF VARIANCE MULTIVARIATE ANALYSIS OF VARIANCE RAJENDER PARSAD AND L.M. BHAR Indian Agricultural Statistics Research Institute Library Avenue, New Delhi - 0 0 lmb@iasri.res.in. Introduction In many agricultural experiments,

More information

A Likelihood Ratio Test

A Likelihood Ratio Test A Likelihood Ratio Test David Allen University of Kentucky February 23, 2012 1 Introduction Earlier presentations gave a procedure for finding an estimate and its standard error of a single linear combination

More information

DYNAMIC AND COMPROMISE FACTOR ANALYSIS

DYNAMIC AND COMPROMISE FACTOR ANALYSIS DYNAMIC AND COMPROMISE FACTOR ANALYSIS Marianna Bolla Budapest University of Technology and Economics marib@math.bme.hu Many parts are joint work with Gy. Michaletzky, Loránd Eötvös University and G. Tusnády,

More information

Statistical Inference On the High-dimensional Gaussian Covarianc

Statistical Inference On the High-dimensional Gaussian Covarianc Statistical Inference On the High-dimensional Gaussian Covariance Matrix Department of Mathematical Sciences, Clemson University June 6, 2011 Outline Introduction Problem Setup Statistical Inference High-Dimensional

More information

9.1 Orthogonal factor model.

9.1 Orthogonal factor model. 36 Chapter 9 Factor Analysis Factor analysis may be viewed as a refinement of the principal component analysis The objective is, like the PC analysis, to describe the relevant variables in study in terms

More information

Linear Algebra 2 Spectral Notes

Linear Algebra 2 Spectral Notes Linear Algebra 2 Spectral Notes In what follows, V is an inner product vector space over F, where F = R or C. We will use results seen so far; in particular that every linear operator T L(V ) has a complex

More information

18.S096 Problem Set 3 Fall 2013 Regression Analysis Due Date: 10/8/2013

18.S096 Problem Set 3 Fall 2013 Regression Analysis Due Date: 10/8/2013 18.S096 Problem Set 3 Fall 013 Regression Analysis Due Date: 10/8/013 he Projection( Hat ) Matrix and Case Influence/Leverage Recall the setup for a linear regression model y = Xβ + ɛ where y and ɛ are

More information

Bayesian Inference. Chapter 9. Linear models and regression

Bayesian Inference. Chapter 9. Linear models and regression Bayesian Inference Chapter 9. Linear models and regression M. Concepcion Ausin Universidad Carlos III de Madrid Master in Business Administration and Quantitative Methods Master in Mathematical Engineering

More information

Analysis of variance using orthogonal projections

Analysis of variance using orthogonal projections Analysis of variance using orthogonal projections Rasmus Waagepetersen Abstract The purpose of this note is to show how statistical theory for inference in balanced ANOVA models can be conveniently developed

More information

Asymptotic Distribution of the Largest Eigenvalue via Geometric Representations of High-Dimension, Low-Sample-Size Data

Asymptotic Distribution of the Largest Eigenvalue via Geometric Representations of High-Dimension, Low-Sample-Size Data Sri Lankan Journal of Applied Statistics (Special Issue) Modern Statistical Methodologies in the Cutting Edge of Science Asymptotic Distribution of the Largest Eigenvalue via Geometric Representations

More information

Physics 403. Segev BenZvi. Parameter Estimation, Correlations, and Error Bars. Department of Physics and Astronomy University of Rochester

Physics 403. Segev BenZvi. Parameter Estimation, Correlations, and Error Bars. Department of Physics and Astronomy University of Rochester Physics 403 Parameter Estimation, Correlations, and Error Bars Segev BenZvi Department of Physics and Astronomy University of Rochester Table of Contents 1 Review of Last Class Best Estimates and Reliability

More information

Math 108b: Notes on the Spectral Theorem

Math 108b: Notes on the Spectral Theorem Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator

More information

Statistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach

Statistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach Statistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach Jae-Kwang Kim Department of Statistics, Iowa State University Outline 1 Introduction 2 Observed likelihood 3 Mean Score

More information

EXAMINATIONS OF THE ROYAL STATISTICAL SOCIETY

EXAMINATIONS OF THE ROYAL STATISTICAL SOCIETY EXAMINATIONS OF THE ROYAL STATISTICAL SOCIETY GRADUATE DIPLOMA, 00 MODULE : Statistical Inference Time Allowed: Three Hours Candidates should answer FIVE questions. All questions carry equal marks. The

More information

Advanced Multivariate Statistics with Matrices

Advanced Multivariate Statistics with Matrices Advanced Multivariate Statistics with Matrices Mathematics and Its Applications Managing Editor: M. HAZEWINKEL Centre for Mathematics and Computer Science, Amsterdam, The Netherlands Volume 579 Advanced

More information

THE UNIVERSITY OF CHICAGO Booth School of Business Business 41912, Spring Quarter 2016, Mr. Ruey S. Tsay

THE UNIVERSITY OF CHICAGO Booth School of Business Business 41912, Spring Quarter 2016, Mr. Ruey S. Tsay THE UNIVERSITY OF CHICAGO Booth School of Business Business 41912, Spring Quarter 2016, Mr. Ruey S. Tsay Lecture 5: Multivariate Multiple Linear Regression The model is Y n m = Z n (r+1) β (r+1) m + ɛ

More information

Testing Equality of Natural Parameters for Generalized Riesz Distributions

Testing Equality of Natural Parameters for Generalized Riesz Distributions Testing Equality of Natural Parameters for Generalized Riesz Distributions Jesse Crawford Department of Mathematics Tarleton State University jcrawford@tarleton.edu faculty.tarleton.edu/crawford April

More information

Solution of Linear Equations

Solution of Linear Equations Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass

More information

Chapter 7, continued: MANOVA

Chapter 7, continued: MANOVA Chapter 7, continued: MANOVA The Multivariate Analysis of Variance (MANOVA) technique extends Hotelling T 2 test that compares two mean vectors to the setting in which there are m 2 groups. We wish to

More information

1 Planar rotations. Math Abstract Linear Algebra Fall 2011, section E1 Orthogonal matrices and rotations

1 Planar rotations. Math Abstract Linear Algebra Fall 2011, section E1 Orthogonal matrices and rotations Math 46 - Abstract Linear Algebra Fall, section E Orthogonal matrices and rotations Planar rotations Definition: A planar rotation in R n is a linear map R: R n R n such that there is a plane P R n (through

More information

CS281A/Stat241A Lecture 17

CS281A/Stat241A Lecture 17 CS281A/Stat241A Lecture 17 p. 1/4 CS281A/Stat241A Lecture 17 Factor Analysis and State Space Models Peter Bartlett CS281A/Stat241A Lecture 17 p. 2/4 Key ideas of this lecture Factor Analysis. Recall: Gaussian

More information

Hypothesis Testing. Robert L. Wolpert Department of Statistical Science Duke University, Durham, NC, USA

Hypothesis Testing. Robert L. Wolpert Department of Statistical Science Duke University, Durham, NC, USA Hypothesis Testing Robert L. Wolpert Department of Statistical Science Duke University, Durham, NC, USA An Example Mardia et al. (979, p. ) reprint data from Frets (9) giving the length and breadth (in

More information

Partitioned Covariance Matrices and Partial Correlations. Proposition 1 Let the (p + q) (p + q) covariance matrix C > 0 be partitioned as C = C11 C 12

Partitioned Covariance Matrices and Partial Correlations. Proposition 1 Let the (p + q) (p + q) covariance matrix C > 0 be partitioned as C = C11 C 12 Partitioned Covariance Matrices and Partial Correlations Proposition 1 Let the (p + q (p + q covariance matrix C > 0 be partitioned as ( C11 C C = 12 C 21 C 22 Then the symmetric matrix C > 0 has the following

More information

Vector Auto-Regressive Models

Vector Auto-Regressive Models Vector Auto-Regressive Models Laurent Ferrara 1 1 University of Paris Nanterre M2 Oct. 2018 Overview of the presentation 1. Vector Auto-Regressions Definition Estimation Testing 2. Impulse responses functions

More information

Part 6: Multivariate Normal and Linear Models

Part 6: Multivariate Normal and Linear Models Part 6: Multivariate Normal and Linear Models 1 Multiple measurements Up until now all of our statistical models have been univariate models models for a single measurement on each member of a sample of

More information

High-dimensional asymptotic expansions for the distributions of canonical correlations

High-dimensional asymptotic expansions for the distributions of canonical correlations Journal of Multivariate Analysis 100 2009) 231 242 Contents lists available at ScienceDirect Journal of Multivariate Analysis journal homepage: www.elsevier.com/locate/jmva High-dimensional asymptotic

More information

Review: General Approach to Hypothesis Testing. 1. Define the research question and formulate the appropriate null and alternative hypotheses.

Review: General Approach to Hypothesis Testing. 1. Define the research question and formulate the appropriate null and alternative hypotheses. 1 Review: Let X 1, X,..., X n denote n independent random variables sampled from some distribution might not be normal!) with mean µ) and standard deviation σ). Then X µ σ n In other words, X is approximately

More information