The Geometry of the Wilks's Λ Random Field


F. Carbonell, Institute for Cybernetics, Mathematics and Physics, Havana
L. Galan, Cuban Neuroscience Center, Havana
K. J. Worsley, McGill University, Montreal

August 19, 2005

Abstract

The statistical problem addressed in this paper is to approximate the P-value of the maximum of a smooth random field of Wilks's Λ statistics. So far, results are only available for the usual univariate statistics (Z, t, χ², F) and a few multivariate statistics (Hotelling's T², maximum canonical correlation, Roy's maximum root). We derive results for any differentiable scalar function of two independent Wishart random fields, such as Wilks's Λ and Pillai's trace. We apply our results to a problem in brain shape analysis.

Key words and phrases: Multivariate random fields, excursion sets, Euler characteristic, derivatives of matrix functions.

MSC 2000: 60G60, 60D05, 62M40

1 Introduction

The motivation for this work came from practical applications in brain imaging. Changes in brain shape can be inferred from the 3D non-linear deformations required to warp a patient's MRI scan to an atlas standard. Using these vector deformations as a dependent variable, we can use a multivariate multiple regression model to detect regions of the brain whose shape is correlated with external regressors, such as disease state. The statistic of choice is Wilks's Λ, the likelihood ratio statistic (Anderson 1984), evaluated at each voxel in the brain. The challenging statistical problem is how to assess the P-value of local maxima of a smooth 3D random field (RF) of Wilks's Λ statistics.

The problem of assessing the P-value of local maxima was studied in the seminal works of Adler & Hasofer (1976) and Hasofer (1978) for the case of Gaussian RFs. The key idea consists of approximating the P-value that the local maximum exceeds a high threshold value via the geometry of the excursion set, the set of points where the RF exceeds the threshold value. For high thresholds the excursion set consists of isolated regions, each containing a local maximum. The Euler characteristic (EC) of the excursion set counts the number of such connected components. For high thresholds near the maximum of the RF, the EC takes the value one if the maximum is above the threshold, and zero otherwise.

Thus, for high thresholds, the expected EC approximates the P-value of the maximum of the RF. During the last decade this approach has become an important tool in different areas of applied probability, owing to the many applications that have emerged, principally in medical imaging and astrophysics (Gott, Park, Juskiewicz, Bies, Bennett, Bouchet & Stebbins 1990, Beaky, Scherrer & Villumsen 1992, Vogeley, Park, Geller, Huchra & Gott 1994, Worsley 1995a, Worsley 1995b, Worsley, Cao, Paus, Petrides & Evans 1998, Cao & Worsley 2001). So far, results are available for fields of standard univariate statistics such as the Z, t, χ² and F RFs (Worsley 1994). Other useful extensions were obtained in Cao & Worsley (1999a) and Cao & Worsley (1999b) for Hotelling's T² and the cross-correlation RFs, respectively. However, the extension of these results to other RFs of classical multivariate statistics has not so far been fully studied (Worsley, Taylor, Tomaiuolo & Lerch 2004). RFs of multivariate statistics appear when dealing with multivariate RF data and multiple contrasts; the classical example is the Wilks's Λ statistic mentioned above. The purpose of this work is to extend Adler's results to a wide class of RFs of multivariate statistics to which the Wilks's Λ RF belongs.

We build these random fields from Gaussian random fields in the same way as we build the statistics themselves. First let Z = Z(t), t ∈ R^N, be a smooth isotropic Gaussian random field with zero mean, unit variance, and with

Var(Ż) = I_N, (1)

where the dot represents the derivative with respect to t, and I_N is the N × N identity matrix. Note that if we have another isotropic Gaussian random field Z* with Var(Ż*) = v² I_N and v ≠ 1, then we can always re-scale the parameter t by dividing by v to define Z(t) = Z*(t/v), so that (1) is satisfied. Now suppose that Z(t) is a ν × m matrix of independent Gaussian random fields, each with the same distribution as Z(t). Then the Wishart random field with ν degrees of freedom is defined as

W(t) = Z(t)'Z(t) ~ Wishart_m(I_m, ν) (2)

at each fixed t (Cao & Worsley 1999b). Let U(t) and V(t) be independent Wishart random fields with ν and η degrees of freedom, respectively, whose component Gaussian random fields all have the same distribution as Z(t) above. We shall be concerned with the class of RFs Λ(t) defined as

Λ(t) = Λ(U(t), V(t)), (3)

where Λ is any differentiable function of two matrix arguments. Our aim is to provide approximate values for the expected EC of the excursion sets of Λ(t).

The plan of the paper is as follows. In Section 2 we state the notation as well as some preliminary facts about RF theory and matrix differential theory. In Section 3 we obtain expressions for the derivatives up to second order of the random field Λ(t), while in Sections 4 and 5 we derive the EC densities of Λ(t). Finally, in Section 6 we apply these results to detecting brain damage in a group of 17 non-missile trauma patients compared to a group of 19 age and sex matched controls.

2 Preliminaries

In this section we settle the notation to be used throughout the paper, as well as some basic facts about RFs and matrix differentiation.

2.1 Basic Notations

We use Normal_{m×n}(μ, Σ) to denote the multivariate normal distribution of a matrix X ∈ R^{m×n} with mean μ and covariance matrix Σ, and write Σ = M(I_{mn}) when cov(X_ij, X_kl) = ε(i, j, k, l) − δ_ij δ_kl, with ε(i, j, k, l) symmetric in its arguments, where δ_ij = 1 if i = j and 0 otherwise. Let Wishart_m(ν, Σ) represent the Wishart distribution of an m × m matrix with expectation νΣ and ν degrees of freedom, and let Beta_m(ν/2, η/2) denote the multivariate beta distribution of an m × m matrix with degrees of freedom ν and η.

For any vector we use the subscripts j and |j to denote the j-th component and the first j components, respectively. Similarly, for any matrix, the subscript |j denotes the submatrix of the first j rows and columns. For any scalar b, let b⁺ = b if b > 0 and zero otherwise. For any symmetric matrix B, let B⁻ = B if B is negative definite and zero otherwise, let detr_j(B) be the sum of the determinants of all j × j principal minors of B, and define detr_0(B) = 1. We denote by B^{1/2} the square root of the matrix B, defined by B^{1/2}(B^{1/2})' = B. For any m × m symmetric matrix B and any multi-index κ = (k_1, k_2, ..., k_m), k_1 ≥ k_2 ≥ ... ≥ k_m, we denote by C_κ(B) the zonal polynomial corresponding to κ (see details in Muirhead (1982)), which satisfies

tr(B)^k = Σ_{|κ|=k} C_κ(B). (4)

For any real a and natural k, (a)_k is defined by (a)_k = a(a+1)···(a+k−1), (a)_0 = 1. For the multi-index κ = (k_1, k_2, ..., k_m) define

(a)_κ = Π_{i=1}^m (a − (i−1)/2)_{k_i}.

Finally, for any real a > (m−1)/2, Γ_m(a) denotes the multivariate gamma function, defined by

Γ_m(a) = π^{m(m−1)/4} Π_{i=1}^m Γ(a − (i−1)/2),

where Γ(·) denotes the usual gamma function.

2.2 Random Field Theory

Let Y = Y(t), t ∈ S ⊂ R^N, be a real valued isotropic random field, where S is a compact subset of R^N with a twice differentiable boundary ∂S. Denote the first derivative of Y by Ẏ and the N × N matrix of second-order partial derivatives of Y by Ÿ. For j > 0 define the j-dimensional Euler characteristic (EC) density as

ρ_j(y) = E(Ẏ_j⁺ det(−Ÿ_{|j−1}) | Ẏ_{|j−1} = 0, Y = y) θ_{|j−1}(0, y), (5)

where E(·) denotes mathematical expectation and θ_{|j−1}(·, ·) is the joint probability density function of Ẏ_{|j−1} and Y. For j = 0 define ρ_0(y) = P(Y ≥ y).

Let a_j = 2π^{j/2}/Γ(j/2) be the surface area of a unit (j−1)-sphere in R^j. Define μ_N(S) to be the Lebesgue measure of S and μ_j(S), j = 0, ..., N−1, to be proportional to the j-dimensional Minkowski functional, or intrinsic volume,

μ_j(S) = (1/a_{N−j}) ∫_{∂S} detr_{N−1−j}(C) dt,

where C denotes the inside curvature matrix of ∂S at the point t.
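Since the operator detr_j reappears throughout the paper (in the EC densities of Sections 4 and 5 and in the intrinsic volumes above), a small numerical sketch may help fix ideas. The following is our own illustration, not code from the paper, and the function name detr is hypothetical; it computes detr_j directly from its definition as a sum of determinants of principal minors.

```python
from itertools import combinations
import numpy as np

def detr(B, j):
    """detr_j(B): sum of the determinants of all j x j principal minors of B,
    with detr_0(B) = 1 by convention."""
    m = B.shape[0]
    if j == 0:
        return 1.0
    return sum(np.linalg.det(B[np.ix_(idx, idx)])  # principal minor on rows/cols idx
               for idx in combinations(range(m), j))

# Sanity check: the detr_j(B) are the coefficients of det(I + x B) in powers of x.
B = np.array([[2.0, 1.0], [1.0, 3.0]])
x = 0.5
lhs = sum(detr(B, j) * x**j for j in range(B.shape[0] + 1))
assert np.isclose(lhs, np.linalg.det(np.eye(2) + x * B))
```

The final assertion checks the standard fact that the detr_j(B) are the coefficients of the characteristic polynomial det(I + xB), which is a convenient way to validate the implementation.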

The expected EC χ of the excursion set of Y above the threshold y inside S, denoted A_y(Y, S), is given by

E(χ(A_y(Y, S))) = Σ_{j=0}^N μ_j(S) ρ_j(y).

Then the P-value of the maximum of Y inside S is well approximated by

P(max_{t∈S} Y(t) ≥ y) ≈ E(χ(A_y(Y, S))) (6)

(Adler 2000, Taylor, Takemura & Adler 2005).

In this paper Y is a function of independent RFs, each with the same distribution as the standard Gaussian RF Z, and the EC densities ρ_j(y) are calculated assuming unit variance of the derivative (1). If instead

Var(Ż) = v² I_N (7)

then t can be re-scaled by dividing by v to satisfy (1). This is equivalent to multiplying μ_j(S) by v^j, to give

P(max_{t∈S} Y(t) ≥ y) ≈ E(χ(A_y(Y, S))) = Σ_{j=0}^N μ_j(S) v^j ρ_j(y). (8)

For this reason all the subsequent theory for the EC densities will assume v = 1.
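To make the approximation (8) concrete, here is a minimal sketch (ours; the names ball_intrinsic_volumes and p_value_max are hypothetical) that combines the intrinsic volumes of a spherical search region, which have standard closed forms, with a supplied list of EC densities. The usage example plugs in the classical EC densities of a unit Gaussian RF; the Wilks's Λ densities of Section 5 could be used in exactly the same way.

```python
import numpy as np
from scipy.stats import norm

def ball_intrinsic_volumes(r):
    """Intrinsic volumes mu_0, ..., mu_3 of a solid ball of radius r in R^3
    (standard closed forms)."""
    return [1.0, 4.0 * r, 2.0 * np.pi * r**2, (4.0 / 3.0) * np.pi * r**3]

def p_value_max(y, mu, rho, v=1.0):
    """Approximate P(max Y >= y) by the expected EC (8):
    sum_j mu_j(S) v^j rho_j(y), where rho is a list of callables rho_j."""
    return sum(mu[j] * v**j * rho[j](y) for j in range(len(mu)))

# Usage with the classical EC densities of a unit-variance Gaussian RF:
rho_gaussian = [
    lambda y: norm.sf(y),                                        # rho_0
    lambda y: (2 * np.pi)**-1 * np.exp(-y**2 / 2),               # rho_1
    lambda y: (2 * np.pi)**-1.5 * y * np.exp(-y**2 / 2),         # rho_2
    lambda y: (2 * np.pi)**-2 * (y**2 - 1) * np.exp(-y**2 / 2),  # rho_3
]
print(p_value_max(4.5, ball_intrinsic_volumes(34.0), rho_gaussian, v=0.22))
```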

5 Λ = tr(m 1 (W, B))I N + tr(m 2 (W, B)) 1/2 H + tr(m 3 (W, B))P + tr(m 4 ( W, B))Q +A 1 M 5 (W, B)A 1 + A 1 M 6 (W, B)A 2 + A 2 M 7 (W, B)A 1 + A 2 M 8 (W, B)A 2, equality means equality in law, A 1, A 2 Normal N m 2(0, I Nm 2), H Normal N N (0, M (I N 2)), P W ishart N (ν m, I N ), Q W ishart N (η m, I N ), W W ishart m (ν + η, I m ), B Beta m ( 1 2 ν, 1 2 η), independently, and M i (W, B), i = 0,..., 8 are matrices that depends on B and W. Proof. For each j = 1,..., N we have Λ j = D U Λ (U, V) D tj U + D V Λ (U, V) D tj V. (10) According to Neudecker (1969) one is to able to find two matrix functions G 1 (U, V), G 2 (U, V), such that G 1, G 2 : R m m R m m R m m D U Λ (U, V) = vec(g 1 ), D V Λ (U, V) = vec(g 2 ). From Lemma 3.2 in Cao & Worsley (1999a) we make the substitutions independently. Thus, D tj U = vec(u 1/2 A U j + (U 1/2 A U j ) ), (11) D tj V = vec(v 1/2 A V j + (V 1/2 A V j ) ), (A U 1,..., A U N), (A V 1,..., A V N) Normal m mn (0, I m 2 N), Λ j = vec(g 1 ) vec(u 1/2 A U j + (U 1/2 A U j ) ) + vec(g 2 ) vec(v 1/2 A V j + (V 1/2 A) ) = tr(((g 1 ) + G 1 )U 1/2 A U j ) + tr(((g 2 ) + G 2 )V 1/2 A V j ) = tr(gu 1/2 A U j + FV 1/2 A V j ) = tr(c 1 1 A 1 j), G = (G 1 ) + G 1, F = (G 2 ) + G 2, C 1 = (GUG + FVF) 1/2, A 1 j = C 1 (GU 1/2 A U j + FV 1/2 A V j ), j = 1,..., N. It is easily verified from (9) that, conditional on U, V, A 1 j Normal m m (0, I m 2), which yields Λ = A 1 vec(c 1 1 ), 5

Using the fact that

W = U + V ~ Wishart_m(ν+η, I_m), B = (U+V)^{-1/2} U (U+V)^{-1/2} ~ Beta_m(ν/2, η/2),

independently (Olkin & Rubin 1962), we obtain the first identity by rewriting the matrix C_1^{-1} in terms of W and B by means of the substitutions

U = W^{1/2} B W^{1/2}, V = W^{1/2}(I_m − B) W^{1/2}. (12)

For the second identity we obtain from (10)

Λ̈_kj = (D_{t_j}U)' D²_{UU}Λ(U, V) D_{t_k}U + (D_{t_j}U)' D²_{VU}Λ(U, V) D_{t_k}V + (D_{t_j}V)' D²_{UV}Λ(U, V) D_{t_k}U + (D_{t_j}V)' D²_{VV}Λ(U, V) D_{t_k}V + (D_U Λ(U, V)) D²_{t_k t_j}U + (D_V Λ(U, V)) D²_{t_k t_j}V. (13)

From Lemma 3.2 in Cao & Worsley (1999a) we make the substitutions

D²_{t_k t_j}U = vec(−P̃_jk − P̃_kj + U^{1/2} H^U_kj + (U^{1/2} H^U_kj)' + (A^U_j)' A^U_k + (A^U_k)' A^U_j − 2δ_kj U), (14)
D²_{t_k t_j}V = vec(−Q̃_jk − Q̃_kj + V^{1/2} H^V_kj + (V^{1/2} H^V_kj)' + (A^V_j)' A^V_k + (A^V_k)' A^V_j − 2δ_kj V),

where

P̃ = (P̃_jk) ~ Wishart_{mN}(ν−m, I_{mN}), Q̃ = (Q̃_jk) ~ Wishart_{mN}(η−m, I_{mN}), H^U_kj, H^V_kj ~ Normal_{m×m}(0, M(I_{m²})).

The expressions (11) and (14) yield

(D_{t_j}U)'(D²_{UU}Λ)(D_{t_k}U) = vec(U^{1/2}A^U_j)'(I_{m²} + K_m)(D²_{UU}Λ)(I_{m²} + K_m) vec(U^{1/2}A^U_k), (15)
(D_{t_j}U)'(D²_{VU}Λ)(D_{t_k}V) = vec(U^{1/2}A^U_j)'(I_{m²} + K_m)(D²_{VU}Λ)(I_{m²} + K_m) vec(V^{1/2}A^V_k),
(D_{t_j}V)'(D²_{UV}Λ)(D_{t_k}U) = vec(V^{1/2}A^V_j)'(I_{m²} + K_m)(D²_{UV}Λ)(I_{m²} + K_m) vec(U^{1/2}A^U_k),
(D_{t_j}V)'(D²_{VV}Λ)(D_{t_k}V) = vec(V^{1/2}A^V_j)'(I_{m²} + K_m)(D²_{VV}Λ)(I_{m²} + K_m) vec(V^{1/2}A^V_k),

where K_m denotes the commutation matrix, and

D_U Λ (D²_{t_k t_j}U) = −2δ_kj vec(G_1)' vec(U) − vec(G)' vec(P̃_jk) + vec(G)' vec(U^{1/2} H^U_jk) + vec(G)' vec((A^U_j)' A^U_k),
D_V Λ (D²_{t_k t_j}V) = −2δ_kj vec(G_2)' vec(V) − vec(F)' vec(Q̃_jk) + vec(F)' vec(V^{1/2} H^V_jk) + vec(F)' vec((A^V_j)' A^V_k). (16)

We can now find a linear combination of A^U_j and A^V_j, j = 1, ..., N, that is independent of A^1_j, namely,

A^2_j = C_2(G^{-1} U^{-1/2} A^U_j − F^{-1} V^{-1/2} A^V_j), C_2 = ((GUG)^{-1} + (FVF)^{-1})^{-1/2}.

This implies that the matrices A^U_j, A^V_j can be written in terms of A^1_j and A^2_j by suitable (uniquely determined) expressions, which, when substituted into each of the right hand members of (15) and (16), give

Λ̈_kj = −2δ_kj tr(G_1 U + G_2 V) + tr(C_1^{-1} H̃_kj) − tr(G P̃_jk) − tr(F Q̃_jk) + (vec(A^1_j))' S_1 vec(A^1_k) + (vec(A^1_j))' S_2 vec(A^2_k) + (vec(A^2_j))' S_3 vec(A^1_k) + (vec(A^2_j))' S_4 vec(A^2_k), (17)

for certain matrices S_1, S_2, S_3, S_4 that depend only on U, V, and with H̃_kj given by

H̃_kj = C_1(G U^{1/2} H^U_jk + F V^{1/2} H^V_kj).

It can easily be shown that the matrix (tr(C_1^{-1} H̃_kj))_kj can be rewritten as

(tr(C_1^{-1} H̃_kj))_kj = tr(C_1^{-2})^{1/2} H, with H = tr(C_1^{-2})^{-1/2}(I_N ⊗ (vec(C_1^{-1}))')(vec(H̃_kj))_kj, (18)

which, conditional on U, V, is distributed as Normal_{N×N}(0, M(I_{N²})). On the other hand, noting that

(tr(G P̃_kj))_kj = (I_N ⊗ vec(G^{1/2})')(I_m ⊗ P̃_kj)_kj (I_N ⊗ vec(G^{1/2})),
(tr(F Q̃_kj))_kj = (I_N ⊗ vec(F^{1/2})')(I_m ⊗ Q̃_kj)_kj (I_N ⊗ vec(F^{1/2})),

with

(I_m ⊗ P̃_kj)_kj ~ Wishart_{m²N}(ν−m, I_{m²N}), (I_m ⊗ Q̃_kj)_kj ~ Wishart_{m²N}(η−m, I_{m²N}),

we define the matrices

P = (tr(G))^{-1}(I_N ⊗ vec(G^{1/2})')(I_m ⊗ P̃_kj)_kj (I_N ⊗ vec(G^{1/2})), (19)
Q = (tr(F))^{-1}(I_N ⊗ vec(F^{1/2})')(I_m ⊗ Q̃_kj)_kj (I_N ⊗ vec(F^{1/2})),

which are independently distributed as Wishart_N(ν−m, I_N) and Wishart_N(η−m, I_N), respectively. It is also easily verified that

(vec(A^1_j)' S_1 vec(A^1_k))_kj = A_1 S_1 A_1', (vec(A^1_j)' S_2 vec(A^2_k))_kj = A_1 S_2 A_2',
(vec(A^2_j)' S_3 vec(A^1_k))_kj = A_2 S_3 A_1', (vec(A^2_j)' S_4 vec(A^2_k))_kj = A_2 S_4 A_2',

with A_2 = (vec(A^2_1), ..., vec(A^2_N))' ~ Normal_{N×m²}(0, I_{m²N}) independently of A_1. Combining these last expressions with (18) and (19) and substituting into (17), we obtain

Λ̈ = −2 tr(G_1 U + G_2 V) I_N + tr(C_1^{-2})^{1/2} H − tr(G) P − tr(F) Q + A_1 S_1 A_1' + A_1 S_2 A_2' + A_2 S_3 A_1' + A_2 S_4 A_2'.

The substitutions (12) conclude the proof.

4 Euler Characteristic Densities

In this section we obtain expressions for the EC densities of the random field Λ. We begin with the following proposition.

Proposition 2 The following equalities hold:

Λ̇_{|j} = tr(M_2)^{1/2} z,

Λ̈_{|j} | (Λ̇_{|j} = 0) = tr(M_1) I_j + tr(M_2)^{1/2} H + tr(M_3) P + tr(M_4) Q + X_1 M_5 X_1' + X_1 M_6 X_2' + X_2 M_7 X_1' + X_2 M_8 X_2',

where

X_1 ~ Normal_{j×m²}(0, Σ ⊗ I_j), X_2 ~ Normal_{j×m²}(0, I_{jm²}), z ~ Normal_j(0, I_j), H ~ Normal_{j×j}(0, M(I_{j²})), P ~ Wishart_j(ν−m, I_j), Q ~ Wishart_j(η−m, I_j), W ~ Wishart_m(ν+η, I_m), B ~ Beta_m(ν/2, η/2),

independently, with Σ given by

Σ = I_{m²} − tr(M_2)^{-1} vec(M_0) vec(M_0)'.

Proof. Define the vector b and the matrix C by b = A_1^{|j} vec(M_0) and C = (A_1^{|j}, b), respectively. It is easily seen from the definitions of M_0 and M_2 in the theorem above that

vec(M_0)' vec(M_0) = tr(M_2),

which implies

b ~ Normal_j(0, tr(M_2) I_j).

The first equality follows by noting that Λ̇_{|j} = b. Moreover, it is easy to check that C ~ Normal_{j×(m²+1)}(0, Σ*), where

Σ* = [ I_{jm²}             vec(M_0) ⊗ I_j ]
     [ (vec(M_0))' ⊗ I_j   tr(M_2) I_j    ].

From standard results on the conditional multivariate normal distribution (Mardia, Kent & Bibby 1979) we obtain, independently of b,

A_1^{|j} | (b = 0) ~ Normal_{j×m²}(0, Σ_1), (20)

where

Σ_1 = I_{jm²} − (vec(M_0) ⊗ I_j)(tr(M_2) I_j)^{-1}(vec(M_0)' ⊗ I_j)
    = I_{jm²} − tr(M_2)^{-1}(vec(M_0) vec(M_0)') ⊗ I_j
    = (I_{m²} − tr(M_2)^{-1} vec(M_0) vec(M_0)') ⊗ I_j.

According to Theorem 1 the condition (Λ̇_{|j} = 0) is equivalent to b = 0. This implies, by (20),

Λ̈_{|j} | (Λ̇_{|j} = 0) = tr(M_1) I_j + tr(M_2)^{1/2} H + tr(M_3) P + tr(M_4) Q + X_1 M_5 X_1' + X_1 M_6 X_2' + X_2 M_7 X_1' + X_2 M_8 X_2',

where X_1 ~ Normal_{j×m²}(0, Σ ⊗ I_j), X_2 ~ Normal_{j×m²}(0, I_{jm²}) and

Σ = I_{m²} − tr(M_2)^{-1} vec(M_0) vec(M_0)', (21)

which concludes the proof.

The next theorem is the main result of this section. The EC densities are obtained only for j = 1, 2, 3, which are the most important cases in practical applications.

Theorem 3 The EC densities ρ_j(λ), j = 1, 2, 3, of the random field Λ are given by

ρ_1(λ) = (2π)^{-1/2} θ_0(λ) φ_1(λ), ρ_2(λ) = (2π)^{-1} θ_0(λ) φ_2(λ), ρ_3(λ) = (2π)^{-3/2} θ_0(λ) φ_3(λ),

where the φ_j(λ) are expressions of the type

φ_j(λ) = E(f_j(B, W) | Λ(B, W) = λ)

for certain scalar functions f_j(B, W), and θ_0(λ) denotes the probability density function of Λ.

Proof. We evaluate the expectations in (5) by first conditioning on Λ = λ, W and B, and then taking expectations over W, B conditional on Λ(B, W) = λ. That is,

ρ_j(λ) = E( E(Λ̇_j⁺ | Λ̇_{|j−1} = 0, Λ = λ; B, W) E(det(−Λ̈_{|j−1}) | Λ̇_{|j−1} = 0, Λ = λ; B, W) θ_{|j−1}(0; λ, B, W) | Λ(B, W) = λ ) θ_0(λ), (22)

where the corresponding derivatives are rewritten according to Proposition 2. From a lemma in Adler (1981) and Proposition 2 we have

E(Λ̇_j⁺ | Λ̇_{|j−1} = 0, Λ = λ; B, W) = (2π)^{-1/2} tr(M_2)^{1/2}, (23)

and the density of Λ̇_{|j−1} at zero, conditional on Λ = λ and B, W, is given by

θ_{|j−1}(0; λ, B, W) = (2π tr(M_2))^{-(j−1)/2}. (24)

According to Proposition 2, Lemma 7 and Lemma 8 we obtain

E(det(−Λ̈_{|0}) | Λ̇_{|0} = 0, Λ = λ; B, W) = 1, (25)

E(det(−Λ̈_{|1}) | Λ̇_{|1} = 0, Λ = λ; B, W) = (ν−m) tr(M_3) + (η−m) tr(M_4) + tr(M_1 + M_5 Σ + M_8), (26)

E(det(−Λ̈_{|2}) | Λ̇_{|2} = 0, Λ = λ; B, W) = (ν−m)(ν−m−1)/2 · tr(M_3)² + (η−m)(η−m−1)/2 · tr(M_4)² + tr(M_1)² + (ν−m)(η−m) tr(M_3) tr(M_4) + [(ν−m) tr(M_3) + (η−m) tr(M_4)] tr(M_1) + tr(M_5 Σ + M_8)² + [(ν−m) tr(M_3) + (η−m) tr(M_4) + tr(M_1)] tr(M_5 Σ + M_8) − tr(M_5 Σ M_5 Σ + M_8² + M_6 M_7 Σ + M_7 M_6 Σ + M_2), (27)

respectively, with Σ given by (21). Using (23), (24) and (25)–(27) we finally obtain

ρ_1(λ) = (2π)^{-1/2} θ_0(λ) φ_1(λ), ρ_2(λ) = (2π)^{-1} θ_0(λ) φ_2(λ), ρ_3(λ) = (2π)^{-3/2} θ_0(λ) φ_3(λ),

where

φ_1(λ) = E(tr(M_2)^{1/2}),

φ_2(λ) = E((ν−m) tr(M_3) + (η−m) tr(M_4) + tr(M_1) + tr(M_5 Σ + M_8)),

φ_3(λ) = E(tr(M_2)^{1/2} ( (ν−m)(ν−m−1)/2 · tr(M_3)² + (η−m)(η−m−1)/2 · tr(M_4)² + (ν−m)(η−m) tr(M_3) tr(M_4) + tr(M_1)² + tr(M_5 Σ + M_8)² + ((ν−m) tr(M_3) + (η−m) tr(M_4)) tr(M_1) + ((ν−m) tr(M_3) + (η−m) tr(M_4) + tr(M_1)) tr(M_5 Σ + M_8) − tr(M_5 Σ M_5 Σ + M_8² + M_6 M_7 Σ + M_7 M_6 Σ + M_2) )),

and all the expectations above are taken conditional on Λ(B, W) = λ.

5 Wilks's Λ Random Field

In this section we apply the previous results to obtain the EC densities of the Wilks's Λ random field.

Definition 4 Let U(t), V(t) be two independent m-dimensional Wishart RFs with ν and η degrees of freedom, respectively. The Wilks's Λ RF is then defined as

Λ(t) = det(U(t)) / det(U(t) + V(t)).

Note that although the Wilks's Λ statistic is well defined at a point t provided ν + η ≥ m, this may not be true for all points inside a compact subset of R^N. The reason is that both the numerator and denominator of Wilks's Λ could be zero. For the particular case m = 1 it was shown in Worsley (1994) that ν + η ≥ N avoids this, with probability one. For general m, following the same ideas as in Cao & Worsley (1999a), we obtain ν + η ≥ m + N − 1 as a necessary and sufficient condition for Λ to be well defined.

When used in hypothesis testing problems, one rejects the null hypothesis for small values of the Wilks's Λ statistic. So, in what follows, we shall work with the RF

Y(t) = −log(Λ(t)).

It is well known that for any symmetric matrix function U(t) the identities

det(U)̇_j = det(U) tr(U^{-1} U̇_j),
det(U)̈_kj = det(U)^{-1} det(U)̇_k det(U)̇_j − det(U) tr(U^{-1} U̇_k U^{-1} U̇_j − U^{-1} Ü_kj)

hold.
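The first of these identities is easy to verify numerically; the following sketch (our illustration, with hypothetical names) checks d log det(U)/ds = tr(U^{-1} dU/ds) by a central finite difference along a smooth path of positive definite matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))
C = C + C.T  # symmetric direction of differentiation

def U(s):
    """A smooth path of symmetric positive definite matrices."""
    return A @ A.T + 5.0 * np.eye(3) + s * C

s, h = 0.3, 1e-6
numeric = (np.linalg.slogdet(U(s + h))[1] - np.linalg.slogdet(U(s - h))[1]) / (2 * h)
analytic = np.trace(np.linalg.inv(U(s)) @ C)  # det(U)_j / det(U) = tr(U^{-1} U_j)
assert np.isclose(numeric, analytic, rtol=1e-5)
```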

Theorem 1 then gives

Ẏ = 2 A_1 vec(M_0^{1/2}),
Ÿ = 2(tr(M_0)^{1/2} H − tr(M_0) P + tr(M_4) Q + A_1 M_5 A_1' − A_1 M_6 A_2' − A_2 M_7 A_1'),

where

M_0 = (B^{-1} − I_m) W^{-1}, M_4 = W^{-1}, M_5 = M_0^{1/2} ⊗ M_0^{1/2}, M_6 = M_0^{1/2} ⊗ W^{-1/2}, M_7 = M_6'.

Theorem 3 yields

ρ_1(y) = (2π)^{-1/2} θ_0(y) E(tr(M_0)^{1/2} | det(B) = e^{-y}), (28)

ρ_2(y) = 2(2π)^{-1} θ_0(y) E((ν−m) tr(M_0) − (η−m) tr(M_4) − tr(M_5 Σ) | det(B) = e^{-y}), (29)

ρ_3(y) = 4(2π)^{-3/2} θ_0(y) E(tr(M_0)^{1/2} ( (ν−m)(ν−m−1)/2 · tr(M_0)² + (η−m)(η−m−1)/2 · tr(M_4)² + tr(M_5 Σ)² − (ν−m)(η−m) tr(M_0) tr(M_4) + ((η−m) tr(M_4) − (ν−m) tr(M_0)) tr(M_5 Σ) − tr(M_5 Σ M_5 Σ + 2 M_6 M_7 Σ) − tr(M_0) ) | det(B) = e^{-y}), (30)

where

Σ = I_{m²} − tr(M_0)^{-1} vec(M_0^{1/2}) vec(M_0^{1/2})'.

There is no closed form expression for θ_0(y), the density of Y = −log(Λ). A host of approximations are available (Anderson 1984), but it is numerically easier to use Fourier methods, as follows. We can write

Y = Σ_{i=1}^m Y_i, with exp(−Y_i) ~ Beta_1((ν+1−i)/2, η/2) independently, i = 1, ..., m

(Anderson 1984). Then the product of the Fourier transforms of the densities of the Y_i is the Fourier transform of the density of Y. Inverting gives the desired density θ_0(y) of Y.
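A minimal numerical version of this Fourier inversion might look as follows (our sketch, not the authors' code; the grid parameters dy and ymax are hypothetical choices). Each Y_i = −log of a Beta_1((ν+1−i)/2, η/2) variable has an explicit density by a change of variables, and the density of their sum is obtained by FFT-based convolution.

```python
import numpy as np
from scipy.stats import beta

def theta0(nu, eta, m, dy=1e-3, ymax=40.0):
    """Density of Y = -log(Lambda) = sum of Y_i with
    exp(-Y_i) ~ Beta_1((nu + 1 - i)/2, eta/2), via FFT convolution."""
    y = np.arange(0.0, ymax, dy)
    # density of each Y_i by change of variables: f(y) = f_Beta(exp(-y)) * exp(-y)
    comps = [beta.pdf(np.exp(-y), 0.5 * (nu + 1 - i), 0.5 * eta) * np.exp(-y)
             for i in range(1, m + 1)]
    n = 1 << int(np.ceil(np.log2(m * len(y))))  # zero-pad to avoid wrap-around
    F = np.ones(n, dtype=complex)
    for c in comps:
        F *= np.fft.fft(c, n) * dy
    f = np.fft.ifft(F).real / dy  # back to a density on the grid
    return y, f[:len(y)]

y, f = theta0(nu=7, eta=4, m=2)
print(np.trapz(f, y))  # should integrate to approximately 1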

5.1 EC densities

Providing closed form expressions for the expectations that appear in formulas (28)–(30) is a very difficult task. In this subsection we propose a practical way to approximate such expectations. Specifically, we use series expansions for negative or fractional powers of tr(M_0) (see Shapiro & Gross (1981)), that is,

tr(M_0)^r ≈ E(tr(M_0))^r + r E(tr(M_0))^{r−1}(tr(M_0) − E(tr(M_0))) + (1/2) r(r−1) E(tr(M_0))^{r−2}(tr(M_0) − E(tr(M_0)))² + ··· .

However, for simplicity of exposition, we just describe how to deal with the simplest approximation, namely tr(U^{-1})^r ≈ E(tr(U^{-1}))^r.

For the EC density ρ_1(y) we have

ρ_1(y) = (2π)^{-1/2} θ_0(y) E(tr(M_0) | A)^{1/2},

where A is the event {det(B) = e^{-y}}. According to Letac & Massam (2001),

E(tr(M_0) | A) = E(tr(B^{-1} − I_m) | A)/q, q = ν + η − m − 1.

Now, by expressing tr(B^{-1}) and det(B) in terms of the eigenvalues of B, we obtain

E(tr(B^{-1}) | A) = E(tr(B*^{-1})) + E(det(B*)) e^{y},

for some matrix B* ~ Beta_{m−1}(ν/2, η/2), namely one of the principal minors of order m−1 of B. Then, by (34) and (35),

c_1(y) := E(tr(B^{-1}) | A) = (m−1)(q+1)/(ν−m) + p_{m−1}(1, 0) e^{y},

where

p_m(a, b) = Γ_m(ν/2 + a) Γ_m((ν+η)/2 + b) / (Γ_m(ν/2 + b) Γ_m((ν+η)/2 + a)) for m ≥ 1, and p_m(a, b) = 0 for m ≤ 0.

Therefore

ρ_1(y) = (2π)^{-1/2} θ_0(y) k_1(y)^{1/2}, where k_1(y) = (c_1(y) − m)/q.
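The constants p_m(a, b) are ratios of multivariate gamma functions, so they are cheap to evaluate; below is a small sketch (ours; the function names lgamma_m and p are hypothetical) using log-gamma functions for numerical stability.

```python
import numpy as np
from scipy.special import gammaln

def lgamma_m(m, a):
    """log Gamma_m(a) = log( pi^{m(m-1)/4} * prod_{i=1}^m Gamma(a - (i-1)/2) )."""
    return (m * (m - 1) / 4.0) * np.log(np.pi) + sum(
        gammaln(a - (i - 1) / 2.0) for i in range(1, m + 1))

def p(m, a, b, nu, eta):
    """p_m(a, b) for given degrees of freedom nu, eta; zero for m <= 0."""
    if m <= 0:
        return 0.0
    return np.exp(lgamma_m(m, nu / 2 + a) + lgamma_m(m, (nu + eta) / 2 + b)
                  - lgamma_m(m, nu / 2 + b) - lgamma_m(m, (nu + eta) / 2 + a))

# Example: c_1(y) and k_1(y) for the rho_1 approximation above.
nu, eta, m = 7, 4, 2
q = nu + eta - m - 1
c1 = lambda y: (m - 1) * (q + 1) / (nu - m) + p(m - 1, 1, 0, nu, eta) * np.exp(y)
k1 = lambda y: (c1(y) - m) / q
```

With these, c_1(y) and hence k_1(y) follow directly from the formulas above.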

In a similar way, we use the approximation

E(tr(M_5 Σ)) ≈ E(tr(M_0)) − E(tr(M_0))^{-1} E(tr(M_0²))

to obtain

ρ_2(y) = 2(2π)^{-1} θ_0(y) ((ν−m−1) k_1(y) − (η−m) m q^{-1} + k_1(y)^{-1} k_2(y)),

where

k_2(y) = (c_3(y) − 2m c_1(y) + m²)/(q³ − q) + (c_2(y) − 2c_1(y) + m)/(q² − 1),

c_2(y) := E(tr(B^{-2}) | A) = E(tr(B*^{-2})) + E(det(B*)²) e^{2y},

c_3(y) := E(tr(B^{-1})² | A) = E(tr(B*^{-1})²) + 2 E(tr(B*^{-1}) det(B*)) e^{y} + E(det(B*)²) e^{2y}.

The expressions for c_2(y) and c_3(y) are obtained from (4), (34) and (35):

c_2(y) = p_{m−1}(2, 0) e^{2y} + [(q+1)(q+3)/((ν−m)(ν−m+2))] C_(2)(I_{m−1}) − [(q+1)(q+2)/(2(ν−m)(ν−m+1))] C_(1,1)(I_{m−1}),

c_3(y) = p_{m−1}(2, 0) e^{2y} + (m−1) p_{m−2}(1, 0) e^{y} + [(q+1)(q+3)/((ν−m)(ν−m+2))] C_(2)(I_{m−1}) + [(q+1)(q+2)/((ν−m)(ν−m+1))] C_(1,1)(I_{m−1}).

Note that we have used (Letac & Massam 2001)

E(tr(M_0²) | A) = E(tr(B^{-1} − I_m)² | A)/(q³ − q) + E(tr((B^{-1} − I_m)²) | A)/(q² − 1).

Finally, from Letac & Massam (2001) and Graczyk, Letac & Massam (2003) we have

E(tr(M_0)² | A) = [q E(tr(B^{-1} − I_m)² | A) − E(tr((B^{-1} − I_m)²) | A)]/(q³ − q),

k_5(y) := E(tr(M_4) tr(M_0) | A) = (mq + 1) E(tr(B^{-1} − I_m) | A)/(q³ − q),

k_6(y) := E(tr(M_4) tr(M_0²) | A) = [(mq + 2) E(tr(B^{-1} − I_m)² | A) + (mq² + 2q − 2m) E(tr((B^{-1} − I_m)²) | A)]/(q(q² − 1)(q² − 4)),

k_7(y) := E(tr(M_0² M_4) | A) = [2(q + m) E(tr(B^{-1} − I_m)² | A)]/(q(q² − 1)(q² − 4)) + [(q + m) E(tr((B^{-1} − I_m)²) | A)]/((q² − 1)(q² − 4)),

which implies

ρ_3(y) = 4(2π)^{-3/2} θ_0(y) k_1(y)^{1/2} { (ν−m)(ν−m−1)/2 · k_3(y) + (η−m)(η−m−1)/2 · k_4 − (ν−m)(η−m) k_5(y) + (ν−m)[k_3(y) − k_2(y)] − (η−m)[k_5(y) − k_1(y)^{-1} k_6(y)] + [k_3(y) − k_2(y)] − 2[k_5(y) − k_1(y)^{-1} k_7(y)] − k_1(y) },

where

k_3(y) = (c_3(y) − 2m c_1(y) + m²)/(q² − 1) − (c_2(y) − 2c_1(y) + m)/(q³ − q), k_4 = (m²q − m)/(q³ − q),

and k_5(y), k_6(y), k_7(y) are evaluated by substituting E(tr(B^{-1} − I_m) | A) = c_1(y) − m, E(tr(B^{-1} − I_m)² | A) = c_3(y) − 2m c_1(y) + m² and E(tr((B^{-1} − I_m)²) | A) = c_2(y) − 2c_1(y) + m.

5.2 Simulation Results

In this subsection we show some simulation results validating the previous formulae. We generated 200 Y = −log(Λ) RFs on a 32³ voxel rectilinear lattice as follows. We first generated lattices of independent standard Gaussian random variables, then smoothed them with a Gaussian shaped filter of standard deviation σ = 3.2 voxels, which gives v = 1/(√2 σ) = 0.22 as the standard deviation of the spatial derivative (Worsley, Marrett, Neelin, Vandal, Friston & Evans 1996). The Wilks's Λ RF was generated by

Y = −log( det(Z_1'Z_1) / det(Z_1'Z_1 + Z_2'Z_2) ), (31)

where Z_1 and Z_2 are ν × m and η × m matrices of independent smoothed Gaussian random fields as above. The variance of the derivative of the component random fields is then v² ≈ 0.05.
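A compressed version of this simulation (our sketch, not the original code; all variable names are ours) generates the smoothed component fields, forms Y = −log Λ via (31) at every voxel, and thresholds it.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smoothed_fields(shape, n_fields, sigma, rng):
    """Independent Gaussian RFs smoothed by a Gaussian filter of sd sigma
    voxels, rescaled to (approximately) unit variance; the sd of the
    spatial derivative is then v = 1/(sqrt(2) * sigma)."""
    z = rng.standard_normal((n_fields,) + shape)
    z = np.stack([gaussian_filter(f, sigma, mode='wrap') for f in z])
    return z / z.std()

rng = np.random.default_rng(1)
shape, sigma, nu, eta, m = (32, 32, 32), 3.2, 7, 4, 2

Z1 = smoothed_fields(shape, nu * m, sigma, rng).reshape((nu, m) + shape)
Z2 = smoothed_fields(shape, eta * m, sigma, rng).reshape((eta, m) + shape)

# U = Z1'Z1 and V = Z2'Z2 at every voxel, then Y from (31)
U = np.einsum('ik...,il...->...kl', Z1, Z1)
V = np.einsum('ik...,il...->...kl', Z2, Z2)
Y = np.linalg.slogdet(U + V)[1] - np.linalg.slogdet(U)[1]  # -log(det U / det(U+V))

A_y = Y >= 2.0  # excursion set at threshold y = 2
print(A_y.mean())
```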

Figure 1: Euler characteristic of the excursion set of Y = −log(Λ) for ν = 7, η = 4 and m = 2, sampled on a 32³ lattice smoothed by a Gaussian filter with standard deviation 6.37 voxels (plot of simulated and theoretical expected EC against threshold).

The EC χ(A_y) of the excursion sets A_y was calculated for each of the 200 Y RFs at y = 0, 0.1, 0.2, ..., 6.0. Then the average of the measurements χ(A_y) was used as an estimator of E(χ(A_y)). This value and the theoretical approximation obtained in the subsection above are plotted in Figure 1 for ν = 7, η = 4, and m = 2. Note that for thresholds greater than 2 both the theoretical approximation and the simulated values are very close.

The P-value approximation (6) based on the expected EC, the (uncorrected) P-value at a single voxel, and the Bonferroni P-value are plotted in Figure 2. The parameter values were chosen to match the example in Section 6: m = 3, ν = 31 and η = 1, 2, 3, 4. The parameter set S was a 3D ball with radius 68mm sampled on a 2mm voxel lattice. The data was smoothed by a Gaussian filter with standard deviation 5.65mm, giving the corresponding v from (7). In the first plot, with η = 1, Wilks's Λ is equivalent to Hotelling's T² = ν(exp(Y) − 1). Exact results for the expected EC of Hotelling's T² from Cao & Worsley (1999a) are added to the plot. We can see that these are in reasonable agreement with our Wilks's Λ approximation, which appears to be too liberal by a factor of 3 to 4.

Figure 2 (four panels: m = 3, ν = 31 and η = 1, 2, 3, 4, each plotting P(max Y ≥ y) against the threshold y = −log Wilks's Λ): P-values of the maximum of Y = −log(Λ) over the brain, approximated by a 3D ball with radius 68mm sampled on a 2mm voxel lattice. The data was smoothed by a Gaussian filter with standard deviation 5.65mm. EC = Euler characteristic approximation (6), Bon = Bonferroni upper bound, 1 vox = uncorrected P-value at a single voxel (lower bound), T² = exact expected EC for the equivalent Hotelling's T² (η = 1 only).

6 Application

6.1 Multivariate Linear Models

We apply our results to a multivariate linear model. Suppose we have n independent RFs of multivariate observations y_i(t) ∈ R^m, i = 1, ..., n, t ∈ S ⊂ R^N, and a multivariate linear model (Cao & Worsley 1999a):

y_i(t)' = x_i'β(t) + ε_i(t)'Σ(t)^{1/2}, (32)

where x_i is a p-vector of known regressors, and β(t) is an unknown p × m coefficient matrix. The error ε_i(t) is an m-vector of independent zero mean, unit variance isotropic Gaussian components with the same spatial correlation structure, and Var(y_i(t)) = Σ(t) is an unknown m × m matrix. We can now detect how the regressors are related to the multivariate data at point t by testing contrasts in the rows of β(t). Classical multivariate test statistics evaluated at each point t then form a random field.

Let β̂(t) be the usual least-squares estimate of β(t). The Wilks's Λ random field is defined in terms of two independent Wishart random fields. The first is the error sum of squares matrix

U(t) = Σ_{i=1}^n (y_i(t)' − x_i'β̂(t))'(y_i(t)' − x_i'β̂(t)) ~ Wishart_m(Σ(t), ν),

where ν = n − p. Let X = (x_1', ..., x_n')' be the design matrix. The second is the regression sum of squares matrix for an η × p matrix of contrasts C,

V(t) = (Cβ̂(t))'(C(X'X)^{-1}C')^{-1}(Cβ̂(t)) ~ Wishart_m(Σ(t), η),

under the null hypothesis of no effect, Cβ = 0. In Wilks's Λ (31), Σ(t) cancels, so under the null hypothesis (31) is a Wilks's Λ random field.
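At a single voxel, U(t), V(t) and Y = −log Λ(t) have simple least-squares expressions; the following sketch (ours; the function name and toy data are hypothetical) computes them for a given design matrix X and contrast matrix C, matching the formulas above.

```python
import numpy as np

def wilks_lambda_voxel(Yv, X, C):
    """U, V and -log(Wilks's Lambda) at one voxel.

    Yv : (n, m) multivariate observations y_i at this voxel
    X  : (n, p) design matrix
    C  : (eta, p) contrast matrix, testing C beta = 0
    """
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ Yv                     # least-squares estimate
    resid = Yv - X @ beta_hat
    U = resid.T @ resid                               # error SSQ: Wishart_m(Sigma, n - p)
    CB = C @ beta_hat
    V = CB.T @ np.linalg.inv(C @ XtX_inv @ C.T) @ CB  # regression SSQ for the contrasts
    Y = np.linalg.slogdet(U + V)[1] - np.linalg.slogdet(U)[1]
    return U, V, Y

# Toy usage mimicking Section 6.2: n = 36 subjects, m = 3, group contrast
rng = np.random.default_rng(2)
n, m = 36, 3
X = np.column_stack([np.ones(n), np.r_[np.ones(17), np.zeros(19)]])
U, V, Y = wilks_lambda_voxel(rng.standard_normal((n, m)), X, np.array([[0.0, 1.0]]))
print(Y)
```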

6.2 Brain Shape Analysis

As an illustration of the methods, we apply the P-value approximations for Wilks's Λ to a data set on non-missile trauma (Tomaiuolo, Worsley, Lerch, Di Paulo, Carlesimo, Bonanni, Caltagirone & Paus 2005) that was analysed in a similar way in Worsley et al. (2004). The subjects were 17 patients with non-missile brain trauma who were in a coma for 3–14 days. MRI images were taken after the trauma, and the multivariate data were the N = 3 component vector deformations needed to warp the MRI images to an atlas standard, sampled on a 2mm voxel lattice. The same data were also collected on a group of 19 age and sex matched controls. Damage is expected in white matter areas, so the search region S was defined as the voxels where the smoothed average control-subject white matter density exceeded 5%. For calculating the intrinsic volumes, this was approximated by a sphere with the same volume, 1.31 litres, which is slightly liberal for a non-spherical search region. The effective v from (7) was averaged over the search region.

The first analysis was to look for brain damage by comparing the deformations of the 17 trauma patients with those of the 19 controls, so the sample size is n = 36. We are looking at a single contrast, the difference between trauma and controls, so η = 1 and the residual degrees of freedom is ν = 34. In this case Wilks's Λ is Hotelling's T². The P = 0.05 threshold, found using the EC densities in Cao & Worsley (1999a) and by equating (8) to 0.05 and solving for y, was y = 54.0. The thresholded data, together with the estimated contrast (mean trauma minus control deformations), is shown in Figure 3(a). A large region near the corpus callosum appears to be damaged. The nature of the damage, judged by the direction of the arrows, is away from the center (see Figure 3(b)). This can be interpreted as expansion of the ventricles or, more likely, atrophy of the surrounding white matter, which causes the ventricle/white matter boundary to move outwards.

We might also be interested in functional anatomical connectivity: are there any regions of the brain whose shape (as measured by the deformations) is correlated with shape at a reference point? In other words, the explanatory variables are the deformations at a pre-selected reference point, and the test statistic is Wilks's Λ. We chose as the reference the point with maximum Hotelling's T² for damage, marked by axis lines in Figure 3. Figure 3(c) shows the resulting −log Wilks's Λ field above the P = 0.05 threshold of 1.54 for the combined trauma and control data sets, removing separate means for both groups (ν = 31, η = 3). Obviously there is strong correlation with points near the reference, due to the smoothness of the data. The main feature is the strong correlation with contralateral points, indicating that brain anatomy tends to be symmetric.

A more interesting question is whether the correlations observed in the control subjects are modified by the trauma (Friston, Büchel, Fink, Morris, Rolls & Dolan 1997). In other words, is there any evidence for an interaction between group and reference vector deformations? To do this, we simply add another three covariates to the linear model whose values are the reference vector deformations for the trauma patients, and the negative of the reference vector deformations for the control subjects. The resulting −log Wilks's Λ field for testing for these three extra covariates, thresholded at 1.72 (P = 0.05, η = 3, ν = 28), is shown in Figure 3(d). Apart from changes in the neighbourhood of the reference point, there is some evidence of a change in correlation at a location in the contralateral side, slightly anterior. Looking at the maximum canonical correlations in the two groups separately, we find that the correlation at this location has increased to 0.927 in the trauma group, perhaps indicating that the damage is strongly bilateral. These results are in close agreement with those in Worsley et al. (2004), which used Roy's maximum root rather than Wilks's Λ.

A Appendix

Lemma 5 (Lemma A.2 in Worsley (1994)) Let H ~ Normal_{j×j}(0, M(I_j)), let h be a fixed scalar, and let A be a fixed symmetric j × j matrix. Then

E(det(A + hH)) = Σ_{i=0}^{⌊j/2⌋} [(−1)^i (2i)!/(2^i i!)] h^{2i} detr_{j−2i}(A).

Lemma 6 Let

P ~ Wishart_j(ν, I_j), Q ~ Wishart_j(η, I_j), X_1 ~ Normal_{j×m}(0, Σ ⊗ I_j), X_2 ~ Normal_{j×m}(0, I_{jm}),

independently, let a, b, c be fixed scalars, and let C_1, C_2, C_3, C_4 be fixed m × m matrices. Then

E(detr_i(aP + bQ + cI_j + X_1 C_1 X_1' + X_1 C_2 X_2' + X_2 C_3 X_1' + X_2 C_4 X_2'))
= Σ_{k=0}^i [j!/(j−i+k)!] Σ_{r=0}^{i−k} (ν choose i−k−r)(η choose r) a^{i−k−r} b^r Σ_{l=0}^k (j−l choose k−l) c^{k−l} E(detr_l(X_1 C_1 X_1' + X_1 C_2 X_2' + X_2 C_3 X_1' + X_2 C_4 X_2')).

Figure 3: Shape analysis of the non-missile brain trauma data. (a) Trauma minus control average deformations (arrows and colour bar), sampled every 6mm inside the brain, with the Hotelling's T² field for significant group differences (threshold y = 54.0, P = 0.05). The reference point of maximum Hotelling's T² is marked by the intersection of the three axes. (b) Close-up of (a) showing that the damage is an outward movement of the anatomy, either due to swelling of the ventricles or atrophy of the surrounding white matter. (c) Regions of effective anatomical connectivity with the reference point, assessed by the Wilks's Λ field (threshold y = 1.54, P = 0.05). The reference point is connected with its neighbours (due to smoothness) and with contralateral regions (due to symmetry). (d) Regions where the connectivity is different between the trauma and control groups, assessed by the Wilks's Λ field (threshold y = 1.72, P = 0.05). The small region in the contralateral hemisphere (right) is more correlated in the trauma group than in the control group.

Proof. Holding X_1 and X_2 fixed and using a lemma in Worsley (1994), we obtain

E(detr_i(aP + bQ + cI_j + X_1 C_1 X_1' + X_1 C_2 X_2' + X_2 C_3 X_1' + X_2 C_4 X_2'))
= Σ_{k=0}^i E(detr_{i−k}(aP + bQ)) detr_k(cI_j + X_1 C_1 X_1' + X_1 C_2 X_2' + X_2 C_3 X_1' + X_2 C_4 X_2')
= Σ_{k=0}^i [j!/(j−i+k)!] Σ_{r=0}^{i−k} (ν choose i−k−r)(η choose r) a^{i−k−r} b^r detr_k(cI_j + X_1 C_1 X_1' + X_1 C_2 X_2' + X_2 C_3 X_1' + X_2 C_4 X_2').

Taking expectations over X_1, X_2 in the equality above, we have

E(detr_i(aP + bQ + cI_j + X_1 C_1 X_1' + X_1 C_2 X_2' + X_2 C_3 X_1' + X_2 C_4 X_2'))
= Σ_{k=0}^i [j!/(j−i+k)!] Σ_{r=0}^{i−k} (ν choose i−k−r)(η choose r) a^{i−k−r} b^r Σ_{l=0}^k (j−l choose k−l) c^{k−l} E(detr_l(X_1 C_1 X_1' + X_1 C_2 X_2' + X_2 C_3 X_1' + X_2 C_4 X_2')),

which ends the proof.

Lemma 7 Let

P ~ Wishart_j(ν, I_j), Q ~ Wishart_j(η, I_j), X_1 ~ Normal_{j×m}(0, Σ ⊗ I_j), X_2 ~ Normal_{j×m}(0, I_{jm}), H ~ Normal_{j×j}(0, M(I_j)),

independently, let a, b, c, h be fixed scalars, and let C_1, C_2, C_3, C_4 be fixed m × m matrices. Then

E(det(aP + bQ + cI_j + X_1 C_1 X_1' + X_1 C_2 X_2' + X_2 C_3 X_1' + X_2 C_4 X_2' + hH))
= Σ_{i=0}^{⌊j/2⌋} [(−1)^i (2i)!/(2^i i!)] h^{2i} Σ_{k=0}^{j−2i} [j!/(2i+k)!] Σ_{r=0}^{j−2i−k} (ν choose j−2i−k−r)(η choose r) a^{j−2i−k−r} b^r Σ_{l=0}^k (j−l choose k−l) c^{k−l} E(detr_l(X_1 C_1 X_1' + X_1 C_2 X_2' + X_2 C_3 X_1' + X_2 C_4 X_2')).

Proof. Holding P, Q, X_1, X_2 fixed and using Lemma 5, we obtain

E(det(aP + bQ + cI_j + X_1 C_1 X_1' + X_1 C_2 X_2' + X_2 C_3 X_1' + X_2 C_4 X_2' + hH))
= Σ_{i=0}^{⌊j/2⌋} [(−1)^i (2i)!/(2^i i!)] h^{2i} detr_{j−2i}(aP + bQ + cI_j + X_1 C_1 X_1' + X_1 C_2 X_2' + X_2 C_3 X_1' + X_2 C_4 X_2').

Now, taking expectations and using Lemma 6, we have the desired result.

Lemma 8 Let X_1 ~ Normal_{j×m}(0, Σ ⊗ I_j), X_2 ~ Normal_{j×m}(0, I_{jm}), and let C_1, C_2, C_3, C_4 be fixed symmetric m × m matrices. Then

E(det(X_1 C_1 X_1' + X_1 C_2 X_2' + X_2 C_3 X_1' + X_2 C_4 X_2')) =
  tr(C_1 Σ + C_4), for j = 1,
  tr(C_1 Σ + C_4)² − tr(C_1 Σ C_1 Σ + C_4² + C_2 C_3 Σ + C_3 C_2 Σ), for j = 2.

Proof. It follows from the fact that the rows of X_1 and X_2 are independent vectors from Normal_m(0, Σ) and Normal_m(0, I_m), respectively, together with the well-known identity

E(x_i x_j x_k x_l) = σ_ij σ_kl + σ_ik σ_jl + σ_il σ_jk, i, j, k, l = 1, ..., r,

where σ_ij = E(x_i x_j).

Lemma 9 If κ = (k_1, ..., k_m), |κ| = k, then for a > k_1 + (m+1)/2,

∫_{0<X<I_m} det(X)^{a−(m+1)/2} det(I_m − X)^{b−(m+1)/2} C_κ(X^{-1}) dX = (−1)^k [(−(a+b) + (m+1)/2)_κ / (−a + (m+1)/2)_κ] [Γ_m(a) Γ_m(b) / Γ_m(a+b)] C_κ(I_m), (33)

where

C_κ(I_m) = 2^{2k} k! (m/2)_κ [Π_{i<j≤p} (2k_i − 2k_j − i + j)] / [Π_{i=1}^p (2k_i + p − i)!]

and p denotes the number of nonzero indices in κ.

Proof. It follows by ideas similar to those used in the corresponding results in Muirhead (1982).

As a particular case we have that if B ~ Beta_m(ν/2, η/2) then, for ν > 2k_1 + m + 1,

E(C_κ(B^{-1})) = (−1)^k [(−(ν+η)/2 + (m+1)/2)_κ / (−ν/2 + (m+1)/2)_κ] C_κ(I_m). (34)

Lemma 10 (a corollary in Muirhead (1982)) If B ~ Beta_m(ν/2, η/2) then

E(det(B)^h) = Γ_m(ν/2 + h) Γ_m((ν+η)/2) / (Γ_m(ν/2) Γ_m((ν+η)/2 + h)). (35)

References

Adler, R. J. (1981). The Geometry of Random Fields, Wiley, New York.

Adler, R. J. (2000). On excursion sets, tube formulae, and maxima of random fields, The Annals of Applied Probability 10(1).

Adler, R. J. & Hasofer, A. M. (1976). Level crossings for random fields, Annals of Probability 4.

Anderson, T. W. (1984). An Introduction to Multivariate Statistical Analysis, 2nd edn, Wiley, New York.

Beaky, M. M., Scherrer, R. J. & Villumsen, J. V. (1992). Topology of large scale structure in seeded hot dark matter models, Astrophysical Journal 387.

Cao, J. & Worsley, K. J. (1999a). The detection of local shape changes via the geometry of Hotelling's T² fields, Annals of Statistics 27.

Cao, J. & Worsley, K. J. (1999b). The geometry of correlation fields with an application to functional connectivity of the brain, Annals of Applied Probability 9.

Cao, J. & Worsley, K. J. (2001). Applications of random fields in human brain mapping, in M. Moore (ed.), Spatial Statistics: Methodological Aspects and Applications, Springer Lecture Notes in Statistics 159.

Friston, K., Büchel, C., Fink, G. R., Morris, J., Rolls, E. & Dolan, R. J. (1997). Psychophysiological and modulatory interactions in neuroimaging, NeuroImage 6.

Gott, J. R., Park, C., Juskiewicz, R., Bies, W. E., Bennett, D. P., Bouchet, F. R. & Stebbins, A. (1990). Topology of microwave background fluctuations: theory, Astrophysical Journal 352.

Graczyk, P., Letac, G. & Massam, H. (2003). The complex Wishart distribution and the symmetric group, Annals of Statistics 31(1).

Hasofer, A. M. (1978). Upcrossings of random fields, Advances in Applied Probability 10.

Letac, G. & Massam, H. (2001). The moments of Wishart laws, binary trees, and the triple products, Comptes Rendus de l'Académie des Sciences, Series I, Mathematics 333(4).

Magnus, J. R. & Neudecker, H. (1988). Matrix Differential Calculus with Applications in Statistics and Econometrics, Wiley, New York.

Mardia, K. V., Kent, J. T. & Bibby, J. M. (1979). Multivariate Analysis, Academic Press, New York.

Muirhead, R. J. (1982). Aspects of Multivariate Statistical Theory, Wiley, New York.

Neudecker, H. (1969). Some theorems on matrix differentiation with special reference to Kronecker matrix products, Journal of the American Statistical Association 64(327).

Olkin, I. & Rubin, H. (1962). A characterization of the Wishart distribution, Annals of Mathematical Statistics 33(4).

Shapiro, S. S. & Gross, A. J. (1981). Statistical Modeling Techniques, Marcel Dekker, New York.

Taylor, J. E., Takemura, A. & Adler, R. J. (2005). Validity of the expected Euler characteristic heuristic, Annals of Probability 33(4).

Tomaiuolo, F., Worsley, K. J., Lerch, J., Di Paulo, M., Carlesimo, G. A., Bonanni, R., Caltagirone, C. & Paus, T. (2005). Changes in white matter in long-term survivors of severe non-missile traumatic brain injury: a computational analysis of magnetic resonance images, Journal of Neurotrauma 22.

Vogeley, M. S., Park, C., Geller, M. J., Huchra, J. P. & Gott, J. R. (1994). Topological analysis of the CfA redshift survey, Astrophysical Journal 420.

Worsley, K. J. (1994). Local maxima and the expected Euler characteristic of excursion sets of χ², F and t fields, Advances in Applied Probability 26.

Worsley, K. J. (1995a). Boundary corrections for the expected Euler characteristic of excursion sets of random fields, with an application to astrophysics, Advances in Applied Probability 27.

Worsley, K. J. (1995b). Estimating the number of peaks in a random field using the Hadwiger characteristic of excursion sets, with applications to medical images, Annals of Statistics 23.

Worsley, K. J., Cao, J., Paus, T., Petrides, M. & Evans, A. (1998). Applications of random field theory to functional connectivity, Human Brain Mapping 6.

Worsley, K. J., Marrett, S., Neelin, P., Vandal, A. C., Friston, K. J. & Evans, A. C. (1996). A unified statistical approach for determining significant signals in images of cerebral activation, Human Brain Mapping 4.

Worsley, K. J., Taylor, J. E., Tomaiuolo, F. & Lerch, J. (2004). Unified univariate and multivariate random field theory, NeuroImage 23: S189–S195.


Detecting sparse signals in random fields, with an application to brain mapping Detecting sparse signals in random fields, with an application to brain mapping J.E. Taylor Stanford University K.J. Worsley McGill University November 17, 2006 Abstract Brain mapping data have been modeled

More information

Interpreting Regression Results

Interpreting Regression Results Interpreting Regression Results Carlo Favero Favero () Interpreting Regression Results 1 / 42 Interpreting Regression Results Interpreting regression results is not a simple exercise. We propose to split

More information

THE UNIVERSITY OF CHICAGO Booth School of Business Business 41912, Spring Quarter 2016, Mr. Ruey S. Tsay

THE UNIVERSITY OF CHICAGO Booth School of Business Business 41912, Spring Quarter 2016, Mr. Ruey S. Tsay THE UNIVERSITY OF CHICAGO Booth School of Business Business 41912, Spring Quarter 2016, Mr. Ruey S. Tsay Lecture 5: Multivariate Multiple Linear Regression The model is Y n m = Z n (r+1) β (r+1) m + ɛ

More information

Lecture 20: Linear model, the LSE, and UMVUE

Lecture 20: Linear model, the LSE, and UMVUE Lecture 20: Linear model, the LSE, and UMVUE Linear Models One of the most useful statistical models is X i = β τ Z i + ε i, i = 1,...,n, where X i is the ith observation and is often called the ith response;

More information

Canonical Correlation Analysis of Longitudinal Data

Canonical Correlation Analysis of Longitudinal Data Biometrics Section JSM 2008 Canonical Correlation Analysis of Longitudinal Data Jayesh Srivastava Dayanand N Naik Abstract Studying the relationship between two sets of variables is an important multivariate

More information

Appendix A: Matrices

Appendix A: Matrices Appendix A: Matrices A matrix is a rectangular array of numbers Such arrays have rows and columns The numbers of rows and columns are referred to as the dimensions of a matrix A matrix with, say, 5 rows

More information

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8.

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8. Linear Algebra M1 - FIB Contents: 5 Matrices, systems of linear equations and determinants 6 Vector space 7 Linear maps 8 Diagonalization Anna de Mier Montserrat Maureso Dept Matemàtica Aplicada II Translation:

More information

I L L I N O I S UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN

I L L I N O I S UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN Comparisons of Several Multivariate Populations Edps/Soc 584 and Psych 594 Applied Multivariate Statistics Carolyn J Anderson Department of Educational Psychology I L L I N O I S UNIVERSITY OF ILLINOIS

More information

STA 2101/442 Assignment 3 1

STA 2101/442 Assignment 3 1 STA 2101/442 Assignment 3 1 These questions are practice for the midterm and final exam, and are not to be handed in. 1. Suppose X 1,..., X n are a random sample from a distribution with mean µ and variance

More information

ANOVA: Analysis of Variance - Part I

ANOVA: Analysis of Variance - Part I ANOVA: Analysis of Variance - Part I The purpose of these notes is to discuss the theory behind the analysis of variance. It is a summary of the definitions and results presented in class with a few exercises.

More information

Submitted to the Brazilian Journal of Probability and Statistics

Submitted to the Brazilian Journal of Probability and Statistics Submitted to the Brazilian Journal of Probability and Statistics Multivariate normal approximation of the maximum likelihood estimator via the delta method Andreas Anastasiou a and Robert E. Gaunt b a

More information

Exercise 1 (Formula for connection 1-forms) Using the first structure equation, show that

Exercise 1 (Formula for connection 1-forms) Using the first structure equation, show that 1 Stokes s Theorem Let D R 2 be a connected compact smooth domain, so that D is a smooth embedded circle. Given a smooth function f : D R, define fdx dy fdxdy, D where the left-hand side is the integral

More information

Inferences on a Normal Covariance Matrix and Generalized Variance with Monotone Missing Data

Inferences on a Normal Covariance Matrix and Generalized Variance with Monotone Missing Data Journal of Multivariate Analysis 78, 6282 (2001) doi:10.1006jmva.2000.1939, available online at http:www.idealibrary.com on Inferences on a Normal Covariance Matrix and Generalized Variance with Monotone

More information

A Box-Type Approximation for General Two-Sample Repeated Measures - Technical Report -

A Box-Type Approximation for General Two-Sample Repeated Measures - Technical Report - A Box-Type Approximation for General Two-Sample Repeated Measures - Technical Report - Edgar Brunner and Marius Placzek University of Göttingen, Germany 3. August 0 . Statistical Model and Hypotheses Throughout

More information

Chapter 7, continued: MANOVA

Chapter 7, continued: MANOVA Chapter 7, continued: MANOVA The Multivariate Analysis of Variance (MANOVA) technique extends Hotelling T 2 test that compares two mean vectors to the setting in which there are m 2 groups. We wish to

More information

Consistent Bivariate Distribution

Consistent Bivariate Distribution A Characterization of the Normal Conditional Distributions MATSUNO 79 Therefore, the function ( ) = G( : a/(1 b2)) = N(0, a/(1 b2)) is a solu- tion for the integral equation (10). The constant times of

More information

Neuroimage Processing

Neuroimage Processing Neuroimage Processing Instructor: Moo K. Chung mkchung@wisc.edu Lecture 10-11. Deformation-based morphometry (DBM) Tensor-based morphometry (TBM) November 13, 2009 Image Registration Process of transforming

More information

On prediction and density estimation Peter McCullagh University of Chicago December 2004

On prediction and density estimation Peter McCullagh University of Chicago December 2004 On prediction and density estimation Peter McCullagh University of Chicago December 2004 Summary Having observed the initial segment of a random sequence, subsequent values may be predicted by calculating

More information

Matrix Algebra, Class Notes (part 2) by Hrishikesh D. Vinod Copyright 1998 by Prof. H. D. Vinod, Fordham University, New York. All rights reserved.

Matrix Algebra, Class Notes (part 2) by Hrishikesh D. Vinod Copyright 1998 by Prof. H. D. Vinod, Fordham University, New York. All rights reserved. Matrix Algebra, Class Notes (part 2) by Hrishikesh D. Vinod Copyright 1998 by Prof. H. D. Vinod, Fordham University, New York. All rights reserved. 1 Converting Matrices Into (Long) Vectors Convention:

More information

Lecture 3: Linear Models. Bruce Walsh lecture notes Uppsala EQG course version 28 Jan 2012

Lecture 3: Linear Models. Bruce Walsh lecture notes Uppsala EQG course version 28 Jan 2012 Lecture 3: Linear Models Bruce Walsh lecture notes Uppsala EQG course version 28 Jan 2012 1 Quick Review of the Major Points The general linear model can be written as y = X! + e y = vector of observed

More information

Regression with Compositional Response. Eva Fišerová

Regression with Compositional Response. Eva Fišerová Regression with Compositional Response Eva Fišerová Palacký University Olomouc Czech Republic LinStat2014, August 24-28, 2014, Linköping joint work with Karel Hron and Sandra Donevska Objectives of the

More information

2. What are the tradeoffs among different measures of error (e.g. probability of false alarm, probability of miss, etc.)?

2. What are the tradeoffs among different measures of error (e.g. probability of false alarm, probability of miss, etc.)? ECE 830 / CS 76 Spring 06 Instructors: R. Willett & R. Nowak Lecture 3: Likelihood ratio tests, Neyman-Pearson detectors, ROC curves, and sufficient statistics Executive summary In the last lecture we

More information

5.6. PSEUDOINVERSES 101. A H w.

5.6. PSEUDOINVERSES 101. A H w. 5.6. PSEUDOINVERSES 0 Corollary 5.6.4. If A is a matrix such that A H A is invertible, then the least-squares solution to Av = w is v = A H A ) A H w. The matrix A H A ) A H is the left inverse of A and

More information

ABOUT PRINCIPAL COMPONENTS UNDER SINGULARITY

ABOUT PRINCIPAL COMPONENTS UNDER SINGULARITY ABOUT PRINCIPAL COMPONENTS UNDER SINGULARITY José A. Díaz-García and Raúl Alberto Pérez-Agamez Comunicación Técnica No I-05-11/08-09-005 (PE/CIMAT) About principal components under singularity José A.

More information

A Multivariate Two-Sample Mean Test for Small Sample Size and Missing Data

A Multivariate Two-Sample Mean Test for Small Sample Size and Missing Data A Multivariate Two-Sample Mean Test for Small Sample Size and Missing Data Yujun Wu, Marc G. Genton, 1 and Leonard A. Stefanski 2 Department of Biostatistics, School of Public Health, University of Medicine

More information

Cointegrated VAR s. Eduardo Rossi University of Pavia. November Rossi Cointegrated VAR s Financial Econometrics / 56

Cointegrated VAR s. Eduardo Rossi University of Pavia. November Rossi Cointegrated VAR s Financial Econometrics / 56 Cointegrated VAR s Eduardo Rossi University of Pavia November 2013 Rossi Cointegrated VAR s Financial Econometrics - 2013 1 / 56 VAR y t = (y 1t,..., y nt ) is (n 1) vector. y t VAR(p): Φ(L)y t = ɛ t The

More information

RV Coefficient and Congruence Coefficient

RV Coefficient and Congruence Coefficient RV Coefficient and Congruence Coefficient Hervé Abdi 1 1 Overview The congruence coefficient was first introduced by Burt (1948) under the name of unadjusted correlation as a measure of the similarity

More information

DS-GA 1002 Lecture notes 10 November 23, Linear models

DS-GA 1002 Lecture notes 10 November 23, Linear models DS-GA 2 Lecture notes November 23, 2 Linear functions Linear models A linear model encodes the assumption that two quantities are linearly related. Mathematically, this is characterized using linear functions.

More information

Department of Statistics

Department of Statistics Research Report Department of Statistics Research Report Department of Statistics No. 05: Testing in multivariate normal models with block circular covariance structures Yuli Liang Dietrich von Rosen Tatjana

More information

An Approximate Test for Homogeneity of Correlated Correlation Coefficients

An Approximate Test for Homogeneity of Correlated Correlation Coefficients Quality & Quantity 37: 99 110, 2003. 2003 Kluwer Academic Publishers. Printed in the Netherlands. 99 Research Note An Approximate Test for Homogeneity of Correlated Correlation Coefficients TRIVELLORE

More information

GARCH Models Estimation and Inference

GARCH Models Estimation and Inference GARCH Models Estimation and Inference Eduardo Rossi University of Pavia December 013 Rossi GARCH Financial Econometrics - 013 1 / 1 Likelihood function The procedure most often used in estimating θ 0 in

More information

Bayesian Inference for the Multivariate Normal

Bayesian Inference for the Multivariate Normal Bayesian Inference for the Multivariate Normal Will Penny Wellcome Trust Centre for Neuroimaging, University College, London WC1N 3BG, UK. November 28, 2014 Abstract Bayesian inference for the multivariate

More information

Global Maxwellians over All Space and Their Relation to Conserved Quantites of Classical Kinetic Equations

Global Maxwellians over All Space and Their Relation to Conserved Quantites of Classical Kinetic Equations Global Maxwellians over All Space and Their Relation to Conserved Quantites of Classical Kinetic Equations C. David Levermore Department of Mathematics and Institute for Physical Science and Technology

More information

Elementary maths for GMT

Elementary maths for GMT Elementary maths for GMT Linear Algebra Part 2: Matrices, Elimination and Determinant m n matrices The system of m linear equations in n variables x 1, x 2,, x n a 11 x 1 + a 12 x 2 + + a 1n x n = b 1

More information

Partitioned Covariance Matrices and Partial Correlations. Proposition 1 Let the (p + q) (p + q) covariance matrix C > 0 be partitioned as C = C11 C 12

Partitioned Covariance Matrices and Partial Correlations. Proposition 1 Let the (p + q) (p + q) covariance matrix C > 0 be partitioned as C = C11 C 12 Partitioned Covariance Matrices and Partial Correlations Proposition 1 Let the (p + q (p + q covariance matrix C > 0 be partitioned as ( C11 C C = 12 C 21 C 22 Then the symmetric matrix C > 0 has the following

More information

Lecture 2: Linear Models. Bruce Walsh lecture notes Seattle SISG -Mixed Model Course version 23 June 2011

Lecture 2: Linear Models. Bruce Walsh lecture notes Seattle SISG -Mixed Model Course version 23 June 2011 Lecture 2: Linear Models Bruce Walsh lecture notes Seattle SISG -Mixed Model Course version 23 June 2011 1 Quick Review of the Major Points The general linear model can be written as y = X! + e y = vector

More information

The Matrix-Tree Theorem

The Matrix-Tree Theorem The Matrix-Tree Theorem Christopher Eur March 22, 2015 Abstract: We give a brief introduction to graph theory in light of linear algebra. Our results culminates in the proof of Matrix-Tree Theorem. 1 Preliminaries

More information

Analysis of the AIC Statistic for Optimal Detection of Small Changes in Dynamic Systems

Analysis of the AIC Statistic for Optimal Detection of Small Changes in Dynamic Systems Analysis of the AIC Statistic for Optimal Detection of Small Changes in Dynamic Systems Jeremy S. Conner and Dale E. Seborg Department of Chemical Engineering University of California, Santa Barbara, CA

More information

TAMS39 Lecture 2 Multivariate normal distribution

TAMS39 Lecture 2 Multivariate normal distribution TAMS39 Lecture 2 Multivariate normal distribution Martin Singull Department of Mathematics Mathematical Statistics Linköping University, Sweden Content Lecture Random vectors Multivariate normal distribution

More information

ECON 4160, Lecture 11 and 12

ECON 4160, Lecture 11 and 12 ECON 4160, 2016. Lecture 11 and 12 Co-integration Ragnar Nymoen Department of Economics 9 November 2017 1 / 43 Introduction I So far we have considered: Stationary VAR ( no unit roots ) Standard inference

More information

HIGHER ORDER CUMULANTS OF RANDOM VECTORS, DIFFERENTIAL OPERATORS, AND APPLICATIONS TO STATISTICAL INFERENCE AND TIME SERIES

HIGHER ORDER CUMULANTS OF RANDOM VECTORS, DIFFERENTIAL OPERATORS, AND APPLICATIONS TO STATISTICAL INFERENCE AND TIME SERIES HIGHER ORDER CUMULANTS OF RANDOM VECTORS DIFFERENTIAL OPERATORS AND APPLICATIONS TO STATISTICAL INFERENCE AND TIME SERIES S RAO JAMMALAMADAKA T SUBBA RAO AND GYÖRGY TERDIK Abstract This paper provides

More information

Wigner s semicircle law

Wigner s semicircle law CHAPTER 2 Wigner s semicircle law 1. Wigner matrices Definition 12. A Wigner matrix is a random matrix X =(X i, j ) i, j n where (1) X i, j, i < j are i.i.d (real or complex valued). (2) X i,i, i n are

More information

4. Determinants.

4. Determinants. 4. Determinants 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 2 2 determinant 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 3 3 determinant 4.1.

More information

Multivariate Distributions

Multivariate Distributions IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate

More information

Chapter 5: Matrices. Daniel Chan. Semester UNSW. Daniel Chan (UNSW) Chapter 5: Matrices Semester / 33

Chapter 5: Matrices. Daniel Chan. Semester UNSW. Daniel Chan (UNSW) Chapter 5: Matrices Semester / 33 Chapter 5: Matrices Daniel Chan UNSW Semester 1 2018 Daniel Chan (UNSW) Chapter 5: Matrices Semester 1 2018 1 / 33 In this chapter Matrices were first introduced in the Chinese Nine Chapters on the Mathematical

More information

Discrete Multivariate Statistics

Discrete Multivariate Statistics Discrete Multivariate Statistics Univariate Discrete Random variables Let X be a discrete random variable which, in this module, will be assumed to take a finite number of t different values which are

More information

Near-exact approximations for the likelihood ratio test statistic for testing equality of several variance-covariance matrices

Near-exact approximations for the likelihood ratio test statistic for testing equality of several variance-covariance matrices Near-exact approximations for the likelihood ratio test statistic for testing euality of several variance-covariance matrices Carlos A. Coelho 1 Filipe J. Marues 1 Mathematics Department, Faculty of Sciences

More information