TEST FOR INDEPENDENCE OF THE VARIABLES WITH MISSING ELEMENTS IN ONE AND THE SAME COLUMN OF THE EMPIRICAL CORRELATION MATRIX.

Serdica Math. J. 34 (2008), 509-530

TEST FOR INDEPENDENCE OF THE VARIABLES WITH MISSING ELEMENTS IN ONE AND THE SAME COLUMN OF THE EMPIRICAL CORRELATION MATRIX

Evelina Veleva

Communicated by N. Yanev

Abstract. We consider variables with a joint multivariate normal distribution and suppose that the sample correlation matrix has missing elements, located in one and the same column. Under these assumptions we derive the maximum likelihood ratio test for independence of the variables. We also obtain the maximum likelihood estimates of the missing values.

1. Introduction. We consider variables with a joint multivariate normal distribution and suppose that the sample correlation matrix has missing elements, located in one and the same column. This might be due to a loss during storage or during transportation to the researcher. Another reason arises when a researcher A has the full matrix of the observations on n variables X_1, ..., X_n, but decides to include in the considerations an additional variable X_{n+1}. Assume that

2000 Mathematics Subject Classification: 62H15, 62H12.
Key words: Multivariate normal distribution, Wishart distribution, correlation matrix completion, maximum likelihood ratio test.

his colleague B works in a competing firm and has the data matrix for the variables X_{k+1}, ..., X_n, X_{n+1}. Suppose the researcher A succeeds in getting from B only the empirical correlation coefficients r_{i,n+1}, i = k+1, ..., n, instead of the data vector of the observations on X_{n+1}, which is m x 1, where m is considerably greater than n - k (m > n - k). In this situation A will have the complete empirical correlation matrix for the variables X_1, ..., X_n, X_{n+1}, with the exception of the coefficients r_{i,n+1}, i = 1, ..., k.

Recently, the question of evaluating the return to a given program or treatment has received considerable attention in the economics, statistics and medical literatures [2, 5, 7, 8]. The derived densities of the outcome gain distributions necessarily depend on the unidentified cross-regime correlation coefficient. Roy's model of self-selection has been applied extensively in economics to explain individual choice between two alternatives or sectors. Since an agent's earnings are observed only in the sector of choice under partial observability, the correlation coefficient between the disturbance terms in the earnings equations is not identified. Similar investigations with one missing element in the correlation matrix have been made by Heckman and Honore [1], Vijverberg [11], Koop and Poirier [4] and Sareen [6].

In the present paper we assume that the researcher is interested in n + 1 random variables, which are multivariate normally distributed, but has only the empirical correlation matrix R, in which the elements r_{i,n+1}, i = 1, ..., k, are missing. In this case we obtain the maximum likelihood estimates of the correlation coefficients ρ_{i,n+1}, i = 1, ..., k, which are elements of the theoretical correlation matrix P. We also derive the maximum likelihood ratio test for the hypothesis of independence of the n + 1 random variables, i.e.

    H_0 : P = I,

where I is the identity matrix.

2. The likelihood ratio test derivation. We begin with one auxiliary theorem.

Theorem 1. Let ω_{ij}, 1 ≤ i ≤ j ≤ n, be random variables whose joint distribution is the Wishart distribution W_n(m, Σ). Consider the set of random variables V = {τ_i, i = 1, ..., n, ν_{ij}, 1 ≤ i < j ≤ n}, such that

    τ_i = ω_{ii}, i = 1, ..., n,

    ν_{ij} = ω_{ij} / √(ω_{ii} ω_{jj}), 1 ≤ i < j ≤ n.

The joint density function of the random variables from V has the form

    f_V(t_1, ..., t_n, y_{ij}, 1 ≤ i < j ≤ n)
      = (t_1 ··· t_n)^{m/2 - 1} (det(Y_n))^{(m-n-1)/2} e^{-(1/2) tr(T Y_n T Σ^{-1})}
        / [2^{nm/2} π^{n(n-1)/4} (det(Σ))^{m/2} ∏_{i=1}^{n} Γ((m-i+1)/2)] · I_E,

where Γ(·) is the well known Gamma function; T and Y_n are the matrices

    T = diag(√t_1, ..., √t_n),    Y_n =
    ( 1      y_12   ...  y_1n )
    ( y_12   1      ...  y_2n )
    ( ...                 ... )
    ( y_1n   y_2n   ...  1   );

E is the set of all points (t_1, ..., t_n, y_{ij}, 1 ≤ i < j ≤ n) in the real space R^{n(n+1)/2} such that t_i > 0, i = 1, ..., n, and the matrix Y_n is positive definite; I_E here and afterwards denotes the indicator of a given set E.

P r o o f. The inverse transformation formulas are

    ω_{ii} = τ_i, i = 1, ..., n,    ω_{ij} = ν_{ij} √(τ_i τ_j), 1 ≤ i < j ≤ n.

The Jacobian of the transformation is

    J = ∂(ω_11, ..., ω_nn, ω_12, ..., ω_1n, ω_23, ..., ω_{n-1,n}) / ∂(τ_1, ..., τ_n, ν_12, ..., ν_1n, ν_23, ..., ν_{n-1,n}),

a triangular matrix with diagonal entries 1 (n times) and √(τ_i τ_j), 1 ≤ i < j ≤ n. Consequently

    det(J) = (τ_1 ··· τ_n)^{(n-1)/2}.

The joint density function of ω_{ij}, 1 ≤ i ≤ j ≤ n, has the form

    f(w_{ij}, 1 ≤ i ≤ j ≤ n) = (det(W))^{(m-n-1)/2} e^{-(1/2) tr(W Σ^{-1})}
        / [2^{nm/2} π^{n(n-1)/4} (det(Σ))^{m/2} ∏_{i=1}^{n} Γ((m-i+1)/2)] · I_{W>0},

where W is the symmetric matrix with elements w_{ij}, 1 ≤ i ≤ j ≤ n, and {W > 0} here and afterwards denotes the set of all values of the unknown elements of a given real matrix W for which the matrix W is positive definite. Hence the joint density function of the random variables from the set V = {τ_i, i = 1, ..., n, ν_{ij}, 1 ≤ i < j ≤ n} has the form

    f_V(t_1, ..., t_n, y_{ij}, 1 ≤ i < j ≤ n)
      = (det(T Y_n T))^{(m-n-1)/2} e^{-(1/2) tr(T Y_n T Σ^{-1})} (t_1 ··· t_n)^{(n-1)/2}
        / [2^{nm/2} π^{n(n-1)/4} (det(Σ))^{m/2} ∏_{i=1}^{n} Γ((m-i+1)/2)] · I_E
      = (t_1 ··· t_n)^{m/2 - 1} (det(Y_n))^{(m-n-1)/2} e^{-(1/2) tr(T Y_n T Σ^{-1})}
        / [2^{nm/2} π^{n(n-1)/4} (det(Σ))^{m/2} ∏_{i=1}^{n} Γ((m-i+1)/2)] · I_E.

Corollary 1. Under the conditions of Theorem 1, let Σ = D², where D is a diagonal matrix, D = diag(σ_1, ..., σ_n). Then the set of random variables {τ_1, ..., τ_n} is independent of the set {ν_{ij}, 1 ≤ i < j ≤ n}. The variables τ_1, ..., τ_n are mutually independent and τ_i/σ_i² ∼ χ²(m), i = 1, ..., n. The random

variables ν_{ij}, 1 ≤ i < j ≤ n, have joint distribution Ψ(m, n) with density function of the form

    f(y_{ij}, 1 ≤ i < j ≤ n) = π^{-n(n-1)/4} [Γ(m/2)]^{n} / ∏_{i=1}^{n} Γ((m-i+1)/2) · (det(Y_n))^{(m-n-1)/2} I_{Y_n>0}.

Let x_1, ..., x_m be a sample from N_{n+1}(μ, Σ), μ unknown. Assume that we wish to test the hypothesis

    H_0 : Σ = diag(σ_1², ..., σ_{n+1}²), σ_i² unknown, i = 1, ..., n+1,

which is equivalent to H_0 : P = I, against the alternative H_1 : no constraints on Σ. Suppose at first that we do not hold the observations themselves, but we have m and the empirical covariance matrix S = {s_ij}, i, j = 1, ..., n+1, in which the elements s_{i,n+1}, i = 1, ..., k, are unidentified. It is well known that the joint distribution of the elements of the matrix (m-1)S is Wishart, W_{n+1}(m-1, Σ). Consider the variables

    τ_i = (m-1)s_ii, i = 1, ..., n+1,

and

    ν_{ij} = (m-1)s_ij / √((m-1)s_ii (m-1)s_jj) = r_ij, 1 ≤ i < j ≤ n+1,

where r_ij is the corresponding element of the empirical correlation matrix R. Let the hypothesis H_0 be true. According to Corollary 1, the distribution of the elements of the empirical correlation matrix R is Ψ(m-1, n+1). The random variables τ_i = (m-1)s_ii, i = 1, ..., n+1, are mutually independent, they are independent of the correlation coefficients r_ij, 1 ≤ i < j ≤ n+1, and τ_i/σ_i² = (m-1)s_ii/σ_i² ∼ χ²(m-1), i = 1, ..., n+1. The next proposition can be found in [9].

Proposition 1. Let ν_{ij}, 1 ≤ i < j ≤ n, be random variables with distribution Ψ(m, n). The joint density function of the random variables from the set V = {ν_{ij}, 1 ≤ i < j ≤ n}\{ν_{1n}, ..., ν_{kn}}, where k is an integer, 1 ≤ k ≤ n-2,

has the form

    f_V(y_{ij}, 1 ≤ i < j ≤ n, (i, j) ∉ {(1, n), ..., (k, n)})
      = π^{-(n(n-1)-2k)/4} [Γ(m/2)]^{n} / [Γ((m-n+k+1)/2) ∏_{i=1}^{n-1} Γ((m-i+1)/2)]
        · (det(Y_n({n})))^{(m-n)/2} (det(Y_n({1, ..., k})))^{(m-n+k-1)/2} / (det(Y_n({1, ..., k, n})))^{(m-n+k)/2}
        · I_{Y_n({n})>0} I_{Y_n({1,...,k})>0},

where A({i_1, ..., i_l}) here and afterwards denotes the matrix which is obtained from the matrix A after deleting the rows and columns with numbers i_1, ..., i_l.

From this Proposition it follows that under H_0 the joint density function of the empirical correlation coefficients from the set U = {r_ij, 1 ≤ i < j ≤ n, r_{k+1,n+1}, ..., r_{n,n+1}} has the form

    f_U(y_{ij}, 1 ≤ i < j ≤ n, y_{k+1,n+1}, ..., y_{n,n+1})
      = π^{-(n(n+1)-2k)/4} [Γ((m-1)/2)]^{n+1} / [Γ((m-n+k-1)/2) ∏_{i=1}^{n} Γ((m-i)/2)]
        · (det(Y_{n+1}({n+1})))^{(m-n-2)/2} (det(Y_{n+1}({1, ..., k})))^{(m-n+k-3)/2} / (det(Y_{n+1}({1, ..., k, n+1})))^{(m-n+k-2)/2}
        · I_{Y_{n+1}({n+1})>0} I_{Y_{n+1}({1,...,k})>0}.

Consequently, under H_0 the joint density function of our observations (m-1)s_11, ..., (m-1)s_{n+1,n+1}, r_12, ..., r_{n-1,n}, r_{k+1,n+1}, ..., r_{n,n+1}, i.e. the likelihood function L_0, has the form

    L_0 = (m-1)^{(n+1)(m-3)/2} (s_11 ··· s_{n+1,n+1})^{(m-3)/2}
          / [2^{(n+1)(m-1)/2} π^{(n(n+1)-2k)/4} Γ((m-n+k-1)/2) ∏_{i=1}^{n} Γ((m-i)/2) (σ_1² ··· σ_{n+1}²)^{(m-1)/2}]
          · exp(-((m-1)/2) ∑_{i=1}^{n+1} s_ii/σ_i²)
          · (det(R({n+1})))^{(m-n-2)/2} (det(R({1, ..., k})))^{(m-n+k-3)/2} / (det(R({1, ..., k, n+1})))^{(m-n+k-2)/2}.
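Throughout the paper, determinants of submatrices of R in the deletion notation A({...}) are the basic computational objects. As a quick illustration (not from the paper), the numpy sketch below computes the three determinants that enter the density f_U; the helper name `minor` and the matrix values are our own assumptions. Note that none of the three submatrices involves the missing entries r_{i,n+1}, i ≤ k.

```python
# A small helper for the paper's deletion notation A({i1,...,il}):
# delete the listed rows and columns (1-based, as in the text).
import numpy as np

def minor(a, deleted):
    keep = [i for i in range(a.shape[0]) if i + 1 not in set(deleted)]
    return a[np.ix_(keep, keep)]

# n = 3, k = 1: a 4x4 empirical correlation matrix R in which r_{1,4}
# (and r_{4,1}) would be missing; its value does not enter the three
# determinants used under H0, so we leave a 0.0 placeholder there.
R = np.array([[1.0, 0.3, 0.1, 0.0],
              [0.3, 1.0, 0.2, 0.4],
              [0.1, 0.2, 1.0, 0.1],
              [0.0, 0.4, 0.1, 1.0]])
n, k = 3, 1
d1 = np.linalg.det(minor(R, [n + 1]))                           # det R({n+1})
d2 = np.linalg.det(minor(R, list(range(1, k + 1))))             # det R({1,...,k})
d3 = np.linalg.det(minor(R, list(range(1, k + 1)) + [n + 1]))   # det R({1,...,k,n+1})
print(d1, d2, d3)
```

The three determinants are exactly the ones that reappear in the likelihood ratio at the end of the paper.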

The maximum likelihood estimates of the unknown parameters σ_1², ..., σ_{n+1}² are s_11, ..., s_{n+1,n+1}, respectively. The maximum L_0* of the likelihood function is

    L_0* = (m-1)^{(n+1)(m-3)/2} e^{-(n+1)(m-1)/2}
           / [2^{(n+1)(m-1)/2} π^{(n(n+1)-2k)/4} Γ((m-n+k-1)/2) ∏_{i=1}^{n} Γ((m-i)/2) s_11 ··· s_{n+1,n+1}]
           · (det(R({n+1})))^{(m-n-2)/2} (det(R({1, ..., k})))^{(m-n+k-3)/2} / (det(R({1, ..., k, n+1})))^{(m-n+k-2)/2}.

Let the hypothesis H_1 be true. Applying Theorem 1 with m-1 in place of m and n+1 in place of n, we get the joint density function f_1 of the observations (m-1)s_11, ..., (m-1)s_{n+1,n+1}, r_12, ..., r_{n-1,n}, r_{k+1,n+1}, ..., r_{n,n+1} and the unknown realizations r_{1,n+1}, ..., r_{k,n+1} of the correlation coefficients ρ_{1,n+1}, ..., ρ_{k,n+1}:

    f_1 = (m-1)^{(n+1)(m-3)/2} (s_11 ··· s_{n+1,n+1})^{(m-3)/2}
          / [2^{(n+1)(m-1)/2} π^{n(n+1)/4} (det(Σ))^{(m-1)/2} ∏_{i=1}^{n+1} Γ((m-i)/2)]
          · (det(R))^{(m-n-3)/2} e^{-((m-1)/2) tr(T R T Σ^{-1})} I_{R>0},

where T is the diagonal matrix T = diag(√s_11, ..., √s_{n+1,n+1}). The likelihood function L_1 under H_1 is the integral of f_1 with respect to r_{1,n+1}, ..., r_{k,n+1}:

    L_1 = ∫ ··· ∫ f_1 dr_{1,n+1} ··· dr_{k,n+1}.

Let us denote by σ^{ij}, 1 ≤ i ≤ j ≤ n+1, the elements of the matrix Σ^{-1}. It is easy to see that for the trace

(1)    tr(T R T Σ^{-1}) = Ξ + 2 ∑_{i=1}^{k} σ^{i,n+1} √(s_ii s_{n+1,n+1}) r_{i,n+1},

where

    Ξ = ∑_{i=1}^{n+1} s_ii σ^{ii} + 2 ∑_{i=1}^{n} ∑_{j=i+1}^{n} σ^{ij} √(s_ii s_jj) r_ij + 2 ∑_{i=k+1}^{n} σ^{i,n+1} √(s_ii s_{n+1,n+1}) r_{i,n+1}.
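The decomposition (1) is a plain rearrangement of tr(T R T Σ^{-1}) = ∑_{i,j} σ^{ij} √(s_ii s_jj) r_ij into the terms that do and do not involve the missing correlations. A hedged numeric check (our own sketch, numpy; all matrix values illustrative, with 0-based indices shifting the paper's 1-based ones):

```python
# Check of the trace decomposition (1): for any positive definite Sigma and
# any empirical S and R, tr(T R T Sigma^{-1}) = Xi + the doubled terms that
# carry the missing correlations r_{i,n+1}, i = 1,...,k.
import numpy as np

rng = np.random.default_rng(0)
n, k = 3, 1
X = rng.normal(size=(40, n + 1))
S = np.cov(X, rowvar=False)                    # empirical covariance matrix
D = np.sqrt(np.diag(S))
R = S / np.outer(D, D)                         # empirical correlation matrix
A = rng.normal(size=(n + 1, n + 1))
Sigma = A @ A.T + (n + 1) * np.eye(n + 1)      # some positive definite Sigma
SigInv = np.linalg.inv(Sigma)
T = np.diag(np.sqrt(np.diag(S)))

lhs = np.trace(T @ R @ T @ SigInv)
# Xi: diagonal terms, doubled off-diagonal terms with j <= n, and the
# doubled terms sigma^{i,n+1} for i = k+1,...,n (0-based: i in k..n-1).
Xi = sum(S[i, i] * SigInv[i, i] for i in range(n + 1))
Xi += 2 * sum(SigInv[i, j] * np.sqrt(S[i, i] * S[j, j]) * R[i, j]
              for i in range(n) for j in range(i + 1, n))
Xi += 2 * sum(SigInv[i, n] * np.sqrt(S[i, i] * S[n, n]) * R[i, n]
              for i in range(k, n))
rest = 2 * sum(SigInv[i, n] * np.sqrt(S[i, i] * S[n, n]) * R[i, n]
               for i in range(k))
print(abs(lhs - (Xi + rest)))
```

The residual is at floating-point level, since (1) is an exact identity.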

Hence

(2)    L_1 = K (det(Σ^{-1}))^{(m-1)/2} e^{-((m-1)/2) Ξ} J,

where

    K = (m-1)^{(n+1)(m-3)/2} (s_11 ··· s_{n+1,n+1})^{(m-3)/2} / [2^{(n+1)(m-1)/2} π^{n(n+1)/4} ∏_{i=1}^{n+1} Γ((m-i)/2)],

    J = ∫ ··· ∫ (det(R))^{(m-n-3)/2} exp(-(m-1) ∑_{j=1}^{k} σ^{j,n+1} √(s_jj s_{n+1,n+1}) r_{j,n+1}) I_{R>0} dr_{1,n+1} ··· dr_{k,n+1}.

The maximum likelihood estimates σ̂^{ij} of σ^{ij}, 1 ≤ i ≤ j ≤ n+1, satisfy the system of equations

(3)    ∂L_1/∂σ^{ii} = 0, i = 1, ..., n+1;    ∂L_1/∂σ^{ij} = 0, 1 ≤ i < j ≤ n+1.

It is easy to see that

    ∂det(Σ^{-1})/∂σ^{11} = det(Σ^{-1}({1})).

Hence the first equation of the system (3) is

    ∂L_1/∂σ^{11} = K J ((m-1)/2) (det(Σ^{-1}))^{(m-3)/2} e^{-((m-1)/2) Ξ} [det(Σ^{-1}({1})) - det(Σ^{-1}) s_11] = 0.

Consequently,

    det(Σ^{-1}({1})) / det(Σ^{-1}) = s_11.

The left hand side of the above equation equals the (1,1) element of the inverse of the matrix Σ^{-1}, i.e. the (1,1) element of the matrix Σ. Consequently we have

    σ̂_11 = s_11.

We conclude similarly that

    σ̂_ii = s_ii, i = 2, ..., n+1.

For the further considerations we need the following theorem.

Theorem 2. Let A = {a_ij}, i, j = 1, ..., n, a_ij = a_ji, be an n x n symmetric matrix. For each element a_pq, p ≠ q, of the matrix A, the determinant det(A) can be written as

(4)    det(A) = -det(A({p, q})) a_pq² + 2(-1)^{q-p} det(A({p}, {q})⁰) a_pq + det(A⁰_{p,q}),

where A({p}, {q})⁰ is the matrix which is obtained from the matrix A after deleting its p-th row, q-th column and replacing the element a_qp with zero; A⁰_{p,q} is the matrix which is obtained from the matrix A after replacing the elements a_pq and a_qp with zero.

P r o o f. It is easy to see that

(5)    det(A) = a_pq (-1)^{q-p} det(A({p}, {q})) + det(A₀),

where A₀ is the matrix which is obtained from the matrix A after replacing the element a_pq with zero. Analogously,

(6)    det(A₀) = a_qp (-1)^{p-q} det(A({q}, {p})⁰) + det(A⁰_{p,q})

and

(7)    det(A({p}, {q})) = a_qp (-1)^{q-p+1} det(A({p, q})) + det(A({p}, {q})⁰).

Since the matrix A is symmetric, from (5)-(7) we obtain (4).

Put in the above theorem A = Σ^{-1}, with n+1 in place of n, p = 1, q = 2. Then

    det(Σ^{-1}) = -det(Σ^{-1}({1, 2})) (σ^{12})² - 2 det(Σ^{-1}({1}, {2})⁰) σ^{12} + det((Σ^{-1})⁰_{1,2}).

Consequently,

    ∂det(Σ^{-1})/∂σ^{12} = -2 det(Σ^{-1}({1, 2})) σ^{12} - 2 det(Σ^{-1}({1}, {2})⁰) = -2 det(Σ^{-1}({1}, {2})),

by (7). Hence

    ∂L_1/∂σ^{12} = -K J (m-1) (det(Σ^{-1}))^{(m-3)/2} e^{-((m-1)/2) Ξ} [det(Σ^{-1}({1}, {2})) + det(Σ^{-1}) √(s_11 s_22) r_12] = 0.

Therefore we get

    det(Σ^{-1}({1}, {2})) / det(Σ^{-1}) = -√(s_11 s_22) r_12 = -s_12.

The left hand side of the above equation equals minus the (1,2) element of the inverse of the matrix Σ^{-1}, i.e. minus the (1,2) element of the matrix Σ. Consequently we get

    σ̂_12 = s_12.

We conclude by analogy that

    σ̂_ij = s_ij, 1 ≤ i < j ≤ n,    σ̂_{i,n+1} = s_{i,n+1}, i = k+1, ..., n.

Let us apply Theorem 2 to each of the elements σ^{i,n+1}, i = 1, ..., k, of the matrix Σ^{-1}. We get

    det(Σ^{-1}) = -det(Σ^{-1}({i, n+1})) (σ^{i,n+1})² + 2(-1)^{n-i+1} det(Σ^{-1}({i}, {n+1})⁰) σ^{i,n+1} + det((Σ^{-1})⁰_{i,n+1}), i = 1, ..., k.

Hence

    ∂det(Σ^{-1})/∂σ^{i,n+1} = -2 det(Σ^{-1}({i, n+1})) σ^{i,n+1} + 2(-1)^{n-i+1} det(Σ^{-1}({i}, {n+1})⁰)

    = 2(-1)^{n-i+1} [det(Σ^{-1}({i}, {n+1})⁰) + (-1)^{n-i} det(Σ^{-1}({i, n+1})) σ^{i,n+1}] = 2(-1)^{n-i+1} det(Σ^{-1}({i}, {n+1})).

The integral J depends on σ^{i,n+1}, i = 1, ..., k, and

    ∂J/∂σ^{i,n+1} = -(m-1) √(s_{n+1,n+1}) J_i,

where

    J_i = √(s_ii) ∫ ··· ∫ (det(R))^{(m-n-3)/2} r_{i,n+1} exp(-(m-1) ∑_{j=1}^{k} σ^{j,n+1} √(s_jj s_{n+1,n+1}) r_{j,n+1}) I_{R>0} dr_{1,n+1} ··· dr_{k,n+1}, i = 1, ..., k.

Therefore

    ∂L_1/∂σ^{i,n+1} = K (m-1) (det(Σ^{-1}))^{(m-3)/2} e^{-((m-1)/2) Ξ} [(-1)^{n-i+1} det(Σ^{-1}({i}, {n+1})) J - det(Σ^{-1}) √(s_{n+1,n+1}) J_i] = 0, i = 1, ..., k.

Consequently, for i = 1, ..., k we have

    (-1)^{n-i+1} det(Σ^{-1}({i}, {n+1})) / det(Σ^{-1}) = √(s_{n+1,n+1}) J_i / J.

The left hand side of the above equation equals the (i, n+1) element of the matrix Σ, i.e. the element σ_{i,n+1}. Consequently, for σ̂_{i,n+1}, i = 1, ..., k, we get the equations

(8)    σ̂_{i,n+1} = √(s_{n+1,n+1}) Ĵ_i / Ĵ, i = 1, ..., k,

where Ĵ_i and Ĵ are the integrals J_i and J, respectively, in which the maximum likelihood estimates σ̂^{i,n+1} are substituted for the unknown parameters σ^{i,n+1}, i = 1, ..., k.
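The derivative computations above lean repeatedly on the expansion (4) of Theorem 2, which is exact and easy to test numerically. The sketch below (our own check, numpy, with helper `det_drop` and an illustrative random symmetric matrix) verifies it for one pair (p, q):

```python
# Numeric check of Theorem 2, equation (4): for a symmetric A and p != q,
# det(A) = -det(A({p,q})) a_pq^2 + 2(-1)^(q-p) det(A({p},{q})^0) a_pq + det(A^0_{p,q}).
import numpy as np

def det_drop(a, rows, cols, zero=None):
    """det of a with 1-based rows/cols deleted; optionally zero entry (r, c) first."""
    rk = [i for i in range(a.shape[0]) if i + 1 not in set(rows)]
    ck = [j for j in range(a.shape[1]) if j + 1 not in set(cols)]
    m = a[np.ix_(rk, ck)].copy()
    if zero is not None:
        m[rk.index(zero[0] - 1), ck.index(zero[1] - 1)] = 0.0
    return np.linalg.det(m)

rng = np.random.default_rng(1)
B = rng.normal(size=(4, 4))
A = B + B.T                                  # symmetric test matrix
p, q = 1, 3                                  # 1-based indices, p < q

Apq0 = A.copy(); Apq0[p - 1, q - 1] = Apq0[q - 1, p - 1] = 0.0   # A^0_{p,q}
rhs = (-det_drop(A, [p, q], [p, q]) * A[p - 1, q - 1] ** 2
       + 2 * (-1) ** (q - p) * det_drop(A, [p], [q], zero=(q, p)) * A[p - 1, q - 1]
       + np.linalg.det(Apq0))
resid = abs(np.linalg.det(A) - rhs)
print(resid)
```

The identity holds exactly, so the residual is at floating-point level for any symmetric matrix and any pair p ≠ q.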

Let us denote by Σ̂ the maximum likelihood estimate of the unknown matrix Σ. For the matrix Σ̂ we obtained that

    Σ̂ =
    ( s_11        ...  s_1k        ...  s_1n       σ̂_{1,n+1}    )
    ( ...                                              ...       )
    ( s_1k        ...  s_kk        ...  s_kn       σ̂_{k,n+1}    )
    ( ...                                              ...       )
    ( s_1n        ...  s_kn        ...  s_nn       s_{n,n+1}    )
    ( σ̂_{1,n+1}  ...  σ̂_{k,n+1}  ...  s_{n,n+1}  s_{n+1,n+1}  ).

Let us denote by A({i_1, ..., i_l}, {j_1, ..., j_l}) the matrix which is obtained from the matrix A after deleting the rows with numbers i_1, ..., i_l and the columns with numbers j_1, ..., j_l. The matrix A({i_1, ..., i_l}, {i_1, ..., i_{l-1}, j})⁰ is the matrix which is obtained from A({i_1, ..., i_l}, {i_1, ..., i_{l-1}, j}) after replacing the element a_{j,i_l} with zero. We shall prove that the equations (8) hold for σ̂_{i,n+1} defined by

(9)    σ̂_{i,n+1} = (-1)^{n-i+1} det(Σ̂({1, ..., i}, {1, ..., i-1, n+1})⁰) / det(Σ̂({1, ..., i, n+1})), i = 1, ..., k.

The formulas (9) are recurrent: they determine at first σ̂_{k,n+1}, then σ̂_{k-1,n+1} and so on, and finally σ̂_{1,n+1}. The next theorem gives an equivalent representation of σ̂_{i,n+1}, i = 1, ..., k.

Theorem 3. Let σ̂_{i,n+1}, i = 1, ..., k, be defined by the equalities (9). Then

(10)    σ̂_{i,n+1} = (-1)^{n-k+1} det(Σ̂({1, ..., k}, {1, ..., i-1, i+1, ..., k, n+1})⁰) / det(Σ̂({1, ..., k, n+1})), i = 1, ..., k.

P r o o f. The proof is by induction. The representation (10) holds for i = k. Suppose that it holds for i = k, k-1, ..., q+1, where q is an integer, 1 ≤ q ≤ k-1. We shall prove that it holds for i = q too. From (9) we have that

(11)    σ̂_{q,n+1} = (-1)^{n-q+1} det(Σ̂({1, ..., q}, {1, ..., q-1, n+1})⁰) / det(Σ̂({1, ..., q, n+1})).
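Before continuing with the proof, the recurrence (9) and the closed form (10) can be compared numerically. The sketch below is our own illustration (numpy; a 4x4 covariance matrix with n = 3, k = 2, values arbitrary): it fills the missing entries both ways and checks that the two formulas agree.

```python
# Fill the missing entries sigma_{i,n+1}, i = 1,...,k, via the recurrent
# formulas (9) and via the closed form (10), and compare the results.
import numpy as np

def det_drop(a, rows, cols, zero=None):
    rk = [i for i in range(a.shape[0]) if i + 1 not in set(rows)]
    ck = [j for j in range(a.shape[1]) if j + 1 not in set(cols)]
    m = a[np.ix_(rk, ck)].copy()
    if zero is not None:                     # zero the entry at 1-based (r, c)
        m[rk.index(zero[0] - 1), ck.index(zero[1] - 1)] = 0.0
    return np.linalg.det(m)

n, k = 3, 2
S = np.array([[4.0, 1.0, 0.5, 0.0],
              [1.0, 3.0, 0.8, 0.0],
              [0.5, 0.8, 2.0, 0.6],
              [0.0, 0.0, 0.6, 5.0]])  # (1,4) and (2,4) missing; 0.0 is a placeholder

# Recurrent formulas (9): fill sigma_{k,n+1} first, then downwards.
Sig9 = S.copy()
for i in range(k, 0, -1):
    num = det_drop(Sig9, list(range(1, i + 1)), list(range(1, i)) + [n + 1],
                   zero=(n + 1, i))
    den = det_drop(Sig9, list(range(1, i + 1)) + [n + 1], list(range(1, i + 1)) + [n + 1])
    val = (-1) ** (n - i + 1) * num / den
    Sig9[i - 1, n] = Sig9[n, i - 1] = val

# Closed form (10): each missing entry directly, no recursion.
Sig10 = S.copy()
den10 = det_drop(S, list(range(1, k + 1)) + [n + 1], list(range(1, k + 1)) + [n + 1])
for i in range(1, k + 1):
    cols = list(range(1, i)) + list(range(i + 1, k + 1)) + [n + 1]
    num = det_drop(S, list(range(1, k + 1)), cols, zero=(n + 1, i))
    Sig10[i - 1, n] = Sig10[n, i - 1] = (-1) ** (n - k + 1) * num / den10

print(np.allclose(Sig9, Sig10))  # True
```

For this matrix both routes give the same completion; the completed matrix also turns out to have zeros at the corresponding positions of its inverse, a property established later in this section.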

It can be easily seen that

(12)    det(Σ̂({1, ..., q}, {1, ..., q-1, n+1})⁰) = det(A) + σ̂_{q+1,n+1} (-1)^{n-q+1} det(Σ̂({1, ..., q, n+1}, {1, ..., q-1, q+1, n+1})),

where A is the matrix obtained from Σ̂({1, ..., q}, {1, ..., q-1, n+1})⁰ after replacing the element σ̂_{q+1,n+1} in its last row with zero:

    A =
    ( s_{q,q+1}   s_{q+1,q+1}  ...  s_{q+1,n}  )
    ( s_{q,q+2}   s_{q+1,q+2}  ...  s_{q+2,n}  )
    ( ...                            ...       )
    ( s_{q,n}     s_{q+1,n}    ...  s_{n,n}    )
    ( 0           0            ...  s_{n,n+1}  ).

If we move in the matrix A the last row after its first row, the determinant will eventually change its sign; more precisely,

(13)    det(A) = (-1)^{n-q+1} det(B),    B =
    ( s_{q,q+1}   s_{q+1,q+1}  ...  s_{q+1,n}  )
    ( 0           0            ...  s_{n,n+1}  )
    ( s_{q,q+2}   s_{q+1,q+2}  ...  s_{q+2,n}  )
    ( ...                            ...       )
    ( s_{q,n}     s_{q+1,n}    ...  s_{n,n}    ).

Applying the Sylvester determinant identity to the matrix B (Karlin [3]), we have

(14)    det(B) det(Σ̂({1, ..., q+1, n+1}))
        = (-1)^{n-q+1} det(Σ̂({1, ..., q+1}, {1, ..., q, n+1})⁰) det(Σ̂({1, ..., q, n+1}, {1, ..., q-1, q+1, n+1}))
          - (-1)^{n-q+1} det(Σ̂({1, ..., q+1}, {1, ..., q-1, q+1, n+1})⁰) det(Σ̂({1, ..., q, n+1})).

Replacing (13) and (14) in (12) and using the representation (9) for σ̂_{q+1,n+1}, we obtain that

(15)    det(Σ̂({1, ..., q}, {1, ..., q-1, n+1})⁰) = -det(Σ̂({1, ..., q, n+1})) det(Σ̂({1, ..., q+1}, {1, ..., q-1, q+1, n+1})⁰) / det(Σ̂({1, ..., q+1, n+1})).

Consequently, from the equality (11) it follows that

(16)    σ̂_{q,n+1} = (-1)^{n-q} det(Σ̂({1, ..., q+1}, {1, ..., q-1, q+1, n+1})⁰) / det(Σ̂({1, ..., q+1, n+1})).

According to the representation (9) for σ̂_{q+1,n+1},

(17)    σ̂_{q+1,n+1} = (-1)^{n-q} det(Σ̂({1, ..., q+1}, {1, ..., q, n+1})⁰) / det(Σ̂({1, ..., q+1, n+1})).

The right hand sides of formulas (16) and (17) are almost the same. The difference is only in the first column of the matrix in the numerator; the other elements of the two matrices are identical and do not depend on the elements s_{q,q+2}, ..., s_{q,n} and s_{q+1,q+2}, ..., s_{q+1,n} in the first columns of the two matrices. According to the induction assumption,

    σ̂_{q+1,n+1} = (-1)^{n-k+1} det(Σ̂({1, ..., k}, {1, ..., q, q+2, ..., k, n+1})⁰) / det(Σ̂({1, ..., k, n+1})).

Therefore we conclude that

    σ̂_{q,n+1} = (-1)^{n-k+1} det(Σ̂({1, ..., k}, {1, ..., q-1, q+1, ..., k, n+1})⁰) / det(Σ̂({1, ..., k, n+1})).

Consequently the representation (10) holds for i = q, hence it is true by induction.

Let us determine the elements σ̂^{i,n+1}, i = 1, ..., k, of the matrix Σ̂^{-1} which correspond to σ̂_{i,n+1}, i = 1, ..., k, given by (9). By definition,

    σ̂^{i,n+1} = (-1)^{n-i+1} det(Σ̂({i}, {n+1})) / det(Σ̂), i = 1, ..., k.

It is easy to see that

    (-1)^{n} det(Σ̂({1}, {n+1})) = (-1)^{n} [σ̂_{1,n+1} (-1)^{n+1} det(Σ̂({1, n+1})) + det(Σ̂({1}, {n+1})⁰)]
    = (-1)^{n} [-det(Σ̂({1}, {n+1})⁰) + det(Σ̂({1}, {n+1})⁰)] = 0,

where we used the representation (9) for σ̂_{1,n+1}.

Consequently σ̂^{1,n+1} = 0. We shall prove by induction that

(18)    σ̂^{i,n+1} = 0, i = 1, ..., k.

Assume that σ̂^{i,n+1} = 0 for i = 1, ..., q-1, where q is an integer, 2 ≤ q ≤ k, and consider the matrices

    A = Σ̂({q, ..., n+1}),    B = Σ̂({q, ..., n+1}, {1, ..., q-1}),
    C = Σ̂({1, ..., q-1}, {q, ..., n+1}),    D = Σ̂({1, ..., q-1}).

The matrix Σ̂ can be written as a block matrix

    Σ̂ = ( A  B )
         ( C  D ).

It can be easily verified that

    Σ̂^{-1} = ( (A - B D^{-1} C)^{-1}                    -(A - B D^{-1} C)^{-1} B D^{-1}                    )
              ( -D^{-1} C (A - B D^{-1} C)^{-1}    D^{-1} + D^{-1} C (A - B D^{-1} C)^{-1} B D^{-1} )
            = ( X  Y )
              ( Z  U ).

The element in the lower left corner of the matrix U = D^{-1} + D^{-1} C (A - B D^{-1} C)^{-1} B D^{-1} is exactly the element σ̂^{q,n+1}, and we have to prove that it equals zero. At first we shall show that the element d^{n-q+2,1} in the lower left corner of the matrix D^{-1} equals zero. Indeed,

    d^{n-q+2,1} = (-1)^{n-q+1} det(D({1}, {n-q+2})) / det(D)

and it is easy to see that

    det(D({1}, {n-q+2})) = det(Σ̂({1, ..., q}, {1, ..., q-1, n+1}))
    = σ̂_{q,n+1} (-1)^{n-q} det(Σ̂({1, ..., q, n+1})) + det(Σ̂({1, ..., q}, {1, ..., q-1, n+1})⁰)
    = -det(Σ̂({1, ..., q}, {1, ..., q-1, n+1})⁰) + det(Σ̂({1, ..., q}, {1, ..., q-1, n+1})⁰) = 0.

According to the induction assumption, all elements in the last row of the matrix Z = -D^{-1} C (A - B D^{-1} C)^{-1} are equal to zero (they are the elements σ̂^{i,n+1}, i = 1, ..., q-1). Hence the elements in the last row of the matrix D^{-1} C (A - B D^{-1} C)^{-1} B D^{-1} = -Z B D^{-1} are equal to zero too. Adding the matrix D^{-1} to the last one, we obtain the matrix U and conclude that the element σ̂^{q,n+1} in its lower left corner is equal to zero. Consequently the equalities (18) are true by induction.

The integrals Ĵ and Ĵ_i then become

    Ĵ = ∫ ··· ∫ (det(R))^{(m-n-3)/2} I_{R>0} dr_{1,n+1} ··· dr_{k,n+1}

and

    Ĵ_i = √(s_ii) ∫ ··· ∫ (det(R))^{(m-n-3)/2} r_{i,n+1} I_{R>0} dr_{1,n+1} ··· dr_{k,n+1}.

Applying Proposition 1 with m-1 in place of m and n+1 in place of n, we get that

(19)    Ĵ = [Γ(1/2)]^{k} Γ((m-n-1)/2) / Γ((m-n+k-1)/2)
            · (det(R({n+1})))^{(m-n-2)/2} (det(R({1, ..., k})))^{(m-n+k-3)/2} / (det(R({1, ..., k, n+1})))^{(m-n+k-2)/2}.

Let i be an integer, 1 ≤ i ≤ k. If i ≥ 2, from Proposition 1 we obtain that

(20)    ∫ ··· ∫ (det(R))^{(m-n-3)/2} I_{R>0} dr_{1,n+1} ··· dr_{i-1,n+1}
        = [Γ(1/2)]^{i-1} Γ((m-n-1)/2) / Γ((m-n+i-2)/2)
          · (det(R({n+1})))^{(m-n-2)/2} (det(R({1, ..., i-1})))^{(m-n+i-4)/2} / (det(R({1, ..., i-1, n+1})))^{(m-n+i-3)/2} · I_{R({1,...,i-1})>0}.
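Formula (19) can be sanity-checked by direct quadrature in the smallest nontrivial case n = 2, k = 1 (dimension 3, one missing correlation r_{1,3}). This is our own check, not part of the paper; the values of r_12, r_23 and m are arbitrary, and the factor det(R({1, ..., k, n+1})) is the determinant of a 1x1 identity here.

```python
# Compare the closed form (19) with numeric integration of
# det(R)^{(m-n-3)/2} over the admissible range of the missing r_{1,3}.
import math
import numpy as np

r12, r23 = 0.3, -0.5
m, n, k = 8, 2, 1

# Range of r_{1,3} keeping R positive definite (Proposition 2 bounds).
half = math.sqrt((1 - r12**2) * (1 - r23**2))
t = np.linspace(r12 * r23 - half, r12 * r23 + half, 200001)
det = (1 - r12**2) * (1 - r23**2) - (t - r12 * r23) ** 2   # det of the 3x3 R
det = np.clip(det, 0.0, None)                              # guard endpoint rounding
f = det ** ((m - n - 3) / 2)
h = t[1] - t[0]
J_quad = h * (f.sum() - 0.5 * (f[0] + f[-1]))              # trapezoidal rule

c = math.pi ** (k / 2) * math.gamma((m - n - 1) / 2) / math.gamma((m - n + k - 1) / 2)
J_formula = (c * (1 - r12**2) ** ((m - n - 2) / 2)
               * (1 - r23**2) ** ((m - n + k - 3) / 2))
print(abs(J_quad - J_formula) / J_formula)
```

Here det(R({n+1})) = 1 - r12² and det(R({1})) = 1 - r23², so the quadrature should reproduce (19) to the accuracy of the grid.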

For 1 ≤ i ≤ k-1, let us move in the matrix R the i-th row after the k-th row and do the same with the i-th column. Denote the new matrix by R*. It is easy to see that

    det(R*({1, ..., i-1})) = det(R({1, ..., i-1})).

It can be easily shown that the matrix R*({1, ..., i-1}) is positive definite iff the matrix R({1, ..., i-1}) is positive definite; the proof uses the fact that the two matrices have one and the same eigenvalues. From (20) it follows that

    ∫ ··· ∫ (det(R*({1, ..., i-1})))^{(m-n+i-4)/2} I_{R*({1,...,i-1})>0} dr_{i+1,n+1} ··· dr_{k,n+1}
    = [Γ(1/2)]^{k-i} Γ((m-n+i-2)/2) / Γ((m-n+k-2)/2) · I_{R({1,...,i-1,i+1,...,k})>0}
      · (det(R({1, ..., i-1, n+1})))^{(m-n+i-3)/2} (det(R({1, ..., i-1, i+1, ..., k})))^{(m-n+k-4)/2}
        / (det(R({1, ..., i-1, i+1, ..., k, n+1})))^{(m-n+k-3)/2}.

Consequently, for 1 ≤ i ≤ k,

    ∫ ··· ∫ (det(R))^{(m-n-3)/2} I_{R>0} dr_{1,n+1} ··· dr_{i-1,n+1} dr_{i+1,n+1} ··· dr_{k,n+1}
    = [Γ(1/2)]^{k-1} Γ((m-n-1)/2) / Γ((m-n+k-2)/2) · I_{R({1,...,i-1,i+1,...,k})>0}
      · (det(R({n+1})))^{(m-n-2)/2} (det(R({1, ..., i-1, i+1, ..., k})))^{(m-n+k-4)/2}
        / (det(R({1, ..., i-1, i+1, ..., k, n+1})))^{(m-n+k-3)/2}.

Hence

    Ĵ_i = [Γ(1/2)]^{k-1} Γ((m-n-1)/2) / Γ((m-n+k-2)/2) · √(s_ii)
          · (det(R({n+1})))^{(m-n-2)/2} / (det(R({1, ..., i-1, i+1, ..., k, n+1})))^{(m-n+k-3)/2}
          · ∫ r_{i,n+1} (det(R({1, ..., i-1, i+1, ..., k})))^{(m-n+k-4)/2} I_{R({1,...,i-1,i+1,...,k})>0} dr_{i,n+1}.

Let us denote the above integral by H. The next proposition can be found in [10].

Proposition 2. Let A be a real symmetric matrix of size n, and let i, j be fixed integers such that 1 ≤ i < j ≤ n. The matrix A is positive definite iff the matrices A({i}) and A({j}) are positive definite and the element a_ij satisfies the inequalities

    [(-1)^{j-i} det(A({i}, {j})⁰) - √(det(A({i})) det(A({j})))] / det(A({i, j})) < a_ij
       < [(-1)^{j-i} det(A({i}, {j})⁰) + √(det(A({i})) det(A({j})))] / det(A({i, j})).

Let us make in the integral H the substitution

    r_{i,n+1} = [(-1)^{n-k+1} det(R({1, ..., i-1, i+1, ..., k, n+1}, {1, ..., k})⁰)
                 + t √(det(R({1, ..., i-1, i+1, ..., k, n+1})) det(R({1, ..., k})))] / det(R({1, ..., k, n+1})).

From Proposition 2 it follows that the new variable t runs from -1 to 1. Applying Theorem 2, we obtain the representation

(21)    det(R({1, ..., i-1, i+1, ..., k})) = -det(R({1, ..., k, n+1})) r_{i,n+1}²
        + 2(-1)^{n-k+1} det(R({1, ..., i-1, i+1, ..., k, n+1}, {1, ..., k})⁰) r_{i,n+1}
        + det(R({1, ..., i-1, i+1, ..., k})⁰_{i,n+1}).

From the Sylvester determinant identity we have

(22)    det(R({1, ..., i-1, i+1, ..., k})⁰_{i,n+1}) det(R({1, ..., k, n+1}))
        = det(R({1, ..., i-1, i+1, ..., k, n+1})) det(R({1, ..., k}))
          - (det(R({1, ..., i-1, i+1, ..., k, n+1}, {1, ..., k})⁰))².

Using (21) and (22), it can be shown that

    det(R({1, ..., i-1, i+1, ..., k})) = det(R({1, ..., i-1, i+1, ..., k, n+1})) det(R({1, ..., k})) (1 - t²) / det(R({1, ..., k, n+1})).

The integral H can now be found, and we obtain that

    Ĵ_i = [Γ(1/2)]^{k} Γ((m-n-1)/2) / Γ((m-n+k-1)/2) · √(s_ii) (-1)^{n-k+1}
          · det(R({1, ..., i-1, i+1, ..., k, n+1}, {1, ..., k})⁰)
          · (det(R({n+1})))^{(m-n-2)/2} (det(R({1, ..., k})))^{(m-n+k-3)/2} / (det(R({1, ..., k, n+1})))^{(m-n+k)/2}.

At last, it can be easily checked that σ̂_{i,n+1}, i = 1, ..., k, defined by (9), or equivalently by (10), give a solution of the equations (8). To prove that the likelihood function L_1 indeed attains a maximum, one can calculate the Hessian matrix of the second derivatives of L_1 with respect to σ^{i,n+1}, i = 1, ..., k, and ascertain that it is a negative definite matrix when σ̂_{i,n+1}, i = 1, ..., k, are defined by (9).

We now have to substitute the obtained maximum likelihood estimates for the unknown parameters in L_1. The determinant of the matrix Σ̂ can be found using the next statement.

Theorem 4. Let A be a symmetric matrix of size n and let the element a_1n satisfy the equality

(23)    a_1n = (-1)^{n+1} det(A({1}, {n})⁰) / det(A({1, n})).

Then

(24)    det(A) = det(A({1})) det(A({n})) / det(A({1, n})).

P r o o f. From Theorem 2 it follows that

(25)    det(A) = -det(A({1, n})) a_1n² + 2(-1)^{n+1} det(A({1}, {n})⁰) a_1n + det(A⁰_{1,n}).

From the Sylvester determinant identity we have

(26)    det(A⁰_{1,n}) det(A({1, n})) = det(A({1})) det(A({n})) - (det(A({1}, {n})⁰))².

Applying (26) and the representation (23) for a_1n in (25), we get (24).

In the matrix Σ̂ the element σ̂_{1,n+1}, according to the representation (9), satisfies the condition (23). Therefore from Theorem 4 it follows that

    det(Σ̂) = det(Σ̂({1})) det(Σ̂({n+1})) / det(Σ̂({1, n+1})).

Consider the matrix Σ̂({1}). Its element σ̂_{2,n+1}, according to the representation (9), satisfies the equality (23). Hence

    det(Σ̂({1})) = det(Σ̂({1, 2})) det(Σ̂({1, n+1})) / det(Σ̂({1, 2, n+1})).

Consequently,

    det(Σ̂) = det(Σ̂({1, 2})) det(Σ̂({n+1})) / det(Σ̂({1, 2, n+1})).

Proceeding this way, we obtain finally

    det(Σ̂) = det(Σ̂({1, ..., k})) det(Σ̂({n+1})) / det(Σ̂({1, ..., k, n+1}))
            = s_11 ··· s_{n+1,n+1} det(R({1, ..., k})) det(R({n+1})) / det(R({1, ..., k, n+1})).

Using the equalities (18), the expression Ξ defined by (1) can be written in the form

    Ξ = ∑_{i=1}^{n+1} s_ii σ̂^{ii} + 2 ∑_{i=1}^{n} ∑_{j=i+1}^{n} s_ij σ̂^{ij} + 2 ∑_{i=k+1}^{n} s_{i,n+1} σ̂^{i,n+1} + 2 ∑_{i=1}^{k} σ̂_{i,n+1} σ̂^{i,n+1}
      = tr(Σ̂ Σ̂^{-1}) = n + 1.
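Theorem 4 lends itself to a direct numeric test: pick a symmetric matrix, overwrite a_1n by (23), and compare the two sides of (24). The helper `det_drop` and the matrix values are our own illustrative choices.

```python
# Numeric check of Theorem 4: with a_1n chosen as in (23), det(A) factors
# as det(A({1})) det(A({n})) / det(A({1,n})) as stated in (24).
import numpy as np

def det_drop(a, rows, cols, zero=None):
    rk = [i for i in range(a.shape[0]) if i + 1 not in set(rows)]
    ck = [j for j in range(a.shape[1]) if j + 1 not in set(cols)]
    m = a[np.ix_(rk, ck)].copy()
    if zero is not None:                         # zero the 1-based (r, c) entry
        m[rk.index(zero[0] - 1), ck.index(zero[1] - 1)] = 0.0
    return np.linalg.det(m)

rng = np.random.default_rng(2)
nn = 5
B = rng.normal(size=(nn, nn))
A = B @ B.T + nn * np.eye(nn)                    # symmetric, positive definite

# Overwrite a_1n (and a_n1) with the value prescribed by (23).
a1n = ((-1) ** (nn + 1) * det_drop(A, [1], [nn], zero=(nn, 1))
       / det_drop(A, [1, nn], [1, nn]))
A[0, nn - 1] = A[nn - 1, 0] = a1n

lhs = np.linalg.det(A)
rhs = (det_drop(A, [1], [1]) * det_drop(A, [nn], [nn])
       / det_drop(A, [1, nn], [1, nn]))
print(abs(lhs - rhs) / abs(rhs))
```

Iterating this factorization, as in the text, peels off one index at a time and yields the closed form for det(Σ̂).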

So for the maximum L_1* of the likelihood function L_1 we obtain

    L_1* = (m-1)^{(n+1)(m-3)/2} e^{-(n+1)(m-1)/2}
           / [2^{(n+1)(m-1)/2} π^{(n(n+1)-2k)/4} Γ((m-n+k-1)/2) ∏_{i=1}^{n} Γ((m-i)/2) s_11 ··· s_{n+1,n+1}]
           · (det(R({1, ..., k, n+1})))^{(n-k+1)/2} / [(det(R({1, ..., k})))^{(n-k+2)/2} (det(R({n+1})))^{(n+1)/2}].

Finally we calculate the likelihood ratio

    L_0* / L_1* = [det(R({1, ..., k})) det(R({n+1})) / det(R({1, ..., k, n+1}))]^{(m-1)/2}.

Obviously, for the testing of H_0 against H_1 the elements s_11, ..., s_{n+1,n+1} of the covariance matrix are unnecessary. So instead of the covariance matrix S with missing elements we need only the correlation matrix R with the same elements missing. Instead of the likelihood ratio we can use the log-likelihood

    -2 log(L_0* / L_1*) = (m-1) log[det(R({1, ..., k, n+1})) / (det(R({1, ..., k})) det(R({n+1})))].

Then an asymptotic rejection region can be given by computing the 1-α quantile χ²_{1-α; n(n+1)/2} of the χ² distribution with n(n+1)/2 degrees of freedom:

    (m-1) log[det(R({1, ..., k, n+1})) / (det(R({1, ..., k})) det(R({n+1})))] > χ²_{1-α; n(n+1)/2}.

REFERENCES

[1] J. Heckman, B. Honore. The empirical content of the Roy model. Econometrica 58 (1990).

[2] J. Heckman, J. Tobias, E. Vytlacil. Simple estimators for treatment parameters in a latent variable framework. Review of Economics and Statistics 85, 3 (2003).

[3] S. Karlin. Total Positivity, Volume 1. Stanford University Press, Stanford, CA, 1968.

[4] G. Koop, D. J. Poirier. Learning about the across-regime correlation in switching regression models. J. Econometrics 78 (1997).

[5] D. J. Poirier, J. L. Tobias. On the predictive distribution of outcome gains in the presence of an unidentified parameter. J. Bus. Econom. Statist. 21 (2003).

[6] S. Sareen. Reference Bayesian inference in nonregular models. J. Econometrics 113, 2 (2003).

[7] J. L. Tobias. Estimation, learning and parameters of interest in a multiple outcome selection model. Econometric Rev. 25, 1 (2006), 1-40.

[8] J. L. Tobias, D. J. Poirier. On the predictive distributions of outcome gains in the presence of an unidentified parameter. J. Bus. Econom. Statist. 21, 2 (2003).

[9] E. I. Veleva. Conditional independence of joint sample correlation coefficients of independent normally distributed random variables. Mathematics and Education in Mathematics 35 (2006).

[10] E. I. Veleva. Positive definite random matrices. Compt. Rend. Acad. Bulg. Sci. 59, 4 (2006).

[11] W. P. M. Vijverberg. Measuring the unidentified parameter of the extended Roy model of selectivity. J. Econometrics 57 (1993).

Rousse University "A. Kanchev"
Department of Numerical Methods and Statistics
8 Studentska Str.
Rousse, Bulgaria
e-mail: eveleva@abv.bg

Received July 7, 2007
Revised December 13, 2007


More information

1 Data Arrays and Decompositions

1 Data Arrays and Decompositions 1 Data Arrays and Decompositions 1.1 Variance Matrices and Eigenstructure Consider a p p positive definite and symmetric matrix V - a model parameter or a sample variance matrix. The eigenstructure is

More information

18.S34 linear algebra problems (2007)

18.S34 linear algebra problems (2007) 18.S34 linear algebra problems (2007) Useful ideas for evaluating determinants 1. Row reduction, expanding by minors, or combinations thereof; sometimes these are useful in combination with an induction

More information

SPRING OF 2008 D. DETERMINANTS

SPRING OF 2008 D. DETERMINANTS 18024 SPRING OF 2008 D DETERMINANTS In many applications of linear algebra to calculus and geometry, the concept of a determinant plays an important role This chapter studies the basic properties of determinants

More information

CHARACTERIZATIONS. is pd/psd. Possible for all pd/psd matrices! Generating a pd/psd matrix: Choose any B Mn, then

CHARACTERIZATIONS. is pd/psd. Possible for all pd/psd matrices! Generating a pd/psd matrix: Choose any B Mn, then LECTURE 6: POSITIVE DEFINITE MATRICES Definition: A Hermitian matrix A Mn is positive definite (pd) if x Ax > 0 x C n,x 0 A is positive semidefinite (psd) if x Ax 0. Definition: A Mn is negative (semi)definite

More information

Question: Given an n x n matrix A, how do we find its eigenvalues? Idea: Suppose c is an eigenvalue of A, then what is the determinant of A-cI?

Question: Given an n x n matrix A, how do we find its eigenvalues? Idea: Suppose c is an eigenvalue of A, then what is the determinant of A-cI? Section 5. The Characteristic Polynomial Question: Given an n x n matrix A, how do we find its eigenvalues? Idea: Suppose c is an eigenvalue of A, then what is the determinant of A-cI? Property The eigenvalues

More information

Sudoku and Matrices. Merciadri Luca. June 28, 2011

Sudoku and Matrices. Merciadri Luca. June 28, 2011 Sudoku and Matrices Merciadri Luca June 28, 2 Outline Introduction 2 onventions 3 Determinant 4 Erroneous Sudoku 5 Eigenvalues Example 6 Transpose Determinant Trace 7 Antisymmetricity 8 Non-Normality 9

More information

Linear Regression. In this problem sheet, we consider the problem of linear regression with p predictors and one intercept,

Linear Regression. In this problem sheet, we consider the problem of linear regression with p predictors and one intercept, Linear Regression In this problem sheet, we consider the problem of linear regression with p predictors and one intercept, y = Xβ + ɛ, where y t = (y 1,..., y n ) is the column vector of target values,

More information

det(ka) = k n det A.

det(ka) = k n det A. Properties of determinants Theorem. If A is n n, then for any k, det(ka) = k n det A. Multiplying one row of A by k multiplies the determinant by k. But ka has every row multiplied by k, so the determinant

More information

Finding normalized and modularity cuts by spectral clustering. Ljubjana 2010, October

Finding normalized and modularity cuts by spectral clustering. Ljubjana 2010, October Finding normalized and modularity cuts by spectral clustering Marianna Bolla Institute of Mathematics Budapest University of Technology and Economics marib@math.bme.hu Ljubjana 2010, October Outline Find

More information

Multivariate Analysis and Likelihood Inference

Multivariate Analysis and Likelihood Inference Multivariate Analysis and Likelihood Inference Outline 1 Joint Distribution of Random Variables 2 Principal Component Analysis (PCA) 3 Multivariate Normal Distribution 4 Likelihood Inference Joint density

More information

Determinants: Introduction and existence

Determinants: Introduction and existence Math 5327 Spring 2018 Determinants: Introduction and existence In this set of notes I try to give the general theory of determinants in a fairly abstract setting. I will start with the statement of the

More information

A Few Special Distributions and Their Properties

A Few Special Distributions and Their Properties A Few Special Distributions and Their Properties Econ 690 Purdue University Justin L. Tobias (Purdue) Distributional Catalog 1 / 20 Special Distributions and Their Associated Properties 1 Uniform Distribution

More information

Determinants Chapter 3 of Lay

Determinants Chapter 3 of Lay Determinants Chapter of Lay Dr. Doreen De Leon Math 152, Fall 201 1 Introduction to Determinants Section.1 of Lay Given a square matrix A = [a ij, the determinant of A is denoted by det A or a 11 a 1j

More information

Statistical Inference On the High-dimensional Gaussian Covarianc

Statistical Inference On the High-dimensional Gaussian Covarianc Statistical Inference On the High-dimensional Gaussian Covariance Matrix Department of Mathematical Sciences, Clemson University June 6, 2011 Outline Introduction Problem Setup Statistical Inference High-Dimensional

More information

Diagonalizing Matrices

Diagonalizing Matrices Diagonalizing Matrices Massoud Malek A A Let A = A k be an n n non-singular matrix and let B = A = [B, B,, B k,, B n ] Then A n A B = A A 0 0 A k [B, B,, B k,, B n ] = 0 0 = I n 0 A n Notice that A i B

More information

Matrices and Linear Algebra

Matrices and Linear Algebra Contents Quantitative methods for Economics and Business University of Ferrara Academic year 2017-2018 Contents 1 Basics 2 3 4 5 Contents 1 Basics 2 3 4 5 Contents 1 Basics 2 3 4 5 Contents 1 Basics 2

More information

Chapter 3. Differentiable Mappings. 1. Differentiable Mappings

Chapter 3. Differentiable Mappings. 1. Differentiable Mappings Chapter 3 Differentiable Mappings 1 Differentiable Mappings Let V and W be two linear spaces over IR A mapping L from V to W is called a linear mapping if L(u + v) = Lu + Lv for all u, v V and L(λv) =

More information

MA 1B ANALYTIC - HOMEWORK SET 7 SOLUTIONS

MA 1B ANALYTIC - HOMEWORK SET 7 SOLUTIONS MA 1B ANALYTIC - HOMEWORK SET 7 SOLUTIONS 1. (7 pts)[apostol IV.8., 13, 14] (.) Let A be an n n matrix with characteristic polynomial f(λ). Prove (by induction) that the coefficient of λ n 1 in f(λ) is

More information

Properties of the Determinant Function

Properties of the Determinant Function Properties of the Determinant Function MATH 322, Linear Algebra I J. Robert Buchanan Department of Mathematics Spring 2015 Overview Today s discussion will illuminate some of the properties of the determinant:

More information

SOLUTION OF GENERALIZED LINEAR VECTOR EQUATIONS IN IDEMPOTENT ALGEBRA

SOLUTION OF GENERALIZED LINEAR VECTOR EQUATIONS IN IDEMPOTENT ALGEBRA , pp. 23 36, 2006 Vestnik S.-Peterburgskogo Universiteta. Matematika UDC 519.63 SOLUTION OF GENERALIZED LINEAR VECTOR EQUATIONS IN IDEMPOTENT ALGEBRA N. K. Krivulin The problem on the solutions of homogeneous

More information

Decomposable and Directed Graphical Gaussian Models

Decomposable and Directed Graphical Gaussian Models Decomposable Decomposable and Directed Graphical Gaussian Models Graphical Models and Inference, Lecture 13, Michaelmas Term 2009 November 26, 2009 Decomposable Definition Basic properties Wishart density

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.

More information

The Determinant. Chapter Definition of the Determinant

The Determinant. Chapter Definition of the Determinant Chapter 5 The Determinant 5.1 Definition of the Determinant Given a n n matrix A, we would like to define its determinant. We already have a definition for the 2 2 matrix. We define the determinant of

More information

MATH 2030: EIGENVALUES AND EIGENVECTORS

MATH 2030: EIGENVALUES AND EIGENVECTORS MATH 2030: EIGENVALUES AND EIGENVECTORS Determinants Although we are introducing determinants in the context of matrices, the theory of determinants predates matrices by at least two hundred years Their

More information

arxiv: v1 [math.co] 3 Nov 2014

arxiv: v1 [math.co] 3 Nov 2014 SPARSE MATRICES DESCRIBING ITERATIONS OF INTEGER-VALUED FUNCTIONS BERND C. KELLNER arxiv:1411.0590v1 [math.co] 3 Nov 014 Abstract. We consider iterations of integer-valued functions φ, which have no fixed

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

c c c c c c c c c c a 3x3 matrix C= has a determinant determined by

c c c c c c c c c c a 3x3 matrix C= has a determinant determined by Linear Algebra Determinants and Eigenvalues Introduction: Many important geometric and algebraic properties of square matrices are associated with a single real number revealed by what s known as the determinant.

More information

Formula for the inverse matrix. Cramer s rule. Review: 3 3 determinants can be computed expanding by any row or column

Formula for the inverse matrix. Cramer s rule. Review: 3 3 determinants can be computed expanding by any row or column Math 20F Linear Algebra Lecture 18 1 Determinants, n n Review: The 3 3 case Slide 1 Determinants n n (Expansions by rows and columns Relation with Gauss elimination matrices: Properties) Formula for the

More information

Therefore, A and B have the same characteristic polynomial and hence, the same eigenvalues.

Therefore, A and B have the same characteristic polynomial and hence, the same eigenvalues. Similar Matrices and Diagonalization Page 1 Theorem If A and B are n n matrices, which are similar, then they have the same characteristic equation and hence the same eigenvalues. Proof Let A and B be

More information

Eigenvectors and Reconstruction

Eigenvectors and Reconstruction Eigenvectors and Reconstruction Hongyu He Department of Mathematics Louisiana State University, Baton Rouge, USA hongyu@mathlsuedu Submitted: Jul 6, 2006; Accepted: Jun 14, 2007; Published: Jul 5, 2007

More information

Statistical Inference with Monotone Incomplete Multivariate Normal Data

Statistical Inference with Monotone Incomplete Multivariate Normal Data Statistical Inference with Monotone Incomplete Multivariate Normal Data p. 1/4 Statistical Inference with Monotone Incomplete Multivariate Normal Data This talk is based on joint work with my wonderful

More information

Notes on the Multivariate Normal and Related Topics

Notes on the Multivariate Normal and Related Topics Version: July 10, 2013 Notes on the Multivariate Normal and Related Topics Let me refresh your memory about the distinctions between population and sample; parameters and statistics; population distributions

More information

Counting Matrices Over a Finite Field With All Eigenvalues in the Field

Counting Matrices Over a Finite Field With All Eigenvalues in the Field Counting Matrices Over a Finite Field With All Eigenvalues in the Field Lisa Kaylor David Offner Department of Mathematics and Computer Science Westminster College, Pennsylvania, USA kaylorlm@wclive.westminster.edu

More information

Non-absolutely monotonic functions which preserve non-negative definiteness

Non-absolutely monotonic functions which preserve non-negative definiteness Non-absolutely monotonic functions which preserve non-negative definiteness Alexander Belton (Joint work with Dominique Guillot, Apoorva Khare and Mihai Putinar) Department of Mathematics and Statistics

More information

Cointegrated VAR s. Eduardo Rossi University of Pavia. November Rossi Cointegrated VAR s Financial Econometrics / 56

Cointegrated VAR s. Eduardo Rossi University of Pavia. November Rossi Cointegrated VAR s Financial Econometrics / 56 Cointegrated VAR s Eduardo Rossi University of Pavia November 2013 Rossi Cointegrated VAR s Financial Econometrics - 2013 1 / 56 VAR y t = (y 1t,..., y nt ) is (n 1) vector. y t VAR(p): Φ(L)y t = ɛ t The

More information

On Expected Gaussian Random Determinants

On Expected Gaussian Random Determinants On Expected Gaussian Random Determinants Moo K. Chung 1 Department of Statistics University of Wisconsin-Madison 1210 West Dayton St. Madison, WI 53706 Abstract The expectation of random determinants whose

More information

Numerical methods for solving linear systems

Numerical methods for solving linear systems Chapter 2 Numerical methods for solving linear systems Let A C n n be a nonsingular matrix We want to solve the linear system Ax = b by (a) Direct methods (finite steps); Iterative methods (convergence)

More information

2 b 3 b 4. c c 2 c 3 c 4

2 b 3 b 4. c c 2 c 3 c 4 OHSx XM511 Linear Algebra: Multiple Choice Questions for Chapter 4 a a 2 a 3 a 4 b b 1. What is the determinant of 2 b 3 b 4 c c 2 c 3 c 4? d d 2 d 3 d 4 (a) abcd (b) abcd(a b)(b c)(c d)(d a) (c) abcd(a

More information

Math 304 Fall 2018 Exam 3 Solutions 1. (18 Points, 3 Pts each part) Let A, B, C, D be square matrices of the same size such that

Math 304 Fall 2018 Exam 3 Solutions 1. (18 Points, 3 Pts each part) Let A, B, C, D be square matrices of the same size such that Math 304 Fall 2018 Exam 3 Solutions 1. (18 Points, 3 Pts each part) Let A, B, C, D be square matrices of the same size such that det(a) = 2, det(b) = 2, det(c) = 1, det(d) = 4. 2 (a) Compute det(ad)+det((b

More information

Lecture 5: Hypothesis tests for more than one sample

Lecture 5: Hypothesis tests for more than one sample 1/23 Lecture 5: Hypothesis tests for more than one sample Måns Thulin Department of Mathematics, Uppsala University thulin@math.uu.se Multivariate Methods 8/4 2011 2/23 Outline Paired comparisons Repeated

More information

Testing Block-Diagonal Covariance Structure for High-Dimensional Data

Testing Block-Diagonal Covariance Structure for High-Dimensional Data Testing Block-Diagonal Covariance Structure for High-Dimensional Data MASASHI HYODO 1, NOBUMICHI SHUTOH 2, TAKAHIRO NISHIYAMA 3, AND TATJANA PAVLENKO 4 1 Department of Mathematical Sciences, Graduate School

More information

k=1 ( 1)k+j M kj detm kj. detm = ad bc. = 1 ( ) 2 ( )+3 ( ) = = 0

k=1 ( 1)k+j M kj detm kj. detm = ad bc. = 1 ( ) 2 ( )+3 ( ) = = 0 4 Determinants The determinant of a square matrix is a scalar (i.e. an element of the field from which the matrix entries are drawn which can be associated to it, and which contains a surprisingly large

More information

DETERMINANTS. , x 2 = a 11b 2 a 21 b 1

DETERMINANTS. , x 2 = a 11b 2 a 21 b 1 DETERMINANTS 1 Solving linear equations The simplest type of equations are linear The equation (1) ax = b is a linear equation, in the sense that the function f(x) = ax is linear 1 and it is equated to

More information

Summary of Chapters 7-9

Summary of Chapters 7-9 Summary of Chapters 7-9 Chapter 7. Interval Estimation 7.2. Confidence Intervals for Difference of Two Means Let X 1,, X n and Y 1, Y 2,, Y m be two independent random samples of sizes n and m from two

More information

PROBLEMS ON LINEAR ALGEBRA

PROBLEMS ON LINEAR ALGEBRA 1 Basic Linear Algebra PROBLEMS ON LINEAR ALGEBRA 1. Let M n be the (2n + 1) (2n + 1) for which 0, i = j (M n ) ij = 1, i j 1,..., n (mod 2n + 1) 1, i j n + 1,..., 2n (mod 2n + 1). Find the rank of M n.

More information

A PECULIAR COIN-TOSSING MODEL

A PECULIAR COIN-TOSSING MODEL A PECULIAR COIN-TOSSING MODEL EDWARD J. GREEN 1. Coin tossing according to de Finetti A coin is drawn at random from a finite set of coins. Each coin generates an i.i.d. sequence of outcomes (heads or

More information

MATH 1210 Assignment 4 Solutions 16R-T1

MATH 1210 Assignment 4 Solutions 16R-T1 MATH 1210 Assignment 4 Solutions 16R-T1 Attempt all questions and show all your work. Due November 13, 2015. 1. Prove using mathematical induction that for any n 2, and collection of n m m matrices A 1,

More information

NOTES FOR LINEAR ALGEBRA 133

NOTES FOR LINEAR ALGEBRA 133 NOTES FOR LINEAR ALGEBRA 33 William J Anderson McGill University These are not official notes for Math 33 identical to the notes projected in class They are intended for Anderson s section 4, and are 2

More information

Part IB Statistics. Theorems with proof. Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua. Lent 2015

Part IB Statistics. Theorems with proof. Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua. Lent 2015 Part IB Statistics Theorems with proof Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly)

More information

Optimal Multiple Decision Statistical Procedure for Inverse Covariance Matrix

Optimal Multiple Decision Statistical Procedure for Inverse Covariance Matrix Optimal Multiple Decision Statistical Procedure for Inverse Covariance Matrix Alexander P. Koldanov and Petr A. Koldanov Abstract A multiple decision statistical problem for the elements of inverse covariance

More information

Determinants. Recall that the 2 2 matrix a b c d. is invertible if

Determinants. Recall that the 2 2 matrix a b c d. is invertible if Determinants Recall that the 2 2 matrix a b c d is invertible if and only if the quantity ad bc is nonzero. Since this quantity helps to determine the invertibility of the matrix, we call it the determinant.

More information

MIMO Capacities : Eigenvalue Computation through Representation Theory

MIMO Capacities : Eigenvalue Computation through Representation Theory MIMO Capacities : Eigenvalue Computation through Representation Theory Jayanta Kumar Pal, Donald Richards SAMSI Multivariate distributions working group Outline 1 Introduction 2 MIMO working model 3 Eigenvalue

More information

Multivariate Time Series: Part 4

Multivariate Time Series: Part 4 Multivariate Time Series: Part 4 Cointegration Gerald P. Dwyer Clemson University March 2016 Outline 1 Multivariate Time Series: Part 4 Cointegration Engle-Granger Test for Cointegration Johansen Test

More information

SIMULTANEOUS CONFIDENCE INTERVALS AMONG k MEAN VECTORS IN REPEATED MEASURES WITH MISSING DATA

SIMULTANEOUS CONFIDENCE INTERVALS AMONG k MEAN VECTORS IN REPEATED MEASURES WITH MISSING DATA SIMULTANEOUS CONFIDENCE INTERVALS AMONG k MEAN VECTORS IN REPEATED MEASURES WITH MISSING DATA Kazuyuki Koizumi Department of Mathematics, Graduate School of Science Tokyo University of Science 1-3, Kagurazaka,

More information

Some New Results on Lyapunov-Type Diagonal Stability

Some New Results on Lyapunov-Type Diagonal Stability Some New Results on Lyapunov-Type Diagonal Stability Mehmet Gumus (Joint work with Dr. Jianhong Xu) Department of Mathematics Southern Illinois University Carbondale 12/01/2016 mgumus@siu.edu (SIUC) Lyapunov-Type

More information

Inferences on a Normal Covariance Matrix and Generalized Variance with Monotone Missing Data

Inferences on a Normal Covariance Matrix and Generalized Variance with Monotone Missing Data Journal of Multivariate Analysis 78, 6282 (2001) doi:10.1006jmva.2000.1939, available online at http:www.idealibrary.com on Inferences on a Normal Covariance Matrix and Generalized Variance with Monotone

More information

Econometrics of Panel Data

Econometrics of Panel Data Econometrics of Panel Data Jakub Mućk Meeting # 2 Jakub Mućk Econometrics of Panel Data Meeting # 2 1 / 26 Outline 1 Fixed effects model The Least Squares Dummy Variable Estimator The Fixed Effect (Within

More information

5. Random Vectors. probabilities. characteristic function. cross correlation, cross covariance. Gaussian random vectors. functions of random vectors

5. Random Vectors. probabilities. characteristic function. cross correlation, cross covariance. Gaussian random vectors. functions of random vectors EE401 (Semester 1) 5. Random Vectors Jitkomut Songsiri probabilities characteristic function cross correlation, cross covariance Gaussian random vectors functions of random vectors 5-1 Random vectors we

More information

Connections and Determinants

Connections and Determinants Connections and Determinants Mark Blunk Sam Coskey June 25, 2003 Abstract The relationship between connections and determinants in conductivity networks is discussed We paraphrase Lemma 312, by Curtis

More information

Table of Contents. Multivariate methods. Introduction II. Introduction I

Table of Contents. Multivariate methods. Introduction II. Introduction I Table of Contents Introduction Antti Penttilä Department of Physics University of Helsinki Exactum summer school, 04 Construction of multinormal distribution Test of multinormality with 3 Interpretation

More information

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same.

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same. Introduction Matrix Operations Matrix: An m n matrix A is an m-by-n array of scalars from a field (for example real numbers) of the form a a a n a a a n A a m a m a mn The order (or size) of A is m n (read

More information

Regression #5: Confidence Intervals and Hypothesis Testing (Part 1)

Regression #5: Confidence Intervals and Hypothesis Testing (Part 1) Regression #5: Confidence Intervals and Hypothesis Testing (Part 1) Econ 671 Purdue University Justin L. Tobias (Purdue) Regression #5 1 / 24 Introduction What is a confidence interval? To fix ideas, suppose

More information

Dependence. Practitioner Course: Portfolio Optimization. John Dodson. September 10, Dependence. John Dodson. Outline.

Dependence. Practitioner Course: Portfolio Optimization. John Dodson. September 10, Dependence. John Dodson. Outline. Practitioner Course: Portfolio Optimization September 10, 2008 Before we define dependence, it is useful to define Random variables X and Y are independent iff For all x, y. In particular, F (X,Y ) (x,

More information

0.1 Rational Canonical Forms

0.1 Rational Canonical Forms We have already seen that it is useful and simpler to study linear systems using matrices. But matrices are themselves cumbersome, as they are stuffed with many entries, and it turns out that it s best

More information

The Wishart distribution Scaled Wishart. Wishart Priors. Patrick Breheny. March 28. Patrick Breheny BST 701: Bayesian Modeling in Biostatistics 1/11

The Wishart distribution Scaled Wishart. Wishart Priors. Patrick Breheny. March 28. Patrick Breheny BST 701: Bayesian Modeling in Biostatistics 1/11 Wishart Priors Patrick Breheny March 28 Patrick Breheny BST 701: Bayesian Modeling in Biostatistics 1/11 Introduction When more than two coefficients vary, it becomes difficult to directly model each element

More information

1 Determinants. 1.1 Determinant

1 Determinants. 1.1 Determinant 1 Determinants [SB], Chapter 9, p.188-196. [SB], Chapter 26, p.719-739. Bellow w ll study the central question: which additional conditions must satisfy a quadratic matrix A to be invertible, that is to

More information

DYNAMIC AND COMPROMISE FACTOR ANALYSIS

DYNAMIC AND COMPROMISE FACTOR ANALYSIS DYNAMIC AND COMPROMISE FACTOR ANALYSIS Marianna Bolla Budapest University of Technology and Economics marib@math.bme.hu Many parts are joint work with Gy. Michaletzky, Loránd Eötvös University and G. Tusnády,

More information

Bootstrapping Heteroskedasticity Consistent Covariance Matrix Estimator

Bootstrapping Heteroskedasticity Consistent Covariance Matrix Estimator Bootstrapping Heteroskedasticity Consistent Covariance Matrix Estimator by Emmanuel Flachaire Eurequa, University Paris I Panthéon-Sorbonne December 2001 Abstract Recent results of Cribari-Neto and Zarkos

More information

. a m1 a mn. a 1 a 2 a = a n

. a m1 a mn. a 1 a 2 a = a n Biostat 140655, 2008: Matrix Algebra Review 1 Definition: An m n matrix, A m n, is a rectangular array of real numbers with m rows and n columns Element in the i th row and the j th column is denoted by

More information

Asymptotic Statistics-VI. Changliang Zou

Asymptotic Statistics-VI. Changliang Zou Asymptotic Statistics-VI Changliang Zou Kolmogorov-Smirnov distance Example (Kolmogorov-Smirnov confidence intervals) We know given α (0, 1), there is a well-defined d = d α,n such that, for any continuous

More information

THREE LECTURES ON QUASIDETERMINANTS

THREE LECTURES ON QUASIDETERMINANTS THREE LECTURES ON QUASIDETERMINANTS Robert Lee Wilson Department of Mathematics Rutgers University The determinant of a matrix with entries in a commutative ring is a main organizing tool in commutative

More information

Review (Probability & Linear Algebra)

Review (Probability & Linear Algebra) Review (Probability & Linear Algebra) CE-725 : Statistical Pattern Recognition Sharif University of Technology Spring 2013 M. Soleymani Outline Axioms of probability theory Conditional probability, Joint

More information

a(b + c) = ab + ac a, b, c k

a(b + c) = ab + ac a, b, c k Lecture 2. The Categories of Vector Spaces PCMI Summer 2015 Undergraduate Lectures on Flag Varieties Lecture 2. We discuss the categories of vector spaces and linear maps. Since vector spaces are always

More information

A = , A 32 = n ( 1) i +j a i j det(a i j). (1) j=1

A = , A 32 = n ( 1) i +j a i j det(a i j). (1) j=1 Lecture Notes: Determinant of a Square Matrix Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk 1 Determinant Definition Let A [a ij ] be an

More information

Discrete Applied Mathematics

Discrete Applied Mathematics Discrete Applied Mathematics 194 (015) 37 59 Contents lists available at ScienceDirect Discrete Applied Mathematics journal homepage: wwwelseviercom/locate/dam Loopy, Hankel, and combinatorially skew-hankel

More information

MP 472 Quantum Information and Computation

MP 472 Quantum Information and Computation MP 472 Quantum Information and Computation http://www.thphys.may.ie/staff/jvala/mp472.htm Outline Open quantum systems The density operator ensemble of quantum states general properties the reduced density

More information