Matrix Algebra Review

APPENDIX A

Matrix Algebra Review

This appendix presents some of the basic definitions and properties of matrices. Many of the matrices in the appendix are named the same as the matrices that appear in the book's notation. I do this so that readers may become more familiar with the structural equation notation at the same time they are reviewing matrix algebra. Those wishing a more detailed treatment of matrices should refer to Lunneborg and Abbott (1983), Searle (1982), Graybill (1983), or Hadley (1961).

SCALARS, VECTORS, AND MATRICES

A basic distinction is that between scalars, vectors, and matrices. A single element, value, or quantity is referred to as a scalar. For instance, the covariance between x_1 and x_2, COV(x_1, x_2), is a scalar, as is the number 5 and the regression coefficient β_21. When two or more scalars are written in a row or a column, they form a row or column vector. The order of a row vector is 1 × c, where the 1 indicates one row and c is the number of columns (elements); the order of a column vector is r × 1. Three examples of vectors are

a' = [ 1  4  5  2 ],   γ' = [ γ_11  γ_21 ],   ε = [ ε_1  ε_2  ε_3 ]'

Boldface print shows that a symbol (e.g., ε) refers to a vector (or matrix) rather than a scalar. In the preceding examples ε is a 3 × 1 column vector and a' and γ' are row vectors of orders 1 × 4 and 1 × 2, respectively. The prime superscript distinguishes row vectors (e.g., a') from column vectors (e.g., ε).

The prime stands for the transpose operator, which I will define later.

A matrix is a group of elements arranged into rows and columns. Matrices also are represented by boldface symbols. The order of a matrix is indicated by r × c, where r is the number of rows and c is the number of columns. The order r × c also is the dimension of the matrix. As examples, consider

S = [ s_11  s_12 ]        Γ = [ γ_11  γ_12 ]
    [ s_21  s_22 ]            [ γ_21  γ_22 ]
                              [ γ_31  γ_32 ]

The dimension of S is 2 × 2, whereas Γ is a 3 × 2 matrix. Vectors and scalars are special cases of matrices. A row vector is a 1 × c matrix, a column vector is an r × 1 matrix, and a scalar is a 1 × 1 matrix.

Two matrices, say S and T, are equal if they are of the same order and if every element s_ij of S equals the corresponding element t_ij of T for all i's and j's. For instance, S = T when every corresponding element matches, but S ≠ T* if, say, s_21 ≠ 0 while the corresponding element of T* is zero.

MATRIX OPERATIONS

To add two or more matrices, they must be of the same dimension or order. The resulting matrix is of the same order, with each element equal to the sum of the corresponding elements in each of the matrices added together. For instance,

I = [ 1  0 ]        B + I = [ β_11 + 1     β_12   ]
    [ 0  1 ]                [   β_21     β_22 + 1 ]

Two useful properties of matrix addition, for any matrices S, T, and U of the same order, are as follows:

1. S + T = T + S.
2. (S + T) + U = S + (T + U).

Two matrices can be multiplied only when the number of columns in the first matrix equals the number of rows in the second matrix. If this condition is met, the matrices are said to be conformable with respect to multiplication. If the first matrix has order a × b and the second matrix has order b × c, then the resulting product of these two matrices is a matrix of order a × c. The ij element of the product matrix is derived by multiplying the elements of the ith row of the first matrix by the corresponding elements of the jth column of the second matrix and summing all terms. Consider two matrices S and T of orders a × b and b × c, respectively, and their product U = ST:

S = [ s_11  s_12  ...  s_1b ]     T = [ t_11  t_12  ...  t_1c ]     U = [ u_11  u_12  ...  u_1c ]
    [ s_21  s_22  ...  s_2b ]         [ t_21  t_22  ...  t_2c ]         [ u_21  u_22  ...  u_2c ]
    [  ...   ...        ... ]         [  ...   ...        ... ]         [  ...   ...        ... ]
    [ s_a1  s_a2  ...  s_ab ]         [ t_b1  t_b2  ...  t_bc ]         [ u_a1  u_a2  ...  u_ac ]

To form the matrix product of S times T, labeled U, start with the first row of S. Each element in this row is multiplied by the corresponding element in the first column of T. The sum of these b products equals u_11 of U. In other words,

u_11 = s_11 t_11 + s_12 t_21 + ... + s_1b t_b1

This may be stated more generally as

u_ij = Σ_{k=1}^{b} s_ik t_kj

for each element of U.
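For readers who want to see the element-by-element rule in action, here is a small numerical sketch using NumPy; the matrices are arbitrary and chosen only for illustration.

```python
import numpy as np

# Arbitrary conformable matrices: S is 2 x 3 and T is 3 x 2, so U = ST is 2 x 2.
S = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
T = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])

# Element-by-element rule: u_ij = sum over k of s_ik * t_kj
U = np.zeros((S.shape[0], T.shape[1]))
for i in range(S.shape[0]):
    for j in range(T.shape[1]):
        U[i, j] = sum(S[i, k] * T[k, j] for k in range(S.shape[1]))

print(U)        # same result as the built-in matrix product
print(S @ T)    # NumPy's matrix multiplication
```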

For example, premultiplying the 3 × 1 vector η by the 3 × 3 matrix B yields the 3 × 1 product Bη, whose ith element is the sum of the products of the elements of the ith row of B with the corresponding elements of η. (The term Bη appears in the latent variable model.)

Some properties of matrix multiplication for any matrices S, T, and U that are conformable are as follows:

1. ST ≠ TS (except in special cases).
2. (ST)U = S(TU).
3. S(T + U) = ST + SU.
4. c(S + T) = cS + cT (where c is a scalar).

These properties come into play at many points throughout the book. The order in which matrices are multiplied is important. For this reason premultiplication and postmultiplication by a matrix are distinguished. For instance, if U = ST, we can say that U results from the premultiplication of T by S or the postmultiplication of S by T.

The transpose of a matrix interchanges its rows and columns. The transpose of a matrix is indicated by a prime (') symbol following the matrix. As an example, consider Γ and Γ':

Γ = [ γ_11  γ_12 ]        Γ' = [ γ_11  γ_21  γ_31 ]
    [ γ_21  γ_22 ]             [ γ_12  γ_22  γ_32 ]
    [ γ_31  γ_32 ]

The first row of Γ is the first column of Γ', the second row is the second column, and the third row is the third column. The order of Γ is 3 × 2, whereas the order of Γ' is 2 × 3. The transpose of an a × b order matrix leads to a b × a matrix.

Some useful properties of the transpose operator are listed below:

1. (S')' = S.
2. (S + T)' = S' + T' (where S and T have the same order).
3. (ST)' = T'S' (where the matrices are conformable for multiplication).
4. (STU)' = U'T'S' (where the matrices are conformable for multiplication).

Some additional matrix types and matrix operations are important for square matrices. A square matrix is a matrix that has the same number of rows and columns. An example of a square matrix is a Γ matrix with three rows and three columns; the dimension of this Γ matrix is 3 × 3.

The trace is defined for a square matrix. It is the sum of the elements on the main diagonal. For an n × n matrix S,

tr(S) = Σ_{i=1}^{n} s_ii

Properties of the trace include:

1. tr(S) = tr(S').
2. tr(ST) = tr(TS) (if T and S conform for multiplication).
3. tr(S + T) = tr(S) + tr(T) (if S and T conform for addition).

The trace appears in the fitting functions and indices of goodness of fit for many structural equation techniques.

If all the elements above (or below) the main diagonal of a square matrix are zero, the matrix is triangular. For instance, the B matrix for recursive models (which I discuss in Chapter 4) may be written as a triangular matrix. The B matrix contains the coefficients of the effects of the endogenous latent variables on one another. To illustrate, one such B matrix is

B = [   0     0    0 ]
    [ β_21    0    0 ]
    [ β_31  β_32   0 ]

Note that in this case the main diagonal elements are zero. However, triangular matrices may have nonzero entries in the main diagonal.

A diagonal matrix is a square matrix that has some nonzero elements along the main diagonal and zeros elsewhere. For instance, Θ_δ, the covariance matrix (see Chapter 2) of the errors of measurement for the x variables, commonly is assumed to be diagonal. For δ_1, δ_2, and δ_3, the population covariance matrix Θ_δ might look as follows:

Θ_δ = [ VAR(δ_1)     0         0     ]
      [    0      VAR(δ_2)     0     ]
      [    0         0      VAR(δ_3) ]

The zeros above and below the main diagonal represent the assumption that the errors of measurement for different variables are uncorrelated.

A symmetric matrix is a square matrix that equals its transpose (e.g., S = S'). The typical correlation and covariance matrices are symmetric since the ij element equals the ji element. For instance,

Σ = [ VAR(x_1)       COV(x_1, x_2)  COV(x_1, x_3) ]
    [ COV(x_2, x_1)  VAR(x_2)       COV(x_2, x_3) ]
    [ COV(x_3, x_1)  COV(x_3, x_2)  VAR(x_3)      ]

For all the variables, the covariance of x_i and x_j equals the covariance of x_j and x_i. Sometimes symmetric matrices, such as Σ, are written with blanks above the main diagonal because these terms are redundant.

An identity matrix, I, is a square matrix with ones down the main diagonal and zeros elsewhere. The 3 × 3 identity matrix is

I = [ 1  0  0 ]
    [ 0  1  0 ]
    [ 0  0  1 ]

Properties of the identity matrix, I, include the following:

1. IS = SI = S (for any I and S conformable for multiplication).
2. I = I'.
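The transpose, trace, symmetry, and identity rules above are easy to verify numerically. Here is a brief NumPy sketch with arbitrary matrices chosen only for illustration.

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # symmetric: S equals its transpose
T = np.array([[0.0, 4.0],
              [5.0, 1.0]])
I = np.eye(2)                       # 2 x 2 identity matrix

assert np.allclose(S, S.T)                               # S = S'
assert np.allclose((S @ T).T, T.T @ S.T)                 # (ST)' = T'S'
assert np.allclose(I @ S, S) and np.allclose(S @ I, S)   # IS = SI = S
print(np.trace(S @ T), np.trace(T @ S))                  # tr(ST) = tr(TS)
```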

A vector that consists of all ones is a unit vector:

1' = [ 1  1  1 ]

Unit vector products have some interesting properties. If you premultiply a matrix by a conformable unit vector, the result is a row vector whose elements are the column sums of the matrix. Postmultiplying a matrix by a conforming unit vector leads to a column vector of row sums. Finally, if we both premultiply and postmultiply a matrix by conforming unit vectors, the result is a scalar that equals the sum of all the matrix elements.
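A short numerical sketch of these unit-vector sum rules, using NumPy and arbitrary values:

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
ones_n = np.ones(X.shape[0])        # unit vector of length N = 3
ones_p = np.ones(X.shape[1])        # unit vector of length p = 2

print(ones_n @ X)                   # 1'X: column sums -> [ 9. 12.]
print(X @ ones_p)                   # X1: row sums -> [ 3.  7. 11.]
print(ones_n @ X @ ones_p)          # 1'X1: sum of all elements -> 21.0
```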

Using unit vectors and some of the other matrix properties, we can compute the covariance matrix. Consider X, an N × p matrix of N observations on p variables. The 1 × p row vector of means for X is formed as

(1/N) 1'X

The deviation form of X requires subtracting from X a matrix whose columns consist of N × 1 vectors of the means for the corresponding variables in X. So every element in the first column equals the mean of the first variable in X, every element in the second column equals the mean of the second variable in X, and so on. This matrix of means is

1 (1/N) 1'X

which, when subtracted from X, forms deviation-from-the-mean scores:

X − 1 (1/N) 1'X

If the preceding deviation score matrix is represented by Z, then the p × p unbiased sample covariance matrix estimator S is

S = (1/(N − 1)) Z'Z

Carrying out these calculations for a data matrix X (for instance, one with N = 4 observations on p = 3 variables) yields the unbiased sample covariance matrix

S = [ var(x_1)       cov(x_1, x_2)  cov(x_1, x_3) ]
    [ cov(x_2, x_1)  var(x_2)       cov(x_2, x_3) ]
    [ cov(x_3, x_1)  cov(x_3, x_2)  var(x_3)      ]

S contains the variances of the variables in the main diagonal and the covariances of all pairs of variables in the off-diagonal elements. All covariance matrices are square and symmetric.
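The same computation can be sketched in NumPy. The 4 × 3 data matrix below is made up solely for illustration; it is not the numerical example from the text.

```python
import numpy as np

# Made-up data: N = 4 observations on p = 3 variables.
X = np.array([[3.0, 1.0, 2.0],
              [1.0, 2.0, 0.0],
              [4.0, 0.0, 1.0],
              [2.0, 3.0, 3.0]])
N = X.shape[0]
ones = np.ones((N, 1))

mean_row = (1.0 / N) * ones.T @ X     # (1/N)1'X: 1 x p vector of means
Z = X - ones @ mean_row               # deviation scores: X - 1(1/N)1'X
S = Z.T @ Z / (N - 1)                 # unbiased sample covariance matrix

print(S)
print(np.cov(X, rowvar=False))        # NumPy's estimator agrees (also divides by N - 1)
```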

Suppose that I form a diagonal matrix D from the main diagonal of S:

D = [ var(x_1)     0         0     ]
    [    0      var(x_2)     0     ]
    [    0         0      var(x_3) ]

where var(x_i) represents the sample variance of x_i. The square root of D, represented as D^(1/2), has the standard deviations in its diagonal:

D^(1/2) = [ [var(x_1)]^(1/2)        0                 0          ]
          [        0         [var(x_2)]^(1/2)         0          ]
          [        0                 0         [var(x_3)]^(1/2)  ]

and D^(-1/2) has the inverses of the standard deviations in its main diagonal. If S is postmultiplied by D^(-1/2), the first column is divided by the standard deviation of x_1, the second column by the standard deviation of x_2, and the third column by the standard deviation of x_3; that is, the ij element of SD^(-1/2) is cov(x_i, x_j)/[var(x_j)]^(1/2), with [var(x_i)]^(1/2) on the main diagonal. Premultiplying SD^(-1/2) by D^(-1/2) then divides each row i by the standard deviation of x_i, so the ij element of D^(-1/2)SD^(-1/2) is cov(x_i, x_j)/[var(x_i)var(x_j)]^(1/2). The resulting matrix is the sample correlation matrix, with the off-diagonal elements equal to the correlations of the x_i and x_j variables. These results generalize to a covariance matrix of any dimension: if the covariance matrix S is pre- and postmultiplied by D^(-1/2), where D^(-1/2) is the diagonal matrix with the inverses of the standard deviations of x in its diagonal, then the resulting matrix is the sample correlation matrix.
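A minimal sketch of this rescaling in NumPy, assuming an arbitrary covariance matrix chosen only for illustration:

```python
import numpy as np

S = np.array([[ 4.0, 2.0,  1.0],
              [ 2.0, 9.0,  3.0],
              [ 1.0, 3.0, 16.0]])               # an arbitrary covariance matrix

D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(S)))  # D^(-1/2): reciprocals of the standard deviations
R = D_inv_sqrt @ S @ D_inv_sqrt                   # D^(-1/2) S D^(-1/2)

print(R)    # ones on the main diagonal, correlations off the diagonal
```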

The nonnegative integer power of a square matrix occurs in the decomposition of effects in path analysis. It is defined as the number of times a matrix is multiplied by itself. For instance, S² = SS and S³ = SSS.

For any square matrix S, a scalar quantity exists called the determinant of S, represented as |S| or det S. In the case of a 2 × 2 matrix the determinant is

| s_11  s_12 |
| s_21  s_22 |  =  s_11 s_22 − s_12 s_21

If S is a 3 × 3 matrix, the determinant is

| s_11  s_12  s_13 |
| s_21  s_22  s_23 |  =  s_11 s_22 s_33 − s_12 s_21 s_33 + s_12 s_23 s_31 − s_13 s_22 s_31 + s_13 s_21 s_32 − s_11 s_23 s_32
| s_31  s_32  s_33 |

As the order of S increases, the formula for the determinant becomes more complicated.
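As a quick check of the 2 × 2 and 3 × 3 formulas, the NumPy sketch below compares the term-by-term expansions with a library determinant routine; the numbers are arbitrary.

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, 4.0]])
# 2 x 2 formula: s11*s22 - s12*s21
print(A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0], np.linalg.det(A))

B = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
# 3 x 3 formula written out term by term
det_B = (B[0,0]*B[1,1]*B[2,2] - B[0,1]*B[1,0]*B[2,2] + B[0,1]*B[1,2]*B[2,0]
         - B[0,2]*B[1,1]*B[2,0] + B[0,2]*B[1,0]*B[2,1] - B[0,0]*B[1,2]*B[2,1])
print(det_B, np.linalg.det(B))    # both give 8.0
```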

There is a general rule for calculating determinants for a square matrix of any order. To explain this rule, the concepts of a minor and a cofactor need to be defined. The minor of an element s_ij is the determinant of the submatrix obtained when the ith row and jth column of the matrix are removed. Consider the following matrix S:

S = [ s_11  s_12  s_13 ]
    [ s_21  s_22  s_23 ]
    [ s_31  s_32  s_33 ]

The minor with respect to s_11, represented as |S_11|, is

|S_11| = | s_22  s_23 |
         | s_32  s_33 |

The minor of s_22 is

|S_22| = | s_11  s_13 |
         | s_31  s_33 |

The cofactor of the element s_ij is defined as (−1)^(i+j) times the minor of s_ij. The cofactors of each element of matrix S, placed in the appropriate ijth locations, create a new matrix:

[ +|S_11|  −|S_12|  +|S_13| ]
[ −|S_21|  +|S_22|  −|S_23| ]
[ +|S_31|  −|S_32|  +|S_33| ]

The determinant of a matrix can be found by multiplying each element in any given row (column) of the S matrix by the corresponding cofactor in the preceding matrix. This is then summed over all elements in the row (column). For example, if we do this for the first row of S, we obtain

s_11(s_22 s_33 − s_23 s_32) − s_12(s_21 s_33 − s_23 s_31) + s_13(s_21 s_32 − s_22 s_31)
  = s_11 s_22 s_33 − s_11 s_23 s_32 − s_12 s_21 s_33 + s_12 s_23 s_31 + s_13 s_21 s_32 − s_13 s_22 s_31
  = s_11 s_22 s_33 − s_12 s_21 s_33 + s_12 s_23 s_31 − s_13 s_22 s_31 + s_13 s_21 s_32 − s_11 s_23 s_32

Note that this formula is identical to the earlier formula for the determinant of a 3 × 3 matrix.
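Cofactor expansion along the first row can be written as a short recursive function. The sketch below is for illustration only, with an arbitrary matrix; in practice a library routine such as np.linalg.det is preferable.

```python
import numpy as np

def det_by_cofactors(M):
    """Determinant via cofactor expansion along the first row."""
    n = M.shape[0]
    if n == 1:
        return M[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(M, 0, axis=0), j, axis=1)   # drop row 0 and column j
        total += ((-1) ** j) * M[0, j] * det_by_cofactors(minor)  # (-1)^(1+j+... ) sign pattern
    return total

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(det_by_cofactors(S), np.linalg.det(S))   # both give 8.0
```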

Slightly different arrangements of terms occur depending on the row or column selected for expansion. However, regardless of which formula is chosen, the determinant will be the same.

Useful properties of the determinant for square and conformable S and T and a scalar c include:

1. |S'| = |S|.
2. If S = cT, then |S| = c^q |T| (where q is the order of S).
3. If S has two identical rows (or columns), |S| = 0.
4. |ST| = |S||T|.

The determinant appears in the fitting functions for the estimators of structural equations. It also is useful in finding the rank and inverse of matrices.

The inverse of a square matrix S is that matrix S^(-1) that, when S is pre- or postmultiplied by S^(-1), produces the identity matrix, I:

SS^(-1) = S^(-1)S = I

The inverse of a matrix is calculated from the adjoint and the determinant of the matrix. The adjoint of a matrix is the transpose of the matrix of cofactors defined earlier. Using the 3 × 3 S matrix, the adjoint of S is

adj S = [ +|S_11|  −|S_21|  +|S_31| ]
        [ −|S_12|  +|S_22|  −|S_32| ]
        [ +|S_13|  −|S_23|  +|S_33| ]

The inverse matrix is

S^(-1) = (1/|S|)(adj S)
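For a 2 × 2 matrix the adjoint-over-determinant formula is easy to verify numerically. A brief NumPy sketch with an arbitrary matrix:

```python
import numpy as np

S = np.array([[4.0, 2.0],
              [2.0, 3.0]])

det_S = S[0, 0] * S[1, 1] - S[0, 1] * S[1, 0]     # determinant: 8.0
adj_S = np.array([[ S[1, 1], -S[0, 1]],
                  [-S[1, 0],  S[0, 0]]])          # adjoint: transpose of the cofactor matrix
S_inv = adj_S / det_S                             # S^(-1) = (1/|S|) adj S

print(S @ S_inv)           # the 2 x 2 identity matrix (up to rounding)
print(np.linalg.inv(S))    # agrees with the library inverse
```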

To illustrate the calculation of the inverse, consider the simple case of a two-variable covariance matrix:

S = [ 20  10 ]
    [ 10  20 ]

|S| = (20)(20) − (10)(10) = 400 − 100 = 300

Matrix of cofactors of S = [  20  −10 ]        adj S = [  20  −10 ]
                           [ −10   20 ]                [ −10   20 ]

(Because S is symmetric, its matrix of cofactors is symmetric, so the adjoint equals the matrix of cofactors.) Then

S^(-1) = (1/300) [  20  −10 ]
                 [ −10   20 ]

Multiplying S^(-1) by S yields a 2 × 2 identity matrix. Note that the inverse S^(-1) does not exist if |S| = 0. If a matrix has a zero determinant, it is called a singular matrix.

Two properties of inverses for conformable square matrices S, T, and U are the following:

1. (S')^(-1) = (S^(-1))'.
2. (ST)^(-1) = T^(-1)S^(-1); (STU)^(-1) = U^(-1)T^(-1)S^(-1).

In manipulating the latent variable equations, we sometimes need to take inverses. In addition, the inverse appears in explanations of the fitting functions and in several other topics.

Another important property of a matrix is its rank. The rank of a matrix S is the maximum number of independent columns or rows of S, where S is any a × b order matrix. Another way to define the rank is as the order of the largest square submatrix of S whose determinant is nonzero. The properties of ranks for matrices S and T include the following:

1. rank(S) ≤ min(a, b), where a is the number of rows and b is the number of columns.
2. rank(ST) ≤ min[rank(S), rank(T)].

The rank of matrices appears in the discussion of identification of models in Chapter 4.

Eigenvalues and eigenvectors are important characteristics of square matrices. If a vector u ≠ 0, a scalar e, and an n × n matrix S exist such that

Su = eu

then u is an eigenvector and e is an eigenvalue of S. The eigenvectors and eigenvalues are sometimes referred to as latent vectors and latent values, or characteristic vectors and characteristic roots. (Often such an equation is represented as Ax = λx. I depart from this practice so that λ is not confused with the factor loadings, which use the same symbol.)

The preceding equation may be rewritten as

Su − eu = 0
(S − eI)u = 0

Only if (S − eI) is singular does a nontrivial solution¹ for this equation exist. If (S − eI) is singular, then

|S − eI| = 0

Solving this equation for e provides the eigenvalues. To illustrate, suppose that S is a 2 × 2 correlation matrix:

S = [ 1.00  0.50 ]
    [ 0.50  1.00 ]

The (S − eI) matrix is

S − eI = [ 1.00 − e    0.50    ]
         [   0.50    1.00 − e  ]

The determinant is

|S − eI| = (1.00 − e)² − 0.25 = e² − 2e + 0.75

The two solutions for e, 1.5 and 0.5, are the eigenvalues for this 2 × 2 correlation matrix. Each eigenvalue e has a set of eigenvectors u associated with it. For example, the e of 1.5 leads to the following:

(S − eI)u = 0

[ 1.00 − e    0.50   ] [ u_1 ]   [ 0 ]
[   0.50    1.00 − e ] [ u_2 ] = [ 0 ]

[ −0.50   0.50 ] [ u_1 ]   [ 0 ]
[  0.50  −0.50 ] [ u_2 ] = [ 0 ]

−0.50 u_1 + 0.50 u_2 = 0
 0.50 u_1 − 0.50 u_2 = 0

From this you can see that u_1 = u_2 and that an infinite set of values would work as the eigenvector for the eigenvalue of 1.5.

¹ A trivial solution would exist if u = 0. As specified here, I assume that u is a nonzero vector.
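A quick numerical check of this example with NumPy (note that the library does not return the eigenvalues in any guaranteed order):

```python
import numpy as np

S = np.array([[1.00, 0.50],
              [0.50, 1.00]])

values, vectors = np.linalg.eig(S)
print(values)          # 1.5 and 0.5, matching the hand calculation
print(vectors)         # eigenvectors in the columns, paired with the eigenvalues above;
                       # the one for 1.5 is proportional to [1, 1], i.e., u_1 = u_2
```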

Though the eigenvalues for the preceding example and for all real symmetric matrices are real numbers, this need not be true for nonsymmetric matrices. When the eigenvalue is a complex number, say z = a + ib, where a and b are real constants and i = √(−1), we commonly refer to the modulus or norm of z, which is (a² + b²)^(1/2).

Some useful properties of the eigenvalues for a symmetric or nonsymmetric square matrix S are:

1. A b × b matrix S has b eigenvalues (some may take the same value).
2. The product of all eigenvalues for S equals |S|.
3. The number of nonzero eigenvalues of S equals the rank of S.
4. The sum of the eigenvalues of S equals tr(S).

Eigenvalues and eigenvectors play a large role in traditional factor analyses. In this book they are useful in the decomposition of effects in path analysis discussed in Chapter 8.

Quadratic forms are represented by

x'Sx
(1 × b)(b × b)(b × 1)

which equals

Σ_i x_i² s_ii + Σ_i Σ_{j>i} x_i x_j (s_ij + s_ji)

Usually S for a quadratic form is a symmetric matrix, so that

x'Sx = Σ_i x_i² s_ii + 2 Σ_i Σ_{j>i} x_i x_j s_ij

Quadratic forms result in a scalar. S is positive-definite if x'Sx is positive for all nonzero x vectors. If this quadratic form is nonnegative for all nonzero x, then S is positive-semidefinite. The eigenvalues of a positive definite matrix are all positive. If S is positive-definite, then S is nonsingular. The eigenvalues of a positive semidefinite matrix are positive or zero. Negative-definite and negative-semidefinite matrices have analogous definitions and properties.
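A brief sketch of a quadratic form and an eigenvalue-based definiteness check, with an arbitrary symmetric matrix chosen only for illustration:

```python
import numpy as np

S = np.array([[2.0, 0.5],
              [0.5, 1.0]])
x = np.array([1.0, -2.0])

print(x @ S @ x)                      # the quadratic form x'Sx, a scalar (here 4.0)

eigenvalues = np.linalg.eigvalsh(S)   # eigvalsh: eigenvalues of a symmetric matrix
print(eigenvalues)
print(np.all(eigenvalues > 0))        # all positive, so S is positive-definite
```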

Occasionally, structural equation models analyzed with the LISREL program (Jöreskog and Sörbom 1984) may report that a matrix is not positive-definite. For instance, suppose that we analyze a sample covariance matrix S for which x'Sx is zero for some x ≠ 0 (e.g., x' = [1 −1 −1]). S is not positive-definite. Indeed, such an S is singular (|S| = 0), and singular matrices are not positive-definite.

Consider three 2 × 2 matrices, each an estimate of the covariance matrix of the disturbances from two equations: the first has variances 2 and 1 with covariance 3; the second has a first diagonal element (a variance) of −2; and the third has a first diagonal element of 0. None is positive-definite. The failure of the first two to be positive-definite indicates a problem. In the first case the covariance (= 3) and the variances (2 and 1) imply an impossible correlation value (= 3/√2). The second matrix has an impossible negative disturbance variance (= −2). Whether the nonpositive definite nature of the last matrix is troublesome depends on whether the variance of the first disturbance should be zero. Identity relations (e.g., η_1 = η_2 + η_3) or the absence of measurement error (e.g., x_1 = ξ_1) are two situations where zero disturbance variances make sense. However, when zero is not a plausible value, the analyst must determine the source of this improbable value.

The vec operator is the operation of forming a vector from a matrix by stacking each column of the matrix one under the other. For instance,

vec [  1    β_12 ]   =   [  1   ]
    [ β_21    1  ]       [ β_21 ]
                         [ β_12 ]
                         [  1   ]

The vec operator appears in Chapter 8.

A Kronecker product (or direct product) of two matrices, S (p × q) and T (m × n), is defined as

S ⊗ T = [ s_11 T  s_12 T  ...  s_1q T ]
        [ s_21 T  s_22 T  ...  s_2q T ]
        [  ...     ...          ...   ]
        [ s_p1 T  s_p2 T  ...  s_pq T ]

Each element of the left matrix, S, is multiplied by T to form a submatrix s_ij T. All of these submatrices combined result in a pm × qn matrix. An example is

[  1    β_12 ]  ⊗  [ γ_11  γ_12 ]  =  [   γ_11        γ_12      β_12 γ_11   β_12 γ_12 ]
[ β_21    1  ]                        [ β_21 γ_11   β_21 γ_12     γ_11        γ_12    ]

Kronecker products appear in the formulas for the asymptotic standard errors of indirect effects in Chapter 8.
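A short sketch of the vec operator and a Kronecker product in NumPy, with the symbolic coefficients replaced by arbitrary numbers for illustration:

```python
import numpy as np

B = np.array([[1.0, 0.3],
              [0.7, 1.0]])

# vec: stack the columns one under the other
vec_B = B.reshape(-1, order='F')     # column-major ('F') order stacks the columns
print(vec_B)                         # [1.0, 0.7, 0.3, 1.0]

G = np.array([[0.5, 2.0]])           # a 1 x 2 matrix
print(np.kron(B, G))                 # Kronecker product: a (2*1) x (2*2) matrix
```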

MATRIX OPERATIONS 465 A Kronecker product (or a direct product) of two matrices, S( p X q) and T( m X n), is defined as Each element of the left matrix, S, is multiplied by T to form a submatrix Sij T. All of these submatrices combined result in a pm X qn matrix. An example is: ] [ 1 f3 12 Y12 ] [Yll = f321 1 Yu f3 21 Y12f3 12 ] Y12 Kronecker's products appear in the formulas for the asymptotic standard errors of indirect effects in Chapter 8.