Introduction to Matrices


POLS 704

Introduction to Matrices 1

1. The Cast of Characters

A matrix is a rectangular array (i.e., a table) of numbers. For example,

          [ 1  2  3 ]
  X     = [ 4  5  6 ]
(4 × 3)   [ 7  8  9 ]
          [ 0  0  0 ]

This matrix, with 4 rows and 3 columns, is of order 4 by 3. For clarity, I will often show the order in parentheses below a matrix. Note that a matrix is represented by a bold-face upper-case letter.

Introduction to Matrices 2

A more general example:

          [ a_11  a_12  ...  a_1n ]
  A     = [ a_21  a_22  ...  a_2n ]
(m × n)   [  ...   ...  ...   ... ]
          [ a_m1  a_m2  ...  a_mn ]

This matrix is of order m by n; a_ij is the entry or element in the ith row and jth column of the matrix. An individual number (e.g., 2 or a_12), such as a matrix element, is called a scalar.

Two matrices are equal if they are of the same order and all corresponding entries are equal.

Introduction to Matrices 3

A column vector is a one-column matrix:

          [ 1 ]
  y     = [ 5 ]          a     = [ a_1 ]
(4 × 1)   [ 3 ]        (2 × 1)   [ a_2 ]
          [ 4 ]

Likewise, a row vector is a one-row matrix:

  b'    = [ 1  2 ]
(1 × 2)

I adopt the convention that row vectors are always written with a prime (i.e., b'). Note that bold-face lower-case letters are used to represent vectors.
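As a concrete aside (a Python sketch added for illustration; the helper name `order` is ad hoc, not standard notation), a matrix can be represented as a list of rows:

```python
# The 4-by-3 example matrix X, stored as a list of rows.
X = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9],
     [0, 0, 0]]

def order(M):
    """Return the order of a matrix as (number of rows, number of columns)."""
    return (len(M), len(M[0]))

print(order(X))   # (4, 3)
```

Note that Python indexes from 0, so the entry x_12 = 2 is `X[0][1]`.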

Introduction to Matrices 4

More generally, the transpose of a matrix X, written as X', interchanges rows and columns; for example,

          [ 1  2  3 ]
  X     = [ 4  5  6 ]          X'    = [ 1  4  7  0 ]
(4 × 3)   [ 7  8  9 ]        (3 × 4)   [ 2  5  8  0 ]
          [ 0  0  0 ]                  [ 3  6  9  0 ]

A square matrix of order n has n rows and n columns; for example,

          [ 5  1  3 ]
  B     = [ 2  2  6 ]
(3 × 3)   [ 7  3  4 ]

The main diagonal of a square matrix B (n × n) consists of the entries b_11, b_22, ..., b_nn. In the example, the main diagonal consists of the entries 5, 2, and 4.

Introduction to Matrices 5

The trace of a square matrix is the sum of its diagonal elements:

  trace(B) = b_11 + b_22 + ... + b_nn = 5 + 2 + 4 = 11

A square matrix is symmetric if it is equal to its transpose; that is, A is symmetric if a_ij = a_ji for all i and j. Thus

      [ 5  1  3 ]                           [ 5  1  3 ]
  B = [ 2  2  6 ]  is not symmetric, while  C = [ 1  2  6 ]  is symmetric.
      [ 7  3  4 ]                           [ 3  6  4 ]

Introduction to Matrices 6

A square matrix is lower-triangular if all entries above its main diagonal are 0; for example,

      [ 1  0  0 ]
  L = [ 5  2  0 ]
      [ 2  0  3 ]

Similarly, an upper-triangular matrix has zeroes below the main diagonal; for example,

      [ 5  5  2 ]
  U = [ 0  2  4 ]
      [ 0  0  3 ]

Introduction to Matrices 7

A diagonal matrix is a square matrix with all off-diagonal entries equal to 0; for example,

      [ 6  0  0  0 ]
  D = [ 0  2  0  0 ]  = diag(6, 2, 0, 7)
      [ 0  0  0  0 ]
      [ 0  0  0  7 ]

A scalar matrix is a diagonal matrix with equal diagonal entries; for example,

      [ 3  0  0 ]
  S = [ 0  3  0 ]
      [ 0  0  3 ]
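The transpose, trace, and symmetry test can be sketched in a few lines of Python (an illustration added in passing; the function names are my own):

```python
def transpose(M):
    """Interchange the rows and columns of a matrix."""
    return [list(col) for col in zip(*M)]

def trace(M):
    """Sum of the main-diagonal entries of a square matrix."""
    return sum(M[i][i] for i in range(len(M)))

def is_symmetric(M):
    """A square matrix is symmetric if it equals its transpose."""
    return M == transpose(M)

B = [[5, 1, 3], [2, 2, 6], [7, 3, 4]]
C = [[5, 1, 3], [1, 2, 6], [3, 6, 4]]
print(trace(B))          # 11
print(is_symmetric(B))   # False
print(is_symmetric(C))   # True
```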

Introduction to Matrices 8

An identity matrix is a scalar matrix with ones on the diagonal; for example,

          [ 1  0  0 ]
  I_3   = [ 0  1  0 ]
(3 × 3)   [ 0  0  1 ]

Introduction to Matrices 9

A unit vector has all of its entries equal to 1; for example,

          [ 1 ]
  1     = [ 1 ]
(4 × 1)   [ 1 ]
          [ 1 ]

A zero matrix has all of its elements equal to 0; for example,

          [ 0  0  0 ]
  0     = [ 0  0  0 ]
(4 × 3)   [ 0  0  0 ]
          [ 0  0  0 ]

Introduction to Matrices 10

2. Basic Matrix Arithmetic

2.1 Addition, Subtraction, Negation, and the Product of a Matrix and a Scalar

Addition and subtraction are defined for matrices of the same order and are elementwise operations; for example, for

          [ 1  2  3 ]            [ -5  1  2 ]
  A     = [ 4  5  6 ]    B     = [  3  0  4 ]
(2 × 3)                (2 × 3)

  C = A + B = [ -4  3   5 ]      D = A - B = [ 6  1  1 ]
    (2 × 3)   [  7  5  10 ]        (2 × 3)   [ 1  5  2 ]

Introduction to Matrices 11

Matrix negation and the product of a matrix and a scalar are also elementwise operations; for example,

  E = -A = [ -1  -2  -3 ]        F = 3B = [ -15  3   6 ]
  (2 × 3)  [ -4  -5  -6 ]        (2 × 3)  [   9  0  12 ]

Consequently, matrix addition, subtraction, and negation, and the product of a matrix by a scalar, obey familiar rules of scalar arithmetic:
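These elementwise operations can be sketched directly (a Python illustration, not part of the original notes; helper names are ad hoc):

```python
def add(A, B):
    """Elementwise sum of two matrices of the same order."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def subtract(A, B):
    """Elementwise difference of two matrices of the same order."""
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_product(c, A):
    """Multiply every entry of A by the scalar c."""
    return [[c * x for x in row] for row in A]

A = [[1, 2, 3], [4, 5, 6]]
B = [[-5, 1, 2], [3, 0, 4]]
print(add(A, B))              # [[-4, 3, 5], [7, 5, 10]]
print(subtract(A, B))         # [[6, 1, 1], [1, 5, 2]]
print(scalar_product(3, B))   # [[-15, 3, 6], [9, 0, 12]]
```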

Introduction to Matrices 12

  A + B = B + A                  (commutativity)
  A + (B + C) = (A + B) + C      (associativity)
  A - B = A + (-B) = -(B - A)
  A - A = 0                      (-A is the additive inverse of A)
  A + 0 = A                      (0 is the additive identity)
  -(-A) = A
  cA = Ac                        (commutativity)
  c(A + B) = cA + cB             (distributive laws)
  (c + d)A = cA + dA
  0A = 0                         (zero)
  1A = A                         (unit)
  (-1)A = -A                     (negation)

where A, B, C, and 0 are matrices of the same order, and c and d are scalars.

Introduction to Matrices 13

Another useful rule is that matrix transposition distributes over addition:

  (A + B)' = A' + B'

Introduction to Matrices 14

2.2 Inner Product and Matrix Product

The inner product (or dot product) of two vectors with equal numbers of elements is the sum of the products of corresponding elements; that is,

    a'      b    = Σ (i = 1 to n) a_i b_i
 (1 × n)(n × 1)

Note: the inner product is defined similarly between two row vectors or two column vectors. For example,

                 [ -1 ]
  [ 2  0  1  3 ] [  6 ]  = 2(-1) + 0(6) + 1(0) + 3(9) = 25
                 [  0 ]
                 [  9 ]

Introduction to Matrices 15

The matrix product AB is defined if the number of columns of A equals the number of rows of B. In this case, A and B are said to be conformable for multiplication. Some examples:

  [ 1  2  3 ] [ 1  0  0 ]
  [ 4  5  6 ] [ 0  1  0 ]          conformable
    (2 × 3)   [ 0  0  1 ]
                (3 × 3)

  [ 1  0  0 ]
  [ 0  1  0 ] [ 1  2  3 ]          not conformable
  [ 0  0  1 ] [ 4  5  6 ]
    (3 × 3)     (2 × 3)

                 [ x_1 ]
  [ 1  0  2  3 ] [ x_2 ]           conformable
     (1 × 4)     [ x_3 ]
                 [ x_4 ]
                 (4 × 1)
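The inner product is a one-liner in Python (an added sketch; the name `inner` is my own):

```python
def inner(a, b):
    """Inner (dot) product of two vectors with equal numbers of elements."""
    if len(a) != len(b):
        raise ValueError("vectors must have equal numbers of elements")
    return sum(x * y for x, y in zip(a, b))

print(inner([2, 0, 1, 3], [-1, 6, 0, 9]))   # 2(-1) + 0(6) + 1(0) + 3(9) = 25
```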

Introduction to Matrices 16

  [ 2  0 ] [ 1  2 ]            [ 1  2 ] [ 2  0 ]
  [ 0  3 ] [ 0  1 ]            [ 0  1 ] [ 0  3 ]
  (2 × 2)  (2 × 2)             (2 × 2)  (2 × 2)
    conformable                  conformable

In general, if A is (m × n), then, for the product AB to be defined, B must be (n × p), where m and p are unconstrained. Note that BA may not be defined even if AB is (i.e., unless p = m).

Introduction to Matrices 17

The matrix product is defined in terms of the inner product. Let a_i' represent the ith row of the (m × n) matrix A, and b_j represent the jth column of the (n × p) matrix B. Then C = AB is an (m × p) matrix with

  c_ij = a_i'b_j = Σ (k = 1 to n) a_ik b_kj

That is, the matrix product is formed by successively multiplying each row of the left-hand factor into each column of the right-hand factor.

Introduction to Matrices 18

Some examples:

  [ 1  2  3 ] [ 1  0  0 ]   [ 1(1)+2(0)+3(0)  1(0)+2(1)+3(0)  1(0)+2(0)+3(1) ]   [ 1  2  3 ]
  [ 4  5  6 ] [ 0  1  0 ] = [ 4(1)+5(0)+6(0)  4(0)+5(1)+6(0)  4(0)+5(0)+6(1) ] = [ 4  5  6 ]
    (2 × 3)   [ 0  0  1 ]                                                          (2 × 3)
                (3 × 3)

Introduction to Matrices 19

                 [ x_1 ]
  [ 1  0  2  3 ] [ x_2 ]  =  [ x_1 + 2x_3 + 3x_4 ]
     (1 × 4)     [ x_3 ]           (1 × 1)
                 [ x_4 ]

  [ 1  2 ] [ 4  5 ]   [ 1(4)+2(3)  1(5)+2(4) ]   [ 10  13 ]
  [ 0  3 ] [ 3  4 ] = [ 0(4)+3(3)  0(5)+3(4) ] = [  9  12 ]

  [ 2  0 ] [ 1  2 ]   [ 2  4 ]      [ 1  2 ] [ 2  0 ]   [ 2  6 ]
  [ 0  3 ] [ 0  1 ] = [ 0  3 ]      [ 0  1 ] [ 0  3 ] = [ 0  3 ]
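The row-into-column definition translates directly into code (a minimal Python sketch, not part of the original notes):

```python
def matmul(A, B):
    """Matrix product: c_ij is the inner product of row i of A and column j of B."""
    m, n = len(A), len(A[0])
    if n != len(B):
        raise ValueError("not conformable for multiplication")
    p = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

print(matmul([[1, 2], [0, 3]], [[4, 5], [3, 4]]))   # [[10, 13], [9, 12]]
```

Multiplying by an identity matrix of the appropriate order leaves a matrix unchanged, as in the first example above.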

Introduction to Matrices 20

In summary:

  A                       B                       AB
  matrix (m × n)          matrix (n × p)          matrix (m × p)
  square matrix (n × n)   square matrix (n × n)   square matrix (n × n)
  row vector (1 × n)      matrix (n × p)          row vector (1 × p)
  matrix (m × n)          column vector (n × 1)   column vector (m × 1)
  row vector (1 × n)      column vector (n × 1)   "scalar" (1 × 1)
  column vector (n × 1)   row vector (1 × n)      matrix (n × n)

Introduction to Matrices 21

Some properties of matrix multiplication:

  A(BC) = (AB)C              (associativity)
  (A + B)C = AC + BC         (distributive laws)
  A(B + C) = AB + AC
  AI = IA = A                (unit; each identity matrix of the appropriate order)
  A0 = 0, 0A = 0             (zero)
  (AB)' = B'A'               (transpose of a product)
  (AB⋯F)' = F'⋯B'A'

Introduction to Matrices 22

But matrix multiplication is not in general commutative:

- For A (m × n) and B (n × p), the product BA is not defined unless p = m.
- For A (m × n) and B (n × m), the product AB is (m × m) and BA is (n × n), so they cannot be equal unless m = n.
- Even when A and B are both (n × n), the products AB and BA need not be equal.

When AB and BA are equal, the matrices A and B are said to commute.

Introduction to Matrices 23

2.3 The Sense Behind Matrix Multiplication

The definition of matrix multiplication makes it simple to formulate systems of scalar equations as a single matrix equation, often providing a useful level of abstraction. For example, consider the following system of two linear equations in two unknowns, x_1 and x_2:

  2x_1 + 5x_2 = 4
   x_1 + 3x_2 = 5

Writing these equations as a matrix equation,

  [ 2  5 ] [ x_1 ]   [ 4 ]
  [ 1  3 ] [ x_2 ] = [ 5 ]

     A        x    =   b
  (2 × 2)  (2 × 1)  (2 × 1)
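Non-commutativity is easy to see computationally (an added Python sketch, using the pair of 2 × 2 matrices from the examples above):

```python
def matmul(A, B):
    """Row-into-column matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 0], [0, 3]]
B = [[1, 2], [0, 1]]
print(matmul(A, B))   # [[2, 4], [0, 3]]
print(matmul(B, A))   # [[2, 6], [0, 3]]  (AB and BA differ, so A and B do not commute)
```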

Introduction to Matrices 24

More generally, for n linear equations in n unknowns,

     A      x    =    b
  (n × n)(n × 1)   (n × 1)

Introduction to Matrices 25

3. Matrix Inversion

In scalar algebra, division is essential to the solution of simple equations. For example,

  6x = 12
  x = 12/6 = 2

or, equivalently,

  (1/6) × 6x = (1/6) × 12
  x = 2

where 1/6 = 6^-1 is the scalar inverse of 6.

In matrix algebra, there is no direct analog of division, but most square matrices have a matrix inverse. The matrix inverse of the (n × n) matrix A is an (n × n) matrix A^-1 such that

  A A^-1 = A^-1 A = I_n

Introduction to Matrices 26

Square matrices that have an inverse are termed nonsingular. Square matrices with no inverse are singular (i.e., strange, unusual).

If an inverse of a square matrix exists, it is unique. Moreover, if, for square matrices A and B, AB = I, then necessarily BA = I, and B = A^-1.

In scalar algebra, only the number 0 has no inverse (i.e., 1/0 is undefined). In matrix algebra, 0 is singular, but there are also nonzero singular (n × n) matrices.

Introduction to Matrices 27

For example, take the matrix

  A = [ 1  0 ]
      [ 0  0 ]

and suppose for the sake of argument that

  B = [ b_11  b_12 ]
      [ b_21  b_22 ]

is the inverse of A. But

  AB = [ 1  0 ] [ b_11  b_12 ]   [ b_11  b_12 ]     [ 1  0 ]
       [ 0  0 ] [ b_21  b_22 ] = [   0     0  ]  ≠  [ 0  1 ]

contradicting the claim that B is the inverse of A, and so A has no inverse.

Introduction to Matrices 28

A useful fact to remember: if the matrix A is singular, then at least one column of A can be written as a multiple of another column or as a weighted sum (linear combination) of the other columns; equivalently, at least one row can be written as a multiple of another row or as a weighted sum of the other rows. In the example above, the second row is 0 times the first row, and the second column is 0 times the first column. Indeed, any square matrix with a zero row or column is therefore singular. We'll explore this idea in greater depth presently.

Introduction to Matrices 29

A convenient property of matrix inverses: If A and B are nonsingular matrices of the same order, then their product AB is also nonsingular, with inverse (AB)^-1 = B^-1 A^-1. The proof of this property is simple:

  (AB)(B^-1 A^-1) = A(B B^-1)A^-1 = A I A^-1 = A A^-1 = I

This result extends directly to the product of any number of nonsingular matrices of the same order:

  (AB⋯F)^-1 = F^-1 ⋯ B^-1 A^-1

Introduction to Matrices 30

Example: The inverse of the nonsingular matrix

  [ 2  5 ]
  [ 1  3 ]

is the matrix

  [  3  -5 ]
  [ -1   2 ]

Check:

  [ 2  5 ] [  3  -5 ]   [ 1  0 ]
  [ 1  3 ] [ -1   2 ] = [ 0  1 ]  ✓

  [  3  -5 ] [ 2  5 ]   [ 1  0 ]
  [ -1   2 ] [ 1  3 ] = [ 0  1 ]  ✓

Introduction to Matrices 31

Matrix inverses are useful for solving systems of linear simultaneous equations where there is an equal number of equations and unknowns. The general form of such a problem, with n equations and n unknowns, is

     A      x    =    b
  (n × n)(n × 1)   (n × 1)

Here, as explained previously, A is a matrix of known coefficients; x is the vector of unknowns; and b is a vector containing the (known) right-hand sides of the equations. The solution is obtained by multiplying both sides of the equation on the left by the inverse of the coefficient matrix:

  A^-1 A x = A^-1 b
  I x = A^-1 b
  x = A^-1 b
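For the 2 × 2 case, the inverse can be written down directly from the well-known adjugate formula (a Python sketch added here for illustration; the formula itself is not derived in these notes):

```python
from fractions import Fraction

def inverse_2x2(A):
    """Inverse of a 2-by-2 matrix: swap the diagonal, negate the
    off-diagonal, and divide every entry by the determinant ad - bc."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[2, 5], [1, 3]]
Ainv = inverse_2x2(A)                       # [[3, -5], [-1, 2]]
x = [int(v) for v in matvec(Ainv, [4, 5])]  # x = A^-1 b
print(x)                                    # [-13, 6]
```

The last line solves the system 2x_1 + 5x_2 = 4, x_1 + 3x_2 = 5 that runs through these notes.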

Introduction to Matrices 32

For example, consider the following system of 2 equations in 2 unknowns:

  2x_1 + 5x_2 = 4
   x_1 + 3x_2 = 5

In matrix form,

  [ 2  5 ] [ x_1 ]   [ 4 ]
  [ 1  3 ] [ x_2 ] = [ 5 ]

Introduction to Matrices 33

and so

  [ x_1 ]   [ 2  5 ]^-1 [ 4 ]   [  3  -5 ] [ 4 ]   [ -13 ]
  [ x_2 ] = [ 1  3 ]    [ 5 ] = [ -1   2 ] [ 5 ] = [   6 ]

That is, the solution is x_1 = -13, x_2 = 6. Check:

  2(-13) + 5(6) = 4  ✓
  -13 + 3(6) = 5  ✓

Introduction to Matrices 34

3.1 Matrix Inversion by Gaussian Elimination

There are many methods for finding inverses of nonsingular matrices. Gaussian elimination (after Carl Friedrich Gauss, the great German mathematician) is one such method, and has other uses as well. In practice, finding matrix inverses is tedious, and is a job best left to a computer.

Gaussian elimination proceeds as follows: Suppose that we want to invert the following matrix (or to determine that it is singular):

  A = [ 2  -2   0 ]
      [ 1  -1   1 ]
      [ 4   4  -4 ]

Introduction to Matrices 35

Adjoin to this matrix an order-3 identity matrix:

  [ 2  -2   0 | 1  0  0 ]
  [ 1  -1   1 | 0  1  0 ]
  [ 4   4  -4 | 0  0  1 ]

Attempt to reduce the original matrix to the identity matrix by a series of elementary row operations (EROs) of three types, bringing the adjoined identity matrix along for the ride:

  1. Multiply each entry in a row of the matrix by a nonzero scalar constant.
  2. Add a scalar multiple of one row to another, replacing the other row.
  3. Interchange two rows of the matrix.

Introduction to Matrices 36

Starting with the first row, and then proceeding to each row in turn:

(i) Ensure that there is a nonzero entry in the diagonal position, called the pivot (e.g., the 1,1 position when we are working on the first row), exchanging the current row for a lower row, if necessary. If there is a 0 in the pivot position and there is no lower row with a nonzero entry in the current column, then the original matrix is singular. Numerical accuracy can be increased by always exchanging for a lower row if by doing so we can increase the magnitude of the pivot.

(ii) Divide the current row by the pivot to produce 1 in the pivot position.

(iii) Add multiples of the current row to the other rows to produce 0s in the current column.

Introduction to Matrices 37

Applying this algorithm to the example:

Divide row 1 by 2:

  [ 1  -1   0 | 1/2  0  0 ]
  [ 1  -1   1 |  0   1  0 ]
  [ 4   4  -4 |  0   0  1 ]

Subtract row 1 from row 2:

  [ 1  -1   0 |  1/2  0  0 ]
  [ 0   0   1 | -1/2  1  0 ]
  [ 4   4  -4 |   0   0  1 ]

Subtract 4 × row 1 from row 3:

  [ 1  -1   0 |  1/2  0  0 ]
  [ 0   0   1 | -1/2  1  0 ]
  [ 0   8  -4 |  -2   0  1 ]

Interchange rows 2 and 3:

  [ 1  -1   0 |  1/2  0  0 ]
  [ 0   8  -4 |  -2   0  1 ]
  [ 0   0   1 | -1/2  1  0 ]

Introduction to Matrices 38

Divide row 2 by 8:

  [ 1  -1    0  |  1/2  0   0  ]
  [ 0   1  -1/2 | -1/4  0  1/8 ]
  [ 0   0    1  | -1/2  1   0  ]

Add row 2 to row 1:

  [ 1   0  -1/2 |  1/4  0  1/8 ]
  [ 0   1  -1/2 | -1/4  0  1/8 ]
  [ 0   0    1  | -1/2  1   0  ]

Add 1/2 × row 3 to each of rows 1 and 2:

  [ 1  0  0 |   0   1/2  1/8 ]
  [ 0  1  0 | -1/2  1/2  1/8 ]
  [ 0  0  1 | -1/2   1    0  ]

Introduction to Matrices 39

Thus, the inverse of A is

  A^-1 = [   0   1/2  1/8 ]
         [ -1/2  1/2  1/8 ]
         [ -1/2   1    0  ]

which can be verified by multiplication.

Sociology 76
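Steps (i) through (iii) mechanize directly. A Python sketch of Gauss-Jordan inversion in exact arithmetic (added for illustration; it uses magnitude-based pivot exchange, so its intermediate steps can differ from the hand-worked sequence above while reaching the same inverse):

```python
from fractions import Fraction

def invert(A):
    """Invert a square matrix by Gaussian elimination on [A | I]."""
    n = len(A)
    # Adjoin an order-n identity matrix: M = [A | I].
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        # (i) choose the largest-magnitude pivot available in this column
        p_row = max(range(c, n), key=lambda r: abs(M[r][c]))
        if M[p_row][c] == 0:
            raise ValueError("matrix is singular")
        M[c], M[p_row] = M[p_row], M[c]
        # (ii) divide the current row by the pivot
        M[c] = [x / M[c][c] for x in M[c]]
        # (iii) sweep out the other entries in the column
        for r in range(n):
            if r != c:
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [row[n:] for row in M]

A = [[2, -2, 0],
     [1, -1, 1],
     [4, 4, -4]]
for row in invert(A):
    print([str(x) for x in row])
# ['0', '1/2', '1/8']
# ['-1/2', '1/2', '1/8']
# ['-1/2', '1', '0']
```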

Introduction to Matrices 40

Why Gaussian elimination works: Each elementary row operation can be represented as multiplication on the left by an ERO matrix. For example, to interchange rows 2 and 3 in the preceding example, we multiply by

  E = [ 1  0  0 ]
      [ 0  0  1 ]
      [ 0  1  0 ]

(Satisfy yourself that this works as advertised!)

The entire sequence of EROs can be represented as

  E_k ⋯ E_2 E_1 [A  I] = E [A  I] = [I  B]

where E = E_k ⋯ E_2 E_1. Therefore, EA = I, implying that E is A^-1; and EI = B, implying that B = E = A^-1.

Introduction to Matrices 41

4. Determinants

Each square matrix A is associated with a scalar called its determinant, written |A| or det A. The determinant is uniquely defined by the following axioms (rules):

(a) Multiplying a row of A by a scalar constant multiplies the determinant by the same constant.
(b) Adding a multiple of one row of A to another does not change the determinant.
(c) Interchanging two rows of A changes the sign of the determinant.
(d) det I = 1.

These rules suggest that Gaussian elimination can be used to find the determinant of a square matrix: The determinant is the product of the pivots, reversing the sign of this product if an odd number of row interchanges was performed.

Introduction to Matrices 42

For the example, the pivots were 2, 8, and 1, and one row interchange was performed, so

  det A = -(2)(8)(1) = -16

As well, since we encounter a 0 pivot when we try to invert a singular matrix, the determinant of a singular matrix is 0 (and a nonsingular matrix has a nonzero determinant).

Introduction to Matrices 43

5. Matrix Rank and the Solution of Linear Simultaneous Equations

As explained, the matrix inverse suffices for the solution of linear simultaneous equations when there are equal numbers of equations and unknowns, and when the coefficient matrix is nonsingular. This case covers most, but not all, statistical applications.

In the more general case, we have m linear equations in n unknowns:

     A      x    =    b
  (m × n)(n × 1)   (m × 1)

We can solve such a system of equations by Gaussian elimination, placing the coefficient matrix A in reduced row-echelon form (RREF) by a sequence of EROs, and bringing the right-hand-side vector b along for the ride. Once the coefficient matrix is in RREF, the solution(s) of the equation system will be apparent.
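The pivots-with-sign-tracking recipe for the determinant can be sketched in Python (an added illustration, again in exact arithmetic):

```python
from fractions import Fraction

def det(A):
    """Determinant as the signed product of the pivots of Gaussian elimination."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    d = Fraction(1)
    for c in range(n):
        p_row = next((r for r in range(c, n) if M[r][c] != 0), None)
        if p_row is None:
            return Fraction(0)   # a zero pivot: the matrix is singular
        if p_row != c:
            M[c], M[p_row] = M[p_row], M[c]
            d = -d               # each row interchange reverses the sign
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return d

print(det([[2, -2, 0], [1, -1, 1], [4, 4, -4]]))   # -16
print(det([[1, 0], [0, 0]]))                       # 0 (the singular example)
```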

Introduction to Matrices 44

A matrix is in RREF when:

(i) All of its zero rows (if any) follow its nonzero rows (if any).
(ii) The leading entry in each nonzero row (i.e., the first nonzero entry, proceeding from left to right) is 1.
(iii) The leading entry in each nonzero row after the first is to the right of the leading entry in the previous row.
(iv) All other entries in a column containing a leading entry are 0.

The proof that this procedure works proceeds from the observation that none of the elementary row operations changes the solutions of the set of equations, and is similar to the proof that Gaussian elimination suffices to find the inverse of a nonsingular square matrix.

Introduction to Matrices 45

Consider the following system of 3 equations in 4 unknowns:

  -2x_1 + 0x_2 - x_3 + 2x_4 = 1
   4x_1 + 0x_2 + x_3 + 0x_4 = 2
   6x_1 + 0x_2 + x_3 + 2x_4 = 5

Adjoin the RHS vector to the coefficient matrix:

  [ -2  0  -1  2 | 1 ]
  [  4  0   1  0 | 2 ]
  [  6  0   1  2 | 5 ]

Reduce the coefficient matrix to RREF:

Divide row 1 by -2:

  [ 1  0  1/2  -1 | -1/2 ]
  [ 4  0   1    0 |   2  ]
  [ 6  0   1    2 |   5  ]

Introduction to Matrices 46

Subtract 4 × row 1 from row 2, and subtract 6 × row 1 from row 3:

  [ 1  0  1/2  -1 | -1/2 ]
  [ 0  0  -1    4 |   4  ]
  [ 0  0  -2    8 |   8  ]

Multiply row 2 by -1:

  [ 1  0  1/2  -1 | -1/2 ]
  [ 0  0   1   -4 |  -4  ]
  [ 0  0  -2    8 |   8  ]

Subtract 1/2 × row 2 from row 1, and add 2 × row 2 to row 3:

  [ 1*  0  0    1 | 3/2 ]
  [ 0   0  1*  -4 | -4  ]
  [ 0   0  0    0 |  0  ]

(with the leading entries marked by asterisks).

Introduction to Matrices 47

Writing the result as a scalar system of equations, we get

  x_1 + x_4 = 3/2
  x_3 - 4x_4 = -4
  0 = 0

The third equation is uninformative, but it does indicate that the original system of equations is consistent. The first two equations imply that the unknowns x_2 and x_4 can be given arbitrary values (say x_2* and x_4*), and the values of x_1 and x_3 (corresponding to the leading entries) then follow:

  x_1 = 3/2 - x_4*
  x_3 = -4 + 4x_4*

and thus any vector

  x = [ x_1 ]   [ 3/2 - x_4* ]
      [ x_2 ] = [    x_2*    ]
      [ x_3 ]   [ -4 + 4x_4* ]
      [ x_4 ]   [    x_4*    ]

is a solution of the system of equations.
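Reduction to RREF can also be sketched in Python (an added illustration in exact arithmetic; the function name is my own). Applied to the augmented matrix of the system just solved, it reproduces the result above, and the number of nonzero rows it leaves is the rank:

```python
from fractions import Fraction

def rref(M):
    """Reduced row-echelon form by Gaussian elimination, in exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    n_rows, n_cols = len(M), len(M[0])
    lead = 0
    for r in range(n_rows):
        # find the next column with a nonzero entry at or below row r
        while lead < n_cols:
            pivot = next((i for i in range(r, n_rows) if M[i][lead] != 0), None)
            if pivot is not None:
                break
            lead += 1
        else:
            break
        M[r], M[pivot] = M[pivot], M[r]          # exchange rows if necessary
        M[r] = [x / M[r][lead] for x in M[r]]    # divide by the pivot
        for i in range(n_rows):                  # sweep out the rest of the column
            if i != r:
                f = M[i][lead]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        lead += 1
    return M

aug = [[-2, 0, -1, 2, 1],
       [ 4, 0,  1, 0, 2],
       [ 6, 0,  1, 2, 5]]
R = rref(aug)
for row in R:
    print([str(x) for x in row])
# ['1', '0', '0', '1', '3/2']
# ['0', '0', '1', '-4', '-4']
# ['0', '0', '0', '0', '0']
```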

Introduction to Matrices 48

A system for which there is more than one solution (in this case, an infinity of solutions) is called underdetermined.

Now consider the system of equations

  -2x_1 + 0x_2 - x_3 + 2x_4 = 1
   4x_1 + 0x_2 + x_3 + 0x_4 = 2
   6x_1 + 0x_2 + x_3 + 2x_4 = 3

Attaching b to A and reducing the coefficient matrix to RREF yields

  [ 1  0  0   1 | 3/2 ]
  [ 0  0  1  -4 | -4  ]
  [ 0  0  0   0 | -2  ]

The last equation,

  0x_1 + 0x_2 + 0x_3 + 0x_4 = -2

is contradictory, implying that the original system of equations has no solution. Such an inconsistent system of equations is termed overdetermined.

Introduction to Matrices 49

The third possibility is that the system of equations is consistent and has a unique solution. This occurs, for example, when there are equal numbers of equations and unknowns and when the (square) coefficient matrix is nonsingular.

The rank of a matrix is its maximum number of linearly independent rows or columns. A set of rows (or columns) is linearly independent when no row (or column) in the set can be expressed as a linear combination (weighted sum) of the others. It turns out that the maximum number of linearly independent rows in a matrix is the same as the maximum number of linearly independent columns.

Because Gaussian elimination reduces linearly dependent rows to 0, the rank of a matrix is the number of nonzero rows in its RREF. Thus, in the examples, the matrix A has rank 2.

Introduction to Matrices 50

To restate our previous results: When the rank of the coefficient matrix is equal to the number of unknowns, and the system of equations is consistent, there is a unique solution. When the rank of the coefficient matrix is less than the number of unknowns, the system of equations is underdetermined (if consistent) or overdetermined (if inconsistent).

Introduction to Matrices 51

5.1 Homogeneous Systems of Linear Equations

When the RHS vector in an equation system is a zero vector, the system of equations is said to be homogeneous:

     A      x    =    0
  (m × n)(n × 1)   (m × 1)

Homogeneous equations cannot be overdetermined, because the trivial solution x = 0 always satisfies the system. When the rank of the coefficient matrix A is less than the number of unknowns, a homogeneous system of equations is underdetermined, and consequently has nontrivial solutions as well. Consider, for example, the homogeneous system

  -2x_1 + 0x_2 - x_3 + 2x_4 = 0
   4x_1 + 0x_2 + x_3 + 0x_4 = 0
   6x_1 + 0x_2 + x_3 + 2x_4 = 0
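That this homogeneous system has nontrivial solutions is easy to confirm numerically (an added Python check; the particular nontrivial vector below is one instance of the general solution derived next):

```python
A = [[-2, 0, -1, 2],
     [ 4, 0,  1, 0],
     [ 6, 0,  1, 2]]

def matvec(A, x):
    """Multiply the matrix A into the column vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

print(matvec(A, [0, 0, 0, 0]))    # [0, 0, 0] -- the trivial solution
print(matvec(A, [-1, 7, 4, 1]))   # [0, 0, 0] -- a nontrivial solution
```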

Introduction to Matrices 52

Reducing the coefficient matrix to RREF, we have

  [ 1  0  0   1 | 0 ]
  [ 0  0  1  -4 | 0 ]
  [ 0  0  0   0 | 0 ]

Thus, the solutions (trivial and nontrivial) may be written in the form

  x_1 = -x_4*
  x_2 = x_2*
  x_3 = 4x_4*
  x_4 = x_4*

That is, x_2 and x_4 can be given arbitrary values, and the values of the other two unknowns follow (from the value assigned to x_4*).

Introduction to Matrices 53

6. Eigenvalues and Eigenvectors

Suppose that A is an order-n square matrix. Then the homogeneous system of equations

  (A - λI_n)x = 0

will have nontrivial solutions only for certain values of the scalar λ. There will be nontrivial solutions only when the matrix (A - λI_n) is singular, that is, when

  det(A - λI_n) = 0

This determinantal equation is called the characteristic equation of the matrix A. Values of λ for which the characteristic equation holds are called eigenvalues, characteristic roots, or latent roots of A. (The German word "eigen" means "own", i.e., the matrix's own values.)

Introduction to Matrices 54

Suppose that λ_1 is a particular eigenvalue of A. Then a nonzero vector x = x_1 satisfying the homogeneous system of equations

  (A - λ_1 I_n)x_1 = 0

is called an eigenvector of A associated with the eigenvalue λ_1. Eigenvectors associated with a particular eigenvalue are never unique, because if x_1 is an eigenvector of A, then so is cx_1 (where c is any nonzero scalar constant).

Introduction to Matrices 55

Because of its simplicity, let's examine the 2 × 2 case. The characteristic equation is

  det [ a_11 - λ    a_12    ]
      [   a_21    a_22 - λ  ]  =  0

I'll make use of the simple result that the determinant of a 2 × 2 matrix is the product of the diagonal entries minus the product of the off-diagonal entries, so the characteristic equation is

  (a_11 - λ)(a_22 - λ) - a_12 a_21 = 0
  λ^2 - (a_11 + a_22)λ + (a_11 a_22 - a_12 a_21) = 0

Introduction to Matrices 56

This is a quadratic equation, and therefore it can be solved by the quadratic formula, producing the two roots

  λ_1 = (1/2)[ a_11 + a_22 + sqrt((a_11 + a_22)^2 - 4(a_11 a_22 - a_12 a_21)) ]
  λ_2 = (1/2)[ a_11 + a_22 - sqrt((a_11 + a_22)^2 - 4(a_11 a_22 - a_12 a_21)) ]

The roots are real if (a_11 + a_22)^2 - 4(a_11 a_22 - a_12 a_21) is nonnegative. Notice that

  λ_1 + λ_2 = a_11 + a_22 = trace(A)
  λ_1 λ_2 = a_11 a_22 - a_12 a_21 = det(A)

As well, if A is singular, and thus det(A) = 0, then at least one of the eigenvalues must be 0. (For a symmetric A, both are 0 only if A is a zero matrix.)

Introduction to Matrices 57

Things become simpler when the matrix A is symmetric (as in most statistical applications of eigenvalues). Then a_12 = a_21, and

  λ_1 = (1/2)[ a_11 + a_22 + sqrt((a_11 - a_22)^2 + 4 a_12^2) ]
  λ_2 = (1/2)[ a_11 + a_22 - sqrt((a_11 - a_22)^2 + 4 a_12^2) ]

In this case, the eigenvalues are necessarily real numbers, since the quantity within the square root is a sum of squares.

Introduction to Matrices 58

Example:

  A = [  1   0.5 ]
      [ 0.5   1  ]

  λ_1 = (1/2)[ 1 + 1 + sqrt((1 - 1)^2 + 4(0.5^2)) ] = 1.5
  λ_2 = (1/2)[ 1 + 1 - sqrt((1 - 1)^2 + 4(0.5^2)) ] = 0.5

To find the eigenvectors associated with λ_1 = 1.5, solve

  [ 1 - 1.5    0.5    ] [ x_11 ]   [ 0 ]
  [   0.5    1 - 1.5  ] [ x_21 ] = [ 0 ]

  [ -0.5   0.5 ] [ x_11 ]   [ 0 ]
  [  0.5  -0.5 ] [ x_21 ] = [ 0 ]

which produces x_11 = x_21, and thus

  x_1 = [ x_21 ]
        [ x_21 ]

where x_21 can be given an arbitrary nonzero value.

Introduction to Matrices 59

Likewise, for λ_2 = 0.5,

  [ 0.5  0.5 ] [ x_12 ]   [ 0 ]
  [ 0.5  0.5 ] [ x_22 ] = [ 0 ]

which produces x_12 = -x_22, and thus

  x_2 = [ -x_22 ]
        [  x_22 ]

where x_22 is arbitrary. The two eigenvectors of A have an inner product of 0:

  x_1'x_2 = x_21(-x_22) + x_21 x_22 = 0

Vectors whose inner product is 0 are termed orthogonal.

Introduction to Matrices 60

The properties of the 2 × 2 case generalize as follows:

- The characteristic equation det(A - λI_n) = 0 of an order-n square matrix A is an nth-order polynomial equation in λ. Consequently, there are in general n eigenvalues, though not all are necessarily distinct. (There are better ways to find eigenvalues and eigenvectors in the general case than to solve the characteristic equation.)
- The sum of the eigenvalues is the trace: Σ λ_i = trace(A).
- The product of the eigenvalues is the determinant: Π λ_i = det(A).
- The number of nonzero eigenvalues is the rank of A.
- If A is symmetric, then all of the eigenvalues are real numbers.

Introduction to Matrices 61

- If all of the eigenvalues are distinct, then each eigenvector is determined up to a constant factor.
- If A is symmetric, eigenvectors associated with different eigenvalues are orthogonal.

Introduction to Matrices 60 The properties of the 2 2 case generalize as follows: The characteristic equation det(a I )0of an order- square matrix A is an th-order polynomial equation in. Consequently, there are in general eigenvalues, though not all are necessarily distinct. There are better ways to find eigenvalues and eigenvectors in the general case than to solve the characteristic equation. The sum of the eigenvalues is the trace, X trace(a) The product of the eigenvalues is the determinant, Y det(a) The number of nonzero eigenvalues is the rank of A. If A is symmetric, then all of the eigenvalues are real numbers. Introduction to Matrices 6 If all of the eigenvalues are distinct, then each eigenvector is determineduptoaconstantfactor. Eigenvectors associated with different eigenvalues are orthogonal