MTH6140 Linear Algebra II
Notes 2, 21st October 2010

2 Matrices

You have certainly seen matrices before; indeed, we met some in the first chapter of the notes. Here we revise matrix algebra, consider row and column operations on matrices, and define the rank of a matrix.

2.1 Matrix algebra

Definition 2.1 A matrix of size $m \times n$ over a field $K$, where $m$ and $n$ are positive integers, is an array with $m$ rows and $n$ columns, where each entry is an element of $K$. For $1 \le i \le m$ and $1 \le j \le n$, the entry in row $i$ and column $j$ of $A$ is denoted by $A_{ij}$, and referred to as the $(i, j)$ entry of $A$.

Example 2.1 A column vector in $K^n$ can be thought of as an $n \times 1$ matrix, while a row vector is a $1 \times n$ matrix.

Definition 2.2 We define addition and multiplication of matrices as follows.

(a) Let $A$ and $B$ be matrices of the same size $m \times n$ over $K$. Then the sum $A + B$ is defined by adding corresponding entries: $(A + B)_{ij} = A_{ij} + B_{ij}$.

(b) Let $A$ be an $m \times n$ matrix and $B$ an $n \times p$ matrix over $K$. Then the product $AB$ is the $m \times p$ matrix whose $(i, j)$ entry is obtained by multiplying the elements in the $i$th row of $A$ by the corresponding elements in the $j$th column of $B$ and summing:
$$(AB)_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}.$$
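Definition 2.2 translates directly into code. Here is a minimal sketch in Python, with a matrix stored as a list of rows; the helper names `mat_add` and `mat_mul` are our own, and the integer entries stand in for elements of a field such as the rationals.

```python
def mat_add(A, B):
    """Entrywise sum: (A + B)_ij = A_ij + B_ij (A and B of the same size)."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mul(A, B):
    """Product of an m x n and an n x p matrix: (AB)_ij = sum_k A_ik B_kj."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_add(A, B))   # [[1, 3], [4, 4]]
print(mat_mul(A, B))   # [[2, 1], [4, 3]]
print(mat_mul(B, A))   # [[3, 4], [1, 2]] -- already BA != AB
```

The last two lines preview a point made in the remark that follows: even for these small matrices, the two products differ.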

Remark Note that we can only add or multiply matrices if their sizes satisfy appropriate conditions. In particular, for a fixed value of $n$, we can add and multiply $n \times n$ matrices. It turns out that the set $M_n(K)$ of $n \times n$ matrices over $K$ is a ring with identity: this means that it satisfies conditions (A0)-(A4), (M0)-(M2) and (D) of the Fields and vector spaces information sheet. The zero matrix, which we denote by $O$, is the matrix with every entry zero, while the identity matrix, which we denote by $I$ (or by $I_n$), is the matrix with entries 1 on the main diagonal and 0 everywhere else. Note that matrix multiplication is not commutative: $BA$ is usually not equal to $AB$.

We already met matrix multiplication in Section 1 of the notes: recall that if $P_{B,B'}$ denotes the transition matrix between two bases $B$ and $B'$ of a vector space, then
$$P_{B,B'}\,P_{B',B''} = P_{B,B''}.$$

2.2 Row and column operations

Given an $m \times n$ matrix $A$ over a field $K$, we define certain operations on $A$ called row and column operations.

Definition 2.3 Elementary row operations. There are three types:

Type 1 Add a multiple of the $j$th row to the $i$th, where $j \neq i$.
Type 2 Multiply the $i$th row by a non-zero scalar.
Type 3 Interchange the $i$th and $j$th rows, where $j \neq i$.

Elementary column operations. There are three types:

Type 1 Add a multiple of the $j$th column to the $i$th, where $j \neq i$.
Type 2 Multiply the $i$th column by a non-zero scalar.
Type 3 Interchange the $i$th and $j$th columns, where $j \neq i$.
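In code, the three types of row operation are one-liners. A sketch, with function names of our own choosing, 0-based indices (unlike the 1-based convention of the notes), and exact rational arithmetic via Python's `Fraction`:

```python
from fractions import Fraction

def row_add(A, i, j, c):
    """Type 1: add c times row j to row i (i != j)."""
    A[i] = [a + c * b for a, b in zip(A[i], A[j])]

def row_scale(A, i, c):
    """Type 2: multiply row i by the non-zero scalar c."""
    A[i] = [c * a for a in A[i]]

def row_swap(A, i, j):
    """Type 3: interchange rows i and j."""
    A[i], A[j] = A[j], A[i]

A = [[Fraction(1), Fraction(2), Fraction(3)],
     [Fraction(4), Fraction(5), Fraction(6)]]
row_add(A, 1, 0, Fraction(-4))   # subtract 4 times row 0 from row 1
print(A[1])                      # the second row is now (0, -3, -6)
```

Column operations can be realised the same way, acting on columns instead (equivalently, as row operations on the transpose).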

By applying these operations, we can reduce any matrix to a particularly simple form:

Theorem 2.1 Let $A$ be an $m \times n$ matrix over the field $K$. Then it is possible to change $A$ into $B$ by elementary row and column operations, where $B$ is a matrix of the same size satisfying $B_{ii} = 1$ for $1 \le i \le r$, where $r \le \min\{m, n\}$, and all other entries of $B$ are zero.

If $A$ can be reduced to two matrices $B$ and $B'$, both of the above form, where the numbers of non-zero elements are $r$ and $r'$ respectively, by different sequences of elementary operations, then $r = r'$, and so $B = B'$.

Definition 2.4 The number $r$ in the above theorem is called the rank of $A$, while a matrix of the form described for $B$ is said to be in the canonical form for equivalence. We can write the canonical form matrix in block form as
$$B = \begin{pmatrix} I_r & O \\ O & O \end{pmatrix},$$
where $I_r$ is an $r \times r$ identity matrix and $O$ denotes a zero matrix of the appropriate size (that is, $r \times (n-r)$, $(m-r) \times r$, and $(m-r) \times (n-r)$ respectively for the three $O$s). Note that some or all of these $O$s may be missing: for example, if $r = m$, we just have $\begin{pmatrix} I_m & O \end{pmatrix}$.

Proof We outline the proof that the reduction is possible. To prove that we always get the same value of $r$, we need a different argument (see Section 2.3). The proof is by induction on the size of the matrix $A$: in other words, we assume as inductive hypothesis that any smaller matrix can be reduced as in the theorem. Let the matrix $A$ be given. We proceed in steps as follows (a code sketch of the whole procedure follows the proof):

- If $A = O$ (the all-zero matrix), then the conclusion of the theorem holds, with $r = 0$; no reduction is required. So assume that $A \neq O$.
- If $A_{11} \neq 0$, then skip this step. If $A_{11} = 0$, then there is a non-zero element $A_{ij}$ somewhere in $A$; by swapping the first and $i$th rows, and the first and $j$th columns, if necessary (Type 3 operations), we can bring this entry into the $(1,1)$ position.
- Now we can assume that $A_{11} \neq 0$. Multiplying the first row by $A_{11}^{-1}$ (row operation Type 2), we obtain a matrix with $A_{11} = 1$.
- Now by row and column operations of Type 1, we can assume that all the other elements in the first row and column are zero. For if $A_{1j} \neq 0$, then subtracting $A_{1j}$ times the first column from the $j$th gives a matrix with $A_{1j} = 0$. Repeat this until all non-zero elements have been removed.
- Now let $B$ be the matrix obtained by deleting the first row and column of $A$. Then $B$ is smaller than $A$ and so, by the inductive hypothesis, we can reduce $B$ to canonical form by elementary row and column operations. The same sequence of operations, applied to $A$, now finishes the job.
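The proof above is effectively an algorithm. Here is a sketch over the rationals, written iteratively rather than by induction; the function name `canonical_form` is our own. It reduces the matrix in place by elementary row and column operations and returns the rank $r$.

```python
from fractions import Fraction

def canonical_form(A):
    """Reduce A in place to the canonical form for equivalence; return the rank."""
    m, n = len(A), len(A[0])
    r = 0
    while r < m and r < n:
        # Find a non-zero entry in the remaining submatrix and bring it
        # to position (r, r) by Type 3 operations.
        pivot = next(((i, j) for i in range(r, m) for j in range(r, n)
                      if A[i][j] != 0), None)
        if pivot is None:
            break                          # remaining submatrix is all zero
        i, j = pivot
        A[r], A[i] = A[i], A[r]            # swap rows r and i
        for row in A:                      # swap columns r and j
            row[r], row[j] = row[j], row[r]
        c = A[r][r]                        # Type 2: make the pivot 1
        A[r] = [x / c for x in A[r]]
        for i in range(m):                 # Type 1: clear column r
            if i != r and A[i][r] != 0:
                ci = A[i][r]
                A[i] = [x - ci * y for x, y in zip(A[i], A[r])]
        for j in range(r + 1, n):          # Type 1: clear row r
            cj = A[r][j]
            if cj != 0:
                for row in A:
                    row[j] -= cj * row[r]
        r += 1
    return r

A = [[Fraction(x) for x in row] for row in [[1, 2, 3], [4, 5, 6]]]
print(canonical_form(A))   # 2; A is now [[1, 0, 0], [0, 1, 0]]
```

The test matrix is the one from Example 2.2 below, and the sketch carries out essentially the same sequence of operations.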

Example 2.2 Here is a small example. Let
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}.$$
We have $A_{11} = 1$, so we can skip the first three steps. Subtracting twice the first column from the second, and three times the first column from the third, gives the matrix
$$\begin{pmatrix} 1 & 0 & 0 \\ 4 & -3 & -6 \end{pmatrix}.$$
Now subtracting four times the first row from the second gives
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & -3 & -6 \end{pmatrix}.$$
From now on, we have to operate on the smaller matrix $\begin{pmatrix} -3 & -6 \end{pmatrix}$, but we continue to apply the operations to the large matrix. Multiply the second row by $-1/3$ to get
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 2 \end{pmatrix}.$$
Now subtract twice the second column from the third to obtain
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}.$$
We have finished the reduction, and we conclude that the rank of the original matrix $A$ is equal to 2.

We finish this section by describing the elementary row and column operations in a different way. For each elementary row operation on an $n$-rowed matrix $A$, we define the corresponding elementary matrix by applying the same operation to the $n \times n$ identity matrix $I_n$. Similarly, we represent elementary column operations on a matrix with $m$ columns by the elementary matrices obtained by applying the same operations to the $m \times m$ identity matrix.

We don't have to distinguish between rows and columns for our elementary matrices. For example, the matrix
$$\begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
corresponds to the elementary column operation of adding twice the first column to the second, or to the elementary row operation of adding twice the second row to the first. For example,
$$\begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} a & b \\ c & d \\ e & f \end{pmatrix} = \begin{pmatrix} a + 2c & b + 2d \\ c & d \\ e & f \end{pmatrix},$$
while
$$\begin{pmatrix} a & b & c \\ d & e & f \end{pmatrix} \begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} a & 2a + b & c \\ d & 2d + e & f \end{pmatrix}.$$
For the other types, the matrices for row operations and column operations are identical.

Lemma 2.2 The effect of an elementary row operation on a matrix is the same as that of multiplying on the left by the corresponding elementary matrix. Similarly, the effect of an elementary column operation is the same as that of multiplying on the right by the corresponding elementary matrix.

The proof of this lemma is a somewhat tedious calculation.
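Lemma 2.2 is easy to check computationally. A small sketch (helper names ours, 0-based indices) builds the elementary matrix for a Type 1 row operation and applies it on the left:

```python
def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def elem_add(n, i, j, c):
    """Elementary matrix for 'add c times row j to row i', made from I_n."""
    E = identity(n)
    E[i][j] = c
    return E

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4], [5, 6]]
E = elem_add(3, 0, 1, 2)   # add twice row 1 to row 0 (0-based indices)
print(mat_mul(E, A))       # [[7, 10], [3, 4], [5, 6]]: row 0 += 2 * row 1
# The same E, multiplied on the right of a 2 x 3 matrix, instead adds
# twice column 0 to column 1, as in the worked example above.
```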

Example 2.3 We continue our previous example. In order, here is the list of elementary matrices corresponding to the operations we applied to $A$. (Here the $2 \times 2$ matrices are row operations, while the $3 \times 3$ matrices are column operations.)
$$\begin{pmatrix} 1 & -2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad \begin{pmatrix} 1 & 0 & -3 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad \begin{pmatrix} 1 & 0 \\ -4 & 1 \end{pmatrix}, \quad \begin{pmatrix} 1 & 0 \\ 0 & -1/3 \end{pmatrix}, \quad \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -2 \\ 0 & 0 & 1 \end{pmatrix}.$$
So the whole process can be written as a matrix equation:
$$\begin{pmatrix} 1 & 0 \\ 0 & -1/3 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -4 & 1 \end{pmatrix} A \begin{pmatrix} 1 & -2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & -3 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -2 \\ 0 & 0 & 1 \end{pmatrix} = B,$$
or more simply
$$\begin{pmatrix} 1 & 0 \\ 4/3 & -1/3 \end{pmatrix} A \begin{pmatrix} 1 & -2 & 1 \\ 0 & 1 & -2 \\ 0 & 0 & 1 \end{pmatrix} = B,$$
where, as before,
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}, \quad B = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}.$$

The result is that we have found matrices $P$ and $Q$ such that $PAQ = B$.

The matrices $P$ and $Q$ can be found more easily, without the need for so much matrix multiplication. Take identity matrices $I_2$ and $I_3$, of sizes $2 \times 2$ and $3 \times 3$ respectively. Each time you perform a row operation on $A$, do the same to $I_2$; and each time you perform a column operation on $A$, do the same to $I_3$. The resulting matrices will be the required $P$ and $Q$. In our case, the row operations have the effect
$$\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 \\ -4 & 1 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 \\ 4/3 & -1/3 \end{pmatrix},$$
giving the matrix $P$ we found before; the other matrix is found similarly. In general, starting with an $m \times n$ matrix $A$, do the same thing with identity matrices $I_m$ for the row operations and $I_n$ for the column operations. (A code sketch of this bookkeeping appears at the end of this section.)

An important observation about the elementary operations is that each of them can have its effect undone by another elementary operation of the same kind, and hence every elementary matrix is invertible, with its inverse being another elementary matrix of the same kind. For example, the effect of adding twice the first row to the second is undone by adding $-2$ times the first row to the second, so that
$$\begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix}.$$
Since the product of invertible matrices is invertible, we can state the above theorem in a more concise form. First, one more definition:

Definition 2.5 The $m \times n$ matrices $A$ and $B$ are said to be equivalent if $B = PAQ$, where $P$ and $Q$ are invertible matrices of sizes $m \times m$ and $n \times n$ respectively.

Theorem 2.3 Given any $m \times n$ matrix $A$, there exist invertible matrices $P$ and $Q$ of sizes $m \times m$ and $n \times n$ respectively, such that $PAQ$ is in the canonical form for equivalence.

Remark The relation "equivalence" defined above is an equivalence relation on the set of all $m \times n$ matrices; that is, it is reflexive, symmetric and transitive. When mathematicians talk about a canonical form for an equivalence relation, they mean a set of objects which are representatives of the equivalence classes: that is, every object is equivalent to a unique object in the canonical form. We have shown this for the relation of equivalence defined earlier, except for the uniqueness of the canonical form. This is our job for the next section.
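To make Theorem 2.3 and the bookkeeping recipe above concrete, here is a sketch that replays the five operations of Examples 2.2 and 2.3, applying each row operation to a copy of $I_2$ and each column operation to a copy of $I_3$; the helper names are ours.

```python
from fractions import Fraction

def identity(n):
    return [[Fraction(1) if i == j else Fraction(0) for j in range(n)]
            for i in range(n)]

def col_add(M, j, k, c):     # add c times column k to column j
    for row in M:
        row[j] += c * row[k]

def row_add(M, i, k, c):     # add c times row k to row i
    M[i] = [a + c * b for a, b in zip(M[i], M[k])]

def row_scale(M, i, c):      # multiply row i by the non-zero scalar c
    M[i] = [c * a for a in M[i]]

A = [[Fraction(x) for x in row] for row in [[1, 2, 3], [4, 5, 6]]]
P, Q = identity(2), identity(3)   # track row ops in P, column ops in Q

for M in (A, Q): col_add(M, 1, 0, Fraction(-2))    # col 1 -= 2 * col 0
for M in (A, Q): col_add(M, 2, 0, Fraction(-3))    # col 2 -= 3 * col 0
for M in (A, P): row_add(M, 1, 0, Fraction(-4))    # row 1 -= 4 * row 0
for M in (A, P): row_scale(M, 1, Fraction(-1, 3))  # row 1 *= -1/3
for M in (A, Q): col_add(M, 2, 1, Fraction(-2))    # col 2 -= 2 * col 1

print(A)   # the canonical form B = [[1, 0, 0], [0, 1, 0]]
print(P)   # [[1, 0], [4/3, -1/3]] as Fractions
print(Q)   # [[1, -2, 1], [0, 1, -2], [0, 0, 1]] as Fractions
```

The printed $P$ and $Q$ agree with the matrices found in Example 2.3, and one can check directly that $PAQ = B$.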

2.3 Rank

We have the unfinished business of showing that the rank of a matrix is well defined; that is, no matter how we do the row and column reduction, we end up with the same canonical form. We do this by defining two further kinds of rank, and proving that all three are the same.

Definition 2.6 Let $A$ be an $m \times n$ matrix over a field $K$. We say that the column rank of $A$ is the maximum number of linearly independent columns of $A$, while the row rank of $A$ is the maximum number of linearly independent rows of $A$. (We regard columns and rows as vectors in $K^m$ and $K^n$ respectively.)

Now we need a sequence of four lemmas.

Lemma 2.4
(a) Elementary column operations don't change the column rank of a matrix.
(b) Elementary row operations don't change the column rank of a matrix.
(c) Elementary column operations don't change the row rank of a matrix.
(d) Elementary row operations don't change the row rank of a matrix.

Proof (a) This is clear for Type 3 operations, which just rearrange the vectors. For Types 1 and 2, we have to show that such an operation cannot take a linearly independent set to a linearly dependent set; the converse statement then holds because the inverse of an elementary operation is another operation of the same kind.

So suppose that some column vectors $v_1, \ldots, v_n$ are linearly independent. Consider a Type 1 operation involving adding $c$ times the $j$th column to the $i$th; the new columns are $v'_1, \ldots, v'_n$, where $v'_k = v_k$ for $k \neq i$, while $v'_i = v_i + c v_j$. Suppose (in order to obtain a contradiction) that the new vectors are linearly dependent. Then there are scalars $a_1, \ldots, a_n$, not all zero, such that
$$0 = a_1 v'_1 + \cdots + a_n v'_n = a_1 v_1 + \cdots + a_i (v_i + c v_j) + \cdots + a_j v_j + \cdots + a_n v_n = a_1 v_1 + \cdots + a_i v_i + \cdots + (a_j + c a_i) v_j + \cdots + a_n v_n.$$
Since $v_1, \ldots, v_n$ are linearly independent, we conclude that
$$a_1 = 0, \ \ldots, \ a_i = 0, \ \ldots, \ a_j + c a_i = 0, \ \ldots, \ a_n = 0,$$
from which we see that all the $a_k$ are zero, contrary to assumption. So the new columns are linearly independent. The argument for Type 2 operations is similar but easier.

(b) It is easily checked that, if an elementary row operation is applied, then the new column vectors satisfy exactly the same linear relations as the old ones (that is, the same linear combinations are zero). So the linearly independent sets of columns don't change at all.

(c) Same as (b), but applied to rows.

(d) Same as (a), but applied to rows.
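Definition 2.6 can be computed directly: since row operations do not change column rank (Lemma 2.4(b)), the column rank is the number of pivots left after row reduction, and the row rank is the column rank of the transpose. A sketch over the rationals, with function names of our own:

```python
from fractions import Fraction

def column_rank(A):
    """Column rank = number of pivots after row reduction (Lemma 2.4(b))."""
    A = [row[:] for row in A]               # work on a copy
    m, n = len(A), len(A[0])
    r = 0                                   # pivots found so far
    for j in range(n):                      # scan columns left to right
        i = next((i for i in range(r, m) if A[i][j] != 0), None)
        if i is None:
            continue                        # no pivot in this column
        A[r], A[i] = A[i], A[r]             # Type 3
        c = A[r][j]
        A[r] = [x / c for x in A[r]]        # Type 2
        for k in range(m):
            if k != r and A[k][j] != 0:     # Type 1
                ck = A[k][j]
                A[k] = [x - ck * y for x, y in zip(A[k], A[r])]
        r += 1
    return r

def row_rank(A):
    """Row rank of A = column rank of the transpose."""
    return column_rank([list(col) for col in zip(*A)])

A = [[Fraction(x) for x in row] for row in [[1, 2, 3], [4, 5, 6]]]
print(column_rank(A), row_rank(A))   # 2 2 -- equal, as the section goes on to prove
```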

In fact, parts (a) and (d) can be strengthened. The column space of an $m \times n$ matrix over $K$ is defined to be the subspace of $K^m$ spanned by the columns, and the row space is the subspace of $K^n$ spanned by the rows. Now the following holds:

Lemma 2.5 Elementary column operations don't change the column space of a matrix, and elementary row operations don't change the row space. Elementary column operations do change the row space, but transform it into a subspace of the same dimension; and similarly the other way round.

Theorem 2.6 For any matrix $A$, the row rank, the column rank, and the rank are all equal. In particular, the rank is independent of the row and column operations used to compute it.

Proof Suppose that we reduce $A$ to canonical form $B$ by elementary operations, where $B$ has rank $r$. These elementary operations don't change the row or column rank, by Lemma 2.4; so the row ranks of $A$ and $B$ are equal, and their column ranks are equal. But it is trivial to see that, if
$$B = \begin{pmatrix} I_r & O \\ O & O \end{pmatrix},$$
then the row and column ranks of $B$ are both equal to $r$. So the theorem is proved.

We can get an extra piece of information from our deliberations. Let $A$ be an invertible $n \times n$ matrix. Then the canonical form of $A$ is just the identity matrix $I_n$: its rank is equal to $n$. This means that there are matrices $P$ and $Q$, each a product of elementary matrices, such that
$$PAQ = I_n.$$
From this we deduce that
$$A = P^{-1} I_n Q^{-1} = P^{-1} Q^{-1}.$$
But $P^{-1}$ and $Q^{-1}$ are both products of elementary matrices (since the inverse of an elementary matrix is another elementary matrix), so we have proved:

Corollary 2.7 Every invertible square matrix is a product of elementary matrices.
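As a concrete instance of Corollary 2.7 (with the matrix and its factorisation chosen by us): reducing $A = \begin{pmatrix} 0 & 1 \\ 2 & 3 \end{pmatrix}$ to the identity takes one row operation of each type, and inverting those operations writes $A$ as a product of three elementary matrices.

```python
from fractions import Fraction

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

E1 = [[0, 1], [1, 0]]                # Type 3: interchange the two rows
E2 = [[2, 0], [0, 1]]                # Type 2: multiply row 0 by 2
E3 = [[1, Fraction(3, 2)], [0, 1]]   # Type 1: add 3/2 times row 1 to row 0
print(mat_mul(E1, mat_mul(E2, E3)))  # equals A = [[0, 1], [2, 3]]
```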

In fact, we learn a little bit more. We observed, when we defined elementary matrices, that they can represent either elementary column operations or elementary row operations. So, when we have written $A$ as a product of elementary matrices, we can choose to regard them as representing column operations, and we see that $A$ can be obtained from the identity by applying elementary column operations. If we now apply the inverse operations in the other order, they will turn $A$ into the identity (which is its canonical form). In other words, the following is true:

Corollary 2.8 If $A$ is an invertible $n \times n$ matrix, then $A$ can be transformed into the identity matrix by elementary column operations alone (or by elementary row operations alone).
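Corollary 2.8 is, in effect, Gauss-Jordan elimination. A closing sketch over the rationals (function name ours) reduces an invertible matrix to the identity using row operations only:

```python
from fractions import Fraction

def reduce_to_identity(A):
    """Row-reduce an invertible n x n matrix in place to the identity."""
    n = len(A)
    for j in range(n):
        # A pivot exists in column j because A is invertible.
        i = next(i for i in range(j, n) if A[i][j] != 0)
        A[j], A[i] = A[i], A[j]                  # Type 3
        c = A[j][j]
        A[j] = [x / c for x in A[j]]             # Type 2
        for k in range(n):
            if k != j and A[k][j] != 0:          # Type 1
                ck = A[k][j]
                A[k] = [x - ck * y for x, y in zip(A[k], A[j])]

A = [[Fraction(x) for x in row] for row in [[0, 1], [2, 3]]]
reduce_to_identity(A)
print(A)   # the 2 x 2 identity, as Corollary 2.8 promises
```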