Roberto's Notes on Linear Algebra Chapter 4: Matrix Algebra Section 7. Inverse matrices


What you need to know already: How to add and multiply matrices. What elementary matrices are.

What you can learn here: What the multiplicative inverse of a matrix is and how to construct it. A few, very basic properties of the inverse of a matrix.

Now that you know how to add, subtract and multiply matrices, are you curious about how to divide them?

I would be, but I am afraid that it will be very complicated, given how messy it is to multiply them!

You are right: the matrix product is such a complex operation that devising an effective way to divide matrices has proven to be a futile task in general.

Good! Let's move on to the next section.

Not so fast! It turns out that, while in general it is a futile task, in some not-so-rare cases it can be done, but the trick is to think of division in a way that is familiar to you already. The key is to remember that in usual algebra, dividing by a number is the same as multiplying by its reciprocal:

a / b = a · (1/b)

So, instead of constructing some complicated division process, we'll try to give meaning to the concept of reciprocal. We shall do so by focussing on the key property that the product of a number and its reciprocal is 1:

a · (1/a) = (1/a) · a = 1

Therefore, we shall start by defining the inverse A⁻¹ of a matrix A as being a matrix with this same property. Later we'll try to figure out when and how such an inverse matrix can be computed.

Definition
The inverse of a matrix A is a matrix denoted by A⁻¹ such that:

A⁻¹A = AA⁻¹ = I

A matrix that has an inverse is called an invertible matrix.

Does that mean that not all matrices have an inverse?

Yes, and that should not be a surprise. After all, not every number has a reciprocal: 0 does not! This is a major issue that we shall explore at length later. But we begin by looking at some other issues that must be noted right away. We begin with the minor one.

Linear Algebra Chapter 4: Matrix Algebra Section 7: Inverse matrices Page 1
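If you like to check definitions by experiment, the defining property is easy to test numerically. Here is a quick sketch in Python with NumPy; the 2×2 matrix A and its candidate inverse B are my own examples, not taken from these notes:

```python
import numpy as np

# A 2x2 matrix and a candidate inverse (both are my own illustrative choices).
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
B = np.array([[ 3.0, -1.0],
              [-5.0,  2.0]])   # claimed inverse of A

I = np.eye(2)

# The defining property: BOTH products must equal the identity.
print(np.allclose(A @ B, I))  # True
print(np.allclose(B @ A, I))  # True
```

Both products print True, which is exactly the two-sided condition A⁻¹A = AA⁻¹ = I from the definition.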

Knot on your finger

Although the matrix A⁻¹ has been defined to mirror the properties of a reciprocal, it is called an inverse instead. The reason for this choice will become clear once we use inverse matrices in a role in which they represent inverse functions in the calculus meaning of the word.

Part of the fun of mathematics is to determine how little you need to assume in order to make something work. Some mathematicians spend their whole research career on this kind of problem! But let us notice a more pressing issue for us.

Knot on your finger

Since matrix multiplication is not commutative, even if we find a matrix A⁻¹ for which A⁻¹A = I, we must still check that AA⁻¹ = I.

And now for two more technical issues.

In order for a matrix A to be invertible, it must be a square matrix.

And that will make our search more complicated, right?

Not necessarily, but it will require us to check this point.

Proof
If A is an n×m matrix, in order for both A⁻¹A and AA⁻¹ to be defined, A⁻¹ must be an m×n matrix. But in that case the products will have dimensions m×m and n×n respectively. For these to be the same identity matrix, m must be equal to n and hence A must be square.

I notice now that in the definition you did not specify which identity matrix needed to be used, but you did specify that it had to be the same one! Sneaky!

I get the problem with the commutativity, but how do we even find an inverse? It still seems very messy to reverse a matrix product.

It seems so, but it turns out that if we think carefully about the properties of the matrix product that we have seen so far, developing a suitable method will be pretty simple, and the method will not be complicated or unfamiliar either! So, let's head in that direction by identifying a property that any invertible matrix must satisfy.

If a matrix A is invertible, then its RREF must be the identity matrix I. Equivalently, if the RREF of a square matrix A is not the identity, the matrix is not invertible.
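The dimension count in the squareness proof above can be seen directly from array shapes; a small sketch (the two rectangular matrices are my own throwaway examples):

```python
import numpy as np

# A non-square "A" and a candidate "inverse" B of the only compatible shape.
A = np.ones((2, 3))   # an n x m matrix with n = 2, m = 3
B = np.ones((3, 2))   # must be m x n for both products to be defined

print((A @ B).shape)  # (2, 2)
print((B @ A).shape)  # (3, 3)
# The two products have different sizes, so they can never be the same
# identity matrix unless n = m: an invertible matrix must be square.
```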

Proof
We prove this by contradiction. So let us assume that the RREF of A is not I. This means that the homogeneous system Ax = 0 has a free variable and hence non-trivial solutions. Let us pick one non-zero solution and call it c, so that Ac = 0. If, by contradiction, A has an inverse, we must have that:

0 = A⁻¹0 = A⁻¹(Ac) = (A⁻¹A)c = Ic = c

But we picked c as a non-zero vector, so it cannot also be 0! The contradiction implies that A cannot possibly have an inverse.

So, in order to look for invertible matrices, we must look among those square matrices whose RREF is I.

Can we say that a square matrix whose RREF is I is invertible?

Not yet! Notice that this only makes a statement in one direction: if A is invertible, its RREF must be I, but it makes no claim in the opposite direction. It turns out that the opposite direction is indeed true, but we have not proved that yet. To prove it and to construct inverses we will use the elementary matrices we explored in the previous section. To begin, let me show you that elementary matrices are invertible.

Every elementary matrix E has an inverse E⁻¹. This inverse E⁻¹ is also an elementary matrix and it corresponds to the ERO that reverses the ERO corresponding to E.

Proof
There are three types of elementary matrices, corresponding to the three types of ERO's.

If E is obtained by switching two rows, applying it twice will revert the rows back to their original position. In other words, applying it twice will produce no change, hence produces the identity matrix, which is what we want. So, in this case E⁻¹ = E.

If E is obtained by multiplying a row by the non-zero scalar k, consider the elementary matrix that multiplies that same row by 1/k. Again, applying these two matrices one after the other will revert that row back to its original values. In other words, their product corresponds to the identity matrix, which is what we want. Alternatively, just multiply two such matrices together (in the general case!) to see that it works.
Finally, if E is obtained by adding to the i-th row a multiple of the j-th row, consider the elementary matrix that subtracts from the i-th row the same multiple of the j-th row. Again, applying these two matrices one after the other will revert the i-th row back to its original values. In other words, their product corresponds to the identity matrix, which is what we want.

So, in all three cases the inverse exists and is the claimed elementary matrix.

I think I follow the logic, but I still don't see how it works.

I am not surprised, since the above proof is rather theoretical. So, if you trust that proof, here is what this fact is saying for 3×3 elementary matrices.

Quick portrait of
Inverses of elementary matrices

For elementary matrices that switch rows (here, rows 1 and 2):

E = [0 1 0]        E⁻¹ = E = [0 1 0]
    [1 0 0]                  [1 0 0]
    [0 0 1]                  [0 0 1]

For elementary matrices that multiply a row by a non-zero scalar k (here, row 3):

E = [1 0 0]        E⁻¹ = [1 0  0 ]
    [0 1 0]               [0 1  0 ]
    [0 0 k]               [0 0 1/k]

For elementary matrices that add to a row a multiple of another row (here, k times row 3 added to row 2):

E = [1 0 0]        E⁻¹ = [1 0  0]
    [0 1 k]               [0 1 -k]
    [0 0 1]               [0 0  1]

I have a suggestion for you here: answer the Learning question that asks you to verify that this portrait is correct. It is a very simple task and will give you a better insight into inverse matrices and how they work in this simple situation, in addition to convincing you that this portrait is accurate.

Done? Good. It turns out that this simple fact about elementary matrices allows us to construct the inverse of any matrix whose RREF is I, thus showing also that any such matrix is invertible. The construction that I will show you is based on the following facts that we have learned so far about matrices.

To go from a matrix A to its RREF we need to apply a sequence of ERO's.

Regardless of which sequence we use, the RREF of A is unique.

Each ERO can be implemented by multiplying the current matrix, on the left, by the elementary matrix corresponding to that ERO.

The matrix we are looking for must have the property that when multiplied on the left of A, the resulting product is the identity matrix.

We are now ready for the main event of this section.

If the RREF of a square matrix A is the identity matrix, and if E1, E2, ..., Ep is a sequence of elementary matrices that changes A to I, then:

A⁻¹ = Ep Ep-1 ··· E2 E1

is a matrix that works as an inverse for A on the left side.

Proof
We need to show that A⁻¹A = I, meaning that Ep Ep-1 ··· E2 E1 A = I. But this follows from the fact that these elementary matrices were chosen exactly to reduce A to its RREF and that such RREF is I.

Notice how, despite our initial fears, this construction is very simple. All we need to do to get the inverse of A is multiply together all the elementary matrices that are needed to reduce A to I, in the proper order!
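The three portraits of elementary-matrix inverses above are easy to verify numerically. Here is a sketch in Python with NumPy; the choice of which rows to switch, scale, or combine is mine, and k = 5 is arbitrary:

```python
import numpy as np

I = np.eye(3)
k = 5.0   # arbitrary non-zero scalar

# Type I: switch rows 1 and 2 -> the matrix is its own inverse.
E_swap = I[[1, 0, 2]]
# Type II: multiply row 3 by k -> the inverse multiplies row 3 by 1/k.
E_scale = np.diag([1.0, 1.0, k])
E_scale_inv = np.diag([1.0, 1.0, 1.0 / k])
# Type III: add k times row 3 to row 2 -> the inverse subtracts it.
E_add = I.copy();     E_add[1, 2] = k
E_add_inv = I.copy(); E_add_inv[1, 2] = -k

for E, Einv in [(E_swap, E_swap), (E_scale, E_scale_inv), (E_add, E_add_inv)]:
    assert np.allclose(E @ Einv, I) and np.allclose(Einv @ E, I)
print("all three elementary inverses check out")
```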
But, according to what you said earlier, we cannot call this an inverse yet, since it only works when multiplied on the left side. What about the right?

I am glad you noticed, and we do need to check what happens on the right. However, I will show you that the matrix constructed in this way works on the right side as well. Therefore, it is the inverse and we can identify it as A⁻¹.

OK, but then I beg to disagree about the simplicity of the procedure! You want me to get the RREF, write down all the corresponding matrices and then multiply them together?! That is not very simple!

Not in the way you are suggesting, but there is a trick that allows us to do the whole procedure without ever writing down an elementary matrix! Watch:

Strategy for computing the inverse of a matrix

Given a square matrix A:

Augment it with the identity of the same dimension: [A | I].

Select an appropriate sequence of ERO's in order to obtain the RREF of A and apply each of them to the whole augmented matrix.

If the RREF of A is not I, we know that A is not invertible and we can stop.

If the procedure generates I on the left side of the augmented matrix, the matrix on its right side is A⁻¹; that is, the final augmented matrix so obtained is of the form:

[RREF of A | A⁻¹] = [I | A⁻¹]

Slow down! This feels like magic. How do I know that the right side is the inverse?

Good point. Think about it: by performing all the needed ERO's, we have multiplied the augmented matrix by the corresponding sequence of elementary matrices, that is, Ep Ep-1 ··· E2 E1. On the left side we multiplied this product of matrices by A, so we got I. On the right side we multiplied that product by I, so we got the product back again. But we have seen that this product must be the inverse of A! Bingo!

And that does the trick. Here is an example.

Example:

A = [ 2 -2  1]
    [-1 -1  3]
    [-4  1  4]

Let's see if we can find the inverse of this matrix, or if it turns out that it is not invertible. We start by augmenting it with the identity:

[A | I] = [ 2 -2  1 | 1 0 0]
          [-1 -1  3 | 0 1 0]
          [-4  1  4 | 0 0 1]

Now we perform a sequence of row operations aiming to get an RREF on the left side, but applying them to the right side as well. First we switch rows 1 and 2:

[-1 -1  3 | 0 1 0]
[ 2 -2  1 | 1 0 0]
[-4  1  4 | 0 0 1]

Then r2 → r2 + 2r1 and r3 → r3 - 4r1:

[-1 -1  3 | 0  1 0]
[ 0 -4  7 | 1  2 0]
[ 0  5 -8 | 0 -4 1]

Continuing the reduction until the left side becomes the identity, we arrive at:

[1 0 0 | 7/3 -3 5/3]
[0 1 0 | 8/3 -4 7/3]
[0 0 1 | 5/3 -2 4/3]
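The whole strategy can be condensed into a short program. The sketch below row-reduces the augmented matrix [A | I] using the three types of ERO's; the function name and the partial-pivoting detail are my own choices, not part of the notes:

```python
import numpy as np

def inverse_via_rref(A):
    """Row-reduce [A | I]; if the left half becomes I, the right half
    is the inverse (a bare-bones sketch with partial pivoting)."""
    n = len(A)
    M = np.hstack([np.asarray(A, dtype=float), np.eye(n)])
    for j in range(n):
        p = j + np.argmax(np.abs(M[j:, j]))     # pick a usable pivot row
        if np.isclose(M[p, j], 0.0):
            raise ValueError("RREF of A is not I: A is not invertible")
        M[[j, p]] = M[[p, j]]                   # Type I ERO: switch rows
        M[j] = M[j] / M[j, j]                   # Type II ERO: scale pivot row to 1
        for i in range(n):                      # Type III ERO: clear column j
            if i != j:
                M[i] = M[i] - M[i, j] * M[j]
    return M[:, n:]                            # right half of [I | A^-1]

# The example matrix from this section:
A = [[ 2, -2, 1],
     [-1, -1, 3],
     [-4,  1, 4]]
print(inverse_via_rref(A))   # approximately [[7/3, -3, 5/3], [8/3, -4, 7/3], [5/3, -2, 4/3]]
```

Calling it on a singular matrix such as [[1, 2], [2, 4]] raises the error instead, which matches the "stop" branch of the strategy.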

And there you have it: check, by multiplication, that in fact the matrix

A⁻¹ = [7/3 -3 5/3]
      [8/3 -4 7/3]
      [5/3 -2 4/3]

is worthy to be considered as the inverse of our original matrix A.

I see that it is not as bad as it seemed: we just have to compute the RREF of the augmented matrix.

Yes, and remember that once we know that this is what must be done, we can get assistance from a calculator or computer.

Yeeah for computers!

And the smart people who program them, whose work we can better appreciate by knowing what is behind it. Now, before we close this section, what do we need to do?

Have a chocolate bar?

Good idea, but there is still a technical issue to resolve: you haven't forgotten that we still need to check that this matrix works on the right every time, have you? To check that it does we begin by observing a simple, but important property of inverses.

If A1, A2, ..., Ap are invertible matrices of the same dimension, then so is their product:

A1 A2 ··· Ap

Moreover, the inverse of this product is:

(A1 A2 ··· Ap)⁻¹ = Ap⁻¹ ··· A2⁻¹ A1⁻¹

Proof
All we need to do is check. If we multiply this claimed inverse on the left of the product, we have:

(Ap⁻¹ ··· A2⁻¹ A1⁻¹)(A1 A2 ··· Ap)
  = Ap⁻¹ ··· A2⁻¹ (A1⁻¹ A1) A2 ··· Ap
  = Ap⁻¹ ··· A2⁻¹ A2 ··· Ap
  = ··· = Ap⁻¹ Ap = I

The same works on the right, as I am sure you will be glad to check.

But we are still multiplying on the left!

Patience, we are not there yet. We have seen that elementary matrices are invertible; therefore, we can apply this last fact to a product of them. This allows us to check that the matrix we construct as the inverse is, in fact, so on both sides.
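The reversed-order formula for the inverse of a product is also easy to test numerically; a sketch with two random matrices (my own example; a random Gaussian matrix is invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # invertible with probability 1
B = rng.standard_normal((3, 3))

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)   # note the REVERSED order
print(np.allclose(lhs, rhs))                # True
```

Reversing the order matters: in general inv(A) @ inv(B) is not the inverse of A @ B, precisely because matrix multiplication is not commutative.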

If A⁻¹ = Ep Ep-1 ··· E2 E1 is the product of elementary matrices obtained in the strategy for constructing the inverse of a matrix A, then:

A A⁻¹ = A (Ep Ep-1 ··· E2 E1) = I

Therefore, this matrix is indeed the inverse of A, as it satisfies the defining property on both sides.

Proof
We know that Ep Ep-1 ··· E2 E1 is a product of elementary matrices and all elementary matrices are invertible. Therefore, this is a product of invertible matrices. From what we have seen before, this product is invertible and we know that its inverse is E1⁻¹ E2⁻¹ ··· Ep⁻¹. But this latter product, when applied to I, undoes, in the correct order, all the ERO's we used to go from A to I. Therefore, it takes us back from I to A, and hence it equals A. This by itself tells us that Ep Ep-1 ··· E2 E1 is the inverse of A. However, to convince us further that our original product is an inverse on the right as well, we multiply it on the right:

A (Ep Ep-1 ··· E2 E1) = (E1⁻¹ E2⁻¹ ··· Ep⁻¹)(Ep Ep-1 ··· E2 E1)
                      = E1⁻¹ E2⁻¹ ··· (Ep⁻¹ Ep) ··· E2 E1
                      = ··· = E1⁻¹ E1 = I

Wow! I will have to read this again before I am convinced that it is not magic!

You should certainly do that. And to conclude this fairly technical section, here are some useful properties of inverses whose easy proofs are left for you in the Learning questions. After all, a technical section deserves technical Learning questions, no?

A square matrix is invertible if and only if its RREF is I.

If A is an invertible matrix and c ≠ 0 is a scalar, then:

(cA)⁻¹ = (1/c) A⁻¹

If A is an invertible matrix, then:

(Aᵀ)⁻¹ = (A⁻¹)ᵀ

If A is an invertible matrix, then:

(Aⁿ)⁻¹ = (A⁻¹)ⁿ

So, now that we have learned how to construct these inverse matrices, what are we going to do with them?

In the following chapters, we shall use inverse matrices in several contexts and situations, so they do have a wide range of applications. However, there is one for which you are ready and that links directly to the problem from which we got the idea of a matrix.

If Ax = c is a linear system and A is a square matrix, then the system has a unique solution if, and only if, A is invertible.

Proof
If A is invertible, then, by multiplying both sides of the equation by its inverse, we get:

Ax = c  ⇒  A⁻¹Ax = A⁻¹c  ⇒  x = A⁻¹c

But this means that only the vector A⁻¹c satisfies the equation, so that there is only one solution.

On the other hand, if A is not invertible, its RREF has a zero row. By applying the Gauss-Jordan method to the system, we must end up with either a leading coefficient in the column of constants (no solution) or a free variable (infinitely many solutions). In either case, we do not have a unique solution.

I can see from the proof that this also means that we can solve such a system by multiplying the vector of constants by the inverse matrix.

Yes, and we could state that as an interesting fact. However, it will have little real use for us, since in order to obtain the inverse we have to apply Gauss-Jordan, the same method we would use to solve the system, so we don't get any real advantage. But it is a little fact to keep in mind.

And, to finish, here is another connection with things we have seen before.

An n×n matrix is invertible if and only if its rank is n.

The proof of this one is a great exercise for you, so try it in the Learning questions. Time to digest the information of this section before we move on to deeper applications of it.
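The little fact x = A⁻¹c can be tried out on the example matrix from earlier in this section; the right-hand side c is my own arbitrary choice:

```python
import numpy as np

A = np.array([[ 2.0, -2.0, 1.0],
              [-1.0, -1.0, 3.0],
              [-4.0,  1.0, 4.0]])   # the invertible example matrix
c = np.array([1.0, 2.0, 3.0])       # arbitrary vector of constants

x = np.linalg.inv(A) @ c            # x = A^-1 c, as in the theorem
print(np.allclose(A @ x, c))        # True: x is the unique solution

# In practice np.linalg.solve runs Gaussian elimination directly, which is
# cheaper and more accurate than forming the inverse first -- the same
# point the notes make about Gauss-Jordan.
print(np.allclose(x, np.linalg.solve(A, c)))  # True
```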

Summary

A matrix is invertible if, and only if, its RREF is an identity matrix.

To construct the inverse of an invertible matrix, we augment that matrix with the identity of the proper size and compute the RREF of such augmented matrix. The right half of this RREF is the inverse.

Common errors to avoid

Implement this method to compute inverses a sufficient number of times to understand how it works, but also reflect on all the properties used to obtain it and all the properties of inverses that stem from it.

Learning questions for Section LA 4-7

Review questions:

1. Describe how to compute the inverse of an invertible matrix.

2. Identify the technical problems involved in the search for the inverse of a matrix.

3. Explain how to obtain the inverse of the product of invertible matrices.

Memory questions:

1. Is an elementary matrix always invertible?

2. If a square matrix is invertible, what can be said about its RREF?

3. If A and B are invertible matrices, what is the inverse of AB?

4. How is the inverse of A used to solve the system Ax = c?

5. If B is invertible, how many solutions does a system of the form Bx = u have?

6. To what matrix do we apply Gauss-Jordan elimination when looking for A⁻¹?

Computation questions:

For each of the matrices provided in questions 1-12, compute its inverse, or explain why it does not exist, and if the inverse does exist, identify the first two elementary matrices you would use in its construction.

1.-12. [The entries of these matrices are not legible in this transcription.]

For each of the matrices presented in questions 13-18, determine the values of x for which the matrix is not invertible.

13.-15. [The entries of these matrices are not legible in this transcription.]

16.-18. [The entries of these matrices, involving x, sin x and cos x, are not legible in this transcription.]

19. Given the matrices A and B, compute A⁻¹B⁻¹. Then use this fact to determine the inverse of the matrix C. HINT: Look at how A and C are related. [The entries of A, B and C are not legible in this transcription.]

20. Given the matrices A and B, check that … [The entries of A and B, and the claim to check, are not legible in this transcription.]

21. Write the system x + 2y = 3, 3x − y = 7 as a single matrix equation and use the inverse of the matrix of coefficients to solve the system.

Theory questions:

1. Which matrices can be written as the product of elementary matrices?

2. Which visible feature of the matrix

   [a 0 0]
   [0 b 0]
   [c d 0]

   tells you that it is not invertible?

3. Which triangular matrices are invertible?

4. If a square matrix A is not symmetric, by which matrix can it be multiplied so that the product is a symmetric matrix?

5. For which square matrices C is it true that CC⁻¹ = C⁻¹C?

6. If A and B are invertible matrices of the same dimensions, does it always follow that A + B is also invertible?

7. What can we say about the RREF of a non-invertible square matrix?

Proof questions:

1. Prove that if B is an invertible matrix, then the product AB commutes if and only if the product AB⁻¹ commutes.

2. Prove that if A is an invertible matrix and c ≠ 0 is a scalar, then (cA)⁻¹ = (1/c) A⁻¹.

3. Prove that if A is an invertible matrix, then (Aᵀ)⁻¹ = (A⁻¹)ᵀ.

4. Prove that if A is an invertible matrix, then (Aⁿ)⁻¹ = (A⁻¹)ⁿ.

5. Prove that the inverse of a symmetric matrix is also symmetric.

6. Prove that a 2×2 matrix is invertible if and only if its two rows are not parallel vectors.

7. Prove that an n×n matrix is invertible if and only if its rank is n.

8. Use the Gauss-Jordan method to construct the inverse of the matrix

   [1/a 1/b]
   [1/c 1/d]

   and identify all restrictions on a, b, c, d that make the answer valid.

9. Prove that any square matrix can be written as the product of a set of elementary matrices and an upper triangular matrix.

Templated questions:

1. Construct a 2×2, 3×3 or 4×4 matrix and construct its inverse or show that it does not exist.

What questions do you have for your instructor?