INVERSE OF A MATRIX [2.2]

The inverse of a matrix: Introduction

We have a mapping from R^n to R^n represented by a matrix A. Can we invert this mapping? I.e., can we find a matrix (call it B for now) such that when B is applied to Ax the result is x?

    x  --(product by A)-->  Ax  --(product by B)-->  x

Example: a blurring operation. We want to revert the blurring, i.e., to deblur. So: blurring is A; deblurring is B. B is the inverse of A and is denoted by A^{-1}.

Recall that I_n x = x for all x. Since we want A^{-1}(Ax) = x for all x, we need to have A^{-1} A = I_n. Naturally the inverse of A^{-1} should be A, so we also want A A^{-1} = I_n.

Finding an inverse of A is not always possible. When it is, we say that the matrix A is invertible.

Next: details.

The inverse of a matrix

An n × n matrix A is said to be invertible if there is an n × n matrix B such that BA = I and AB = I, where I = I_n, the n × n identity matrix. In this case, B is an inverse of A.

In fact, B is uniquely determined by A: if C were another inverse of A, then

    C = CI = C(AB) = (CA)B = IB = B.

This unique inverse is denoted by A^{-1}, so that A A^{-1} = A^{-1} A = I.

Matrix inverse: the 2 × 2 case

Let

    A = [ a  b ]
        [ c  d ]

If ad − bc ≠ 0, then A is invertible and

    A^{-1} = 1/(ad − bc) [  d  -b ]
                         [ -c   a ]

Verify the result. If ad − bc = 0, then A is not invertible (does not have an inverse).

The quantity ad − bc is called the determinant of A, written det(A). The above says that a 2 × 2 matrix is invertible if and only if det(A) ≠ 0.
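
As a quick numerical check of the 2 × 2 formula, here is a minimal sketch in Python/NumPy (the matrix used is an arbitrary illustration, not an example from the text):

    import numpy as np

    def inv2x2(A):
        """Inverse of a 2 x 2 matrix via the (ad - bc) formula."""
        a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
        det = a * d - b * c
        if det == 0:
            raise ValueError("not invertible: ad - bc = 0")
        return (1.0 / det) * np.array([[d, -b], [-c, a]])

    A = np.array([[1.0, 2.0], [3.0, 4.0]])   # arbitrary example, ad - bc = -2
    print(inv2x2(A))                         # [[-2. ,  1. ], [ 1.5, -0.5]]
    print(A @ inv2x2(A))                     # the 2 x 2 identity (up to rounding)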

Matrix inverse: properties

Theorem. If A is invertible, then for each b in R^n the equation Ax = b has the unique solution x = A^{-1} b.

Proof: Take any b in R^n. A solution exists because if A^{-1} b is substituted for x, then Ax = A(A^{-1} b) = (A^{-1} A)b = Ib = b. So A^{-1} b is a solution. To prove that the solution is unique, show that if u is any solution, then u must equal A^{-1} b. If Au = b, we can multiply both sides by A^{-1} and obtain A^{-1} A u = A^{-1} b, so Iu = A^{-1} b, and u = A^{-1} b.

Recall: A is one-to-one iff its columns are linearly independent.

Show: if A is invertible then it is one-to-one, i.e., its columns are linearly independent.
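
The theorem suggests x = A^{-1} b as a solution formula. The sketch below (with an arbitrary A and b, assuming NumPy) checks it against a direct solver; in numerical practice one calls the solver rather than forming A^{-1} explicitly.

    import numpy as np

    A = np.array([[2.0, 1.0], [1.0, 3.0]])   # arbitrary invertible matrix
    b = np.array([1.0, 2.0])

    x_inv   = np.linalg.inv(A) @ b      # x = A^{-1} b, as in the theorem
    x_solve = np.linalg.solve(A, b)     # solve Ax = b directly
    print(np.allclose(x_inv, x_solve))  # True
    print(np.allclose(A @ x_inv, b))    # True: it really solves Ax = b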

Matrix inverse: properties

a. If A is an invertible matrix, then A^{-1} is invertible and (A^{-1})^{-1} = A.

b. If A and B are n × n invertible matrices, then so is AB, and (AB)^{-1} = B^{-1} A^{-1}.

c. If A is an invertible matrix, then so is A^T, and the inverse of A^T is the transpose of A^{-1}: (A^T)^{-1} = (A^{-1})^T.

Common notation: (A^T)^{-1} is written A^{-T}.
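
Properties (b) and (c) are easy to sanity-check numerically; a short sketch with random matrices (which are invertible with probability 1):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))
    inv = np.linalg.inv

    print(np.allclose(inv(A @ B), inv(B) @ inv(A)))   # (AB)^{-1} = B^{-1} A^{-1}
    print(np.allclose(inv(A.T), inv(A).T))            # (A^T)^{-1} = (A^{-1})^T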

Elementary matrices

Consider the matrix below and call it E. What is the result of the product EA for some matrix A?

    E = [ 1  0  0  0 ]
        [ 0  1  0  0 ]
        [ r  0  1  0 ]
        [ 0  0  0  1 ]

Can this operation result in a change of the linear independence of the columns of A? [Prove or disprove.]

Consider now the matrix below [obtained by swapping rows 2 and 4 of I]. Call it P. Same questions as above.

    P = [ 1  0  0  0 ]
        [ 0  0  0  1 ]
        [ 0  0  1  0 ]
        [ 0  1  0  0 ]

Matrices like E (elementary elimination matrix) and P (permutation matrix) are called elementary matrices.
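
To experiment with E and P, a short NumPy sketch (the multiplier r and the test matrix A below are arbitrary choices, not from the text):

    import numpy as np

    r = 2.5
    E = np.eye(4)
    E[2, 0] = r                    # E adds r times row 1 to row 3

    P = np.eye(4)[[0, 3, 2, 1]]    # identity with rows 2 and 4 swapped

    A = np.arange(1.0, 13.0).reshape(4, 3)   # arbitrary 4 x 3 test matrix
    print(E @ A)   # same as A except row 3 becomes row 3 + r * row 1
    print(P @ A)   # same as A with rows 2 and 4 swapped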

Elimination algorithms and elementary matrices

We will show this: the following algorithms (Gaussian elimination, Gauss-Jordan, reduction to echelon form, and reduction to reduced row echelon form) are all based on multiplying the original matrix on the left by a sequence of elementary matrices. Each of these transformations preserves the linear independence of the columns of the original matrix.

An elementary matrix is one that is obtained by performing a single elementary row operation on an identity matrix.

Let us revisit Gaussian elimination. Recommended: compare with the lecture note example from Section 1.1.

Recall: Gaussian elimination

Consider the example seen in Section 1.1. Step 1 must transform

    [ 2  4  4   2 ]            [ x  x  x  x ]
    [ 1  3  1   1 ]    into    [ 0  x  x  x ]
    [ 1  5  6  -6 ]            [ 0  x  x  x ]

row 2 := row 2 − (1/2) row 1:

    [ 2  4  4   2 ]
    [ 0  1 -1   0 ]
    [ 1  5  6  -6 ]

row 3 := row 3 − (1/2) row 1:

    [ 2  4  4   2 ]
    [ 0  1 -1   0 ]
    [ 0  3  4  -7 ]

The first transformation (row 2 := row 2 − (1/2) row 1) is equivalent to performing this product:

    [  1    0  0 ] [ 2  4  4   2 ]   [ 2  4  4   2 ]
    [ -1/2  1  0 ] [ 1  3  1   1 ] = [ 0  1 -1   0 ]
    [  0    0  1 ] [ 1  5  6  -6 ]   [ 1  5  6  -6 ]

Similarly, the operation on row 3 is equivalent to the product:

    [  1    0  0 ] [ 2  4  4   2 ]   [ 2  4  4   2 ]
    [  0    1  0 ] [ 0  1 -1   0 ] = [ 0  1 -1   0 ]
    [ -1/2  0  1 ] [ 1  5  6  -6 ]   [ 0  3  4  -7 ]

Hint: use the row-wise form of the matrix products.

The matrix on the left is called an elementary elimination matrix.

Do the same thing for the 2nd (and last) step of GE.
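
The two elimination products above can be reproduced directly; a NumPy sketch using the same augmented matrix:

    import numpy as np

    A = np.array([[2.0, 4.0, 4.0,  2.0],
                  [1.0, 3.0, 1.0,  1.0],
                  [1.0, 5.0, 6.0, -6.0]])   # augmented matrix of the Section 1.1 example

    E1 = np.eye(3); E1[1, 0] = -0.5          # row 2 := row 2 - 1/2 row 1
    E2 = np.eye(3); E2[2, 0] = -0.5          # row 3 := row 3 - 1/2 row 1

    print(E1 @ A)        # first elimination step
    print(E2 @ E1 @ A)   # zeros everywhere below the first pivot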

Another type of elementary matrix: permutations

A permutation matrix is a matrix obtained from the identity matrix by permuting its rows. For example, for the permutation p = {3, 1, 4, 2} we obtain

    P = [ 0  0  1  0 ]
        [ 1  0  0  0 ]
        [ 0  0  0  1 ]
        [ 0  1  0  0 ]

Important observation: the matrix PA is obtained from A by permuting its rows with the permutation p:

    (PA)_{i,:} = A_{p(i),:}

In words: the i-th row of PA is row number p(i) of A.

What does this mean?

It means, for example, that the 3rd row of PA is simply row number p(3) = 4 of the original matrix A:

    3rd row of PA = p(3)-th row of A

Why is this true? What can you say about the j-th column of AP?

What is the matrix PA when

    P = [ 0  0  1  0 ]        A = [ 1  2  3  4 ]
        [ 1  0  0  0 ]            [ 5  6  7  8 ]
        [ 0  0  0  1 ]            [ 9  0  1  2 ]
        [ 0  1  0  0 ]            [ 3  4  5  6 ]   ?
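
The rule (PA)_{i,:} = A_{p(i),:} can be checked with the P and A above (a NumPy sketch; note that code indices are 0-based while the slide's p is 1-based):

    import numpy as np

    p = [3, 1, 4, 2]                        # permutation from the slide (1-based)
    P = np.eye(4)[[i - 1 for i in p]]       # row i of P is e_{p(i)}

    A = np.array([[1, 2, 3, 4],
                  [5, 6, 7, 8],
                  [9, 0, 1, 2],
                  [3, 4, 5, 6]])

    print(P @ A)                                          # row i of PA is row p(i) of A
    print(np.array_equal(P @ A, A[[i - 1 for i in p]]))   # True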

Back to elementary matrices

Do the elementary matrices E_1, E_2, ..., E_{n-1} (including permutations) change the linear independence of the columns?

Prove: if u, v, w (3 columns of A) are independent, then the columns E_1 u, E_1 v, E_1 w are independent, where E_1 is an elementary matrix (an elimination matrix or a permutation matrix).

So (*very important*): elimination operations (Gaussian elimination, Gauss-Jordan, reduction to echelon form, and to rref) preserve the linear independence of the columns.

This will help us establish the main results on inverses of matrices.

Existence of the inverse and related properties

We are now prepared to prove the following theorem.

Existence Theorem. The following 4 statements are equivalent:

(1) A is invertible
(2) The columns of A are linearly independent
(3) The span of the columns of A is R^n
(4) rref(A) is the identity matrix

Proof: (2) ⇔ (4)

Theorem: Let A be an n × n matrix. Then the columns of A are linearly independent iff its reduced echelon form is the identity matrix.

If rref(A) ≠ I, there must be at least one free variable. Form the augmented system [A, 0]. Set this free variable to one (the other free variables to zero) and solve for the basic variables. Result: a nontrivial solution to the system Ax = 0. Contradiction.

Conversely, if rref(A) = I, then the columns of A are independent, since the elementary operations do not alter linear independence.

(**) Let A be an n × n matrix with independent columns and b ∈ R^n a right-hand side. Apply rref to [A, b]. What do A and b become? [Hint: use the result of the first part of the proof above.] Consequence?

Proof: (3) ⇒ (4). As was seen before, (3) implies that there is a pivot in every row. Since the matrix is n × n, the only possible reduced echelon matrix of this type is I.

Proof: (2) ⇒ (3). Proof by contradiction. Assume A has linearly independent columns, and assume that some system Ax = b does not have a solution. Then [A, b] has a reduced row echelon form in which the b column becomes a pivot column. So there is a zero row in the A part of the echelon matrix. This means we have at least one free variable, so the system Ax = 0 has nontrivial solutions. Contradiction.

Proof: (2) ⇒ (1)

Theorem: Let A be an n × n matrix. Then A has independent columns if and only if A is invertible.

From the previous theorem, A can be reduced to the identity matrix with the reduced echelon form procedure. There are elementary matrices E_1, E_2, ..., E_p such that

    E_p E_{p-1} ... E_2 E_1 A = I

(Step 1: left-multiply A by E_1; Step 2: left-multiply the result by E_2; etc.)

Call C the matrix E_p E_{p-1} ... E_1. Then CA = I, so A has a left inverse.

It also has a right inverse X (such that AX = I), because any system Ax = b has a solution (see exercise (**) above). Therefore we can solve Ax_i = e_i, where e_i is the i-th column of I. For X = [x_1, x_2, ..., x_n] this gives AX = I.

Finally, X = C: indeed C = CI = C(AX) = (CA)X = IX = X (using AX = I and CA = I). So C = X, hence CA = AC = I and A is invertible.

Proof: (1) ⇒ (2). Let A be invertible. Its columns are linearly independent iff (by definition) Ax = 0 implies x = 0; this is trivially true, as can be seen by multiplying Ax = 0 on the left by A^{-1}.

Q: Is the Existence Theorem proved? A: Yes.

Here is what you need to remember:

    A invertible ⇔ rref(A) = I ⇔ columns of A linearly independent ⇔ columns of A span R^n

Computing the inverse

Q: How do I compute the inverse of a matrix A?

A: Two common strategies [not necessarily the best]:

- Using the reduced row echelon form
- Solving the n systems Ax = e_i for i = 1, ..., n

How to use the echelon form? We could record the product of the E_i's as suggested by one of the previous theorems. Too complicated!

Instead, get the reduced echelon form of the augmented matrix [A, I]. Assuming A is invertible, the result is of the form [I, C]. The inverse is C. Explain why.

What will happen if A is not invertible?
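
A minimal sketch of this recipe in Python/NumPy (with partial pivoting added for numerical safety; an illustration of the idea, not the course's reference implementation). It is applied here to the encoding matrix A from the crypto example at the end of these notes:

    import numpy as np

    def inverse_by_rref(A):
        """Row reduce [A | I] to [I | C]; return C = A^{-1}."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        M = np.hstack([A, np.eye(n)])              # the augmented matrix [A, I]
        for k in range(n):
            piv = k + np.argmax(np.abs(M[k:, k]))  # partial pivoting (row swap)
            if np.isclose(M[piv, k], 0.0):
                raise ValueError("matrix is not invertible")
            M[[k, piv]] = M[[piv, k]]
            M[k] /= M[k, k]                        # make the pivot equal to 1
            for i in range(n):
                if i != k:
                    M[i] -= M[i, k] * M[k]         # zero out column k in every other row
        return M[:, n:]                            # the C block of [I, C]

    A = np.array([[3.0, 3.0, 4.0],
                  [0.0, 1.0, 1.0],
                  [4.0, 3.0, 4.0]])                # encoding matrix from the crypto example below
    C = inverse_by_rref(A)
    print(np.allclose(C @ A, np.eye(3)))           # True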

Example: Compute the inverse of a given 3 × 3 matrix A.

Solution. First form the augmented matrix [A, I]. Then get its reduced echelon form, which (since A is invertible) has the form [I, C]. The inverse is C.

[The numerical matrices of this worked example are garbled in this copy and are omitted.]

Example of application: classical crypto

Idea of cryptography: a mapping from some space to itself. Encoding = applying the mapping. Decoding = applying the inverse mapping.

Simple example: Hill's cipher [linear]. We'll describe a simplification of the scheme.

Associate a number with every letter [e.g., 0-25]: A → 0; B → 1; C → 2; ...; Z → 25.

1st step: translate the message into these numbers.

Example: BUY GOOGLE NOW

This translates to (note: 26 stands for the space character, and a trailing space pads the message):

    1, 20, 24, 26, 6, 14, 14, 6, 11, 4, 26, 13, 14, 22, 26

2nd step: put these numbers into a matrix with 3 rows, filled column by column:

    Message = X = [  1  26  14   4  14 ]
                  [ 20   6   6  26  22 ]
                  [ 24  14  11  13  26 ]

3rd step: scramble the message with the encoding matrix

    A = [ 3  3  4 ]
        [ 0  1  1 ]
        [ 4  3  4 ]

This means we multiply X by A to get the encoded message:

    Y = AX = [ 159  152  104  142  212 ]
             [  44   20   17   39   48 ]
             [ 160  178  118  146  226 ]

... which is transmitted.

4th step: the receiver must now decode the message by applying the inverse of A, which in this case is

    A^{-1} = [ -1   0   1 ]
             [ -4   4   3 ]
             [  4  -3  -3 ]

Decoded message:

    X = A^{-1} Y = [  1  26  14   4  14 ]
                   [ 20   6   6  26  22 ]
                   [ 24  14  11  13  26 ]
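
The full encode/decode round trip of the example, as a NumPy sketch (A, X, and Y are the matrices shown above; A^{-1} is recomputed rather than hard-coded):

    import numpy as np

    A = np.array([[3, 3, 4],
                  [0, 1, 1],
                  [4, 3, 4]])
    X = np.array([[ 1, 26, 14,  4, 14],
                  [20,  6,  6, 26, 22],
                  [24, 14, 11, 13, 26]])    # the message, one column per 3 characters

    Y = A @ X                                        # encoded message that is transmitted
    Ainv = np.round(np.linalg.inv(A)).astype(int)    # integer inverse (det(A) = -1)
    print(Y)                                         # matches the Y shown above
    print(Ainv)                                      # [[-1 0 1], [-4 4 3], [4 -3 -3]]
    print(np.array_equal(Ainv @ Y, X))               # True: decoding recovers the message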

To break the code, all you need is the mapping A. Then compute A^{-1} (easy).

The mapping is linear, and so it is easy to find A. How would you proceed to get A? [Recall practice exercise sets 8 & 9.] How many messages do you need to intercept to do this? Is the message "Hello" enough? How about "Good morning"?

Nonlinear codes are much harder to break.

Hill's cipher adds a modulo operation by translating Y into letters first. For example, 226 will become Mod(226, 25) = 1, which gives B... more complicated.