Section 1.6: Invertible Matrices

One can show (exercise) that the composition of finitely many invertible functions is invertible. As a result, we have the following:

Theorem 6.1: Any admissible row operation is an invertible function.

Proof: If f : F^{m×n} → F^{m×n} is an admissible row operation, then by definition f = e_1 ∘ ⋯ ∘ e_k, where e_1, ..., e_k are elementary row operations on F^{m×n}. Since each elementary row operation is an invertible function, the theorem follows.

In fact, the result of this theorem is an important part of the reason for using admissible row operations on the augmented matrix of a system of linear equations in order to solve the system: it is equivalent to the symmetry property of row-equivalence.

Recall that any admissible row operation f is effected by multiplication by an appropriate matrix P. In particular, such a matrix is the product of finitely many elementary matrices. In this context, we make the following definition:

Definition: If f : F^{m×n} → F^{m×n} is an admissible row operation and P ∈ F^{m×m} is such that f(A) = PA for all A ∈ F^{m×n}, then we say that the matrix P represents the row operation f.

Lemma: The identity matrix I in F^{m×m} represents the identity operation 1_{F^{m×n}}, which is an admissible row operation.

Lemma: If I is the identity matrix in F^{m×m}, then I commutes with every m×m matrix over F, i.e. we have IA = AI for every matrix A in F^{m×m}.

Theorem 6.2: Each admissible row operation f is represented by a unique matrix P.

Proof: Let f : F^{m×n} → F^{m×n} be an admissible row operation which is represented by matrices P, Q ∈ F^{m×m}. If I is the m×m identity matrix, then using the two lemmas above we have

    P = IP = PI = f(I) = QI = IQ = Q,

so we are done.

The above results motivate the definitions of the following notions for matrices, which are given in the text: left inverse of a matrix, right inverse of a matrix, (two-sided) inverse of a matrix, invertible matrix, square matrix. A useful result is

Theorem 6.3: If A ∈ F^{n×n} has a left (right) inverse B, then B is also a right (left) inverse for A. Thus, a square matrix is invertible iff it has either a left inverse or a right inverse.
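Numerically, the representing matrix P of Theorem 6.2 can be recovered exactly as in the proof: apply f to the identity matrix. The following sketch is not from the text (numpy over the reals, with illustrative names row_op and representing_matrix); it builds P for a sample admissible row operation and checks that f(A) = PA.

```python
import numpy as np

def row_op(A):
    """A sample admissible row operation on matrices with 3 rows:
    add row 1 to row 2, then scale row 3 by 2."""
    B = A.astype(float).copy()
    B[1] += B[0]
    B[2] *= 2
    return B

def representing_matrix(f, m):
    """Recover the matrix P with f(A) = P A by applying f to the
    m x m identity, mirroring the proof of Theorem 6.2 (P = f(I))."""
    return f(np.eye(m))

A = np.array([[1., 2., 0., 1.],
              [0., 1., 3., 2.],
              [5., 0., 1., 1.]])
P = representing_matrix(row_op, 3)
assert np.allclose(row_op(A), P @ A)   # P represents the row operation f
```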

The proof of Theorem 6.3 is essentially the marriage of the proof of the lemma for Theorem 10 in the text (pg. 22) with the proof of the corollary to Theorem 13 (pg. 24). The remainder of the results of this section should be familiar, but the reader is advised to review their proofs, which may differ somewhat from those in more elementary texts. We will highlight the remaining results of Section 1.6, giving few proofs.

Theorem 6.4: If A, B ∈ F^{n×n}, then:
(i) If A is invertible, then its inverse is invertible, and (A^{-1})^{-1} = A.
(ii) The product AB is invertible iff both A and B are invertible.
(iii) If A and B are both invertible, then (AB)^{-1} = B^{-1}A^{-1}.

Corollary: A product of finitely many n×n matrices is invertible iff each factor is invertible. That is, if A_1, ..., A_k ∈ F^{n×n} and A = A_1 ⋯ A_k, then A is invertible iff A_j is invertible for each natural number j with 1 ≤ j ≤ k. In this case, we have A^{-1} = A_k^{-1} ⋯ A_1^{-1}.

Proof: We use induction on k. The desired result holds trivially for k = 1; suppose we have shown the result for all products of k n×n matrices. Let A_1, ..., A_{k+1} ∈ F^{n×n}, let A = A_1 ⋯ A_k, and let B = A_{k+1}. By Theorem 6.4, AB is invertible iff A is invertible and B is invertible. But by the inductive hypothesis, A is invertible iff A_1, ..., A_k are all invertible, and since B = A_{k+1}, it follows that AB = A_1 ⋯ A_{k+1} is invertible iff A_1, ..., A_{k+1} are all invertible. Also by Theorem 6.4, if A and B are invertible, then (AB)^{-1} = B^{-1}A^{-1}. By the induction hypothesis, the inverse of A is in this case given by A^{-1} = A_k^{-1} ⋯ A_1^{-1}. Since B = A_{k+1}, we get the desired formula:

    (A_1 ⋯ A_{k+1})^{-1} = (AB)^{-1} = B^{-1}A^{-1} = A_{k+1}^{-1}(A_k^{-1} ⋯ A_1^{-1}) = A_{k+1}^{-1} ⋯ A_1^{-1}.

By mathematical induction, we are done.

Theorem 6.5: Any elementary matrix is invertible.

Corollary: Any admissible row operation is represented by an invertible matrix.

Proof: If f : F^{m×n} → F^{m×n} is an admissible row operation which is represented by the m×m matrix P, let E_1, ..., E_k be m×m elementary matrices such that P = E_1 ⋯ E_k. Since each of the matrices E_1, ..., E_k is invertible, it follows that P is invertible and P^{-1} = E_k^{-1} ⋯ E_1^{-1}.
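The reversed-order formula in the corollary is easy to sanity-check numerically. Here is a minimal sketch over the reals with numpy (random factors are invertible with probability 1; this is an illustration of ours, not part of the notes).

```python
import numpy as np

rng = np.random.default_rng(0)

# Three 4 x 4 factors; generically each is invertible.
A1, A2, A3 = (rng.standard_normal((4, 4)) for _ in range(3))
A = A1 @ A2 @ A3

# (A1 A2 A3)^{-1} should equal A3^{-1} A2^{-1} A1^{-1}.
lhs = np.linalg.inv(A)
rhs = np.linalg.inv(A3) @ np.linalg.inv(A2) @ np.linalg.inv(A1)
assert np.allclose(lhs, rhs)
```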

Theorem 6.6: Let A be an m×m matrix over a field F. Then the following are equivalent:
(i) The matrix A is invertible.
(ii) A ∼_R I, i.e. A is row-equivalent to I.
(iii) A represents an admissible row operation.
(iv) A is a product of elementary matrices.
(v) The function f_1 : F^{m×1} → F^{m×1}, defined by f_1(x) = Ax, is an invertible function.
(vi) For each n ∈ N \ {0}, the function f_n : F^{m×n} → F^{m×n}, defined by f_n(B) = AB, is an invertible function.

Corollary: A square matrix A is invertible iff [A I] ∼_R [I A^{-1}]. That is, A ∈ F^{n×n} is invertible iff there are admissible (or elementary) operations f_1, ..., f_k on F^{n×2n} such that

    f_k ∘ ⋯ ∘ f_1([A I]) = [I B],

and in this case B is the inverse of A.

Corollary: Let A and B be m×n matrices over a field F. Then we have that A ∼_R B iff B = PA for some invertible P ∈ F^{m×m}.

Theorem 6.7: If A is an n×n matrix over a field F, then the following are equivalent:
(i) The matrix A is invertible.
(ii) The homogeneous system Ax = 0 has only the trivial solution, x = 0.
(iii) For every y ∈ F^{n×1}, the system Ax = y has a solution.
(iv) For every y ∈ F^{n×1}, the system Ax = y has exactly one solution.
(v) For every y ∈ F^{n×1}, the system Ax = y has at most one solution.
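The first corollary above is also an algorithm: adjoin I to A, row-reduce, and read off A^{-1} from the right half once the left half reaches I. Below is a minimal numerical sketch of that procedure (numpy over the complex numbers, with partial pivoting; the function name inverse_via_row_reduction is ours, and in practice one would simply call np.linalg.inv). It reproduces the answer of Example 1 below.

```python
import numpy as np

def inverse_via_row_reduction(A):
    """Row-reduce [A | I] to [I | A^{-1}] (corollary to Theorem 6.6).
    Raises ValueError if A is not invertible."""
    A = np.array(A, dtype=complex)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n, dtype=complex)])       # the n x 2n matrix [A I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # nonzero pivot for this column
        if np.isclose(M[pivot, col], 0):
            raise ValueError("matrix is not invertible")
        M[[col, pivot]] = M[[pivot, col]]              # row swap
        M[col] /= M[col, col]                          # scale the pivot row to 1
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]             # clear the rest of the column
    return M[:, n:]                                    # right half is A^{-1}

A = [[1, 0, 1j],
     [1, 1, 1j],
     [0, 1, 1]]
print(np.round(inverse_via_row_reduction(A), 10))      # matches Example 1
```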

Examples:

1. Compute the inverse of the following 3×3 complex matrix:

        [ 1  0  i ]
    A = [ 1  1  i ]
        [ 0  1  1 ]

Solution: We perform admissible row operations on the 3×6 matrix [A I]. According to the corollary to Theorem 6.6, A is invertible iff this procedure succeeds in computing [I A^{-1}] from [A I]:

            [ 1  0  i | 1  0  0 ]
    [A I] = [ 1  1  i | 0  1  0 ]
            [ 0  1  1 | 0  0  1 ]

    R2 := R2 - R1:

            [ 1  0  i |  1  0  0 ]
            [ 0  1  0 | -1  1  0 ]
            [ 0  1  1 |  0  0  1 ]

    R3 := R3 - R2:

            [ 1  0  i |  1  0  0 ]
            [ 0  1  0 | -1  1  0 ]
            [ 0  0  1 |  1 -1  1 ]

    R1 := R1 - i·R3:

            [ 1  0  0 | 1-i   i  -i ]
            [ 0  1  0 |  -1   1   0 ]  =  [I A^{-1}].
            [ 0  0  1 |   1  -1   1 ]

Thus we have

             [ 1-i   i  -i ]
    A^{-1} = [  -1   1   0 ]
             [   1  -1   1 ]

2. Let f : Z_5^{3×5} → Z_5^{3×5} be defined by A ↦ f(A). That is, if A = [a_ij], then

           [ a_{1,1}              ...  a_{1,5}              ]
    f(A) = [ a_{2,1} + a_{1,1}    ...  a_{2,5} + a_{1,5}    ]
           [ 2a_{3,1} + 4a_{1,1}  ...  2a_{3,5} + 4a_{1,5}  ]

Show that f is an admissible row operation.

Solution: We will find an invertible 3×3 matrix P which represents f. In fact, if we let

        [ 1  0  0 ]            [ 1  0  0 ]
    P = [ 1  1  0 ]   and  Q = [ 4  1  0 ],
        [ 4  0  2 ]            [ 3  0  3 ]

then QP = I, and for any matrix A ∈ Z_5^{3×5} we have f(A) = PA. Since f is represented by P and P is invertible (Q = P^{-1}), it follows that f is an admissible row operation, as desired.
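Example 2 can also be verified mechanically: check that QP ≡ PQ ≡ I (mod 5) and that f(A) agrees with PA (mod 5). A small sketch with numpy (the test matrix A is an arbitrary choice of ours):

```python
import numpy as np

p = 5  # arithmetic in Z_5

P = np.array([[1, 0, 0],
              [1, 1, 0],
              [4, 0, 2]])
Q = np.array([[1, 0, 0],
              [4, 1, 0],
              [3, 0, 3]])

I = np.eye(3, dtype=int)
assert np.array_equal(Q @ P % p, I)   # Q is a left inverse of P mod 5 ...
assert np.array_equal(P @ Q % p, I)   # ... and a right inverse (Theorem 6.3)

# f(A) as defined entrywise in Example 2, versus P A (mod 5).
A = np.array([[1, 2, 0, 4, 3],
              [0, 3, 1, 2, 2],
              [4, 1, 0, 0, 1]])
f_A = np.vstack([A[0] % p,
                 (A[1] + A[0]) % p,
                 (2 * A[2] + 4 * A[0]) % p])
assert np.array_equal(f_A, P @ A % p)
```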

3. Define f : R^{2×3} → R^{2×3} by A ↦ f(A), where each of the two rows of f(A) is the sum of the two rows of A. Show that f is not an admissible row operation.

Solution: Let Q ∈ R^{2×2} be given by

    Q = [ 1  1 ]
        [ 1  1 ]

Then we claim that f(A) = QA for all A ∈ R^{2×3}, but that Q is not invertible. We leave it to the reader to verify that f(A) = QA for all A ∈ R^{2×3}, and note that the assumption of invertibility for Q would lead to a system of four equations in two variables whose solution would imply 1 = 0. Since 1 ≠ 0 in R (and in fact in any field), this contradiction proves the claim.

Now suppose, by way of contradiction, that f is an admissible row operation. Then f is represented by an invertible 2×2 matrix P. But then we would have

    [Q 0] = Q[I 0] = f([I 0]) = P[I 0] = [P 0],

where I is the 2×2 identity matrix over R and 0 is the 2×1 zero matrix over R. It follows that P = Q. But Q was shown to be non-invertible, so this is a contradiction. Thus f is not an admissible row operation.

Homework Assignment for Section 1.6:

1. Let A ∈ Z_13^{3×4} be given by

    A = [  1   2  1  0 ]
        [ 12   0  3  5 ]
        [  1  11  1  1 ]

Find a row-reduced echelon matrix R which is row-equivalent to A, an admissible row operation f on 3×4 matrices over Z_13 such that f(A) = R, and the invertible matrix P which represents f.

2. Do problem #2 on pg. 26 of the text, following the instructions given in 1 above.

3. Let F be any field, and define f : F^{m×n} → F^{m×n} by A ↦ f(A), where j ∈ {1, ..., m} and a_1, ..., a_m, b_1, ..., b_m ∈ F satisfy:
(i) 0 ∉ {a_1, ..., a_m} \ {a_j};
(ii) a_j + b_j ≠ 0.
Prove that f is an admissible operation. Is the converse true?

4. Consider the matrix [A y] over Z_5 in example 1 on pg. 44 of these notes, and the operations used to reduce this matrix to [I x] in order to solve the system Ax = y. Call these operations f_1, f_2, f_3, f_4 and f_5, respectively, so that they are defined on matrices over Z_5 by

    X ↦ f_1(X),  X ↦ f_2(X),  X ↦ f_3(X),  X ↦ f_4(X),  X ↦ f_5(X).

Prove that these are admissible row operations by finding invertible matrices P_1, P_2, P_3, P_4, P_5 over Z_5 which represent f_1, f_2, f_3, f_4, and f_5, respectively.

5. Explain what is wrong with the following computation in R^{4×4}:

        [ 1  1  1  1 ]       [ 2  1  2  1 ]
    A = [ 2  2  2  1 ]   →   [ 1  1  2  2 ]  = B.        (*)
        [ 1  1  1  0 ]       [ 0  1  1  1 ]
        [ 1  0  1  1 ]       [ 1  2  2  2 ]

In particular, note that the systems Ax = 0 and Bx = 0 do not have the same solutions, and explain why the computation (*) has caused this undesirable situation.

6. Do problem #6 on pg. 27 of the text.

7. Do problem #7 on pg. 27 of the text.

8. Do problem #10 on pg. 27 of the text.

9. Do problem #11 on pg. 27 of the text.

10. Do problem #12 on pg. 27 of the text. (Hint: first consider row operations which involve only the last two rows of the matrix.)

11. Prove that the composition of finitely many invertible functions is invertible.