(I.D) Solving Linear Systems via Row-Reduction

Turning to the promised algorithmic approach to Gaussian elimination, we say an $m \times n$ matrix $M$ is in reduced row echelon form if:

- the first nonzero entry of each row is $1$ (called a "leading 1"); we write $r$ for the number of leading 1's;
- if a column contains a leading 1, then this must be its only nonzero entry (such columns are called "pivot columns"); and
- if a row contains a leading 1, then each row above contains a leading 1 further to the left.

Note in particular that if the leading 1 of row $i$ occurs in the $k_i$-th entry, then $k_1 < \cdots < k_r$.

DEFINITION 1. The number $r$ is called the rank of $M$. If $M$ is $m \times n$ then $r \le \min\{m, n\}$. If $r = \min\{m, n\}$ we say $M$ has maximal rank.

When $m \ge n$, reduced row echelon matrices of maximal rank are all of the form
\[
\begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ 0 & 0 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}
= \begin{pmatrix} I_n \\ 0 \end{pmatrix} \ \text{ for } m > n,
\qquad I_n \ \text{ for } m = n.
\]
In contrast, when $m < n$ there are many possibilities: a $3 \times 5$ row-reduced echelon matrix of maximal rank ($r = 3$), for instance, can take any of the forms
\[
\begin{pmatrix} 1 & 0 & 0 & \ast & \ast \\ 0 & 1 & 0 & \ast & \ast \\ 0 & 0 & 1 & \ast & \ast \end{pmatrix},\quad
\begin{pmatrix} 1 & 0 & \ast & 0 & \ast \\ 0 & 1 & \ast & 0 & \ast \\ 0 & 0 & 0 & 1 & \ast \end{pmatrix},\quad
\begin{pmatrix} 1 & 0 & \ast & \ast & 0 \\ 0 & 1 & \ast & \ast & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix},
\]
\[
\begin{pmatrix} 1 & \ast & 0 & 0 & \ast \\ 0 & 0 & 1 & 0 & \ast \\ 0 & 0 & 0 & 1 & \ast \end{pmatrix},\quad
\begin{pmatrix} 1 & \ast & 0 & \ast & 0 \\ 0 & 0 & 1 & \ast & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix},\quad
\begin{pmatrix} 1 & \ast & \ast & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix},
\]
\[
\begin{pmatrix} 0 & 1 & 0 & 0 & \ast \\ 0 & 0 & 1 & 0 & \ast \\ 0 & 0 & 0 & 1 & \ast \end{pmatrix},\quad
\begin{pmatrix} 0 & 1 & 0 & \ast & 0 \\ 0 & 0 & 1 & \ast & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix},\quad
\begin{pmatrix} 0 & 1 & \ast & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix},
\]
or
\[
\begin{pmatrix} 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix},
\]
where the $\ast$ stands for an arbitrary value (one form for each choice of pivot columns $k_1 < k_2 < k_3$).

We now show that any matrix is row-equivalent to a reduced row echelon matrix, by describing a row-reduction algorithm that always terminates in such a matrix. Begin with an (arbitrary) $m \times n$ matrix $A$, and place your imaginary cursor at $A_{11}$, its upper left-hand entry. Now

- move the cursor to the right (if necessary) until it reaches a column with a nonzero entry at or below the cursor; if the cursor entry $= 0$, swap the cursor row with the first row below having a nonzero entry in the cursor column;
- divide the cursor row by the cursor entry (to make the cursor entry $= 1$);
- eliminate all other entries in the cursor column (by adding suitable multiples of the cursor row to all the other rows);
- move the cursor down and to the right, go back to the first step, and repeat until we reach a reduced row echelon matrix $\mathrm{rref}(A)$.

Since this procedure is completely deterministic, it yields a well-defined map $M_{m \times n}(\mathbb{R}) \to M_{m \times n}(\mathbb{R})$ sending $A \mapsto \mathrm{rref}(A)$.
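The cursor procedure above translates almost line-for-line into code. Here is a minimal sketch in Python using exact rational arithmetic (the function name `rref` and the sample matrix are my own illustration, not from the notes):

```python
from fractions import Fraction

def rref(A):
    """Row-reduce A by the cursor procedure: find a pivot at or below the
    cursor, swap it up, scale it to a leading 1, clear its column, then
    move the cursor down and to the right."""
    M = [[Fraction(x) for x in row] for row in A]
    m, n = len(M), len(M[0])
    i = 0                                   # cursor row
    for j in range(n):                      # cursor column sweeps rightward
        # first row at or below the cursor with a nonzero entry in column j
        p = next((r for r in range(i, m) if M[r][j] != 0), None)
        if p is None:
            continue                        # column is zero below: move right
        M[i], M[p] = M[p], M[i]             # swap the pivot row up
        piv = M[i][j]
        M[i] = [x / piv for x in M[i]]      # make the cursor entry = 1
        for r in range(m):                  # eliminate the rest of column j
            if r != i and M[r][j] != 0:
                M[r] = [a - M[r][j] * b for a, b in zip(M[r], M[i])]
        i += 1                              # cursor moves down
        if i == m:
            break
    return M

A = [[0, 0, 1, 1, 1],
     [2, 4, -2, 4, 2],
     [2, 4, -3, 3, 3],
     [3, 6, -6, 3, 6]]
R = rref(A)
print([[int(x) for x in row] for row in R])
# → [[1, 2, 0, 3, 0], [0, 0, 1, 1, 0], [0, 0, 0, 0, 1], [0, 0, 0, 0, 0]]
```

Using `Fraction` rather than floats keeps the arithmetic exact, so the output really is a reduced row echelon matrix and not a rounded approximation of one.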

$\mathrm{rref}(A)$, then, is simply defined as the outcome of this particular algorithm applied to $A$. We have not proved that $A$ is row-equivalent to a unique row-reduced echelon matrix (that is true, but will be proved somewhat later in these notes).

EXAMPLE 2.
\[
\mathrm{rref}\begin{pmatrix} 0 & 0 & 1 & 1 & 1 \\ 2 & 4 & -2 & 4 & 2 \\ 2 & 4 & -3 & 3 & 3 \\ 3 & 6 & -6 & 3 & 6 \end{pmatrix}
= \begin{pmatrix} 1 & 2 & 0 & 3 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}.
\]

DEFINITION 3. For an arbitrary matrix $A$, we define the rank by
\[ \mathrm{rank}(A) := \mathrm{rank}(\mathrm{rref}(A)). \]
(In the example, this is $3$.)

Notice that each of the bullets is accomplished by elementary row operations, which is to say via left-multiplication by an (invertible) elementary matrix. Given $A$ as input, the above algorithm therefore spits out two well-defined matrices: namely, $\mathrm{rref}(A)$ and the (invertible) product $E(A) = E_N \cdots E_1$ of the row operations. These matrices are related by
\[ \mathrm{rref}(A) = E(A) \cdot A, \]
which may be viewed as a decomposition
\[ A = E(A)^{-1} \cdot \mathrm{rref}(A) \]
into $\{\text{invertible } (m \times m)\} \cdot \{\text{reduced row echelon } (m \times n)\}$.

Solving homogeneous equations.
\[ A\vec{x} = \vec{0} \implies \vec{0} = E(A) \cdot A\vec{x} = \mathrm{rref}(A)\,\vec{x}; \]
and conversely
\[ \mathrm{rref}(A)\,\vec{x} = \vec{0} \implies \vec{0} = E(A)^{-1} \cdot \mathrm{rref}(A)\,\vec{x} = A\vec{x}. \]
So the solutions (in $\vec{x}$) to $\mathrm{rref}(A)\,\vec{x} = \vec{0}$ and $A\vec{x} = \vec{0}$ are the same. Now $\mathrm{rref}(A)\,\vec{x} = \vec{0}$ is easily solved. Suppose
\[ \mathrm{rref}(A) = \begin{pmatrix} 1 & 2 & 0 & 3 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}; \]
then in solving $\mathrm{rref}(A)\,\vec{x} = \vec{0}$ we may choose the variables $x_2$ and $x_4$ freely, while the variables in the pivot columns (with leading 1's) are determined by those choices:
\[ x_1 = -2x_2 - 3x_4, \qquad x_3 = -x_4, \qquad x_5 = 0. \]
The upshot is that for $A\vec{x} = \vec{0}$ (equivalently $\mathrm{rref}(A)\,\vec{x} = \vec{0}$) to have nontrivial solutions (i.e. solutions other than $\vec{x} = \vec{0}$), we've got to have columns without leading 1's. There are $n$ columns and $r$ leading 1's, so $r < n$ is what we need. Therefore
\[ m < n \implies r \le \min\{m, n\} = m < n \implies \exists \text{ nontrivial solutions}. \]
If $m = n$, then the only possible $\mathrm{rref}(A)$ with all columns filled up by leading 1's is the identity $I_n$, and so the condition for an interesting solution is for $\mathrm{rref}(A)$ not to be $I_n$: that is,
\[ \{\mathrm{rref}(A) = I_n\} \iff \{A\vec{x} = \vec{0} \text{ has only the trivial solution}\}. \]
Now
\[ \{\mathrm{rref}(A) = I_n\} \implies A = E(A)^{-1}\,\mathrm{rref}(A) = E(A)^{-1} \implies A \text{ is invertible} \implies \{A\vec{x} = \vec{0} \text{ has only the trivial solution}\}, \]
since we can multiply both sides by $A^{-1}$. So the following four items are equivalent for a square matrix $A$:

- $A\vec{x} = \vec{0}$ has only the trivial solution;
- $\mathrm{rref}(A) = I_n$;
- $E(A) = A^{-1}$;
- $A$ is invertible.

Now we could use these equivalences to see quickly that $A$ left-invertible $\implies$ $A$ invertible, but then it's difficult to see what is going on. So let's be more deliberate: Let $A$ be any $n \times n$ matrix with a left-inverse $L$, so that $L \cdot A = I_n$. Suppose $\vec{x}$ solves $A\vec{x} = \vec{0}$; then
\[ \vec{x} = I_n \vec{x} = (L \cdot A)\vec{x} = L(A\vec{x}) = \vec{0}. \]
Therefore $A\vec{x} = \vec{0}$ has only the trivial solution, and so does $\mathrm{rref}(A)\,\vec{x} = \vec{0}$ [since $\mathrm{rref}(A) = E(A) \cdot A$, where $E(A)$ is invertible]. But if $\vec{x} = \vec{0}$ is the only solution to $\mathrm{rref}(A)\,\vec{x} = \vec{0}$, then all columns of $\mathrm{rref}(A)$ must contain a leading 1. Therefore $\mathrm{rref}(A) = I_n$, and $A = E(A)^{-1}\,\mathrm{rref}(A) = E(A)^{-1}$ is invertible. This also establishes the claim from I.C that for a square matrix, left-invertibility implies right-invertibility and vice-versa.

Solving inhomogeneous equations and computing inverses. Consider an augmented matrix $(A \mid B)$; this is just a big $m \times (n_1 + n_2)$ matrix made up of an $m \times n_1$ block $A$ and an $m \times n_2$ block $B$. We

shall define $\mathrm{rref}(A \mid B)$ by performing the row-reduction algorithm on $A$ (as if to compute $\mathrm{rref}(A)$) and carrying the row operations across to $B$. (Warning: this is different from taking $\mathrm{rref}$ of the whole $m \times (n_1 + n_2)$ matrix!) This yields $(\mathrm{rref}(A) \mid E(A) \cdot B)$, since $E(A)$ operates on both blocks; we stop here rather than further reducing the right-hand part of the augmented matrix.

If $B$ is just the vector $\vec{y}$, then this gives a way of solving $A\vec{x} = \vec{y}$: invertibility of $E(A)$ $\implies$ the solutions of $A\vec{x} = \vec{y}$ and of $\mathrm{rref}(A)\,\vec{x} = E(A)\vec{y}$ coincide, and solutions to the latter are easily obtained. Note that regardless of the number of equations ($= m$), an inhomogeneous system may have no solutions.

For $A$ an $m \times n$ matrix, the columns of $E(A)$ are $E(A)\hat{e}_i$, $i = 1, \ldots, m$. So
\[ \mathrm{rref}(A \mid \hat{e}_i) = \big(\mathrm{rref}(A) \mid \{i\text{-th column of } E(A)\}\big), \]
and putting all the columns together,
\[ \mathrm{rref}(A \mid I_m) = \big(\mathrm{rref}(A) \mid E(A)\big). \]
For $A$ invertible ($m \times m$) [$\iff \mathrm{rref}(A) = I_m$], notice that $E(A) = A^{-1}$ and
\[ \mathrm{rref}(A \mid I_m) = (I_m \mid A^{-1}). \]
To make sure you understand this process, try using it to rederive Example I.C.1.

EXAMPLE 4. Given the inhomogeneous linear system
\[
\begin{aligned}
3x_1 - 6x_2 + 2x_3 - x_4 &= 1 \\
-2x_1 + 4x_2 + x_3 + 3x_4 &= 4 \\
x_3 + x_4 &= 2 \\
x_1 - 2x_2 + x_3 &= 1,
\end{aligned}
\]
we write the augmented matrix
\[
\left(\begin{array}{cccc|c} 3 & -6 & 2 & -1 & 1 \\ -2 & 4 & 1 & 3 & 4 \\ 0 & 0 & 1 & 1 & 2 \\ 1 & -2 & 1 & 0 & 1 \end{array}\right),
\]
then apply the rref algorithm to the $4 \times 4$ block (carrying the row operations over to the last column) to obtain
\[
\left(\begin{array}{cccc|c} 1 & -2 & 0 & -1 & -1 \\ 0 & 0 & 1 & 1 & 2 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right).
\]
Now (interpreting this as a linear system) write the pivot variables
\[ x_1 = 2x_2 + x_4 - 1, \qquad x_3 = 2 - x_4 \]
in terms of the free variables $x_2, x_4$, which then parametrize all the solutions. Note that if we were to replace the original third equation by $x_3 + x_4 = 3$, the system would be inconsistent.

REMARK 5. If you know how to do $\mathrm{rref}$, then you also know how to do $\mathrm{rcef}$ (reduced column echelon form). Flip the matrix, take $\mathrm{rref}$, and flip it back:
\[ \mathrm{rcef}(A) := {}^t\!\big(\mathrm{rref}({}^t\!A)\big) = {}^t\!\big(E({}^t\!A) \cdot {}^t\!A\big) = A \cdot {}^t\!E({}^t\!A), \]
where the column operations ${}^t\!E({}^t\!A)$ are invertible, and occur on the right.

Exercises

(1) Use the rref algorithm to prove that
\[ \begin{pmatrix} 1 & 0 & 0 \\ a & 1 & 0 \\ c & b & 1 \end{pmatrix} \]
is invertible, and to compute its inverse, for arbitrary $a, b, c$.

(2) Find all solutions of
\[
\begin{aligned}
2x_1 - 3x_2 - 7x_3 + 5x_4 + 2x_5 &= -2 \\
x_1 - 2x_2 - 4x_3 + 3x_4 + x_5 &= -2 \\
2x_1 - 4x_3 + 2x_4 + x_5 &= 3 \\
x_1 - 5x_2 - 7x_3 + 6x_4 + 2x_5 &= -7.
\end{aligned}
\]

What is the rank of the matrix $A$ in this case?

(3) Prove that if $A$ is an $m \times n$ matrix, $B$ is an $n \times m$ matrix, and $n < m$, then $AB$ is not invertible.
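The identity $\mathrm{rref}(A \mid I_m) = (\mathrm{rref}(A) \mid E(A))$ is also easy to experiment with on a machine. The sketch below (my own illustration; the helper names and the $2 \times 2$ test matrix are not from the notes) row-reduces $(A \mid I)$ with exact rationals and, when the left block comes out as $I$, reads off $A^{-1} = E(A)$ from the right block:

```python
from fractions import Fraction

def rref(A):
    """Cursor-style row reduction over the rationals, as in the text."""
    M = [[Fraction(x) for x in row] for row in A]
    m, n = len(M), len(M[0])
    i = 0
    for j in range(n):
        p = next((r for r in range(i, m) if M[r][j] != 0), None)
        if p is None:
            continue
        M[i], M[p] = M[p], M[i]
        piv = M[i][j]
        M[i] = [x / piv for x in M[i]]
        for r in range(m):
            if r != i and M[r][j] != 0:
                M[r] = [a - M[r][j] * b for a, b in zip(M[r], M[i])]
        i += 1
        if i == m:
            break
    return M

def inverse_via_rref(A):
    """Row-reduce the augmented matrix (A | I).  If the left block becomes
    the identity, the right block is E(A) = A^{-1}."""
    m = len(A)
    aug = [list(map(Fraction, row)) + [Fraction(int(i == j)) for j in range(m)]
           for i, row in enumerate(A)]
    R = rref(aug)
    identity = [[Fraction(int(i == j)) for j in range(m)] for i in range(m)]
    assert [row[:m] for row in R] == identity, "A is not invertible: rref(A) != I"
    return [row[m:] for row in R]

Ainv = inverse_via_rref([[1, 2], [3, 5]])
print([[int(x) for x in row] for row in Ainv])
# → [[-5, 2], [3, -1]]
```

Only the operations named in the text appear here; the row operations applied to the left block are automatically "carried across" to the right block because both sit in the same augmented rows.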