Row Reduced Echelon Form

Techniques for solving systems of linear equations lie at the heart of linear algebra. In high school we learn to solve systems with 2 or 3 variables using elimination and substitution of variables. In order to solve systems with a large number of variables we need to be more organized. The process of Gauss-Jordan Elimination gives us a systematic way of solving linear systems.

To solve a system of equations we should first drop as many unnecessary symbols as possible. This is done by constructing an augmented matrix.

Example:

    2x -  y + 3z = 1          [ 2 -1  3 : 1 ]
         5y - 6z = 0   ~~>    [ 0  5 -6 : 0 ]
     x      + 4z = 7          [ 1  0  4 : 7 ]

To solve our system we need to manipulate our equations. We will see that the standard manipulations correspond to changing the rows of our augmented matrix. In the end, it turns out that we need just 3 types of operations to solve any linear system. We call these elementary operations.

Definition: Elementary Row Operations

              Effect on the linear system:                 Effect on the matrix:
  Type I      Interchange equation i and equation j        Swap Row i and Row j
              (list the equations in a different order)
  Type II     Multiply both sides of equation i by a       Multiply Row i by c, where c ≠ 0
              nonzero scalar c
  Type III    Multiply both sides of equation i by c       Add c times Row i to Row j,
              and add to equation j                        where c is any scalar

If we can get matrix A from matrix B by performing a series of elementary row operations, then A and B are called row equivalent matrices.

Example: Type I -- swap rows 1 and 3:

    [ 2 -1  3 : 1 ]            [ 1  0  4 : 7 ]       x      + 4z = 7
    [ 0  5 -6 : 0 ]  R1<->R3   [ 0  5 -6 : 0 ]            5y - 6z = 0
    [ 1  0  4 : 7 ]            [ 2 -1  3 : 1 ]      2x -  y + 3z = 1

Example: Type II -- scale row 3 by 2:

    [ 2 -1  3 : 1 ]            [ 2 -1  3 :  1 ]     2x -  y + 3z =  1
    [ 0  5 -6 : 0 ]    2R3     [ 0  5 -6 :  0 ]          5y - 6z =  0
    [ 1  0  4 : 7 ]            [ 2  0  8 : 14 ]     2x      + 8z = 14
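As a minimal sketch (the function names here are my own, not from any library), the three operation types are easy to express on a list-of-lists augmented matrix:

```python
# A minimal sketch of the three elementary row operations, acting on the
# augmented matrix of 2x - y + 3z = 1, 5y - 6z = 0, x + 4z = 7.

A = [[2, -1, 3, 1],
     [0, 5, -6, 0],
     [1, 0, 4, 7]]

def swap_rows(M, i, j):
    """Type I: interchange row i and row j."""
    M[i], M[j] = M[j], M[i]

def scale_row(M, i, c):
    """Type II: multiply row i by a nonzero scalar c."""
    assert c != 0, "Type II needs a nonzero scalar (so it is reversible)"
    M[i] = [c * entry for entry in M[i]]

def add_multiple(M, i, j, c):
    """Type III: add c times row i to row j."""
    M[j] = [a + c * b for a, b in zip(M[j], M[i])]

scale_row(A, 2, 2)         # scale row 3 by 2: [1, 0, 4, 7] -> [2, 0, 8, 14]
print(A[2])                # [2, 0, 8, 14]
swap_rows(A, 0, 2)         # row 1 is now [2, 0, 8, 14]
add_multiple(A, 0, 2, -1)  # row 3 becomes [2,-1,3,1] - [2,0,8,14]
print(A[2])                # [0, -1, -5, -13]
```

Note that rows 0, 1, 2 in the code are rows 1, 2, 3 of the matrix; Python indexes from zero.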

Example: Type III -- add 3 times row 3 to row 2:

    [ 2 -1  3 : 1 ]            [ 2 -1  3 :  1 ]     2x -  y + 3z =  1
    [ 0  5 -6 : 0 ]  3R3+R2    [ 3  5  6 : 21 ]     3x + 5y + 6z = 21
    [ 1  0  4 : 7 ]            [ 1  0  4 :  7 ]      x      + 4z =  7

It is important to notice several things about these operations. First, they are all reversible (that's why we want c ≠ 0 in Type II operations); in fact, the inverse of a Type X operation is another Type X operation. Next, these operations don't affect the set of solutions of the system; that is, row equivalent matrices represent systems with the same set of solutions. Finally, these are row operations: columns never interact with each other. This last point is quite important, as it will allow us to check our work and later allow us to find bases for subspaces associated with matrices (see the Linear Correspondence between columns).

Doing operations blindly probably won't get us anywhere. Instead we will choose our operations carefully so that we head towards some shape of equations which will let us read off the set of solutions. Thus the next few definitions.

Definition: A matrix is in Row Echelon Form (or REF) if...

- Each non-zero row is above all zero rows; that is, zero rows are pushed to the bottom.
- The leading entry of a row is strictly to the right of the leading entries of the rows above. (The leftmost non-zero entry of a row is called its leading entry.)

If in addition...

- Each leading entry is 1. (Note: our textbook says this is a requirement of REF.)
- Only zeros appear above (and below) a leading entry of a row.

then the matrix is in reduced row echelon form (or RREF).

Example: The matrix A (below) is in REF but is not reduced. The matrix B is in RREF.

    A = [ 1 2 5 ]        B = [ 1 0 5 ]
        [ 0 3 6 ]            [ 0 1 6 ]
        [ 0 0 0 ]            [ 0 0 0 ]

Gauss-Jordan Elimination is an algorithm which, given a matrix, returns a row equivalent matrix in reduced row echelon form (RREF). We first perform a forward pass:

1. Determine the leftmost non-zero column. This is a pivot column, and its topmost (unignored) entry sits in a pivot position.
2. If 0 is in this pivot position, swap an (unignored) row with the topmost row (use a Type I operation) so that there is a non-zero entry in the pivot position.

3. Add appropriate multiples of the topmost (unignored) row to the rows beneath it so that only 0 appears below the pivot (use several Type III operations).

4. Ignore the topmost (unignored) row. If any non-zero rows remain, go to step 1.
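The forward-pass steps above can be sketched directly in code. This is my own minimal Python rendering of the algorithm (using exact `Fraction` arithmetic so no rounding creeps in), not an official implementation from these notes:

```python
from fractions import Fraction

def forward_pass(M):
    """Forward pass as described above: find the leftmost non-zero column,
    swap a non-zero entry into the pivot position if needed (Type I),
    clear everything below the pivot (Type III), then ignore that row."""
    M = [[Fraction(x) for x in row] for row in M]
    top = 0                                    # first row not yet "ignored"
    for col in range(len(M[0])):
        rows = [r for r in range(top, len(M)) if M[r][col] != 0]
        if not rows:
            continue                           # no pivot in this column
        M[top], M[rows[0]] = M[rows[0]], M[top]           # Type I swap
        for r in range(top + 1, len(M)):                  # clear below pivot
            c = M[r][col] / M[top][col]
            M[r] = [a - c * b for a, b in zip(M[r], M[top])]
        top += 1                               # "ignore" the pivot row
        if top == len(M):
            break
    return M

# for instance, the augmented matrix of x + 2y = 3, 2x + 2y = 2:
ref = forward_pass([[1, 2, 3],
                    [2, 2, 2]])
print([[int(v) for v in row] for row in ref])   # [[1, 2, 3], [0, -2, -4]]
```

The `int(...)` in the final line is only for tidy printing; internally every entry is a `Fraction`.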

The forward pass is now complete, and our matrix is in row echelon form (in my sense, not our textbook's sense). Sometimes the forward pass alone is referred to as Gaussian Elimination. However, we should be careful, since the term Gaussian Elimination more commonly refers to both the forward and backward passes. Now let's finish Gauss-Jordan Elimination by performing a backward pass:

1. If necessary, scale the rightmost unfinished pivot to 1 (use a Type II operation).

2. Add appropriate multiples of the current pivot's row to the rows above it so that only 0 appears above the current pivot (use several Type III operations).

3. The current pivot is now finished. If any unfinished pivots remain, go to step 1.

It should be fairly obvious that the entire Gauss-Jordan algorithm will terminate in finitely many steps. Also, only elementary row operations have been used, so we end up with a row equivalent matrix. A tedious, wordy, and unenlightening proof would show us that the resulting matrix is in reduced row echelon form (RREF).

Example: Let's solve the system

     x + 2y = 3
    2x + 2y = 2

    [ 1 2 : 3 ]  -2R1+R2   [ 1  2 :  3 ]
    [ 2 2 : 2 ]            [ 0 -2 : -4 ]

The first non-zero column is just the first column, so the upper left hand corner is a pivot position. This position already has a non-zero entry, so no swap is needed. The Type III operation "-2 times row 1 added to row 2" clears the only position below the pivot, so after one operation we are finished with this pivot and can ignore row 1.

Among the (unignored parts of the) columns, the leftmost non-zero column is the second column, so the -2 sits in a pivot position. Since it's non-zero, no swap is needed. Also, there's nothing below it, so no Type III operations are necessary. Thus we're done with this row and we can ignore it. Nothing's left, so we're done with the forward pass, and

    [ 1  2 :  3 ]
    [ 0 -2 : -4 ]

is in row echelon form. Next, we need to take the rightmost pivot (the -2), scale it to 1, and then clear everything above it.

    [ 1  2 :  3 ]  -1/2 R2   [ 1 2 : 3 ]  -2R2+R1   [ 1 0 : -1 ]
    [ 0 -2 : -4 ]            [ 0 1 : 2 ]            [ 0 1 :  2 ]

This finishes that pivot.
The next rightmost pivot is the 1 in the upper left hand corner. But it's already scaled to 1 and has nothing above it, so it's finished as well. That takes care of all of the pivots, so the backward pass is complete, leaving our matrix in reduced row echelon form. Finally, let's translate the RREF matrix back into a system of equations. The (new, equivalent) system is x = -1, y = 2. So the only solution for this system is x = -1 and y = 2.

Note: One can also solve a system quite easily once (just) the forward pass is complete. This is done using back substitution. Notice that the system after the forward pass was x + 2y = 3, -2y = -4. So we have -2y = -4, thus y = 2. Substituting this back into the first equation we get x + 2(2) = 3, so x = -1.
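Putting the forward and backward passes together gives a complete (if naive) Gauss-Jordan routine. Again, this is a sketch of my own with exact rational arithmetic, not the notes' official implementation:

```python
from fractions import Fraction

def rref(M):
    """Gauss-Jordan elimination: forward pass to a REF, then a backward pass
    (scale each pivot to 1 and clear above it) to reach the RREF.
    Returns the RREF and the list of pivot columns (0-indexed)."""
    M = [[Fraction(x) for x in row] for row in M]
    pivots = []                               # (row, col) of each pivot
    top = 0
    for col in range(len(M[0])):              # ---- forward pass ----
        pr = next((r for r in range(top, len(M)) if M[r][col] != 0), None)
        if pr is None:
            continue
        M[top], M[pr] = M[pr], M[top]                     # Type I
        for r in range(top + 1, len(M)):                  # Type III below
            c = M[r][col] / M[top][col]
            M[r] = [a - c * b for a, b in zip(M[r], M[top])]
        pivots.append((top, col))
        top += 1
        if top == len(M):
            break
    for row, col in reversed(pivots):         # ---- backward pass ----
        M[row] = [x / M[row][col] for x in M[row]]        # Type II: pivot -> 1
        for r in range(row):                              # Type III above
            c = M[r][col]
            M[r] = [a - c * b for a, b in zip(M[r], M[row])]
    return M, [c for _, c in pivots]

R, piv = rref([[1, 2, 3],
               [2, 2, 2]])
print([[int(v) for v in row] for row in R], piv)  # [[1, 0, -1], [0, 1, 2]] [0, 1]
```

Reading the result back as equations gives x = -1, y = 2, exactly as in the worked example.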

Example: Let's solve the system

    2x +  y +  z = 0
    4x + 2y + 2z = 1
     x      +  z = 3

    [ 2 1 1 : 0 ]  -2R1+R2     [ 2    1   1 : 0 ]            [ 2    1   1 : 0 ]
    [ 4 2 2 : 1 ]  -1/2R1+R3   [ 0    0   0 : 1 ]  R2<->R3   [ 0 -1/2 1/2 : 3 ]
    [ 1 0 1 : 3 ]              [ 0 -1/2 1/2 : 3 ]            [ 0    0   0 : 1 ]

After clearing below the first pivot, the topmost unignored entry of the next pivot column is 0, so we swap rows 2 and 3. The remaining row [ 0 0 0 : 1 ] has its leading entry in the last column, which becomes a pivot, and nothing is left. So the forward pass is complete and the matrix on the right is in REF. Now the backward pass:

    [ 2    1   1 : 0 ]            [ 2    1   1 : 0 ]          [ 2 1  1 : 0 ]
    [ 0 -1/2 1/2 : 3 ]  -3R3+R2   [ 0 -1/2 1/2 : 0 ]   -2R2   [ 0 1 -1 : 0 ]
    [ 0    0   0 : 1 ]            [ 0    0   0 : 1 ]          [ 0 0  0 : 1 ]

    -1R2+R1   [ 2 0  2 : 0 ]  1/2 R1   [ 1 0  1 : 0 ]
              [ 0 1 -1 : 0 ]           [ 0 1 -1 : 0 ]
              [ 0 0  0 : 1 ]           [ 0 0  0 : 1 ]

This finishes the backward pass, and our matrix is now in RREF. Our new system of equations is x + z = 0, y - z = 0, 0 = 1. Of course 0 ≠ 1, so this is an inconsistent system: it has no solutions.

Note: If our only goal was to solve this system, we could have stopped after the very first operation (row number 2 already said 0 = 1).

Example: Let's solve the system

     x + 2y + 3z =  3
    4x + 5y + 6z =  9
    7x + 8y + 9z = 15

    [ 1 2 3 :  3 ]  -4R1+R2   [ 1  2   3 :  3 ]            [ 1  2  3 :  3 ]
    [ 4 5 6 :  9 ]  -7R1+R3   [ 0 -3  -6 : -3 ]  -2R2+R3   [ 0 -3 -6 : -3 ]
    [ 7 8 9 : 15 ]            [ 0 -6 -12 : -6 ]            [ 0  0  0 :  0 ]

    -1/3 R2   [ 1 2 3 : 3 ]  -2R2+R1   [ 1 0 -1 : 1 ]
              [ 0 1 2 : 1 ]            [ 0 1  2 : 1 ]
              [ 0 0 0 : 0 ]            [ 0 0  0 : 0 ]

Now our matrix is in RREF. The new system of equations is x - z = 1, y + 2z = 1, 0 = 0. This is new: we don't have an equation of the form "z = ...". This is because z does not lie in a pivot column. So we can make z a free variable. Let's relabel it z = t. Then we have x - t = 1, y + 2t = 1, and z = t. So x = 1 + t, y = 1 - 2t, and z = t is a solution for any choice of t. In particular, x = 1, y = 1, and z = 0 is a solution. But so is x = 2, y = -1, z = 1. In fact, there are infinitely many solutions.

Note: A system of linear equations will always have either one solution, infinitely many solutions, or no solution at all.
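The trichotomy in the note above can be read off from the RREF mechanically: a pivot in the augmented column means no solutions; otherwise, fewer pivots than variables means free variables and infinitely many solutions. A small self-contained sketch (the helper names are my own):

```python
from fractions import Fraction

def rref(M):
    """Compact one-pass Gauss-Jordan sketch with exact rational arithmetic.
    Returns the RREF and the list of pivot columns (0-indexed)."""
    M = [[Fraction(x) for x in row] for row in M]
    pivots, top = [], 0
    for col in range(len(M[0])):
        pr = next((r for r in range(top, len(M)) if M[r][col] != 0), None)
        if pr is None:
            continue
        M[top], M[pr] = M[pr], M[top]
        for r in range(len(M)):
            if r != top and M[r][col] != 0:
                c = M[r][col] / M[top][col]
                M[r] = [a - c * b for a, b in zip(M[r], M[top])]
        M[top] = [entry / M[top][col] for entry in M[top]]
        pivots.append(col)
        top += 1
    return M, pivots

def classify(augmented, nvars):
    """Classify an augmented system by the pivots of its RREF."""
    _, piv = rref(augmented)
    if nvars in piv:                 # pivot in the augmented column: 0 = 1
        return "no solution"
    return "unique solution" if len(piv) == nvars else "infinitely many"

print(classify([[2, 1, 1, 0], [4, 2, 2, 1], [1, 0, 1, 3]], 3))   # no solution
print(classify([[1, 2, 3, 3], [4, 5, 6, 9], [7, 8, 9, 15]], 3))  # infinitely many
```

The two calls are the two systems just solved; a square system with a pivot in every variable column would report a unique solution.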

Multiple Systems: Gauss-Jordan can handle solving multiple systems at once, if these systems share the same coefficient matrix (the part of the matrix before the ":"s).

Suppose we wanted to solve both

     x + 2y = 3                x + 2y =  3
    4x + 5y = 6    and also   4x + 5y =  9
    7x + 8y = 9               7x + 8y = 15

These lead to the following augmented matrices:

    [ 1 2 : 3 ]        [ 1 2 :  3 ]
    [ 4 5 : 6 ]  and   [ 4 5 :  9 ]
    [ 7 8 : 9 ]        [ 7 8 : 15 ]

We can combine them together and get

    [ 1 2 : 3  3 ]
    [ 4 5 : 6  9 ]
    [ 7 8 : 9 15 ]

which we already know has the RREF

    [ 1 0 : -1 1 ]
    [ 0 1 :  2 1 ]
    [ 0 0 :  0 0 ]

(from the last example; only the ":"s have moved). This corresponds to the augmented matrices

    [ 1 0 : -1 ]        [ 1 0 : 1 ]
    [ 0 1 :  2 ]  and   [ 0 1 : 1 ]
    [ 0 0 :  0 ]        [ 0 0 : 0 ]

These in turn tell us that the first system's solution is x = -1, y = 2 and the second system's solution is x = 1 and y = 1. This works because we aren't mixing columns together (all operations are row operations). Also, notice that the same matrix can be interpreted in a number of ways. Before, we had a single system in 3 variables, and now we have 2 systems in 2 variables.

Homework Problems: For each of the following matrices, perform Gauss-Jordan elimination (carefully following the algorithm defined above). I have given the result of performing the forward pass to help you verify that you are on the right track. [Note: Each matrix has a unique RREF. However, REF is not unique. So if you do not follow my algorithm and instead either use the text's method or your own random assortment of operations, you will almost certainly get different REFs along the way to the RREF.] Once you have completed row reduction, identify the pivots and pivot columns. Finally, interpret your matrices as a system or collection of systems of equations and note the corresponding solutions.

1.  [ 1 2 :  5 ]  = forward pass =>  [ 1  2 :  5 ]
    [ 3 4 : 11 ]                     [ 0 -2 : -4 ]

2.  [ 0 0 2 4 ]   = forward pass =>  [ 1 2 3 4 ]
    [ 1 2 3 4 ]                      [ 0 0 2 4 ]
    [ 2 4 6 8 ]                      [ 0 0 0 0 ]

3.  [ 2 1 : 1 ]   = forward pass =>  [ 2   1 :   1 ]
    [ 1 1 : 1 ]                      [ 0 1/2 : 1/2 ]
    [ 4 3 : 2 ]                      [ 0   0 :  -1 ]

4.  [ 1 1 2 :  5 ]  = forward pass =>  [ 1 1 2 : 5 ]
    [ 2 2 4 : 10 ]                     [ 0 0 1 : 1 ]
    [ 1 1 3 :  6 ]                     [ 0 0 0 : 0 ]
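The multiple-systems trick above can be checked numerically: row-reduce once, then read one solution column per right-hand side. A short self-contained sketch (my own code, exact arithmetic):

```python
from fractions import Fraction

def rref(M):
    """Compact one-pass Gauss-Jordan sketch with exact rational arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    top = 0
    for col in range(len(M[0])):
        pr = next((r for r in range(top, len(M)) if M[r][col] != 0), None)
        if pr is None:
            continue
        M[top], M[pr] = M[pr], M[top]
        for r in range(len(M)):
            if r != top and M[r][col] != 0:
                c = M[r][col] / M[top][col]
                M[r] = [a - c * b for a, b in zip(M[r], M[top])]
        M[top] = [entry / M[top][col] for entry in M[top]]
        top += 1
    return M

# two systems sharing the coefficient matrix [[1,2],[4,5],[7,8]]:
# right-hand sides (3, 6, 9) and (3, 9, 15), solved side by side.
R = rref([[1, 2, 3, 3],
          [4, 5, 6, 9],
          [7, 8, 9, 15]])
first = (int(R[0][2]), int(R[1][2]))
second = (int(R[0][3]), int(R[1][3]))
print(first, second)   # (-1, 2) (1, 1)
```

Each extra right-hand side costs only one more column, not a fresh elimination.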

The Linear Correspondence Between Columns

It will be extremely useful to notice that when we perform elementary row operations on a matrix A, the linear relationships between the columns of A do not change. Specifically...

Suppose that a, b, and c are columns of A, and suppose that we perform some row operation and get A' whose corresponding columns are a', b', and c'. Then it turns out that:

    xa + yb + zc = 0   if and only if   xa' + yb' + zc' = 0

for any real numbers x, y, and z. This also holds for bigger or smaller collections of columns. Why? Essentially, since we are performing row operations, the relationships between columns are unaffected: we aren't mixing the columns together.

Example:

    [ 1 2  8 ]  -1R1+R2   [ 1 2  8 ]  -2R1+R3   [ 1 2 8 ]  -2R2+R1   [ 1 0 2 ]
    [ 1 3 11 ]            [ 0 1  3 ]            [ 0 1 3 ]            [ 0 1 3 ]
    [ 2 4 16 ]            [ 2 4 16 ]            [ 0 0 0 ]            [ 0 0 0 ]

In the RREF (on the far right) we have that the final column is 2 times the first column plus 3 times the second column:

    [ 2 ]     [ 1 ]     [ 0 ]
    [ 3 ] = 2 [ 0 ] + 3 [ 1 ]
    [ 0 ]     [ 0 ]     [ 0 ]

So this must be true for all of the other matrices as well. In particular, we have that the third column of the original matrix is 2 times the first column plus 3 times the second column:

    [  8 ]     [ 1 ]     [ 2 ]
    [ 11 ] = 2 [ 1 ] + 3 [ 3 ]
    [ 16 ]     [ 2 ]     [ 4 ]

Why is this important? Well, first, Gauss-Jordan elimination typically requires a lot of computation. This correspondence gives you a way to check that you have row-reduced correctly! In some cases, the linear relations among columns are obvious and we can just write down the RREF without performing Gaussian elimination at all! We will see other applications of this correspondence later in the course.

Example: Let A be a 3x3 matrix whose RREF is

    [ 1 0 -1 ]
    [ 0 1  2 ]
    [ 0 0  0 ]

Suppose that we know the first column of A is (1, 4, 7) and the second is (2, 5, 8). Then, by the linear correspondence, we know that the third column must be

      [ 1 ]     [ 2 ]   [ 3 ]
    - [ 4 ] + 2 [ 5 ] = [ 6 ]
      [ 7 ]     [ 8 ]   [ 9 ]

Therefore, the mystery matrix is

        [ 1 2 3 ]
    A = [ 4 5 6 ]
        [ 7 8 9 ]
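The correspondence is easy to test numerically. Here is a small sketch (my own code) that row-reduces the example matrix and checks that the relation read off the RREF also holds among the original columns:

```python
from fractions import Fraction

def rref(M):
    """Compact one-pass Gauss-Jordan sketch with exact rational arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    top = 0
    for col in range(len(M[0])):
        pr = next((r for r in range(top, len(M)) if M[r][col] != 0), None)
        if pr is None:
            continue
        M[top], M[pr] = M[pr], M[top]
        for r in range(len(M)):
            if r != top and M[r][col] != 0:
                c = M[r][col] / M[top][col]
                M[r] = [a - c * b for a, b in zip(M[r], M[top])]
        M[top] = [entry / M[top][col] for entry in M[top]]
        top += 1
    return M

A = [[1, 2, 8],
     [1, 3, 11],
     [2, 4, 16]]
R = rref(A)                    # rref works on a copy; A is untouched
x, y = R[0][2], R[1][2]        # RREF says: column 3 = x*column 1 + y*column 2
# the very same combination of the ORIGINAL columns 1 and 2:
rebuilt = [int(x * row[0] + y * row[1]) for row in A]
print(x, y, rebuilt)           # 2 3 [8, 11, 16]
```

The rebuilt vector matches the third column of the original matrix, as the correspondence predicts.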

Example: Let B be a 3x5 matrix whose RREF is

    [ 0 1 3 0 2 ]
    [ 0 0 0 1 1 ]
    [ 0 0 0 0 0 ]

Suppose that we know the first pivot column (i.e. the second column) of B is (1, 2, 0) and the second pivot column (i.e. the fourth column) is (2, 1, 5). Then, by the linear correspondence, we know that the third and fifth columns must be

      [ 1 ]   [ 3 ]             [ 1 ]   [ 2 ]   [ 4 ]
    3 [ 2 ] = [ 6 ]   and     2 [ 2 ] + [ 1 ] = [ 5 ]
      [ 0 ]   [ 0 ]             [ 0 ]   [ 5 ]   [ 5 ]

Therefore, the mystery matrix is

        [ 0 1 3 2 4 ]
    B = [ 0 2 6 1 5 ]
        [ 0 0 0 5 5 ]

Homework Problems:

5. Verify that the linear correspondence holds between each matrix, its REF, and its RREF in the previous homework problems and examples.

6. I just finished row reducing the matrix

    [ 1 2  9 ]             [ 1 0  2 ]
    [ 0 1  2 ]   and got   [ 0 1 -1 ]
    [ 1 3 11 ]             [ 0 0  0 ]

Something's wrong. Just using the linear correspondence, explain how I know something's wrong, and then find the real RREF (without row reducing).

7. A certain matrix has the RREF of

    [ 1 2 0 -1 2 ]
    [ 0 0 1  1 3 ]
    [ 0 0 0  0 0 ]

The first column of the matrix is (1, 3, 0) and the third column is (2, 4, 1). Find the matrix.

8. A certain matrix has the RREF of

    [ 1 2 0 3 4 ]
    [ 0 0 1 2 1 ]
    [ 0 0 0 0 0 ]

The second column of the matrix is (2, 4, 6) and the last column is (12, 8, 16). Find the matrix.

9. A certain matrix has the RREF of

    [ 1 5 0 2 0 1 ]
    [ 0 0 1 3 0 2 ]
    [ 0 0 0 0 1 3 ]

The first pivot column of the matrix is (1, 2, 0), the second pivot column is (2, 1, 3), and the third pivot column is (1, 4, 2). Find the matrix.

10. 2x2 matrices can row reduce to either

    [ 0 0 ]    [ 1 0 ]    [ 0 1 ]        [ 1 c ]
    [ 0 0 ] ,  [ 0 1 ] ,  [ 0 0 ] ,  or  [ 0 0 ]

for some c in R. What can be said about the original matrices in each of these cases?
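Mystery-matrix problems of this kind all follow one recipe, which can be sketched as code: column j of the original matrix is the combination of the known pivot columns whose weights are the top entries of column j of the RREF. (The helper below is my own; it reconstructs the 3x3 example above.)

```python
def from_rref(R, pivot_cols):
    """Rebuild a matrix from its RREF plus its (known) pivot columns.
    By the linear correspondence, column j of the original is
    sum_i R[i][j] * (i-th pivot column)."""
    m = len(pivot_cols[0])   # number of rows of the original matrix
    n = len(R[0])            # number of columns
    k = len(pivot_cols)      # number of pivots (= non-zero rows of R)
    return [[sum(R[i][j] * pivot_cols[i][r] for i in range(k))
             for j in range(n)]
            for r in range(m)]

R = [[1, 0, -1],
     [0, 1, 2],
     [0, 0, 0]]
A = from_rref(R, [[1, 4, 7], [2, 5, 8]])
print(A)    # [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```

Each pivot column here is given as a list of its entries, top to bottom; the same helper handles wider matrices with more pivots.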

A Few Answers:

5. If you'd like a particular example worked out, just ask me.

6. According to the linear correspondence, the final column of my matrix should be 2 times the first pivot column minus the second pivot column. However,

      [ 1 ]   [ 2 ]   [  0 ]
    2 [ 0 ] - [ 1 ] = [ -1 ]
      [ 1 ]   [ 3 ]   [ -1 ]

which does not match the third column of my original matrix. So the RREF must be wrong. Let's find a and b so that a times column 1 plus b times column 2 is column 3. In the final column of the original matrix, the second entry is 2; since the second entry of column 1 is 0 and the second entry of column 2 is 1, we must have b = 2. Thus a(column 1) = (column 3) - 2(column 2). Subtracting those columns, we get (5, 0, 5). This is 5 times the first column, so a = 5. Therefore, the correct RREF is

    [ 1 0 5 ]
    [ 0 1 2 ]
    [ 0 0 0 ]

7. According to the RREF, the second column is 2 times the first column, the fourth column is obtained by subtracting the first column from the third column (i.e. the second pivot column), and the last column is twice the first column plus 3 times the third column. This gives us

    [ 1 2 2 1  8 ]
    [ 3 6 4 1 18 ]
    [ 0 0 1 1  3 ]

8. According to the RREF, the second column is twice the first column. Thus the first column must be (1, 2, 3). The last column is 4 times the first column plus the second pivot column (i.e. column 3). Thus the second pivot column must be

    [ 12 ]     [ 1 ]   [ 8 ]
    [  8 ] - 4 [ 2 ] = [ 0 ]
    [ 16 ]     [ 3 ]   [ 4 ]

And finally, the fourth column is 3 times the first column plus 2 times the third column. So we get

    [ 1 2 8 19 12 ]
    [ 2 4 0  6  8 ]
    [ 3 6 4 17 16 ]

9. The second column is 5 times the first, the fourth column is 2 times column 1 plus 3 times column 3 (the second pivot column), and the sixth column is 1 times the first column plus 2 times the third column plus 3 times the fifth column (the third pivot column). Thus we get

    [ 1  5 2 8 1  8 ]
    [ 2 10 1 7 4 16 ]
    [ 0  0 3 9 2 12 ]

10. If your RREF is the zero matrix, you must have started with the zero matrix (the result of doing or undoing any row operation on a zero matrix yields a zero matrix). If your RREF is the identity matrix, i.e.

    [ 1 0 ]
    [ 0 1 ]

then the linear correspondence guarantees that both columns of the original matrix are non-zero and they must not be multiples of each other.
If your RREF is the third option, then the first column of your original matrix was all zeros and the second column was not made entirely of zeros. Finally, in the last case, the second column of the original matrix must be c times the first column (that is, the second column is a multiple of the first column).