MATH 320, WEEK 6: Linear Systems, Gaussian Elimination, Coefficient Matrices

We will now switch gears and focus on a branch of mathematics known as linear algebra. There are a few notes worth making before beginning:

Linear algebra is a branch of mathematics distinct from differential equations; in fact, there are no derivatives at all! We will see later in the course that the applications of linear algebra to the study of differential equations are plentiful, but we can be forgiven if we do not see the connections immediately. For now, we will treat the subject matter as its own distinct entity.

It is no coincidence that we have focused much attention on linear differential equations and are now transitioning into the study of linear algebra. The studies over the next five to six weeks will allow us to answer questions about general-order linear differential equations, i.e. not just the first-order linear differential equations which could be solved by the trick of finding an integrating factor.

While linear algebra is very important in the study of linear (and nonlinear!) differential equations, its applications extend far beyond that into areas such as statistics, economics, computer science, and control theory: pretty well anywhere there is an equation to be solved. It is one of the foundational branches of mathematics; it simply cannot be escaped.

To motivate the study of linear algebra, consider being asked to solve the system of linear equations

   x - y = 4
   2x + 3y = 3.

This is a linear system of two equations in two unknowns (x and y). By a solution to this system, we mean a pair of values x and y which satisfy both equations simultaneously. How could we handle something like this? Our first observation is that, just like with differential equations, it is exceptionally easy to verify whether something is a solution to a given

system of equations: all we have to do is plug the proposed solution into the given equations. For instance, suppose we have the proposed solutions (a) (x, y) = (1, 1), (b) (x, y) = (4, 0), and (c) (x, y) = (3, -1).

We can see easily that the first proposed solution is not a solution, since x = 1, y = 1 does not satisfy either equation. For the second proposed solution, we can see that (4) - (0) = 4, so that the first equation is satisfied, but it is not a solution because 2(4) + 3(0) = 8 ≠ 3. Only the third proposal works, since (3) - (-1) = 4 and 2(3) + 3(-1) = 3. In fact, this is the unique solution of the set of equations.

If we were asked to solve this without knowing the solution, we would not find ourselves particularly overwhelmed. We could simply solve for x (or y) in one of the equations, and then substitute this into the other equation. This would solve for one of the variables, and the remaining equation would allow us to solve for the other. For this example, we have

   x - y = 4   =>   x = 4 + y

so that

   2x + 3y = 3   =>   2(4 + y) + 3y = 3   =>   8 + 2y + 3y = 3   =>   y = -1.

Back substitution yields

   x = 4 + y = 4 + (-1) = 3.

Remark: It is worth taking a step back and considering what it means for a system of equations to be linear. It is analogous to what we mean by a graph being linear (a line, i.e. y = mx + b) or a differential equation being linear. We simply mean that the unsolved variables of interest appear in their own terms (i.e. are separated by addition or subtraction) and are not modified by anything more complicated than a multiplicative constant. Examples of terms which are nonlinear are:

1. x^2
2. sin(x)
3. 1/x

or even terms involving more than one unsolved variable, such as xy or y/z. These terms will be called nonlinear, just as they would have been if they involved the unknown function y(x) (or any of its derivatives) during our differential

equations portion of the course. Just as in the case of differential equations, nonlinear equations will be much harder to solve than linear ones. For instance, consider being asked to solve the following nonlinear equation in a single variable:

   e^(-x) = x.

Simply graphing the curves y = e^(-x) and y = x shows us that there must be a solution to the expression, but what is it? It certainly cannot be found by rearranging the expression into the form "x equals something", since no algebraic rearrangement permits us to isolate x. The best method we have (and what your computer uses) is numerical approximation, using something such as Newton's method. We will see that with linear systems such problems simply do not arise. Through our study of linear algebra, we will be able to develop methods which allow us to firmly answer such questions for linear systems, and to answer many other questions about these systems as well.

1 Gaussian Elimination

Now consider a more complicated system of equations. Suppose we are asked to solve the following system of three equations in three unknowns:

   x - 2y + z = 4
   -3x + y + 2z = 13
   x + y - z = -6.

We could still (and always will be able to) follow our earlier intuition of solving for one variable at a time, and then using back substitution. That is to say, we could follow the following algorithm:

1. Solve for x (in terms of y and z) in the first equation.
2. Substitute this result into the second equation.
3. Solve for y (in terms of z) in the second equation.
4. Substitute this result into the third equation.
5. Solve for z.
6. Substitute z back in to solve for y.
7. Substitute z and y back in to solve for x.
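The seven steps can be traced concretely. The following is a minimal sketch in Python (not from the notes), carrying out each substitution for the 3-by-3 system with the intermediate algebra recorded in the comments:

```python
# A minimal sketch (not from the notes) tracing the seven substitution steps
# for the system
#    x - 2y + z  =  4
#   -3x + y + 2z = 13
#    x + y - z   = -6

# Steps 1-2: from equation 1, x = 4 + 2y - z; substituting into equation 2
# gives -3(4 + 2y - z) + y + 2z = 13, i.e. -5y + 5z = 25.
# Step 3: solving for y in terms of z gives y = z - 5.
# Step 4: substituting x and y into equation 3 gives
# (4 + 2(z - 5) - z) + (z - 5) - z = -6, i.e. z - 11 = -6.

z = 5.0            # Step 5: solve for z
y = z - 5          # Step 6: back substitute to get y
x = 4 + 2*y - z    # Step 7: back substitute to get x

print((x, y, z))   # (-1.0, 0.0, 5.0)
```

The code simply records the hand computation; nothing here is automated yet, which is exactly the tedium the matrix short-hand below is meant to remove.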

This method would work, but would take a little bit of time. We would like to develop a short-hand method for going through this process which requires less writing, like synthetic division for long division of polynomials, or the u, dv table for integration by parts. Our primary observation is that when we solve for x in the first equation and substitute it into the second, what we are really doing is eliminating x from the second equation. This is not the only avenue available to us for accomplishing this task. We might notice that we could simply add three times the first equation to the second equation and get

     3(x - 2y + z) = 3(4)
   + (-3x + y + 2z) = (13)
   -----------------------
     -5y + 5z = 25.

This equation has the same solution set as the original set: we have not changed anything. We can check that we could have obtained the same equation by solving for x directly in the first equation and substituting into the second. It is the same process! If we perform this process of elimination on the third equation as well, we have

     (x - 2y + z) = (4)
   + (-1)(x + y - z) = (-1)(-6)
   ----------------------------
     -3y + 2z = 10.

Replacing the corresponding original equations with these new (equivalent) ones, we have the new system

   x - 2y + z = 4
   -5y + 5z = 25
   -3y + 2z = 10.

We notice that the last two equations represent a system of two equations in two unknowns, like we have already dealt with. We will, however, continue with our current intuition. We want to eliminate the variable y from the last equation by finding multiples of the last two equations which eliminate

it. It is more complicated than in the previous example, but we have

     3(-5y + 5z) = 3(25)
   + (-5)(-3y + 2z) = (-5)(10)
   ---------------------------
     5z = 25.

This clearly implies that z = 5. We can substitute this back into -3y + 2z = 10 to get

   -3y + 2(5) = 10   =>   y = 0.

We then have

   x - 2(0) + (5) = 4   =>   x = -1.

It follows that the solution is (x, y, z) = (-1, 0, 5).

This certainly does not seem like less work, but consider the following observation: at each one of these steps, the basic structure of the equations has remained the same. That is to say, at each step we have three equations in three unknowns (x, y, and z). The general structure for such a system is

   a1 x + a2 y + a3 z = a4
   b1 x + b2 y + b3 z = b4
   c1 x + c2 y + c3 z = c4.

We have performed a number of operations which have changed the individual equations, but the structure has not changed; only the coefficients of x, y, and z have. If we could find a short-hand way to represent such systems without having to write x, y, and z at every step, we would save ourselves a lot of writing! To that end, we define the coefficient matrix M to be the grid (gridlines not shown!) with the coefficients of the linear system in the appropriate boxes, i.e.

       [ a1  a2  a3  a4 ]
   M = [ b1  b2  b3  b4 ].
       [ c1  c2  c3  c4 ]

The general idea behind this representation of a linear system is that we will be able to carry out the same intuitive steps outlined above without writing x, y, and z at every step. Let's formalize the operations we just used to find the solution (x, y, z) = (-1, 0, 5) and systemize the procedure. We are allowed to perform the following elementary row operations on the coefficient matrix M:

1. Interchange any two rows Ri and Rj.
2. Scale a row Ri by a non-zero constant.

3. Add two rows Ri and Rj to form a new row.

The purpose of formally stating these operations is that we can perform them on the coefficient matrix to eliminate variables from specific lines without changing the solution set of the corresponding linear system of equations. It is very important that we apply these operations properly and recognize how they correspond to the underlying linear system; otherwise, we will end up with an incorrect answer.

Let's apply these operations to the previous example, with the ultimate goal of eliminating x from the second and third lines, and then eliminating y from the third line. We have

   [  1  -2   1   4 ]                 [ 1  -2   1   4 ]
   [ -3   1   2  13 ]  R2 = 3R1+R2    [ 0  -5   5  25 ]
   [  1   1  -1  -6 ]  R3 = R1-R3     [ 0  -3   2  10 ]

                       R3 = 3R2-5R3   [ 1  -2   1   4 ]
                                      [ 0  -5   5  25 ]
                                      [ 0   0   5  25 ].

We recall that the last line corresponds to (0)x + (0)y + 5(z) = 25, so that we have z = 5. We could back substitute this to get our previous solution. This process is called Gaussian elimination, and the matrix form we have obtained is called row echelon form. In order to be in row echelon form, we need to have that:

1. Every row of zeroes (if applicable) appears below every row with nonzero entries.
2. Every row Ri (except for the last row) is followed by a row Ri+1 which satisfies the following: if row Ri has zeroes in its first n columns, then row Ri+1 has zeroes in (at least) its first n + 1 columns.

It should be obvious that this corresponds to the structure we expect from eliminating variables. If we eliminate x from all but the first equation, the resulting coefficient matrix will have zeroes in the first column of every row except the first.

Let's continue this example a little further, though. It turns out that the back substitution steps can also be carried out by using elementary row operations: by eliminating the variable z from the first two equations, and then eliminating y from the first. We have

   R3 = (1/5)R3    [ 1  -2   1   4 ]
   R2 = R2-5R3     [ 0  -5   0   0 ]
                   [ 0   0   1   5 ]

   R2 = (-1/5)R2   [ 1  -2   1   4 ]
                   [ 0   1   0   0 ]
                   [ 0   0   1   5 ]

   R1 = R1+2R2-R3  [ 1   0   0  -1 ]
                   [ 0   1   0   0 ]
                   [ 0   0   1   5 ].

This process is called Gauss-Jordan elimination, and the matrix form we have obtained is called row-reduced echelon form. In order to be in row-reduced echelon form, we need to have that:

1. The matrix is in row echelon form.
2. The leading (i.e. first) nonzero coefficient in each row is a one.
3. In each column which has a leading one, all other coefficients are zero.

The advantage of having a matrix in row-reduced echelon form should be obvious. The underlying system of equations corresponding to the final matrix form is

   (1)x + (0)y + (0)z = -1
   (0)x + (1)y + (0)z = 0
   (0)x + (0)y + (1)z = 5.

In other words, it directly gives the solution (x, y, z) = (-1, 0, 5).

2 Other Possibilities

So far, the examples we have seen have had a single solution. We have not given any consideration as to whether, given an arbitrary system of linear equations, we always end up with a single solution. Reconsider the previous example, with a slight modification to the last equation:

   x - 2y + z = 4
   -3x + y + 2z = 13
   x - z = -6.

As with the previous example, we set up our coefficient matrix and begin performing Gaussian elimination. We have:

   [  1  -2   1   4 ]
   [ -3   1   2  13 ]  R2 = 3R1+R2
   [  1   0  -1  -6 ]

   R3 = R1-R3     [ 1  -2   1   4 ]
                  [ 0  -5   5  25 ]
                  [ 0  -2   2  10 ]

   R3 = 2R2-5R3   [ 1  -2   1   4 ]
                  [ 0  -5   5  25 ]
                  [ 0   0   0   0 ].

Something appears to have gone terribly wrong. The last row corresponds to the equation 0 = 0, but this does not tell us anything meaningful at all. The remaining two equations are underdetermined: we have three variables to solve for, but only two meaningful equations with which to do so.

To answer what is going on in this example, let's consider some geometry in two dimensions. A linear system of two equations in two unknowns looks like

   a1 x + a2 y = a3
   b1 x + b2 y = b3.

We can rearrange these into two equations of the familiar form y = mx + b. That is to say, they are lines, and the solution corresponds to the intersection of these lines. It should not take much geometrical convincing that there are three possibilities:

1. The lines intersect at a unique point, i.e. there is a single solution.
2. The lines are parallel and do not intersect, i.e. there are no solutions.
3. The lines are parallel and overlap, i.e. there are an infinite number of solutions.

Although the situations are more difficult to visualize, this intuition generalizes to an arbitrary number of dimensions! It is always the case that a linear system of equations has either no solution, one solution, or an infinite number of solutions. It is impossible for a linear system of equations to have exactly two solutions, or twelve solutions, or a thousand solutions.

To see which case we are in for our previous example, let's consider the two remaining equations. We have

   x - 2y + z = 4
   -5y + 5z = 25.

Let's continue to put this into row-reduced echelon form. We have

   [ 1  -2   1   4 ]                  [ 1  -2   1   4 ]
   [ 0  -5   5  25 ]  R2 = (-1/5)R2   [ 0   1  -1  -5 ]
   [ 0   0   0   0 ]                  [ 0   0   0   0 ]

                      R1 = R1+2R2     [ 1   0  -1  -6 ]
                                      [ 0   1  -1  -5 ]
                                      [ 0   0   0   0 ].

This corresponds to the system

   x - z = -6
   y - z = -5.

It should not be hard to convince ourselves that this system has an infinite number of solutions. If we choose the parametrization z = t, we have the solution set given by

   x = -6 + t
   y = -5 + t
   z = t.

This was obtained by substituting z = t into the previous expressions and rearranging. What this tells us is that any value of t in the above expression corresponds to a solution of the system. For instance, if we select t = 0, we have the point (x, y, z) = (-6, -5, 0), which we can easily show satisfies the original system of equations. If we pick t = 3, we have the point (x, y, z) = (-3, -2, 3), which again can easily be seen to be a solution of the original system of equations. The really important point to recognize is that any value of t will work: there is an infinite number of solutions. This is the general case when systems are underdetermined, i.e. when there are more variables to solve for than equations to solve with.

To see what else can happen, consider the (again) slightly modified system of equations

   x - 2y + z = 4
   -3x + y + 2z = 13
   x - z = -5.

We set up our coefficient matrix and begin performing Gaussian elimination. We have:

   [  1  -2   1   4 ]                 [ 1  -2   1   4 ]
   [ -3   1   2  13 ]  R2 = 3R1+R2    [ 0  -5   5  25 ]
   [  1   0  -1  -5 ]  R3 = R1-R3     [ 0  -2   2   9 ]

                       R3 = 2R2-5R3   [ 1  -2   1   4 ]
                                      [ 0  -5   5  25 ]
                                      [ 0   0   0   5 ].
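As a quick check, the row operations just performed can be reproduced numerically; here is a minimal sketch using NumPy arrays (rows are 0-indexed in the code, so row 0 is R1 in the text):

```python
import numpy as np

# Reproducing the elimination above on the coefficient matrix of the system
#   x - 2y + z = 4,   -3x + y + 2z = 13,   x - z = -5.

M = np.array([[ 1.0, -2.0,  1.0,  4.0],
              [-3.0,  1.0,  2.0, 13.0],
              [ 1.0,  0.0, -1.0, -5.0]])

M[1] = 3 * M[0] + M[1]      # R2 = 3R1 + R2
M[2] = M[0] - M[2]          # R3 = R1 - R3
M[2] = 2 * M[1] - 5 * M[2]  # R3 = 2R2 - 5R3

print(M[2])  # last row: [0. 0. 0. 5.]
```

The last row has zero coefficients but a nonzero right-hand side, which is exactly the situation discussed next.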

This looks very similar to our previous example, and we might be tempted to continue on by parametrizing the first two equations. It would all be in vain, however. We recall that the last row means

   (0)x + (0)y + (0)z = 5.

In other words, the equation says 0 = 5, which is a mathematically nonsensical statement. It does not matter what the first two equations evaluate to: there can be no solution to this system of equations, because there is no solution to the last equation. We will say that the system is inconsistent, which means that it has no solution.

This is the third possible case, and the somewhat remarkable thing about linear systems is that this is all that can happen. We can either have:

1. A unique solution, in which case we can find it;
2. An infinite number of solutions, in which case we can parametrize the set of solutions; or
3. No solution, in which case we say the system is inconsistent.

There are no other cases to consider, and no other methods to learn. Performing Gauss-Jordan elimination to find the row-reduced echelon form is sufficient to determine which case we are in, and, if there are solutions, it will find them. All we need to do is practice!
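The three cases can also be detected computationally. The sketch below uses matrix rank (an equivalent criterion, not the row-by-row inspection used in these notes): the system is inconsistent exactly when the coefficient rows have lower rank than the augmented rows, and otherwise the number of free parameters is the number of unknowns minus the rank.

```python
import numpy as np

# A sketch classifying a linear system A x = b into the three cases by
# comparing ranks, via NumPy's SVD-based matrix_rank.

def classify(A, b):
    """Return which of the three cases the system A x = b falls into."""
    A = np.asarray(A, dtype=float)
    aug = np.column_stack([A, b])      # append the right-hand sides
    r = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(aug)
    if r < r_aug:                      # some row reads 0 = (nonzero)
        return "no solution"
    if r == A.shape[1]:                # a pivot in every column
        return "unique solution"
    return "infinitely many solutions" # free parameters remain

A  = [[1, -2, 1], [-3, 1, 2], [1,  1, -1]]   # first example above
A2 = [[1, -2, 1], [-3, 1, 2], [1,  0, -1]]   # modified third equation
print(classify(A,  [4, 13, -6]))   # unique solution
print(classify(A2, [4, 13, -6]))   # infinitely many solutions
print(classify(A2, [4, 13, -5]))   # no solution
```

This reproduces the three outcomes worked out by hand in this section, one per example.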