Chapter 2 Solving Systems of Equations

(FMN050, Spring 2014. Claus Führer and Alexandros Sopasakis)

A large number of real-life applications which are resolved through mathematical modeling end up taking the form of the following very simple looking matrix system,
\[
Ax = b. \tag{2.1}
\]
Here $A$ represents a known $m \times n$ matrix, $b$ a known vector of length $m$, and $x$ the vector of $n$ unknowns. Since a large variety of problems can be transformed into this general formulation, a number of methods have been developed which can produce exact or approximate solutions for this problem.

For systems where $m = n$ the obvious solution, which is also the simplest, is to find the inverse of the matrix $A$ in order to write the solution as $x = A^{-1}b$. One important aspect of any numerical computation which we pay particular attention to is the computational cost: the number of additions, subtractions, multiplications and divisions that must be performed in the computer in order for us to obtain the desired result. When the size of the matrix is sufficiently large, simply finding the inverse of $A$ (if it exists) is not the most effective way to solve this problem. Computing the inverse has a very high computational cost!

Alternatively, you might recall your early algebra classes where you encountered elimination and pivoting methods such as Gaussian elimination with backward substitution (or the closely related Gauss-Jordan method). These methods require on the order of $n^3$ operations for matrices of size $n \times n$. Thus as the size of the matrix increases the cost in operations skyrockets, and alternate, more effective techniques are used in practice. One common procedure is to produce a factorized version of $A$. That idea can reduce the operational cost of solving for $x$ from order $n^3$ to order $n^2$. This translates to almost a 99% reduction in calculations, assuming that the matrices are larger than $100 \times 100$ (not unusual for applications nowadays). Unfortunately the operational cost of producing the factors of $A$ in the first place is still of order $n^3$ anyway. So overall we have not really gained much... Well, that is not quite true. There is a benefit: once the factors are known, each additional right-hand side $b$ can be solved for in only order $n^2$ operations. For that, however, you should read further on regarding these methods below.
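To make the cost remark concrete, here is a minimal sketch (assuming NumPy is available; it is not otherwise used in these notes) contrasting a factorization-based solve with explicitly forming the inverse. In practice np.linalg.solve, which factorizes internally, is both cheaper and more accurate than multiplying by np.linalg.inv(A).

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    x_solve = np.linalg.solve(A, b)          # factorization-based solve
    x_inv = np.linalg.inv(A) @ b             # explicit inverse: costlier, less accurate

    print(np.max(np.abs(x_solve - x_inv)))   # the two agree to rounding error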

2.1 Gaussian elimination

We begin by providing an outline for performing Gaussian elimination, which you learned in your introductory linear algebra classes. We will subsequently improve this basic algorithm into the more efficient methods which were hinted at above and which we will explain in more detail in the sections below.

One key aspect of the method which we need to emphasize is that of numerical stability. We could easily provide a method which performs naive Gaussian elimination and regrettably obtain completely wrong solutions! Numerical stability, or the lack of it, depends on how you perform the required operations in order to preserve as much numerical accuracy as possible. To avoid such numerical issues we must make sure that the largest numbers in a given column are used as denominators in the divisions which must be performed. To achieve this we perform row operations in order to place the largest such element of each column in the pivot position of the augmented matrix.

Keep in mind two important points about Gaussian elimination: (a) if the matrix $A$ is singular it is not possible to complete the method, and (b) Gaussian elimination can be applied to any $m \times n$ matrix. As a result it is a general method and not limited to just square matrices. Note also that we should perform Gaussian elimination on the augmented matrix: the $m \times (n+1)$ matrix which contains all of matrix $A$ with the vector $b$ attached as an extra column.

Pseudocode for Gaussian elimination into row-echelon form

1. Main loop in $k = 1, 2, \ldots, m$.
2. Find the largest element in absolute value in column $k$ (on or below row $k$) and call it $\max(k)$.
3. If $\max(k) = 0$ then stop: the matrix is singular.
4. Swap rows in order to place the row with the largest element of column $k$ in row $k$. This ensures numerical stability.
5. Do the following for all rows below the pivot: loop in $i = k+1, k+2, \ldots, m$.
6. Do the following for all elements in the current row: for $j = k, k+1, \ldots, n$:
7. $A(i,j) = A(i,j) - A(k,j)\,\bigl(A(i,k)/A(k,k)\bigr)$.
8. Fill the remaining lower triangular part of the matrix with zeros: $A(i,k) = 0$.

As already discussed in the introduction we are particularly interested in methods which are efficient. In that respect the number of operations performed during the computation is of great interest, and we must count the number of additions, subtractions, multiplications and divisions required in order to completely solve the problem above using Gaussian elimination. Note that the total number of divisions above is $n(n-1)/2$. The number of multiplications is $n(n-1)(2n+2)/6$. Finally, the number of additions/subtractions is $n(n-1)(2n-1)/6$. The overall cost of Gaussian elimination therefore is $O(2n^3/3)$. The Big-O notation is used to imply that the largest term in the total number of operations for this method is $2n^3/3$.
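A direct transcription of this pseudocode, as a sketch in Python (assuming NumPy; array indices run from 0 rather than 1, and the function name is illustrative):

    import numpy as np

    def gaussian_elimination(A, b):
        """Reduce the augmented matrix [A | b] to row-echelon form
        with partial pivoting, following the pseudocode above."""
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        m, n = A.shape
        Ab = np.hstack([A, b.reshape(-1, 1)])      # augmented m x (n+1) matrix
        for k in range(min(m, n)):                 # main loop over pivot columns
            p = k + np.argmax(np.abs(Ab[k:, k]))   # row of largest |entry| in column k
            if Ab[p, k] == 0.0:
                raise ValueError("matrix is singular")
            Ab[[k, p]] = Ab[[p, k]]                # swap rows for stability
            for i in range(k + 1, m):              # eliminate below the pivot
                factor = Ab[i, k] / Ab[k, k]
                Ab[i, k:] -= factor * Ab[k, k:]
                Ab[i, k] = 0.0                     # fill the lower part with exact zeros
        return Ab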

Now we must also undertake the task of back-substitution in order to find the actual solution $x$ for this system. This, however, is a relatively easy computational task. We provide a short pseudocode below as well. We assume for now that we have a system of the form $Ux = b$ where the matrix $U$ is an upper triangular matrix of order $m \times n$.

Pseudocode for back-substitution

1. Main loop for $i = m, m-1, \ldots, 1$.
2. If $U(i,i) = 0$ then stop: the matrix is singular.
3. Construct $b(i) = b(i) - \sum_{j=i+1}^{n} U(i,j)\,x(j)$.
4. Solve $x(i) = b(i)/U(i,i)$.

The number of operations for back-substitution is as follows: $n$ divisions, $(n-1)n/2$ multiplications, and $n(n-1)/2$ additions and subtractions. So clearly the highest operational cost is of order $O(n^2)$. Therefore the overall cost of solving the system $Ax = b$ is still of order $O(n^3)$.
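The same steps in Python, as a sketch (again assuming NumPy, 0-based indexing, and a square upper triangular system):

    import numpy as np

    def back_substitution(U, b):
        """Solve Ux = b for upper triangular U, following the pseudocode above."""
        U = np.asarray(U, dtype=float)
        b = np.asarray(b, dtype=float).copy()
        n = len(b)
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):           # i = n, n-1, ..., 1 in 1-based terms
            if U[i, i] == 0.0:
                raise ValueError("matrix is singular")
            b[i] -= U[i, i + 1:] @ x[i + 1:]     # b(i) = b(i) - sum_j U(i,j) x(j)
            x[i] = b[i] / U[i, i]
        return x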

Again, we believe that we can improve slightly on the efficiency of our methodology by considering a factorization of the matrix $A$ instead. We do this next.

2.2 LU factorization - Doolittle's version

In the next method, which we examine now, we factor the matrix $A$ into two other matrices: a lower triangular matrix $L$ and an upper triangular matrix $U$ such that
\[
A = LU.
\]
The overall idea for solving the system $Ax = b$ will be as follows: we start by replacing the matrix $A$ with its factors $LU$. Thus we can write the system as
\[
LUx = b.
\]
We now assign the product $Ux$ to a new variable $y$. Thus $LUx = b$ becomes
\[
Ly = b \quad \text{where } y = Ux.
\]
Since $L$ is a lower triangular matrix, the system $Ly = b$ is almost trivial to solve for the unknown $y$'s. Once we find all the values of $y$ we can start solving the system
\[
Ux = y.
\]
Note that this system is also very easy to solve since $U$ is an upper triangular matrix. Thus finding $x$ with this method is also very easy. The only thing left to do is to actually compute the lower triangular matrix $L$ and the upper triangular matrix $U$ for which $A = LU$. This is accomplished by the usual Gaussian elimination method, applied only up to the point of obtaining an upper triangular matrix (without the back-substitution). Let us look at a simple example.

Example. Solve the following matrix system using an LU factorization:
\[
\begin{pmatrix} 2 & -1 & 0 \\ 4 & -4 & 8 \\ 0 & 2 & -2 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
=
\begin{pmatrix} 0 \\ 6 \\ -1 \end{pmatrix}.
\]

Solution. The main part will be to produce the LU factorization. Once this is done, solving the system will be easy. To produce the factorization we start with the usual Gaussian elimination method. For ease of notation we denote by $R_i$ the $i$-th row of $A$. Then, to create zeros below the element $a(1,1)$, we simply do the following (row 3 already has a zero in the first column): $-2R_1 + R_2 \to (R_2)$,
\[
\begin{pmatrix} 2 & -1 & 0 \\ 0 & -2 & 8 \\ 0 & 2 & -2 \end{pmatrix}.
\]
Last, we create a zero below $a(2,2)$ via $R_2 + R_3 \to (R_3)$,
\[
\begin{pmatrix} 2 & -1 & 0 \\ 0 & -2 & 8 \\ 0 & 0 & 6 \end{pmatrix}.
\]
This procedure, remarkably, has already produced our required matrices $L$ and $U$ from $A$. In fact the matrices are
\[
A = LU = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & -1 & 1 \end{pmatrix}
\begin{pmatrix} 2 & -1 & 0 \\ 0 & -2 & 8 \\ 0 & 0 & 6 \end{pmatrix}.
\]
Do the multiplication to check the result! How did we obtain the matrix $L$? Note that $L$ is simply the matrix containing, below its diagonal, all the coefficients by which we multiplied (with opposite sign) in order to create $U$ through the Gaussian elimination. The diagonal elements of $L$ are always supposed to be $1$ for the Doolittle method, so we do not need to compute those.

Let us now revisit the original system $Ax = b$. Given $L$ and $U$ we can easily solve the original system as follows: first solve $LY = B$,
\[
\begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & -1 & 1 \end{pmatrix}
\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}
=
\begin{pmatrix} 0 \\ 6 \\ -1 \end{pmatrix}.
\]
Top down, you can almost read off the solution: $y_1 = 0$, $y_2 = 6$ and $y_3 = 5$. Now you can solve the second part, which is $UX = Y$, for $X$:
\[
UX = \begin{pmatrix} 2 & -1 & 0 \\ 0 & -2 & 8 \\ 0 & 0 & 6 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
=
\begin{pmatrix} 0 \\ 6 \\ 5 \end{pmatrix}.
\]
This time the solution is read from the bottom up as $x_1 = 1/6$, $x_2 = 1/3$ and $x_3 = 5/6$.
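To double-check the example, a short script (a sketch assuming NumPy; the matrices are exactly those above):

    import numpy as np

    A = np.array([[2., -1., 0.], [4., -4., 8.], [0., 2., -2.]])
    b = np.array([0., 6., -1.])
    L = np.array([[1., 0., 0.], [2., 1., 0.], [0., -1., 1.]])
    U = np.array([[2., -1., 0.], [0., -2., 8.], [0., 0., 6.]])

    assert np.allclose(L @ U, A)   # the factorization reproduces A

    y = np.linalg.solve(L, b)      # forward substitution: y = (0, 6, 5)
    x = np.linalg.solve(U, y)      # back substitution: x = (1/6, 1/3, 5/6)
    print(y, x)
    assert np.allclose(A @ x, b)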

The following pseudo-code outlines this procedure.

Pseudo-code for LU (Doolittle)

1. Input the matrix $A$ and the diagonal elements of $L$ (i.e. ones).
2. Let $u(1,1) = a(1,1)/l(1,1)$. If $l(1,1)\,u(1,1) = 0$ then the LU factorization is not possible; STOP.
3. For $j = 2, \ldots, n$ let $u(1,j) = a(1,j)/l(1,1)$ and $l(j,1) = a(j,1)/u(1,1)$.
4. For $i = 2, 3, \ldots, n-1$ do
   - Let $u(i,i) = \bigl(a(i,i) - \sum_{k=1}^{i-1} l(i,k)\,u(k,i)\bigr)/l(i,i)$.
   - If $l(i,i)\,u(i,i) = 0$ then STOP and print "Factorization is not possible".
   - For $j = i+1, \ldots, n$:
     let $u(i,j) = \bigl(a(i,j) - \sum_{k=1}^{i-1} l(i,k)\,u(k,j)\bigr)/l(i,i)$ and
     $l(j,i) = \bigl(a(j,i) - \sum_{k=1}^{i-1} l(j,k)\,u(k,i)\bigr)/u(i,i)$.
5. Let $u(n,n) = a(n,n) - \sum_{k=1}^{n-1} l(n,k)\,u(k,n)$. If $l(n,n)\,u(n,n) = 0$ then the factorization $A = LU$ exists but $A$ is a singular matrix!
6. Print out all $L$ and $U$ elements.

Once you have the factorization you can solve the matrix system with the following very simple substitution scheme.

Pseudo-code for the solution of $AX = B$

1. First solve $LY = B$:
2. For $i = 1, 2, \ldots, n$ do $y(i) = \bigl(b(i) - \sum_{j=1}^{i-1} l(i,j)\,y(j)\bigr)/l(i,i)$.
3. Now solve $UX = Y$ by back substitution in exactly the same way:
4. For $i = n, n-1, \ldots, 1$ do $x(i) = \bigl(y(i) - \sum_{j=i+1}^{n} u(i,j)\,x(j)\bigr)/u(i,i)$.

There are a couple of results which are interesting since they give us insight into when these methods work. The following definition is necessary first.

Definition 2.2.1. The $n \times n$ matrix $A$ is said to be strictly diagonally dominant if
\[
|a(i,i)| > \sum_{j=1,\, j \ne i}^{n} |a(i,j)| \quad \text{for all } i = 1, 2, \ldots, n.
\]

The first result comes indirectly through Gaussian elimination:

Theorem 2.2.2. A strictly diagonally dominant matrix $A$ is non-singular. Furthermore, Gaussian elimination can be performed on any linear system of the form $Ax = B$ to obtain its unique solution without row or column interchanges, and the computations are stable with respect to the growth of round-off errors.
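Putting the LU pseudo-code and Definition 2.2.1 into code, a minimal Python sketch (assuming NumPy; 0-based indices, no pivoting, and the function names are illustrative):

    import numpy as np

    def lu_doolittle(A):
        """Doolittle factorization A = LU with ones on the diagonal of L,
        following the pseudo-code above (no pivoting)."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        L = np.eye(n)
        U = np.zeros((n, n))
        for i in range(n):
            for j in range(i, n):                      # row i of U
                U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
            if U[i, i] == 0.0:
                if i < n - 1:
                    raise ValueError("factorization is not possible")
                print("A = LU exists but A is singular")   # step 5 of the pseudo-code
            for j in range(i + 1, n):                  # column i of L
                L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
        return L, U

    def is_strictly_diagonally_dominant(A):
        """Check the condition of Definition 2.2.1."""
        A = np.abs(np.asarray(A, dtype=float))
        diag = np.diag(A)
        return bool(np.all(diag > A.sum(axis=1) - diag))

    # Reproduces the factors of the worked example above:
    L, U = lu_doolittle([[2, -1, 0], [4, -4, 8], [0, 2, -2]])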

When can we perform an LU decomposition? The following theorem gives the answer.

Theorem 2.2.3. If Gaussian elimination can be performed on the linear system $AX = B$ without row interchanges, then the matrix $A$ can be factored into the product of a lower-triangular matrix $L$ and an upper-triangular matrix $U$: $A = LU$.

There is another type of factorization which is in fact very similar to this LU, or Doolittle's, decomposition. The alternate factorization method also produces an LU decomposition, but with $U$ being a unit upper triangular matrix instead of $L$. This is called Crout's factorization. Naturally either factorization will do the job, and producing one or the other is more a matter of taste than anything else. You can change the provided pseudo-code very easily in order to produce such a factorization.

LDL^T and LL^T or Choleski's factorization

We continue here by presenting more methods for factoring $A$. All the techniques presented, similarly to the LU decomposition of $A$, have the same overall operational cost of $O(n^3)$. As the name denotes, an $LDL^T$ type factorization takes the following form,
\[
A = LDL^T,
\]
where $L$ as usual is lower triangular and $D$ is a diagonal matrix with positive entries on the diagonal. Similarly, the Choleski factorization $A = LL^T$ consists of a lower and an upper triangular matrix, neither of which has 1s on the diagonal (in contrast to either Doolittle's or Crout's factorizations). It is very easy to construct any of the above factorizations once you have an LU decomposition of $A$. Let us look at the equivalent factorizations for the following matrix
\[
A = \begin{pmatrix} 60 & 30 & 20 \\ 30 & 20 & 15 \\ 20 & 15 & 12 \end{pmatrix}.
\]
Using our pseudo-code we obtain the following LU decomposition of $A$:
\[
A = LU = \begin{pmatrix} 1 & 0 & 0 \\ 1/2 & 1 & 0 \\ 1/3 & 1 & 1 \end{pmatrix}
\begin{pmatrix} 60 & 30 & 20 \\ 0 & 5 & 5 \\ 0 & 0 & 1/3 \end{pmatrix}.
\]
Now the equivalent $LDL^T$ decomposition consists of the following three matrices,
\[
A = LDL^T = \begin{pmatrix} 1 & 0 & 0 \\ 1/2 & 1 & 0 \\ 1/3 & 1 & 1 \end{pmatrix}
\begin{pmatrix} 60 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 1/3 \end{pmatrix}
\begin{pmatrix} 1 & 1/2 & 1/3 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.
\]
Note how the new upper triangular matrix has been obtained by simply dividing each row of the old upper triangular matrix by the respective diagonal element. Now that we have the $LDL^T$ factorization we can also easily obtain the equivalent Crout factorization,
\[
A = \begin{pmatrix} 60 & 0 & 0 \\ 30 & 5 & 0 \\ 20 & 5 & 1/3 \end{pmatrix}
\begin{pmatrix} 1 & 1/2 & 1/3 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.
\]
Note here that the new lower triangular matrix is constructed by simply multiplying out the matrices $L$ and $D$.
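These conversions are mechanical, as a short sketch shows (assuming NumPy; the matrices are the ones above):

    import numpy as np

    A = np.array([[60., 30., 20.], [30., 20., 15.], [20., 15., 12.]])
    L = np.array([[1., 0., 0.], [1/2, 1., 0.], [1/3, 1., 1.]])
    U = np.array([[60., 30., 20.], [0., 5., 5.], [0., 0., 1/3]])

    d = np.diag(U)                      # diagonal of D: (60, 5, 1/3)
    D = np.diag(d)
    Lt = U / d[:, None]                 # rows of U divided by the diagonal -> L^T

    assert np.allclose(L @ D @ Lt, A)   # the LDL^T form
    assert np.allclose(Lt.T, L)         # for symmetric A the third factor is L^T
    crout_lower = L @ D                 # Crout: (LD) times the unit upper triangular L^T
    assert np.allclose(crout_lower @ Lt, A)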

Last, the Choleski decomposition is also easily constructed from the $LDL^T$ form above by simply splitting the diagonal matrix $D$ into two matrices, $D = \sqrt{D}\,\sqrt{D}$, and multiplying out $L\sqrt{D}$ to produce a lower triangular and $\sqrt{D}\,L^T$ to produce an upper triangular matrix:
\[
A = L\sqrt{D}\,\sqrt{D}\,L^T
= \begin{pmatrix} 1 & 0 & 0 \\ 1/2 & 1 & 0 \\ 1/3 & 1 & 1 \end{pmatrix}
\begin{pmatrix} \sqrt{60} & 0 & 0 \\ 0 & \sqrt{5} & 0 \\ 0 & 0 & 1/\sqrt{3} \end{pmatrix}
\begin{pmatrix} \sqrt{60} & 0 & 0 \\ 0 & \sqrt{5} & 0 \\ 0 & 0 & 1/\sqrt{3} \end{pmatrix}
\begin{pmatrix} 1 & 1/2 & 1/3 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}
= \begin{pmatrix} \sqrt{60} & 0 & 0 \\ \sqrt{60}/2 & \sqrt{5} & 0 \\ \sqrt{60}/3 & \sqrt{5} & 1/\sqrt{3} \end{pmatrix}
\begin{pmatrix} \sqrt{60} & \sqrt{60}/2 & \sqrt{60}/3 \\ 0 & \sqrt{5} & \sqrt{5} \\ 0 & 0 & 1/\sqrt{3} \end{pmatrix}.
\]
This is the $LL^T$ form of the matrix $A$.

Let us now look at results regarding when we can perform most of these factorizations. We will first need the following definition.

Definition 2.2.4. A matrix $A$ is positive definite if it is symmetric and if $x^T A x > 0$ for every $x \ne 0$.

Based on this definition the following theorem holds.

Theorem 2.2.5. If $A$ is an $n \times n$ positive definite matrix then the following hold:

- $A$ is nonsingular,
- $a(i,i) > 0$ for all $i = 1, 2, \ldots, n$,
- $\max_{1 \le k, j \le n} |a(k,j)| \le \max_{1 \le i \le n} |a(i,i)|$,
- $a(i,j)^2 < a(i,i)\,a(j,j)$ for each $i \ne j$.

Recall that one of the equivalent conditions for a matrix to be nonsingular is that $\det A \ne 0$. Further,

Theorem 2.2.6. A symmetric matrix $A$ is positive definite if and only if either of the following holds:

- $A = LDL^T$, where $L$ is unit lower triangular and $D$ is diagonal with positive diagonal entries,
- $A = LL^T$, where $L$ is lower triangular with nonzero diagonal entries.
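As a quick check, NumPy's built-in Cholesky routine reproduces the factor computed by hand above (np.linalg.cholesky returns the lower triangular factor, which is unique for a positive definite matrix):

    import numpy as np

    A = np.array([[60., 30., 20.], [30., 20., 15.], [20., 15., 12.]])
    C = np.linalg.cholesky(A)            # lower triangular, A = C C^T

    expected = np.array([
        [np.sqrt(60),     0.,          0.],
        [np.sqrt(60) / 2, np.sqrt(5),  0.],
        [np.sqrt(60) / 3, np.sqrt(5),  1 / np.sqrt(3)],
    ])
    assert np.allclose(C, expected)
    assert np.allclose(C @ C.T, A)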