AMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems


1. Overview

We are interested in solving a well-defined linear system given as

    Ax = b,  (1)

where A is an n × n square matrix and x and b are n-vectors.

1.1. Invariant Transformations

1.1.1. Permutation

To solve a linear system, we wish to transform the given linear system into an easier linear system whose solution x = A^{-1} b remains unchanged. One way to do this is to introduce any nonsingular matrix M and multiply both sides of the given linear system from the left:

    MAx = Mb.  (2)

We can easily check that the solution remains the same. To see this, let z be the solution of the linear system in Eqn. 2. Then

    z = (MA)^{-1} Mb = A^{-1} M^{-1} Mb = A^{-1} b = x.  (3)

Example: A permutation matrix P, a square matrix having exactly one 1 in each row and column and zeros elsewhere, is always nonsingular, so it can always be multiplied in without affecting the original solution to the system. For instance,

    P = [ 0 0 1 ]
        [ 1 0 0 ]  (4)
        [ 0 1 0 ]

permutes v as

    P v = [ 0 0 1 ] [ v_1 ]   [ v_3 ]
          [ 1 0 0 ] [ v_2 ] = [ v_1 ].  (5)
          [ 0 1 0 ] [ v_3 ]   [ v_2 ]
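This invariance is easy to check numerically. Below is a minimal Python/NumPy sketch (the language, the solver call, and the particular 3 × 3 values are our own illustrative choices, not part of the original notes):

import numpy as np

# An illustrative nonsingular system Ax = b (values are arbitrary).
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
b = np.array([3.0, 6.0, 10.0])

# A permutation matrix P: exactly one 1 per row and column, hence nonsingular.
P = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

x  = np.linalg.solve(A, b)          # solution of the original system
xP = np.linalg.solve(P @ A, P @ b)  # solution after left-multiplying by P

print(np.allclose(x, xP))           # True: the solution is unchanged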

1.1.2. Row scaling

Another invariant transformation is row scaling, the outcome of multiplication by a diagonal matrix D with nonzero diagonal entries d_ii, i = 1, ..., n. In this case we have

    DAx = Db,  (6)

by which each row of the transformed matrix DA is scaled by d_ii relative to the original matrix A. Note that the scaling factors are cancelled by the same scaling factors introduced on the right-hand-side vector, leaving the solution to the original system unchanged.

Note: Column scaling does not preserve the solution in general.

1.2. LU factorization by Gaussian elimination

Consider the following system of linear equations:

    x_1 + 2x_2 + 2x_3 = -3,  (7)
          4x_2 + 6x_3 = -6,  (8)
                  x_3 =  1.  (9)

We know this is easily solvable since we already know x_3 = 1, which gives x_2 = -3, therefore recursively arriving at the complete solution with x_1 = 1. Putting these equations into matrix-vector form, we have

    [ 1 2 2 ] [ x_1 ]   [ -3 ]
    [ 0 4 6 ] [ x_2 ] = [ -6 ],  (10)
    [ 0 0 1 ] [ x_3 ]   [  1 ]

where the matrix is (upper) triangular.

Our strategy, therefore, is to devise a nonsingular linear transformation that turns a given general linear system into a triangular one. This is the key idea of LU factorization (or LU decomposition), also known as Gaussian elimination. The main idea is to find a matrix M_1 such that the first column of M_1 A becomes zero below the first row; the right-hand side b is multiplied by M_1 as well. We then repeat this process in the next step: we find M_2 such that the second column of M_2 M_1 A becomes zero below the second row, along with applying the equivalent multiplication on the right-hand side, M_2 M_1 b. This process is continued for each successive column until all of the subdiagonal entries of the resulting matrix have been annihilated. If we define the final matrix M = M_{n-1} ··· M_1, the transformed linear system becomes

    M_{n-1} ··· M_1 A x = MAx = Mb = M_{n-1} ··· M_1 b.  (11)

Note: As seen in the previous section, any nonsingular matrix multiplication is an invariant transformation that does not affect the solution to the given linear system. The resulting transformed linear system MAx = Mb is upper triangular, which is what we want, and can be solved by back-substitution to obtain the solution to the original linear system Ax = b.
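Back-substitution itself is a short loop. Here is a minimal Python/NumPy sketch (the function name and interface are our own choices), applied to the triangular system of Eq. 10:

import numpy as np

def back_substitute(U, y):
    """Solve Ux = y for upper triangular U by back-substitution."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):   # last unknown first
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[1.0, 2.0, 2.0],
              [0.0, 4.0, 6.0],
              [0.0, 0.0, 1.0]])
y = np.array([-3.0, -6.0, 1.0])
print(back_substitute(U, y))         # [ 1. -3.  1.]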

Example: We illustrate Gaussian elimination by considering:

    2x_1 +  x_2 +  x_3        =  3,
    4x_1 + 3x_2 + 3x_3 +  x_4 =  6,
    8x_1 + 7x_2 + 9x_3 + 5x_4 = 10,  (12)
    6x_1 + 7x_2 + 9x_3 + 8x_4 =  1,

or in matrix notation

    Ax = [ 2 1 1 0 ] [ x_1 ]   [  3 ]
         [ 4 3 3 1 ] [ x_2 ] = [  6 ] = b.  (13)
         [ 8 7 9 5 ] [ x_3 ]   [ 10 ]
         [ 6 7 9 8 ] [ x_4 ]   [  1 ]

The first task is to find a matrix M_1 that annihilates the subdiagonal entries of the first column of A. This can be done with a matrix M_1 that subtracts twice the first row from the second row, four times the first row from the third row, and three times the first row from the fourth row. The matrix M_1 is then identical to the identity matrix I_4, except for those multiplication factors in the first column:

    M_1 A = [  1          ] [ 2 1 1 0 ]   [ 2 1 1 0 ]
            [ -2  1       ] [ 4 3 3 1 ] = [   1 1 1 ],  (14)
            [ -4     1    ] [ 8 7 9 5 ]   [   3 5 5 ]
            [ -3        1 ] [ 6 7 9 8 ]   [   4 6 8 ]

where we treat the blank entries as zero entries. At the same time, we apply the corresponding multiplication on the right-hand side to get:

    M_1 b = [  3 ]
            [  0 ]
            [ -2 ].  (15)
            [ -8 ]

The next step is to annihilate the third and fourth entries of the second column (3 and 4), which gives the next matrix M_2 of the form:

    M_2 M_1 A = [ 1          ] [ 2 1 1 0 ]   [ 2 1 1 0 ]
                [    1       ] [   1 1 1 ] = [   1 1 1 ],  (16)
                [   -3  1    ] [   3 5 5 ]   [     2 2 ]
                [   -4     1 ] [   4 6 8 ]   [     2 4 ]

now with the right-hand side:

    M_2 M_1 b = [  3 ]
                [  0 ]
                [ -2 ].  (17)
                [ -8 ]

The last matrix M_3 completes the process, resulting in an upper triangular matrix U:

    M_3 M_2 M_1 A = [ 1          ] [ 2 1 1 0 ]   [ 2 1 1 0 ]
                    [    1       ] [   1 1 1 ] = [   1 1 1 ] ≡ U,  (18)
                    [       1    ] [     2 2 ]   [     2 2 ]
                    [      -1  1 ] [     2 4 ]   [       2 ]

together with the right-hand side:

    M_3 M_2 M_1 b = [  3 ]
                    [  0 ] ≡ y.  (19)
                    [ -2 ]
                    [ -6 ]

We see that the final transformed linear system MAx = Ux = y is upper triangular, which is what we wanted, and can be solved easily by back-substitution, starting from x_4 = -3 and obtaining x_3, x_2, and x_1 in reverse order, to find the complete solution

    x = [  0 ]
        [  1 ].  (20)
        [  2 ]
        [ -3 ]

The full LU factorization A = LU is established if we compute

    L = (M_3 M_2 M_1)^{-1} = M_1^{-1} M_2^{-1} M_3^{-1}.  (21)

At first sight this looks like an expensive process, as it involves inverting a series of matrices. Surprisingly, however, it turns out to be a trivial task: the inverse of M_i, i = 1, 2, 3, is just M_i itself with each entry below the diagonal negated. Therefore, we have

    L = M_1^{-1} M_2^{-1} M_3^{-1}
      = [ 1       ] [ 1       ] [ 1       ]
        [ 2 1     ] [   1     ] [   1     ]
        [ 4   1   ] [   3 1   ] [     1   ]
        [ 3     1 ] [   4   1 ] [     1 1 ]
      = [ 1       ] [ 1       ]
        [ 2 1     ] [   1     ]
        [ 4 3 1   ] [     1   ]
        [ 3 4   1 ] [     1 1 ]
      = [ 1       ]
        [ 2 1     ].  (22)
        [ 4 3 1   ]
        [ 3 4 1 1 ]

Notice also that the matrix product M_1^{-1} M_2^{-1} M_3^{-1} is itself trivial: it is just the unit lower triangular matrix with the nonzero subdiagonal entries of M_1^{-1}, M_2^{-1}, and M_3^{-1} inserted in the appropriate places.
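The factors derived above can be verified with a quick NumPy check (our own illustration, not part of the original notes):

import numpy as np

A = np.array([[2., 1., 1., 0.],
              [4., 3., 3., 1.],
              [8., 7., 9., 5.],
              [6., 7., 9., 8.]])
L = np.array([[1., 0., 0., 0.],
              [2., 1., 0., 0.],
              [4., 3., 1., 0.],
              [3., 4., 1., 1.]])
U = np.array([[2., 1., 1., 0.],
              [0., 1., 1., 1.],
              [0., 0., 2., 2.],
              [0., 0., 0., 2.]])
print(np.allclose(L @ U, A))   # True: A = LU as claimed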

Altogether, we finally have our decomposition A = LU:

    [ 2 1 1 0 ]   [ 1       ] [ 2 1 1 0 ]
    [ 4 3 3 1 ] = [ 2 1     ] [   1 1 1 ].  (23)
    [ 8 7 9 5 ]   [ 4 3 1   ] [     2 2 ]
    [ 6 7 9 8 ]   [ 3 4 1 1 ] [       2 ]

Quick summary: Gaussian elimination proceeds in steps until an upper triangular matrix is obtained for back-substitution (x denotes a generally nonzero entry):

    [ x x x x ]      [ x x x x ]      [ x x x x ]      [ x x x x ]
    [ x x x x ] M_1  [ 0 x x x ] M_2  [ 0 x x x ] M_3  [ 0 x x x ]
    [ x x x x ] ---> [ 0 x x x ] ---> [ 0 0 x x ] ---> [ 0 0 x x ].  (24)
    [ x x x x ]      [ 0 x x x ]      [ 0 0 x x ]      [ 0 0 0 x ]

Algorithm: LU factorization by Gaussian elimination:

    for k = 1 to n-1                       #[loop over columns]
        if a_kk = 0 then stop              #[stop if pivot (or divisor) is zero]
        endif
        for i = k+1 to n
            m_ik = a_ik / a_kk             #[compute multipliers for each column]
        for j = k+1 to n
            for i = k+1 to n
                a_ij = a_ij - m_ik * a_kj  #[transformation to remaining submatrix]

1.3. Pivoting

1.3.1. Need for pivoting

We obviously run into trouble when the chosen divisor, called a pivot, is zero, whereby the Gaussian elimination algorithm breaks down. As illustrated in the Algorithm above, this situation can easily be checked and avoided by stopping the algorithm as soon as one of the pivots becomes zero.
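A direct Python/NumPy transcription of the algorithm above, including the zero-pivot check, might look as follows (a sketch under our own naming; it works on a copy of A and returns L and U separately):

import numpy as np

def lu_nopivot(A):
    """LU factorization by Gaussian elimination, no pivoting (sketch)."""
    U = np.array(A, dtype=float)    # work on a copy; becomes U
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n - 1):          # loop over columns
        if U[k, k] == 0.0:
            raise ZeroDivisionError("zero pivot: row interchange needed")
        L[k+1:, k] = U[k+1:, k] / U[k, k]               # multipliers m_ik
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])   # update submatrix
    return L, np.triu(U)

# The 4x4 example above factors cleanly:
A = np.array([[2., 1., 1., 0.],
              [4., 3., 3., 1.],
              [8., 7., 9., 5.],
              [6., 7., 9., 8.]])
L, U = lu_nopivot(A)
print(np.allclose(L @ U, A))   # True

# ...but the nonsingular matrix of Eq. 27 below stops at the first pivot:
# lu_nopivot(np.array([[0., 1.], [1., 0.]]))  -> ZeroDivisionError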

The solution to this singular-pivot issue is almost equally straightforward: if the pivot entry is zero at stage k, i.e., a_kk = 0, then one interchanges row k of both the matrix and the right-hand-side vector with some subsequent row whose entry in column k is nonzero, and resumes the process as usual. Recall that permutation does not alter the solution to the system. This row-interchanging process is called pivoting, which is illustrated in the following example.

Example: Pivoting with a permutation matrix can be sketched as below, where the pivot position (2,2) holds a zero but a subsequent row has a nonzero entry in column 2:

    [ x x x x ]     [ x x x x ]
    [ 0 0 x x ]  P  [ 0 x x x ]
    [ 0 x x x ] --> [ 0 x x x ],  (25)
    [ 0 x x x ]     [ 0 0 x x ]

where we interchange the second row with the fourth row using a permutation matrix P given as

    P = [ 1 0 0 0 ]
        [ 0 0 0 1 ].  (26)
        [ 0 0 1 0 ]
        [ 0 1 0 0 ]

Note: The potential need for pivoting has nothing to do with the matrix being singular. For example, the matrix

    A = [ 0 1 ]  (27)
        [ 1 0 ]

is nonsingular, yet we cannot proceed with LU factorization unless we interchange rows. On the other hand, the matrix

    A = [ 1 1 ]  (28)
        [ 1 1 ]

easily admits the LU factorization

    A = [ 1 1 ] = [ 1 0 ] [ 1 1 ] = LU,  (29)
        [ 1 1 ]   [ 1 1 ] [ 0 0 ]

while being singular.

1.3.2. Partial pivoting

Zero pivots are not the only problem: there is another situation we must avoid in Gaussian elimination, namely small pivots. The problem is closely related to the computer's finite-precision arithmetic, which loses any contribution smaller than the machine precision ε relative to the leading terms. Recall that ε ≈ 10^-7 for single precision and ε ≈ 10^-16 for double precision.
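These magnitudes are easy to check; a quick NumPy probe (our own illustration):

import numpy as np

print(np.finfo(np.float32).eps)   # ~1.19e-07 (single precision)
print(np.finfo(np.float64).eps)   # ~2.22e-16 (double precision)
print(1.0 + 1e-20 == 1.0)         # True: a contribution of 1e-20 is lost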

Example: Let us now consider a matrix A defined as

    A = [ ε 1 ],  (30)
        [ 1 1 ]

where ε is below double-precision machine precision, ε < ε_mach ≈ 10^-16; say, ε = 10^-20. If we proceed without any pivoting (i.e., no row interchange) and take ε as the first pivot element, then we obtain the elimination matrix

    M = [  1    0 ],  (31)
        [ -1/ε  1 ]

and hence the lower triangular matrix

    L = [  1    0 ],  (32)
        [ 1/ε   1 ]

which is correct. For the upper triangular matrix, however, we see an incorrect floating-point arithmetic operation

    U = [ ε    1      ] = [ ε    1   ],  (33)
        [ 0  1 - 1/ε  ]   [ 0  -1/ε  ]

since 1/ε >> 1. But then we simply fail to recover the original matrix A from the factorization:

    LU = [  1    0 ] [ ε    1   ] = [ ε 1 ] ≠ A.  (34)
         [ 1/ε   1 ] [ 0  -1/ε  ]   [ 1 0 ]

Using a small pivot, and a correspondingly large multiplier, has caused an unrecoverable loss of information in the transformation. We can cure the situation by interchanging the two rows first, which makes the first pivot element 1 and the resulting multiplier ε:

    M = [  1  0 ],  (35)
        [ -ε  1 ]

and hence

    L = [ 1  0 ]   and   U = [ 1    1   ] = [ 1  1 ]  (36)
        [ ε  1 ]             [ 0  1 - ε ]   [ 0  1 ]

in floating-point arithmetic. We therefore recover the original relation:

    LU = [ 1  0 ] [ 1  1 ] = [ 1  1 ],  (37)
         [ ε  1 ] [ 0  1 ]   [ ε  1 ]

which is exactly the row-interchanged matrix, i.e., the correct result after permutation.

The foregoing example is rather extreme; in general, however, the principle holds: seeking the largest pivot when producing each elimination matrix yields a smaller multiplier as an outcome, and hence smaller errors in floating-point arithmetic.
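Both the loss and the cure are easy to reproduce; a short NumPy experiment mirroring the example above (our own illustration):

import numpy as np

eps = 1e-20
A = np.array([[eps, 1.0],
              [1.0, 1.0]])

# Without pivoting: the multiplier 1/eps destroys the (2,2) entry.
L = np.array([[1.0,       0.0],
              [1.0 / eps, 1.0]])
U = np.array([[eps, 1.0],
              [0.0, 1.0 - 1.0 / eps]])  # rounds to -1/eps in double precision
print(L @ U)                            # [[1e-20, 1.], [1., 0.]]  -- not A!

# With pivoting: swap the rows first, so the multiplier becomes eps.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
Lp = np.array([[1.0, 0.0],
               [eps, 1.0]])
Up = np.array([[1.0, 1.0],
               [0.0, 1.0 - eps]])       # rounds to 1
print(np.allclose(Lp @ Up, P @ A))      # True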

We see that this process involves the repeated use of a permutation matrix P_k that interchanges rows so as to bring the entry of largest magnitude on or below the diagonal in column k into the diagonal pivot position.

Quick summary: Gaussian elimination with partial pivoting proceeds as below. Assume x_ik is the entry of maximum magnitude among the entries in the k-th column on or below the diagonal, thereby selected as the k-th pivot; schematically, for column k:

    [ x    ]      [ x_ik ]      [ x_ik ]
    [ x    ] P_k  [ x    ] M_k  [ 0    ]
    [ x_ik ] ---> [ x    ] ---> [ 0    ],  (38)
    [ x    ]      [ x    ]      [ 0    ]

where P_k interchanges rows to bring x_ik into the pivot position and M_k annihilates the entries below it. In general, A becomes an upper triangular matrix U after n-1 steps:

    M_{n-1} P_{n-1} ··· M_1 P_1 A = U.  (39)

Note: The expression in Eq. 39 can be rewritten in a way that separates the elimination and the permutation processes into two different groups, so that we write the final transformed matrix as

    P = P_{n-1} ··· P_2 P_1,  (40)
    L = (M'_{n-1} ··· M'_2 M'_1)^{-1},  (41)
    PA = LU.  (42)

To do this we first need to find what the M'_i should be. Consider reordering the operations in Eq. 39; for instance, with n = 4,

    M_3 P_3 M_2 P_2 M_1 P_1 = M'_3 M'_2 M'_1 P_3 P_2 P_1 (= L^{-1} P).  (43)

Rearranging operations (and noting P_k^{-1} = P_k for a single row interchange),

    M_3 P_3 M_2 P_2 M_1 P_1  (44)
      = (M_3)(P_3 M_2 P_3^{-1})(P_3 P_2 M_1 P_2^{-1} P_3^{-1})(P_3 P_2 P_1)  (45)
      ≡ (M'_3)(M'_2)(M'_1) P_3 P_2 P_1,  (46)

whereby we can define M'_i, i = 1, 2, 3, equal to M_i but with the subdiagonal entries permuted:

    M'_3 = M_3,  (47)
    M'_2 = P_3 M_2 P_3^{-1},  (48)
    M'_1 = P_3 P_2 M_1 P_2^{-1} P_3^{-1}.  (49)

We can see that the matrix M'_{n-1} ··· M'_2 M'_1 is unit lower triangular and hence easily invertible by negating the subdiagonal entries to obtain L.

Example: To see what is going on, consider

    A = [ 2 1 1 0 ]
        [ 4 3 3 1 ].  (50)
        [ 8 7 9 5 ]
        [ 6 7 9 8 ]

With partial pivoting, let's interchange the first and third rows with P_1:

    [ 2 1 1 0 ]     [ 8 7 9 5 ]
    [ 4 3 3 1 ] P_1 [ 4 3 3 1 ]
    [ 8 7 9 5 ] --> [ 2 1 1 0 ].  (51)
    [ 6 7 9 8 ]     [ 6 7 9 8 ]

The first elimination step, a left-multiplication by M_1, now looks like:

    [  1            ] [ 8 7 9 5 ]   [ 8   7    9    5   ]
    [ -1/2  1       ] [ 4 3 3 1 ] = [   -1/2 -3/2 -3/2  ].  (52)
    [ -1/4     1    ] [ 2 1 1 0 ]   [   -3/4 -5/4 -5/4  ]
    [ -3/4        1 ] [ 6 7 9 8 ]   [    7/4  9/4 17/4  ]

Now the second and fourth rows are interchanged with P_2:

    [ 8   7    9    5   ]     [ 8   7    9    5   ]
    [   -1/2 -3/2 -3/2  ] P_2 [    7/4  9/4 17/4  ]
    [   -3/4 -5/4 -5/4  ] --> [   -3/4 -5/4 -5/4  ].  (53)
    [    7/4  9/4 17/4  ]     [   -1/2 -3/2 -3/2  ]

With multiplication by M_2, the second elimination step looks like:

    [ 1           ] [ 8   7    9    5   ]   [ 8   7    9    5   ]
    [    1        ] [    7/4  9/4 17/4  ] = [    7/4  9/4 17/4  ].  (54)
    [   3/7  1    ] [   -3/4 -5/4 -5/4  ]   [        -2/7  4/7  ]
    [   2/7     1 ] [   -1/2 -3/2 -3/2  ]   [        -6/7 -2/7  ]

Interchanging the third and fourth rows now with P_3:

    [ 8   7    9    5   ]     [ 8   7    9    5   ]
    [    7/4  9/4 17/4  ] P_3 [    7/4  9/4 17/4  ]
    [        -2/7  4/7  ] --> [        -6/7 -2/7  ].  (55)
    [        -6/7 -2/7  ]     [        -2/7  4/7  ]

The final elimination step with M_3 yields the upper triangular matrix U:

    [ 1            ] [ 8   7    9    5   ]   [ 8   7    9    5   ]
    [    1         ] [    7/4  9/4 17/4  ] = [    7/4  9/4 17/4  ].  (56)
    [       1      ] [        -6/7 -2/7  ]   [        -6/7 -2/7  ]
    [     -1/3   1 ] [        -2/7  4/7  ]   [              2/3  ]
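Collecting the interchanges into P = P_3 P_2 P_1 and the permuted multipliers into L = (M'_3 M'_2 M'_1)^{-1} gives the factors below; a NumPy check of PA = LU (our own assembly of the factors derived above, not spelled out in the notes):

import numpy as np

A = np.array([[2., 1., 1., 0.],
              [4., 3., 3., 1.],
              [8., 7., 9., 5.],
              [6., 7., 9., 8.]])
# P = P3 P2 P1 reorders the rows of A as (3, 4, 2, 1).
P = np.eye(4)[[2, 3, 1, 0]]
L = np.array([[1.,    0.,    0.,   0.],
              [3/4,   1.,    0.,   0.],
              [1/2,  -2/7,   1.,   0.],
              [1/4,  -3/7,   1/3,  1.]])
U = np.array([[8., 7.,   9.,    5.  ],
              [0., 7/4,  9/4,  17/4 ],
              [0., 0.,  -6/7,  -2/7 ],
              [0., 0.,   0.,    2/3 ]])
print(np.allclose(P @ A, L @ U))   # True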

Remark: The name partial pivoting comes from the fact that only the current column is searched for a suitable pivot. A more exhaustive pivoting strategy is complete pivoting, in which the entire remaining unreduced submatrix is searched for the largest entry, which is then permuted into the diagonal pivot position.

Algorithm: LU factorization by Gaussian elimination with partial pivoting:

    for k = 1 to n-1                       #[loop over columns]
        find index p such that |a_pk| >= |a_ik| for k <= i <= n
                                           #[search for pivot in current column]
        if p != k then interchange rows k and p
                                           #[interchange rows if needed]
        endif
        if a_kk = 0 then continue with next k
                                           #[skip current column if zero]
        endif
        for i = k+1 to n
            m_ik = a_ik / a_kk             #[compute multipliers for each column]
        for j = k+1 to n
            for i = k+1 to n
                a_ij = a_ij - m_ik * a_kj  #[transformation to remaining submatrix]
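A Python/NumPy transcription of this algorithm might look as follows (a sketch with our own naming; it returns the permutation as a row-index array perm along with L and U, so that A[perm] = LU):

import numpy as np

def lu_partial_pivot(A):
    """LU factorization with partial pivoting (sketch): A[perm] = L @ U."""
    U = np.array(A, dtype=float)    # work on a copy; becomes U
    n = U.shape[0]
    L = np.eye(n)
    perm = np.arange(n)
    for k in range(n - 1):
        # Search column k, on or below the diagonal, for the largest pivot.
        p = k + np.argmax(np.abs(U[k:, k]))
        if p != k:   # interchange rows of U, the bookkeeping, and L's multipliers
            U[[k, p]] = U[[p, k]]
            perm[[k, p]] = perm[[p, k]]
            L[[k, p], :k] = L[[p, k], :k]
        if U[k, k] == 0.0:
            continue   # whole subcolumn is zero; skip to the next column
        L[k+1:, k] = U[k+1:, k] / U[k, k]               # multipliers m_ik
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])   # update submatrix
    return perm, L, np.triu(U)

# Reproduces the worked example above:
A = np.array([[2., 1., 1., 0.],
              [4., 3., 3., 1.],
              [8., 7., 9., 5.],
              [6., 7., 9., 8.]])
perm, L, U = lu_partial_pivot(A)
print(perm)                          # [2 3 1 0], i.e., rows (3, 4, 2, 1)
print(np.allclose(A[perm], L @ U))   # True

In practice one would call a library routine such as scipy.linalg.lu, which computes the same factorization via LAPACK (note its convention, A = PLU, uses the transpose of the permutation matrix defined here).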