Roundoff Analysis of Gaussian Elimination

Jim Lambers
MAT 610 Summer Session 2009-10
Lecture 5 Notes

These notes correspond to Sections 3.3 and 3.4 in the text.

Roundoff Analysis of Gaussian Elimination

In this section, we will perform a detailed error analysis of Gaussian elimination, illustrating the analysis originally carried out by J. H. Wilkinson. The process of solving $Ax = b$ consists of three stages:

1. Factoring $A = LU$, resulting in an approximate LU decomposition $A + E = \bar{L}\bar{U}$.
2. Solving $\bar{L}y = b$, or, numerically, computing $y + \delta y$ such that $(\bar{L} + \delta\bar{L})(y + \delta y) = b$.
3. Solving $\bar{U}x = y$, or, numerically, computing $x + \delta x$ such that $(\bar{U} + \delta\bar{U})(x + \delta x) = y + \delta y$.

Combining these stages, we see that
\[ b = (\bar{L} + \delta\bar{L})(\bar{U} + \delta\bar{U})(x + \delta x)
     = (\bar{L}\bar{U} + \delta\bar{L}\,\bar{U} + \bar{L}\,\delta\bar{U} + \delta\bar{L}\,\delta\bar{U})(x + \delta x)
     = (A + E + \delta\bar{L}\,\bar{U} + \bar{L}\,\delta\bar{U} + \delta\bar{L}\,\delta\bar{U})(x + \delta x)
     = (A + \delta A)(x + \delta x), \]
where $\delta A = E + \delta\bar{L}\,\bar{U} + \bar{L}\,\delta\bar{U} + \delta\bar{L}\,\delta\bar{U}$.

In this analysis, we will view the computed solution $\hat{x} = x + \delta x$ as the exact solution to the perturbed problem $(A + \delta A)x = b$. This perspective is the idea behind backward error analysis, which we will use to determine the size of the perturbation $\delta A$ and, eventually, arrive at a bound for the error in the computed solution $\hat{x}$.
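This backward-error viewpoint is easy to probe numerically. The following is a minimal Python/NumPy sketch (the routine name and the random test matrix are our own, not from the text): it runs Gaussian elimination without pivoting and measures the factorization error $E = \bar{L}\bar{U} - A$ against $u\|A\|$.

```python
import numpy as np

def lu_nopivot(A):
    """Gaussian elimination without pivoting; returns L (unit lower
    triangular) and U. A bare-bones sketch for experiments only."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # multiplier m_ik
            U[i, k:] -= L[i, k] * U[k, k:]   # zero out entry (i, k)
    return L, U

# The computed L and U are the *exact* factors of the perturbed matrix A + E.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
L, U = lu_nopivot(A)
E = L @ U - A
u = np.finfo(float).eps / 2                  # unit roundoff
print(np.linalg.norm(E) / (u * np.linalg.norm(A)))  # typically a modest multiple of 1
```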

Error in the LU Factorization

Let $A^{(k)}$ denote the matrix $A$ after $k$ steps of Gaussian elimination have been performed, where a step denotes the process of making all elements below the diagonal within a particular column equal to zero. Then, in exact arithmetic, the elements of $A^{(k+1)}$ are given by
\[ a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik} a_{kj}^{(k)}, \qquad m_{ik} = \frac{a_{ik}^{(k)}}{a_{kk}^{(k)}}. \tag{1} \]
Let $B^{(k)}$ denote the matrix $A$ after $k$ steps of Gaussian elimination have been performed using floating-point arithmetic. Then the elements of $B^{(k+1)}$ are
\[ b_{ij}^{(k+1)} = b_{ij}^{(k)} - s_{ik} b_{kj}^{(k)} + \varepsilon_{ij}^{(k+1)}, \qquad s_{ik} = \mathrm{fl}\!\left( \frac{b_{ik}^{(k)}}{b_{kk}^{(k)}} \right). \tag{2} \]
For $j \geq i$, we have
\[ b_{ij}^{(2)} = b_{ij}^{(1)} - s_{i1} b_{1j}^{(1)} + \varepsilon_{ij}^{(2)}, \quad
   b_{ij}^{(3)} = b_{ij}^{(2)} - s_{i2} b_{2j}^{(2)} + \varepsilon_{ij}^{(3)}, \quad \ldots, \quad
   b_{ij}^{(i)} = b_{ij}^{(i-1)} - s_{i,i-1} b_{i-1,j}^{(i-1)} + \varepsilon_{ij}^{(i)}. \]
Combining these equations and cancelling terms, we obtain
\[ b_{ij}^{(1)} = b_{ij}^{(i)} + \sum_{k=1}^{i-1} s_{ik} b_{kj}^{(k)} - e_{ij}, \qquad j \geq i, \tag{3} \]
where $e_{ij} = \sum_{k=2}^{i} \varepsilon_{ij}^{(k)}$. For $i > j$, the elimination of $b_{ij}$ at step $j$ gives
\[ b_{ij}^{(j+1)} = 0 = b_{ij}^{(j)} - s_{ij} b_{jj}^{(j)} + \varepsilon_{ij}^{(j+1)}, \]
where $s_{ij} = \mathrm{fl}(b_{ij}^{(j)}/b_{jj}^{(j)}) = (b_{ij}^{(j)}/b_{jj}^{(j)})(1 + \eta_{ij})$, so that $\varepsilon_{ij}^{(j+1)} = b_{ij}^{(j)} \eta_{ij}$. Combining with the previous relations, we therefore obtain
\[ b_{ij}^{(1)} = \sum_{k=1}^{j} s_{ik} b_{kj}^{(k)} - e_{ij}, \qquad i > j, \quad e_{ij} = \sum_{k=2}^{j+1} \varepsilon_{ij}^{(k)}. \tag{4} \]
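To make the local error term $\varepsilon_{ij}^{(k+1)}$ in (2) concrete, the sketch below performs a single floating-point update and recovers $\varepsilon$ exactly by redoing the same arithmetic in rational arithmetic (the four entry values are arbitrary toy numbers, not from the text).

```python
from fractions import Fraction

# One update b_ij <- b_ij - s_ik * b_kj from equation (2), with toy values.
b_ij, b_kj, b_ik, b_kk = 0.3, 0.1, 0.7, 0.2

s_ik = b_ik / b_kk                 # computed multiplier fl(b_ik / b_kk)
b_new = b_ij - s_ik * b_kj         # fl(b_ij - fl(s_ik * b_kj))

# eps^(k+1) = b^(k+1) - (b^(k) - s_ik * b_kj), evaluated exactly;
# Fraction(x) converts a float to its exact binary rational value.
exact = Fraction(b_ij) - Fraction(s_ik) * Fraction(b_kj)
eps = Fraction(b_new) - exact
print(float(eps))                  # on the order of the unit roundoff u
```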

From (3) and (4), we obtain
\[ \bar{L}\bar{U} =
   \begin{bmatrix} 1 & & & \\ s_{21} & 1 & & \\ \vdots & & \ddots & \\ s_{n1} & \cdots & s_{n,n-1} & 1 \end{bmatrix}
   \begin{bmatrix} b_{11}^{(1)} & b_{12}^{(1)} & \cdots & b_{1n}^{(1)} \\ & b_{22}^{(2)} & & \vdots \\ & & \ddots & \vdots \\ & & & b_{nn}^{(n)} \end{bmatrix}
   = A + E. \]
Then,
\[ s_{ik} = \mathrm{fl}(b_{ik}^{(k)}/b_{kk}^{(k)}) = \frac{b_{ik}^{(k)}}{b_{kk}^{(k)}}(1 + \eta_{ik}), \qquad |\eta_{ik}| \leq u, \]
and
\[ \mathrm{fl}(s_{ik} b_{kj}^{(k)}) = s_{ik} b_{kj}^{(k)} (1 + \theta_{ij}^{(k)}), \qquad |\theta_{ij}^{(k)}| \leq u, \]
and so
\[ b_{ij}^{(k+1)} = \mathrm{fl}\!\left( b_{ij}^{(k)} - \mathrm{fl}(s_{ik} b_{kj}^{(k)}) \right)
   = \left( b_{ij}^{(k)} - s_{ik} b_{kj}^{(k)} (1 + \theta_{ij}^{(k)}) \right)(1 + \varphi_{ij}^{(k)}), \qquad |\varphi_{ij}^{(k)}| \leq u. \]
After applying (2), and performing some manipulations, we obtain
\[ \varepsilon_{ij}^{(k+1)} = b_{ij}^{(k+1)} \frac{\varphi_{ij}^{(k)}}{1 + \varphi_{ij}^{(k)}} - s_{ik} b_{kj}^{(k)} \theta_{ij}^{(k)}. \]
Since $|b_{ij}^{(k)}| \leq Ga$ and $|s_{ik}| \leq l$ (with $a$, $l$, and $G$ defined below), each $|\varepsilon_{ij}^{(k)}| \leq (1 + l)Gau + O(u^2)$, and we then have the following bound for the entries of $E$:
\[ |E| \leq (1 + l)Gau
   \begin{bmatrix} 0 & 0 & 0 & \cdots & 0 \\ 1 & 1 & 1 & \cdots & 1 \\ 1 & 2 & 2 & \cdots & 2 \\ 1 & 2 & 3 & \cdots & 3 \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & 2 & 3 & \cdots & n-1 \end{bmatrix}
   + O(u^2), \]
where $l = \max_{i,k} |s_{ik}|$, $a = \max_{i,j} |a_{ij}|$, and $G$ is the growth factor defined by
\[ G = \frac{\max_{i,j,k} |b_{ij}^{(k)}|}{\max_{i,j} |a_{ij}|}. \]
The entries of the above matrix indicate how many of the $\varepsilon_{ij}^{(k)}$ are included in each entry $e_{ij}$ of $E$.
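The growth factor $G$ is easy to monitor in code. Here is a sketch (our own helper, assuming nonzero pivots) that tracks $\max_{i,j} |b_{ij}^{(k)}|$ over all elimination steps; the classic worst-case example below, with 1 on the diagonal, $-1$ below it, and 1 in the last column, attains $G = 2^{n-1}$, and partial pivoting would perform no interchanges on it.

```python
import numpy as np

def growth_factor(A):
    """Growth factor G = max_{i,j,k} |b_ij^(k)| / max_{i,j} |a_ij| for
    elimination without pivoting (assumes all pivots are nonzero)."""
    U = A.astype(float)
    n = U.shape[0]
    biggest = np.abs(U).max()
    for k in range(n - 1):
        # Rank-one update eliminates column k below the diagonal.
        U[k+1:, k:] -= np.outer(U[k+1:, k] / U[k, k], U[k, k:])
        biggest = max(biggest, np.abs(U).max())
    return biggest / np.abs(A).max()

n = 10
A = np.eye(n) - np.tril(np.ones((n, n)), -1)  # 1 on diagonal, -1 below
A[:, -1] = 1.0                                # last column of ones
print(growth_factor(A))                       # 512.0 == 2**(n-1)
```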

Bounding the perturbation in $A$

From a roundoff analysis of forward and back substitution, which we do not reproduce here, we have the bounds
\[ \max_{i,j} |\delta\bar{L}_{ij}| \leq nul + O(u^2), \qquad \max_{i,j} |\delta\bar{U}_{ij}| \leq nuGa + O(u^2), \]
where $a = \max_{i,j} |a_{ij}|$, $l = \max_{i,j} |\bar{L}_{ij}|$, and $G$ is the growth factor. Putting our bounds together, we have
\[ \max_{i,j} |\delta A_{ij}| \leq \max_{i,j} |e_{ij}| + \max_{i,j} |(\bar{L}\,\delta\bar{U})_{ij}| + \max_{i,j} |(\delta\bar{L}\,\bar{U})_{ij}| + \max_{i,j} |(\delta\bar{L}\,\delta\bar{U})_{ij}|
   \leq n(1 + l)Gau + n^2 lGau + n^2 lGau + O(u^2), \]
from which it follows that
\[ \|\delta A\| \leq n^2 (2nl + l + 1)Gau + O(u^2). \]

Bounding the error in the solution

Let $\hat{x} = x + \delta x$ be the computed solution. Since the exact solution to $Ax = b$ is given by $x = A^{-1}b$, we are also interested in examining $\hat{x} = (A + \delta A)^{-1}b$. Can we say something about $(A + \delta A)^{-1}b - A^{-1}b$? We assume that $\|A^{-1}\| \|\delta A\| = r < 1$. We have
\[ A + \delta A = A(I + A^{-1}\delta A) = A(I - F), \qquad F = -A^{-1}\delta A. \]
From the manipulations
\[ (A + \delta A)^{-1}b - A^{-1}b
   = (I + A^{-1}\delta A)^{-1}A^{-1}b - A^{-1}b
   = (I + A^{-1}\delta A)^{-1}\left( A^{-1} - (I + A^{-1}\delta A)A^{-1} \right)b
   = -(I + A^{-1}\delta A)^{-1}A^{-1}(\delta A)A^{-1}b, \]
and the result $\|(I - F)^{-1}\| \leq 1/(1 - \|F\|)$ from Lecture 2, we obtain
\[ \frac{\|\delta x\|}{\|x\|}
   = \frac{\|(x + \delta x) - x\|}{\|x\|}
   = \frac{\|(A + \delta A)^{-1}b - A^{-1}b\|}{\|A^{-1}b\|}
   \leq \frac{r}{1 - r}
   = \frac{\|A^{-1}\| \|\delta A\|}{1 - \|A^{-1}\| \|\delta A\|}
   = \frac{\kappa(A)\frac{\|\delta A\|}{\|A\|}}{1 - \kappa(A)\frac{\|\delta A\|}{\|A\|}}. \]
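This final bound can be checked directly. A small sketch (random data, 2-norms throughout; all names are ours): perturb $A$, solve both systems, and compare the actual relative change in the solution against $\kappa(A)(\|\delta A\|/\|A\|)/(1 - \kappa(A)\|\delta A\|/\|A\|)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
dA = 1e-8 * rng.standard_normal((n, n))  # small perturbation of A

x = np.linalg.solve(A, b)
dx = np.linalg.solve(A + dA, b) - x      # x + dx solves the perturbed system

kappa = np.linalg.cond(A)                       # 2-norm condition number
rel = np.linalg.norm(dA, 2) / np.linalg.norm(A, 2)
bound = kappa * rel / (1 - kappa * rel)
print(np.linalg.norm(dx) / np.linalg.norm(x), "<=", bound)
```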

= (A + δa) b A b A b r A δa A δa κ(a) δa A κ(a) δa κ(a) δa A A Note that a similar conclusion is reached if we assume that the computed solution x solves a nearby problem in which both A and b are perturbed, rather than just A We see that the important factors in the accuracy of the computed solution are The growth factor G The size of the multipliers m ik, bounded by l The condition number κ(a) The precision u In particular, κ(a) must be large with respect to the accuracy in order to be troublesome For example, consider the scenario where κ(a) = 0 2 and u = 0 3, as opposed to the case where κ(a) = 0 2 and u = 0 50 However, it is important to note that even if A is well-conditioned, the error in the solution can still be very large, if G and l are large Pivoting During Gaussian elimination, it is necessary to interchange rows of the augmented matrix whenever the diagonal element of the column currently being processed, known as the pivot element, is equal to zero However, if we examine the main step in Gaussian elimination, a (j+) ik = a (j) ik m a (j) jk, we can see that any roundoff error in the computation of a (j) jk is amplified by m Because the multipliers can be arbitrarily small, it follows from the previous analysis that the error in the computed solution can be arbitrarily large Therefore, Gaussian elimination is numerically unstable Therefore, it is helpful if it can be ensured that the multipliers are small This can be accomplished by performing row interchanges, or pivoting, even when it is not absolutely necessary to do so for elimination to proceed 5

Partial Pivoting

One approach is called partial pivoting. When eliminating elements in column $j$, we seek the largest element in column $j$, on or below the main diagonal, and then interchange that element's row with row $j$. That is, we find an integer $p$, $j \leq p \leq n$, such that
\[ |a_{pj}^{(j)}| = \max_{j \leq i \leq n} |a_{ij}^{(j)}|. \]
Then, we interchange rows $p$ and $j$. In view of the definition of the multiplier, $m_{ij} = a_{ij}^{(j)}/a_{jj}^{(j)}$, it follows that $|m_{ij}| \leq 1$ for $j = 1, \ldots, n-1$ and $i = j+1, \ldots, n$. Furthermore, while pivoting in this manner requires $O(n^2)$ comparisons to determine the appropriate row interchanges, that extra expense is negligible compared to the overall cost of Gaussian elimination, and therefore is outweighed by the potential reduction in roundoff error. We note that even when partial pivoting is used, the growth factor $G$ can still be as large as $2^{n-1}$, where $A$ is $n \times n$.

Complete Pivoting

While partial pivoting helps to control the propagation of roundoff error, loss of significant digits can still result if, in the abovementioned main step of Gaussian elimination, $m_{ij} a_{jk}^{(j)}$ is much larger in magnitude than $a_{ik}^{(j)}$. Even though $m_{ij}$ is not large, this can still occur if $a_{jk}^{(j)}$ is particularly large. Complete pivoting entails finding integers $p$ and $q$ such that
\[ |a_{pq}^{(j)}| = \max_{j \leq i \leq n,\; j \leq k \leq n} |a_{ik}^{(j)}|, \]
and then using both row and column interchanges to move $a_{pq}^{(j)}$ into the pivot position in row $j$ and column $j$. It has been proven that this is an effective strategy for ensuring that Gaussian elimination is backward stable, meaning that it does not cause the entries of the matrix to grow exponentially as they are updated by elementary row operations, which is undesirable because it can cause undue amplification of roundoff error.
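Below is a sketch of Gaussian elimination with partial pivoting as just described (the function is our own; it also counts interchanges, which the determinant discussion further below will use). Each row swap is applied to $U$, to the permutation record, and to the multipliers already stored in $L$.

```python
import numpy as np

def lu_partial_pivot(A):
    """LU with partial pivoting: returns P, L, U, p with P @ A = L @ U
    and p = number of row interchanges. |L[i, j]| <= 1 by construction."""
    n = A.shape[0]
    U = A.astype(float)
    L = np.eye(n)
    perm = np.arange(n)
    p = 0
    for j in range(n - 1):
        q = j + np.argmax(np.abs(U[j:, j]))   # largest entry on/below diagonal
        if q != j:
            U[[j, q], :] = U[[q, j], :]       # swap rows of U,
            perm[[j, q]] = perm[[q, j]]       # the permutation record,
            L[[j, q], :j] = L[[q, j], :j]     # and the earlier multipliers
            p += 1
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]       # multiplier, |m_ij| <= 1
            U[i, j:] -= L[i, j] * U[j, j:]
    P = np.eye(n)[perm]                       # row i of P is e_perm[i]
    return P, L, U, p

A = np.array([[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]])
P, L, U, p = lu_partial_pivot(A)
print(np.allclose(P @ A, L @ U))              # True
```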

The LU Decomposition with Pivoting

Suppose that pivoting is performed during Gaussian elimination. Then, if row $j$ is interchanged with row $p$, for $p > j$, before entries in column $j$ are eliminated, the matrix $A^{(j)}$ is effectively multiplied by a permutation matrix $P^{(j)}$. A permutation matrix is a matrix obtained by permuting the rows (or columns) of the identity matrix $I$. In $P^{(j)}$, rows $j$ and $p$ of $I$ are interchanged, so that multiplying $A^{(j)}$ on the left by $P^{(j)}$ interchanges these rows of $A^{(j)}$. It follows that the process of Gaussian elimination with pivoting can be described in terms of the matrix multiplications
\[ M^{(n-1)} P^{(n-1)} M^{(n-2)} P^{(n-2)} \cdots M^{(1)} P^{(1)} A = U, \]
where $P^{(k)} = I$ if no interchange is performed before eliminating entries in column $k$. However, because each permutation matrix $P^{(k)}$ at most interchanges row $k$ with row $p$, where $p > k$, there is no difference between applying all of the row interchanges up front and applying each $P^{(k)}$ immediately before $M^{(k)}$, provided the multipliers in each $M^{(k)}$ are permuted accordingly. It follows that
\[ [\tilde{M}^{(n-1)} \tilde{M}^{(n-2)} \cdots \tilde{M}^{(1)}][P^{(n-1)} P^{(n-2)} \cdots P^{(1)}] A = U, \]
and because a product of permutation matrices is a permutation matrix, we have $PA = LU$, where $L$ is defined as before, and $P = P^{(n-1)} P^{(n-2)} \cdots P^{(1)}$. This decomposition exists for any nonsingular matrix $A$.

Once the LU decomposition $PA = LU$ has been computed, we can solve the system $Ax = b$ by first noting that if $x$ is the solution, then $PAx = LUx = Pb$. Therefore, we can obtain $x$ by first solving the system $Ly = Pb$, and then solving $Ux = y$. If $b$ should change, then only these last two systems need to be solved in order to obtain the new solution; as in the case of Gaussian elimination without pivoting, the LU decomposition does not need to be recomputed.

Practical Computation of Determinants, Revisited

When Gaussian elimination is used without pivoting to obtain the factorization $A = LU$, we have $\det(A) = \det(U)$, because $\det(L) = 1$ due to $L$ being unit lower triangular. When pivoting is used, we have the factorization $PA = LU$, where $P$ is a permutation matrix. Because a permutation matrix is orthogonal, that is, $P^T P = I$, and $\det(A) = \det(A^T)$ for any square matrix, it follows that $\det(P)^2 = 1$, or $\det(P) = \pm 1$. Therefore, $\det(A) = \pm \det(U)$, where the sign depends on the sign of $\det(P)$.

To determine this sign, we note that when two rows (or columns) of a matrix are interchanged, the sign of the determinant changes. Therefore, $\det(P) = (-1)^p$, where $p$ is the number of row interchanges that are performed during Gaussian elimination. The quantity $(-1)^p$ is known as the sign of the permutation represented by $P$ that determines the final ordering of the rows. We conclude that
\[ \det(A) = (-1)^p \det(U). \]
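Putting the pieces together, here is a sketch that reuses `lu_partial_pivot` from the earlier sketch (the helper names are ours): it solves $Ax = b$ via $Ly = Pb$ and $Ux = y$, and computes $\det(A) = (-1)^p \det(U)$ from the interchange count.

```python
import numpy as np

def solve_plu(A, b):
    """Solve Ax = b from P A = L U: forward substitution for L y = P b,
    then back substitution for U x = y."""
    P, L, U, _ = lu_partial_pivot(A)   # factor once; reusable if b changes
    n = len(b)
    pb = P @ b
    y = np.empty(n)
    for i in range(n):                 # L is unit lower triangular
        y[i] = pb[i] - L[i, :i] @ y[:i]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

def det_via_lu(A):
    """det(A) = (-1)^p * det(U), where det(U) is the product of its
    diagonal entries and p counts the row interchanges."""
    _, _, U, p = lu_partial_pivot(A)
    return (-1) ** p * np.prod(np.diag(U))
```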