Pivoting. Reading: GV96 Section 3.4, Stew98 Chapter 3: 1.3

In the previous discussions we have assumed that the LU factorization of A exists and that the various versions can compute it in a stable manner. The fact that A is nonsingular is not enough to justify this assumption.

Example: The first step fails for the following nonsingular matrix:

  ( 0  1 )
  ( 1  1 )

If, however, the rows (or the columns) are interchanged, we can proceed. Interchanging rows and/or columns in order to find a nonzero pivot element is called pivoting. It is needed to guarantee existence and stability of the factorization.
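A minimal numerical sketch of this example (using NumPy, which is not part of the original notes): the first elimination step divides by the zero pivot, while one row interchange gives a matrix whose factorization exists.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 1.0]])

# The first elimination step would compute the multiplier a21/a11,
# which divides by the zero pivot a11 = 0.
assert A[0, 0] == 0.0  # no valid pivot in the (1,1) position

# Interchanging the two rows gives a matrix whose LU factorization
# exists: the pivot is now 1 and elimination proceeds.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
PA = P @ A

# One elimination step: subtract (PA[1,0]/PA[0,0]) * row 0 from row 1.
m = PA[1, 0] / PA[0, 0]
U = PA.copy()
U[1, :] -= m * U[0, :]
L = np.array([[1.0, 0.0],
              [m,   1.0]])
assert np.allclose(L @ U, PA)
```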

[Figure: the array after several steps, with the computed U in the upper portion and the active matrix below. The old pivot x sits in position (i, i); y, the element of largest magnitude in the active matrix, sits in position (k, j). Interchanging columns i and j and rows i and k replaces the old pivot x with the preferred pivot y.]

Complete Pivoting

Before eliminating the subdiagonal elements in the i-th column of the transformed matrix (the first column of the active matrix), find the element of largest magnitude and permute it via row and column interchanges to the (i, i) diagonal element position.

- Unconditionally stable.
- Requires examining the entire active portion of the matrix. This has typically been viewed as too expensive, since it was usually implemented as an extra pass through the matrix. The cost can be mitigated if the rank-1 primitive is extended.
- The set of candidate pivot elements is the entire active matrix; therefore the immediate-update forms must be used.
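A compact sketch of Gaussian elimination with complete pivoting (NumPy; the function name and interface are illustrative, not from the notes). Row and column permutations are kept as index vectors p and q, so that the factorization satisfies P A Q = L U, i.e., A[p][:, q] = L U.

```python
import numpy as np

def lu_complete_pivoting(A):
    """Gaussian elimination with complete pivoting: returns index
    vectors p (rows), q (columns) and factors L, U with A[p][:, q] = L U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    p = np.arange(n)  # row permutation as an index vector
    q = np.arange(n)  # column permutation as an index vector
    for i in range(n - 1):
        # Search the entire active matrix A[i:, i:] for the element of
        # largest magnitude and move it to the (i, i) position.
        k, j = np.unravel_index(np.argmax(np.abs(A[i:, i:])), (n - i, n - i))
        k += i; j += i
        A[[i, k], :] = A[[k, i], :]; p[[i, k]] = p[[k, i]]
        A[:, [i, j]] = A[:, [j, i]]; q[[i, j]] = q[[j, i]]
        # Eliminate below the pivot; store the multipliers in place.
        A[i+1:, i] /= A[i, i]
        A[i+1:, i+1:] -= np.outer(A[i+1:, i], A[i, i+1:])
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    return p, q, L, U
```

Because the pivot is the largest element of the whole active matrix, every multiplier has magnitude at most 1, which is the source of the unconditional stability noted above.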

Row interchanges can be represented by an elementary permutation matrix.

Definition: An elementary permutation matrix P, which interchanges rows i and j of the matrix to which it is applied by premultiplication, i.e., PA, is the identity matrix I with rows i and j interchanged.

Example: To interchange rows 1 and 3 of a matrix A of order 3, compute PA where

  P = ( 0  0  1 )
      ( 0  1  0 )
      ( 1  0  0 )

Column interchanges are done by postmultiplication by an identity with two columns interchanged, call it Q. P = P^{-1} and Q = Q^{-1} follow immediately from the definition.

In general, P or Q can be specified by two integers; in the LU factorization, however, one of the integers is always i if the permutation is applied on the i-th step of the algorithm. Px and Qx simply interchange two components of the vector: the implied i-th position and the stored index k_i.

The product of a series of permutations can be stored as a vector with n entries that are initialized to j in the j-th component and to which all of the permutations are applied. The product can then be easily applied to a vector by simple indirect array addressing.
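A small sketch of this storage scheme (NumPy; the particular interchange pairs are made up for illustration): the accumulated product of elementary permutations is held as an index vector and applied by indirect addressing.

```python
import numpy as np

n = 5
perm = np.arange(n)  # initialized to j in the j-th component

# Apply a sequence of elementary interchanges (i, k_i), as they would
# be produced step by step during the factorization (pairs chosen
# purely for illustration; (2, 2) is a "no swap" step).
interchanges = [(0, 3), (1, 4), (2, 2)]
for i, k in interchanges:
    perm[[i, k]] = perm[[k, i]]

# The accumulated product P = P_3 P_2 P_1 is applied to a vector by
# simple indirect array addressing.
x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
Px = x[perm]
```

Applying the interchanges one by one to x gives the same result as the single indexed gather x[perm], but the latter touches x only once.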

We have

  U = (M_{n-1}^{-1} P_{n-1} ··· M_1^{-1} P_1) A (Q_1 ··· Q_{n-1})
  U = T A Q
  T = M_{n-1}^{-1} P_{n-1} ··· M_1^{-1} P_1
  Q = Q_1 ··· Q_{n-1}

To solve Ax = b given T, Q, and U we have

  Ax = b
  T A (Q Q^{-1}) x = T b
  (T A Q)(Q^{-1} x) = T b
  U x̃ = b̃

where x̃ = Q^{-1} x and b̃ = T b.

This yields the algorithm:

- Compute b̃ = T b = M_{n-1}^{-1} P_{n-1} ··· M_1^{-1} P_1 b via a modified forward substitution (T and T^{-1} are not lower triangular).
- Solve U x̃ = b̃ via backward substitution (upper triangular solve).
- Compute x = Q x̃ = Q_1 ··· Q_{n-1} x̃ by applying the permutations.

Partial Pivoting

Rather than searching the entire active matrix, search only its first column for the element of maximum magnitude. Permute the chosen element to the (i, i) diagonal element position via a row interchange. (A similar strategy can be defined by searching the first row of the active matrix and using a column interchange.)

- Less stable than complete pivoting, and may be unstable, but in practice satisfactory stability is achieved.
- Only requires the production of the first column (row) of the active matrix, so delayed updates can be used.
- Only requires row (column) interchanges.

We have

  U = (M_{n-1}^{-1} P_{n-1} ··· M_1^{-1} P_1) A
  U = T A
  T = M_{n-1}^{-1} P_{n-1} ··· M_1^{-1} P_1

To solve Ax = b given T and U we have

  Ax = b
  T A x = T b
  U x = b̃

where b̃ = T b.

This yields the algorithm:

- Compute b̃ = T b = M_{n-1}^{-1} P_{n-1} ··· M_1^{-1} P_1 b via a modified forward substitution (T and T^{-1} are not lower triangular).
- Solve U x = b̃ via backward substitution (upper triangular solve).

Have we lost the LU factorization, since T is not lower triangular?

LEMMA: If A^{-1} exists, then there exists a product of elementary row permutation matrices P = P_{n-1} ··· P_1, a unit lower triangular matrix L, and an upper triangular matrix U such that P A = L U.

In other words, partial pivoting is equivalent to computing the LU factorization of a row-permuted version of the matrix A.
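The lemma can be checked numerically. Here is a minimal partial-pivoting sketch (NumPy; the function name and interface are illustrative) that returns the row permutation as an index vector p together with L and U, so that A[p] = L U, i.e., P A = L U.

```python
import numpy as np

def lu_partial_pivoting(A):
    """Gaussian elimination with partial pivoting: returns an index
    vector p and factors L (unit lower triangular), U (upper
    triangular) with A[p] = L U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    p = np.arange(n)
    for i in range(n - 1):
        # Search only the first column of the active matrix for the
        # element of maximum magnitude; swap it into position (i, i).
        k = i + np.argmax(np.abs(A[i:, i]))
        A[[i, k], :] = A[[k, i], :]   # entire rows, including stored multipliers
        p[[i, k]] = p[[k, i]]
        # Eliminate below the pivot; store the multipliers in place.
        A[i+1:, i] /= A[i, i]
        A[i+1:, i+1:] -= np.outer(A[i+1:, i], A[i, i+1:])
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    return p, L, U
```

Note that the pivot search bounds every multiplier by 1 in magnitude, and that the interchange is applied to entire rows, which is exactly the accumulation of permutations into L discussed below.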

To solve Ax = b given P, L, and U we have

  Ax = b
  P A x = P b
  L U x = b̃
  L y = b̃

where b̃ = P b and y = U x.

This yields the algorithm:

- Compute b̃ = P b.
- Solve L y = b̃ via forward substitution (lower triangular solve).
- Solve U x = y via backward substitution (upper triangular solve).
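The three steps above can be sketched directly (NumPy; the helper names are illustrative). The permutation P is stored as an index vector p, so applying it to b is just the gather b[p].

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L y = b for lower triangular L."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_substitution(U, y):
    """Solve U x = y for upper triangular U."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

def solve_with_plu(p, L, U, b):
    """Given P A = L U, with P stored as an index vector p
    (so that P b = b[p]), solve A x = b."""
    b_tilde = b[p]                        # compute b~ = P b
    y = forward_substitution(L, b_tilde)  # solve L y = b~
    return back_substitution(U, y)        # solve U x = y
```

Both triangular solves cost O(n^2), so once the factorization is available, additional right-hand sides are cheap.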

How do we get L and U in practice?

LEMMA:

  L = M̃_1 M̃_2 ··· M̃_{n-1}
  M̃_i = P_{n-1} ··· P_{i+1} M_i P_{i+1} ··· P_{n-1}

where M_i^{-1} is the Gauss transform from the i-th step of the factorization.

To see that M̃_i is still an elementary unit lower triangular matrix (and therefore that L is unit lower triangular), note that if j > i

  P_j M̂_i^{-1} P_j = P_j (I - l_i e_i^T) P_j = I - P_j l_i e_i^T P_j = I - l̃_i e_i^T

where M̂_i is an elementary unit triangular matrix and l̃_i = P_j l_i. This follows from j > i, since the two rows exchanged in the pivoting process that define P_j have indices larger than i, and therefore row i of P_j equals e_i^T. Moreover, l̃_i is just l_i with two components interchanged and has the same nonzero sparsity structure.
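As a numerical sanity check of this identity (NumPy; the order n, step i, exchanged rows, and Gauss vector are all hypothetical values chosen for illustration):

```python
import numpy as np

n, i = 5, 1            # matrix order and elimination step (hypothetical)
jr, kr = 3, 4          # the two rows exchanged by P_j; both > i

l = np.zeros(n)
l[i+1:] = [0.5, -0.25, 2.0]        # Gauss vector: nonzeros only below i
e_i = np.zeros(n); e_i[i] = 1.0

M_inv = np.eye(n) - np.outer(l, e_i)   # Gauss transform M_i^{-1}

P = np.eye(n)
P[[jr, kr], :] = P[[kr, jr], :]        # elementary permutation P_j

PMP = P @ M_inv @ P
l_tilde = P @ l                        # l~ = P_j l_i: two components swapped
# Because both exchanged rows have indices > i, e_i^T P_j = e_i^T, so
# P_j M_i^{-1} P_j = I - l~ e_i^T: again an elementary Gauss transform.
assert np.allclose(PMP, np.eye(n) - np.outer(l_tilde, e_i))
```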

It follows that for BLAS2-based algorithms we need to apply the permutation to all previous M_i^{-1} and to the rows of the active matrix. This can be implemented by interchanging the entire row i with the entire row j.

Partial pivoting in a BLAS2-based LU

[Figure: the array holding the computed portions of L and U, with the active matrix below and the old and new pivot rows marked.]

- Interchange all elements in the old and new pivot rows.
- The elements interchanged in the L portion update the M_i.
- The interchange in the active portion updates A.
- At the end of the factorization, the LU stored in the array is that of PA.

For BLAS3-based algorithms we have a similar procedure. If k = n/ω then

  M_{k-1}^{-1} P_{k-1} ··· M_1^{-1} P_1 A = U

where M_i^{-1} is the rank-ω update on the i-th block elimination step and P_i is the accumulated permutation matrix from the LU factorization of the rectangular matrix defined by the first ω columns of the active matrix. This accumulation is done as described above for the BLAS2-based methods.

Basic step of a BLAS3-based LU with partial pivoting

[Figure: block partition of the array, with the panel A (the first ω columns of the active matrix), the block row B to its right, the trailing matrix C below B, and the previously computed columns of L split into L_1 and L_2.]

1. Compute LU = A (the panel) with pivoting to get P_i.
2. Apply P_i to B and C.
3. Apply P_i to L_2.
4. Update B using L from step (1).
5. Update C via a rank-ω update.

Note that, due to pivoting, the BLAS2-based factorization of the first ω columns of the active matrix is no longer decomposable into the LU factorization of an ω × ω matrix followed by a triangular solve with multiple right-hand-side vectors. This portion of the algorithm can therefore become a bottleneck. A two-level version would use a block LU factorization with block size θ < ω to mitigate this bottleneck.
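The five-step blocked procedure can be sketched compactly (NumPy; w plays the role of ω, and the function name is illustrative). The panel is factored by an unblocked, BLAS2-style loop; full-row interchanges implement steps (2) and (3); the triangular solve and rank-ω update are steps (4) and (5).

```python
import numpy as np

def lu_blocked(A, w=2):
    """Right-looking blocked LU with partial pivoting, block width w.
    Returns an index vector p and factors L, U with A[p] = L U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    p = np.arange(n)
    for s in range(0, n, w):
        e = min(s + w, n)
        # (1) unblocked LU with partial pivoting on the panel A[s:, s:e]
        for i in range(s, e):
            k = i + np.argmax(np.abs(A[i:, i]))
            if k != i:
                # (2)+(3) interchange entire rows: this applies P_i to the
                # panel, to B and C on the right, and to L_2 on the left
                A[[i, k], :] = A[[k, i], :]
                p[[i, k]] = p[[k, i]]
            A[i+1:, i] /= A[i, i]
            # delayed update: only the panel columns are updated here
            A[i+1:, i+1:e] -= np.outer(A[i+1:, i], A[i, i+1:e])
        if e < n:
            # (4) update B via a triangular solve with the panel's L11
            L11 = np.tril(A[s:e, s:e], -1) + np.eye(e - s)
            A[s:e, e:] = np.linalg.solve(L11, A[s:e, e:])
            # (5) rank-w update of the trailing matrix C
            A[e:, e:] -= A[e:, s:e] @ A[s:e, e:]
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    return p, L, U
```

Most of the arithmetic lands in step (5), a matrix-matrix product, which is why the blocked form maps well onto BLAS3; the serial panel factorization is exactly the potential bottleneck discussed above.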