STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 13

Date: December 3, 2018, version 1. Comments, bug reports: lekheng@galton.uchicago.edu

1 need for pivoting

- we saw that under proper circumstances, we can write $A = LU$ where
\[
L = \begin{bmatrix}
1 & 0 & \cdots & \cdots & 0\\
l_{21} & 1 & 0 & \cdots & 0\\
l_{31} & l_{32} & \ddots & & \vdots\\
\vdots & \vdots & \ddots & \ddots & 0\\
l_{n1} & l_{n2} & \cdots & l_{n,n-1} & 1
\end{bmatrix},
\qquad
U = \begin{bmatrix}
u_{11} & u_{12} & \cdots & u_{1n}\\
0 & u_{22} & \cdots & u_{2n}\\
\vdots & \ddots & \ddots & \vdots\\
0 & \cdots & 0 & u_{nn}
\end{bmatrix}
\]
- what exactly are proper circumstances? we must have $a_{kk}^{(k)} \neq 0$, or we cannot proceed with the decomposition
- for example, if
\[
A = \begin{bmatrix} 0 & 3\\ 7 & 2 \end{bmatrix}
\qquad\text{or}\qquad
A = \begin{bmatrix} 4 & 2 & 6\\ 4 & 2 & 9\\ 3 & 7 & 2 \end{bmatrix},
\]
Gaussian elimination will fail; note that both matrices are nonsingular
- in the first case, it fails immediately; in the second case, it fails after the subdiagonal entries in the first column are zeroed, and we find that $a_{22}^{(2)} = 0$
- in general, we must have $\det A_{ii} \neq 0$ for $i = 1, \dots, n$, where
\[
A_{ii} = \begin{bmatrix} a_{11} & \cdots & a_{1i}\\ \vdots & & \vdots\\ a_{i1} & \cdots & a_{ii} \end{bmatrix},
\]
for the LU factorization to exist
- the existence of the LU factorization (without pivoting) can be guaranteed by several conditions; one example is column diagonal dominance: if a nonsingular $A \in \mathbb{R}^{n \times n}$ satisfies
\[
|a_{jj}| \geq \sum_{\substack{i=1\\ i \neq j}}^{n} |a_{ij}|, \qquad j = 1, \dots, n,
\]
then one can guarantee that Gaussian elimination as described above produces $A = LU$ with $|l_{ij}| \leq 1$ (the usual type of diagonal dominance, i.e., $|a_{ii}| \geq \sum_{j=1,\, j \neq i}^{n} |a_{ij}|$ for $i = 1, \dots, n$, is called row diagonal dominance)
- there are necessary and sufficient conditions guaranteeing the existence of the LU decomposition, but those are difficult to check in practice and we do not state them here
- how can we obtain the LU factorization of a general nonsingular matrix? if $A$ is nonsingular, then some element of the first column must be nonzero
- if $a_{i1} \neq 0$, then we can interchange row $i$ with row $1$ and proceed
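to watch the breakdown happen, here is a minimal numpy sketch of elimination without pivoting (an illustration, not part of the original notes; `lu_nopivot` is a made-up helper name), run on the two matrices above:

```python
# a minimal sketch of LU factorization *without* pivoting, to watch it
# break down on the two example matrices above
import numpy as np

def lu_nopivot(A):
    """Doolittle-style LU without pivoting; raises on a zero pivot."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        if U[k, k] == 0.0:
            raise ZeroDivisionError(f"zero pivot at step {k + 1}")
        L[k+1:, k] = U[k+1:, k] / U[k, k]               # multipliers l_ik
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])   # zero below the pivot
    return L, U

for A in ([[0, 3], [7, 2]],
          [[4, 2, 6], [4, 2, 9], [3, 7, 2]]):
    try:
        lu_nopivot(A)
    except ZeroDivisionError as e:
        print(e)   # first matrix fails at step 1, second at step 2
```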

- this is equivalent to multiplying $A$ on the left by a permutation matrix $\Pi_1$ that interchanges row $1$ and row $i$, i.e., the identity matrix with rows $1$ and $i$ swapped
- thus $M_1 \Pi_1 A = A_2$ (refer to earlier lecture notes for more information about permutation matrices)
- then, since $A_2$ is nonsingular, some element of column $2$ of $A_2$ below the diagonal must be nonzero
- proceeding as before, we compute $M_2 \Pi_2 A_2 = A_3$, where $\Pi_2$ is another permutation matrix
- continuing, we obtain
\[
(M_{n-1} \Pi_{n-1} \cdots M_1 \Pi_1) A = U
\]
- it can easily be shown that $\Pi A = LU$ where $\Pi$ is a permutation matrix
- easy, but a bit of a pain because the notation is cumbersome, so we will be informal, but you'll get the idea
- for example, if after two steps we get (recall that permutation matrices are orthogonal matrices, so $\Pi^{-1} = \Pi^{\mathsf{T}}$)
\[
A = (M_2 \Pi_2 M_1 \Pi_1)^{-1} A_3
  = \Pi_1^{\mathsf{T}} M_1^{-1} \Pi_2^{\mathsf{T}} M_2^{-1} A_3
  = \Pi_1^{\mathsf{T}} \Pi_2^{\mathsf{T}} (\Pi_2 M_1^{-1} \Pi_2^{\mathsf{T}}) M_2^{-1} A_3
  = \Pi^{\mathsf{T}} L_1 L_2 A_3,
\]
then
- $\Pi = \Pi_2 \Pi_1$ is a permutation matrix
- $L_2 = M_2^{-1}$ is a unit lower triangular matrix
- $L_1 = \Pi_2 M_1^{-1} \Pi_2^{\mathsf{T}}$ will always be a unit lower triangular matrix, because $M_1^{-1}$ is of the form
\[
M_1^{-1} = \begin{bmatrix} 1 & 0\\ l & I \end{bmatrix},
\qquad
l = \begin{bmatrix} l_{21}\\ \vdots\\ l_{n1} \end{bmatrix},
\tag{1}
\]
whereas $\Pi_2$ must be of the form
\[
\Pi_2 = \begin{bmatrix} 1 & 0\\ 0 & \tilde{\Pi}_2 \end{bmatrix}
\]
for some $(n-1) \times (n-1)$ permutation matrix $\tilde{\Pi}_2$, and so
\[
\Pi_2 M_1^{-1} \Pi_2^{\mathsf{T}} = \begin{bmatrix} 1 & 0\\ \tilde{\Pi}_2 l & I \end{bmatrix};
\]
in other words, $\Pi_2 M_1^{-1} \Pi_2^{\mathsf{T}}$ also has the form in (1)
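before continuing, the conjugation fact above is easy to sanity-check numerically; the following small sketch (mine, not from the notes) builds a random $M_1^{-1}$ of the form in (1) and a random $\Pi_2$ fixing row 1, and confirms that $\Pi_2 M_1^{-1} \Pi_2^{\mathsf{T}}$ is again unit lower triangular with the same multipliers, permuted:

```python
# numerical check that conjugating M_1^{-1} by a permutation Pi_2 that
# fixes row 1 preserves the form in (1)
import numpy as np

n = 5
rng = np.random.default_rng(0)

M1inv = np.eye(n)
M1inv[1:, 0] = rng.standard_normal(n - 1)     # multipliers l_21, ..., l_n1

Pi2 = np.zeros((n, n))
Pi2[0, 0] = 1.0                               # Pi_2 = diag(1, Pi~_2)
Pi2[1:, 1:] = np.eye(n - 1)[rng.permutation(n - 1)]

B = Pi2 @ M1inv @ Pi2.T
print(np.allclose(B, np.tril(B)), np.allclose(np.diag(B), 1.0))  # True True
print(np.allclose(np.sort(B[1:, 0]), np.sort(M1inv[1:, 0])))     # True: same
# multipliers as before, just permuted by Pi~_2
```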

- if we do one more step, we get
\[
A = (M_3 \Pi_3 M_2 \Pi_2 M_1 \Pi_1)^{-1} A_4
  = \Pi_1^{\mathsf{T}} M_1^{-1} \Pi_2^{\mathsf{T}} M_2^{-1} \Pi_3^{\mathsf{T}} M_3^{-1} A_4
  = \Pi_1^{\mathsf{T}} \Pi_2^{\mathsf{T}} \Pi_3^{\mathsf{T}}
    (\Pi_3 \Pi_2 M_1^{-1} \Pi_2^{\mathsf{T}} \Pi_3^{\mathsf{T}})
    (\Pi_3 M_2^{-1} \Pi_3^{\mathsf{T}}) M_3^{-1} A_4
  = \Pi^{\mathsf{T}} L_1 L_2 L_3 A_4,
\]
where
- $\Pi = \Pi_3 \Pi_2 \Pi_1$ is a permutation matrix
- $L_3 = M_3^{-1}$ is a unit lower triangular matrix
- $L_2 = \Pi_3 M_2^{-1} \Pi_3^{\mathsf{T}}$ will always be a unit lower triangular matrix, because $M_2^{-1}$ is of the form
\[
M_2^{-1} = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & l & I \end{bmatrix},
\qquad
l = \begin{bmatrix} l_{32}\\ \vdots\\ l_{n2} \end{bmatrix},
\tag{2}
\]
whereas $\Pi_3$ must be of the form
\[
\Pi_3 = \begin{bmatrix} I_2 & 0\\ 0 & \tilde{\Pi}_3 \end{bmatrix}
\]
for some $(n-2) \times (n-2)$ permutation matrix $\tilde{\Pi}_3$, and so
\[
\Pi_3 M_2^{-1} \Pi_3^{\mathsf{T}} = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & \tilde{\Pi}_3 l & I \end{bmatrix};
\]
in other words, $\Pi_3 M_2^{-1} \Pi_3^{\mathsf{T}}$ also has the form in (2)
- $L_1 = \Pi_3 \Pi_2 M_1^{-1} \Pi_2^{\mathsf{T}} \Pi_3^{\mathsf{T}}$ will always be a unit lower triangular matrix for the same reason, because $\Pi_3 \Pi_2$ must have the form
\[
\Pi_3 \Pi_2 = \begin{bmatrix} 1 & 0\\ 0 & \tilde{\Pi}_{32} \end{bmatrix},
\qquad
\tilde{\Pi}_{32} = \begin{bmatrix} 1 & 0\\ 0 & \tilde{\Pi}_3 \end{bmatrix} \tilde{\Pi}_2,
\]
for some $(n-1) \times (n-1)$ permutation matrix $\tilde{\Pi}_{32}$
- more generally, if we keep doing this, then
\[
A = \Pi^{\mathsf{T}} L_1 L_2 \cdots L_{n-1} A_n,
\]
where
- $\Pi = \Pi_{n-1} \Pi_{n-2} \cdots \Pi_1$ is a permutation matrix
- $L_{n-1} = M_{n-1}^{-1}$ is a unit lower triangular matrix
- $L_k = \Pi_{n-1} \cdots \Pi_{k+1} M_k^{-1} \Pi_{k+1}^{\mathsf{T}} \cdots \Pi_{n-1}^{\mathsf{T}}$ is a unit lower triangular matrix for all $k = 1, \dots, n-2$
- $A_n = U$ is an upper triangular matrix
- $L = L_1 L_2 \cdots L_{n-1}$ is a unit lower triangular matrix
- this algorithm with the row permutations is called Gaussian elimination with partial pivoting, or GEPP for short; we will say more in the next section
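assembled into code, GEPP looks as follows; this is an illustrative sketch, not the notes' reference implementation (the helper `gepp` and its interface are made up), cross-checked against `scipy.linalg.lu`, which returns factors with $A = PLU$:

```python
# a sketch of GEPP as derived above: at each step, permute the largest
# remaining entry of the pivot column into place, eliminate below it, and
# keep the already-computed multipliers in sync with the row swaps
import numpy as np
import scipy.linalg

def gepp(A):
    U = np.array(A, dtype=float)
    n = U.shape[0]
    L, piv = np.eye(n), np.arange(n)
    for k in range(n - 1):
        i = k + np.argmax(np.abs(U[k:, k]))   # pivot: largest |a_ik|, i >= k
        if i != k:                            # apply Pi_k to U, to the record
            U[[k, i], :] = U[[i, k], :]       # of row swaps, and to the
            piv[[k, i]] = piv[[i, k]]         # multipliers stored in L so far
            L[[k, i], :k] = L[[i, k], :k]
        L[k+1:, k] = U[k+1:, k] / U[k, k]
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])
    return piv, L, U

A = np.array([[4., 2, 6], [4, 2, 9], [3, 7, 2]])   # failed without pivoting
piv, L, U = gepp(A)
print(np.allclose(A[piv], L @ U))       # True: Pi A = L U
print(np.abs(L).max() <= 1.0)           # True: partial pivoting, |l_ij| <= 1
P, L2, U2 = scipy.linalg.lu(A)          # scipy's pivoted LU: A = P L U
print(np.allclose(P.T @ A, L2 @ U2))    # True
```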

2 pivoting strategies

- the $(k,k)$ entry at step $k$ of Gaussian elimination is called the pivoting entry, or just pivot for short
- in the preceding section, we said that if the pivoting entry is zero, i.e., $a_{kk}^{(k)} = 0$, then we just need to find an entry below it in the same column, i.e., $a_{ik}^{(k)} \neq 0$ for some $i > k$, and then permute this entry into the pivoting position before carrying on with the algorithm
- but it is really better to choose the largest entry below the pivot, and not just any nonzero entry
- that is, the permutation $\Pi_k$ is chosen so that row $k$ is interchanged with row $i$, where
\[
|a_{ik}^{(k)}| = \max_{k \leq l \leq n} |a_{lk}^{(k)}|,
\]
i.e., upon this permutation, we are guaranteed $|a_{kk}^{(k)}| = \max_{k \leq l \leq n} |a_{lk}^{(k)}|$
- this guarantees that $|l_{kj}| \leq 1$ for all $k$ and $j$
- this strategy is known as partial pivoting, and it is guaranteed to produce an LU factorization if $A \in \mathbb{R}^{m \times n}$ has full column rank, i.e., $\operatorname{rank}(A) = n \leq m$ (it can fail if $A$ doesn't have full column rank; think of what happens when $A$ has a column of zeros)
- another common strategy, complete pivoting, uses both row and column interchanges to ensure that at step $k$ of the algorithm, the element $a_{kk}^{(k)}$ is the largest element in absolute value in the entire submatrix obtained by deleting the first $k-1$ rows and columns, i.e.,
\[
|a_{kk}^{(k)}| = \max_{k \leq i,\, j \leq n} |a_{ij}^{(k)}|
\]
- in this case we need both row and column permutation matrices, i.e., we get $\Pi_1 A \Pi_2 = LU$ when we do complete pivoting (a code sketch appears below, after the next section)
- complete pivoting is necessary when $\operatorname{rank}(A) < \min\{m, n\}$
- there are yet other pivoting strategies, due to considerations such as preserving sparsity (if you're interested, look up the minimum degree algorithm or the Markowitz algorithm) or a tradeoff between partial and complete pivoting (e.g., rook pivoting)

3 uniqueness of the LU factorization

- the LU decomposition of a nonsingular matrix, if it exists (i.e., without row or column permutations), is unique
- if $A$ has two LU decompositions, $A = L_1 U_1$ and $A = L_2 U_2$, then from $L_1 U_1 = L_2 U_2$ we obtain $L_2^{-1} L_1 = U_2 U_1^{-1}$
- the inverse of a unit lower triangular matrix is a unit lower triangular matrix, and the product of two unit lower triangular matrices is a unit lower triangular matrix, so $L_2^{-1} L_1$ must be a unit lower triangular matrix
- similarly, $U_2 U_1^{-1}$ is an upper triangular matrix
- the only matrix that is both upper triangular and unit lower triangular is the identity matrix $I$, so we must have $L_1 = L_2$ and $U_1 = U_2$
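here is the complete-pivoting sketch promised in the previous section (again illustrative; `gecp` is a made-up helper); it tracks both row and column permutations so that $\Pi_1 A \Pi_2 = LU$, and on a rank-deficient input the zero trailing diagonal of $U$ exposes the rank:

```python
# a sketch of GECP: at step k choose the largest |a_ij| in the trailing
# submatrix, swap it into the (k,k) position with a row AND a column
# interchange, then eliminate below it
import numpy as np

def gecp(A, tol=1e-12):
    U = np.array(A, dtype=float)
    n = U.shape[0]
    L = np.eye(n)
    rp, cp = np.arange(n), np.arange(n)       # row / column permutations
    for k in range(n - 1):
        sub = np.abs(U[k:, k:])
        i, j = np.unravel_index(np.argmax(sub), sub.shape)
        i, j = i + k, j + k
        U[[k, i], :] = U[[i, k], :]; rp[[k, i]] = rp[[i, k]]
        L[[k, i], :k] = L[[i, k], :k]         # keep multipliers in sync
        U[:, [k, j]] = U[:, [j, k]]; cp[[k, j]] = cp[[j, k]]
        if abs(U[k, k]) < tol:                # whole trailing block is ~0:
            break                             # rank(A) = k, stop eliminating
        L[k+1:, k] = U[k+1:, k] / U[k, k]
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])
    return rp, cp, L, U

A = np.array([[4., 2, 6], [4, 2, 9], [3, 7, 2]])
rp, cp, L, U = gecp(A)
print(np.allclose(A[np.ix_(rp, cp)], L @ U))   # True: Pi_1 A Pi_2 = L U

B = np.array([[1., 2, 3], [2, 4, 6], [1, 1, 1]])   # rank 2
rp, cp, L, U = gecp(B)
print(np.allclose(B[np.ix_(rp, cp)], L @ U), np.diag(U))  # trailing 0 on diag
```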

4 Gauss-Jordan elimination

- a variant of Gaussian elimination is called Gauss-Jordan elimination
- it entails zeroing elements above the diagonal as well as below, transforming an $m \times n$ matrix into reduced row echelon form, i.e., a form where all pivoting entries in $U$ are $1$ and all entries above the pivots are zeros
- this is what you probably learnt in your undergraduate linear algebra class, e.g.,
\[
A = \begin{bmatrix} 1 & 3 & 1 & 9\\ 0 & 2 & 2 & 8\\ 3 & 11 & 5 & 35 \end{bmatrix}
\to \begin{bmatrix} 1 & 3 & 1 & 9\\ 0 & 2 & 2 & 8\\ 0 & 2 & 2 & 8 \end{bmatrix}
\to \begin{bmatrix} 1 & 3 & 1 & 9\\ 0 & 1 & 1 & 4\\ 0 & 0 & 0 & 0 \end{bmatrix}
\to \begin{bmatrix} 1 & 0 & -2 & -3\\ 0 & 1 & 1 & 4\\ 0 & 0 & 0 & 0 \end{bmatrix} = U
\]
- the main drawback is that the elimination process can be numerically unstable, since the multipliers can be large
- furthermore, the way it is done in undergraduate linear algebra courses is that the elimination matrices (i.e., the $L$ and $\Pi$) are not stored
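a short sketch of Gauss-Jordan elimination follows (illustrative; the helper `rref` is made up, and unlike the hand computation above it also does partial pivoting within each column, so the intermediate matrices differ; the final answer is the same, because the reduced row echelon form of a matrix is unique):

```python
# a sketch of Gauss-Jordan elimination to reduced row echelon form,
# run on the 3x4 example above
import numpy as np

def rref(A, tol=1e-12):
    A = np.array(A, dtype=float)
    m, n = A.shape
    r = 0                                     # next pivot row
    for j in range(n):
        i = r + np.argmax(np.abs(A[r:, j]))   # partial pivoting in column j
        if abs(A[i, j]) < tol:
            continue                          # no pivot in this column
        A[[r, i], :] = A[[i, r], :]
        A[r, :] /= A[r, j]                    # make the pivot 1
        mask = np.arange(m) != r
        A[mask, :] -= np.outer(A[mask, j], A[r, :])  # zero the whole column,
        r += 1                                       # above AND below
        if r == m:
            break
    return A

A = [[1, 3, 1, 9], [0, 2, 2, 8], [3, 11, 5, 35]]
print(rref(A))
# [[ 1.  0. -2. -3.]
#  [ 0.  1.  1.  4.]
#  [ 0.  0.  0.  0.]]
```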

5 condensed LU factorization

- just like QR and SVD, LU factorization with complete pivoting has a condensed form too
- let $A \in \mathbb{R}^{m \times n}$ and $\operatorname{rank}(A) = r \leq \min\{m, n\}$; recall that GECP yields
\[
\Pi_1 A \Pi_2 = LU
= \begin{bmatrix} L_1 & 0\\ L_2 & I_{m-r} \end{bmatrix}
  \begin{bmatrix} U_1 & U_2\\ 0 & 0 \end{bmatrix}
= \begin{bmatrix} L_1\\ L_2 \end{bmatrix}
  \begin{bmatrix} U_1 & U_2 \end{bmatrix}
=: \widetilde{L} \widetilde{U},
\]
where $L_1 \in \mathbb{R}^{r \times r}$ is unit lower triangular (thus nonsingular) and $U_1 \in \mathbb{R}^{r \times r}$ is also nonsingular
- note that $\widetilde{L} \in \mathbb{R}^{m \times r}$ and $\widetilde{U} \in \mathbb{R}^{r \times n}$, and so
\[
A = (\Pi_1^{\mathsf{T}} \widetilde{L})(\widetilde{U} \Pi_2^{\mathsf{T}})
\]
is a rank-retaining factorization

6 LDU and LDL^T factorizations

- if $A \in \mathbb{R}^{n \times n}$ has nonsingular principal submatrices $A_{1:k,1:k}$ for $k = 1, \dots, n$, then there exist a unit lower triangular matrix $L \in \mathbb{R}^{n \times n}$, a unit upper triangular matrix $U \in \mathbb{R}^{n \times n}$, and a diagonal matrix $D = \operatorname{diag}(d_{11}, \dots, d_{nn}) \in \mathbb{R}^{n \times n}$ such that
\[
A = LDU =
\begin{bmatrix}
1 & & & \\
l_{21} & 1 & & \\
\vdots & & \ddots & \\
l_{n1} & l_{n2} & \cdots & 1
\end{bmatrix}
\begin{bmatrix}
d_{11} & & & \\
& d_{22} & & \\
& & \ddots & \\
& & & d_{nn}
\end{bmatrix}
\begin{bmatrix}
1 & u_{12} & \cdots & u_{1n}\\
& 1 & \cdots & u_{2n}\\
& & \ddots & \vdots\\
& & & 1
\end{bmatrix}
\]
- this is called the LDU factorization of $A$
- if $A$ is furthermore symmetric, then $L = U^{\mathsf{T}}$, and this is called the $\mathrm{LDL}^{\mathsf{T}}$ factorization
- if they exist, then both the LDU and $\mathrm{LDL}^{\mathsf{T}}$ factorizations are unique (exercise)
- if a symmetric $A$ has an $\mathrm{LDL}^{\mathsf{T}}$ factorization and $d_{ii} > 0$ for all $i = 1, \dots, n$, then $A$ is positive definite
- in fact, even though $d_{11}, \dots, d_{nn}$ are not the eigenvalues of $A$ (why not?), they must have the same signs as the eigenvalues of $A$, i.e., if $A$ has $p$ positive eigenvalues, $q$ negative eigenvalues, and $z$ zero eigenvalues, then there are exactly $p$, $q$, and $z$ positive, negative, and zero entries in $d_{11}, \dots, d_{nn}$, a consequence of the Sylvester law of inertia
- unfortunately, both the LDU and $\mathrm{LDL}^{\mathsf{T}}$ factorizations are difficult to compute, because the condition on the principal submatrices is difficult to check in advance; algorithms for computing them are invariably unstable, because the size of the multipliers cannot be bounded in terms of the entries of $A$
- for example, the $\mathrm{LDL}^{\mathsf{T}}$ factorization of a $2 \times 2$ symmetric matrix is
\[
\begin{bmatrix} a & c\\ c & d \end{bmatrix}
= \begin{bmatrix} 1 & 0\\ c/a & 1 \end{bmatrix}
  \begin{bmatrix} a & c\\ 0 & d - (c/a)c \end{bmatrix}
= \begin{bmatrix} 1 & 0\\ c/a & 1 \end{bmatrix}
  \begin{bmatrix} a & 0\\ 0 & d - (c/a)c \end{bmatrix}
  \begin{bmatrix} 1 & c/a\\ 0 & 1 \end{bmatrix},
\]
so
\[
\begin{bmatrix} \varepsilon & 1\\ 1 & \varepsilon \end{bmatrix}
= \begin{bmatrix} 1 & 0\\ 1/\varepsilon & 1 \end{bmatrix}
  \begin{bmatrix} \varepsilon & 0\\ 0 & \varepsilon - 1/\varepsilon \end{bmatrix}
  \begin{bmatrix} 1 & 1/\varepsilon\\ 0 & 1 \end{bmatrix}
\]
- the elements of $L$ and $D$ are arbitrarily large when $\varepsilon$ is small
- note that you can't do partial or complete pivoting in the $\mathrm{LDL}^{\mathsf{T}}$ factorization, since those could destroy the symmetry in $A$
- nonetheless, there is one special case when the $\mathrm{LDL}^{\mathsf{T}}$ factorization not only exists but can be computed in an efficient and stable way: when $A$ is positive definite
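the $2 \times 2$ example is easy to reproduce in floating point; in the sketch below (illustrative; `ldlt_2x2` is a made-up helper) the entries of $L$ and $D$ grow like $1/\varepsilon$, and the $(2,2)$ entry of the recomputed product $LDL^{\mathsf{T}}$ loses relative accuracy like $1/\varepsilon^2$ to cancellation; for a stable, symmetry-preserving alternative, see e.g. `scipy.linalg.ldl`, which computes a Bunch-Kaufman-style factorization with symmetric pivoting:

```python
# illustration of the 2x2 LDL^T example above: L and D blow up like
# 1/eps, and the (2,2) entry of L D L^T is destroyed by cancellation
import numpy as np

def ldlt_2x2(a, c, d):
    """unpivoted LDL^T of [[a, c], [c, d]], exactly as in the formula above"""
    l = c / a
    L = np.array([[1.0, 0.0], [l, 1.0]])
    D = np.diag([a, d - l * c])
    return L, D

for eps in (1e-4, 1e-8, 1e-12):
    L, D = ldlt_2x2(eps, 1.0, eps)
    a22 = (L @ D @ L.T)[1, 1]          # should equal eps
    print(f"eps={eps:.0e}  max|L|={np.abs(L).max():.1e}  "
          f"rel err in a22={abs(a22 - eps) / eps:.1e}")
```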