Gaussian Elimination without/with Pivoting and Cholesky Decomposition


Gaussian Elimination WITHOUT pivoting

Notation: For a matrix $A \in \mathbb{R}^{n\times n}$ we define for $k \in \{1,\dots,n\}$ the leading principal submatrix
\[
A^{(k)} := \begin{pmatrix} a_{11} & \cdots & a_{1k} \\ \vdots & & \vdots \\ a_{k1} & \cdots & a_{kk} \end{pmatrix}.
\]
We found out that Gaussian elimination without pivoting can fail even if the matrix $A$ is nonsingular.

Example: For
\[
A = \begin{pmatrix} 4 & 2 & 2 \\ 2 & 1 & 3 \\ 2 & 2 & 2 \end{pmatrix}
\]
we use $l_{21} = \tfrac{2}{4}$, $l_{31} = \tfrac{2}{4}$ and obtain after step 1 of the elimination
\[
U = \begin{pmatrix} 4 & 2 & 2 \\ 0 & 0 & 2 \\ 0 & 1 & 1 \end{pmatrix}.
\]
Now we have $u_{22} = 0$ and the algorithm fails in column 2.

Note that a linear system with the matrix $A^{(2)} = \begin{pmatrix} 4 & 2 \\ 2 & 1 \end{pmatrix}$ is equivalent to a linear system with the matrix $U^{(2)} = \begin{pmatrix} 4 & 2 \\ 0 & 0 \end{pmatrix}$. This matrix is singular, hence the matrix $A^{(2)}$ is singular. This explains why Gaussian elimination fails in column 2: the matrix $A^{(1)} = (4)$ was nonsingular, so elimination worked in column 1. But the matrix $A^{(2)}$ is singular, so the elimination fails in column 2.

For a matrix $A \in \mathbb{R}^{n\times n}$ we consider the submatrices $A^{(1)},\dots,A^{(n)}$. If all of these matrices are nonsingular, then Gaussian elimination WITHOUT pivoting succeeds, and we obtain an upper triangular matrix $U$ with nonzero elements on the diagonal. If one of these submatrices is singular, let $A^{(k)}$ be the first submatrix which is singular. Then Gaussian elimination WITHOUT pivoting works in columns $1,\dots,k-1$, but fails in column $k$.

Theorem: For a matrix $A \in \mathbb{R}^{n\times n}$ the following three statements are equivalent:
1. All the submatrices $A^{(1)},\dots,A^{(n)}$ are nonsingular.
2. Gaussian elimination WITHOUT pivoting succeeds and yields $u_{jj} \ne 0$ for $j = 1,\dots,n$.
3. The matrix $A$ has a decomposition $A = LU$ where $L$ is lower triangular with 1s on the diagonal and $U$ is upper triangular with nonzero diagonal elements.

Proof: (1) $\Rightarrow$ (2): Assume Gaussian elimination fails in column $k$, yielding a matrix $U$ with $u_{kk} = 0$. Then a linear system with the matrix $A^{(k)}$ is equivalent to a linear system with the matrix
\[
U^{(k)} = \begin{pmatrix} u_{11} & \cdots & u_{1,k-1} & u_{1k} \\ & \ddots & \vdots & \vdots \\ & & u_{k-1,k-1} & u_{k-1,k} \\ & & & 0 \end{pmatrix}.
\]
This matrix is singular, hence the matrix $A^{(k)}$ is singular.

(2) $\Rightarrow$ (3): The row operations given by the multipliers $l_{ij}$ turn the matrix $A$ into the matrix $U$. Hence reversing these row operations turns the matrix $U$ back into the matrix $A$, yielding the equation $A = LU$.

(3) $\Rightarrow$ (1): Let $j \in \{1,\dots,n\}$. $A = LU$ implies $A^{(j)} = L^{(j)} U^{(j)}$. Hence solving a linear system with the matrix $A^{(j)}$ is equivalent to solving a linear system with the matrix $U^{(j)}$. Since $U^{(j)}$ is nonsingular, the matrix $A^{(j)}$ is nonsingular.
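As an illustration (not part of the original notes), the elimination procedure can be sketched in MATLAB, the language used for the Cholesky code later in these notes. The function name lu_nopivot and the error message are made up; the sketch stores the multipliers $l_{ik}$ in $L$ and stops exactly when a pivot $u_{kk}$ is zero, i.e., when $A^{(k)}$ is singular.

    function [L,U] = lu_nopivot(A)
    % Sketch: Gaussian elimination WITHOUT pivoting, A = L*U.
    % L is lower triangular with 1s on the diagonal, U is upper triangular.
    n = size(A,1);
    L = eye(n);
    U = A;
    for k = 1:n-1
        if U(k,k) == 0
            error('zero pivot in column %d: A^(%d) is singular', k, k)
        end
        for i = k+1:n
            L(i,k) = U(i,k)/U(k,k);           % multiplier l_ik
            U(i,:) = U(i,:) - L(i,k)*U(k,:);  % row operation on row i
        end
    end
    % (a final check of U(n,n) ~= 0 would complete the success test)

Applied to the example above, [L,U] = lu_nopivot([4 2 2; 2 1 3; 2 2 2]) stops with a zero pivot in column 2, as predicted.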

Gaussian Elimination WITH Pivoting

We now have for each column several pivot candidates: the diagonal element and all elements below it. If one of the pivot candidates is nonzero we use a row interchange to move it to the diagonal position, and we can perform elimination in this column. If all pivot candidates are zero the algorithm breaks down. If we can perform elimination for columns $1,\dots,n-1$ and then obtain $u_{nn} = 0$ we say that the algorithm breaks down in column $n$. The algorithm succeeds if we can perform elimination in columns $1,\dots,n-1$ and we obtain $u_{nn} \ne 0$. In this case we obtain an upper triangular matrix $U$ with nonzero diagonal elements. If we start with a nonsingular matrix $A$, then Gaussian elimination with pivoting succeeds. Solving a linear system $Ax = b$ is then equivalent to a linear system $Ux = y$ which we can solve by back substitution.

Theorem: For a matrix $A \in \mathbb{R}^{n\times n}$ the following three statements are equivalent:
1. The matrix $A \in \mathbb{R}^{n\times n}$ is nonsingular.
2. Gaussian elimination WITH pivoting succeeds and yields $u_{jj} \ne 0$ for $j = 1,\dots,n$.
3. There exists a decomposition
\[
\begin{pmatrix} \text{row } p_1 \text{ of } A \\ \vdots \\ \text{row } p_n \text{ of } A \end{pmatrix} = LU
\]
where $L$ is lower triangular with 1s on the diagonal, $U$ is upper triangular with nonzero diagonal elements, and $p_1,\dots,p_n$ is a permutation of the numbers $1,\dots,n$.

Proof: (1) $\Rightarrow$ (2): Assume Gaussian elimination fails in column $k$, yielding a matrix $U$ where all pivot candidates $u_{kk},\dots,u_{nk}$ are zero. Since elimination succeeded in columns $1,\dots,k-1$ the diagonal elements $u_{11},\dots,u_{k-1,k-1}$ are all nonzero:
\[
U = \begin{pmatrix}
u_{11} & \cdots & u_{1,k-1} & u_{1k} & \cdots & u_{1n} \\
 & \ddots & \vdots & \vdots & & \vdots \\
 & & u_{k-1,k-1} & u_{k-1,k} & \cdots & u_{k-1,n} \\
 & & & 0 & \ast & \ast \\
 & & & \vdots & & \vdots \\
 & & & 0 & \ast & \ast
\end{pmatrix}.
\]
We want to show that the linear system $Ax = 0$ has a solution $x \ne 0$: the linear system $Ax = 0$ is equivalent to the linear system $Ux = 0$. Consider a vector $x$ with $x_k = 1$, $x_{k+1} = \cdots = x_n = 0$, i.e., $x = (x_1,\dots,x_{k-1},\,1,\,0,\dots,0)^\top$. Then equations $k, k+1,\dots,n$ are satisfied, since in rows $k,\dots,n$ the coefficients of $x_1,\dots,x_k$ are all zero. We can find $x_1,\dots,x_{k-1}$ such that equations $1,\dots,k-1$ are satisfied by using back substitution: equation $k-1$ yields a unique $x_{k-1}$ (since $u_{k-1,k-1} \ne 0$), ..., equation 1 yields a unique $x_1$ (since $u_{11} \ne 0$). We found a nonzero vector $x$ such that $Ax = 0$, hence $A$ is singular.

(2) $\Rightarrow$ (3): Let
\[
\tilde A := \begin{pmatrix} \text{row } p_1 \text{ of } A \\ \vdots \\ \text{row } p_n \text{ of } A \end{pmatrix}.
\]
Then applying Gaussian elimination WITHOUT pivoting to the matrix $\tilde A$ yields exactly the same $L, U$. Hence we have $\tilde A = LU$.

(3) $\Rightarrow$ (1): The linear system $Ax = b$ is equivalent to a linear system $Ux = y$. Since $U$ is nonsingular, the matrix $A$ is nonsingular.
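A MATLAB sketch of the pivoting variant follows. Two assumptions not in the notes: the function name lu_pivot, and the standard partial-pivoting rule of choosing a pivot candidate of largest magnitude (the notes only require some nonzero candidate). The output matches statement 3 of the theorem: A(p,:) = L*U.

    function [L,U,p] = lu_pivot(A)
    % Sketch: Gaussian elimination WITH pivoting, A(p,:) = L*U.
    n = size(A,1);
    p = (1:n)';
    L = eye(n);
    U = A;
    for k = 1:n
        % pivot candidates: the diagonal element U(k,k) and all elements below it
        [piv,m] = max(abs(U(k:n,k)));
        m = m + k - 1;
        if piv == 0
            error('all pivot candidates in column %d are zero: A is singular', k)
        end
        U([k m],:)     = U([m k],:);      % row interchange
        L([k m],1:k-1) = L([m k],1:k-1);  % swap the multipliers found so far
        p([k m])       = p([m k]);
        for i = k+1:n
            L(i,k) = U(i,k)/U(k,k);
            U(i,:) = U(i,:) - L(i,k)*U(k,:);
        end
    end

A linear system $Ax = b$ can then be solved as x = U\(L\b(p)): forward substitution with L, back substitution with U.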

Quadratic forms

A symmetric matrix $A \in \mathbb{R}^{n\times n}$ defines a quadratic function $q(x_1,\dots,x_n)$:
\[
q(x_1,\dots,x_n) = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} x_i x_j = x^\top A x.
\]
Such a function is called a quadratic form. For $x = 0$ we obviously have $q(x) = 0$. What happens for $x \ne 0$?

Examples:
\[
A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}: \quad q(x_1,x_2) = 2x_1^2 + 2x_1x_2 + 2x_2^2; \quad \text{for } x \ne 0,\ q(x) \text{ is positive; the eigenvalues of } A \text{ are } 3, 1.
\]
\[
A = \begin{pmatrix} 2 & 2 \\ 2 & 2 \end{pmatrix}: \quad q(x_1,x_2) = 2x_1^2 + 4x_1x_2 + 2x_2^2; \quad \text{for } x \ne 0,\ q(x) \text{ is positive or zero; the eigenvalues of } A \text{ are } 4, 0.
\]
\[
A = \begin{pmatrix} 2 & 3 \\ 3 & 2 \end{pmatrix}: \quad q(x_1,x_2) = 2x_1^2 + 6x_1x_2 + 2x_2^2; \quad \text{for } x \ne 0,\ q(x) \text{ is positive or zero or negative; the eigenvalues of } A \text{ are } 5, -1.
\]

If $q(x) = x^\top A x > 0$ for all $x \ne 0$ we call the matrix $A$ positive definite.

A symmetric matrix has real eigenvalues $\lambda_1,\dots,\lambda_n$ and an orthonormal basis of eigenvectors $v^{(1)},\dots,v^{(n)}$. If we write the vector $x$ in terms of eigenvectors we obtain
\[
x = c_1 v^{(1)} + \cdots + c_n v^{(n)}, \qquad
q(x) = (c_1,\dots,c_n) \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix} \begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix} = \lambda_1 c_1^2 + \cdots + \lambda_n c_n^2.
\]
We see: a symmetric matrix is positive definite $\iff$ all eigenvalues are positive.
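The eigenvalue criterion is easy to check numerically for the three examples above; this snippet (an illustration, not part of the notes) uses MATLAB's eig:

    % Sign pattern of q(x) = x'*A*x versus eigenvalue signs:
    A1 = [2 1; 1 2];  disp(eig(A1)')   % eigenvalues 1, 3   -> positive definite
    A2 = [2 2; 2 2];  disp(eig(A2)')   % eigenvalues 0, 4   -> q(x) >= 0, semidefinite
    A3 = [2 3; 3 2];  disp(eig(A3)')   % eigenvalues -1, 5  -> q(x) takes both signs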

Cholesky decomposition

Many application problems give a linear system with a symmetric matrix $A \in \mathbb{R}^{n\times n}$, i.e., $a_{ij} = a_{ji}$ for all $i, j$.

Example 1: Consider the matrix
\[
A = \begin{pmatrix} 9 & 6 & -6 \\ 6 & 5 & -1 \\ -6 & -1 & 15 \end{pmatrix}.
\]
We perform Gaussian elimination WITHOUT pivoting and obtain $A = LU$:
\[
\begin{pmatrix} 9 & 6 & -6 \\ 6 & 5 & -1 \\ -6 & -1 & 15 \end{pmatrix}
= \begin{pmatrix} 1 & & \\ \tfrac{2}{3} & 1 & \\ -\tfrac{2}{3} & 3 & 1 \end{pmatrix}
\begin{pmatrix} 9 & 6 & -6 \\ & 1 & 3 \\ & & 2 \end{pmatrix}.
\]
We can then use this to solve a linear system $Ax = b$: first solve $Ly = b$ using forward substitution, then solve $Ux = y$ using back substitution.

We would like to exploit the fact that $A$ is symmetric and use a decomposition which reflects this: we want to find an upper triangular matrix $C \in \mathbb{R}^{n\times n}$ with $A = C^\top C$:
\[
\begin{pmatrix} 9 & 6 & -6 \\ 6 & 5 & -1 \\ -6 & -1 & 15 \end{pmatrix}
= \begin{pmatrix} c_{11} & & \\ c_{12} & c_{22} & \\ c_{13} & c_{23} & c_{33} \end{pmatrix}
\begin{pmatrix} c_{11} & c_{12} & c_{13} \\ & c_{22} & c_{23} \\ & & c_{33} \end{pmatrix}
= \begin{pmatrix} 3 & & \\ 2 & 1 & \\ -2 & 3 & \sqrt{2} \end{pmatrix}
\begin{pmatrix} 3 & 2 & -2 \\ & 1 & 3 \\ & & \sqrt{2} \end{pmatrix}.
\]

Note: the decomposition $A = C^\top C$ is called the Cholesky decomposition. $A = C^\top C$ implies $A^\top = (C^\top C)^\top = C^\top C = A$, i.e., such a decomposition can only be obtained for symmetric $A$.

We can then use this decomposition to solve a linear system $Ax = b$: first solve $C^\top y = b$ using forward substitution, then solve $Cx = y$ using back substitution.

How can we find the matrix $C$?

Row 1 of $A$: for the diagonal element, $a_{11} = c_{11} c_{11} \Rightarrow c_{11} = \sqrt{a_{11}}$; for $k = 2,\dots,n$: $a_{1k} = c_{1k} c_{11} \Rightarrow c_{1k} = a_{1k}/c_{11}$.

Row 2 of $A$: for the diagonal element, $a_{22} = c_{12}^2 + c_{22}^2 \Rightarrow c_{22} = \sqrt{a_{22} - c_{12}^2}$; for $k = 3,\dots,n$: $a_{2k} = c_{1k} c_{12} + c_{2k} c_{22} \Rightarrow c_{2k} = (a_{2k} - c_{1k} c_{12})/c_{22}$.

Algorithm: For $j = 1,\dots,n$:
    $s := a_{jj} - \sum_{l=1}^{j-1} c_{lj}^2$
    If $s \le 0$: stop with error "matrix not positive definite"
    $c_{jj} := \sqrt{s}$
    For $k = j+1,\dots,n$: $c_{jk} := \Big( a_{jk} - \sum_{l=1}^{j-1} c_{lj} c_{lk} \Big) \Big/ c_{jj}$
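A quick MATLAB check of Example 1 and of the two triangular solves (an illustration: the right-hand side b is made up, and the triangular systems are solved with backslash):

    C = [3 2 -2; 0 1 3; 0 0 sqrt(2)];   % Cholesky factor from Example 1
    A = C'*C;                           % reproduces [9 6 -6; 6 5 -1; -6 -1 15]
    b = [1; 2; 3];                      % made-up right-hand side
    y = C'\b;                           % forward substitution (C' is lower triangular)
    x = C\y;                            % back substitution
    norm(A*x - b)                       % ~1e-15: x solves Ax = b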

How to compute row $j$ of $C$ (shown for $j = 3$): let $v = C(1{:}j{-}1,\,j)$ be the part of column $j$ above the diagonal and $M = C(1{:}j{-}1,\,j{+}1{:}n)$ the previously computed rows to the right of column $j$. Then
\[
c_{jj} = \sqrt{a_{jj} - v^\top v}, \qquad
C(j,\,j{+}1{:}n) = \big( A(j,\,j{+}1{:}n) - v^\top M \big) \big/ c_{jj}.
\]
We obtain the following MATLAB code:

    function C = cholesky(A)
    n = size(A,1);
    C = zeros(n,n);
    for j=1:n
        v = C(1:j-1,j);
        s = A(j,j) - v'*v;
        if s<=0
            error('matrix is not positive definite')
        end
        C(j,j) = sqrt(s);
        C(j,j+1:n) = (A(j,j+1:n) - v'*C(1:j-1,j+1:n))/C(j,j);
    end

Work of the Cholesky decomposition: for row $j$ we
- compute $c_{jj}$: $(j-1)$ multiplications and 1 square root,
- compute $c_{jk}$ for $k = j+1,\dots,n$: $(n-j)$ times ($(j-1)$ multiplications and 1 division).

If we count multiplications, divisions and square roots we have $j + (n-j)j = (n-j+1)j$ operations for row $j$. The total is therefore
\[
\sum_{j=1}^{n} (n-j+1)\,j = \frac{n(n+1)(n+2)}{6} = \frac{n^3}{6} + O(n^2)
\]
operations. This is half the work of the LU decomposition, which was $\frac{n^3}{3}$ operations.

Example 2: Consider the matrix
\[
A = \begin{pmatrix} 9 & 6 & -6 \\ 6 & 5 & -1 \\ -6 & -1 & 12 \end{pmatrix},
\]
which agrees with the matrix from Example 1 except for the entry $a_{33}$. The first two rows of $C$ are therefore computed exactly as in Example 1:
\[
\begin{pmatrix} 3 & \\ 2 & 1 \\ -2 & 3 \end{pmatrix}
\begin{pmatrix} 3 & 2 & -2 \\ & 1 & 3 \end{pmatrix}
= \begin{pmatrix} 9 & 6 & -6 \\ 6 & 5 & -1 \\ -6 & -1 & 13 \end{pmatrix}.
\]
The algorithm breaks down when we try to compute $c_{33}$: we have $s = 12 - \big( (-2)^2 + 3^2 \big) = 12 - 13 = -1 \le 0$, and the Cholesky decomposition does not exist.

The matrix $A$ is symmetric and nonsingular. Why is there no Cholesky decomposition? Assume that a matrix $A$ has a Cholesky decomposition $A = C^\top C$ where $C$ has nonzero diagonal elements. For a vector $x \in \mathbb{R}^n$ we consider the scalar $x^\top A x = \sum_{i,j=1}^{n} a_{ij} x_i x_j$ (this is called a quadratic form). For any vector $x \ne 0$ we have $y := Cx \ne 0$ since $C$ is nonsingular (upper triangular with nonzero diagonal entries). Hence the quadratic form is positive:
\[
x^\top A x = x^\top C^\top C x = y^\top y > 0. \tag{1}
\]
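Running the function above on the two example matrices reproduces these results (illustration only):

    A1 = [9 6 -6; 6 5 -1; -6 -1 15];
    C1 = cholesky(A1)     % succeeds: C1 = [3 2 -2; 0 1 3; 0 0 sqrt(2)]
    A2 = [9 6 -6; 6 5 -1; -6 -1 12];
    C2 = cholesky(A2)     % stops: 'matrix is not positive definite'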

Definition: A symmetric matrix $A \in \mathbb{R}^{n\times n}$ is called positive definite if for all $x \ne 0$ we have $x^\top A x > 0$.

E.g., the matrix $\begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$ is positive definite since $x^\top A x = 2x_1^2 + 3x_2^2 > 0$ for $x \ne 0$. On the other hand, the matrix $\begin{pmatrix} 2 & 0 \\ 0 & -1 \end{pmatrix}$ is not positive definite since for $x = (0,1)^\top$ we have $x^\top A x = -1 < 0$.

Note: A symmetric matrix $A$ has real eigenvalues. The matrix is positive definite if and only if all eigenvalues are positive.

We can check whether a matrix is positive definite by trying to find the Cholesky decomposition. If $C$ with nonzero diagonal elements exists, the matrix is positive definite because of (1). If the algorithm breaks down we claim that the matrix is not positive definite. E.g., the matrix $A$ from Example 2 is not positive definite: pick $x_3 = 1$ and solve
\[
\begin{pmatrix} 3 & 2 & -2 \\ 0 & 1 & 3 \end{pmatrix} x = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\]
(the two computed rows of $C$) using back substitution: $x = \big( \tfrac{8}{3},\, -3,\, 1 \big)^\top$. As in the proof of the theorem below, one then obtains $x^\top A x = s = -1 < 0$.
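The witness vector for Example 2 can be verified directly in MATLAB (illustration only):

    A = [9 6 -6; 6 5 -1; -6 -1 12];
    x = [8/3; -3; 1];
    x'*A*x                % returns -1 = s, so A is not positive definite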

Theorem: For a symmetric matrix $A \in \mathbb{R}^{n\times n}$ the following three statements are equivalent:
1. The matrix is positive definite.
2. The Cholesky algorithm succeeds and gives $C \in \mathbb{R}^{n\times n}$ with nonzero diagonal elements.
3. There exists a decomposition $A = C^\top C$ where $C \in \mathbb{R}^{n\times n}$ is upper triangular with nonzero diagonal elements.

Proof: (1) $\Rightarrow$ (2): Assume the algorithm breaks down in row $j$ with $s \le 0$; we show that $A$ is then not positive definite by finding a nonzero vector $x$ with $x^\top A x \le 0$. Let, e.g., $j = 3$. We have already computed the first two rows of $C$ with $c_{11} \ne 0$, $c_{22} \ne 0$, and we have $s = a_{33} - (c_{13}^2 + c_{23}^2) \le 0$. With the matrix of computed rows
\[
\tilde C := \begin{pmatrix} c_{11} & c_{12} & c_{13} \\ 0 & c_{22} & c_{23} \end{pmatrix}
\]
we obtain
\[
\tilde C^\top \tilde C
= \begin{pmatrix} c_{11} & 0 \\ c_{12} & c_{22} \\ c_{13} & c_{23} \end{pmatrix}
\begin{pmatrix} c_{11} & c_{12} & c_{13} \\ 0 & c_{22} & c_{23} \end{pmatrix}
= \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{12} & a_{22} & a_{23} \\ a_{13} & a_{23} & c_{13}^2 + c_{23}^2 \end{pmatrix},
\qquad
A^{(3)} = \tilde C^\top \tilde C + \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & s \end{pmatrix}.
\]
Now we construct a vector $\tilde x = (x_1, x_2, 1)^\top$ with $\tilde C \tilde x = 0$: we use back substitution to find $x_2$ and then $x_1$; this works since $c_{11}, c_{22} \ne 0$, and $\tilde x \ne 0$ since its last component is 1. Hence
\[
\tilde x^\top A^{(3)} \tilde x = \underbrace{\tilde x^\top \tilde C^\top \tilde C \tilde x}_{=\,0} + s = s \le 0.
\]
Now we define $x = (x_1, x_2, 1, 0, \dots, 0)^\top \in \mathbb{R}^n$ and obtain $x^\top A x = \tilde x^\top A^{(3)} \tilde x = s \le 0$, so $A$ is not positive definite.

(2) $\Rightarrow$ (3): The algorithm gives $C$ with $C^\top C = A$.

(3) $\Rightarrow$ (1): as shown above in (1).
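For comparison (not part of the original notes): MATLAB's built-in chol computes the same factorization, returning an upper triangular R with A = R'*R and raising an error for matrices that are not positive definite, in agreement with the theorem:

    A = [9 6 -6; 6 5 -1; -6 -1 15];
    R = chol(A);          % same factor as cholesky(A) above
    norm(R'*R - A)        % ~0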