CS: Lecture #7
Mridul Aanjaneya
March 9, 2015

Solving linear systems of equations

Consider a lower triangular matrix L:

L = \begin{pmatrix} l_{11} & & & \\ l_{21} & l_{22} & & \\ l_{31} & l_{32} & l_{33} & \\ \vdots & \vdots & & \ddots & \\ l_{n1} & l_{n2} & \cdots & & l_{nn} \end{pmatrix}

A procedure similar to that for upper triangular systems can be followed to solve Lx = b. The overall complexity is again O(n^2).

Algorithm (Forward substitution for Lx = b):

1: for j = 1 to n do
2:   if l_{jj} = 0 then
3:     return                         (matrix is singular)
4:   end if
5:   x_j ← b_j / l_{jj}
6:   for i = j + 1 to n do
7:     b_i ← b_i − l_{ij} x_j
8:   end for
9: end for

The forward and backward substitution processes can be used to solve a non-triangular system by virtue of the following factorization property.

Theorem. If A is an n × n matrix, it can (generally) be written as a product A = LU, where L is a lower triangular matrix and U is an upper triangular matrix. Furthermore, it is possible to construct L such that all diagonal elements satisfy l_{ii} = 1.
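The forward-substitution pseudocode above translates directly into code. A minimal sketch in Python (the function name and the list-of-rows matrix representation are my own choices):

```python
# Forward substitution for L x = b, transcribing the pseudocode above.
# L is a lower triangular matrix given as a list of rows.
def forward_substitution(L, b):
    n = len(b)
    b = list(b)                # work on a copy: the algorithm overwrites b
    x = [0.0] * n
    for j in range(n):
        if L[j][j] == 0:
            raise ValueError("matrix is singular")
        x[j] = b[j] / L[j][j]
        for i in range(j + 1, n):
            b[i] -= L[i][j] * x[j]
    return x

# For L = [[2, 0], [3, 4]] and b = [2, 10]: x_1 = 1, then b_2 becomes 7,
# so x_2 = 7/4.
```

Each x_j is computed once and immediately subtracted out of the remaining right-hand side, which is exactly why the cost is O(n^2).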
Algorithm (LU factorization by Gaussian Elimination):

1: for k = 1 to n − 1 do
2:   if a_{kk} = 0 then
3:     return
4:   end if
5:   for i = k + 1 to n do
6:     a_{ik} ← a_{ik} / a_{kk}
7:   end for
8:   for j = k + 1 to n do
9:     for i = k + 1 to n do
10:      a_{ij} ← a_{ij} − a_{ik} a_{kj}
11:    end for
12:   end for
13: end for

Note that this algorithm executes in-place, i.e., the matrix A is replaced by its LU factorization, in compact form. More specifically, this algorithm produces a factorization A = LU, where:

L = \begin{pmatrix} 1 & & & & O \\ l_{21} & 1 & & & \\ l_{31} & l_{32} & 1 & & \\ \vdots & \vdots & & \ddots & \\ l_{n1} & l_{n2} & l_{n3} & \cdots & 1 \end{pmatrix}, \quad U = \begin{pmatrix} u_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\ & u_{22} & u_{23} & \cdots & u_{2n} \\ & & u_{33} & \cdots & u_{3n} \\ & & & \ddots & u_{n-1,n} \\ O & & & & u_{nn} \end{pmatrix}

After the in-place factorization algorithm completes, A is replaced by the following compact encoding of L and U together:

A = \begin{pmatrix} u_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\ l_{21} & u_{22} & u_{23} & \cdots & u_{2n} \\ l_{31} & l_{32} & u_{33} & \cdots & u_{3n} \\ \vdots & \vdots & & \ddots & \vdots \\ l_{n1} & l_{n2} & \cdots & l_{n,n-1} & u_{nn} \end{pmatrix}

Here is a slightly different algorithm for Gaussian Elimination, via elimination matrices. Define the basis vector e_k as:

e_k = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}
where the 1 is in the kth row and the length of e_k is n. In order to perform Gaussian Elimination on the kth column a_k of A, we define the n × n elimination matrix M_k = I − m_k e_k^T, where

m_k = \frac{1}{a_{kk}} \begin{pmatrix} 0 \\ \vdots \\ 0 \\ a_{k+1,k} \\ \vdots \\ a_{n,k} \end{pmatrix}

M_k adds multiples of row k to the rows i > k in order to create zeros. As an example, for a_k = (2, 4, −2)^T:

M_1 a_k = \begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix} \begin{pmatrix} 2 \\ 4 \\ -2 \end{pmatrix} = \begin{pmatrix} 2 \\ 0 \\ 0 \end{pmatrix}

Similarly,

M_2 a_k = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1/2 & 1 \end{pmatrix} \begin{pmatrix} 2 \\ 4 \\ -2 \end{pmatrix} = \begin{pmatrix} 2 \\ 4 \\ 0 \end{pmatrix}

The inverse of an elimination matrix is given by L_k = M_k^{-1} = I + m_k e_k^T. For example,

L_1 = M_1^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix}, \quad L_2 = M_2^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1/2 & 1 \end{pmatrix}

The algorithm now proceeds as follows. Consider the example:

\begin{pmatrix} 2 & 4 & -2 \\ 4 & 9 & -3 \\ -2 & -3 & 7 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 2 \\ 8 \\ 10 \end{pmatrix}

First, we eliminate the lower triangular portion of A one column at a time, using the M_k to get U = M_{n-1} \cdots M_1 A. Note that we also carry out the operations on b, to get a new system of equations M_2 M_1 A x = M_2 M_1 b, or U x = M_2 M_1 b, which can be solved via back substitution.

M_1 A = \begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix} \begin{pmatrix} 2 & 4 & -2 \\ 4 & 9 & -3 \\ -2 & -3 & 7 \end{pmatrix} = \begin{pmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ 0 & 1 & 5 \end{pmatrix}
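Elimination matrices of this form are easy to construct and check numerically. A minimal NumPy sketch (the function name and 0-based indexing are my own choices, not the lecture's notation):

```python
import numpy as np

# Build the elimination matrix M_k = I - m_k e_k^T for column k of A
# (0-based k; a sketch of the construction described above).
def elimination_matrix(A, k):
    n = A.shape[0]
    m = np.zeros(n)
    m[k + 1:] = A[k + 1:, k] / A[k, k]   # multipliers a_ik / a_kk
    e_k = np.zeros(n)
    e_k[k] = 1.0
    return np.eye(n) - np.outer(m, e_k)

A = np.array([[2., 4., -2.], [4., 9., -3.], [-2., -3., 7.]])
M1 = elimination_matrix(A, 0)            # zeros out column 1 below the pivot
L1 = np.eye(3) + np.outer([0., 2., -1.], [1., 0., 0.])   # L_1 = M_1^{-1}
```

Because m_k has zeros in its first k entries, M_k only touches the rows below row k, and its inverse just flips the sign of the multipliers.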
M_1 b = \begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix} \begin{pmatrix} 2 \\ 8 \\ 10 \end{pmatrix} = \begin{pmatrix} 2 \\ 4 \\ 12 \end{pmatrix}

M_2 M_1 A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1 \end{pmatrix} \begin{pmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ 0 & 1 & 5 \end{pmatrix} = \begin{pmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ 0 & 0 & 4 \end{pmatrix}

M_2 M_1 b = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1 \end{pmatrix} \begin{pmatrix} 2 \\ 4 \\ 12 \end{pmatrix} = \begin{pmatrix} 2 \\ 4 \\ 8 \end{pmatrix}

Finally, solve the following system via back substitution:

\begin{pmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ 0 & 0 & 4 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 2 \\ 4 \\ 8 \end{pmatrix}

which gives z = 2, then y = 4 − z = 2, and finally x = −1.

Note that using the fact that the L matrices are inverses of the M matrices allows us to write LU = (L_1 \cdots L_{n-1})(M_{n-1} \cdots M_1 A) = A, where L = L_1 \cdots L_{n-1} can be formed trivially from the M_k to obtain:

L = L_1 L_2 = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 1 & 1 \end{pmatrix}

And thus, although we never needed it to solve the equations, the LU factorization of A is:

A = \begin{pmatrix} 2 & 4 & -2 \\ 4 & 9 & -3 \\ -2 & -3 & 7 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 1 & 1 \end{pmatrix} \begin{pmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ 0 & 0 & 4 \end{pmatrix} = LU

Existence/Uniqueness and pivoting

When computing the LU factorization, the algorithm will halt if the diagonal element a_{kk} = 0. This can be avoided by swapping rows of A prior to computing the LU factorization, so as to always select the largest a_{kk} from the equations that follow. As an example, consider the matrix A and the action of the permutation matrix P on it:
A = \begin{pmatrix} 2 & 5 & 1 \\ 0 & 3 & 2 \\ 0 & 6 & 3 \end{pmatrix}, \quad P = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \quad PA = \begin{pmatrix} 2 & 5 & 1 \\ 0 & 6 & 3 \\ 0 & 3 & 2 \end{pmatrix}

Here P swaps rows 2 and 3, bringing the larger candidate pivot 6 into the a_{22} position. This is pivoting: the pivot a_{kk} is selected to be non-zero. With this process, we can guarantee existence and uniqueness of the LU factorization:

Theorem. If P is a permutation matrix such that all pivots in the Gaussian Elimination of PA are non-zero, and we require l_{ii} = 1, then the LU factorization exists and is unique: PA = LU.
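The row-swapping strategy above can be folded into the in-place factorization algorithm. A NumPy sketch of LU with partial pivoting (the function name and the P A = L U return convention are my own choices):

```python
import numpy as np

# LU factorization with partial pivoting, P A = L U: at each step k the
# row with the largest remaining |a_ik| is swapped into the pivot position.
def lu_partial_pivot(A):
    A = np.array(A, dtype=float)       # work on a copy, factored in place
    n = A.shape[0]
    perm = np.arange(n)                # tracks the row reordering
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # largest candidate pivot
        if A[p, k] == 0:
            raise ValueError("matrix is singular")
        A[[k, p]] = A[[p, k]]          # swap rows (multipliers travel too)
        perm[[k, p]] = perm[[p, k]]
        A[k+1:, k] /= A[k, k]          # store multipliers l_ik in place
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    L = np.tril(A, -1) + np.eye(n)     # unit lower triangular part
    U = np.triu(A)                     # upper triangular part
    P = np.eye(n)[perm]                # permutation matrix reordering rows
    return P, L, U
```

Since the pivot is chosen as the largest entry in magnitude, every multiplier satisfies |l_ik| ≤ 1, which is also why partial pivoting helps numerical stability.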