14.2 QR Factorization with Column Pivoting

Chapter 14. Special Topics

Background Material Needed

- Vector and matrix norms (Section 2.5)
- Rounding errors in basic floating point operations (Sections 3.3-3.7)
- Forward elimination and back substitution (Algorithms 4.3 and 4.4)
- Gaussian elimination with and without pivoting (Sections 5.2.2 and 5.2.4)
- Householder QR factorization method (Section 7.2.2)

14.1 Introduction

In this final chapter, we shall briefly discuss the following advanced (but important) topics: QR factorization with column pivoting; modifying a QR factorization; a taste of round-off error analysis.

14.2 QR Factorization with Column Pivoting

If an m x n (m >= n) matrix A has rank r < n, then the matrix R in its QR factorization is singular. In this case the QR factorization cannot be employed to produce an orthonormal basis of R(A). To see this, just consider the following simple 2 x 2 example from Björck (1996, p. 21):

    A = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}
      = \begin{pmatrix} c & -s \\ s & c \end{pmatrix}
        \begin{pmatrix} 0 & s \\ 0 & c \end{pmatrix} = QR.        (14.1)

If c and s are chosen such that c^2 + s^2 = 1, then (14.1) is a QR factorization of A; yet rank(A) = 1 < 2, and the columns of Q do not form an orthonormal basis of R(A), nor is one formed for its complement.
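To see (14.1) in action, here is a quick numerical check in Python with NumPy (our choice of tool; the book's own examples use MATLAB):

```python
import numpy as np

# The rank-deficient 2 x 2 example (14.1): A has a QR factorization
# with a singular R, and the columns of Q do not span R(A).
A = np.array([[0.0, 0.0],
              [0.0, 1.0]])
Q, R = np.linalg.qr(A)
print(R)                            # R[0, 0] = 0, so R is singular
print(np.linalg.matrix_rank(A))     # 1
print(Q[:, 0])                      # first column of Q; R(A) = span{(0, 1)^T}
```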

Fortunately, however, the process of QR factorization (for example, the Householder method) can be modified to yield an orthonormal basis. The idea is to generate a permutation matrix P such that AP = QR, where

    R = \begin{pmatrix} R_{11} & R_{12} \\ 0 & 0 \end{pmatrix}.

Here R_{11} is r x r upper triangular, r is the rank of A, and Q is orthogonal. The first r columns of Q then form an orthonormal basis of R(A). The following theorem guarantees the existence of such a factorization.

Theorem 14.1 (QR column pivoting theorem). Let A be an m x n matrix with rank(A) = r <= min(m, n). Then there exist an n x n permutation matrix P and an m x m orthogonal matrix Q such that

    Q^T A P = \begin{pmatrix} R_{11} & R_{12} \\ 0 & 0 \end{pmatrix},

where R_{11} is an r x r upper triangular matrix with nonzero diagonal entries.

Proof. Since rank(A) = r, there exists a permutation matrix P such that AP = (A_1, A_2), where A_1 is m x r and has linearly independent columns. Consider now the QR factorization of A_1,

    Q^T A_1 = \begin{pmatrix} R_{11} \\ 0 \end{pmatrix},

where, by the uniqueness theorem (Theorem 7.1.4), Q and R_{11} are uniquely determined and R_{11} has positive diagonal entries. Then

    Q^T A P = (Q^T A_1, Q^T A_2) = \begin{pmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{pmatrix}.

Since rank(Q^T A P) = rank(A) = r and rank(R_{11}) = r, we must have R_{22} = 0.

Remark. Note that in practice the computed R_{22} need not even be small. See our discussion on the RRQR decomposition later.

Column Pivoting QR Factorization. The above factorization is known as QR factorization with column pivoting. The factorization in general is not unique. A standard algorithm can be briefly described as follows; the details can be found in Golub and Van Loan (1996) and Björck (1996).

Step 1. Find the column of A having the maximum norm. Permute the columns of A so that this column becomes the first column; this is equivalent to creating a permutation matrix P_1 such that the first column of AP_1 has the maximum norm. Now create a Householder matrix H_1 so that A_1 = H_1 A P_1 has zeros in the first column below the (1,1) entry:

    A_1 = \begin{pmatrix} * & * & \cdots & * \\ 0 & & & \\ \vdots & & \hat{A}_1 & \\ 0 & & & \end{pmatrix}.

Step 2. Find the column with the maximum norm of the submatrix \hat{A}_1 obtained from A_1 by deleting the first row and the first column (as shown above). Permute the columns of this submatrix so that the column of maximum norm becomes the first column; this is equivalent to constructing a permutation matrix \hat{P}_2 such that the first column of \hat{A}_1 \hat{P}_2 has the maximum norm. Now construct a Householder matrix \hat{H}_2 such that the first column of \hat{H}_2 \hat{A}_1 \hat{P}_2 has zeros below its (1,1) entry. Form P_2 and H_2 in the usual way, that is, P_2 = diag(1, \hat{P}_2) and H_2 = diag(1, \hat{H}_2). Then A_2 = H_2 A_1 P_2 = H_2 H_1 A P_1 P_2 has zeros in the second column below the (2,2) entry, and has the following structure:

    A_2 = \begin{pmatrix} * & * & * & \cdots & * \\ 0 & * & * & \cdots & * \\ 0 & 0 & & & \\ \vdots & \vdots & & \hat{A}_2 & \\ 0 & 0 & & & \end{pmatrix}.

The process is now continued with \hat{A}_2; the kth step can easily be written down. The process is continued until the entries below the diagonal of the current matrix all become zero. Suppose r steps are needed. Then at the end of the rth step, we have

    A_r = H_r \cdots H_1 A P_1 \cdots P_r = Q^T A P = R = \begin{pmatrix} R_{11} & R_{12} \\ 0 & 0 \end{pmatrix}.

Flop-count and storage considerations. The above method requires 4mnr - 2r^2(m + n) + 4r^3/3 flops. The matrix Q, as in the Householder factorization, is stored in factored form in the subdiagonal part of A, and the upper triangular part of A can be overwritten by the upper triangular part of R.
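Below is a minimal sketch of this algorithm in Python/NumPy; the function name qr_column_pivoting, the tolerance tol, and the test matrix are our own choices, not the book's:

```python
import numpy as np

def qr_column_pivoting(A, tol=1e-12):
    """Householder QR with column pivoting: A[:, p] = Q @ R."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.eye(m)
    p = np.arange(n)
    for k in range(min(m, n)):
        # Step k: swap the remaining column of largest norm into position k.
        norms = np.linalg.norm(A[k:, k:], axis=0)
        j = k + int(np.argmax(norms))
        if norms[j - k] <= tol:          # remaining block is numerically zero
            break
        A[:, [k, j]] = A[:, [j, k]]
        p[[k, j]] = p[[j, k]]
        # Householder matrix H_k zeroing A[k+1:, k].
        x = A[k:, k].copy()
        v = x.copy()
        v[0] += (1.0 if x[0] >= 0 else -1.0) * np.linalg.norm(x)
        v /= np.linalg.norm(v)
        A[k:, k:] -= 2.0 * np.outer(v, v @ A[k:, k:])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)  # accumulate Q = H_1...H_k
    return Q, np.triu(A), p

A = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])   # rank 2
Q, R, p = qr_column_pivoting(A)
print(np.allclose(A[:, p], Q @ R))   # True
print(np.abs(np.diag(R)))            # third diagonal entry ~ 0: rank revealed
```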

Example 14.2. Let A = (a_1, a_2, a_3).

Step 1. The column a_3 has the largest norm. Thus P_1 permutes a_3 into the first position, and a Householder matrix H_1 is constructed so that A_1 = H_1 A P_1 has zeros in its first column below the (1,1) entry.

Step 2. A permutation \hat{P}_2 and a Householder matrix \hat{H}_2 are constructed for \hat{A}_1 as described above; with P_2 = diag(1, \hat{P}_2) and H_2 = diag(1, \hat{H}_2),

    A_2 = H_2 A_1 P_2 = \begin{pmatrix} R_{11} & R_{12} \\ 0 & 0 \end{pmatrix}.

Note that R = A_2, Q = H_1 H_2, and P = P_1 P_2.

MATLAB Note. The MATLAB command [Q, R, E] = qr(A) produces a permutation matrix E such that A*E = Q*R, where E is chosen so that abs(diag(R)) is decreasing.
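For comparison with the MATLAB note, here is the SciPy analogue (our illustration; SciPy returns the permutation as an index vector rather than a matrix E):

```python
import numpy as np
from scipy.linalg import qr

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])            # rank 2
Q, R, piv = qr(A, pivoting=True)        # A[:, piv] = Q @ R
print(np.abs(np.diag(R)))               # nonincreasing; last entry ~ 0
print(np.allclose(A[:, piv], Q @ R))    # True
```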

Complete Orthogonal Factorization. It is easy to see that the submatrix (R_{11}, R_{12}) can be reduced further by orthogonal transformations, yielding (T 0). Thus we have the following theorem.

Theorem 14.3 (complete orthogonalization theorem). Given an m x n matrix A with rank(A) = r, there exist orthogonal matrices Q (m x m) and W (n x n) such that

    Q^T A W = \begin{pmatrix} T & 0 \\ 0 & 0 \end{pmatrix},

where T is an r x r upper triangular matrix with positive diagonal entries.

Proof. The proof is left as an exercise (Exercise 14.2).

The above decomposition of A is called the complete orthogonal decomposition.

A Rank-Revealing QR Factorization. The process of QR factorization with column pivoting was developed by Businger and Golub (1965). In exact arithmetic, it reveals the rank of the matrix A, which is the order of the nonsingular upper triangular matrix R_{11}. However, in the presence of rounding errors we will actually have

    R = \begin{pmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{pmatrix},

and if R_{22} is small in some measure (say, of order O(µ), where µ is the machine precision), then the reduction is terminated. Thus, from the above discussion, we note that, given an m x n matrix A (m >= n), if there exists a permutation matrix P such that

    Q^T A P = R = \begin{pmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{pmatrix},

where R_{11} is r x r and R_{22} is small in some measure, then we will say that A has numerical rank r. (For more on numerical rank, see Chapter 7 (Section 7.8.9).)

Unfortunately, the converse is not true: a celebrated counterexample due to Kahan (1966) shows that a matrix can be nearly rank-deficient without R_{22} ever becoming small. Consider

    A = diag(1, s, ..., s^{n-1}) \begin{pmatrix} 1 & -c & \cdots & -c \\ & 1 & \ddots & \vdots \\ & & \ddots & -c \\ & & & 1 \end{pmatrix} = R,

with c^2 + s^2 = 1 and c, s > 0. For n = 100 and c = 0.2, it can be shown that this matrix is nearly singular (the smallest singular value is O(10^{-8})). On the other hand, r_{nn} = s^{n-1} = 0.133, which is not small, so the triangular factor gives no warning that A is nearly singular.

The question of whether at some stage R_{22} becomes genuinely small for any matrix has been investigated by Chan (1987), Hong and Pan (1992), and others. It can be shown that if A is in R^{m x n} (m >= n) and r is any integer with 0 < r < n, then there exists a permutation matrix P such that the QR factorization has the form

    AP = Q \begin{pmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{pmatrix},

where R_{11} is an r x r upper triangular matrix with

    σ_r(R_{11}) >= σ_r(A)/c,   ||R_{22}||_2 <= c σ_{r+1}(A),   c = \sqrt{r(n - r) + \min(r, n - r)},

and σ_i(A) stands for the ith singular value of A. A QR factorization of the above form is called a rank-revealing QR (RRQR) factorization. If σ_{r+1} = 0, then we have the decomposition in Theorem 14.1.
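Kahan's example is easy to verify numerically; the following NumPy sketch (ours) builds the matrix for n = 100, c = 0.2:

```python
import numpy as np

# Kahan's matrix: nearly singular, yet r_nn is not small.
n, c = 100, 0.2
s = np.sqrt(1.0 - c**2)
R = np.eye(n) + np.triu(-c * np.ones((n, n)), k=1)
A = np.diag(s ** np.arange(n)) @ R
print(np.linalg.svd(A, compute_uv=False)[-1])   # smallest sigma, O(1e-8)
print(s ** (n - 1))                             # r_nn = 0.1327..., not small
```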

14.3 Modifying a QR Factorization

Suppose the QR factorization of an m x k matrix A = (a_1, ..., a_k) (m >= k) has been obtained, and a vector a_{k+1} is appended to give a new matrix A' = (a_1, ..., a_k, a_{k+1}). It is natural to wonder how the QR factorization of A' can be obtained from the given QR factorization of A, without starting from scratch. This is known as updating the QR factorization; downdating is defined similarly. Updating and downdating of QR factorizations arise in a variety of practical applications, such as signal and image processing. We present below a simple algorithm using Householder matrices to solve the updating problem.

Algorithm 14.1. Updating the QR Factorization Using Householder Matrices.

Inputs: A in R^{m x k} (m >= k), an arbitrary column vector a_{k+1}, and Householder matrices H_1 through H_k such that

    H_k H_{k-1} \cdots H_2 H_1 A = \begin{pmatrix} R \\ 0 \end{pmatrix}.

Output: The QR factorization of the augmented matrix A' = (A, a_{k+1}): (Q')^T A' = R'.

Step 1. Compute b_{k+1} = H_k \cdots H_1 a_{k+1}.
Step 2. Compute a Householder matrix H_{k+1} so that r_{k+1} = H_{k+1} b_{k+1} has zeros in entries k + 2, ..., m.
Step 3. Form R' = \left( \begin{pmatrix} R \\ 0 \end{pmatrix}, r_{k+1} \right).
Step 4. Form (Q')^T = H_{k+1} \cdots H_1.

Example 14.4. Let A be a single column a_1 with Householder matrix H_1 such that H_1 A = \begin{pmatrix} R \\ 0 \end{pmatrix}, so that Q^T = H_1, and let a_2 be the column to be appended. Step 1 computes b_2 = H_1 a_2; Step 2 forms H_2 so that r_2 = H_2 b_2 has zeros below its second entry; Step 3 forms R' = \left( \begin{pmatrix} R \\ 0 \end{pmatrix}, r_2 \right); and Step 4 forms (Q')^T = H_2 H_1. Verification: Q' R' = (a_1, a_2) = A'.
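Here is a sketch of Algorithm 14.1 in NumPy, with Q carried explicitly rather than in factored form; the function name and the test data are our own:

```python
import numpy as np

def qr_append_column(Q, R, a_new):
    """Given A = Q @ R (Q m x m, R m x k), return the QR factorization
    of A' = (A, a_new), following Steps 1-4 of Algorithm 14.1."""
    m, k = Q.shape[0], R.shape[1]
    b = Q.T @ a_new                       # Step 1: b = H_k ... H_1 a_{k+1}
    v = b[k:].copy()                      # Step 2: Householder on entries k+1..m
    v[0] += (1.0 if v[0] >= 0 else -1.0) * np.linalg.norm(v)
    nv = np.linalg.norm(v)
    Q = Q.copy()
    if nv > 0:
        v /= nv
        b[k:] -= 2.0 * v * (v @ b[k:])                # r_{k+1} = H_{k+1} b
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)   # Q' = Q H_{k+1}
    return Q, np.column_stack([R, b])     # Step 3: R' = ((R; 0), r_{k+1})

A = np.random.default_rng(0).standard_normal((5, 3))
Q, R = np.linalg.qr(A, mode='complete')   # full m x m Q, m x k R
a_new = np.arange(5.0)
Q2, R2 = qr_append_column(Q, R, a_new)
print(np.allclose(np.column_stack([A, a_new]), Q2 @ R2))   # True
```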

14.4 A Taste of Round-Off Error Analysis

Here we give the reader a taste of round-off error analysis in matrix computations by proving backward error analyses of some standard computations: the solution of triangular systems, LU factorization using Gaussian elimination, and the solution of a linear system. Let's recall that by backward error analysis we mean an analysis showing that the solution computed by the algorithm is the exact solution of a perturbed problem. When the perturbed problem is close to the original problem, we say that the algorithm is backward stable.

Basic Laws of Floating Point Arithmetic. We first remind the reader of the basic laws of floating point arithmetic that will be used in what follows. These laws were obtained in Chapter 3. Let |δ| <= µ, where µ is the machine precision. Then the following hold:

1. fl(x ± y) = (x ± y)(1 + δ).                                   (14.2)
2. fl(xy) = xy(1 + δ).                                           (14.3)
3. If y != 0, then fl(x/y) = (x/y)(1 + δ).                       (14.4)

Occasionally, we will also use

4. fl(x * y) = (x * y)/(1 + δ),                                  (14.5)

where * denotes any of the arithmetic operations +, -, x, /.
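A small demonstration (ours) of law (14.2) in IEEE double precision, taking µ to be the unit roundoff:

```python
import numpy as np
from fractions import Fraction

mu = np.finfo(np.float64).eps / 2           # unit roundoff for IEEE double
x, y = 0.1, 0.2
exact = Fraction(x) + Fraction(y)           # exact sum of the two stored floats
delta = (Fraction(x + y) - exact) / exact   # fl(x + y) = (x + y)(1 + delta)
print(abs(float(delta)) <= mu)              # True: |delta| <= mu
```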

14.4.1 Backward Error Analysis for Forward Elimination and Back Substitution

Case 1. Lower Triangular System. Consider solving the lower triangular system

    Ly = b,                                                      (14.6)

where

    L = (l_{ij}),   b = (b_1, ..., b_n)^T,   y = (y_1, ..., y_n)^T,   (14.7)

using the forward elimination scheme. We will use \hat{s} to denote the computed value of a quantity s.

Step 1. This gives

    \hat{y}_1 = fl(b_1/l_{11}) = b_1/(l_{11}(1 + δ_1)),   |δ_1| <= µ   (using (14.5)),

or

    l_{11}(1 + δ_1)\hat{y}_1 = b_1,   that is,   \hat{l}_{11}\hat{y}_1 = b_1,

where \hat{l}_{11} = l_{11}(1 + δ_1). That is, \hat{y}_1 is the exact solution of an equation whose coefficient is a number close to l_{11}.

Step 2.

    \hat{y}_2 = fl\left( \frac{b_2 - l_{21}\hat{y}_1}{l_{22}} \right)
              = fl\left( \frac{b_2 - fl(l_{21}\hat{y}_1)}{l_{22}} \right)
              = \frac{(b_2 - l_{21}\hat{y}_1(1 + δ_{11}))(1 + δ_{22})}{l_{22}(1 + δ_2)}   (14.8)

(using (14.2), (14.3), and (14.5)), where |δ_{11}|, |δ_{22}|, and |δ_2| are all less than or equal to µ. Equation (14.8) can be rewritten as

    l_{21}(1 + δ_{11})(1 + δ_{22})\hat{y}_1 + l_{22}(1 + δ_2)\hat{y}_2 = b_2(1 + δ_{22}).   (14.9)

That is,

    l_{21}(1 + ε_{21})\hat{y}_1 + l_{22}(1 + ε_{22})\hat{y}_2 = b_2,

where ε_{21} = δ_{11} and ε_{22} = (δ_2 - δ_{22})/(1 + δ_{22}) ≈ δ_2 - δ_{22} (neglecting products of the δ's, which are small). Thus, we can say that \hat{y}_1 and \hat{y}_2 satisfy

    \hat{l}_{21}\hat{y}_1 + \hat{l}_{22}\hat{y}_2 = b_2,   where \hat{l}_{21} = l_{21}(1 + ε_{21}) and \hat{l}_{22} = l_{22}(1 + ε_{22}).   (14.10)

Step k. The preceding can easily be generalized: at the kth step, the computed unknowns \hat{y}_1 through \hat{y}_k satisfy

    \hat{l}_{k1}\hat{y}_1 + \hat{l}_{k2}\hat{y}_2 + \cdots + \hat{l}_{kk}\hat{y}_k = b_k,   (14.11)

where \hat{l}_{kj} = l_{kj}(1 + ε_{kj}), j = 1, ..., k. The process is continued until k = n.

Thus, we see that the computed \hat{y}_1 through \hat{y}_n satisfy the perturbed triangular system

    \hat{l}_{11}\hat{y}_1 = b_1,
    \hat{l}_{21}\hat{y}_1 + \hat{l}_{22}\hat{y}_2 = b_2,
    ...
    \hat{l}_{n1}\hat{y}_1 + \hat{l}_{n2}\hat{y}_2 + \cdots + \hat{l}_{nn}\hat{y}_n = b_n,

where \hat{l}_{kj} = l_{kj}(1 + ε_{kj}), k = 1, ..., n, j = 1, ..., k. Note that ε_{11} = δ_1. These equations can be written in matrix form:

    \hat{L}\hat{y} = (L + ΔL)\hat{y} = b,                         (14.12)

where ΔL is a lower triangular matrix whose (i, j)th element is (ΔL)_{ij} = l_{ij}ε_{ij}. Knowing bounds for the ε_{ij}, bounds for the (ΔL)_{ij} are easily computed. For example, if n is small enough that nµ < 1/10, then |ε_{kj}| <= 1.06(k - j + 2)µ (see Chapter 3, Section 3.5), and therefore

    |(ΔL)_{ij}| <= 1.06(i - j + 2)µ|l_{ij}|.                      (14.13)

The preceding discussion is summarized in the following theorem.

Theorem 14.5. The computed solution \hat{y} to the n x n lower triangular system Ly = b, obtained by forward elimination, satisfies a perturbed triangular system

    (L + ΔL)\hat{y} = b,

where the entries of ΔL are bounded by (14.13), assuming that nµ < 1/10.

Case 2. Upper Triangular System. The round-off error analysis for solving an upper triangular system Ux = c using back substitution is similar to Case 1. In this case, we have the following theorem.

Theorem 14.6. Let U be an n x n upper triangular matrix and let c be a vector. Then the computed solution \hat{x} to the system Ux = c obtained by the back substitution process satisfies

    (U + ΔU)\hat{x} = c,                                          (14.14)

where

    |(ΔU)_{ij}| <= 1.06(j - i + 2)µ|u_{ij}|,                      (14.15)

assuming nµ < 1/10.
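The following sketch (ours) implements both triangular solvers and checks the normwise backward error numerically, in the spirit of Theorems 14.5 and 14.6:

```python
import numpy as np

def forward_elimination(L, b):
    n = len(b)
    y = np.zeros(n)
    for k in range(n):
        y[k] = (b[k] - L[k, :k] @ y[:k]) / L[k, k]
    return y

def back_substitution(U, c):
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

rng = np.random.default_rng(1)
L = np.tril(rng.standard_normal((6, 6))) + 6 * np.eye(6)
U = np.triu(rng.standard_normal((6, 6))) + 6 * np.eye(6)
b = rng.standard_normal(6)
y = forward_elimination(L, b)
x = back_substitution(U, b)
# Normwise relative backward errors; values of order mu indicate stability.
print(np.linalg.norm(b - L @ y, np.inf) /
      (np.linalg.norm(L, np.inf) * np.linalg.norm(y, np.inf)))
print(np.linalg.norm(b - U @ x, np.inf) /
      (np.linalg.norm(U, np.inf) * np.linalg.norm(x, np.inf)))
```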

14.4.2 Backward Error Analysis for Triangularization by Gaussian Elimination

The treatment here follows very closely those given in Ortega (1990) and in Forsythe and Moler (1967). Recall that the process of triangularization using Gaussian elimination consists of (n - 1) steps. At step k, a matrix A^{(k)} is constructed that is triangular in its first k columns:

    A^{(k)} = \begin{pmatrix}
        a^{(k)}_{11} & \cdots & \cdots & \cdots & a^{(k)}_{1n} \\
        & \ddots & & & \vdots \\
        & & a^{(k)}_{kk} & \cdots & a^{(k)}_{kn} \\
        & & \vdots & & \vdots \\
        & & a^{(k)}_{nk} & \cdots & a^{(k)}_{nn}
    \end{pmatrix}.                                               (14.16)

The final matrix A^{(n-1)} is triangular. We shall assume that the quantities a^{(k)}_{ij} are the computed numbers.

Step 1. First, let's analyze the computation of the entries of A^{(1)} from A. Let the computed multipliers be \hat{m}_{i1}, i = 2, 3, ..., n. Then

    \hat{m}_{i1} = fl(a_{i1}/a_{11}) = (a_{i1}/a_{11})(1 + δ_{i1}),   |δ_{i1}| <= µ.   (14.17)

Thus, the error e^{(0)}_{i1} in setting a^{(1)}_{i1} to zero is given by

    e^{(0)}_{i1} = a^{(1)}_{i1} - a_{i1} + \hat{m}_{i1}a_{11}
                 = 0 - a_{i1} + (a_{i1}/a_{11})(1 + δ_{i1})a_{11} = δ_{i1}a_{i1}.

Let us now find the errors in computing the other elements of A^{(1)}. The computed elements a^{(1)}_{ij}, i, j = 2, ..., n, are given by

    a^{(1)}_{ij} = fl(a_{ij} - fl(\hat{m}_{i1}a_{1j}))
                 = (a_{ij} - fl(\hat{m}_{i1}a_{1j}))(1 + α^{(1)}_{ij})
                 = [a_{ij} - \hat{m}_{i1}a_{1j}(1 + β^{(1)}_{ij})](1 + α^{(1)}_{ij}),   i, j = 2, ..., n,

where |α^{(1)}_{ij}| <= µ and |β^{(1)}_{ij}| <= µ. The last equation can be rewritten as

    a^{(1)}_{ij} = (a_{ij} - \hat{m}_{i1}a_{1j}) + e^{(0)}_{ij},   i, j = 2, ..., n,   (14.18)

where

    e^{(0)}_{ij} = \frac{α^{(1)}_{ij}}{1 + α^{(1)}_{ij}} a^{(1)}_{ij} - \hat{m}_{i1}a_{1j}β^{(1)}_{ij},   i, j = 2, ..., n.   (14.19)

From (14.18) and (14.19), we have (noting that the first row of A^{(1)} is the same as the first row of A)

    A^{(1)} = A - L^{(0)}A + E^{(0)},                             (14.20)

where

    L^{(0)} = \begin{pmatrix} 0 & & & \\ \hat{m}_{21} & 0 & & \\ \vdots & & \ddots & \\ \hat{m}_{n1} & & & 0 \end{pmatrix},
    E^{(0)} = \begin{pmatrix} 0 & \cdots & 0 \\ e^{(0)}_{21} & \cdots & e^{(0)}_{2n} \\ \vdots & & \vdots \\ e^{(0)}_{n1} & \cdots & e^{(0)}_{nn} \end{pmatrix}.

Step 2. The analysis of the computation of A^{(2)} from A^{(1)} is similar. Analogously, at the end of Step 2 we will have

    A^{(2)} = A^{(1)} - L^{(1)}A^{(1)} + E^{(1)},                 (14.21)

where L^{(1)} and E^{(1)} are similarly defined. Substituting (14.20) into (14.21), we have

    A^{(2)} = A^{(1)} - L^{(1)}A^{(1)} + E^{(1)} = A - L^{(0)}A + E^{(0)} - L^{(1)}A^{(1)} + E^{(1)}.   (14.22)

Continuing in this way, we can write

    A^{(n-1)} + L^{(0)}A + L^{(1)}A^{(1)} + \cdots + L^{(n-2)}A^{(n-2)} = A + E^{(0)} + E^{(1)} + \cdots + E^{(n-2)}.   (14.23)

Because

    L^{(k-1)} = \begin{pmatrix} 0 & & & & \\ & \ddots & & & \\ & & 0 & & \\ & & \hat{m}_{k+1,k} & 0 & \\ & & \vdots & & \ddots \\ & & \hat{m}_{n,k} & & & 0 \end{pmatrix}

has nonzero entries only in column k, the product L^{(k)}A^{(k)} involves only row k + 1 of A^{(k)}, which is not altered after step k + 1; hence

    L^{(k)}A^{(k)} = L^{(k)}A^{(n-1)},   k = 0, 1, 2, ..., n - 2.   (14.24)

Thus from (14.23) and (14.24), we obtain

    A^{(n-1)} + L^{(0)}A^{(n-1)} + L^{(1)}A^{(n-1)} + \cdots + L^{(n-2)}A^{(n-1)} = A + E^{(0)} + E^{(1)} + \cdots + E^{(n-2)};   (14.25)

that is,

    A + E^{(0)} + E^{(1)} + \cdots + E^{(n-2)} = (I + L^{(0)} + L^{(1)} + \cdots + L^{(n-2)})A^{(n-1)}.   (14.26)

Noting now that

    I + L^{(0)} + L^{(1)} + \cdots + L^{(n-2)} = \begin{pmatrix} 1 & & & \\ \hat{m}_{21} & 1 & & \\ \vdots & \ddots & \ddots & \\ \hat{m}_{n1} & \cdots & \hat{m}_{n,n-1} & 1 \end{pmatrix} = \hat{L}   (14.27)

(the computed L) and that A^{(n-1)} = \hat{U} (the computed U), and denoting E^{(0)} + E^{(1)} + \cdots + E^{(n-2)} by E, we have from (14.26) and (14.27)

    A + E = \hat{L}\hat{U},                                       (14.28)

where the matrices E^{(0)}, ..., E^{(n-2)} are given by

    E^{(k-1)} = \begin{pmatrix} 0 & 0 & \cdots & 0 \\ \vdots & e^{(k-1)}_{k+1,k} & \cdots & e^{(k-1)}_{k+1,n} \\ \vdots & \vdots & & \vdots \\ 0 & e^{(k-1)}_{n,k} & \cdots & e^{(k-1)}_{n,n} \end{pmatrix},   k = 1, 2, ..., n - 1,   (14.29)

with

    e^{(k-1)}_{i,k} = a^{(k-1)}_{i,k}δ_{i,k},   i = k + 1, ..., n,   (14.30)

    e^{(k-1)}_{ij} = \frac{α^{(k)}_{ij}}{1 + α^{(k)}_{ij}} a^{(k)}_{ij} - \hat{m}_{ik}a^{(k-1)}_{kj}β^{(k)}_{ij},   i, j = k + 1, ..., n,   (14.31)

and

    |δ_{ik}| <= µ,   |α^{(k)}_{ij}| <= µ,                         (14.32)
    |β^{(k)}_{ij}| <= µ.                                          (14.33)

We formalize the above discussion in the following theorem.

Theorem 14.7. The computed upper and lower triangular matrices \hat{L} and \hat{U} produced by Gaussian elimination satisfy

    A + E = \hat{L}\hat{U},

where \hat{U} = A^{(n-1)} and \hat{L} is the unit lower triangular matrix of the computed multipliers:

    \hat{L} = \begin{pmatrix} 1 & & & \\ \hat{m}_{21} & 1 & & \\ \vdots & \ddots & \ddots & \\ \hat{m}_{n1} & \cdots & \hat{m}_{n,n-1} & 1 \end{pmatrix}.

Example 14.8. Using two-digit arithmetic in the computation of \hat{L} and \hat{U}, find the error matrix E such that A + E = \hat{L}\hat{U}.

Step 1. The multipliers \hat{m}_{21} and \hat{m}_{31} are computed first, and the entries of A^{(1)} follow in two-digit arithmetic; for instance, a^{(1)}_{22} = 0.52 and a^{(1)}_{23} = 1.57.

The entries of E^{(0)} = A^{(1)} - (A - L^{(0)}A) are then all of order 10^{-3} (magnitudes 0.0008, 0.0020, 0.0028, 0.0003, 0.0005, and 0.0027).

Step 2. Similarly, \hat{m}_{32} = 0.52 and a^{(2)}_{33} = 0.30, so that A^{(2)} = \hat{U}, and the entries e^{(1)}_{32} and e^{(1)}_{33} of E^{(1)} have magnitudes 0.0024 and 0.0032.

Thus E = E^{(0)} + E^{(1)}. Since \hat{L} is the unit lower triangular matrix of the computed multipliers \hat{m}_{21}, \hat{m}_{31}, \hat{m}_{32}, we can easily verify that \hat{L}\hat{U} = A + E.
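The experiment of Example 14.8 can be simulated by rounding every operation to t = 2 significant decimal digits. In this sketch (ours), the matrix A is a stand-in of our own choosing, not the example's original data:

```python
import numpy as np

def chop(x, t=2):
    """Round x to t significant decimal digits."""
    if x == 0.0:
        return 0.0
    e = np.floor(np.log10(abs(x)))
    return round(x, -int(e) + t - 1)

def lu_chopped(A, t=2):
    """Gaussian elimination with every operation chopped to t digits."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = chop(U[i, k] / U[k, k], t)
            for j in range(k, n):
                U[i, j] = chop(U[i, j] - chop(L[i, k] * U[k, j], t), t)
    return L, np.triu(U)

A = np.array([[0.21, 0.35, 0.11],
              [0.42, 0.59, 0.72],
              [0.63, 0.94, 0.48]])
L_hat, U_hat = lu_chopped(A)
print(L_hat @ U_hat - A)          # the error matrix E = L_hat U_hat - A
```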

Bounds for the Elements of E. We now assess how large the entries of the error matrix E can be. For this purpose we assume that pivoting has been used in the Gaussian elimination, so that |\hat{m}_{ik}| <= 1. Recall that the growth factor ρ is defined by

    ρ = \frac{\max_{i,j,k} |a^{(k)}_{ij}|}{\max_{i,j} |a_{ij}|}.

Let a = \max_{i,j} |a_{ij}|. Then from (14.29)-(14.33), we have

    |e^{(k-1)}_{ik}| <= aρµ,   k = 1, 2, ..., n - 1,   i = k + 1, ..., n,

and, for i, j = k + 1, ..., n (k = 1, 2, ..., n - 1),

    |e^{(k-1)}_{ij}| <= \frac{µ}{1 - µ}|a^{(k)}_{ij}| + µ|a^{(k-1)}_{kj}| <= \frac{2}{1 - µ}aρµ   (since |\hat{m}_{ik}| <= 1).

Denote µ/(1 - µ) by η. Then

    |E| <= |E^{(0)}| + \cdots + |E^{(n-2)}| <= aρη \begin{pmatrix} 0 & 0 & 0 & \cdots & 0 \\ 1 & 2 & 2 & \cdots & 2 \\ 1 & 3 & 4 & \cdots & 4 \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & 3 & 5 & \cdots & 2(n - 1) \end{pmatrix}.   (14.34)

Remark. Inequality (14.34) holds elementwise. A bound in terms of norms follows immediately:

    ||E||_∞ <= aρη(1 + 3 + 5 + \cdots + (2n - 3) + (2n - 2)) = aρη(n^2 - 1) <= aρn^2η.   (14.35)

Theorem 14.9 (round-off error analysis for GEPP). The matrices \hat{L} and \hat{U} computed by Gaussian elimination with partial pivoting satisfy

    A + E = \hat{L}\hat{U},   where ||E||_∞ <= aρn^2η,   a = \max_{i,j}|a_{ij}|,   η = \frac{µ}{1 - µ}.
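The growth factor is easy to monitor in code. This sketch (ours) computes ρ during GEPP and reproduces the classical worst case ρ = 2^(n-1) on Wilkinson's matrix:

```python
import numpy as np

def growth_factor(A):
    """Run GEPP on a copy of A and return the growth factor rho."""
    U = A.astype(float).copy()
    n = U.shape[0]
    max_growth = np.abs(U).max()
    for k in range(n - 1):
        p = k + np.abs(U[k:, k]).argmax()      # partial pivoting
        U[[k, p]] = U[[p, k]]
        m = U[k + 1:, k] / U[k, k]             # multipliers, |m| <= 1
        U[k + 1:, k:] -= np.outer(m, U[k, k:])
        max_growth = max(max_growth, np.abs(U).max())
    return max_growth / np.abs(A).max()

# Wilkinson's matrix: 1 on the diagonal, -1 below, last column all ones.
n = 10
W = np.tril(-np.ones((n, n)), -1) + np.eye(n)
W[:, -1] = 1.0
print(growth_factor(W))    # 2^(n-1) = 512 for n = 10
```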

14.4.3 Backward Error Analysis for Solving Ax = b

We are now ready to give a backward round-off error analysis for solving Ax = b using triangularization by Gaussian elimination, followed by forward elimination and back substitution. First, from Theorem 14.7, we know that triangularization of A by Gaussian elimination yields \hat{L} and \hat{U} such that A + E = \hat{L}\hat{U}. These \hat{L} and \hat{U} are then used to solve

    \hat{L}y = b,   \hat{U}x = y.

From Theorems 14.5 and 14.6, we know that the computed solutions \hat{y} and \hat{x} of these two triangular systems satisfy

    (\hat{L} + ΔL)\hat{y} = b   and   (\hat{U} + ΔU)\hat{x} = \hat{y}.

From these equations, we have

    (\hat{U} + ΔU)\hat{x} = (\hat{L} + ΔL)^{-1}b,

or

    (\hat{L} + ΔL)(\hat{U} + ΔU)\hat{x} = b,

that is,

    (A + F)\hat{x} = b,                                           (14.36)

where

    F = E + (ΔL)\hat{U} + \hat{L}(ΔU) + (ΔL)(ΔU)                  (14.37)

(note that A + E = \hat{L}\hat{U}).

Bounds for F. From (14.37) we have

    ||F|| <= ||E|| + ||ΔL|| ||\hat{U}|| + ||\hat{L}|| ||ΔU|| + ||ΔL|| ||ΔU||.

We now obtain bounds for ||ΔL||, ||ΔU||, ||\hat{L}||, and ||\hat{U}||. Because

    \hat{L} = \begin{pmatrix} 1 & & & \\ \hat{m}_{21} & 1 & & \\ \vdots & \ddots & \ddots & \\ \hat{m}_{n1} & \cdots & \hat{m}_{n,n-1} & 1 \end{pmatrix},

from (14.13) we obtain

    |ΔL| <= 1.06µ \begin{pmatrix} 2 & & & \\ 3|\hat{m}_{21}| & 2 & & \\ \vdots & \ddots & \ddots & \\ (n + 1)|\hat{m}_{n1}| & \cdots & 3|\hat{m}_{n,n-1}| & 2 \end{pmatrix}.   (14.38)

Assuming partial pivoting, i.e., |\hat{m}_{ik}| <= 1 for k = 1, 2, ..., n - 1, i = k + 1, ..., n, we have

    ||\hat{L}||_∞ <= n                                            (14.39)

and

    ||ΔL||_∞ <= \frac{n(n + 3)}{2}(1.06µ).                        (14.40)

Similarly,

    ||\hat{U}||_∞ <= naρ   (note that \hat{U} = A^{(n-1)}),        (14.41)

and, using (14.15),

    ||ΔU||_∞ <= 1.06aρµ \frac{n(n + 3)}{2}   (note that \max_{i,j}|u_{ij}| <= aρ).   (14.42)

Also recall that

    ||E||_∞ <= n^2 aρ \frac{µ}{1 - µ}.                            (14.43)

Assume that n^2µ <= 1 (a very reasonable assumption in practice). Then

    ||ΔL||_∞ ||ΔU||_∞ <= n^2 ρaµ.                                 (14.44)

Using (14.39)-(14.44) in (14.37), we have

    ||F||_∞ <= ||E||_∞ + ||ΔL||_∞ ||\hat{U}||_∞ + ||\hat{L}||_∞ ||ΔU||_∞ + ||ΔL||_∞ ||ΔU||_∞
            <= n^2 aρ \frac{µ}{1 - µ} + 1.06n^2(n + 3)aρµ + n^2 ρaµ.   (14.45)

Since µ/(1 - µ) ≈ µ and a <= ||A||_∞, from (14.45) we can write

    ||F||_∞ <= 1.06(n^3 + 5n^2)ρ ||A||_∞ µ.                       (14.46)

Neglecting the n^2 term in comparison with the n^3 term, we have the following result.

Theorem 14.10. The computed solution \hat{x} to the linear system Ax = b using Gaussian elimination with partial pivoting satisfies a perturbed system

    (A + F)\hat{x} = b,

where F is defined by (14.37) and ||F||_∞ <= cn^3 ρ ||A||_∞ µ for a small constant c.

Remarks. 1. The preceding bound for ||F|| is grossly overestimated; in practice it is very rarely attained. Wilkinson (1965) states that in practice ||F|| is almost always less than or equal to nµ||A||.
2. Making use of (14.13), (14.15), and (14.34), we can also obtain an elementwise bound for F (Exercise 14.7).
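An empirical look (ours, using SciPy's GEPP-based solver) at how pessimistic the bound of Theorem 14.10 is in practice:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(2)
n = 200
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = lu_solve(lu_factor(A), b)       # GEPP + forward/back substitution
mu = np.finfo(float).eps / 2
# Normwise relative backward error of the computed solution.
eta = np.linalg.norm(b - A @ x, np.inf) / (
      np.linalg.norm(A, np.inf) * np.linalg.norm(x, np.inf))
print(eta / mu)                     # a modest multiple of mu, nowhere near n**3
```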

14.5 Review and Summary

In this chapter, some special topics have been discussed. They include QR factorization with column pivoting, updating of a QR factorization, and error analyses for LU factorization and the solution of linear systems.

14.5.1 QR Factorization with Column Pivoting

If A is a rank-deficient matrix, its QR factorization can no longer be used to determine an orthonormal basis of R(A). However, in this case a variation of QR factorization, called QR factorization with column pivoting and given by

    Q^T A P = \begin{pmatrix} R_{11} & R_{12} \\ 0 & 0 \end{pmatrix},

can be used; see Theorem 14.1 for a proof. In exact arithmetic this factorization would reveal the rank of A. However, if A is nearly rank-deficient, then a nonzero matrix R_{22} might appear in place of the last zero block in the above factorization; a factorization that controls the size of R_{22} is known as a rank-revealing QR factorization. A bound on ||R_{22}|| in terms of the singular values of A was provided in Section 14.2.

14.5.2 QR Updating

Given the QR factorization of A, the QR updating problem is that of finding the QR factorization of the augmented matrix (A, a_{k+1}), where a_{k+1} is an arbitrary column vector, by making use of the given QR factorization of A. A method for QR updating was described in Algorithm 14.1.

14.5.3 Error Analysis

Backward error analyses have been given for the following computations:

1. solution of lower and upper triangular systems by forward elimination and back substitution (Theorems 14.5 and 14.6);
2. LU factorization by Gaussian elimination without and with partial pivoting (Theorems 14.7 and 14.9);
3. solution of the linear system Ax = b by Gaussian elimination with partial pivoting followed by forward elimination and back substitution (Theorem 14.10).

Bounds for the error matrix E in each case have been derived in (14.13), (14.15), (14.34), and (14.46). We have merely attempted here to give the reader a taste of round-off error analysis, as the title of the section suggests. The results of this chapter are already known to the reader: they were stated earlier in the book without proofs, and we have tried to give formal proofs here. To repeat, these results say that the forward elimination and back substitution methods for triangular systems are backward stable, whereas the stability of the Gaussian elimination process for LU factorization, and therefore for the linear system problem Ax = b using that process, depends upon the size of the growth factor.

14.5.4 Suggestions for Further Reading

For more on rank-revealing factorizations, computational algorithms, and their applications, see Chan (1987), Chan and Hansen (1992), Foster (1986), and Chandrasekaran and Ipsen

(1994). Li and Zeng (2005) and Lee, Li, and Zeng (2009) have discussed rank-revealing methods with updating and downdating and their applications. See Daniel et al. (1976) for updating of the QR factorization using the Gram-Schmidt process; for downdating of the Cholesky factorization, see Bojanczyk et al. (1987) and Eldén and Park (1994). See also the earlier papers by Gill et al. (1974) and Nazareth (1989) on methods for modifying matrix factorizations. For details of round-off errors and backward stability, see Wilkinson's classics (1963, 1965) and the book by Higham (2002); also see Ortega (1990) and Forsythe and Moler (1967).

Exercises on Chapter 14

14.1 Compute the QR factorization with column pivoting and find an orthonormal basis for R(A) for each of the following matrices:

    (a) A = ..., (b) A = ..., (c) A = ..., (d) A = ..., (e) A = ....

14.2 Give a proof of the complete orthogonalization theorem (Theorem 14.3) starting from the QR column pivoting theorem (Theorem 14.1).

14.3 Work out an algorithm to modify the QR factorization of a matrix A from which a column has been removed.

14.4 Consider the problem of solving the linear system Ax = b using Gaussian elimination with partial pivoting for each of the matrices from Exercise 14.1, taking b to be the vector with all entries equal to 1 in each case. Find F in each case such that the computed solution \hat{x} satisfies (A + F)\hat{x} = b, and compare the bounds predicted by (14.46) with the actual errors.

14.5 Using β = 10 and t = 2, compute the LU factorization using Gaussian elimination (without pivoting) for the following matrices, and find the error matrix E in each case such that A + E = LU:

    (a) A = ..., (b) A = ..., (c) A = ..., (d) A = ..., (e) A = ....

14.6 Suppose now that partial pivoting has been used in computing the LU factorization of each of the matrices of Exercise 14.5. Find again the error matrix E in each case, and compare the bounds on the entries of E predicted by (14.34) with the actual errors.

14.7 Making use of (14.13), (14.15), and (14.34), find an elementwise bound for F in Theorem 14.10 satisfying (A + F)\hat{x} = b.

14.8 From Theorems 14.5 and 14.6, show that the processes of forward elimination and back substitution for lower and upper triangular systems, respectively, are backward stable.

14.9 From (14.35), conclude that the backward stability of Gaussian elimination is essentially determined by the size of the growth factor ρ.

14.10 Consider the problem of evaluating the polynomial p(α) = a_n α^n + a_{n-1} α^{n-1} + \cdots + a_0 by synthetic division:

    p_n = a_n,   p_{i-1} = fl(αp_i + a_{i-1}),   i = n, n - 1, ..., 1.

Then p_0 = p(α). Show that the computed value satisfies

    \hat{p}_0 = a_n(1 + δ_n)α^n + a_{n-1}(1 + δ_{n-1})α^{n-1} + \cdots + a_0(1 + δ_0),

and find a bound for each δ_i, i = 0, 1, ..., n. What can you say about the backward stability of the algorithm from your bounds?
