© 2010 Society for Industrial and Applied Mathematics
SIAM J. MATRIX ANAL. APPL.
Vol. 31, No. 5, pp. 2841–2859

RIGOROUS PERTURBATION BOUNDS OF SOME MATRIX FACTORIZATIONS

X.-W. CHANG AND D. STEHLÉ

Abstract. This article presents rigorous normwise perturbation bounds for the Cholesky, LU, and QR factorizations with normwise or componentwise perturbations in the given matrix. The considered componentwise perturbations have the form of backward rounding errors for the standard factorization algorithms. The approach used is a combination of the classic and refined matrix equation approaches. Each of the new rigorous perturbation bounds is a small constant multiple of the corresponding first-order perturbation bound obtained by the refined matrix equation approach in the literature and can be estimated efficiently. These new bounds can be much tighter than the existing rigorous bounds obtained by the classic matrix equation approach, while the conditions for the former to hold are almost as moderate as the conditions for the latter to hold.

Key words. perturbation analysis, normwise perturbation, componentwise perturbation, Cholesky factorization, LU factorization, QR factorization, column and row scaling, first-order bounds, rigorous bounds

AMS subject classifications. 15A23, 65F35

1. Introduction. Let $A$ be a given matrix with a factorization

(1.1)  $A = BC$.

Suppose that $A$ is perturbed to $A + \Delta A$, where a normwise or componentwise bound on $\Delta A$ is known, and let the same factorization for $A + \Delta A$ be

(1.2)  $A + \Delta A = (B + \Delta B)(C + \Delta C)$.

The aim of a perturbation analysis is to assess the effects of $\Delta A$ on $\Delta B$ and $\Delta C$: in the analysis, normwise or componentwise bounds on $\Delta B$ and $\Delta C$ are derived. The perturbation theory of matrix factorizations has been extensively studied. The following table summarizes the relevant works on perturbation bounds of the Cholesky, LU, and QR factorizations which are known to the authors.

  P   B    Cholesky                      LU                           QR
  N   FN   [2], [8], [18], [19], [20]    [2], [6], [12], [18], [19]   [2], [9], [18], [20], [23]
  N   RN   [8], [12], [13], [17], [20]   [1], [12]                    [17], [20], [23]
  C   FN   [4]                           [5]                          [7], [25]
  C   RN   [3], [13]                                                  [7], [10]
  C   FC   [3]                           [5]                          [7]
  C   RC   [1], [21], [22]               [1], [22]                    [22]

In the first column, P stands for the type of perturbation in the matrix to be factorized, and N and C stand for normwise perturbation and componentwise perturbation, respectively; in the second column, B stands for perturbation bound of the factor, and FN, RN, FC, and RC stand for first-order normwise perturbation bound, rigorous normwise perturbation bound, first-order componentwise perturbation bound, and rigorous componentwise perturbation bound, respectively.

(Received by the editors November 30, 2009; accepted for publication in revised form by R.-C. Li September 29, 2010; published electronically November 24, 2010. First author: School of Computer Science, McGill University, Montreal, QC, H3A 2A7, Canada (chang@cs.mcgill.ca); the work of this author was supported by NSERC of Canada grant RGPIN. Second author: CNRS, Laboratoire LIP (U. Lyon, CNRS, ENS de Lyon, INRIA, UCBL), École Normale Supérieure de Lyon, 46 Allée d'Italie, Lyon Cedex 07, France (damien.stehle@ens-lyon.fr).)

In the present article, we call a bound rigorous if it does not neglect any higher-order terms as a first-order bound does: under appropriate assumptions, it always holds true.

Two types of approaches are often used to derive normwise perturbation bounds. One is the matrix-vector equation approach, and the other is the matrix equation approach; see [3]. Here we give a brief explanation of these two approaches in the context of first-order analysis. From (1.1) and (1.2) we have, by dropping the second-order term, that

(1.3)  $\Delta A \approx B\,\Delta C + \Delta B\,C$.

The basic idea of the matrix-vector equation approach is to write this approximate matrix equation (1.3) as a matrix-vector equation by using the special structures and properties of the involved matrices, then obtain vector-type expressions for $\Delta B$ and $\Delta C$, from which normwise bounds on $\Delta B$ and $\Delta C$ can be derived. The approach can be extended to obtain rigorous bounds. This approach usually leads to sharp bounds, but the bounds (first-order or rigorous) are expensive to estimate, and the conditions for the rigorous bounds to hold are often too restrictive and complicated.

The matrix equation approach comes in two flavors. The classic matrix equation approach keeps (1.3) in matrix-matrix form and derives bounds on $\Delta B$ and $\Delta C$. The approach can be extended to obtain rigorous bounds. The bounds (first-order or rigorous) can be efficiently estimated, and the conditions for the rigorous bounds to hold are less restrictive and simpler, but the bounds are usually not tight. The refined matrix equation approach additionally uses row or column scaling techniques. It has mainly been used to derive first-order bounds, which numerical experiments showed are often good approximations to the sharp first-order bounds derived by the matrix-vector equation approach.

It is often unclear whether a first-order bound is a good approximate bound, as the ignored higher-order terms may dominate the true perturbation (see, e.g., Remark 5.1). Furthermore, in some applications rigorous bounds are needed in order to certify the accuracy of computations; see, e.g., [10, 15] for an application with the QR factorization and [16] for an application with the Cholesky factorization. The present article aims at providing tight rigorous perturbation bounds for the Cholesky, LU, and QR factorizations, which can be efficiently estimated in $O(n^2)$ flops, where $n$ is the number of columns of the matrix to be factorized. Additionally, the conditions for the bounds to hold are simple and moderate. We consider both normwise and componentwise perturbations in the matrix to be factorized. The componentwise perturbations have the form of backward errors resulting from standard factorization algorithms. In [10] we obtained such a rigorous bound for the R-factor of the QR factorization under a componentwise perturbation which has the form of backward rounding errors of standard QR factorization algorithms. The approach used in the latter work is actually a combination of the classic and refined matrix equation approaches. We will use a similar approach in this article.

The rest of this article is organized as follows. In section 2, we introduce notation and give some basics that will be necessary for the following three sections. Sections 3, 4, and 5 are devoted to the Cholesky, LU, and QR factorizations, respectively. Finally, a summary is given in section 6.
2. Notation and basics. For a matrix $X \in \mathbb{R}^{n\times n}$, we use $X(i,:)$ and $X(:,j)$ to denote its $i$th row and $j$th column, respectively, and use $X_k$ to denote its $k\times k$ leading principal submatrix. We define a lower triangular matrix and two upper triangular matrices associated with $X \in \mathbb{R}^{n\times n}$ as follows:

(2.1)  $\mathrm{slt}(X) = (s_{ij})$, with $s_{ij} = x_{ij}$ if $i > j$, and $s_{ij} = 0$ otherwise,

(2.2)  $\mathrm{ut}(X) = X - \mathrm{slt}(X)$,

(2.3)  $\mathrm{up}(X) = (s_{ij})$, with $s_{ij} = x_{ij}$ if $i < j$, $s_{ij} = \tfrac12 x_{ij}$ if $i = j$, and $s_{ij} = 0$ otherwise.

For any absolute matrix norm (i.e., $\|A\| = \||A|\|$ for any $A$), we have

(2.4)  $\|\mathrm{slt}(X)\| \le \|X\|, \qquad \|\mathrm{ut}(X)\| \le \|X\|, \qquad \|\mathrm{up}(X)\| \le \|X\|$.

Let $\mathcal{D}_n$ denote the set of all real $n\times n$ positive definite diagonal matrices. We will use the following properties, which hold for any $D \in \mathcal{D}_n$:

(2.5)  $\mathrm{slt}(DX) = D\,\mathrm{slt}(X), \qquad \mathrm{ut}(XD) = \mathrm{ut}(X)D, \qquad \mathrm{up}(XD) = \mathrm{up}(X)D$.

It can be verified that if $X^T = X$, then

(2.6)  $\|\mathrm{up}(X)\|_F \le \tfrac{1}{\sqrt2}\,\|X\|_F$.

It is proved in [9, Lemma 5.1] that, for any $D = \mathrm{diag}(\delta_1,\ldots,\delta_n) \in \mathcal{D}_n$,

(2.7)  $\big\|\mathrm{up}(X) + D^{-1}\big(\mathrm{up}(X)\big)^T D\big\|_F \le \rho_D\,\|X\|_F, \qquad \rho_D = \Big[\tfrac12\Big(1+\max_{1\le i<j\le n}(\delta_j/\delta_i)^2\Big)\Big]^{1/2}$.

For any matrix $X \in \mathbb{R}^{m\times n}$ and any consistent matrix norm $\|\cdot\|_\nu$, we define

$\kappa_\nu(X) = \|X\|_\nu\,\|X^{\dagger}\|_\nu, \qquad \mathrm{cond}_\nu(X) = \big\||X^{\dagger}|\,|X|\big\|_\nu$,

where $X^{\dagger}$ is the Moore–Penrose pseudo-inverse of $X$. The following well-known results are due to van der Sluis [24].

Lemma 2.1. Let $S, T \in \mathbb{R}^{n\times n}$ with $S$ nonsingular, and define $D_{r,p} = \mathrm{diag}\big(\|S(i,:)\|_p\big)$ and $D_{c,p} = \mathrm{diag}\big(\|S(:,j)\|_p\big)$ for $p = 1, 2$. Then

(2.8)  $\|TD_{r,1}\|_1\,\|D_{r,1}^{-1}S\|_1 = \min_{D\in\mathcal{D}_n} \|TD\|_1\,\|D^{-1}S\|_1$,

(2.9)  $\|SD_{c,1}^{-1}\|_1\,\|D_{c,1}T\|_1 = \min_{D\in\mathcal{D}_n} \|SD^{-1}\|_1\,\|DT\|_1$,

(2.10) $\|TD_{r,2}\|_2\,\|D_{r,2}^{-1}S\|_2 \le \sqrt{n}\,\inf_{D\in\mathcal{D}_n} \|TD\|_2\,\|D^{-1}S\|_2$,

(2.11) $\|SD_{c,2}^{-1}\|_2\,\|D_{c,2}T\|_2 \le \sqrt{n}\,\inf_{D\in\mathcal{D}_n} \|SD^{-1}\|_2\,\|DT\|_2$.
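As a concrete, purely illustrative check of Lemma 2.1, the Python/NumPy sketch below (written for this transcription, with names of our own choosing; it is not part of the paper) verifies (2.9) numerically: with $D_{c,1}$ built from the column 1-norms of $S$, the product $\|SD_{c,1}^{-1}\|_1\,\|D_{c,1}T\|_1$ is not beaten by any other positive diagonal scaling.

```python
import numpy as np

# Illustrative numerical check of (2.9) in Lemma 2.1 (van der Sluis):
# with D = D_{c,1} = diag of the column 1-norms of S, the product
# ||S D^{-1}||_1 * ||D T||_1 attains its minimum over positive diagonal D.
rng = np.random.default_rng(0)
n = 5
S = np.triu(rng.standard_normal((n, n))) + 5.0 * np.eye(n)   # nonsingular triangular
S = S @ np.diag(10.0 ** np.arange(n))                        # poor column scaling
T = np.linalg.inv(S)

def product_1norm(d):
    """||S diag(d)^{-1}||_1 * ||diag(d) T||_1  (matrix 1-norms)."""
    return np.linalg.norm(S / d, 1) * np.linalg.norm(d[:, None] * T, 1)

d_c1 = np.abs(S).sum(axis=0)        # column 1-norms of S -> D_{c,1}
best = product_1norm(d_c1)

# The columns of S D_{c,1}^{-1} all have unit 1-norm ...
assert abs(np.linalg.norm(S / d_c1, 1) - 1.0) < 1e-12
# ... and no random positive diagonal scaling does better.
for _ in range(200):
    d = np.exp(2.0 * rng.standard_normal(n))
    assert best <= product_1norm(d) * (1 + 1e-12)
```

The 1-norm statements (2.8)–(2.9) give exact minimality; for the 2-norm variants (2.10)–(2.11), the analogous equilibration is only guaranteed to be within a factor $\sqrt n$ of the infimum.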
This lemma indicates that if one wants to estimate the rightmost sides of (2.8)–(2.11), one can select an appropriate scaling matrix and estimate the norms of the scaled matrices $T$ and $S$. If both $S$ and $T$ are available, or if only $S$ is available but $S$ is triangular and $T = DS^{-1}$ (in (2.8) and (2.10)) or $T = S^{-1}D$ (in (2.9) and (2.11)) for a known $D \in \mathcal{D}_n$, then the above estimations can be done by norm estimators in $O(n^2)$ flops; see, e.g., [14, Chap. 15]. These results can be used to estimate the perturbation bounds to be presented.

Finally, we give the following basic result, which will be used many times in later sections.

Lemma 2.2. Let $a, b > 0$, and let $c$ be a continuous function of a parameter $t \in [0,1]$ such that $b^2 - 4a\,c(t) > 0$ holds for all $t$. Suppose that a continuous function $x(t)$ satisfies the quadratic inequality

$a\,x(t)^2 - b\,x(t) + c(t) \ge 0$.

If $c(0) = x(0) = 0$, then

$x(1) \le \frac{1}{2a}\Big(b - \sqrt{b^2 - 4a\,c(1)}\Big)$.

Proof. The two roots of $a\,x(t)^2 - b\,x(t) + c(t) = 0$ are

$x_1(t) = \frac{1}{2a}\Big(b - \sqrt{b^2-4a\,c(t)}\Big), \qquad x_2(t) = \frac{1}{2a}\Big(b + \sqrt{b^2-4a\,c(t)}\Big)$.

Notice that $x_1(t) < x_2(t)$ and both are continuous. Since $a\,x(t)^2 - b\,x(t) + c(t) \ge 0$, we have either $x(t) \le x_1(t)$ or $x(t) \ge x_2(t)$. But $x(t)$ is continuous and $x(0) = c(0) = 0$, so that $x(0) = x_1(0) < x_2(0)$, and therefore we must have $x(t) \le x_1(t)$ for all $t$.

3. Cholesky factorization. We first present rigorous perturbation bounds for the Cholesky factor when the given symmetric positive definite matrix has a general normwise perturbation.

Theorem 3.1. Let $A \in \mathbb{R}^{n\times n}$ be symmetric positive definite with the Cholesky factorization $A = R^TR$, where $R \in \mathbb{R}^{n\times n}$ is upper triangular with positive diagonal entries, and let $\Delta A \in \mathbb{R}^{n\times n}$ be symmetric. If

(3.1)  $\kappa_2(A)\,\frac{\|\Delta A\|_F}{\|A\|_2} < \frac12$,

then $A + \Delta A$ has the unique Cholesky factorization

(3.2)  $A + \Delta A = (R + \Delta R)^T(R + \Delta R)$,

where

(3.3)  $\frac{\|\Delta R\|_2}{\|R\|_2} \le \frac{(2+\sqrt2)\,\kappa_2(R)\big[\inf_{D\in\mathcal{D}_n}\kappa_2(D^{-1}R)\big]\,\frac{\|\Delta A\|_F}{\|A\|_2}}{1+\sqrt{1-2\,\kappa_2(A)\frac{\|\Delta A\|_F}{\|A\|_2}}}$,

(3.4)  $\frac{\|\Delta R\|_2}{\|R\|_2} \le (2+\sqrt2)\,\kappa_2(R)\inf_{D\in\mathcal{D}_n}\kappa_2(D^{-1}R)\,\frac{\|\Delta A\|_F}{\|A\|_2}$.

Proof. From the condition (3.1), $\|A^{-1}\Delta A\|_2 \le \kappa_2(A)\|\Delta A\|_2/\|A\|_2 < 1$. Thus the matrix $A + t\Delta A$ for $t\in[0,1]$ is symmetric positive definite and has the unique Cholesky factorization

(3.5)  $A + t\Delta A = \big(R + \Delta R(t)\big)^T\big(R + \Delta R(t)\big)$,

which, with $\Delta R(1) = \Delta R$, leads to (3.2). Notice that $\Delta R(t)$ is a continuous function of $t$. From (3.5) we obtain

(3.6)  $R^{-T}\Delta R(t)^T + \Delta R(t)R^{-1} = t\,R^{-T}\Delta A R^{-1} - R^{-T}\Delta R(t)^T\Delta R(t)R^{-1}$.

As $\Delta R(t)R^{-1}$ is upper triangular, it follows from (2.3) that

(3.7)  $\Delta R(t)R^{-1} = \mathrm{up}\big(t\,R^{-T}\Delta A R^{-1} - R^{-T}\Delta R(t)^T\Delta R(t)R^{-1}\big)$.

Taking the Frobenius norm on both sides of (3.7) and using the inequality (2.6) and the fact that $\|A^{-1}\|_2 = \|R^{-1}\|_2^2$, we obtain

$\|\Delta R(t)R^{-1}\|_F \le \frac{1}{\sqrt2}\,\big\|t\,R^{-T}\Delta A R^{-1} - R^{-T}\Delta R(t)^T\Delta R(t)R^{-1}\big\|_F \le \frac{1}{\sqrt2}\Big(t\,\|A^{-1}\|_2\|\Delta A\|_F + \|\Delta R(t)R^{-1}\|_F^2\Big)$.

Therefore, as the assumption (3.1) guarantees that the condition of Lemma 2.2 holds, we have by Lemma 2.2 that

(3.10)  $\|\Delta R R^{-1}\|_F \le \frac{\sqrt2\,\|A^{-1}\|_2\|\Delta A\|_F}{1+\sqrt{1-2\,\|A^{-1}\|_2\|\Delta A\|_F}}$.

Taking $t=1$ in (3.7), multiplying both sides by a diagonal $D\in\mathcal{D}_n$ from the right, and using the fact that $\mathrm{up}(XD) = \mathrm{up}(X)D$ (see (2.5)), we have

(3.11)  $\Delta R R^{-1}D = \mathrm{up}\big(R^{-T}\Delta A R^{-1}D - R^{-T}\Delta R^T\Delta R R^{-1}D\big)$.

Taking the Frobenius norm on both sides of (3.11) and using $\|\mathrm{up}(X)\|_F \le \|X\|_F$ (see (2.4)), we obtain

$\|\Delta RR^{-1}D\|_F \le \|R^{-1}\|_2\,\|R^{-1}D\|_2\,\|\Delta A\|_F + \|\Delta RR^{-1}\|_F\,\|\Delta RR^{-1}D\|_F$.

Then it follows by using (3.10) that

$\|\Delta RR^{-1}D\|_F \le \frac{(2+\sqrt2)\,\|R^{-1}\|_2\,\|R^{-1}D\|_2\,\|\Delta A\|_F}{1+\sqrt{1-2\,\|A^{-1}\|_2\|\Delta A\|_F}}$.

Therefore,

$\frac{\|\Delta R\|_2}{\|R\|_2} \le \frac{\|\Delta RR^{-1}D\|_F\,\|D^{-1}R\|_2}{\|R\|_2} \le \frac{(2+\sqrt2)\,\kappa_2(R)\,\kappa_2(D^{-1}R)\,\frac{\|\Delta A\|_F}{\|A\|_2}}{1+\sqrt{1-2\,\kappa_2(A)\frac{\|\Delta A\|_F}{\|A\|_2}}}$.

Since $D\in\mathcal{D}_n$ is arbitrary and $\|A\|_2 = \|R\|_2^2$, we have (3.3) and then (3.4).

Now we make some remarks to show the relations between the new results and existing results in the literature.

Remark 3.1. In [8], the following first-order perturbation bound, which can be estimated in $O(n^2)$ flops, was derived:

(3.12)  $\frac{\|\Delta R\|_2}{\|R\|_2} \le \kappa_2(R)\Big[\inf_{D\in\mathcal{D}_n}\kappa_2(D^{-1}R)\Big]\,\frac{\|\Delta A\|_F}{\|A\|_2} + O\Big(\Big(\frac{\|\Delta A\|_F}{\|A\|_2}\Big)^2\Big)$.
6 846 X-W CHANG AND D STEHLÉ Note that the difference between this first-order bound and the rigorous bound 34 is afactorof+ Numerical experiments indicated that 31 is a good approximation to the optimal first-order bound derived by the matrix-vector equation approach in [8]: R κ C A ΔA F A ΔA + O F A, where 313 [ ] 1 κ R κ C A κ R inf κ D R D D n The expression of κ C A involvesan nn+1 nn+1 lower triangular matrix defined by the entries of R The best known method to estimate it requires On 3 flops; see [8, Remark 6] If the standard symmetric pivoting strategy is used in computing the Cholesky factorization, the quantity inf D Dn κ D R is bounded by a function of n; see [8, sections 4 and 5] Remark 3 One of the rigorous bounds derived by the classic matrix equation approach presented in [0] is as follows: 314 R κ A ΔA F A 1+ 1 κ A ΔA F A under the same condition as 31 If we take D = I in 33, we obtain 315 R κ A ΔA F A 1+ 1 κ A ΔA F A Comparing 315 with 314, we observe that the new rigorous bound 33 is at most + 1 times as large as 314 But κ Rinf D Dn κ D R can be much smaller than κ A whenr has bad row scaling For example, for R = diag1,γ with large γ>0, κ Rκ D R=Θγ with D = diag1,γ, and κ A =Θγ Thus the bound 33 can be much tighter than 314 Remark 33 In [8, Theorem 9], the following rigorous perturbation bound was derived by the matrix-vector equation approach: 316 R κ C A ΔA F A By 313, the new bound 34 is not as tight as this bound, but no numerical experiment has indicated that the former can be significantly larger than the latter; see Remark 31 As we mentioned in Remark 31, it is more expensive to estimate the latter than the former A more serious problem with 316 is that the condition for it to hold given in [8, Theorem 9] can be as bad as κ A ΔA C F / A < 1/4 This is much more constraining than the condition 31 if inf D Dn κ D R is not bounded by a constant; see 313 For example, for R = [ ] 1 γ 01 with large γ>0, κ A C =Θγ4 and κ A =Θγ
Remark 3.4. In [3, Theorem 2.8], the following rigorous bound was derived by the refined matrix equation approach:

(3.17)  $\frac{\|\Delta R\|_2}{\|R\|_2} \le 2\,\kappa_2(R)\,\kappa_2(D^{-1}R)\,\frac{\|\Delta A\|_F}{\|A\|_2}$

under the condition

(3.18)  $\kappa_2(R)\,\|R^{-1}\|_2\,\|RD^{-1}\|_2\,\|D\|_2\,\frac{\|\Delta A\|_F}{\|A\|_2} < \frac14$

for any $D\in\mathcal{D}_n$. Notice that $\kappa_2(R)\,\|R^{-1}\|_2\,\|RD^{-1}\|_2\,\|D\|_2 \ge \kappa_2^2(R) = \kappa_2(A)$. Thus the condition (3.18) is not only more complicated but also more constraining than the condition (3.1). If we want to make the bound (3.17) similar to the new bound (3.4), then we may minimize $\kappa_2(D^{-1}R)$ over the set $\mathcal{D}_n$. But the optimal choice of $D$ may make the condition (3.18) much more constraining than the condition (3.1). Here is an example. Let

$R = \begin{bmatrix} 1 & \gamma & \gamma^2 \\ 0 & \gamma & \gamma^2 \\ 0 & 0 & \gamma \end{bmatrix}$

with large $\gamma>0$. By (2.10), $D_{r,2} = \mathrm{diag}\big(\sqrt{1+\gamma^2+\gamma^4},\ \sqrt{\gamma^2+\gamma^4},\ \gamma\big)$ is an approximately optimal $D$. It is easy to verify that $\kappa_2(R)\,\|R^{-1}\|_2\,\|RD_{r,2}^{-1}\|_2\,\|D_{r,2}\|_2 = \Theta(\gamma^5)$ and $\kappa_2(A)=\Theta(\gamma^4)$. Thus the former can be arbitrarily larger than the latter.

In the following we present rigorous perturbation bounds for the Cholesky factor when the perturbation $\Delta A$ has the form we could expect for the backward error in $A$ resulting from a standard Cholesky factorization algorithm (see [11] and [14, section 10.1]).

Theorem 3.2. Let $A\in\mathbb{R}^{n\times n}$ be symmetric positive definite with the Cholesky factorization $A = R^TR$, and let $A = D_cHD_c$ with $D_c = \mathrm{diag}(a_{11}^{1/2},\ldots,a_{nn}^{1/2})$. Let $\Delta A\in\mathbb{R}^{n\times n}$ be symmetric such that $|\Delta A| \le \varepsilon\,dd^T$ for some constant $\varepsilon$ and $d = [a_{11}^{1/2},\ldots,a_{nn}^{1/2}]^T$. If

(3.19)  $n\,\|H^{-1}\|_2\,\varepsilon < \frac12$,

then $A+\Delta A$ has the unique Cholesky factorization

(3.20)  $A+\Delta A = (R+\Delta R)^T(R+\Delta R)$,

where

(3.21)  $\frac{\|\Delta R\|_2}{\|R\|_2} \le \frac{(2+\sqrt2)\,n\,\|D_cR^{-1}\|_2\,\inf_{D\in\mathcal{D}_n}\big(\|D_cR^{-1}D\|_2\,\|D^{-1}R\|_2\big)\,\varepsilon\,/\,\|R\|_2}{1+\sqrt{1-2\,n\,\|H^{-1}\|_2\,\varepsilon}}$,

(3.22)  $\frac{\|\Delta R\|_2}{\|R\|_2} \le (2+\sqrt2)\,n\,\frac{\|D_cR^{-1}\|_2\,\inf_{D\in\mathcal{D}_n}\big(\|D_cR^{-1}D\|_2\,\|D^{-1}R\|_2\big)}{\|R\|_2}\,\varepsilon$.

Proof. In the proof, we will use the following fact:

$\|D_c^{-1}\Delta A D_c^{-1}\|_F \le \varepsilon\,\|D_c^{-1}dd^TD_c^{-1}\|_F = \varepsilon\,\|ee^T\|_F = n\,\varepsilon$, where $e = [1,\ldots,1]^T$.

Note that the spectral radius of $A^{-1}\Delta A$ satisfies

$\rho(A^{-1}\Delta A) = \rho\big(D_c^{-1}H^{-1}D_c^{-1}\Delta A\big) = \rho\big(H^{-1}D_c^{-1}\Delta A D_c^{-1}\big) \le \|H^{-1}\|_2\,\|D_c^{-1}\Delta AD_c^{-1}\|_2 \le n\,\|H^{-1}\|_2\,\varepsilon < 1$.
8 848 X-W CHANG AND D STEHLÉ Thus, the matrix A + tδa for t [0, 1] is symmetric positive definite and has the unique Cholesky factorization 35, which, with ΔR1 = ΔR, leads to 30 From 37 we obtain 33 ΔRtR =up tr T D c Dc ΔADc D c R R T ΔRt T ΔRtR Then, using 6 and the fact that H = D c R,weobtain ΔRtR F 1 tn H ε + ΔRtR F Therefore, as the assumption 319 guarantees that the condition of Lemma holds, we have by Lemma that 34 ΔRR F n H ε Taking t = 1 in 33, multiplying both sides by a diagonal D D n from the right and then taking the Frobenius norm, we obtain ΔRR D F n D c R D c R D ε + ΔRR F ΔRR D F Then, using 34, we obtain ΔRR D F n Dc R D c R D ε 1+ 1 n H ε This, combined with the inequality ΔRR D F D R, leads to 31 and then 3 In the following we make some remarks, which are analogous to Remarks Remark 35 In [4] the following first-order perturbation bound, which can be estimated in On flops, was presented: R n D cr infd Dn D c R D D R R ε + Oε Note that the difference between the above first-order bound and the rigorous bound 3 is a factor of + cf Remark 3 Numerical experiments indicated that the above first-order bound is often a reasonable approximation to the nearly optimal first-order bound derived by the matrix-vector equation approach in [4]: R χ C Aε + Oε, where the first inequality below was proved in [3, Remark 35] 35 na 1/ nn A 1/ H 1/ χ C A n D cr infd Dn D c R D D R R The expression of χ C A involvesan nn+1 nn+1 lower triangular matrix defined by the entries of RDc and the best known estimator of χ C A requireson 3 flops Here we would like to point out that an example given in [3, Remark 39] shows that
9 PERTURBATION BOUNDS OF SOME MATRIX FACTORIZATIONS 849 in the second inequality in 35 the right-hand side can be arbitrarily larger than the left-hand side, although numerical tests have shown that usually the former is a reasonable approximation to the latter If the standard symmetric pivoting strategy is used in computing the Cholesky factorization, the quantity inf D Dn D c R D D R / R is bounded by a function of n; see [3, Theorem 38 and section 34] Remark 36 In [13] rigorous bounds on ΔRR F, were derived The bound on ΔRR F, which was credited to Ji-guang Sun, is identical to 34 under the identical condition 319 As mentioned in [13], the bound on can be obtained by using ΔRR F R, leading to n H R ε = If we take D = I in the bound in 31, we obtain n H ε 1+ 1 n H ε R n H ε 1+ 1 n H ε Thus the bound in 31 is at most + 1 times as large as the bound in 36 But D c R infd Dn D c R D [ D ] R / R in 31 can be much smaller than H γ γ For example, for R = 0 1 with large γ>0, DcR D cr D D R R = Θγ with D = diagγ,1, and H =Θγ Thus the bound 31 can be much tighter than the bound 36 Remark 37 In [3, Theorem 39], the following rigorous perturbation bound was derived by the matrix-vector equation approach: 37 R χ C Aε under the condition see [3, Theorem 39, Remark 34] 38 A n min i a ii χ C Aε <1 4 By the second inequality in 35, the new bound 3 is not as tight as 37 But, as we mentioned in Remark 35, estimating the latter is more expensive than estimating the former A more serious problem is that the condition 38 can be much more constraining than the condition 319 In fact, by the first inequality in 35, we have A n min i a ii χ CA A n min i a ii n a nn 4 A H 1 4 n H Thus, if a nn is much larger than min i a ii, then 38 is much more constraining than 319 Remark 38 In [3, Theorem 310], the following rigorous bound was derived by the refined matrix equation approach: 39 n D cr D c R D D R ε R R
10 850 X-W CHANG AND D STEHLÉ under the condition 330 n D c R D c R D D ε<1/4 for any D D n Notice that D c R D c R D D D c R = H Thus the condition 330 is not only more complicated but also more constraining than the condition 319 If we want to make the bound 39 similar to the new bound 3, then we may minimize D c R D D R over the set D n But the optimal choice of D may make the condition 330 much more constraining than the condition 319 Here is an example Let R = [ ] 11 0 γ with a large γ>0 By 10, D r =diag,γ is an approximate optimal D It is easy to verify that D c R D c R D r Dr =Θγ and H = Θ1 Thus the former can be arbitrarily larger than the latter 4 LU factorization We first present rigorous perturbation bounds for the LU factors when the given matrix has a general normwise perturbation Theorem 41 Let A R n n have nonsingular leading principal submatrices with the LU factorization A = LU, wherel R n n is unit lower triangular and U R n n is upper triangular, and let ΔA R n n be a small perturbation in A If 41 L U ΔA F < 1/4, then A +ΔA has the unique LU factorization 4 A +ΔA =L +ΔLU +ΔU, where ΔL F L F ΔU F U F infdl D n κ LD L 1+ U n A F L F ΔA F 1 4 L U ΔA F inf κ LD L D L D n infdu D n κ D U U n L F U L U F ΔA F, ΔA F L U ΔA F L inf κ D 46 U U D U D n U F Proof With the condition 41, we have for 1 k n, A k ΔA F ΔA k L k U k ΔA k F L U ΔA F < 1 Thus A k + tδa k for t [0, 1] is nonsingular In other words, all the leading principal submatrices of A + tδa are nonsingular Thus, the matrix A + tδa has a unique LU factorization 47 A + tδa =L +ΔLtU +ΔUt, which, with ΔL1 = ΔL and ΔU1 = ΔU, leads to 4 From 47, we obtain 48 L ΔLt+ΔUtU = tl ΔAU L ΔLtΔUtU
Notice that $L^{-1}\Delta L(t)$ is strictly lower triangular and $\Delta U(t)U^{-1}$ is upper triangular. Taking the Frobenius norm on both sides of (4.8), we obtain

(4.9)  $\|L^{-1}\Delta L(t) + \Delta U(t)U^{-1}\|_F \le t\,\|L^{-1}\|_2\,\|U^{-1}\|_2\,\|\Delta A\|_F + \|L^{-1}\Delta L(t)\|_F\,\|\Delta U(t)U^{-1}\|_F$.

Let $x(t) = \max\big\{\|L^{-1}\Delta L(t)\|_F,\ \|\Delta U(t)U^{-1}\|_F\big\}$. Then we have

$x(t) \le \|L^{-1}\Delta L(t)+\Delta U(t)U^{-1}\|_F, \qquad \|L^{-1}\Delta L(t)\|_F\,\|\Delta U(t)U^{-1}\|_F \le x(t)^2$.

Thus, from (4.9) it follows that

$x(t)^2 - x(t) + t\,\|L^{-1}\|_2\,\|U^{-1}\|_2\,\|\Delta A\|_F \ge 0$.

The assumption (4.1) ensures that the condition of Lemma 2.2 is satisfied. Therefore, by Lemma 2.2 we obtain

(4.10)  $\max\big\{\|L^{-1}\Delta L\|_F,\ \|\Delta UU^{-1}\|_F\big\} \le \frac{2\,\|L^{-1}\|_2\,\|U^{-1}\|_2\,\|\Delta A\|_F}{1+\sqrt{1-4\,\|L^{-1}\|_2\|U^{-1}\|_2\|\Delta A\|_F}}$.

We now derive perturbation bounds for the L-factor. Partition

$U = \begin{bmatrix} U_{n-1} & u \\ 0 & u_{nn}\end{bmatrix}, \qquad U^{-1} = \begin{bmatrix} U_{n-1}^{-1} & -U_{n-1}^{-1}u/u_{nn} \\ 0 & 1/u_{nn}\end{bmatrix}$.

From (4.8) with $t=1$ it follows, since the last column of a matrix does not affect its strictly lower triangular part, that

(4.11)  $L^{-1}\Delta L = \mathrm{slt}\Big(L^{-1}\Delta A\begin{bmatrix}U_{n-1}^{-1} & 0\\ 0&0\end{bmatrix}\Big) - \mathrm{slt}\big(L^{-1}\Delta L\,\Delta U\,U^{-1}\big)$.

Multiplying both sides of (4.11) from the left by $D_L^{-1}$ with $D_L\in\mathcal{D}_n$ and taking the Frobenius norm, we obtain

$\|D_L^{-1}L^{-1}\Delta L\|_F \le \|D_L^{-1}L^{-1}\|_2\,\|U_{n-1}^{-1}\|_2\,\|\Delta A\|_F + \|D_L^{-1}L^{-1}\Delta L\|_F\,\|\Delta UU^{-1}\|_F$.

Using (4.10), we have

$\|D_L^{-1}L^{-1}\Delta L\|_F \le \frac{2\,\|D_L^{-1}L^{-1}\|_2\,\|U_{n-1}^{-1}\|_2\,\|\Delta A\|_F}{1+\sqrt{1-4\,\|L^{-1}\|_2\|U^{-1}\|_2\|\Delta A\|_F}}$.

Combining the inequality $\|\Delta L\|_F \le \|LD_L\|_2\,\|D_L^{-1}L^{-1}\Delta L\|_F$ and the above inequality leads to (4.3) and then (4.4).

Now we derive perturbation bounds for the U-factor. From (4.8) with $t=1$,

(4.12)  $\Delta U U^{-1} = \mathrm{ut}\big(L^{-1}\Delta A\,U^{-1}\big) - \mathrm{ut}\big(L^{-1}\Delta L\,\Delta U\,U^{-1}\big)$.

Multiplying both sides of (4.12) from the right by a diagonal $D_U\in\mathcal{D}_n$ and taking the Frobenius norm, we obtain

$\|\Delta UU^{-1}D_U\|_F \le \|L^{-1}\|_2\,\|U^{-1}D_U\|_2\,\|\Delta A\|_F + \|L^{-1}\Delta L\|_F\,\|\Delta UU^{-1}D_U\|_F$.

Using (4.10), we have

$\|\Delta UU^{-1}D_U\|_F \le \frac{2\,\|L^{-1}\|_2\,\|U^{-1}D_U\|_2\,\|\Delta A\|_F}{1+\sqrt{1-4\,\|L^{-1}\|_2\|U^{-1}\|_2\|\Delta A\|_F}}$.

Therefore, with the inequality $\|\Delta U\|_F \le \|\Delta UU^{-1}D_U\|_F\,\|D_U^{-1}U\|_2$, we can obtain (4.5) and (4.6).
Remark 4.1. In [6] the following first-order perturbation bounds, which can be estimated in $O(n^2)$ flops, were presented:

(4.13)  $\frac{\|\Delta L\|_F}{\|L\|_F} \le \inf_{D_L\in\mathcal{D}_n}\kappa_2(LD_L)\,\frac{\|U_{n-1}^{-1}\|_2\,\|\Delta A\|_F}{\|L\|_F} + O\Big(\Big(\frac{\|\Delta A\|_F}{\|A\|_F}\Big)^2\Big)$,

(4.14)  $\frac{\|\Delta U\|_F}{\|U\|_F} \le \inf_{D_U\in\mathcal{D}_n}\kappa_2(D_U^{-1}U)\,\frac{\|L^{-1}\|_2\,\|\Delta A\|_F}{\|U\|_F} + O\Big(\Big(\frac{\|\Delta A\|_F}{\|A\|_F}\Big)^2\Big)$.

Note that the difference between the above first-order bound for the L-factor and the rigorous bound (4.4) is a factor of 2; the same holds for the U-factor as well. Numerical experiments have indicated that the above first-order bounds are good approximations to the corresponding optimal first-order bounds derived by the matrix-vector equation approach in [6]:

$\frac{\|\Delta L\|_F}{\|L\|_F} \le \kappa_L(A)\,\frac{\|\Delta A\|_F}{\|A\|_F} + O\Big(\Big(\frac{\|\Delta A\|_F}{\|A\|_F}\Big)^2\Big), \qquad \frac{\|\Delta U\|_F}{\|U\|_F} \le \kappa_U(A)\,\frac{\|\Delta A\|_F}{\|A\|_F} + O\Big(\Big(\frac{\|\Delta A\|_F}{\|A\|_F}\Big)^2\Big)$,

where

$\kappa_L(A) \le \inf_{D_L\in\mathcal{D}_n}\kappa_2(LD_L)\,\frac{\|U_{n-1}^{-1}\|_2\,\|A\|_F}{\|L\|_F}, \qquad \kappa_U(A) \le \inf_{D_U\in\mathcal{D}_n}\kappa_2(D_U^{-1}U)\,\frac{\|L^{-1}\|_2\,\|A\|_F}{\|U\|_F}$.

The expressions of $\kappa_L(A)$ and $\kappa_U(A)$ involve an $n^2\times n^2$ matrix defined by the entries of $L$ and $U$ and are expensive to estimate. To see how partial pivoting and complete pivoting affect the bounds in (4.13) and (4.14), we refer to [6, sections 4 and 5].

Remark 4.2. In [1] the following rigorous bounds were presented:

(4.15)  $\|\Delta L\|_F \le \frac{\|L\|_2\,\|L^{-1}\Delta AU^{-1}\|_F}{1-\|L^{-1}\Delta AU^{-1}\|_2}, \qquad \|\Delta U\|_F \le \frac{\|U\|_2\,\|L^{-1}\Delta AU^{-1}\|_F}{1-\|L^{-1}\Delta AU^{-1}\|_2}$

under the condition that $\|L^{-1}\Delta AU^{-1}\|_2 < 1$. If we know only $\|\Delta A\|_F$ or $\|\Delta A\|_2$ rather than $L^{-1}\Delta AU^{-1}$ (this is often the case), then the tightest bounds we can derive from (4.15) are as follows:

$\frac{\|\Delta L\|_F}{\|L\|_F} \le \frac{\kappa_2(L)\,\|U^{-1}\|_2\,\|\Delta A\|_F\,/\,\|L\|_F}{1-\|L^{-1}\|_2\|U^{-1}\|_2\|\Delta A\|_2}, \qquad \frac{\|\Delta U\|_F}{\|U\|_F} \le \frac{\kappa_2(U)\,\|L^{-1}\|_2\,\|\Delta A\|_F\,/\,\|U\|_F}{1-\|L^{-1}\|_2\|U^{-1}\|_2\|\Delta A\|_2}$,

where we assume $\|L^{-1}\|_2\,\|U^{-1}\|_2\,\|\Delta A\|_2 < 1$, which is a little less restrictive than (4.1). A comparison of these two bounds with (4.3) and (4.5) shows that the former can be much larger than the latter when $L$ has bad column scaling or $\|U^{-1}\|_2$ is much larger than $\|U_{n-1}^{-1}\|_2$ (for the L-factor), and when $U$ has bad row scaling (for the U-factor).

If Gaussian elimination is used for computing the LU factorization of $A$ and runs to completion, then the computed LU factors $\tilde L$ and $\tilde U$ satisfy

(4.16)  $A + \Delta A = \tilde L\tilde U, \qquad |\Delta A| \le \varepsilon\,|\tilde L|\,|\tilde U|$,

where $\varepsilon = nu/(1-nu)$, with $u$ being the unit roundoff; see, for example, [14, Theorem 9.3]. In the following theorem we consider a perturbation $\Delta A$ which has the same form as in (4.16). The perturbation bounds will involve the LU factors of $A+\Delta A$, unlike the other perturbation bounds given in this paper, which involve the factors of $A$; the reason is that the bound on $\Delta A$ in (4.16) involves the LU factors of $A+\Delta A$. The perturbation bounds will use a consistent absolute matrix norm (e.g., the 1-norm, the ∞-norm, and the F-norm), unlike the other bounds given in this paper, which use the F-norm or 2-norm.

Theorem 4.2. Suppose that $\Delta A\in\mathbb{R}^{n\times n}$ is a perturbation in $A\in\mathbb{R}^{n\times n}$ and that $A+\Delta A$ has nonsingular leading principal submatrices with the LU factorization satisfying (4.16). Let $\|\cdot\|$ denote a consistent absolute matrix norm. If

(4.17)  $\mathrm{cond}(\tilde L)\,\mathrm{cond}(\tilde U)\,\varepsilon < \frac14$,

then $A$ has the unique LU factorization $A = LU$. Let $\Delta L = \tilde L - L$ and $\Delta U = \tilde U - U$. Then

(4.18)  $\frac{\|\Delta L\|}{\|\tilde L\|} \le \frac{2\,\inf_{D_L\in\mathcal{D}_n}\big(\|\tilde LD_L\|\,\big\|D_L^{-1}|\tilde L^{-1}||\tilde L|\big\|\big)\,\mathrm{cond}(\tilde U_{n-1})\,\varepsilon\,/\,\|\tilde L\|}{1+\sqrt{1-4\,\mathrm{cond}(\tilde L)\,\mathrm{cond}(\tilde U)\,\varepsilon}}$,

(4.19)  $\frac{\|\Delta L\|}{\|\tilde L\|} \le 2\,\inf_{D_L\in\mathcal{D}_n}\big(\|\tilde LD_L\|\,\big\|D_L^{-1}|\tilde L^{-1}||\tilde L|\big\|\big)\,\frac{\mathrm{cond}(\tilde U_{n-1})}{\|\tilde L\|}\,\varepsilon$,

(4.20)  $\frac{\|\Delta U\|}{\|\tilde U\|} \le \frac{2\,\inf_{D_U\in\mathcal{D}_n}\big(\big\||\tilde U||\tilde U^{-1}|D_U\big\|\,\|D_U^{-1}\tilde U\|\big)\,\mathrm{cond}(\tilde L)\,\varepsilon\,/\,\|\tilde U\|}{1+\sqrt{1-4\,\mathrm{cond}(\tilde L)\,\mathrm{cond}(\tilde U)\,\varepsilon}}$,

(4.21)  $\frac{\|\Delta U\|}{\|\tilde U\|} \le 2\,\inf_{D_U\in\mathcal{D}_n}\big(\big\||\tilde U||\tilde U^{-1}|D_U\big\|\,\|D_U^{-1}\tilde U\|\big)\,\frac{\mathrm{cond}(\tilde L)}{\|\tilde U\|}\,\varepsilon$.

Proof. The proof is similar to the proof of Theorem 4.1, and we mainly reverse the roles of $A$ and $A+\Delta A$. Using the bound on $\Delta A$ in (4.16) and (4.17), we have, for $1\le k\le n$,

(4.22)  $\|\tilde L_k^{-1}\Delta A_k\tilde U_k^{-1}\| \le \varepsilon\,\big\||\tilde L_k^{-1}||\tilde L_k|\,|\tilde U_k||\tilde U_k^{-1}|\big\| \le \mathrm{cond}(\tilde L)\,\mathrm{cond}(\tilde U)\,\varepsilon < 1$.

For $t\in[0,1]$, $A_k + \Delta A_k - t\Delta A_k = \tilde L_k\tilde U_k - t\Delta A_k = \tilde L_k\big[I - t\,\tilde L_k^{-1}\Delta A_k\tilde U_k^{-1}\big]\tilde U_k$. Thus, by (4.22), the matrix $A_k+\Delta A_k-t\Delta A_k$ is nonsingular. Therefore $A+\Delta A-t\Delta A$ has the unique LU factorization

(4.23)  $A+\Delta A-t\Delta A = \big(\tilde L - \Delta L(t)\big)\big(\tilde U - \Delta U(t)\big)$,

which, with $\Delta L(1)=\Delta L$ and $\Delta U(1)=\Delta U$, gives the LU factorization $A = LU$. From (4.23), we obtain

(4.24)  $\tilde L^{-1}\Delta L(t) + \Delta U(t)\tilde U^{-1} = t\,\tilde L^{-1}\Delta A\,\tilde U^{-1} + \tilde L^{-1}\Delta L(t)\,\Delta U(t)\,\tilde U^{-1}$,

where $\tilde L^{-1}\Delta L(t)$ is strictly lower triangular and $\Delta U(t)\tilde U^{-1}$ is upper triangular. Taking the consistent absolute matrix norm on both sides of (4.24) and using the bound on $\Delta A$ in (4.16), we obtain

(4.25)  $\|\tilde L^{-1}\Delta L(t) + \Delta U(t)\tilde U^{-1}\| \le t\,\mathrm{cond}(\tilde L)\,\mathrm{cond}(\tilde U)\,\varepsilon + \|\tilde L^{-1}\Delta L(t)\|\,\|\Delta U(t)\tilde U^{-1}\|$.
14 854 X-W CHANG AND D STEHLÉ Let xt =max L ΔLt, ΔUtŨ Then we have xt L ΔLt+ΔUtŨ, L ΔLt ΔUtŨ xt Thus, from 45 it follows that xt xt +t cond LcondŨ ε 0 The assumption 417 ensures that the condition of Lemma is satisfied Therefore, by Lemma, 46 max L ΔL, ΔUŨ cond LcondŨ ε We now derive perturbation bounds for the L-factor Let [ Ũ n ] 0 ũ nn Similarly to 411, from 44 with t =1wehave [Ũ ] 47 L ΔL =slt L ΔA n 0 + slt L 0 0 ΔLΔUŨ Then, with the bound on ΔA in 416, from 47 we obtain that for any D L D n, D L L ΔL D L L L condũ n ε + D L L ΔL ΔUŨ Therefore, using 46, we have D L L ΔL D L L L condũ n ε cond LcondŨ ε Combining the inequality ΔL LD D L L L ΔL and the above inequality leads to 418 and then 419 Now we derive perturbation bounds for the U-factor From 44 with t =1,it follows that ΔUŨ =ut L ΔAŨ +ut L ΔLΔUŨ Then, for any D U D n, with 416 we obtain ΔUŨ D U cond L Ũ Ũ D U ε + L ΔL ΔUŨ D U Therefore, using 46, we have ũ ΔUŨ D U Ũ Ũ D U cond Lε cond LcondŨ ε Combining the inequality ΔU ΔUŨ D U D Ũ and the above inequality, U we obtain 40 and then 41 Remark 43 For some choices of norms, we can remove the scaling matrices in Theorem 4 From Lemma 1 we observe that if we take the 1-norm or -norm, the bounds 418, 419, 40, and 41 can be written as p =1, ΔL p L p 1+ L L L p L p cond p Ũ n ε 1 4cond p Lcond p Ũ ε L L L p cond p Ũ n L ε, p Ũ Ũ Ũ p ΔU cond p Lε p Ũ Ũ p p cond p Lcond p Ũ ε Ũ Ũ Ũ p Ũ cond p Lε p
15 PERTURBATION BOUNDS OF SOME MATRIX FACTORIZATIONS 855 under the condition cond p Lcond p Ũ ε < 1/4 In [5] the following first-order bounds were derived with being a consistent absolute matrix norm: ΔL L ΔU Ũ L L L condũ n L ε + Oε, Ũ Ũ Ũ Ũ cond Lε + Oε We can see the obvious relation between these first-order bounds and the rigorous bounds derived above when the 1-norm and -norm are used For estimation of the perturbation bounds, we refer to [5] We would like to point out that to our knowledge there are no optimal or nearly optimal first-order bounds or rigorous bounds derived by the matrix-vector equation approach in the literature To see how partial pivoting, rook pivoting, and complete pivoting affect the firstorder bounds in 48 and 49, we refer to [5, section 4] 5 QR factorization In this section we consider the perturbation of the R- factor of the QR factorization of A As we do not have any new result concerning the Q-factor, we will not consider it First we present rigorous perturbation bounds when the given matrix A has a general normwise perturbation Theorem 51 Let A R m n be of full column rank with QR factorization A = QR, whereq R m n has orthonormal columns and R R n n is upper triangular with positive diagonal entries If the perturbation matrix ΔA R m n satisfies 51 κ A ΔA F < 3/ 1, A then A +ΔA has a unique QR factorization 5 A +ΔA =Q +ΔQR +ΔR, where, with ρ D defined in 7, R infd Dn ρ D κ D R Q T ΔA F A κ A ΔA F A + κ A ΔA F A κ A ΔA F A 3 infd Dn ρ D κ D R ΔA F A κ A ΔA F A κ A ΔA F A 6+ 3 inf ρ D κ D ΔA F R D D n A Proof Notice that for any t [0, 1], Q T A + tδa = RI + tr Q T ΔA = RI + ta ΔA and A ΔA < 1 by 51 Thus Q T A + tδa is nonsingular, and then A + tδa has full column rank and has the unique QR factorization 56 A + tδa =Q +ΔQtR +ΔRt, which, with ΔQ1 = ΔQ and ΔR1 = ΔR, gives 5 From 56, we obtain R T ΔRt+ΔRt T R = tr T Q T ΔA + tδa T QR + t ΔA T ΔA ΔRt T ΔRt
16 856 X-W CHANG AND D STEHLÉ Multiplying the above by R T from left and R from right, we obtain R T ΔRt T +ΔRtR = tq T ΔAR + tr T ΔA T Q + R T t ΔA T ΔA ΔRt T ΔRt R Since ΔRR is upper triangular, it follows that 57 ΔRtR =up [ tq T ΔAR + tr T ΔA T Q + R T t ΔA T ΔA ΔRt T ΔRt R ] Thus, by 6, the quantity ΔRtR F verifies ΔRtR F 1 t R Q T ΔA F + t R ΔA F + ΔRtR F Itcaneasilybeverifiedthat1 4t R Q T ΔA F t R ΔA F > 0when R ΔA F < 3/ 1, which is equivalent to the condition 51 The condition of Lemma is thus satisfied and we can apply it, with xt = ΔRtR F,toget 58 ΔRR F R Q T ΔA F R ΔA F For any D D n, we have from 57 with t =1that ΔRR D =up [ Q T ΔAR D+D DR T ΔA T QD ] 59 +up [ R T ΔA T ΔA ΔR T ΔR R D ] Then, by 7, it follows that ΔRR D F ρ D Q T ΔA F R D + R ΔA F R D + ΔRR F ΔRR D F Therefore, using 58 and the fact that ρ D 1 see 7, we obtain 510 ΔRR ρd R D Q T ΔA F + R ΔA F D F R ΔA F R ΔA F Combining the inequality ΔRR D F D R and the above inequality we obtain 53 Since Q T ΔA F ΔA F and 51 holds, 54 follows from 53 Then 55 is obtained Remark 51 In [9] the following first-order bound was derived by the refined matrix equation approach: 511 R Q inf ρ D κ D T ΔA F R D D n A ΔA + O F A Some practice choices of D were given in [9] to estimate the above bound This first-order bound 511 has some similarity to 53 But if Q T ΔA = 0 ie, ΔA lies in the orthogonal complement of the range of A, then this first-order bound becomes useless, but the rigorous bound 53 clearly shows how R is sensitive to the perturbation ΔA Numerical experiments have indicated that this first-order bound is
17 PERTURBATION BOUNDS OF SOME MATRIX FACTORIZATIONS 857 a good approximation to the optimal first-order bound derived by the matrix-vector equation approach in [9]: κ R A QT ΔA F ΔA + O F R A A, where 1 κ R A inf D D n ρ D κ D R The expression of κ R A involvesan nn+1 nn+1 lower triangular matrix and an nn+1 n matrix defined by the entries of R and is expensive to estimate If the standard column pivoting strategy is used in computing the QR factorization, the quantity inf D Dn ρ D κ D R can be bounded by a function of n; see [9, sections 5 and 6] Remark 5 The following rigorous bound was derived in [0] by the classic matrix equation approach: κ A ΔA F A 51 R 1 κ A ΔA A under the condition κ A ΔA / A < 1, which is a little less restrictive than 51 Note that if D = I, thenρ D κ D R= κ A If R has bad row scaling and the -norm of its rows decreases from the top to bottom, then inf D Dn ρ D κ D R canbemuchsmallerthanκ A For example, for R = diagγ,1 with large γ, ρ D κ D R = Θ1 with D = diagγ,1, κ A =κ R =Θγ Thus the new rigorous bounds can be much tighter than 51 Here we would like to point out that to our knowledge there are no rigorous bounds derived by the matrix-vector equation approach in the literature For the componentwise perturbation ΔA which has the form of backward error we could expect from a standard QR factorization algorithm, the analysis has been done in [10] For completeness, we give the result here, without a proof Theorem 5 Let A R m n be of full column rank with QR factorization A = QR, whereq R m n has orthonormal columns and R R n n is upper triangular with positive diagonal entries Let ΔA R m n be a perturbation matrix in A such that 513 ΔA εc A, C R m m, 0 c ij 1, ε a small constant If 514 cond R ε< 3/ 1 m n, then A +ΔA has a unique QR factorization 515 A +ΔA =Q +ΔQR +ΔR, where, with ρ D defined in 7, 516 R 6mn 1/ inf D D n ρ D R R D D R R ε
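As an illustration of the componentwise setting of Theorem 5.2 (the case $C = I$ in (5.13), i.e., $|\Delta A|\le\varepsilon|A|$), the following Python/NumPy sketch, written for this transcription with names of our own choosing, measures the R-factor perturbation empirically and checks its first-order (linear-in-$\varepsilon$) behaviour; it does not evaluate the bound (5.16) itself.

```python
import numpy as np

def qr_pos(A):
    """QR factorization normalized so that R has positive diagonal entries,
    the convention used in Theorems 5.1 and 5.2."""
    Q, R = np.linalg.qr(A)
    s = np.sign(np.diag(R))
    return Q * s, s[:, None] * R

rng = np.random.default_rng(1)
m, n = 8, 5
A = rng.standard_normal((m, n))
E = np.sign(rng.standard_normal((m, n)))     # fixed sign pattern for dA
_, R = qr_pos(A)

def rel_dR(eps):
    # componentwise perturbation |dA| = eps * |A|  (C = I in (5.13))
    _, Rp = qr_pos(A + eps * E * np.abs(A))
    return np.linalg.norm(Rp - R, "fro") / np.linalg.norm(R, 2)

r6, r7 = rel_dR(1e-6), rel_dR(1e-7)
# Shrinking eps tenfold shrinks ||dR|| about tenfold (first-order behaviour).
assert 5.0 < r6 / r7 < 20.0
```

Such an empirical probe is useful as a sanity check, but, as the paper emphasizes, only a rigorous bound such as (5.16) can certify the accuracy of a computed R-factor.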
The assumption (5.13) on the perturbation ΔA can essentially handle two special cases. First, there is a small relative componentwise perturbation in A, i.e., |ΔA| ≤ ε|A|. Second, there is a small relative columnwise perturbation in A, i.e., ‖ΔA:,j‖ ≤ ε‖A:,j‖ for 1 ≤ j ≤ n; see, e.g., [7]. The second case may arise when ΔA is the backward error of the QR factorization computed by a standard algorithm; see [14, Chap. 19]. For practical choices of D to estimate the bound (5.16), we refer to [7, 10].

6. Summary. We have presented new rigorous normwise perturbation bounds for the Cholesky, LU, and QR factorizations with normwise and componentwise perturbations in the given matrix, by using a hybrid of the classic and refined matrix equation approaches. Each of the new rigorous perturbation bounds is a small constant multiple of the corresponding first-order perturbation bound obtained by the refined matrix equation approach in the literature, and can be estimated efficiently. These new bounds can be much tighter than the existing rigorous bounds obtained by the classic matrix equation approach, while the conditions for the former to hold are almost as moderate as the conditions for the latter to hold.

Acknowledgments. The authors thank Gilles Villard for early discussions on this work. They are grateful to the referees for their very helpful suggestions, which improved the presentation of the paper.

REFERENCES

[1] A. Barrlund, Perturbation bounds for the LDL^H and the LU factorizations, BIT.
[2] R. Bhatia, Matrix factorizations and their perturbations, Linear Algebra Appl.
[3] X.-W. Chang, Perturbation Analysis of Some Matrix Factorizations, Ph.D. thesis, Computer Science, McGill University, Montreal, Canada, February 1997.
[4] X.-W. Chang, Perturbation analyses for the Cholesky factorization with backward rounding errors, in Proceedings of the Workshop on Scientific Computing, Hong Kong, 1997.
[5] X.-W. Chang, Some features of Gaussian elimination with rook pivoting, BIT, 42 (2002).
[6] X.-W. Chang and C. C. Paige, On the sensitivity of the LU factorization, BIT.
[7] X.-W. Chang and C. C. Paige, Componentwise perturbation analyses for the QR factorization, Numer. Math., 88 (2001), pp. 319–345.
[8] X.-W. Chang, C. Paige, and G. Stewart, New perturbation analyses for the Cholesky factorization, IMA J. Numer. Anal.
[9] X.-W. Chang, C. Paige, and G. Stewart, Perturbation analyses for the QR factorization, SIAM J. Matrix Anal. Appl.
[10] X.-W. Chang, D. Stehlé, and G. Villard, Perturbation analysis of the QR factor R in the context of LLL lattice basis reduction, submitted. Available at ens-lyon.fr/damien.stehle/qrperturb.html.
[11] J. W. Demmel, On floating point errors in Cholesky, Technical Report CS-89-87, Department of Computer Science, University of Tennessee, Knoxville, TN, 1989, 6 pages. LAPACK Working Note 14.
[12] F. M. Dopico and J. M. Molera, Perturbation theory for factorizations of LU type through series expansions, SIAM J. Matrix Anal. Appl., 27 (2005).
[13] Z. Drmač, M. Omladič, and K. Veselić, On the perturbation of the Cholesky factorization, SIAM J. Matrix Anal. Appl.
[14] N. J. Higham, Accuracy and Stability of Numerical Algorithms, 2nd ed., Society for Industrial and Applied Mathematics, Philadelphia, PA, 2002.
[15] I. Morel, G. Villard, and D. Stehlé, H-LLL: Using Householder inside LLL, in Proceedings of ISSAC, Seoul, Korea, 2009, pp. 271–278.
[16] P. Q. Nguyen and D. Stehlé, An LLL algorithm with quadratic complexity, SIAM J. Comput.
[17] G. W. Stewart, Perturbation bounds for the QR factorization of a matrix, SIAM J. Numer. Anal.
[18] G. W. Stewart, On the perturbation of LU, Cholesky, and QR factorizations, SIAM J. Matrix Anal. Appl.
[19] G. W. Stewart, On the perturbation of LU and Cholesky factors, IMA J. Numer. Anal., pp. 1–6.
[20] J.-G. Sun, Perturbation bounds for the Cholesky and QR factorizations, BIT.
[21] J.-G. Sun, Rounding-error and perturbation bounds for the Cholesky and LDL^T factorizations, Linear Algebra Appl.
[22] J.-G. Sun, Componentwise perturbation bounds for some matrix decompositions, BIT, 32 (1992).
[23] J.-G. Sun, On the perturbation bounds for the QR factorization, Linear Algebra Appl.
[24] A. van der Sluis, Condition numbers and equilibration of matrices, Numer. Math., pp. 14–23.
[25] H. Zha, A componentwise perturbation analysis of the QR decomposition, SIAM J. Matrix Anal. Appl.
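The columnwise perturbation model discussed in section 5 can be observed numerically. The following sketch is an illustration, not part of the paper; it assumes NumPy, whose np.linalg.qr computes a Householder QR via LAPACK. It checks that the observable backward error E = QR - A is small relative to each column of A, even when the column norms of A vary over many orders of magnitude, which is exactly the situation where a columnwise bound is much stronger than a normwise one.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 60, 40

# Columns scaled over eight orders of magnitude: a columnwise bound
# ||E[:, j]|| <= c*u*||A[:, j]|| then says far more than the normwise
# bound ||E|| <= c*u*||A||, which is dominated by the largest columns.
A = rng.standard_normal((m, n)) * 10.0 ** rng.uniform(-8.0, 0.0, n)

# Householder QR (LAPACK under the hood); E plays the role of Delta A.
Q, R = np.linalg.qr(A)
E = Q @ R - A

u = np.finfo(float).eps
col_ratio = np.linalg.norm(E, axis=0) / np.linalg.norm(A, axis=0)

# Each per-column ratio is expected to be a modest multiple of m*u,
# independently of the column scaling.
print("worst column ratio:", col_ratio.max())
print("reference scale m*u:", m * u)
```

A plain normwise ratio ‖E‖/‖A‖ would reveal nothing about the tiny columns; the per-column ratios show that the Householder backward error respects the column scaling of A, consistent with the columnwise model ‖ΔA:,j‖ ≤ ε‖A:,j‖.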