Numerische Mathematik
Numer. Math. (2001) 88: Digital Object Identifier (DOI) /s

Componentwise perturbation analyses for the QR factorization

Xiao-Wen Chang, Christopher C. Paige

School of Computer Science, McGill University, Montreal, Quebec, Canada H3A 2A7; e-mail: chang@cs.mcgill.ca, paige@cs.mcgill.ca

Received April 11, 1999 / Published online October 16, 2000 (c) Springer-Verlag 2000

Summary. This paper gives componentwise perturbation analyses for Q and R in the QR factorization A = QR, Q^T Q = I, R upper triangular, for a given real m×n matrix A of rank n. Such specific analyses are important for example when the columns of A are badly scaled. First order perturbation bounds are given for both Q and R. The analyses more accurately reflect the sensitivity of the problem than previous such results. The condition number for R is bounded for a fixed n when the standard column pivoting strategy is used. This strategy also tends to improve the condition of Q, so usually the computed Q and R will both have higher accuracy when we use the standard column pivoting strategy. Practical condition estimators are derived. The assumptions on the form of the perturbation ΔA are explained and extended. Weaker rigorous bounds are also given.

Mathematics Subject Classification (1991): 15A23, 65F35

1 Introduction

The QR factorization is an important tool in matrix computations (see for example [6, Chap. 5]): given an m×n real matrix A with full column rank, there exists a unique m×n real matrix Q with orthonormal columns, and a unique nonsingular upper triangular n×n real matrix R with positive diagonal entries such that A = QR.

This research was supported by NSERC of Canada Grants RGPIN for Xiao-Wen Chang, and OGP for Chris Paige. Correspondence to: Xiao-Wen Chang
The matrix Q is referred to as the orthogonal factor, and R the triangular factor. Let ΔA be an m×n real matrix such that A + ΔA still has full column rank; then A + ΔA has the unique QR factorization A + ΔA = (Q + ΔQ)(R + ΔR), where (Q + ΔQ)^T (Q + ΔQ) = I and R + ΔR is upper triangular with positive diagonal elements. The goal of the perturbation analysis for the QR factorization is to determine bounds on ΔQ (or ‖ΔQ‖) and ΔR (or ‖ΔR‖) in terms of (a bound on) ΔA (or ‖ΔA‖), where for a matrix C = (c_ij), |C| is defined by (|c_ij|).

The perturbation analysis for the QR factorization has been considered by several authors. Given (a bound on) ‖ΔA‖, the first results were presented by Stewart [12]. Analyses based on bounds on ‖ΔA‖ are sometimes called normwise or norm-based perturbation analyses. Stewart's results were modified and improved by Sun [14]. Similar results to those of Sun [14] were obtained by Stewart [13] by a different approach. Later Sun [16] gave new strict perturbation bounds for Q alone. More recently Chang et al. [4] gave new first-order perturbation analyses using the so-called refined matrix equation and matrix-vector equation approaches. Analyses based on bounds on |ΔA| have been called componentwise analyses. Given a bound on |ΔA|, Sun [15] presented strict but somewhat complicated bounds on ΔQ and ΔR. In [18], Zha considered the following class of perturbations:

(1.1) |ΔA| ≤ ɛC|A|; C ∈ ℝ^{m×m}, 0 ≤ c_ij ≤ 1,

and presented first-order perturbation bounds on ΔQ and ΔR. An important motivation for considering such a class of perturbations is that the equivalent backward rounding error from a rounding error analysis of the standard QR factorization fits in this class, see Higham [7, Chap. 18] and the last paragraph of Sect. 2 here. The main purpose of this paper is to establish new first-order perturbation analyses under the condition (1.1). The perturbation bounds that are derived here are significantly sharper than the equivalent results in Zha [18, Theorem 2.1].
Simple rigorous perturbation bounds are also presented. Thus the present paper will, among other things, increase our understanding of the errors we can expect in computing Q and R in A = QR. In Sect. 2 we discuss the generality of the class of perturbations (1.1), how this class may be extended, and how the equivalent backward rounding error for the Householder QR factorization belongs to this class. In Sect. 3 we define our notation. In Sect. 4 we obtain expressions for Q̇(0) and Ṙ(0) in the QR factorization A + tG = Q(t)R(t). These basic sensitivity expressions will be used to obtain our new perturbation bounds in Sects. 7 and 8,
but in Sect. 5 they are used to derive simple 2- and F-norm versions of Zha's results [18, Theorem 2.1] on the sensitivity of R and Q. Section 6 derives basic rigorous bounds that will help us understand some of the more refined first-order bounds. In Sect. 7 we give a refined perturbation analysis for Q, showing why the standard column pivoting strategy for A can be beneficial for certain aspects of the sensitivity of Q. In Sect. 8 we analyze the perturbation in R by the matrix-vector equation approach, then we combine this with the matrix equation approach to get useful estimates. The ideas behind these two approaches were discussed in the norm-based perturbation analysis for the QR factorization [4]. Here these approaches show that the sensitivity of R can be significantly improved by pivoting. We give numerical results and suggest practical condition estimators in Sect. 9. We summarize our findings and point out possible future work in Sect. 10.

2 The class of perturbations, and rounding effects

We now discuss the generality of the assumption (1.1). Taking C = I in (1.1) gives bounds on each element Δa_ij of the form |ΔA| ≤ ɛ|A|, which covers the case of small relative errors in the elements. Now suppose that we only have the column information ‖Δa_j‖_1 ≤ ɛ‖a_j‖_1, j = 1,…,n. This implies |Δa_ij| ≤ ‖Δa_j‖_1 ≤ ɛ e^T|a_j| with e = [1, 1,…,1]^T, which implies (1.1) with C = ee^T. Similarly (1.1) with C = ee^T implies ‖Δa_j‖_1 ≤ ɛm‖a_j‖_1. Since for any v ∈ ℝ^m, ‖v‖_∞ ≤ ‖v‖_1 ≤ m^{1/2}‖v‖_2 ≤ m^{1/2}‖v‖_1 ≤ m^{3/2}‖v‖_∞, etc., we see that (1.1) essentially handles any information of the form

(2.1) ‖Δa_j‖_p ≤ ɛ‖a_j‖_p, j = 1,…,n, p = 1, 2, ∞.

Thus (1.1) is an elegant way of handling most bounds on the elements or the columns of ΔA.
However to cover cases where some columns of ΔA have different relative bounds than others, as might happen when the columns of A are obtained by experimental observation at different times or with different instruments, we can extend (1.1) to

(2.2) |ΔA| ≤ ɛC|A|D; C ∈ ℝ^{m×m}, 0 ≤ c_ij ≤ 1; D = diag(δ_1,…,δ_n) > 0.
This then includes the extension of (2.1)

(2.3) ‖Δa_j‖_p ≤ ɛδ_j‖a_j‖_p, j = 1,…,n.

For the QR factorization A = QR, (2.2) leads to |ΔA| ≤ ɛC|Q·R|D (for clarity · indicates multiplication), and where Zha [18] considered ‖ |R||R^{-1}| ‖_p, which is independent of column scaling, we can define the (extended) condition number

(2.4) cond_p(R, D) ≡ ‖ |R|D|R^{-1}| ‖_p, p = 1, 2, ∞.

This condition number is also independent of column scaling, and can be arbitrarily smaller than ‖D‖_p ‖ |R||R^{-1}| ‖_p. For brevity we give the analysis without D and assume (1.1) only, but all the results can trivially be extended to changes in A of the form (2.2).

The equivalent backward error for a numerically stable QR factorization is important for this exposition. For an m×n matrix A of rank n let A = QR be the exact, and A ≈ Q_c R_c the computed QR factorization of A obtained via Householder transformations. Higham [7, Theorem 18.4] showed

(2.5) A + ΔA = Q̃R_c, |ΔA| ≤ ɛC|A|, ɛ = γ_{m,n}u,

where Q̃^T Q̃ = I, γ_{m,n} is a moderate constant depending on m and n, u is the unit roundoff, C ≥ 0 and ‖C‖_F = 1. The bound on ΔA has the form (1.1), so the perturbation analyses here will allow us to obtain good bounds on the errors Q̃ − Q and R_c − R. Also the computed Q_c satisfies Q_c = Q̃ + ΔQ̃, where |ΔQ̃| ≤ γ_{m,n}uC_2|Q̃| with C_2 ≥ 0, ‖C_2‖_F = 1. Since Q_c − Q = ΔQ̃ + (Q̃ − Q) we have

(2.6) ‖Q_c − Q‖_F ≤ n^{1/2}γ_{m,n}u + ‖Q̃ − Q‖_F.

For the whole of this paper we assume perturbations satisfying (1.1).

3 Notation

In this paper, for any matrix X ∈ ℝ^{m×n}, we denote by (X)_{i,:} the ith row of X, and by (X)_{:,j} the jth column of X. For any nonsingular matrix X we define

(3.1) κ_2(X) ≡ ‖X‖_2 ‖X^{-1}‖_2, cond_2(X) ≡ ‖ |X||X^{-1}| ‖_2.

Notice ‖ |X^{-1}||X| ‖ is the standard Bauer-Skeel condition number, but the present variant seems more intuitive for column scaling.
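The distinction drawn in (3.1) is easy to see numerically: cond_2 is invariant under column scaling X → XD (since |XD||(XD)^{-1}| = |X|D D^{-1}|X^{-1}| = |X||X^{-1}|), while κ_2 is not. A Python/NumPy sketch, with a made-up triangular matrix and scaling:

```python
import numpy as np

def kappa2(X):
    # kappa_2(X) = ||X||_2 ||X^{-1}||_2, the usual condition number (3.1).
    return np.linalg.norm(X, 2) * np.linalg.norm(np.linalg.inv(X), 2)

def cond2(X):
    # cond_2(X) = || |X||X^{-1}| ||_2, the Bauer-Skeel-type variant in (3.1).
    return np.linalg.norm(np.abs(X) @ np.abs(np.linalg.inv(X)), 2)

rng = np.random.default_rng(1)
R = np.triu(rng.standard_normal((4, 4)), 1) + np.diag([4.0, 3.0, 2.0, 1.0])
D = np.diag([1.0, 1e4, 1e-3, 10.0])   # a crude, badly scaled column scaling

# cond_2 is invariant under column scaling; kappa_2 blows up with it.
assert abs(cond2(R) - cond2(R @ D)) <= 1e-8 * cond2(R)
assert kappa2(R @ D) > 100 * kappa2(R)
print(kappa2(R), kappa2(R @ D), cond2(R))
```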
For an m×n matrix Q such that Q^T Q = I we can find Q̄ such that [Q, Q̄] is square and orthogonal, then define

(3.2) η_1 ≡ ‖ |Q^T|C|Q| ‖_F, η_2 ≡ ‖ |Q̄^T|C|Q| ‖_F, η_3 ≡ ‖ C|Q| ‖_F.

Since ‖ |Q^T| ‖_F = ‖ |Q| ‖_F = n^{1/2}, ‖ |Q̄^T| ‖_F = (m−n)^{1/2}, and in (1.1) ‖C‖_F ≤ m, if we use the fact that ‖AB‖_F ≤ ‖A‖_F ‖B‖_F we obtain η_1 ≤ mn, η_2 ≤ ((m−n)n)^{1/2}m, η_3 ≤ mn^{1/2}.

To simplify the presentation, for any n×n matrix X, we define the upper and lower triangular matrices

(3.3) up(X) ≡ the upper triangular matrix with entries (1/2)x_11, x_12, …, x_1n; (1/2)x_22, …, x_2n; …; (1/2)x_nn (that is, the strictly upper triangular part of X plus half its diagonal), low(X) ≡ [up(X^T)]^T,

so that X = low(X) + up(X). For any n×n (n > 1) matrix X and positive definite D = diag(δ_1,…,δ_n), we can show (for a proof, see Lemma 5.1 in [4]; it is straightforward by considering elements)

(3.4) ‖up(X) + D^{-1}up(X^T)D‖_F ≤ ρ_D ‖X‖_F, ρ_D ≡ [1 + max_{1≤i<j≤n}(δ_j/δ_i)^2]^{1/2}.

In particular with D = I,

(3.5) ‖up(X + X^T)‖_F ≤ √2 ‖X‖_F; ‖up(X)‖_F ≤ (1/√2)‖X‖_F if X = X^T.

It is also easy to see that for any n×n (n > 1) matrix X

(3.6) ‖X − up(X + X^T)‖_F = ‖low(X) − [low(X)]^T‖_F ≤ √2 ‖X‖_F.

When n = 1, we have the following equalities: ‖up(X) + D^{-1}up(X^T)D‖_F = ‖up(X + X^T)‖_F = 2‖up(X)‖_F = ‖X‖_F, ‖X − up(X + X^T)‖_F = 0. In this paper we assume that the matrix A has more than one column, i.e., n > 1. The case n = 1 is trivial and straightforward bounds can be derived by using these last equalities.
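The splitting (3.3) and the inequalities (3.5)-(3.6) drive most of the later norm manipulations, so it is worth seeing them hold on a random example. A Python/NumPy sketch (random X is made up; the paper used MATLAB):

```python
import numpy as np

def up(X):
    # up(X): strict upper triangle of X plus half of its diagonal, as in (3.3).
    return np.triu(X, 1) + 0.5 * np.diag(np.diag(X))

def low(X):
    # low(X) = [up(X^T)]^T, so that X = low(X) + up(X).
    return up(X.T).T

rng = np.random.default_rng(2)
X = rng.standard_normal((5, 5))
nrm = np.linalg.norm   # Frobenius norm by default for matrices

assert np.allclose(low(X) + up(X), X)                        # splitting (3.3)
assert nrm(up(X + X.T)) <= np.sqrt(2) * nrm(X) + 1e-12       # (3.5), first part
S = X + X.T                                                  # a symmetric matrix
assert nrm(up(S)) <= nrm(S) / np.sqrt(2) + 1e-12             # (3.5), second part
assert np.allclose(X - up(X + X.T), low(X) - low(X).T)       # identity in (3.6)
assert nrm(low(X) - low(X).T) <= np.sqrt(2) * nrm(X) + 1e-12 # bound in (3.6)
print("identities (3.3), (3.5), (3.6) verified")
```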
4 Rate of change of Q and R

Here we derive the basic results on how Q and R change as A changes. The following theorem summarizes several results that we use later.

Theorem 4.1 Let A ∈ ℝ^{m×n} be of full column rank n with the QR factorization A = QR, and let ΔA be a real m×n matrix satisfying

(4.1) ΔA = ɛG; ɛ ≥ 0, |G| ≤ C|A|, C ∈ ℝ^{m×m}, 0 ≤ c_ij ≤ 1.

Let A† denote the Moore-Penrose inverse of A. If

(4.2) ɛ ‖ |A†|C|A| ‖_2 < 1,

then A + tG has the unique QR factorization

(4.3) A(t) ≡ A + tG = Q(t)R(t), Q^T(t)Q(t) = I, |t| ≤ ɛ,

where

(4.4) R^T Ṙ(0) + Ṙ^T(0) R = R^T Q^T G + G^T Q R,
(4.5) Ṙ(0) = up[Q^T G R^{-1} + (Q^T G R^{-1})^T] R,
(4.6) Q̇(0) = G R^{-1} − Q up[Q^T G R^{-1} + (Q^T G R^{-1})^T].

In particular, A + ΔA has the unique QR factorization

(4.7) A + ΔA = (Q + ΔQ)(R + ΔR),

where ΔR and ΔQ satisfy

(4.8) ΔR = ɛṘ(0) + O(ɛ^2),
(4.9) ΔQ = ɛQ̇(0) + O(ɛ^2).

Proof. Since ‖X‖_2 ≤ ‖ |X| ‖_2, if (4.2) holds, then for all |t| ≤ ɛ, ‖tA†G‖_2 ≤ ɛ‖ |A†|C|A| ‖_2 < 1. Also from

Q^T(A + tG) = R + tQ^T G = R(I + tR^{-1}Q^T G) = R(I + tA†G)

we see that Q^T(A + tG) is nonsingular. Hence for all |t| ≤ ɛ, A + tG has full column rank and the unique QR factorization (4.3). Taking t = ɛ shows that (4.7) is unique, and then R(0) = R, R(ɛ) = R + ΔR, Q(0) = Q and Q(ɛ) = Q + ΔQ. It is easy to verify that Q(t) and R(t) are twice continuously differentiable for |t| ≤ ɛ from a standard algorithm for the QR factorization. If we differentiate R(t)^T R(t) = A(t)^T A(t) with respect to t and set t = 0, and use A = QR, we obtain (4.4). This we will see is a linear equation uniquely
defining the elements of upper triangular Ṙ(0) in terms of the elements of Q^T G. From upper triangular Ṙ(0)R^{-1} in

Ṙ(0)R^{-1} + (Ṙ(0)R^{-1})^T = Q^T G R^{-1} + (Q^T G R^{-1})^T,

we see with (3.3) that (4.5) holds. Differentiating (4.3) at t = 0 gives G = QṘ(0) + Q̇(0)R, which with (4.5) gives (4.6). Finally the Taylor expansions for R(t) and Q(t) about t = 0 give (4.8) and (4.9) at t = ɛ.

The perturbation ΔA in (4.1) satisfies (1.1), and we will always assume (4.2) holds, so the results of this theorem will apply for the rest of this paper.

5 Zha's first-order bounds

We can use the notation of (3.1) and (3.2) to derive combined 2-norm and F-norm results which are analogous to Zha's [18] first-order consistent monotone norm results, but are a little simpler in form and derivation. We then give examples to show how these can be too pessimistic. From (4.5), we have by using (3.5) and (4.1) that

‖Ṙ(0)‖_F ≤ ‖up[Q^T G R^{-1} + (Q^T G R^{-1})^T]‖_F ‖R‖_2 ≤ √2 ‖Q^T G R^{-1}‖_F ‖R‖_2 ≤ √2 ‖ |Q^T|C|A||R^{-1}| ‖_F ‖R‖_2 ≤ √2 ‖ |Q^T|C|Q||R||R^{-1}| ‖_F ‖R‖_2 ≤ √2 η_1 cond_2(R) ‖R‖_2.

Similarly, from (4.6), (3.6), (4.1), if [Q, Q̄] is square and orthogonal,

‖Q̇(0)‖_F^2 = ‖Q^T Q̇(0)‖_F^2 + ‖Q̄^T Q̇(0)‖_F^2 = ‖Q^T G R^{-1} − up[Q^T G R^{-1} + (Q^T G R^{-1})^T]‖_F^2 + ‖Q̄^T G R^{-1}‖_F^2 ≤ 2‖Q^T G R^{-1}‖_F^2 + ‖Q̄^T G R^{-1}‖_F^2 ≤ 2‖G R^{-1}‖_F^2,

‖Q̇(0)‖_F ≤ √2 ‖C|Q||R||R^{-1}|‖_F ≤ √2 η_3 cond_2(R).

Thus with (4.8) and (4.9) we get the following bounds:

(5.1) ‖ΔR‖_F/‖R‖_2 ≤ η_1 φ(A) ɛ + O(ɛ^2),
(5.2) ‖ΔQ‖_F ≤ η_3 φ(A) ɛ + O(ɛ^2),
(5.3) φ(A) ≡ √2 cond_2(R).

Apart from the multipliers η_1 and η_3 (see also (6.6), (6.5)), φ(A) can be thought of as (an upper bound on) the condition number for both Q and R (for small enough ΔA) when we use the combination of 2 and F norms. The
constant √2 involved in the definition of φ(A) may be removed, but we keep it here since it is useful for comparison with the modified results to be given in Sect. 8. Notice that φ(A) is invariant under any column scaling of A. This is a significant improvement on the normwise perturbation results published before [4] when the perturbation ΔA satisfies (1.1), but sometimes these perturbation bounds do not reflect the true sensitivity of the QR factorization very well, as we see from the following example.

Example 5.1 Consider the following two matrices (the first one is quoted from [18], the new one gives even worse results):

(5.4) A_1 = 1 1 [ ], A_2 =

Computing the QR factorization of A_1 and A_2 in MATLAB with unit roundoff u (all our computations were performed in MATLAB 5.2 on a Pentium-II running LINUX), we obtained the following computed factors, shown here to 5 figures (to make the diagonal elements of R_1c and R_2c positive, some signs have been altered):

e e 06 ] Q_1c = e e+00, R_1c =[, e e e 06 [ e e 01 Q_2c = e e 01 ], R_2c = [ e e 10 ].

These have errors

(5.5) e_Q1 = ‖Q_1c − Q_1‖_F, e_R1 = ‖R_1c − R_1‖_F, e_Q2 = ‖Q_2c − Q_2‖_F, e_R2 = ‖R_2c − R_2‖_F,

where A_1 = Q_1R_1 and A_2 = Q_2R_2 are the exact QR factorizations. The condition numbers (5.3) are

(5.6) φ(A_1) ≈ , φ(A_2) ≈ .

MATLAB computes the QR factorization using Householder transformations. Comparing (2.5) with (4.7) we see ΔQ = Q̃ − Q and ΔR = R_c − R, so (2.6) and (5.2) with ɛ = γ_{m,n}u in (2.5) show that

(5.7) ‖Q_c − Q‖_F ≤ γ'_{m,n} φ(A) u + O(u^2),
where γ'_{m,n} is a moderate constant depending on m and n. Finally (5.1) with ɛ = γ_{m,n}u in (2.5) shows that

(5.8) ‖R_c − R‖_F/‖R‖_2 ≤ γ'_{m,n} φ(A) u + O(u^2),

where again γ'_{m,n} is a moderate constant depending on m and n. For more details of such arguments, see [7, pp. 373-377, 382] and [18]. From the computed results, we see for Q_1 the bound (5.7) matches the actual error e_Q1 in (5.5) very well, but for Q_2, R_1 and R_2 the bounds (5.7) and (5.8) are bad overestimates of the corresponding errors. Although the matrices (5.4) are contrived, they do represent a fairly common case when A has a very large condition number: each matrix has only one very small singular value. By choosing such examples with small dimensions we are able to illustrate the drawbacks of the bounds in [18] simply and directly, showing that it is necessary to obtain stronger perturbation bounds.

6 Rigorous bounds

Later we will derive tighter first-order bounds, but in order to explain some subtleties of these we first obtain some simple but weak rigorous bounds. From the QR factorization (4.7), with A = QR,

R^TΔR + ΔR^TR + ΔR^TΔR = R^TQ^TΔA + ΔA^TQR + ΔA^TΔA.

Multiplying on the left by R^{-T} and the right by R^{-1} we see that

ΔRR^{-1} + (ΔRR^{-1})^T = Q^TΔAR^{-1} + R^{-T}ΔA^TQ + R^{-T}(ΔA^TΔA − ΔR^TΔR)R^{-1}.

Since ΔRR^{-1} is upper triangular, this gives with (3.3)

ΔRR^{-1} = up[Q^TΔAR^{-1} + R^{-T}ΔA^TQ + R^{-T}(ΔA^TΔA − ΔR^TΔR)R^{-1}].

Using (3.5) we obtain

‖ΔRR^{-1}‖_F ≤ √2‖Q^TΔAR^{-1}‖_F + (‖ΔAR^{-1}‖_F^2 + ‖ΔRR^{-1}‖_F^2)/√2,
(6.1) √2‖ΔRR^{-1}‖_F ≤ ‖ΔAR^{-1}‖_F(2 + ‖ΔAR^{-1}‖_F) + ‖ΔRR^{-1}‖_F^2.

Also from (Q + ΔQ)(R + ΔR) = QR + ΔA we see that

(6.2) ΔQR + (Q + ΔQ)ΔR = ΔA, ΔQ = ΔAR^{-1} − (Q + ΔQ)ΔRR^{-1}, ‖ΔQ‖_F ≤ ‖ΔAR^{-1}‖_F + ‖ΔRR^{-1}‖_F.
If we replace ΔA here by tG, |t| ≤ ɛ satisfying (4.2) as in Theorem 4.1, then (6.1) and (6.2) still hold with ΔQ and ΔR continuous functions of t. Let

ρ(t) ≡ ‖ΔRR^{-1}‖_F, δ(t) ≡ ‖tGR^{-1}‖_F, β(t) ≡ δ(t)(2 + δ(t)),

then ρ(0) = δ(0) = β(0) = 0, and from (6.1) ρ(t)(√2 − ρ(t)) ≤ β(t). Here the left hand side has its maximum of 1/2 with ρ(t) = 1/√2. If β(t) < 1/2 for |t| ≤ ɛ then the left hand side cannot attain its maximum, and so for |t| ≤ ɛ, ρ(t) < 1/√2. This means that √2 − ρ(t) > 1/√2, and

(6.3) ρ(t) ≤ β(t)/(√2 − ρ(t)) ≤ √2 β(t) = √2 δ(t)(2 + δ(t)), |t| ≤ ɛ.

But with δ(t) ≥ 0, β(t) ≡ δ(t)(2 + δ(t)) = (δ(t) + 1)^2 − 1 < 1/2 if and only if δ(t) < √(3/2) − 1, and the following rigorous bounds hold.

Theorem 6.1 Assume that the conditions and assumptions of Theorem 4.1 hold together with

(6.4) ɛ‖GR^{-1}‖_F = ‖ΔAR^{-1}‖_F < √(3/2) − 1.

Then A + ΔA = A + ɛG has the unique QR factorization A + ΔA = (Q + ΔQ)(R + ΔR), where with the notation of (3.1) and (3.2),

(6.5) ‖ΔQ‖_F ≤ (1 + √2 + √3) η_3 cond_2(R) ɛ,
(6.6) ‖ΔR‖_F/‖R‖_2 ≤ (√2 + √3) η_3 cond_2(R) ɛ.

Proof. Since |G| ≤ C|Q||R| from (4.1), (6.3) and (6.4) show

‖ΔRR^{-1}‖_F ≤ √2 ‖ΔAR^{-1}‖_F (2 + √(3/2) − 1) = (√2 + √3) ‖ΔAR^{-1}‖_F ≤ (√2 + √3) ɛ ‖C|Q||R||R^{-1}|‖_F ≤ (√2 + √3) η_3 cond_2(R) ɛ.

This result with (6.2) gives (6.5). Finally (6.6) follows since ‖ΔR‖_F ≤ ‖ΔRR^{-1}‖_F ‖R‖_2.

Remark 6.2 The bounds (6.5) and (6.6) are clearly the rigorous versions of the first order bounds (5.2) and (5.1), which were analogous to Zha's [18, Theorem 2.1] results. Thus (6.5) and (6.6) are just as weak as (5.2) and (5.1) were shown to be by Example 5.1.
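The derivative expressions (4.5)-(4.6) of Theorem 4.1 underlie both the first-order bounds of Sect. 5 and the rigorous bounds above, so it is reassuring to check them numerically. The following Python/NumPy sketch (random A and G are made up; the paper used MATLAB) verifies that Ṙ(0) satisfies the linearized equation (4.4) and compares (4.5)-(4.6) with central finite differences of the exact factors:

```python
import numpy as np

def up(X):
    # up(X) from (3.3): strict upper triangle plus half the diagonal.
    return np.triu(X, 1) + 0.5 * np.diag(np.diag(X))

def qr_pos(A):
    # QR factorization normalized so R has positive diagonal entries.
    Q, R = np.linalg.qr(A)
    s = np.sign(np.diag(R))
    return Q * s, s[:, None] * R

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))
G = rng.standard_normal((6, 4))   # any direction; (4.1) only restricts its size
Q, R = qr_pos(A)

S = Q.T @ G @ np.linalg.inv(R)
Rdot = up(S + S.T) @ R                          # (4.5)
Qdot = G @ np.linalg.inv(R) - Q @ up(S + S.T)   # (4.6)

# Rdot satisfies the linearized equation (4.4).
lhs = R.T @ Rdot + Rdot.T @ R
rhs = R.T @ (Q.T @ G) + (Q.T @ G).T @ R
assert np.allclose(lhs, rhs)

# Compare with central differences of the exact factors of A + tG.
t = 1e-5
Qp, Rp = qr_pos(A + t * G)
Qm, Rm = qr_pos(A - t * G)
assert np.linalg.norm((Rp - Rm) / (2 * t) - Rdot) < 1e-5
assert np.linalg.norm((Qp - Qm) / (2 * t) - Qdot) < 1e-5
print("(4.4)-(4.6) verified")
```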
7 Refined analysis for Q

The expression (5.2) gives an important overall bound on the change ΔQ in Q. But by looking at how ΔQ is distributed between 𝓡(Q), the range of Q, and its orthogonal complement, we will obtain better results. These show more clearly where any ill-conditioning lies. Take a matrix Q̄ ∈ ℝ^{m×(m−n)} such that [Q, Q̄] is square and orthogonal. Then from (4.9) we see that

ΔQ = ɛQQ^T Q̇(0) + ɛQ̄Q̄^T Q̇(0) + O(ɛ^2),

and the key is to bound the first term on the right separately from the second. Since Q is an orthonormal matrix, Q^T Q̇(0) = 0 when n = 1, and results involving Q^T Q̇(0) will only be nontrivial when n > 1.

For the part of Q̇(0) orthogonal to 𝓡(Q), we see from (4.6) that

(7.1) Q̄^T Q̇(0) = Q̄^T G R^{-1},

and combining this with (4.1) gives

‖Q̄Q̄^T Q̇(0)‖_F = ‖Q̄Q̄^T G R^{-1}‖_F ≤ ‖ |Q̄^T|C|Q| ‖_F ‖ |R||R^{-1}| ‖_2.

Thus with (3.2) and (3.1) we have Q̄Q̄^TΔQ = ɛQ̄Q̄^T Q̇(0) + O(ɛ^2),

(7.2) ‖Q̄Q̄^TΔQ‖_F ≤ η_2 cond_2(R) ɛ + O(ɛ^2),

and cond_2(R) can be regarded as the condition number for the part of ΔQ in 𝓡(Q̄). Note the similarity with (5.2).

Now we deal with that part of ΔQ lying in 𝓡(Q); first we show there is an important lower bound on ‖QQ^TΔQ‖_2. Since Q + ΔQ has orthonormal columns,

(Q + ΔQ)^T(Q + ΔQ) = I + Q^TΔQ + ΔQ^TQ + ΔQ^TΔQ = I,
(7.3) ‖ΔQ^TΔQ‖_2 = ‖ΔQ‖_2^2 = ‖Q^TΔQ + ΔQ^TQ‖_2 ≤ 2‖Q^TΔQ‖_2,

and we have the useful lower bound ‖QQ^TΔQ‖_2 = ‖Q^TΔQ‖_2 ≥ ‖ΔQ‖_2^2/2. To obtain a good upper bound, we will manipulate the equations to avoid using the triangle inequality (‖X + Y‖ ≤ ‖X‖ + ‖Y‖) etc. where possible. We see from (4.6) and (3.3) that

(7.4) Q^T Q̇(0) = Q^T G R^{-1} − up[Q^T G R^{-1} + (Q^T G R^{-1})^T] = low(Q^T G R^{-1}) − [low(Q^T G R^{-1})]^T,
which is skew symmetric with clearly zero diagonal. Now partition Q, G and R as follows:

Q = [Q_{n−1}, q], G = [G_{n−1}, g], R = [R_{n−1} r; 0 r_nn].

This allows us to rewrite (7.4) as

Q^T Q̇(0) = low([Q^T G_{n−1}R_{n−1}^{-1}, Q^T(−G_{n−1}R_{n−1}^{-1}r + g)/r_nn]) − {low([Q^T G_{n−1}R_{n−1}^{-1}, Q^T(−G_{n−1}R_{n−1}^{-1}r + g)/r_nn])}^T
(7.5) = low([Q^T G_{n−1}R_{n−1}^{-1}, 0]) − {low([Q^T G_{n−1}R_{n−1}^{-1}, 0])}^T.

From (4.1) it follows that

(7.6) |G_{n−1}| ≤ C|Q_{n−1}||R_{n−1}|,

and using (3.6), we have from (7.5) and (7.6) that

‖QQ^T Q̇(0)‖_F ≤ √2 ‖Q^T G_{n−1}R_{n−1}^{-1}‖_F ≤ √2 ‖ |Q^T|C|Q_{n−1}||R_{n−1}||R_{n−1}^{-1}| ‖_F ≤ √2 ‖ |Q^T|C|Q| ‖_F ‖ |R_{n−1}||R_{n−1}^{-1}| ‖_2.

This with (4.9), (3.2) and (3.1) gives our bound

(7.7) ‖QQ^TΔQ‖_F ≤ √2 η_1 cond_2(R_{n−1}) ɛ + O(ɛ^2).

If we define

(7.8) κ_Q(A) ≡ √2 cond_2(R_{n−1}),

then we can regard this as the condition number (for small enough ΔA) for that part of ΔQ in 𝓡(Q), and summarize the results for ΔQ.

Theorem 7.1 Suppose all the assumptions of Theorem 4.1 hold, and Q̄ ∈ ℝ^{m×(m−n)} is a matrix such that [Q, Q̄] is orthogonal. Then A + ΔA = A + ɛG has the unique QR factorization A + ΔA = (Q + ΔQ)(R + ΔR), such that

(7.9) ‖Q̄Q̄^TΔQ‖_F ≤ η_2 cond_2(R) ɛ + O(ɛ^2),
(7.10) (1/2)‖ΔQ‖_2^2 ≤ ‖QQ^TΔQ‖_F ≤ η_1 κ_Q(A) ɛ + O(ɛ^2).

If ɛ‖GR^{-1}‖_F = ‖ΔAR^{-1}‖_F < √(3/2) − 1 holds as well, then

(7.11) ‖ΔQ‖_F = [‖QQ^TΔQ‖_F^2 + ‖Q̄Q̄^TΔQ‖_F^2]^{1/2} ≤ (1 + √2 + √3) η_3 cond_2(R) ɛ.

Here η_1, η_2 and η_3 are defined in (3.2), cond_2(·) is defined in (3.1), and κ_Q(A) is defined by (7.8).
Proof. The unique QR factorization follows from Theorem 4.1, (7.9) is just (7.2), (7.10) follows from (7.3) and (7.7)-(7.8), while (7.11) is just (6.5) in Theorem 6.1, since (6.4) holds.

In some problems we are interested in the change in Q lying in 𝓡(Q), that is QQ^TΔQ. For example when A is square Q̄ is nonexistent, and the change in Q necessarily lies in 𝓡(Q). Theorem 7.1 shows the upper bound on ‖QQ^TΔQ‖_F can be smaller than was previously thought in for example [18], see (5.2). In particular if A has only one small singular value and its columns are not badly scaled (both matrices in Example 5.1 are of this form), and we use the standard column pivoting strategy in computing the QR factorization (see, for example, [6, p. 248]), then usually we will have cond_2(R_{n−1}) ≪ cond_2(R). For the two matrices in Example 5.1, the values of cond_2(R_{n−1}) are 1 and 1, while the values of cond_2(R) are about φ(A_1)/√2 and φ(A_2)/√2 (see (5.6)), with or without column pivoting. In some special cases standard column pivoting may not give cond_2(R_{n−1}) ≪ cond_2(R), for example the Kahan matrices (see [10]). For these a rank revealing pivoting strategy such as in [9] is required to obtain a significant improvement of cond_2(R_{n−1}) over cond_2(R), see the κ_Q(A) or κ_Q(AP) (≡ √2 cond_2(R_{n−1})) and φ(A) or φ(AP) (≡ √2 cond_2(R)) columns in Tables 9 and 10 of Sect. 9.

Now we return to the error e_Q2 in (5.5) for the example with A_2 in (5.4). When m = n, Q̄ does not exist, so (7.10) gives ‖ΔQ‖_F ≤ η_1 κ_Q(A) ɛ + O(ɛ^2), and by a similar argument to that for (5.7), we have for the MATLAB QR factorization

(7.12) ‖Q_c − Q‖_F ≤ γ'_{n,n} κ_Q(A) u + O(u^2).

For A_2, m = n = 2, so κ_Q(A_2) = √2 in (7.8). We see for Q_2 the bound of O(10^{-16}) using (7.12) matches the observed e_Q2 in (5.5) well, whereas the bound of O(10^{-6}) using (5.7) was very weak.

Remark 7.2 When m > n it is possible to have ‖ΔQ‖_F ≫ ‖QQ^TΔQ‖_F, and (7.10) must be used carefully.
Of course (7.10) is asymptotically correct, but when ɛ is large enough, the O(ɛ^2) term can dominate in the upper bound in (7.10) when the overall ‖ΔQ‖_F is large. That is, the multiplier in the O(ɛ^2) term can be very large. This is illustrated nicely in the computational example with A_1 in (5.4), for there m = 3, but n = 2 so κ_Q(A_1) = √2; also from (2.5)-(2.6)

A_1 + ΔA_1 = Q̃_1 R_1c, Q̃_1^T Q̃_1 = I, ‖Q̃_1 − Q_1c‖_F = O(10^{-16}), ‖ΔA_1‖_F = O(10^{-16}).
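The central point of Sect. 7, that κ_Q(A) = √2 cond_2(R_{n−1}) can be far smaller than φ(A) = √2 cond_2(R) when A has a single small singular value, is easy to reproduce. A Python sketch using NumPy and SciPy (the test matrix is made up with prescribed singular values; the paper's experiments used MATLAB):

```python
import numpy as np
from scipy.linalg import qr

def cond2(X):
    # cond_2(X) = || |X||X^{-1}| ||_2 as in (3.1).
    return np.linalg.norm(np.abs(X) @ np.abs(np.linalg.inv(X)), 2)

# A made-up matrix with exactly one tiny singular value and well-scaled
# columns, the situation discussed after Theorem 7.1.
rng = np.random.default_rng(4)
U = np.linalg.qr(rng.standard_normal((6, 4)))[0]
V = np.linalg.qr(rng.standard_normal((4, 4)))[0]
A = U @ np.diag([2.0, 1.5, 1.0, 1e-10]) @ V.T

# QR with standard column pivoting: A P = Q R.
Q, R, piv = qr(A, mode='economic', pivoting=True)

kappa_Q = np.sqrt(2) * cond2(R[:3, :3])  # sqrt(2) cond_2(R_{n-1}), eq. (7.8)
phi = np.sqrt(2) * cond2(R)              # sqrt(2) cond_2(R),       eq. (5.3)
print(f"kappa_Q(AP) = {kappa_Q:.2e}, phi(AP) = {phi:.2e}")
# kappa_Q stays modest while phi is of order 1/sigma_min.
assert kappa_Q < 1e3 and phi > 1e6
```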
We see from the argument preceding (2.6), and e_Q1 in (5.5), that ΔQ_1 ≡ Q̃_1 − Q_1 ≈ Q_1c − Q_1, e_Q1 = ‖Q_1c − Q_1‖_F, φ(A_1) ≡ √2 cond_2(R_1), see (5.6). However we also found using MATLAB the value of ‖Q_1Q_1^T(Q_1c − Q_1)‖_F, so that necessarily ‖Q_1Q_1^TΔQ_1‖_F = ‖Q_1Q_1^T(Q̃_1 − Q_1)‖_F, which is much larger than the first-order term in the upper bound in (7.10), whose value is O(10^{-16}). But from our computations the lowest bound in (7.10), (1/2)‖ΔQ_1‖_2^2, is also much larger than the first-order term, so the O(ɛ^2) term dominates the ɛ term in (7.10) even though ɛ ≈ 10^{-16}, explaining this result.

Theorem 7.1 can be used effectively as follows. Estimate cond_2(R)ɛ and κ_Q(A)ɛ. Since (7.11) is rigorous, the O(ɛ^2) term in (7.9) can never obscure the η_2 cond_2(R)ɛ term, so use this latter as the (approximate) bound. (η_3 cond_2(R)ɛ)^2 gives an indication of how large the lower bound in (7.10) could be. The O(ɛ) term in the upper bound of (7.10) is an excellent asymptotic bound, but if (η_3 cond_2(R)ɛ)^2 ≫ η_1 κ_Q(A)ɛ, then the O(ɛ^2) term may dominate in (7.10), and we are forced to use η_3 cond_2(R)ɛ as the (approximate) upper bound for ‖QQ^TΔQ‖_F as well.

8 Perturbation analyses for R

In Sect. 4 we saw (4.4) gives a basis for deriving first-order perturbation bounds for R in the QR factorization of a full column rank matrix A. In [4] it was shown there were two effective approaches to carrying out such derivations. These were named the matrix-vector equation approach, and the (refined) matrix equation approach. The matrix-vector equation approach will be used to provide a good measure of the conditioning of R, and a tight lower bound on the resulting condition number. We will then use ideas from the refined matrix equation approach to obtain an upper bound on this condition number, and a useful approach to estimating it in practice.

8.1 Matrix-vector equation analysis for R

The first approach views the matrix equation (4.4) as a large matrix-vector equation.
For any matrix C ≡ (c_ij) ≡ [c_1,…,c_n] ∈ ℝ^{n×n}, denote by c_j^{(i)} the vector of the first i elements of column c_j. With this, we define ("u" denotes "upper")

uvec(C) ≡ [c_1^{(1)}; c_2^{(2)}; … ; c_n^{(n)}].

It is the vector formed by stacking the columns of the upper triangular part of C into one long vector. The matrix equation (4.4) can be rewritten in the following form (for the derivation, see [2,4], or just write down the uvec of (4.4) for the n = 2 case):

(8.1) W_R uvec(Ṙ(0)) = Z_R vec(Q^T G),

where W_R ∈ ℝ^{n(n+1)/2 × n(n+1)/2} and Z_R ∈ ℝ^{n(n+1)/2 × n^2} are formed from the elements of R:

W_R ≡ r 11 r 12 r 11 r 12 r 22 r 1n r 11 r 1n r 2n r 12 r 22 r 11 r 1n r 2n r nn, Z_R ≡ r 12 r 22 r 11 r 12 r 22 r 1n r 2n r nn r 11 r 1n r 2n r nn r 12 r 22 r 1n r 2n r nn.

Since R is nonsingular, W_R is also, and from (8.1)

(8.2) uvec(Ṙ(0)) = W_R^{-1} Z_R vec(Q^T G).

In [4] we only assumed a bound on ‖G‖_F, and the tight condition number for R was immediately seen to be ‖W_R^{-1}Z_R‖_2. Here the analysis has to be more subtle to take account of the important additional information in (4.1). From (4.1)

|Q^T G| ≤ |Q^T||G| ≤ |Q^T|C|A| ≤ |Q^T|C|Q||R|,
and with this (8.2) gives

|uvec(Ṙ(0))| ≤ |W_R^{-1}Z_R| vec(|Q^T G|) ≤ |W_R^{-1}Z_R| vec(|Q^T|C|Q||R|).

The second inequality here appears to lead to upper bounds which are not in general tight, but this seems unavoidable. Note for any matrices Y ∈ ℝ^{p×m} and N ∈ ℝ^{m×n},

(8.3) vec(YN) = (N^T ⊗ I_p) vec(Y),

where ⊗ denotes the Kronecker product (see for example [11, p. 410]). It follows that

(8.4) |uvec(Ṙ(0))| ≤ |W_R^{-1}Z_R| (|R|^T ⊗ I_n) vec(|Q^T|C|Q|).

Taking the 2-norm, we obtain

‖Ṙ(0)‖_F ≤ ‖ |W_R^{-1}Z_R| (|R|^T ⊗ I_n) ‖_2 ‖ |Q^T|C|Q| ‖_F.

Finally using the Taylor expansion (4.8) and the notation in (3.2),

(8.5) ‖ΔR‖_F/‖R‖_2 ≤ η_1 (‖ |W_R^{-1}Z_R| (|R|^T ⊗ I_n) ‖_2 / ‖R‖_2) ɛ + O(ɛ^2).

Thus for perturbations of the form (4.1) we can regard the quantity

(8.6) κ_R(A) ≡ ‖ |W_R^{-1}Z_R| (|R|^T ⊗ I_n) ‖_2 / ‖R‖_2

as the condition number for R in the QR factorization of A. Now we obtain a lower bound for κ_R(A). It is not difficult to verify (see the Appendix of [2]) from the structures of W_R and Z_R that the first row of W_R^{-1}Z_R is

(1, 0,…,0, 0,…,0), with n entries then (n−1)n entries,

and the (i(i−1)/2 + 1)-th row of W_R^{-1}Z_R is, for i = 2,…,n,

(0, r_2i/r_11,…,r_ii/r_11, 0,…,0, 0,…,0, 1, 0,…,0, 0,…,0), with blocks of n, (i−2)n, n, (n−i)n entries.

Thus by simple multiplications, we see that the first n elements of the (i(i−1)/2 + 1)-th row of |W_R^{-1}Z_R| (|R|^T ⊗ I_n) are

(|r_1i|, |r_2i|,…,|r_ii|, 0,…,0), i = 1,…,n, with n − i trailing zeros.
That is to say there exists a permutation matrix P such that

P |W_R^{-1}Z_R| (|R|^T ⊗ I_n) = [ |R^T| ∗ ].

Hence we have ‖ |W_R^{-1}Z_R| (|R|^T ⊗ I_n) ‖_2 ≥ ‖ |R^T| ‖_2 ≥ ‖R‖_2, so

(8.7) κ_R(A) ≥ 1,

where it can be seen from the matrices W_R and Z_R following (8.1) that this lower bound is attained with R = I. The difficulty with the condition number κ_R(A) is that it is expensive to compute or even estimate directly. In Sect. 8.2 we will obtain bounds suggesting a practical condition estimator.

8.2 Obtaining upper bounds using matrix equation ideas

The matrix equation description (4.5), showing Ṙ(0) = up[Q^T G R^{-1} + (Q^T G R^{-1})^T] R, is just another way of saying the same thing as uvec(Ṙ(0)) = W_R^{-1}Z_R vec(Q^T G) in (8.2). So for any X ∈ ℝ^{n×n},

(8.8) W_R^{-1}Z_R vec(X) = uvec{up[XR^{-1} + (XR^{-1})^T] R}.

It is clear from the right hand side of this that each element of W_R^{-1}Z_R is a sum of terms, where each term is a product of an element of R^{-1} with an element of R. It follows that for any X ≥ 0, X ∈ ℝ^{n×n},

(8.9) |W_R^{-1}Z_R| vec(X) ≤ uvec{up[X|R^{-1}| + (X|R^{-1}|)^T] |R|}.

This can also be proven by comparing the ith elements of both sides of (8.9) (i = 1, 2,…,n(n+1)/2). Let D_i = diag(sign((W_R^{-1}Z_R)_{i,:})) and define X_i ∈ ℝ^{n×n} by vec(X_i) = D_i vec(X). Then for X ≥ 0

(|W_R^{-1}Z_R| vec(X))_i = (W_R^{-1}Z_R)_{i,:} D_i vec(X) = (W_R^{-1}Z_R)_{i,:} vec(X_i) = (uvec{up[X_i R^{-1} + (X_i R^{-1})^T] R})_i, see (8.8), ≤ (uvec{up[|X_i||R^{-1}| + (|X_i||R^{-1}|)^T] |R|})_i.

Notice that X ≥ 0 and |X_i| = X, so (8.9) indeed holds. Now we define M_R to be our matrix of interest in (8.6), that is

(8.10) M_R ≡ |W_R^{-1}Z_R| (|R|^T ⊗ I_n), κ_R(A) = ‖M_R‖_2 / ‖R‖_2.

For later use, notice from (8.3) that for any Y ∈ ℝ^{n×n}, |W_R^{-1}Z_R| vec(Y|R|) = M_R vec(Y).
We want to find practical bounds for ‖M_R‖_2. Write 𝒟_n for the set of all n×n real positive definite diagonal matrices. Notice that for any matrix B and any D ∈ 𝒟_n we have up(B)D = up(BD). Now from (8.9) with X = Y|R| and Y ≥ 0, it follows that

‖M_R vec(Y)‖_2 / ‖vec(Y)‖_2 = ‖ |W_R^{-1}Z_R| vec(Y|R|) ‖_2 / ‖vec(Y)‖_2
≤ ‖ uvec{up[Y|R||R^{-1}| + (Y|R||R^{-1}|)^T] |R|} ‖_2 / ‖vec(Y)‖_2
= ‖ up[Y|R||R^{-1}| + (Y|R||R^{-1}|)^T] |R| ‖_F / ‖Y‖_F
= ‖ up[Y|R||R^{-1}|D + D^{-1}(Y|R||R^{-1}|D)^T D] D^{-1}|R| ‖_F / ‖Y‖_F
≤ ρ_D ‖Y|R||R^{-1}|D‖_F ‖D^{-1}R‖_2 / ‖Y‖_F, see (3.4),
≤ ρ_D ‖ |R||R^{-1}|D ‖_2 ‖D^{-1}R‖_2.

But since M_R ≥ 0, ‖M_R‖_2 = max_{Y≥0, Y≠0} ‖M_R vec(Y)‖_2 / ‖vec(Y)‖_2, so

(8.11) ‖M_R‖_2 ≤ ρ_D ‖ |R||R^{-1}|D ‖_2 ‖D^{-1}R‖_2, D ∈ 𝒟_n.

When D = I, ρ_D = √2, and this last with (8.5) and (8.10) gives

‖ΔR‖_F/‖R‖_2 ≤ √2 η_1 cond_2(R) ɛ + O(ɛ^2),

which is exactly our 2- and F-norm analogy (5.1) of Zha's [18, Theorem 2.1] bound. Clearly by choosing D carefully we will usually get a better result than this. Now we define (with ρ_D as in (3.4))

(8.12) κ̄_R(A) ≡ inf_{D ∈ 𝒟_n} κ(R, D),
(8.13) κ(R, D) ≡ ρ_D ‖ |R||R^{-1}|D ‖_2 ‖D^{-1}R‖_2 / ‖R‖_2.

This gives bounds on the true condition number κ_R(A) in (8.6) and (8.7) (with φ(A) in (5.3)) of

(8.14) 1 ≤ κ_R(A) ≤ κ̄_R(A) ≤ κ(R, I) = √2 cond_2(R) = φ(A).

The above analysis, with (8.5), leads to the following theorem.

Theorem 8.1 Assume that the conditions and assumptions of Theorem 4.1 hold, then A + ΔA = A + ɛG has the unique QR factorization A + ΔA = (Q + ΔQ)(R + ΔR), where

(8.15) ‖ΔR‖_F/‖R‖_2 ≤ η_1 κ̄_R(A) ɛ + O(ɛ^2),
(8.16) 1 ≤ κ_R(A) ≤ κ̄_R(A) ≤ φ(A),
with η_1, κ_R(A), κ̄_R(A) and φ(A) defined by (3.2), (8.6), (8.12) with (8.13), and (5.3).

When we use the standard column pivoting strategy in AP = QR, where P is the permutation matrix, we get a very nice result. Here the elements of the resulting R satisfy, for i = 1,…,n,

r_ii^2 ≥ Σ_{k=i}^{j} r_kj^2, for each j = i,…,n.

It follows that r_11 ≥ r_22 ≥ … ≥ r_nn, and r_ii ≥ |r_ij|. Take D = diag(r_ii), then ρ_D ≤ √2, and R̃ ≡ D^{-1}R is unit upper triangular with 1 = |r̃_ii| ≥ |r̃_ij| for all j ≥ i, and it is easy to show that for j > i, |(R̃^{-1})_{ij}| ≤ 2^{j−i−1} (see, for example, [7, Lemma 8.6]). Thus from (8.13) we have

κ̄_R(AP) ≤ κ(R, D) ≤ √(2n) ‖R̃^{-1}‖_F ‖R̃‖_F ≤ √(n^2(1+n)(4^n + 6n − 1))/3.

We see that when the standard pivoting strategy is used, the sensitivity of R is bounded for any fixed n. We summarize this as a theorem.

Theorem 8.2 Let A ∈ ℝ^{m×n} be of full column rank, with the QR factorization AP = QR when the standard column pivoting strategy is used. Then in (8.16)

(8.17) 1 ≤ κ_R(AP) ≤ κ̄_R(AP) ≤ √(n^2(1+n)(4^n + 6n − 1))/3.

In contrast, φ(A) in (5.3) can be arbitrarily large for fixed n, even when standard column pivoting is used. For example, A = R = [1 1/2; 0 ɛ], with very small ɛ > 0. It is easy to see φ(A) = O(1/ɛ). So the bounds (5.1) can severely overestimate the sensitivity of R.

Clearly κ̄_R(A) in (8.12) is a candidate for estimating the condition number κ_R(A) of R in the QR factorization. We now give some insight as to why R in the QR factorization is often less sensitive than the earlier condition estimates φ(A) = √2 cond_2(R) suggested. We know that cond_2(R) = ‖ |R||R^{-1}| ‖_2 only takes out the effect of bad column scaling in R, whereas according to [17, Thm. 3.3], we have

‖R^{-1}D_r‖_2 ‖D_r^{-1}R‖_2 ≤ √n inf_{D ∈ 𝒟_n} ‖R^{-1}D‖_2 ‖D^{-1}R‖_2, D_r ≡ diag(‖(R)_{i,:}‖_2).

Thus ‖R^{-1}D_r‖_2 ‖D_r^{-1}R‖_2 takes out the effect of bad column and row scaling in R. Thus for an R with poor row scaling, if ρ_{D_r} is not large, then κ̄_R(A) will be much smaller than φ(A).
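The two facts behind Theorem 8.2, the pivoting inequalities on the elements of R and the 2^{j−i−1} bound on the inverse of the row-equilibrated factor R̃ = D^{-1}R, can both be checked directly. A Python sketch using NumPy and SciPy (the badly column-scaled test matrix is made up; the paper used MATLAB):

```python
import numpy as np
from scipy.linalg import qr

# A badly column-scaled random matrix, for illustration only.
rng = np.random.default_rng(5)
n = 6
A = rng.standard_normal((8, n)) * (10.0 ** rng.integers(-8, 1, n))

Q, R, piv = qr(A, mode='economic', pivoting=True)
R = R * np.sign(np.diag(R))[:, None]   # normalize the diagonal to be positive
d = np.diag(R)

# Standard column pivoting guarantees r_ii^2 >= sum_{k=i..j} r_kj^2 (j >= i),
# hence r_11 >= r_22 >= ... >= r_nn and r_ii >= |r_ij|.
for i in range(n):
    for j in range(i, n):
        assert d[i] ** 2 >= (R[i:j + 1, j] ** 2).sum() * (1 - 1e-12)

# D = diag(r_ii) makes Rt = D^{-1} R unit upper triangular with |rt_ij| <= 1,
# so |(Rt^{-1})_{ij}| <= 2^{j-i-1} ([7, Lemma 8.6]), the key to Theorem 8.2.
Rt = R / d[:, None]
Rti = np.linalg.inv(Rt)
for i in range(n):
    for j in range(i + 1, n):
        assert abs(Rti[i, j]) <= 2.0 ** (j - i - 1) + 1e-10
print("pivoting inequalities and the 2^{j-i-1} bound verified")
```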
Now we return to the example in Sect. 5. By a similar argument to that for (5.8), we have for the MATLAB QR factorization

(8.18) ‖R_c − R‖_F/‖R‖_2 ≤ γ'_{m,n} κ̄_R(A) u + O(u^2).

For the matrices A_1 and A_2, we take row scalings D_1 = diag(2, ) and D_2 = diag(2, ), respectively; then with (8.12)-(8.14)

κ̄_R(A_1) ≤ κ(R_1, D_1) ≈ 2.3, κ̄_R(A_2) ≤ κ(R_2, D_2) ≈ 2.3.

The analysis by Zha [18] leads to the condition numbers φ(A_1) and φ(A_2) of (5.6), see (5.1), (5.2). Obviously the new error bound (8.18) gives good estimates for both e_R1 and e_R2 in (5.5), in contrast to [18].

9 Condition estimation and numerical experiments

In Sect. 7 we gave first-order perturbation bounds for the change in Q lying in 𝓡(Q), and the change in Q lying in the orthogonal complement of 𝓡(Q), and defined κ_Q(A) = √2 cond_2(R_{n−1}) as the (asymptotic) condition number for the former. In Sect. 8.2 we presented a perturbation bound for the factor R, and defined κ̄_R(A) as a corresponding condition number. In practice we would like to choose D such that κ(R, D) ≡ ρ_D ‖ |R||R^{-1}|D ‖_2 ‖D^{-1}R‖_2 / ‖R‖_2 is a good approximation to the infimum κ̄_R(A) of κ(R, D), where we know κ_R(A) ≤ κ̄_R(A). It seems there is no obvious way to cheaply find the best D. Our numerical experiments indicate if we choose D to equilibrate the rows of R, i.e., D = D_r ≡ diag(‖(R)_{i,:}‖_2), we usually obtain a good estimate of κ̄_R(A). But sometimes it can lead to large ρ_D and result in an overestimate. However in such situations we found the previous estimate φ(A) = κ(R, I) gave a good approximation. So we may use min{κ(R, D_r), κ(R, I)} as an estimate of the condition number κ_R(A). The remaining problem is in practice how to cheaply estimate the 2-norm quantity in (8.13) with D = D_r or D = I. Since the 2-norm and the 1-norm of a matrix differ by at most a modest dimension-dependent factor, we can estimate the latter instead of the former. Following van der Sluis [17, Thm. 2.6],

(9.1) 1 D 1 = D 1 c 1 D c 1 D 1, D_c ≡ diag(‖(·)_{:,j}‖_1).
Notice that ||D_c R^{-1}D||_1 can be estimated in O(n^2) flops (see for example Higham [7, Chap. 14]). Thus both κ(R, D_r) and κ(R, I) can be estimated in a total of O(n^2) flops.

Our numerical experiments also suggest that another option for D may give a good approximation. With D_c as given in (9.1), choose D = diag(δ_j) to approximately equilibrate the columns of D_c R^{-1} in (9.1) while keeping ρ_D ≤ 2. To do this, take δ_1 = 1/||(D_c R^{-1})_{:,1}||_2; then for j = 2,...,n take δ_j = 1/||(D_c R^{-1})_{:,j}||_2 if ||(D_c R^{-1})_{:,j}||_2 ≥ ||(D_c R^{-1})_{:,j-1}||_2, and otherwise δ_j = δ_{j-1}. For this D we write

(9.2)    D_e ≡ D

to indicate this choice in the tables ('e' denotes 'equilibration'). The problem is that, to our knowledge, there is as yet no known way to estimate the 2-norm of each column of the inverse of an upper triangular matrix in O(n^2) flops. This is an interesting problem in itself.

We showed with (5.4) that it is easy to construct artificial examples where the previous bounds (5.1)–(5.3) are exceptionally poor, while the new ones here are very good. The examples we now give are intended to develop a feeling for more usual behaviour. We give three sets of examples, each with and without pivoting, to show how good the new condition numbers are compared with the previous ones, how well the new error bounds match the actual errors in the computed factors, and how well the condition estimates approximate these new condition numbers. In all these experiments we computed the condition numbers and condition estimates. For the first and second sets of examples, see Tables 1–8, we also computed (cf. (7.12) and (8.18), with P = I for no pivoting):

(9.3)    e_Q = ||Q_c − Q||_F,    b_Q = κ_Q(AP)u,    e_R = ||R_c − R||_F / ||R||_2,    b_R = κ'_R(AP)u.

Here Q_c and R_c are the QR factors of A computed in single precision by means of a MATLAB program chop.m provided by Higham [8], Q and R are the QR factors of A computed by MATLAB (in double precision), and u is the single precision unit roundoff.
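The quantities in (9.3) can be mimicked outside MATLAB. In the sketch below (our own; IEEE single precision via NumPy's float32 stands in for the chop.m setup, np.linalg.qr stands in for MATLAB's qr, and both factorizations are normalized so that R has a positive diagonal, giving the unique factorization discussed in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d1, d2 = 20, 0.8, 2.0
D1 = np.diag(d1 ** np.arange(n))
D2 = np.diag(d2 ** np.arange(n))
A = D1 @ rng.standard_normal((n, n)) @ D2      # graded test matrix A = D1*B*D2

def qr_pos(M):
    """QR factorization normalized so that R has a positive diagonal."""
    Q, R = np.linalg.qr(M)
    s = np.sign(np.diag(R))
    return Q * s, R * s[:, None]

Qc, Rc = qr_pos(A.astype(np.float32))          # "computed" single precision factors
Q, R = qr_pos(A.astype(np.float64))            # double precision reference factors

u = np.finfo(np.float32).eps / 2               # single precision unit roundoff
e_Q = np.linalg.norm(Qc.astype(np.float64) - Q)
e_R = np.linalg.norm(Rc.astype(np.float64) - R) / np.linalg.norm(R, 2)
print(e_Q, e_R, u)
```

The double precision factors play the role of the "exact" ones, so e_Q and e_R approximate the actual single precision errors, just as in the experiments described here.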
So e_Q and e_R are very good approximations to the norms of the actual errors in the computed Q_c and R_c, and b_Q and b_R are approximations to the error bounds (note that the matrices in our examples are square, and here we ignore the constants γ_{n,n} in (7.12) and γ_{m,n} in (8.18)).

Each matrix in the first set has the form A = D_1 B D_2, where D_1 = diag(1, d_1, ..., d_1^{n−1}), D_2 = diag(1, d_2, ..., d_2^{n−1}), and B is an n × n random matrix produced by the MATLAB function randn, so these A are graded. The results for n = 20, d_1, d_2 ∈ {0.8, 1, 2}, and the same matrix
B, are shown in Tables 1–4. The results are given for Q, then for R, first without, then with standard pivoting.

Each matrix in the second set has the form A = Q(D_1 U D_2), where U is the upper triangular part of a random matrix produced also by randn, and D_1 and D_2 are the same as in the first set of examples. Q is a random orthogonal matrix produced by qmult.m in [8]. This gives the less likely case of graded R when no pivoting is used. The results for n = 20, d_1, d_2 ∈ {0.8, 1, 2}, the same matrix U, but a different Q for each case, are shown in Tables 5–8.

The third set involves n × n Kahan matrices (see [10]):

    A = R = diag(1, s, ..., s^{n−1}) [ 1  −c  ···  −c ]
                                     [     1  ···  −c ]
                                     [          ⋱  ⋮  ]
                                     [             1  ],

where c = cos(θ), s = sin(θ). The results for n = 5, 10, 15, 20, 25 with θ = π/8 are shown in Table 9 without pivoting, and in Table 10 for AP, where P is a permutation such that the first column moves to the last column position and the remaining columns are moved left one position; this permutation is adopted in the rank-revealing QR factorization for Kahan matrices, see for example Hong and Pan [9].

We now comment on the tables. Ideally we have AP = QR, with P = I for no pivoting. Remember that ϕ(AP) = κ(R, I), see (8.14).

1. We used square matrices in the examples, so QQ^T ΔQ = ΔQ. The new measure of sensitivity κ_Q(AP) of Q, see (7.8), (7.10), is never larger than the old ϕ(AP) = κ(R, I), see (5.2), (5.3). In Tables 7 and 10 the new measure gives a significant improvement over the old, and even in the other tables the difference makes it worthwhile using the new measure.

2. For the sensitivity of R, the old measure κ(R, I) compares poorly with the new measure κ'_R(A) and the estimates κ(R, D_r) and κ(R, D_e) in all except Table 6, where it is clearly superior to κ(R, D_r) in the last three cases (it is also marginally better in the fifth and sixth cases).
In all but these three anomalous cases κ(R, D_r) gave an excellent estimate of κ'_R(AP), while in these anomalous cases κ(R, I) gave a good estimate, supporting the suggestion of taking min{κ(R, D_r), κ(R, I)}. In all the cases where κ(R, I) is very large, the new measure κ'_R(AP) is significantly smaller. This suggests that R in the QR factorization is not nearly as sensitive to perturbations of the form (1.1) as was previously thought, see [18].

3. When standard column pivoting is used κ'_R(AP) can be improved significantly, and in fact in Tables 4 and 8, κ'_R(AP) is O(1). κ_Q(AP) can also be improved significantly, compare Table 5 with 7. But the old measure for both, κ(R, I), does not change much.
Table 1. A = D_1 B D_2 without pivoting, sensitivity of Q
    (columns d_1, d_2, κ_Q(A), ϕ(A) = κ(R, I), e_Q, b_Q; numerical entries omitted)

Table 2. A = D_1 B D_2 without pivoting, sensitivity of R
    (columns d_1, d_2, κ'_R(A), κ(R, D_r), κ(R, D_e), κ(R, I), e_R, b_R; numerical entries omitted)

4. The error bounds b_Q and b_R match the corresponding actual errors e_Q and e_R very well, see (9.3). The numerical results show that standard pivoting can significantly improve the accuracy of both the QR factors, compare Table 5 with 7, and 6 with 8.

5. The Kahan matrix is theoretically already in standard column pivoting form, and κ'_R(A) grows significantly as n increases, though not as fast as the bound in (8.17). But rank-revealing pivoting brought κ'_R(AP) back down to O(1).

10 Summary and future work

Componentwise perturbation analyses have been given for the QR factorization of a matrix A with the form of perturbations we could expect from the equivalent backward error in A resulting from numerically stable computations. For the Q factor we derived the condition numbers for that part of ΔQ in R(A), and that part in the orthogonal complement of R(A). Both can be estimated in O(n^2) flops. For the R factor, we first derived the new condition number, and then suggested practical condition estimators. These provide estimates in O(n^2)
Table 3. A = D_1 B D_2 with standard pivoting, sensitivity of Q
    (columns d_1, d_2, κ_Q(AP), ϕ(AP) = κ(R, I), e_Q, b_Q; numerical entries omitted)

Table 4. A = D_1 B D_2 with standard pivoting, sensitivity of R
    (columns d_1, d_2, κ'_R(AP), κ(R, D_r), κ(R, D_e), κ(R, I), e_R, b_R; numerical entries omitted)

Table 5. A = Q(D_1 U D_2) without pivoting, sensitivity of Q
    (columns d_1, d_2, κ_Q(A), ϕ(A) = κ(R, I), e_Q, b_Q; numerical entries omitted)

flops. The analyses more accurately reflect the sensitivity of the problem than previous such results. Both the analysis and the numerical results show that standard column pivoting can significantly decrease the sensitivity of R, and of Q as well, and so give more accurate QR factors. We also found that a rank-revealing pivoting strategy could improve the condition of R and Q.

The Q and R factors have practical meanings in several applications, see for example [3]. Certainly in these cases it can be important to know how accurate, or sensitive, these factors are. One contribution of this paper has
Table 6. A = Q(D_1 U D_2) without pivoting, sensitivity of R
    (columns d_1, d_2, κ'_R(A), κ(R, D_r), κ(R, D_e), κ(R, I), e_R, b_R; numerical entries omitted)

Table 7. A = Q(D_1 U D_2) with standard pivoting, sensitivity of Q
    (columns d_1, d_2, κ_Q(AP), ϕ(AP) = κ(R, I), e_Q, b_Q; numerical entries omitted)

Table 8. A = Q(D_1 U D_2) with standard pivoting, sensitivity of R
    (columns d_1, d_2, κ'_R(AP), κ(R, D_r), κ(R, D_e), κ(R, I), e_R, b_R; numerical entries omitted)

Table 9. n × n Kahan matrices without pivoting
    (columns n, κ_Q(A), κ'_R(A), κ(R, D_r), κ(R, D_e), ϕ(A) = κ(R, I), for n = 5, 10, 15, 20, 25; numerical entries omitted)
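The Kahan matrices behind Tables 9 and 10 can be generated directly from the definition in Sect. 9. The sketch below (our own, not from the paper) also checks the property noted in comment 5, that the Kahan matrix is already in standard column pivoting form: the pivoting inequalities r_ii^2 ≥ Σ_{k=i}^{j} r_kj^2 hold, in fact with equality, and it forms the rank-revealing permutation used for Table 10.

```python
import numpy as np

def kahan(n, theta=np.pi / 8):
    """n x n Kahan matrix: diag(1, s, ..., s^(n-1)) times the unit upper
    triangular matrix with every entry above the diagonal equal to -c,
    where c = cos(theta) and s = sin(theta)."""
    c, s = np.cos(theta), np.sin(theta)
    U = np.eye(n) + np.triu(-c * np.ones((n, n)), k=1)
    return np.diag(s ** np.arange(n)) @ U

n = 10
A = kahan(n)

# Already in standard column pivoting form: r_ii^2 = sum_{k=i..j} r_kj^2
# holds (with equality, up to rounding) for every j >= i.
for i in range(n):
    for j in range(i, n):
        col = sum(A[k, j] ** 2 for k in range(i, j + 1))
        assert abs(A[i, i] ** 2 - col) < 1e-12

# Rank-revealing permutation used for Table 10: the first column is moved
# to the last position and the remaining columns shift left by one.
AP = A[:, list(range(1, n)) + [0]]
```

Since c^2 + s^2 = 1, each column sum telescopes to r_ii^2, which is why standard column pivoting leaves the Kahan matrix unchanged even though its trailing diagonal entry s^(n-1) is tiny.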
Table 10. n × n Kahan matrices with rank-revealing pivoting
    (columns n, κ_Q(AP), κ'_R(AP), κ(R, D_r), κ(R, D_e), ϕ(AP) = κ(R, I), for n = 5, 10, 15, 20, 25; numerical entries omitted)

been to provide the theory whereby the sensitivity of computed QR factors may be obtained. We propose that standard numerical linear algebra software packages include the option of computing the condition estimates given in this paper for the QR factors. This can be done by using standard norm estimators that are already available in the packages. These estimates can then be used to supply measures of the accuracy of the computed factors. As pointed out in Sect. 9, this estimation can be done in O(n^2) flops. Compared with the O(mn^2) flops required for the computation of the QR factorization itself by Householder or Givens transformations, this extra computation does not cost much. Such information will be very helpful to users interested in either the accuracy of the computed QR factors, or their sensitivity to other perturbations of the form (1.1).

In the future we would like to investigate how well the suggested practical condition estimates approximate the corresponding condition numbers. Also we would like to apply the approaches here to the Householder QR factorization with complete pivoting (see Cox and Higham [5]).

Acknowledgements. The referees' suggestions improved the presentation greatly.

References

1. X.-W. Chang, Ph.D. Thesis, School of Computer Science, McGill University, Montreal, Quebec, Canada, February
2. X.-W. Chang, C. C. Paige, A perturbation analysis for R in the QR factorization. Technical Report SOCS-95.7, School of Computer Science, McGill University
3. X.-W. Chang, C. C. Paige, Sensitivity analyses for factorizations of sparse or structured matrices. Linear Algebra Appl. 284 (1998)
4. X.-W. Chang, C. C. Paige, G. W. Stewart, Perturbation analyses for the QR factorization. SIAM J. Matrix Anal. Appl. 18 (1997)
5. A. J. Cox, N. J.
Higham, Stability of Householder QR factorization for weighted least squares problems. In: D. F. Griffiths, D. J. Higham, G. A. Watson (eds.), Numerical Analysis 1997, Proceedings of the 17th Dundee Biennial Conference, Pitman Research Notes in Mathematics 380. Addison Wesley Longman, Harlow, Essex, UK
6. G. H. Golub, C. F. Van Loan, Matrix Computations, 3rd ed. The Johns Hopkins University Press, Baltimore, MD, 1996
7. N. J. Higham, Accuracy and Stability of Numerical Algorithms. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA
8. N. J. Higham, The Test Matrix Toolbox for Matlab, version 3.0. Numerical Analysis Report No. 265, University of Manchester, Manchester, England
9. Y. P. Hong, C.-T. Pan, Rank-revealing QR factorizations and the singular value decomposition. Math. Comp. 58 (1992)
10. W. Kahan, Numerical linear algebra. Can. Math. Bull. 9 (1966)
11. P. Lancaster, M. Tismenetsky, The Theory of Matrices, 2nd ed. Academic Press, New York
12. G. W. Stewart, Perturbation bounds for the QR factorization of a matrix. SIAM J. Numer. Anal. 14 (1977)
13. G. W. Stewart, On the perturbation of LU, Cholesky, and QR factorizations. SIAM J. Matrix Anal. Appl. 14 (1993)
14. J.-G. Sun, Perturbation bounds for the Cholesky and QR factorizations. BIT 31 (1991)
15. J.-G. Sun, Componentwise perturbation bounds for some matrix decompositions. BIT 32 (1992)
16. J.-G. Sun, On perturbation bounds for the QR factorization. Linear Algebra Appl. 215 (1995)
17. A. van der Sluis, Condition numbers and equilibration of matrices. Numerische Mathematik 14 (1969)
18. H. Zha, A componentwise perturbation analysis of the QR decomposition. SIAM J. Matrix Anal. Appl. 14 (1993)
More information7. LU factorization. factor-solve method. LU factorization. solving Ax = b with A nonsingular. the inverse of a nonsingular matrix
EE507 - Computational Techniques for EE 7. LU factorization Jitkomut Songsiri factor-solve method LU factorization solving Ax = b with A nonsingular the inverse of a nonsingular matrix LU factorization
More informationOn the stability of the Bareiss and related Toeplitz factorization algorithms
On the stability of the Bareiss and related oeplitz factorization algorithms A. W. Bojanczyk School of Electrical Engineering Cornell University Ithaca, NY 14853-5401, USA adamb@toeplitz.ee.cornell.edu
More informationA fast randomized algorithm for overdetermined linear least-squares regression
A fast randomized algorithm for overdetermined linear least-squares regression Vladimir Rokhlin and Mark Tygert Technical Report YALEU/DCS/TR-1403 April 28, 2008 Abstract We introduce a randomized algorithm
More informationETNA Kent State University
Electronic Transactions on Numerical Analysis. Volume 17, pp. 76-92, 2004. Copyright 2004,. ISSN 1068-9613. ETNA STRONG RANK REVEALING CHOLESKY FACTORIZATION M. GU AND L. MIRANIAN Ý Abstract. For any symmetric
More informationON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH
ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix
More informationPROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS
PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS Abstract. We present elementary proofs of the Cauchy-Binet Theorem on determinants and of the fact that the eigenvalues of a matrix
More informationComponentwise Error Analysis for Stationary Iterative Methods
Componentwise Error Analysis for Stationary Iterative Methods Nicholas J. Higham Philip A. Knight June 1, 1992 Abstract How small can a stationary iterative method for solving a linear system Ax = b make
More informationNumerically Stable Cointegration Analysis
Numerically Stable Cointegration Analysis Jurgen A. Doornik Nuffield College, University of Oxford, Oxford OX1 1NF R.J. O Brien Department of Economics University of Southampton November 3, 2001 Abstract
More informationInstitute for Advanced Computer Studies. Department of Computer Science. On the Adjoint Matrix. G. W. Stewart y ABSTRACT
University of Maryland Institute for Advanced Computer Studies Department of Computer Science College Park TR{97{02 TR{3864 On the Adjoint Matrix G. W. Stewart y ABSTRACT The adjoint A A of a matrix A
More informationAn Iterative Method for the Least-Squares Minimum-Norm Symmetric Solution
Copyright 2011 Tech Science Press CMES, vol.77, no.3, pp.173-182, 2011 An Iterative Method for the Least-Squares Minimum-Norm Symmetric Solution Minghui Wang 1, Musheng Wei 2 and Shanrui Hu 1 Abstract:
More informationLinear Least-Squares Data Fitting
CHAPTER 6 Linear Least-Squares Data Fitting 61 Introduction Recall that in chapter 3 we were discussing linear systems of equations, written in shorthand in the form Ax = b In chapter 3, we just considered
More informationNumerical Methods I Non-Square and Sparse Linear Systems
Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant
More informationOutline. Math Numerical Analysis. Errors. Lecture Notes Linear Algebra: Part B. Joseph M. Mahaffy,
Math 54 - Numerical Analysis Lecture Notes Linear Algebra: Part B Outline Joseph M. Mahaffy, jmahaffy@mail.sdsu.edu Department of Mathematics and Statistics Dynamical Systems Group Computational Sciences
More information6 Linear Systems of Equations
6 Linear Systems of Equations Read sections 2.1 2.3, 2.4.1 2.4.5, 2.4.7, 2.7 Review questions 2.1 2.37, 2.43 2.67 6.1 Introduction When numerically solving two-point boundary value problems, the differential
More informationNumerical Methods. Elena loli Piccolomini. Civil Engeneering. piccolom. Metodi Numerici M p. 1/??
Metodi Numerici M p. 1/?? Numerical Methods Elena loli Piccolomini Civil Engeneering http://www.dm.unibo.it/ piccolom elena.loli@unibo.it Metodi Numerici M p. 2/?? Least Squares Data Fitting Measurement
More informationA Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations
A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations Jin Yun Yuan Plamen Y. Yalamov Abstract A method is presented to make a given matrix strictly diagonally dominant
More informationMath 471 (Numerical methods) Chapter 3 (second half). System of equations
Math 47 (Numerical methods) Chapter 3 (second half). System of equations Overlap 3.5 3.8 of Bradie 3.5 LU factorization w/o pivoting. Motivation: ( ) A I Gaussian Elimination (U L ) where U is upper triangular
More informationBlock Lanczos Tridiagonalization of Complex Symmetric Matrices
Block Lanczos Tridiagonalization of Complex Symmetric Matrices Sanzheng Qiao, Guohong Liu, Wei Xu Department of Computing and Software, McMaster University, Hamilton, Ontario L8S 4L7 ABSTRACT The classic
More informationPractical Linear Algebra: A Geometry Toolbox
Practical Linear Algebra: A Geometry Toolbox Third edition Chapter 12: Gauss for Linear Systems Gerald Farin & Dianne Hansford CRC Press, Taylor & Francis Group, An A K Peters Book www.farinhansford.com/books/pla
More informationarxiv: v1 [math.na] 1 Sep 2018
On the perturbation of an L -orthogonal projection Xuefeng Xu arxiv:18090000v1 [mathna] 1 Sep 018 September 5 018 Abstract The L -orthogonal projection is an important mathematical tool in scientific computing
More informationYimin Wei a,b,,1, Xiezhang Li c,2, Fanbin Bu d, Fuzhen Zhang e. Abstract
Linear Algebra and its Applications 49 (006) 765 77 wwwelseviercom/locate/laa Relative perturbation bounds for the eigenvalues of diagonalizable and singular matrices Application of perturbation theory
More information12. Perturbed Matrices
MAT334 : Applied Linear Algebra Mike Newman, winter 208 2. Perturbed Matrices motivation We want to solve a system Ax = b in a context where A and b are not known exactly. There might be experimental errors,
More informationBindel, Fall 2016 Matrix Computations (CS 6210) Notes for
1 Logistics Notes for 2016-08-26 1. Our enrollment is at 50, and there are still a few students who want to get in. We only have 50 seats in the room, and I cannot increase the cap further. So if you are
More informationExponentials of Symmetric Matrices through Tridiagonal Reductions
Exponentials of Symmetric Matrices through Tridiagonal Reductions Ya Yan Lu Department of Mathematics City University of Hong Kong Kowloon, Hong Kong Abstract A simple and efficient numerical algorithm
More informationELA THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE. 1. Introduction. Let C m n be the set of complex m n matrices and C m n
Electronic Journal of Linear Algebra ISSN 08-380 Volume 22, pp. 52-538, May 20 THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE WEI-WEI XU, LI-XIA CAI, AND WEN LI Abstract. In this
More information