On the Perturbation of the Q-factor of the QR Factorization


NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS
Numer. Linear Algebra Appl.

On the Perturbation of the Q-factor of the QR Factorization

X.-W. Chang
McGill University, School of Computer Science, 3480 University Street, McConnell Engineering Building, Room 318, Montreal, H3A 2A7, Canada

SUMMARY

This paper gives normwise and componentwise perturbation analyses for the Q-factor of the QR factorization of a matrix $A$ with full column rank when $A$ suffers from an additive perturbation. Rigorous perturbation bounds are derived on the projections of the perturbation of the Q-factor onto the range of $A$ and onto its orthogonal complement. These bounds overcome a serious shortcoming of the first-order perturbation bounds in the literature and can be used safely. From these bounds, first-order perturbation bounds identical or equivalent to those in the literature can easily be derived. When $A$ is square and nonsingular, tighter and simpler rigorous perturbation bounds on the perturbation of the Q-factor are presented. Copyright © John Wiley & Sons, Ltd.

key words: QR factorization, perturbation analysis

1. INTRODUCTION

One of the important matrix factorizations in matrix computations is the QR factorization. If $A \in \mathbb{R}^{m \times n}$ has full column rank, then it has a unique QR factorization $A = QR$, where $Q \in \mathbb{R}^{m \times n}$ has orthonormal columns and $R \in \mathbb{R}^{n \times n}$ is upper triangular with positive diagonal entries. The matrices $Q$ and $R$ are referred to as the Q-factor and the R-factor, respectively. Suppose that $\Delta A \in \mathbb{R}^{m \times n}$ is a small perturbation matrix such that $A + \Delta A$ has full column rank; then $A + \Delta A$ has a unique QR factorization $A + \Delta A = (Q + \Delta Q)(R + \Delta R)$, where $Q + \Delta Q$ has orthonormal columns and $R + \Delta R$ is upper triangular with positive diagonal entries. In the perturbation analysis of the QR factorization, one typically derives bounds on $\Delta Q$ (or $\|\Delta Q\|$) and $\Delta R$ (or $\|\Delta R\|$) in terms of (a bound on) $\Delta A$ (or $\|\Delta A\|$), where $\|\cdot\|$ is a norm and, for a matrix $C = (c_{ij})$, $|C|$ is defined by $(|c_{ij}|)$.
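The objects studied above are easy to reproduce numerically. The following sketch is our illustrative addition (not part of the paper; the helper name `qr_pos` is ours): it normalizes NumPy's QR factorization so that $R$ has a positive diagonal, giving the unique factorization assumed throughout, and then splits $\Delta Q$ into its components in the range of $A$ and in the orthogonal complement — the two quantities this paper bounds.

```python
import numpy as np

def qr_pos(A):
    # QR factorization normalized so that R has positive diagonal entries,
    # which makes the factorization unique for a full-column-rank A.
    Q, R = np.linalg.qr(A)
    s = np.sign(np.diag(R))
    s[s == 0] = 1.0
    return Q * s, s[:, None] * R  # Q diag(s), diag(s) R; diag(s)^2 = I

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
dA = 1e-8 * rng.standard_normal((8, 4))        # small additive perturbation
Q, R = qr_pos(A)
Q1, R1 = qr_pos(A + dA)
dQ = Q1 - Q
P = Q @ Q.T                                    # orthogonal projector P_A onto range(A)
in_range = np.linalg.norm(P @ dQ, 'fro')       # ||P_A dQ||_F
out_range = np.linalg.norm(dQ - P @ dQ, 'fro') # ||P_A^perp dQ||_F
print(in_range, out_range)
```

Since the two components are orthogonal, their squared Frobenius norms sum to $\|\Delta Q\|_F^2$, which gives a quick sanity check on an implementation.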
Correspondence to: chang@cs.mcgill.ca
Contract/grant sponsor: The author's research was supported by NSERC of Canada grant RGPIN.

The perturbation analysis of the QR factorization has been extensively studied. Three types of analysis have been presented, each corresponding to different information on $\Delta A$:

1. Given $\|\Delta A\|_F$ (or $\|P_A\Delta A\|_F$ and $\|P_A^\perp\Delta A\|_F$, where $P_A$ and $P_A^\perp$ denote the orthogonal projectors onto the range of $A$ and its orthogonal complement, respectively), the first rigorous

normwise perturbation bounds for the Q and R factors were presented by Stewart [8]. Then improved rigorous bounds and new first-order bounds were presented by Sun [10]. By a different approach Stewart [9] gave first-order bounds, which are similar to the first-order bounds of Sun [10]. Later Sun [12] gave new rigorous perturbation bounds for the Q-factor alone. The main contribution of [12] is a first-order bound, easily obtained from one of the rigorous bounds, that improves the first-order bounds given in [10] and [9] by a factor of $1 + 1/\sqrt{2}$. This first-order bound had been obtained earlier by Bhatia and Mukherjea [2] by a different approach for the case that $A$ is square and nonsingular. In [12], the same first-order bound for the R-factor as that given by Sun [10] was derived when $A$ is square and nonsingular. Using two different approaches, Chang et al. [4] gave two new first-order perturbation bounds for the R-factor. Both can be arbitrarily smaller than the previous first-order bounds. One of those two bounds is expensive to estimate, but it is optimal, leading to the condition number for the R-factor, which is bounded above by a function of $n$ when the standard column pivoting strategy is used in the QR factorization. To obtain tight results and to see clearly where any ill conditioning lies, Chang et al. [4] looked at how $\Delta Q$ is distributed between the range of $Q$ (equivalently the range of $A$) and its orthogonal complement. Specifically, two tight first-order bounds on $\|P_A\Delta Q\|_F$ and $\|P_A^\perp\Delta Q\|_F$ were given in [4]. Very recently, a new rigorous bound for the R-factor was given by Chang and Stehlé [5], which is a small constant multiple of one of the first-order bounds given in [4].

2. Given a componentwise bound on $|\Delta A|$, rigorous bounds on $|\Delta Q|$ and $|\Delta R|$ were given by Sun [11].

3. Suppose $|\Delta A| \le \epsilon\,C|A|$, where $C$ is a nonnegative matrix with $c_{ij} \le 1$.
An important motivation for considering such a class of perturbations is that the equivalent backward rounding error from a rounding error analysis of a standard QR factorization algorithm fits in this class. The first first-order normwise perturbation bounds for both the Q-factor and the R-factor were presented by Zha [14]. Later Chang and Paige [3] presented rigorous perturbation bounds for both the Q-factor and the R-factor, which are small constant multiples of Zha's first-order bounds. For the R-factor, [3] also gave two new first-order perturbation bounds, which can be arbitrarily smaller than Zha's first-order bound. For the Q-factor, [3] gave nearly tight first-order bounds on $\|P_A\Delta Q\|_F$ and $\|P_A^\perp\Delta Q\|_F$, respectively. Recently Chang et al. [6] derived a new rigorous bound for the R-factor, which is a small constant multiple of one of the two first-order bounds given in [3].

In the above we mentioned that [4] and [3] gave first-order bounds on $\|P_A\Delta Q\|_F$ and $\|P_A^\perp\Delta Q\|_F$ for the first type and the third type of perturbation in $A$, respectively. Both [4] and [3] pointed out that the first-order bounds on $\|P_A\Delta Q\|_F$ must be used carefully, since they may sometimes be much smaller than the actual $\|P_A\Delta Q\|_F$. In this paper we will give rigorous bounds on $\|P_A\Delta Q\|_F$ to avoid this problem, so that the bounds can be used safely in all cases. The first-order bounds obtained from these rigorous bounds are the same as or equivalent to those given in [4] and [3]. We will also present rigorous bounds on $\|P_A^\perp\Delta Q\|_F$. For the special case where $A$ is square and nonsingular we will derive tighter bounds on $\|P_A\Delta Q\|_F$, which is then identical to $\|\Delta Q\|_F$. The results fill in gaps left in [5] and [6], which do not present any new rigorous perturbation bounds on the Q-factor.

2. NOTATION AND BASICS

For any matrix $X \in \mathbb{R}^{m \times n}$, we define
$$\kappa_2(X) = \|X\|_2\,\|X^\dagger\|_2, \qquad \mathrm{cond}_2(X) = \|\,|X|\,|X^\dagger|\,\|_2,$$

where $X^\dagger$ is the Moore–Penrose pseudo-inverse of $X$. Let $A \in \mathbb{R}^{m \times n}$ with full column rank have the QR factorization $A = QR$; then $A^\dagger = R^{-1}Q^T$, $\|A\|_2 = \|R\|_2$, and $\kappa_2(A) = \kappa_2(R)$. Let $\bar{Q} \in \mathbb{R}^{m \times (m-n)}$ be an orthonormal matrix such that $[Q, \bar{Q}] \in \mathbb{R}^{m \times m}$ is orthogonal. Then the orthogonal projectors $P_A$ and $P_A^\perp$ onto the range of $A$ and its orthogonal complement, respectively, satisfy
$$P_A = QQ^T, \qquad P_A^\perp = \bar{Q}\bar{Q}^T.$$
It is easy to verify that for any matrix $X \in \mathbb{R}^{m \times k}$ we have $\|P_A X\|_p = \|Q^T X\|_p$ and $\|P_A^\perp X\|_p = \|\bar{Q}^T X\|_p$ for $p = 2, F$.

For any matrix $X = [X_{n-1}, x] = (x_{ij}) \in \mathbb{R}^{n \times n}$, define (see [4])
$$\mathrm{up}(X) = \begin{bmatrix} \tfrac{1}{2}x_{11} & x_{12} & \cdots & x_{1n} \\ & \tfrac{1}{2}x_{22} & \cdots & x_{2n} \\ & & \ddots & \vdots \\ & & & \tfrac{1}{2}x_{nn} \end{bmatrix}, \qquad \mathrm{low}(X) = [\mathrm{up}(X^T)]^T. \quad (1)$$
It is easy to verify that
$$\mathrm{low}(X) - \{\mathrm{low}(X)\}^T = \mathrm{low}([X_{n-1}, 0]) - \{\mathrm{low}([X_{n-1}, 0])\}^T, \quad (2)$$
$$\|\mathrm{up}(X)\|_F \le \|X\|_F, \quad (3)$$
$$\|\mathrm{up}(X^T + X)\|_F \le \sqrt{2}\,\|X\|_F, \quad (4)$$
$$\|\mathrm{up}(X)\|_F \le \tfrac{1}{\sqrt{2}}\|X\|_F \quad \text{if } X^T = -X, \quad (5)$$
$$\|X - \mathrm{up}(X + X^T)\|_F = \|\mathrm{low}(X) - [\mathrm{low}(X)]^T\|_F \le \sqrt{2}\,\|X\|_F. \quad (6)$$
Later when we derive perturbation bounds, we will use the following lemma, which was essentially proven in [4, Section 4].

Lemma 2.1. Given upper triangular $R = \begin{bmatrix} R_{n-1} & r \\ 0 & r_{nn} \end{bmatrix} \in \mathbb{R}^{n \times n}$ and $F = [F_{n-1}, f] \in \mathbb{R}^{n \times n}$, then
$$\|\mathrm{low}(FR^{-1}) - [\mathrm{low}(FR^{-1})]^T\|_F \le \sqrt{2}\,\|F_{n-1}R_{n-1}^{-1}\|_F \le \sqrt{2}\,\|F_{n-1}\|_F\,\|R_{n-1}^{-1}\|_2 \le \sqrt{2}\,\|F\|_F\,\|R_{n-1}^{-1}\|_2. \quad (7)$$
The above bounds are tight, in that for any $R$, $F$ can be chosen such that the equalities hold.

Proof. Since $F = [F_{n-1}, f]$ and
$$R^{-1} = \begin{bmatrix} R_{n-1}^{-1} & -R_{n-1}^{-1}r/r_{nn} \\ 0 & 1/r_{nn} \end{bmatrix},$$
from (2),
$$\mathrm{low}(FR^{-1}) - [\mathrm{low}(FR^{-1})]^T = \mathrm{low}([F_{n-1}R_{n-1}^{-1}, 0]) - \{\mathrm{low}([F_{n-1}R_{n-1}^{-1}, 0])\}^T.$$
Therefore, by (6),
$$\|\mathrm{low}(FR^{-1}) - [\mathrm{low}(FR^{-1})]^T\|_F \le \sqrt{2}\,\|F_{n-1}R_{n-1}^{-1}\|_F \le \sqrt{2}\,\|F_{n-1}\|_F\,\|R_{n-1}^{-1}\|_2 \le \sqrt{2}\,\|F\|_F\,\|R_{n-1}^{-1}\|_2.$$
For any $R$, we can choose $F$ such that the above inequalities become equalities. In fact, for any $R$, take $F = [e_n y^T, 0]$, where $e_n = [0, \ldots, 0, 1]^T \in \mathbb{R}^n$ and $y \neq 0$ satisfies $\|R_{n-1}^{-T}y\|_2 = \|R_{n-1}^{-1}\|_2\,\|y\|_2$; then
$$\|\mathrm{low}(FR^{-1}) - [\mathrm{low}(FR^{-1})]^T\|_F = \left\| \begin{bmatrix} 0 & -R_{n-1}^{-T}y \\ y^TR_{n-1}^{-1} & 0 \end{bmatrix} \right\|_F = \sqrt{2}\,\|R_{n-1}^{-T}y\|_2 = \sqrt{2}\,\|R_{n-1}^{-1}\|_2\,\|y\|_2 = \sqrt{2}\,\|R_{n-1}^{-1}\|_2\,\|F_{n-1}\|_F = \sqrt{2}\,\|R_{n-1}^{-1}\|_2\,\|F\|_F.$$
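The up and low operators and the properties (2)–(6) are straightforward to check numerically. The sketch below is our addition (the function names `up`/`low` mirror the notation above); it verifies the identity in (6) on a random matrix.

```python
import numpy as np

def up(X):
    # strictly upper triangular part of X plus half of its diagonal
    return np.triu(X, 1) + 0.5 * np.diag(np.diag(X))

def low(X):
    # low(X) = [up(X^T)]^T: strictly lower triangular part plus half the diagonal
    return up(X.T).T

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 6))
fro = lambda M: np.linalg.norm(M, 'fro')

print(fro(up(X)) <= fro(X))                          # property (3)
print(fro(X - up(X + X.T) - (low(X) - low(X).T)))    # identity in (6): should be ~0
```

Note that `up(X) + low(X) == X` by construction, which is the decomposition underlying (6).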

3. MAIN RESULTS

We first consider the type 1 analysis mentioned in Section 1.

Theorem 3.1. Let $A = [A_{n-1}, a_n] \in \mathbb{R}^{m \times n}$ be of full column rank with QR factorization $A = QR$, and let $\Delta A = [\Delta A_{n-1}, \Delta a_n] \in \mathbb{R}^{m \times n}$ be a perturbation in $A$. Let $P_A$ and $P_A^\perp$ be the orthogonal projectors onto the range of $A$ and the orthogonal complement of the range of $A$, respectively. If
$$\kappa_2(A)\frac{\|\Delta A\|_2}{\|A\|_2} < 1, \quad (8)$$
then $A + \Delta A$ has a unique QR factorization
$$A + \Delta A = (Q + \Delta Q)(R + \Delta R), \quad (9)$$
where
$$\|P_A\Delta Q\|_F \le \frac{\sqrt{2}\,\kappa_2(A_{n-1})\frac{\|P_A\Delta A_{n-1}\|_F}{\|A_{n-1}\|_2}}{1 - \kappa_2(A_{n-1})\frac{\|\Delta A_{n-1}\|_2}{\|A_{n-1}\|_2}} + \left(\frac{2\kappa_2(A_{n-1})\frac{\|\Delta A_{n-1}\|_2}{\|A_{n-1}\|_2}}{1 - \kappa_2(A_{n-1})\frac{\|\Delta A_{n-1}\|_2}{\|A_{n-1}\|_2}} + \frac{\kappa_2(A)\frac{\|\Delta A\|_F}{\|A\|_2}}{1 - \kappa_2(A)\frac{\|\Delta A\|_2}{\|A\|_2}}\right)\frac{\kappa_2(A)\frac{\|\Delta A\|_F}{\|A\|_2}}{1 - \kappa_2(A)\frac{\|\Delta A\|_2}{\|A\|_2}}, \quad (10)$$
and when $m = n$,
$$\|\Delta Q\|_F \le \frac{\sqrt{2}\,\kappa_2(A_{n-1})\frac{\|\Delta A_{n-1}\|_F}{\|A_{n-1}\|_2}}{1 - \kappa_2(A_{n-1})\frac{\|\Delta A_{n-1}\|_2}{\|A_{n-1}\|_2}}. \quad (11)$$
If the condition (8) is strengthened to
$$(1+\sqrt{2})\,\kappa_2(A)\frac{\|\Delta A\|_F}{\|A\|_2} < 1, \quad (12)$$
then
$$\|P_A^\perp\Delta Q\|_F \le \frac{\kappa_2(A)\frac{\|\bar{Q}^T\Delta A\|_F}{\|A\|_2}}{1 - (1+\sqrt{2})\kappa_2(A)\frac{\|\Delta A\|_F}{\|A\|_2}}. \quad (13)$$

Proof. For $t \in [0,1]$, $Q^T(A + t\Delta A) = R(I + tR^{-1}Q^T\Delta A)$, where $\|tR^{-1}Q^T\Delta A\|_2 \le \|A^\dagger\|_2\|\Delta A\|_2 < 1$ by (8). Thus $Q^T(A + t\Delta A)$ is nonsingular, and then $A + t\Delta A$ has full column rank and has the unique QR factorization
$$A + t\Delta A = Q(t)R(t), \quad (14)$$
which, with $Q(0) = Q$, $R(0) = R$, $Q(1) = Q + \Delta Q$ and $R(1) = R + \Delta R$, gives (9). Notice that
$$\|Q^T\Delta Q\|_F = \left\|\int_0^1 Q^T\dot{Q}(t)\,dt\right\|_F = \left\|\int_0^1 \left(Q(t) - \int_0^t \dot{Q}(\tau)\,d\tau\right)^T \dot{Q}(t)\,dt\right\|_F \le \int_0^1 \|Q(t)^T\dot{Q}(t)\|_F\,dt + \frac{1}{2}\left(\int_0^1 \|\dot{Q}(t)\|_F\,dt\right)^2, \quad (15)$$

where $\dot{Q}(t)$ is the derivative of $Q(t)$ with respect to $t$. Thus, in order to bound $\|Q^T\Delta Q\|_F$, in the following we derive bounds on $\|Q(t)^T\dot{Q}(t)\|_F$ and $\|\dot{Q}(t)\|_F$.

From (14) we obtain $(A + t\Delta A)^T(A + t\Delta A) = R(t)^TR(t)$. Differentiating both sides of this equation with respect to $t$ leads to
$$\Delta A^T(A + t\Delta A) + (A + t\Delta A)^T\Delta A = \dot{R}(t)^TR(t) + R(t)^T\dot{R}(t).$$
Then, using (14) and multiplying both sides by $R(t)^{-T}$ from the left and by $R(t)^{-1}$ from the right, we obtain
$$R(t)^{-T}\dot{R}(t)^T + \dot{R}(t)R(t)^{-1} = R(t)^{-T}\Delta A^TQ(t) + Q(t)^T\Delta A\,R(t)^{-1}.$$
Therefore, with the up notation defined in (1),
$$\dot{R}(t)R(t)^{-1} = \mathrm{up}[R(t)^{-T}\Delta A^TQ(t) + Q(t)^T\Delta A\,R(t)^{-1}]. \quad (16)$$
Differentiating both sides of (14) with respect to $t$ and then multiplying both sides of the resulting equation by $R(t)^{-1}$ leads to
$$\dot{Q}(t) = \Delta A\,R(t)^{-1} - Q(t)\dot{R}(t)R(t)^{-1}. \quad (17)$$
Substituting (16) into (17) and multiplying both sides of the resulting equation by $Q(t)^T$ from the left, we obtain
$$Q(t)^T\dot{Q}(t) = Q(t)^T\Delta A\,R(t)^{-1} - \mathrm{up}[R(t)^{-T}\Delta A^TQ(t) + Q(t)^T\Delta A\,R(t)^{-1}] = \mathrm{low}(Q(t)^T\Delta A\,R(t)^{-1}) - (\mathrm{low}(Q(t)^T\Delta A\,R(t)^{-1}))^T. \quad (18)$$
Let $Q(t)$ and $R(t)$ have the partitioning $Q(t) = [Q_{n-1}(t), q_n(t)]$ and $R(t) = \begin{bmatrix} R_{n-1}(t) & r(t) \\ 0 & r_{nn}(t) \end{bmatrix}$. In particular, when $t = 0$, $Q = [Q_{n-1}, q_n]$ and $R = \begin{bmatrix} R_{n-1} & r \\ 0 & r_{nn} \end{bmatrix}$. Taking the F-norm in (18) and applying Lemma 2.1, we get
$$\|Q(t)^T\dot{Q}(t)\|_F = \|\mathrm{low}(Q(t)^T\Delta A\,R(t)^{-1}) - (\mathrm{low}(Q(t)^T\Delta A\,R(t)^{-1}))^T\|_F \le \sqrt{2}\,\|Q(t)^T\Delta A_{n-1}R_{n-1}(t)^{-1}\|_F \quad (19)$$
$$\le \sqrt{2}\,\|Q(t)^T\Delta A_{n-1}\|_F\,\|R_{n-1}(t)^{-1}\|_2. \quad (20)$$
Let $[Q(t), \bar{Q}(t)] \in \mathbb{R}^{m \times m}$ be orthogonal. Multiplying (17) by $\bar{Q}(t)^T$ from the left, we obtain
$$\bar{Q}(t)^T\dot{Q}(t) = \bar{Q}(t)^T\Delta A\,R(t)^{-1}. \quad (21)$$
Then it follows that
$$\|\bar{Q}(t)^T\dot{Q}(t)\|_F = \|\bar{Q}(t)^T\Delta A\,R(t)^{-1}\|_F \le \|\bar{Q}(t)^T\Delta A\|_F\,\|R(t)^{-1}\|_2. \quad (22)$$
Since $A + t\Delta A = Q(t)R(t)$, for $t \in [0,1]$,
$$\|R(t)^{-1}\|_2 = \frac{1}{\sigma_{\min}(R(t))} \le \frac{1}{\sigma_{\min}(A) - t\|\Delta A\|_2} \le \frac{\|A^\dagger\|_2}{1 - \|A^\dagger\|_2\|\Delta A\|_2}. \quad (23)$$
Similarly, since $A_{n-1} + t\Delta A_{n-1} = Q_{n-1}(t)R_{n-1}(t)$, we have
$$\|R_{n-1}(t)^{-1}\|_2 \le \frac{\|A_{n-1}^\dagger\|_2}{1 - \|A_{n-1}^\dagger\|_2\|\Delta A_{n-1}\|_2}, \quad (24)$$

where $\|A_{n-1}^\dagger\|_2\|\Delta A_{n-1}\|_2 \le \|A^\dagger\|_2\|\Delta A\|_2 < 1$. Then, it follows from (18), (22) and (23) that
$$\|\dot{Q}(t)\|_F^2 = \|Q(t)^T\dot{Q}(t)\|_F^2 + \|\bar{Q}(t)^T\dot{Q}(t)\|_F^2 \le 2\|Q(t)^T\Delta A\,R(t)^{-1}\|_F^2 + \|\bar{Q}(t)^T\Delta A\,R(t)^{-1}\|_F^2 \le 2\|\Delta A\,R(t)^{-1}\|_F^2,$$
so that
$$\|\dot{Q}(t)\|_F \le \sqrt{2}\,\|\Delta A\,R(t)^{-1}\|_F \quad (25)$$
$$\le \frac{\sqrt{2}\,\kappa_2(A)\frac{\|\Delta A\|_F}{\|A\|_2}}{1 - \kappa_2(A)\frac{\|\Delta A\|_2}{\|A\|_2}}. \quad (26)$$
From (20), it follows by using (24) and (26) that
$$\int_0^1 \|Q(t)^T\dot{Q}(t)\|_F\,dt \le \sqrt{2}\int_0^1 \left\|\left(Q + \int_0^t \dot{Q}(\tau)\,d\tau\right)^T\Delta A_{n-1}\right\|_F \|R_{n-1}(t)^{-1}\|_2\,dt$$
$$\le \sqrt{2}\left(\|Q^T\Delta A_{n-1}\|_F + \|\Delta A_{n-1}\|_2 \int_0^1 \|\dot{Q}(\tau)\|_F\,d\tau\right) \frac{\|A_{n-1}^\dagger\|_2}{1 - \|A_{n-1}^\dagger\|_2\|\Delta A_{n-1}\|_2}$$
$$\le \frac{\sqrt{2}\,\kappa_2(A_{n-1})\frac{\|Q^T\Delta A_{n-1}\|_F}{\|A_{n-1}\|_2}}{1 - \kappa_2(A_{n-1})\frac{\|\Delta A_{n-1}\|_2}{\|A_{n-1}\|_2}} + \frac{2\kappa_2(A_{n-1})\frac{\|\Delta A_{n-1}\|_2}{\|A_{n-1}\|_2}}{1 - \kappa_2(A_{n-1})\frac{\|\Delta A_{n-1}\|_2}{\|A_{n-1}\|_2}} \cdot \frac{\kappa_2(A)\frac{\|\Delta A\|_F}{\|A\|_2}}{1 - \kappa_2(A)\frac{\|\Delta A\|_2}{\|A\|_2}}. \quad (27)$$
With (26) and (27), from (15) we can now conclude that (10) holds, since $\|P_A\Delta Q\|_F = \|Q^T\Delta Q\|_F$ and $\|Q^T\Delta A_{n-1}\|_F = \|P_A\Delta A_{n-1}\|_F$.

When $m = n$, $Q(t) \in \mathbb{R}^{n \times n}$ is orthogonal. Using (24), from (20) we obtain
$$\|\dot{Q}(t)\|_F = \|Q(t)^T\dot{Q}(t)\|_F \le \sqrt{2}\,\|\Delta A_{n-1}\|_F\,\|R_{n-1}(t)^{-1}\|_2 \le \frac{\sqrt{2}\,\|\Delta A_{n-1}\|_F\,\|A_{n-1}^{-1}\|_2}{1 - \|A_{n-1}^{-1}\|_2\|\Delta A_{n-1}\|_2}.$$
Then we have
$$\|\Delta Q\|_F = \left\|\int_0^1 \dot{Q}(t)\,dt\right\|_F \le \int_0^1 \|\dot{Q}(t)\|_F\,dt,$$
leading to (11). Now we want to show (13). Since
$$\|\bar{Q}^T\Delta Q\|_F = \left\|\int_0^1 \bar{Q}^T\dot{Q}(t)\,dt\right\|_F \le \int_0^1 \|\bar{Q}^T\dot{Q}(t)\|_F\,dt, \quad (28)$$
we bound $\int_0^1 \|\bar{Q}^T\dot{Q}(t)\|_F\,dt$ in the following. Multiplying (17) by $\bar{Q}^T$ from the left, we obtain
$$\bar{Q}^T\dot{Q}(t) = \bar{Q}^T\Delta A\,R(t)^{-1} - \bar{Q}^TQ(t)\dot{R}(t)R(t)^{-1} = \bar{Q}^T\Delta A\,R(t)^{-1} - \left(\int_0^t \bar{Q}^T\dot{Q}(\tau)\,d\tau\right)\dot{R}(t)R(t)^{-1},$$
where we have used the fact that $\bar{Q}^TQ(0) = \bar{Q}^TQ = 0$. Then it follows that
$$\|\bar{Q}^T\dot{Q}(t)\|_F \le \|\bar{Q}^T\Delta A\,R(t)^{-1}\|_F + \|\dot{R}(t)R(t)^{-1}\|_F \int_0^t \|\bar{Q}^T\dot{Q}(\tau)\|_F\,d\tau. \quad (29)$$

To bound $\|\dot{R}(t)R(t)^{-1}\|_F$, we take the F-norm on both sides of (16) and apply (4), leading to
$$\|\dot{R}(t)R(t)^{-1}\|_F \le \sqrt{2}\,\|Q(t)^T\Delta A\,R(t)^{-1}\|_F. \quad (30)$$
Then, from (29), (23) and (30) it follows that
$$\int_0^1 \|\bar{Q}^T\dot{Q}(t)\|_F\,dt \le \int_0^1 \|\bar{Q}^T\Delta A\,R(t)^{-1}\|_F\,dt + \sqrt{2}\int_0^1 \|Q(t)^T\Delta A\,R(t)^{-1}\|_F \int_0^t \|\bar{Q}^T\dot{Q}(\tau)\|_F\,d\tau\,dt \quad (31)$$
$$\le \|\bar{Q}^T\Delta A\|_F \int_0^1 \|R(t)^{-1}\|_2\,dt + \sqrt{2}\,\|\Delta A\|_F \int_0^1 \|R(t)^{-1}\|_2\,dt \int_0^1 \|\bar{Q}^T\dot{Q}(\tau)\|_F\,d\tau$$
$$\le \frac{\kappa_2(A)\frac{\|\bar{Q}^T\Delta A\|_F}{\|A\|_2}}{1 - \kappa_2(A)\frac{\|\Delta A\|_2}{\|A\|_2}} + \frac{\sqrt{2}\,\kappa_2(A)\frac{\|\Delta A\|_F}{\|A\|_2}}{1 - \kappa_2(A)\frac{\|\Delta A\|_2}{\|A\|_2}} \int_0^1 \|\bar{Q}^T\dot{Q}(\tau)\|_F\,d\tau.$$
Therefore,
$$\int_0^1 \|\bar{Q}^T\dot{Q}(t)\|_F\,dt \le \frac{\kappa_2(A)\frac{\|\bar{Q}^T\Delta A\|_F}{\|A\|_2}}{1 - (1+\sqrt{2})\kappa_2(A)\frac{\|\Delta A\|_F}{\|A\|_2}},$$
where the denominator of the rightmost expression is positive due to the condition (12). Thus from (28) we can conclude that (13) holds.

In the following we make some remarks. Our computations were performed in MATLAB 7.1 on a MacBook running Mac OS X.

Remark 3.1. From (10) we obtain the following first-order bounds on $\|P_A\Delta Q\|_F$:
$$\|P_A\Delta Q\|_F \lesssim \sqrt{2}\,\kappa_2(A_{n-1})\frac{\|P_A\Delta A_{n-1}\|_F}{\|A_{n-1}\|_2} \le \sqrt{2}\,\kappa_2(A_{n-1})\frac{\|P_A\Delta A\|_F}{\|A_{n-1}\|_2}. \quad (32)$$
The second bound above was given in [4] and proved tight. The first one was not explicitly given in [4], although it can be seen to hold from the proof given there. In the proof of Theorem 3.1, we used the second inequality in (7), while in [4] the third inequality in (7) was used. The first bound is independent of the perturbation in the last column of $A$, while the second bound depends on it. Thus the second bound can be significantly larger than the first bound. Here is an example: for a small test matrix with the perturbation concentrated in the last column, we computed $\|P_A\Delta Q\|_F \approx 10^{-10}$; the first bound in (32) is also about $10^{-10}$, while the second is about $10^{-6}$.

Remark 3.2. When we use the first-order bounds on $\|P_A\Delta Q\|_F$ in (32) we have to be careful. Notice that $\|P_A\Delta A_{n-1}\|_F$ can be very small or even zero, but $\|P_A\Delta Q\|_F$ may not be very small;

see the following example: for a small test matrix with $P_A\Delta A_{n-1}$ essentially zero, the first-order bound in (32) is negligible while the actual $\|P_A\Delta Q\|_F$ is not, and the rigorous bound (10) does bound it. This example indicates that sometimes the first-order bound may significantly underestimate the true perturbation and the rigorous bound (10) should be used instead.

Remark 3.3. From (13) we obtain the following first-order bound on $\|P_A^\perp\Delta Q\|_F$ presented in [4]:
$$\|P_A^\perp\Delta Q\|_F \lesssim \kappa_2(A)\frac{\|P_A^\perp\Delta A\|_F}{\|A\|_2}.$$

Remark 3.4. From (26) we obtain the following rigorous bound on $\|\Delta Q\|_F$:
$$\|\Delta Q\|_F \le \int_0^1 \|\dot{Q}(t)\|_F\,dt \le \frac{\sqrt{2}\,\kappa_2(A)\frac{\|\Delta A\|_F}{\|A\|_2}}{1 - \kappa_2(A)\frac{\|\Delta A\|_2}{\|A\|_2}}. \quad (33)$$
This is the simplest and most often cited rigorous bound among the few rigorous bounds given in [12]. We now compare the bound in (11) with the bounds in (10) and (33) when $A$ is square. Notice that in (10), when $A$ is square, $\|P_A\Delta Q\|_F = \|\Delta Q\|_F$ and $\|P_A\Delta A_{n-1}\|_F = \|\Delta A_{n-1}\|_F$. Thus the bound in (10) has two more terms than the bound in (11), and the latter is sharper than the former. If we compare (11) with (33), we can also observe that the bound in (11) is sharper. Here we give an example to illustrate the comparisons: for a $2 \times 2$ matrix $A$ with $\kappa_2(A) = 10^4$ and $\kappa_2(A_{n-1}) = 1$ (note that $A_{n-1}$ is a single column), and a perturbation with $\|\Delta A_{n-1}\|_F/\|A_{n-1}\|_2 = \|\Delta A_{n-1}\|_2/\|A_{n-1}\|_2$ tiny compared with $\|\Delta A\|_F/\|A\|_2 \approx 10^{-6}$, the computed $\|\Delta Q\|_F$ and bound (11) essentially coincide, while bounds (10) and (33) are orders of magnitude larger.

Now we consider the type 3 analysis mentioned in Section 1.

Theorem 3.2. Let $A \in \mathbb{R}^{m \times n}$ be of full column rank with QR factorization $A = QR$, where the upper triangular matrix $R \in \mathbb{R}^{n \times n}$ has the partitioning $R = \begin{bmatrix} R_{n-1} & r \\ 0 & r_{nn} \end{bmatrix}$. Let $\Delta A \in \mathbb{R}^{m \times n}$ be a perturbation matrix in $A$ such that
$$|\Delta A| \le \epsilon\,C|A|, \qquad C \in \mathbb{R}^{m \times m},\ C \ge 0,\ c_{ij} \le 1,\ \epsilon \text{ a small constant}. \quad (34)$$
Let $P_A$ and $P_A^\perp$ be the orthogonal projectors onto the range of $A$ and its orthogonal complement, respectively. Define
$$\eta = \|\,C|Q|\,\|_F, \qquad \chi_2(R, D) = \|\,|R|D^{-1}\|_2\,\|DR^{-1}\|_2, \qquad \chi_2(R) = \inf_{D \in \mathcal{D}_n} \chi_2(R, D),$$

where $\mathcal{D}_n$ is the set of all $n \times n$ diagonal matrices with positive diagonal entries. If
$$\eta\chi_2(R)\epsilon < 1, \quad (35)$$
then $A + \Delta A$ has a unique QR factorization
$$A + \Delta A = (Q + \Delta Q)(R + \Delta R), \quad (36)$$
where
$$\|P_A\Delta Q\|_F \le \frac{\sqrt{2}\,\eta\chi_2(R_{n-1})\epsilon}{1 - \eta\chi_2(R_{n-1})\epsilon} + \left(\frac{\eta\chi_2(R)\epsilon}{1 - \eta\chi_2(R)\epsilon}\right)^2, \quad (37)$$
and if $m = n$,
$$\|\Delta Q\|_F \le \frac{\sqrt{2}\,\eta\chi_2(R_{n-1})\epsilon}{1 - \eta\chi_2(R_{n-1})\epsilon}. \quad (38)$$
If the condition (35) is strengthened to
$$(1+\sqrt{2})\,\eta\chi_2(R)\epsilon < 1, \quad (39)$$
then
$$\|P_A^\perp\Delta Q\|_F \le \frac{\eta\chi_2(R)\epsilon}{1 - (1+\sqrt{2})\eta\chi_2(R)\epsilon}. \quad (40)$$

Proof. We will use the proof of Theorem 3.1. Since (35) holds, there exists $D \in \mathcal{D}_n$ such that $\eta\chi_2(R, D)\epsilon < 1$. Let $D = \mathrm{diag}(D_{n-1}, d_n) \in \mathcal{D}_n$ with $D_{n-1} \in \mathcal{D}_{n-1}$. Then it is easy to verify that $\eta\chi_2(R_{n-1}, D_{n-1})\epsilon < 1$. For $t \in [0,1]$, $Q^T(A + t\Delta A) = (I + tQ^T\Delta A\,R^{-1})R$. But
$$\|Q^T\Delta A\,R^{-1}\|_F \le \|\Delta A\,D^{-1}\|_F\,\|DR^{-1}\|_2 \le \|\,C|Q|\,\|_F\,\|\,|R|D^{-1}\|_2\,\|DR^{-1}\|_2\,\epsilon = \eta\chi_2(R, D)\epsilon < 1.$$
Thus $Q^T(A + t\Delta A)$ is nonsingular, and then $A + t\Delta A$ has full column rank and has the unique QR factorization
$$A + t\Delta A = Q(t)R(t), \quad (41)$$
which, with $Q(0) = Q$, $R(0) = R$, $Q(1) = Q + \Delta Q$ and $R(1) = R + \Delta R$, gives (36).

Now we derive two inequalities which will be used a few times later. Since $A + t\Delta A = Q(t)R(t)$, we have $QRD^{-1} + t\Delta A\,D^{-1} = Q(t)R(t)D^{-1}$. Therefore, for $t \in [0,1]$,
$$\|DR(t)^{-1}\|_2 = \frac{1}{\sigma_{\min}(R(t)D^{-1})} \le \frac{1}{\sigma_{\min}(RD^{-1}) - t\|\Delta A\,D^{-1}\|_2} \le \frac{\|DR^{-1}\|_2}{1 - \eta\chi_2(R, D)\epsilon}. \quad (42)$$
Then,
$$\|\Delta A\,R(t)^{-1}\|_F \le \|\Delta A\,D^{-1}\|_F\,\|DR(t)^{-1}\|_2 \le \frac{\eta\chi_2(R, D)\epsilon}{1 - \eta\chi_2(R, D)\epsilon}.$$
Since $D \in \mathcal{D}_n$ is arbitrary, we must have
$$\|\Delta A\,R(t)^{-1}\|_F \le \frac{\eta\chi_2(R)\epsilon}{1 - \eta\chi_2(R)\epsilon}. \quad (43)$$

From (34) it follows that $|\Delta A_{n-1}| \le \epsilon\,C|Q_{n-1}||R_{n-1}|$. Notice that $\|\,C|Q_{n-1}|\,\|_F \le \|\,C|Q|\,\|_F = \eta$. By an argument analogous to that for (43), we can obtain
$$\|\Delta A_{n-1}R_{n-1}(t)^{-1}\|_F \le \frac{\eta\chi_2(R_{n-1})\epsilon}{1 - \eta\chi_2(R_{n-1})\epsilon}. \quad (44)$$
From (25) and (43) we obtain
$$\|\dot{Q}(t)\|_F \le \sqrt{2}\,\|\Delta A\,R(t)^{-1}\|_F \le \frac{\sqrt{2}\,\eta\chi_2(R)\epsilon}{1 - \eta\chi_2(R)\epsilon}. \quad (45)$$
From (19) and (44), we obtain
$$\|Q(t)^T\dot{Q}(t)\|_F \le \sqrt{2}\,\|\Delta A_{n-1}R_{n-1}(t)^{-1}\|_F \le \frac{\sqrt{2}\,\eta\chi_2(R_{n-1})\epsilon}{1 - \eta\chi_2(R_{n-1})\epsilon}. \quad (46)$$
With (45) and (46), from (15) we can conclude that (37) holds. When $m = n$, (46) becomes a bound on $\|\dot{Q}(t)\|_F$ itself, i.e., $\|\dot{Q}(t)\|_F \le \sqrt{2}\,\eta\chi_2(R_{n-1})\epsilon/(1 - \eta\chi_2(R_{n-1})\epsilon)$. Then from $\|\Delta Q\|_F \le \int_0^1 \|\dot{Q}(t)\|_F\,dt$, we can conclude that (38) holds.

Now we prove (40). From (31) we obtain
$$\|\bar{Q}^T\dot{Q}(t)\|_F \le \|\Delta A\,R(t)^{-1}\|_F + \sqrt{2}\,\|\Delta A\,R(t)^{-1}\|_F \int_0^t \|\bar{Q}^T\dot{Q}(\tau)\|_F\,d\tau \le \frac{\eta\chi_2(R)\epsilon}{1 - \eta\chi_2(R)\epsilon} + \frac{\sqrt{2}\,\eta\chi_2(R)\epsilon}{1 - \eta\chi_2(R)\epsilon} \int_0^t \|\bar{Q}^T\dot{Q}(\tau)\|_F\,d\tau,$$
where in proving the second inequality we used (43). Therefore,
$$\|\bar{Q}^T\Delta Q\|_F \le \int_0^1 \|\bar{Q}^T\dot{Q}(t)\|_F\,dt \le \frac{\eta\chi_2(R)\epsilon}{1 - \eta\chi_2(R)\epsilon} \Big/ \left(1 - \frac{\sqrt{2}\,\eta\chi_2(R)\epsilon}{1 - \eta\chi_2(R)\epsilon}\right) = \frac{\eta\chi_2(R)\epsilon}{1 - (1+\sqrt{2})\eta\chi_2(R)\epsilon},$$
where the denominator of the rightmost expression is positive due to the condition (39). Then we can conclude that (40) holds.

In the following we make some remarks.

Remark 3.5. To estimate $\chi_2(R)$ in (37), we can use the following result (see van der Sluis [13]):
$$\chi_2(R, D_c) = \|\,|R|D_c^{-1}\|_2\,\|D_cR^{-1}\|_2 \le \sqrt{n}\,\chi_2(R), \qquad D_c = \mathrm{diag}(\|R(:,j)\|_2) \in \mathcal{D}_n.$$
Given $R$, we use a norm estimator (see, e.g., [7, Chap. 14]) to estimate $\chi_2(R, D_c)$, which can usually be done in $O(n^2)$ flops, and then use the estimated value as an approximation to $\chi_2(R)$. Similarly we can use this approach to estimate $\chi_2(R_{n-1})$.
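For small n, χ₂(R, D_c) can simply be evaluated exactly instead of estimated. The following sketch is our illustrative addition (the function `chi2` and the test matrix are assumptions, and we form the spectral norms directly rather than using the O(n²) norm estimator suggested in the remark):

```python
import numpy as np

def chi2(R, d):
    # chi_2(R, D) = || |R| D^{-1} ||_2 * || D R^{-1} ||_2 for D = diag(d), d > 0
    Rinv = np.linalg.inv(R)
    return np.linalg.norm(np.abs(R) / d, 2) * np.linalg.norm(d[:, None] * Rinv, 2)

rng = np.random.default_rng(2)
n = 6
R = np.triu(rng.standard_normal((n, n))) + n * np.eye(n)
R = R @ np.diag(10.0 ** np.arange(n))    # give R badly scaled columns
d_c = np.linalg.norm(R, axis=0)          # van der Sluis scaling D_c: column 2-norms
print(chi2(R, np.ones(n)), chi2(R, d_c)) # column equilibration can reduce chi_2 drastically
```

Since χ₂(R, D) is invariant under positive rescaling of D and is never below 1, the computed values give a quick sanity check on an implementation.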

Remark 3.6. From (37) and (40), we obtain the following first-order perturbation bounds:
$$\|P_A\Delta Q\|_F \lesssim \sqrt{2}\,\eta\chi_2(R_{n-1})\epsilon, \qquad \|P_A^\perp\Delta Q\|_F \lesssim \eta\chi_2(R)\epsilon, \quad (47)$$
while the first-order bounds given in [3] are as follows:
$$\|P_A\Delta Q\|_F \lesssim \sqrt{2}\,\eta_1\,\mathrm{cond}_2(R_{n-1})\epsilon, \qquad \|P_A^\perp\Delta Q\|_F \lesssim \eta_2\,\mathrm{cond}_2(R)\epsilon, \quad (48)$$
where $\eta_1 = \|\,|Q^T|C|Q|\,\|_F$ and $\eta_2 = \|\,|\bar{Q}^T|C|Q|\,\|_F$. Now we look at the relation between $\mathrm{cond}_2(R)$ and $\chi_2(R)$. On one hand, for any $D \in \mathcal{D}_n$, we have
$$\mathrm{cond}_2(R) = \|\,|R||R^{-1}|\,\|_2 \le \|\,|R|D^{-1}\|_2\,\|D|R^{-1}|\,\|_2 \le \sqrt{n}\,\|\,|R|D^{-1}\|_2\,\|DR^{-1}\|_2 = \sqrt{n}\,\chi_2(R, D).$$
Thus $\mathrm{cond}_2(R) \le \sqrt{n}\,\chi_2(R)$. On the other hand,
$$\|\,|R|D^{-1}\|_2\,\|DR^{-1}\|_2 \le \|\,|R|D^{-1}\|_2\,\|D|R^{-1}|\,\|_2,$$
so
$$\chi_2(R) \le \inf_{D \in \mathcal{D}_n} \|\,|R|D^{-1}\|_2\,\|D|R^{-1}|\,\|_2 \le \sqrt{n}\,\|\,|R||R^{-1}|\,\|_2 = \sqrt{n}\,\mathrm{cond}_2(R),$$
where the last step is due to a general result on diagonal scalings proved by van der Sluis [13]. Notice that $\eta$, $\eta_1$ and $\eta_2$ can be bounded by functions of $m$ and $n$. Therefore, there is no essential difference between the first-order bounds given in (47) and the first-order bounds given in (48).

Remark 3.7. From (45) we obtain the following rigorous perturbation bound on $\|\Delta Q\|_F$:
$$\|\Delta Q\|_F \le \int_0^1 \|\dot{Q}(t)\|_F\,dt \le \frac{\sqrt{2}\,\eta\chi_2(R)\epsilon}{1 - \eta\chi_2(R)\epsilon}. \quad (49)$$
This bound is similar to the following rigorous bound given in [3]:
$$\|\Delta Q\|_F \le (1+\sqrt{2})\,\eta_2\,\mathrm{cond}_2(R)\epsilon, \quad (50)$$
under the condition that $\|\Delta A\,R^{-1}\|_F < \sqrt{3}/\sqrt{2} - 1$. Here the condition may not be practical, as often $\Delta A$ is unknown. Since $|\Delta A| \le \epsilon\,C|A|$,
$$\|\Delta A\,R^{-1}\|_F \le \epsilon\,\|\,C|A||R^{-1}|\,\|_F \le \eta\,\mathrm{cond}_2(R)\epsilon.$$
So a more practical condition would be $\eta\,\mathrm{cond}_2(R)\epsilon < \sqrt{3}/\sqrt{2} - 1$. This condition is more restrictive than the condition $\eta\chi_2(R)\epsilon < 1$ required by (49), and it can easily be verified that the bound in (50) is slightly less tight than the bound in (49), if we ignore the difference between $\mathrm{cond}_2(R)$ and $\chi_2(R)$. When $A$ is square, both bounds in (49) and (50) can be significantly larger than the bound in (38). Here is an example: for an ill-conditioned $2 \times 2$ upper triangular $R$ we computed $\mathrm{cond}_2(R) \approx 10^5$ and $\chi_2(R) \approx \chi_2(R, D_c) \approx 10^5$, while $\chi_2(R_{n-1}) = 1$ (here $R_{n-1}$ is $1 \times 1$), where $D_c$ is defined in Remark 3.5.

Remark 3.8. Suppose that $Q_cR_c$ is the computed QR factorization of $A$ obtained via Householder transformations.
Higham [7, Theorem 18.4] showed that
$$A + \Delta A = \widetilde{Q}R_c, \qquad |\Delta A| \le \epsilon\,C|A|, \qquad \epsilon = \tilde{\gamma}_{m,n}u, \quad (51)$$
where $\widetilde{Q}^T\widetilde{Q} = I$, $\tilde{\gamma}_{m,n}$ is a moderate constant depending on $m$ and $n$, $u$ is the unit roundoff, and $C \ge 0$ with $\|C\|_F = 1$. Also the computed $Q_c$ satisfies $Q_c = \widetilde{Q} + \Delta Q_c$, where

$|\Delta Q_c| \le \tilde{\gamma}_{m,n}u\,\bar{C}|\widetilde{Q}|$ with $\bar{C} \ge 0$, $\|\bar{C}\|_F = 1$. Let $\Delta Q = \widetilde{Q} - Q$. Since $Q_c - Q = \Delta Q_c + \Delta Q$, we have
$$\|P_A(Q_c - Q)\|_F \le n^{1/2}\tilde{\gamma}_{m,n}u + \|P_A\Delta Q\|_F. \quad (52)$$
In [3] the following example was used to illustrate that the first-order bound on $\|P_A\Delta Q\|_F$ in (48) can severely underestimate the true value of $\|P_A\Delta Q\|_F$: for a small test matrix $A$ we computed $\chi_2(R_{n-1})\epsilon = \mathrm{cond}_2(R_{n-1})\epsilon = O(10^{-16})$, so that the first-order bound in (48) is $O(10^{-16})$, while the computed $\|P_A(Q_c - Q)\|_F$ is many orders of magnitude larger; the rigorous bound (37) is $O(10^{-11})$ and does cover it. We see that the rigorous bound on $\|P_A\Delta Q\|_F$ given in (37) should be used when we bound $\|P_A(Q_c - Q)\|_F$ in (52).

4. CONCLUDING REMARKS

We have derived rigorous perturbation bounds on the projections of the perturbation of the Q-factor of the QR factorization of a full-column-rank matrix $A$ onto the range of $A$ and onto its orthogonal complement, for both normwise and componentwise perturbations in $A$. These bounds, unlike the first-order perturbation bounds in the literature, can be used safely. From these bounds, first-order perturbation bounds identical or equivalent to those in the literature can easily be derived. The results fill gaps left in the literature. When $A$ is square and nonsingular, tighter and simpler rigorous perturbation bounds on the perturbation of the Q-factor were presented.

There are still some questions which need to be investigated further. In Theorems 3.1 and 3.2, the conditions (12) and (39) for the bounds on $\|P_A^\perp\Delta Q\|_F$ to hold are slightly stronger than the conditions (8) and (35) for the bounds on $\|P_A\Delta Q\|_F$ to hold, respectively. It would be interesting to investigate whether some new bounds on $\|P_A^\perp\Delta Q\|_F$ can be derived under the conditions (8) and (35); this is of theoretical rather than practical interest. For the special case that $A$ is square and nonsingular, our tighter and simpler rigorous bounds on $\|\Delta Q\|_F$ in (11) and (38) were derived separately. It would be more elegant if they could be derived from the general bounds.

ACKNOWLEDGEMENT

The author is grateful to Chris Paige for his valuable suggestions.

REFERENCES

1. R.
Bhatia, Matrix factorizations and their perturbations, Linear Algebra Appl., 197–198 (1994).

2. R. Bhatia and K. K. Mukherjea, Variation of the unitary part of a matrix, SIAM J. Matrix Anal. Appl., 15 (1994).
3. X.-W. Chang and C. C. Paige, Componentwise perturbation analyses for the QR factorization, Numerische Mathematik, 88 (2001).
4. X.-W. Chang, C. C. Paige, and G. W. Stewart, Perturbation analyses for the QR factorization, SIAM J. Matrix Anal. Appl., 18 (1997).
5. X.-W. Chang and D. Stehlé, Rigorous perturbation bounds of some matrix factorizations, SIAM J. Matrix Anal. Appl., 31 (2010).
6. X.-W. Chang, D. Stehlé, and G. Villard, Perturbation analysis of the QR factor R in the context of LLL lattice basis reduction, to appear in Mathematics of Computation.
7. N. J. Higham, Accuracy and Stability of Numerical Algorithms, 1st ed., Society for Industrial and Applied Mathematics, Philadelphia, PA, 1996.
8. G. W. Stewart, Perturbation bounds for the QR factorization of a matrix, SIAM J. Numer. Anal., 14 (1977).
9. G. W. Stewart, On the perturbation of LU, Cholesky, and QR factorizations, SIAM J. Matrix Anal. Appl., 14 (1993).
10. J.-G. Sun, Perturbation bounds for the Cholesky and QR factorization, BIT, 31 (1991).
11. J.-G. Sun, Componentwise perturbation bounds for some matrix decompositions, BIT, 32 (1992).
12. J.-G. Sun, On perturbation bounds for the QR factorization, Linear Algebra Appl., 215 (1995).
13. A. van der Sluis, Condition numbers and equilibration of matrices, Numerische Mathematik, 14 (1969).
14. H. Zha, A componentwise perturbation analysis of the QR decomposition, SIAM J. Matrix Anal. Appl., 14 (1993).


More information

Matrix decompositions

Matrix decompositions Matrix decompositions How can we solve Ax = b? 1 Linear algebra Typical linear system of equations : x 1 x +x = x 1 +x +9x = 0 x 1 +x x = The variables x 1, x, and x only appear as linear terms (no powers

More information

14.2 QR Factorization with Column Pivoting

14.2 QR Factorization with Column Pivoting page 531 Chapter 14 Special Topics Background Material Needed Vector and Matrix Norms (Section 25) Rounding Errors in Basic Floating Point Operations (Section 33 37) Forward Elimination and Back Substitution

More information

LAPACK-Style Codes for Pivoted Cholesky and QR Updating

LAPACK-Style Codes for Pivoted Cholesky and QR Updating LAPACK-Style Codes for Pivoted Cholesky and QR Updating Sven Hammarling 1, Nicholas J. Higham 2, and Craig Lucas 3 1 NAG Ltd.,Wilkinson House, Jordan Hill Road, Oxford, OX2 8DR, England, sven@nag.co.uk,

More information

A simplified pivoting strategy for symmetric tridiagonal matrices

A simplified pivoting strategy for symmetric tridiagonal matrices NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 2000; 00:1 6 [Version: 2002/09/18 v1.02] A simplified pivoting strategy for symmetric tridiagonal matrices James R. Bunch 1 and Roummel

More information

Linear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math 365 Week #4

Linear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math 365 Week #4 Linear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math Week # 1 Saturday, February 1, 1 Linear algebra Typical linear system of equations : x 1 x +x = x 1 +x +9x = 0 x 1 +x x

More information

On condition numbers for the canonical generalized polar decompostion of real matrices

On condition numbers for the canonical generalized polar decompostion of real matrices Electronic Journal of Linear Algebra Volume 26 Volume 26 (2013) Article 57 2013 On condition numbers for the canonical generalized polar decompostion of real matrices Ze-Jia Xie xiezejia2012@gmail.com

More information

LAPACK-Style Codes for Pivoted Cholesky and QR Updating. Hammarling, Sven and Higham, Nicholas J. and Lucas, Craig. MIMS EPrint: 2006.

LAPACK-Style Codes for Pivoted Cholesky and QR Updating. Hammarling, Sven and Higham, Nicholas J. and Lucas, Craig. MIMS EPrint: 2006. LAPACK-Style Codes for Pivoted Cholesky and QR Updating Hammarling, Sven and Higham, Nicholas J. and Lucas, Craig 2007 MIMS EPrint: 2006.385 Manchester Institute for Mathematical Sciences School of Mathematics

More information

WHEN MODIFIED GRAM-SCHMIDT GENERATES A WELL-CONDITIONED SET OF VECTORS

WHEN MODIFIED GRAM-SCHMIDT GENERATES A WELL-CONDITIONED SET OF VECTORS IMA Journal of Numerical Analysis (2002) 22, 1-8 WHEN MODIFIED GRAM-SCHMIDT GENERATES A WELL-CONDITIONED SET OF VECTORS L. Giraud and J. Langou Cerfacs, 42 Avenue Gaspard Coriolis, 31057 Toulouse Cedex

More information

Some Notes on Least Squares, QR-factorization, SVD and Fitting

Some Notes on Least Squares, QR-factorization, SVD and Fitting Department of Engineering Sciences and Mathematics January 3, 013 Ove Edlund C000M - Numerical Analysis Some Notes on Least Squares, QR-factorization, SVD and Fitting Contents 1 Introduction 1 The Least

More information

Lecture 6, Sci. Comp. for DPhil Students

Lecture 6, Sci. Comp. for DPhil Students Lecture 6, Sci. Comp. for DPhil Students Nick Trefethen, Thursday 1.11.18 Today II.3 QR factorization II.4 Computation of the QR factorization II.5 Linear least-squares Handouts Quiz 4 Householder s 4-page

More information

OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY

OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY published in IMA Journal of Numerical Analysis (IMAJNA), Vol. 23, 1-9, 23. OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY SIEGFRIED M. RUMP Abstract. In this note we give lower

More information

Math 407: Linear Optimization

Math 407: Linear Optimization Math 407: Linear Optimization Lecture 16: The Linear Least Squares Problem II Math Dept, University of Washington February 28, 2018 Lecture 16: The Linear Least Squares Problem II (Math Dept, University

More information

ETNA Kent State University

ETNA Kent State University C 8 Electronic Transactions on Numerical Analysis. Volume 17, pp. 76-2, 2004. Copyright 2004,. ISSN 1068-613. etnamcs.kent.edu STRONG RANK REVEALING CHOLESKY FACTORIZATION M. GU AND L. MIRANIAN Abstract.

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 9

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 9 STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 9 1. qr and complete orthogonal factorization poor man s svd can solve many problems on the svd list using either of these factorizations but they

More information

Important Matrix Factorizations

Important Matrix Factorizations LU Factorization Choleski Factorization The QR Factorization LU Factorization: Gaussian Elimination Matrices Gaussian elimination transforms vectors of the form a α, b where a R k, 0 α R, and b R n k 1,

More information

ENGG5781 Matrix Analysis and Computations Lecture 8: QR Decomposition

ENGG5781 Matrix Analysis and Computations Lecture 8: QR Decomposition ENGG5781 Matrix Analysis and Computations Lecture 8: QR Decomposition Wing-Kin (Ken) Ma 2017 2018 Term 2 Department of Electronic Engineering The Chinese University of Hong Kong Lecture 8: QR Decomposition

More information

Tikhonov Regularization of Large Symmetric Problems

Tikhonov Regularization of Large Symmetric Problems NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 2000; 00:1 11 [Version: 2000/03/22 v1.0] Tihonov Regularization of Large Symmetric Problems D. Calvetti 1, L. Reichel 2 and A. Shuibi

More information

Linear Least squares

Linear Least squares Linear Least squares Method of least squares Measurement errors are inevitable in observational and experimental sciences Errors can be smoothed out by averaging over more measurements than necessary to

More information

Matrix decompositions

Matrix decompositions Matrix decompositions Zdeněk Dvořák May 19, 2015 Lemma 1 (Schur decomposition). If A is a symmetric real matrix, then there exists an orthogonal matrix Q and a diagonal matrix D such that A = QDQ T. The

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

LU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b

LU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b AM 205: lecture 7 Last time: LU factorization Today s lecture: Cholesky factorization, timing, QR factorization Reminder: assignment 1 due at 5 PM on Friday September 22 LU Factorization LU factorization

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra Decompositions, numerical aspects Gerard Sleijpen and Martin van Gijzen September 27, 2017 1 Delft University of Technology Program Lecture 2 LU-decomposition Basic algorithm Cost

More information

Program Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects

Program Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects Numerical Linear Algebra Decompositions, numerical aspects Program Lecture 2 LU-decomposition Basic algorithm Cost Stability Pivoting Cholesky decomposition Sparse matrices and reorderings Gerard Sleijpen

More information

σ 11 σ 22 σ pp 0 with p = min(n, m) The σ ii s are the singular values. Notation change σ ii A 1 σ 2

σ 11 σ 22 σ pp 0 with p = min(n, m) The σ ii s are the singular values. Notation change σ ii A 1 σ 2 HE SINGULAR VALUE DECOMPOSIION he SVD existence - properties. Pseudo-inverses and the SVD Use of SVD for least-squares problems Applications of the SVD he Singular Value Decomposition (SVD) heorem For

More information

Partial LLL Reduction

Partial LLL Reduction Partial Reduction Xiaohu Xie School of Computer Science McGill University Montreal, Quebec, Canada H3A A7 Email: xiaohu.xie@mail.mcgill.ca Xiao-Wen Chang School of Computer Science McGill University Montreal,

More information

Notes on Eigenvalues, Singular Values and QR

Notes on Eigenvalues, Singular Values and QR Notes on Eigenvalues, Singular Values and QR Michael Overton, Numerical Computing, Spring 2017 March 30, 2017 1 Eigenvalues Everyone who has studied linear algebra knows the definition: given a square

More information

Stability of the Gram-Schmidt process

Stability of the Gram-Schmidt process Stability of the Gram-Schmidt process Orthogonal projection We learned in multivariable calculus (or physics or elementary linear algebra) that if q is a unit vector and v is any vector then the orthogonal

More information

ELA THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE. 1. Introduction. Let C m n be the set of complex m n matrices and C m n

ELA THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE. 1. Introduction. Let C m n be the set of complex m n matrices and C m n Electronic Journal of Linear Algebra ISSN 08-380 Volume 22, pp. 52-538, May 20 THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE WEI-WEI XU, LI-XIA CAI, AND WEN LI Abstract. In this

More information

MATH 22A: LINEAR ALGEBRA Chapter 4

MATH 22A: LINEAR ALGEBRA Chapter 4 MATH 22A: LINEAR ALGEBRA Chapter 4 Jesús De Loera, UC Davis November 30, 2012 Orthogonality and Least Squares Approximation QUESTION: Suppose Ax = b has no solution!! Then what to do? Can we find an Approximate

More information

The QR Factorization

The QR Factorization The QR Factorization How to Make Matrices Nicer Radu Trîmbiţaş Babeş-Bolyai University March 11, 2009 Radu Trîmbiţaş ( Babeş-Bolyai University) The QR Factorization March 11, 2009 1 / 25 Projectors A projector

More information

Orthonormal Transformations

Orthonormal Transformations Orthonormal Transformations Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo October 25, 2010 Applications of transformation Q : R m R m, with Q T Q = I 1.

More information

Scientific Computing

Scientific Computing Scientific Computing Direct solution methods Martin van Gijzen Delft University of Technology October 3, 2018 1 Program October 3 Matrix norms LU decomposition Basic algorithm Cost Stability Pivoting Pivoting

More information

5.6. PSEUDOINVERSES 101. A H w.

5.6. PSEUDOINVERSES 101. A H w. 5.6. PSEUDOINVERSES 0 Corollary 5.6.4. If A is a matrix such that A H A is invertible, then the least-squares solution to Av = w is v = A H A ) A H w. The matrix A H A ) A H is the left inverse of A and

More information

Perturbation results for nearly uncoupled Markov. chains with applications to iterative methods. Jesse L. Barlow. December 9, 1992.

Perturbation results for nearly uncoupled Markov. chains with applications to iterative methods. Jesse L. Barlow. December 9, 1992. Perturbation results for nearly uncoupled Markov chains with applications to iterative methods Jesse L. Barlow December 9, 992 Abstract The standard perturbation theory for linear equations states that

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

RELATIVE PERTURBATION THEORY FOR DIAGONALLY DOMINANT MATRICES

RELATIVE PERTURBATION THEORY FOR DIAGONALLY DOMINANT MATRICES RELATIVE PERTURBATION THEORY FOR DIAGONALLY DOMINANT MATRICES MEGAN DAILEY, FROILÁN M. DOPICO, AND QIANG YE Abstract. In this paper, strong relative perturbation bounds are developed for a number of linear

More information

be a Householder matrix. Then prove the followings H = I 2 uut Hu = (I 2 uu u T u )u = u 2 uut u

be a Householder matrix. Then prove the followings H = I 2 uut Hu = (I 2 uu u T u )u = u 2 uut u MATH 434/534 Theoretical Assignment 7 Solution Chapter 7 (71) Let H = I 2uuT Hu = u (ii) Hv = v if = 0 be a Householder matrix Then prove the followings H = I 2 uut Hu = (I 2 uu )u = u 2 uut u = u 2u =

More information

9. Numerical linear algebra background

9. Numerical linear algebra background Convex Optimization Boyd & Vandenberghe 9. Numerical linear algebra background matrix structure and algorithm complexity solving linear equations with factored matrices LU, Cholesky, LDL T factorization

More information

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,

More information

Componentwise perturbation analysis for matrix inversion or the solution of linear systems leads to the Bauer-Skeel condition number ([2], [13])

Componentwise perturbation analysis for matrix inversion or the solution of linear systems leads to the Bauer-Skeel condition number ([2], [13]) SIAM Review 4():02 2, 999 ILL-CONDITIONED MATRICES ARE COMPONENTWISE NEAR TO SINGULARITY SIEGFRIED M. RUMP Abstract. For a square matrix normed to, the normwise distance to singularity is well known to

More information

arxiv: v1 [math.na] 1 Sep 2018

arxiv: v1 [math.na] 1 Sep 2018 On the perturbation of an L -orthogonal projection Xuefeng Xu arxiv:18090000v1 [mathna] 1 Sep 018 September 5 018 Abstract The L -orthogonal projection is an important mathematical tool in scientific computing

More information

Block Lanczos Tridiagonalization of Complex Symmetric Matrices

Block Lanczos Tridiagonalization of Complex Symmetric Matrices Block Lanczos Tridiagonalization of Complex Symmetric Matrices Sanzheng Qiao, Guohong Liu, Wei Xu Department of Computing and Software, McMaster University, Hamilton, Ontario L8S 4L7 ABSTRACT The classic

More information

Lecture 3: QR-Factorization

Lecture 3: QR-Factorization Lecture 3: QR-Factorization This lecture introduces the Gram Schmidt orthonormalization process and the associated QR-factorization of matrices It also outlines some applications of this factorization

More information

On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination

On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination J.M. Peña 1 Introduction Gaussian elimination (GE) with a given pivoting strategy, for nonsingular matrices

More information

Moore Penrose inverses and commuting elements of C -algebras

Moore Penrose inverses and commuting elements of C -algebras Moore Penrose inverses and commuting elements of C -algebras Julio Benítez Abstract Let a be an element of a C -algebra A satisfying aa = a a, where a is the Moore Penrose inverse of a and let b A. We

More information

The 'linear algebra way' of talking about "angle" and "similarity" between two vectors is called "inner product". We'll define this next.

The 'linear algebra way' of talking about angle and similarity between two vectors is called inner product. We'll define this next. Orthogonality and QR The 'linear algebra way' of talking about "angle" and "similarity" between two vectors is called "inner product". We'll define this next. So, what is an inner product? An inner product

More information

Numerical Methods I Non-Square and Sparse Linear Systems

Numerical Methods I Non-Square and Sparse Linear Systems Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant

More information

Linear Algebra, part 3 QR and SVD

Linear Algebra, part 3 QR and SVD Linear Algebra, part 3 QR and SVD Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2012 Going back to least squares (Section 1.4 from Strang, now also see section 5.2). We

More information

arxiv: v2 [math.na] 27 Dec 2016

arxiv: v2 [math.na] 27 Dec 2016 An algorithm for constructing Equiangular vectors Azim rivaz a,, Danial Sadeghi a a Department of Mathematics, Shahid Bahonar University of Kerman, Kerman 76169-14111, IRAN arxiv:1412.7552v2 [math.na]

More information

Problem 1. CS205 Homework #2 Solutions. Solution

Problem 1. CS205 Homework #2 Solutions. Solution CS205 Homework #2 s Problem 1 [Heath 3.29, page 152] Let v be a nonzero n-vector. The hyperplane normal to v is the (n-1)-dimensional subspace of all vectors z such that v T z = 0. A reflector is a linear

More information

Matrix Inequalities by Means of Block Matrices 1

Matrix Inequalities by Means of Block Matrices 1 Mathematical Inequalities & Applications, Vol. 4, No. 4, 200, pp. 48-490. Matrix Inequalities by Means of Block Matrices Fuzhen Zhang 2 Department of Math, Science and Technology Nova Southeastern University,

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 7: More on Householder Reflectors; Least Squares Problems Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 15 Outline

More information

Q T A = R ))) ) = A n 1 R

Q T A = R ))) ) = A n 1 R Q T A = R As with the LU factorization of A we have, after (n 1) steps, Q T A = Q T A 0 = [Q 1 Q 2 Q n 1 ] T A 0 = [Q n 1 Q n 2 Q 1 ]A 0 = (Q n 1 ( (Q 2 (Q 1 A 0 }{{} A 1 ))) ) = A n 1 R Since Q T A =

More information

Orthogonalization and least squares methods

Orthogonalization and least squares methods Chapter 3 Orthogonalization and least squares methods 31 QR-factorization (QR-decomposition) 311 Householder transformation Definition 311 A complex m n-matrix R = [r ij is called an upper (lower) triangular

More information

Applied Numerical Linear Algebra. Lecture 8

Applied Numerical Linear Algebra. Lecture 8 Applied Numerical Linear Algebra. Lecture 8 1/ 45 Perturbation Theory for the Least Squares Problem When A is not square, we define its condition number with respect to the 2-norm to be k 2 (A) σ max (A)/σ

More information

Linear Algebra Section 2.6 : LU Decomposition Section 2.7 : Permutations and transposes Wednesday, February 13th Math 301 Week #4

Linear Algebra Section 2.6 : LU Decomposition Section 2.7 : Permutations and transposes Wednesday, February 13th Math 301 Week #4 Linear Algebra Section. : LU Decomposition Section. : Permutations and transposes Wednesday, February 1th Math 01 Week # 1 The LU Decomposition We learned last time that we can factor a invertible matrix

More information

Linear Analysis Lecture 16

Linear Analysis Lecture 16 Linear Analysis Lecture 16 The QR Factorization Recall the Gram-Schmidt orthogonalization process. Let V be an inner product space, and suppose a 1,..., a n V are linearly independent. Define q 1,...,

More information

ETNA Kent State University

ETNA Kent State University Electronic Transactions on Numerical Analysis. Volume 1, pp. 1-11, 8. Copyright 8,. ISSN 168-961. MAJORIZATION BOUNDS FOR RITZ VALUES OF HERMITIAN MATRICES CHRISTOPHER C. PAIGE AND IVO PANAYOTOV Abstract.

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 2: Direct Methods PD Dr.

More information

Scientific Computing: Dense Linear Systems

Scientific Computing: Dense Linear Systems Scientific Computing: Dense Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 February 9th, 2012 A. Donev (Courant Institute)

More information

Linear Algebraic Equations

Linear Algebraic Equations Linear Algebraic Equations 1 Fundamentals Consider the set of linear algebraic equations n a ij x i b i represented by Ax b j with [A b ] [A b] and (1a) r(a) rank of A (1b) Then Axb has a solution iff

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 13: Conditioning of Least Squares Problems; Stability of Householder Triangularization Xiangmin Jiao Stony Brook University Xiangmin Jiao

More information

MIDTERM. b) [2 points] Compute the LU Decomposition A = LU or explain why one does not exist.

MIDTERM. b) [2 points] Compute the LU Decomposition A = LU or explain why one does not exist. MAE 9A / FALL 3 Maurício de Oliveira MIDTERM Instructions: You have 75 minutes This exam is open notes, books No computers, calculators, phones, etc There are 3 questions for a total of 45 points and bonus

More information

1 Cricket chirps: an example

1 Cricket chirps: an example Notes for 2016-09-26 1 Cricket chirps: an example Did you know that you can estimate the temperature by listening to the rate of chirps? The data set in Table 1 1. represents measurements of the number

More information

Error estimates for the ESPRIT algorithm

Error estimates for the ESPRIT algorithm Error estimates for the ESPRIT algorithm Daniel Potts Manfred Tasche Let z j := e f j j = 1,..., M) with f j [ ϕ, 0] + i [ π, π) and small ϕ 0 be distinct nodes. With complex coefficients c j 0, we consider

More information

Moore-Penrose Inverse and Operator Inequalities

Moore-Penrose Inverse and Operator Inequalities E extracta mathematicae Vol. 30, Núm. 1, 9 39 (015) Moore-Penrose Inverse and Operator Inequalities Ameur eddik Department of Mathematics, Faculty of cience, Hadj Lakhdar University, Batna, Algeria seddikameur@hotmail.com

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

Least Squares. Tom Lyche. October 26, Centre of Mathematics for Applications, Department of Informatics, University of Oslo

Least Squares. Tom Lyche. October 26, Centre of Mathematics for Applications, Department of Informatics, University of Oslo Least Squares Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo October 26, 2010 Linear system Linear system Ax = b, A C m,n, b C m, x C n. under-determined

More information

SENSITIVITY OF THE STATIONARY DISTRIBUTION OF A MARKOV CHAIN*

SENSITIVITY OF THE STATIONARY DISTRIBUTION OF A MARKOV CHAIN* SIAM J Matrix Anal Appl c 1994 Society for Industrial and Applied Mathematics Vol 15, No 3, pp 715-728, July, 1994 001 SENSITIVITY OF THE STATIONARY DISTRIBUTION OF A MARKOV CHAIN* CARL D MEYER Abstract

More information

Linear Algebra, part 3. Going back to least squares. Mathematical Models, Analysis and Simulation = 0. a T 1 e. a T n e. Anna-Karin Tornberg

Linear Algebra, part 3. Going back to least squares. Mathematical Models, Analysis and Simulation = 0. a T 1 e. a T n e. Anna-Karin Tornberg Linear Algebra, part 3 Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2010 Going back to least squares (Sections 1.7 and 2.3 from Strang). We know from before: The vector

More information

ANONSINGULAR tridiagonal linear system of the form

ANONSINGULAR tridiagonal linear system of the form Generalized Diagonal Pivoting Methods for Tridiagonal Systems without Interchanges Jennifer B. Erway, Roummel F. Marcia, and Joseph A. Tyson Abstract It has been shown that a nonsingular symmetric tridiagonal

More information

LECTURE 7. Least Squares and Variants. Optimization Models EE 127 / EE 227AT. Outline. Least Squares. Notes. Notes. Notes. Notes.

LECTURE 7. Least Squares and Variants. Optimization Models EE 127 / EE 227AT. Outline. Least Squares. Notes. Notes. Notes. Notes. Optimization Models EE 127 / EE 227AT Laurent El Ghaoui EECS department UC Berkeley Spring 2015 Sp 15 1 / 23 LECTURE 7 Least Squares and Variants If others would but reflect on mathematical truths as deeply

More information

EIGENVALUE PROBLEMS. EIGENVALUE PROBLEMS p. 1/4

EIGENVALUE PROBLEMS. EIGENVALUE PROBLEMS p. 1/4 EIGENVALUE PROBLEMS EIGENVALUE PROBLEMS p. 1/4 EIGENVALUE PROBLEMS p. 2/4 Eigenvalues and eigenvectors Let A C n n. Suppose Ax = λx, x 0, then x is a (right) eigenvector of A, corresponding to the eigenvalue

More information

ON THE QR ITERATIONS OF REAL MATRICES

ON THE QR ITERATIONS OF REAL MATRICES Unspecified Journal Volume, Number, Pages S????-????(XX- ON THE QR ITERATIONS OF REAL MATRICES HUAJUN HUANG AND TIN-YAU TAM Abstract. We answer a question of D. Serre on the QR iterations of a real matrix

More information

Chapter 5 Orthogonality

Chapter 5 Orthogonality Matrix Methods for Computational Modeling and Data Analytics Virginia Tech Spring 08 Chapter 5 Orthogonality Mark Embree embree@vt.edu Ax=b version of February 08 We needonemoretoolfrom basic linear algebra

More information