Orthogonalization and least squares methods


3.1 QR-factorization (QR-decomposition)

3.1.1 Householder transformation

Definition 3.1.1 A complex $m \times n$ matrix $R = [r_{ij}]$ is called an upper (lower) triangular matrix if $r_{ij} = 0$ for $i > j$ ($i < j$).

Example 3.1.1
(1) $m = n$: $R = \begin{bmatrix} r_{11} & \cdots & r_{1n} \\ & \ddots & \vdots \\ 0 & & r_{nn} \end{bmatrix}$;
(2) $m < n$: $R = \begin{bmatrix} r_{11} & \cdots & & \cdots & r_{1n} \\ & \ddots & & & \vdots \\ 0 & & r_{mm} & \cdots & r_{mn} \end{bmatrix}$;
(3) $m > n$: $R = \begin{bmatrix} r_{11} & \cdots & r_{1n} \\ & \ddots & \vdots \\ & & r_{nn} \\ & 0 & \end{bmatrix}$.

Definition 3.1.2 Given $A \in \mathbb{C}^{m \times n}$, $Q \in \mathbb{C}^{m \times m}$ unitary and $R \in \mathbb{C}^{m \times n}$ upper triangular as in Example 3.1.1 such that $A = QR$. Then the product is called a QR-factorization of $A$.

Basic problem: Given $b \in \mathbb{C}^n$, find a vector $w \in \mathbb{C}^n$ with $w^* w = 1$ and $c \in \mathbb{C}$ such that

$(I - 2ww^*)\,b = c\,e_1.$  (3.1.1)

Solution (Householder transformation):
(1) $b = 0$: $w$ arbitrary (in general $w = 0$) and $c = 0$.
(2) $b \neq 0$:

$c = \begin{cases} -\dfrac{b_1}{|b_1|}\,\|b\|_2, & \text{if } b_1 \neq 0, \\[1mm] \|b\|_2, & \text{if } b_1 = 0, \end{cases}$  (3.1.2)

$w = \dfrac{1}{2k}(b_1 - c,\, b_2, \ldots, b_n)^T := \dfrac{1}{2k}\,u, \qquad \text{with } 4k^2 = 2\|b\|_2\bigl(\|b\|_2 + |b_1|\bigr).$  (3.1.3)
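
In the real case, (3.1.2)-(3.1.3) amount to only a few lines of numpy. The following is a minimal sketch (the function name `householder_vector` is chosen here for illustration, it is not from the notes):

```python
import numpy as np

def householder_vector(b):
    """Return (w, c) with (I - 2 w w^T) b = c e_1 and ||w||_2 = 1 (real case)."""
    b = np.asarray(b, dtype=float)
    if not b.any():                          # case (1): b = 0
        return np.zeros_like(b), 0.0
    norm_b = np.linalg.norm(b)               # ||b||_2, one square root
    c = -np.sign(b[0]) * norm_b if b[0] != 0 else norm_b   # (3.1.2)
    u = b.copy()
    u[0] -= c                                # u = (b_1 - c, b_2, ..., b_n)^T
    return u / np.linalg.norm(u), c          # ||u||_2 = 2k, 4k^2 = 2||b||(||b|| + |b_1|)

b = np.array([3.0, 4.0])
w, c = householder_vector(b)
print((np.eye(2) - 2 * np.outer(w, w)) @ b)  # approximately (c, 0) = (-5, 0)
```

The sign choice in (3.1.2) makes $b_1 - c$ an addition of numbers of equal sign, which avoids cancellation.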

Theorem 3.1.1 Any complex $m \times n$ matrix $A$ can be factorized as the product $A = QR$, where $Q$ is $m \times m$ unitary and $R$ is $m \times n$ upper triangular.

Proof: Let $A^{(0)} = A = [a_1^{(0)}, a_2^{(0)}, \ldots, a_n^{(0)}]$. Find $Q_1 = I - 2w_1 w_1^*$ such that $Q_1 a_1^{(0)} = c_1 e_1$. Then

$A^{(1)} = Q_1 A^{(0)} = [Q_1 a_1^{(0)}, Q_1 a_2^{(0)}, \ldots, Q_1 a_n^{(0)}] = \begin{bmatrix} c_1 & * \cdots * \\ 0 & a_2^{(1)} \cdots a_n^{(1)} \end{bmatrix}.$

Find $Q_2 = \begin{bmatrix} 1 & 0 \\ 0 & I - 2w_2 w_2^* \end{bmatrix}$ such that $(I - 2w_2 w_2^*)\,a_2^{(1)} = c_2 e_1$. Then

$A^{(2)} = Q_2 A^{(1)} = \begin{bmatrix} c_1 & * & * \cdots * \\ & c_2 & * \cdots * \\ & & a_3^{(2)} \cdots a_n^{(2)} \end{bmatrix}.$  (3.1.4)

We continue this process: after at most $l - 1$ steps, where $l = \min(m, n)$, the matrix $A^{(l-1)} = Q_{l-1} \cdots Q_1 A = R$ is upper triangular. Then $A = QR$, where $Q = Q_1 \cdots Q_{l-1}$ (each $Q_i$ is both Hermitian and unitary).

Remark 3.1.1 We usually call the method in Theorem 3.1.1 the Householder method.

Theorem 3.1.2 Let $A$ be a nonsingular $n \times n$ matrix. Then the QR-factorization is essentially unique. That is, if $A = Q_1 R_1 = Q_2 R_2$, then there is a unitary diagonal matrix $D = \mathrm{diag}(d_i)$ with $|d_i| = 1$ such that $Q_1 = Q_2 D$ and $D R_1 = R_2$.

Proof: Let $A = Q_1 R_1 = Q_2 R_2$. Then $Q_2^* Q_1 = R_2 R_1^{-1} = D$ is both unitary and upper triangular, hence a diagonal unitary matrix.

Remark 3.1.2 The QR-factorization is unique if it is required that the diagonal elements of $R$ are positive.
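
The proof of Theorem 3.1.1 is constructive. Here is a minimal numpy sketch of it for real matrices (it accumulates $Q$ explicitly, which the flop counts below show is the expensive part):

```python
import numpy as np

def householder_qr(A):
    """A = QR as constructed in the proof of Theorem 3.1.1 (real case, a sketch)."""
    R = np.array(A, dtype=float)
    m, n = R.shape
    Q = np.eye(m)
    steps = n if m > n else min(m, n) - 1      # l - 1 steps with l = min(m, n)
    for k in range(steps):
        x = R[k:, k]
        c = -np.sign(x[0]) * np.linalg.norm(x) if x[0] != 0 else np.linalg.norm(x)
        u = x.copy()
        u[0] -= c
        norm_u = np.linalg.norm(u)
        if norm_u == 0.0:                      # column already zero: nothing to do
            continue
        w = u / norm_u
        # apply Q_k = diag(I_{k-1}, I - 2 w w^T) from the left, accumulate Q
        R[k:, :] -= 2.0 * np.outer(w, w @ R[k:, :])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ w, w)
    return Q, R

A = np.random.rand(5, 3)
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(5)))
```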

Corollary 3.1.1 Let $A$ be an arbitrary $m \times n$ matrix. The following factorizations exist:
(i) $A = LQ$, where $Q$ is $n \times n$ unitary and $L$ is $m \times n$ lower triangular;
(ii) $A = QL$, where $Q$ is $m \times m$ unitary and $L$ is $m \times n$ lower triangular;
(iii) $A = RQ$, where $Q$ is $n \times n$ unitary and $R$ is $m \times n$ upper triangular.

Proof: (i) $A^*$ has a QR-factorization $A^* = QR$. Then $A = R^* Q^* =: LQ$.
(ii) Let $P_m$ be the $m \times m$ permutation with ones on the antidiagonal. By Theorem 3.1.1 we have $P_m A P_n = QR$. This implies $A = (P_m Q P_m)(P_m R P_n) =: \tilde{Q}L$, since $P_m R P_n$ is lower triangular.
(iii) $A^*$ has a QL-factorization by (ii), i.e., $A^* = QL$. This implies $A = L^* Q^* =: RQ$.

Cost of the Householder method: the multiplications in (3.1.4) can be computed in the form

$(I - 2w_1 w_1^*)A = \Bigl(I - \dfrac{u_1 u_1^*}{\|b\|_2(\|b\|_2 + |b_1|)}\Bigr)A = (I - v u_1^*)A = A - v(u_1^* A) := A - v w^*,$

where $v = u_1/\bigl(\|b\|_2(\|b\|_2 + |b_1|)\bigr)$ and $w^* = u_1^* A$. So the first step for an $m \times n$ matrix $A$ requires: $c_1$: $m$ multiplications, 1 square root; $4k^2$: 1 multiplication; $v$: $m$ divisions ($=$ multiplications); $w^* = u_1^* A$: $mn$ multiplications; $A^{(1)} = A - v w^*$: $m(n-1)$ multiplications. Similarly, in the $j$-th step $m$ and $n$ are replaced by $m - j + 1$ and $n - j + 1$, respectively. Let $l = \min(m, n)$. Then the number of multiplications is

$\displaystyle\sum_{j=1}^{l-1}\bigl[2(m-j+1)(n-j+1) + (m-j+2)\bigr]$  (3.1.5)

$\quad = l(l-1)\Bigl[\dfrac{2l-1}{3} - (m+n) - \dfrac{5}{2}\Bigr] + (l-1)(2mn + 3m + 2n + 4) \qquad \bigl(\approx mn^2 - \tfrac{1}{3}n^3, \text{ if } m \geq n\bigr).$

Especially, for $m = n$ it needs

$\displaystyle\sum_{j=1}^{n-1}\bigl[2(n-j+1)^2 + (n-j+2)\bigr] = \tfrac{2}{3}n^3 + \tfrac{3}{2}n^2 + \tfrac{11}{6}n - 4$  (3.1.6)

flops and $(l + n - 2)$ square roots. To compute $Q = Q_1 \cdots Q_{l-1}$, it requires

$2\bigl[m^2 n - mn^2 + n^3/3\bigr]$ multiplications $(m \geq n)$.  (3.1.7)

Remark 3.1.3 Let $A = QR$ be a QR-factorization of $A$. Then $A^* A = R^* Q^* Q R = R^* R$. If $A$ has full column rank and we require that the diagonal elements of $R$ are positive, then we obtain the Cholesky factorization of $A^* A$.
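
A quick numerical check of Remark 3.1.3 (a numpy sketch; `np.linalg.qr` may return an $R$ with negative diagonal entries, so the signs are normalized first as in Remark 3.1.2):

```python
import numpy as np

A = np.random.rand(6, 3)                 # full column rank with probability 1
Q, R = np.linalg.qr(A)                   # reduced QR, R is 3 x 3
R = np.diag(np.sign(np.diag(R))) @ R     # normalize: positive diagonal (Remark 3.1.2)
L = np.linalg.cholesky(A.T @ A)          # A^T A = L L^T, L lower triangular
print(np.allclose(R.T @ R, A.T @ A))     # A^T A = R^T R
print(np.allclose(L, R.T))               # R^T is exactly the Cholesky factor
```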

3.1.2 Gram-Schmidt method

Remark 3.1.4 Theorem 3.1.1 can be used to solve the orthonormal basis (OB) problem:

(OB): Given linearly independent vectors $a_1, \ldots, a_n \in \mathbb{R}^m$, find an orthonormal basis for $\mathrm{span}\{a_1, \ldots, a_n\}$.

If $A = [a_1, \ldots, a_n] = QR$ with $Q = [q_1, \ldots, q_n]$ and $R = [r_{ij}]$, then

$a_k = \displaystyle\sum_{i=1}^{k} r_{ik} q_i.$  (3.1.8)

By the assumption $\mathrm{rank}(A) = n$, (3.1.8) implies $r_{kk} \neq 0$. So we have

$q_k = \dfrac{1}{r_{kk}}\Bigl(a_k - \displaystyle\sum_{i=1}^{k-1} r_{ik} q_i\Bigr).$  (3.1.9)

The vector $q_k$ can be thought of as a unit vector in the direction of $z_k = a_k - \sum_{i=1}^{k-1} s_{ik} q_i$. To ensure that $z_k \perp q_1, \ldots, q_{k-1}$, we choose $s_{ik} = q_i^T a_k$ for $i = 1, \ldots, k-1$. This leads to the classical Gram-Schmidt (CGS) algorithm for solving the (OB) problem.

Algorithm 3.1.1 (Classical Gram-Schmidt (CGS)) Given $A \in \mathbb{R}^{m \times n}$ with $\mathrm{rank}(A) = n$, compute $A = QR$, where $Q \in \mathbb{R}^{m \times n}$ has orthonormal columns and $R \in \mathbb{R}^{n \times n}$ is upper triangular.

For $i = 1, \ldots, n$:
  $q_i = a_i$;
  For $j = 1, \ldots, i-1$:
    $r_{ji} = q_j^T a_i$; $q_i = q_i - r_{ji} q_j$;
  end for
  $r_{ii} = \|q_i\|_2$; $q_i = q_i / r_{ii}$;
end for

Disadvantage: the CGS method has very poor numerical properties if some columns of $A$ are nearly linearly dependent.
Advantage: the method requires $mn^2$ multiplications ($m \geq n$).
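
A direct transcription of Algorithm 3.1.1 into numpy (a sketch for illustration):

```python
import numpy as np

def cgs(A):
    """Classical Gram-Schmidt, a direct transcription of Algorithm 3.1.1."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for i in range(n):
        q = A[:, i].copy()
        for j in range(i):
            R[j, i] = Q[:, j] @ A[:, i]     # r_ji = q_j^T a_i (the *original* column)
            q -= R[j, i] * Q[:, j]
        R[i, i] = np.linalg.norm(q)
        Q[:, i] = q / R[i, i]
    return Q, R

A = np.random.rand(6, 4)
Q, R = cgs(A)
print(np.allclose(Q @ R, A), np.linalg.norm(Q.T @ Q - np.eye(4)))
```

Note that $r_{ji}$ is computed from the original column $a_i$; this is exactly the line that changes in MGS below.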

Remark 3.1.5 (Modified Gram-Schmidt (MGS)) Write $A = \sum_{i=1}^{n} q_i r_i^T$, where $r_i^T$ denotes the $i$-th row of $R$. Define $A^{(k)}$ by

$A^{(k)} = A - \displaystyle\sum_{i=1}^{k-1} q_i r_i^T = \displaystyle\sum_{i=k}^{n} q_i r_i^T.$  (3.1.10)

It follows that if $A^{(k)} = [0, z, B]$ with $k-1$ leading zero columns, $z \in \mathbb{R}^m$ and $B \in \mathbb{R}^{m \times (n-k)}$, then $r_{kk} = \|z\|_2$ and $q_k = z / r_{kk}$ by (3.1.9). Compute $[r_{k,k+1}, \ldots, r_{kn}] = q_k^T B$. The next step operates on $A^{(k+1)} = [0, B - q_k[r_{k,k+1}, \ldots, r_{kn}]]$.

Algorithm 3.1.2 (MGS) Given $A \in \mathbb{R}^{m \times n}$ with $\mathrm{rank}(A) = n$, compute $A = QR$, where $Q \in \mathbb{R}^{m \times n}$ has orthonormal columns and $R \in \mathbb{R}^{n \times n}$ is upper triangular.

For $i = 1, \ldots, n$:
  $q_i = a_i$;
  For $j = 1, \ldots, i-1$:
    $r_{ji} = q_j^T q_i$; $q_i = q_i - r_{ji} q_j$;
  end for
  $r_{ii} = \|q_i\|_2$; $q_i = q_i / r_{ii}$;
end for

The MGS requires $mn^2$ multiplications.

Remark 3.1.6 MGS computes the QR-factorization so that at the $k$th step the $k$th column of $Q$ and the $k$th row of $R$ are computed; CGS computes at the $k$th step the $k$th column of $Q$ and the $k$th column of $R$.

Advantages for the (OB) problem ($m \geq n$):
(i) The Householder method requires $mn^2 - n^3/3$ flops to get the factorization $A = QR$ and another $mn^2 - n^3/3$ flops to get the first $n$ columns of $Q$. MGS requires only $mn^2$ flops. Thus, for the problem of finding an orthonormal basis of $\mathrm{range}(A)$, MGS is about twice as efficient as Householder orthogonalization.
(ii) MGS is numerically stable.
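
Algorithm 3.1.2 in numpy, together with the classical demonstration of its advantage over CGS on nearly dependent columns (a sketch; the test matrix and the size of `e` are chosen here for illustration):

```python
import numpy as np

def mgs(A):
    """Modified Gram-Schmidt (Algorithm 3.1.2): only the r_ji line differs from CGS."""
    Q = np.array(A, dtype=float)
    n = Q.shape[1]
    R = np.zeros((n, n))
    for i in range(n):
        for j in range(i):
            R[j, i] = Q[:, j] @ Q[:, i]     # r_ji = q_j^T q_i (the *updated* column)
            Q[:, i] -= R[j, i] * Q[:, j]
        R[i, i] = np.linalg.norm(Q[:, i])
        Q[:, i] /= R[i, i]
    return Q, R

# nearly dependent columns: MGS loses orthogonality roughly like eps * kappa_2(A)
# (~1e-8 here), while CGS with r_ji = q_j^T a_i loses it completely (O(1))
e = 1e-8
A = np.array([[1.0, 1.0, 1.0], [e, 0.0, 0.0], [0.0, e, 0.0], [0.0, 0.0, e]])
Q, R = mgs(A)
print(np.linalg.norm(Q.T @ Q - np.eye(3)))
```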

3.1.3 Givens method

Basic problem: Given $(a, b)^T \in \mathbb{R}^2$, find $c, s \in \mathbb{R}$ with $c^2 + s^2 = 1$ such that

$\begin{bmatrix} c & s \\ -s & c \end{bmatrix}\begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} k \\ 0 \end{bmatrix},$

where $c = \cos\alpha$ and $s = \sin\alpha$.

Solution:

$\begin{cases} c = 1, \ s = 0, \ k = a, & \text{if } b = 0, \\[1mm] c = \dfrac{a}{\sqrt{a^2 + b^2}}, \ s = \dfrac{b}{\sqrt{a^2 + b^2}}, \ k = \sqrt{a^2 + b^2}, & \text{if } b \neq 0. \end{cases}$  (3.1.11)

Let $G(i, j, \alpha)$ be the identity matrix modified to contain $\cos\alpha$ in positions $(i,i)$ and $(j,j)$, $\sin\alpha$ in position $(i,j)$ and $-\sin\alpha$ in position $(j,i)$. Then $G(i, j, \alpha)$ is called a Givens rotation in the $(i, j)$-coordinate plane. In the matrix $\tilde{A} = G(i, j, \alpha)A$, the rows with index $\neq i, j$ are the same as in $A$, and

$\tilde{a}_{ik} = \cos(\alpha)\,a_{ik} + \sin(\alpha)\,a_{jk}, \qquad \tilde{a}_{jk} = -\sin(\alpha)\,a_{ik} + \cos(\alpha)\,a_{jk}, \qquad k = 1, \ldots, n.$

Algorithm 3.1.3 (Givens orthogonalization) Given $A \in \mathbb{R}^{m \times n}$, the following algorithm overwrites $A$ with $Q^T A = R$, where $Q$ is orthogonal and $R$ is upper triangular.

For $q = 2, \ldots, m$:
  For $p = 1, 2, \ldots, \min\{q-1, n\}$:
    Find $c = \cos\alpha$ and $s = \sin\alpha$ as in (3.1.11) such that $\begin{bmatrix} c & s \\ -s & c \end{bmatrix}\begin{bmatrix} a_{pp} \\ a_{qp} \end{bmatrix} = \begin{bmatrix} * \\ 0 \end{bmatrix}$;
    $A := G(p, q, \alpha)A$.

This algorithm requires $2n^2(m - n/3)$ flops.

Fast Givens method (see Matrix Computations, pp. 25-29): a modification of the Givens method based on fast Givens rotations, requiring about $n^2(m - n/3)$ flops.
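
A numpy sketch of (3.1.11) and Algorithm 3.1.3 (each rotation is applied to two rows only, as in the definition of $G(i, j, \alpha)$):

```python
import numpy as np

def givens(a, b):
    """c, s as in (3.1.11): [[c, s], [-s, c]] @ [a, b] = [k, 0]."""
    if b == 0.0:
        return 1.0, 0.0
    k = np.hypot(a, b)
    return a / k, b / k

def givens_qr(A):
    """Algorithm 3.1.3 (a sketch): overwrite a copy of A with R = Q^T A."""
    R = np.array(A, dtype=float)
    m, n = R.shape
    for q in range(1, m):                      # q = 2, ..., m (0-based here)
        for p in range(min(q, n)):             # p = 1, ..., min(q-1, n)
            c, s = givens(R[p, p], R[q, p])
            # rows p and q of G(p, q, alpha) @ R; all other rows are unchanged
            R[[p, q], :] = np.array([[c, s], [-s, c]]) @ R[[p, q], :]
    return R

A = np.random.rand(5, 3)
print(np.round(givens_qr(A), 10))              # upper triangular
```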

3.2 Overdetermined linear systems - least squares methods

Given $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$ and $m > n$, consider the least squares (LS) problem

$\min_{x \in \mathbb{R}^n} \|Ax - b\|_2.$  (3.2.1)

Let $X$ be the set of minimizers, $X = \{x \in \mathbb{R}^n : \|Ax - b\|_2 = \min!\}$. It is easy to see the following properties:

$x \in X \iff A^T(b - Ax) = 0;$  (3.2.2)
$X$ is convex;  (3.2.3)
$X$ has a unique element $x_{LS}$ having minimal 2-norm;  (3.2.4)
$X = \{x_{LS}\} \iff \mathrm{rank}(A) = n.$  (3.2.5)

For $x \in \mathbb{R}^n$ we refer to $r = b - Ax$ as its residual; $A^T(b - Ax) = 0$ is referred to as the normal equations. The minimum sum is defined by $\rho_{LS}^2 = \|Ax_{LS} - b\|_2^2$. If we let $\varphi(x) = \frac{1}{2}\|Ax - b\|_2^2$, then $\nabla\varphi(x) = A^T(Ax - b)$.

Theorem 3.2.1 Let $A = \sum_{i=1}^{r}\sigma_i u_i v_i^T$, with $r = \mathrm{rank}(A)$, $U = [u_1, \ldots, u_m]$ and $V = [v_1, \ldots, v_n]$, be the SVD of $A \in \mathbb{R}^{m \times n}$ ($m \geq n$). If $b \in \mathbb{R}^m$, then

$x_{LS} = \displaystyle\sum_{i=1}^{r}(u_i^T b / \sigma_i)\,v_i$  (3.2.6)

and

$\rho_{LS}^2 = \displaystyle\sum_{i=r+1}^{m}(u_i^T b)^2.$  (3.2.7)

Proof: For any $x \in \mathbb{R}^n$ we have

$\|Ax - b\|_2^2 = \|(U^T A V)(V^T x) - U^T b\|_2^2 = \displaystyle\sum_{i=1}^{r}(\sigma_i \alpha_i - u_i^T b)^2 + \displaystyle\sum_{i=r+1}^{m}(u_i^T b)^2,$

where $\alpha = V^T x$. Clearly, if $x$ solves the LS problem, then $\alpha_i = u_i^T b / \sigma_i$ for $i = 1, \ldots, r$. If we set $\alpha_{r+1} = \cdots = \alpha_n = 0$, then $x = x_{LS}$.

Remark 3.2.1 If we define $A^+ = V\Sigma^+ U^T$, where $\Sigma^+ = \mathrm{diag}(\sigma_1^{-1}, \ldots, \sigma_r^{-1}, 0, \ldots, 0) \in \mathbb{R}^{n \times m}$, then $x_{LS} = A^+ b$ and $\rho_{LS} = \|(I - AA^+)b\|_2$. $A^+$ is referred to as the pseudo-inverse of $A$. $A^+$ can also be defined as the unique matrix $X \in \mathbb{R}^{n \times m}$ that satisfies the Moore-Penrose conditions:

(i) $AXA = A$; (ii) $XAX = X$; (iii) $(AX)^T = AX$; (iv) $(XA)^T = XA$.  (3.2.8)

Existence of such an $X$ is easy to check by taking $X = A^+$. Now we show the uniqueness of $X$. Suppose $X$ and $Y$ both satisfy the conditions (i)-(iv). Then

$X = XAX = X(AYA)X = X(AYA)Y(AYA)X = (XA)(YA)\,Y\,(AY)(AX)$
$\quad = (XA)^T(YA)^T\,Y\,(AY)^T(AX)^T = (AXA)^T Y^T\,Y\,Y^T(AXA)^T = A^T Y^T\,Y\,Y^T A^T$
$\quad = (YA)\,Y\,(AY) = Y(AYA)Y = YAY = Y.$

If $\mathrm{rank}(A) = n$ ($m \geq n$), then $A^+ = (A^T A)^{-1}A^T$. If $\mathrm{rank}(A) = m$ ($m \leq n$), then $A^+ = A^T(AA^T)^{-1}$. If $m = n = \mathrm{rank}(A)$, then $A^+ = A^{-1}$.

For the case $\mathrm{rank}(A) = n$:

Algorithm 3.2.1 (Normal equations) Given $A \in \mathbb{R}^{m \times n}$ ($m \geq n$) with $\mathrm{rank}(A) = n$ and $b \in \mathbb{R}^m$, this algorithm computes the solution of the LS problem $\min\{\|Ax - b\|_2 : x \in \mathbb{R}^n\}$:
Compute $d := A^T b$ and form $C := A^T A$; compute the Cholesky factorization $C = R^T R$ (cf. Remark 3.1.3); solve $R^T y = d$ and $R\,x_{LS} = y$.

Algorithm 3.2.2 (Householder and Givens orthogonalizations) Given $A \in \mathbb{R}^{m \times n}$ ($m \geq n$) with $\mathrm{rank}(A) = n$ and $b \in \mathbb{R}^m$, this algorithm computes the solution of the LS problem:
Compute the QR-factorization $Q^T A = \begin{bmatrix} R_1 \\ 0 \end{bmatrix}$ by the Householder or the Givens method (here $R_1$ is upper triangular). Then

$\|Ax - b\|_2^2 = \|Q^T Ax - Q^T b\|_2^2 = \|R_1 x - c\|_2^2 + \|d\|_2^2, \qquad \text{where } Q^T b = \begin{bmatrix} c \\ d \end{bmatrix}.$

Thus $x_{LS} = R_1^{-1}c$ (since $\mathrm{rank}(A) = \mathrm{rank}(R_1) = n$) and $\rho_{LS}^2 = \|d\|_2^2$.
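
The two algorithms side by side in numpy/scipy (a sketch assuming full column rank; `cho_factor`/`cho_solve` and `solve_triangular` are scipy's Cholesky and triangular-solve helpers):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve, solve_triangular

def ls_normal_equations(A, b):
    """Algorithm 3.2.1: Cholesky on C = A^T A (fast, but squares kappa_2(A))."""
    c_and_lower = cho_factor(A.T @ A)            # C = R^T R
    return cho_solve(c_and_lower, A.T @ b)       # solve R^T y = d, then R x = y

def ls_qr(A, b):
    """Algorithm 3.2.2: x_LS = R_1^{-1} c, rho_LS = ||d||_2."""
    m, n = A.shape
    Q, R = np.linalg.qr(A, mode='complete')
    qtb = Q.T @ b
    x = solve_triangular(R[:n, :], qtb[:n])      # R_1 x = c
    return x, np.linalg.norm(qtb[n:])            # rho_LS = ||d||_2

A = np.random.rand(10, 4)
b = np.random.rand(10)
x1 = ls_normal_equations(A, b)
x2, rho = ls_qr(A, b)
print(np.allclose(x1, x2), np.isclose(rho, np.linalg.norm(b - A @ x2)))
```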

Algorithm 3.2.3 (Modified Gram-Schmidt) Given $A \in \mathbb{R}^{m \times n}$ ($m \geq n$) with $\mathrm{rank}(A) = n$ and $b \in \mathbb{R}^m$, the solution of $\min\|Ax - b\|_2$ is given by:
Compute $A = Q_1 R_1$, where $Q_1 \in \mathbb{R}^{m \times n}$ with $Q_1^T Q_1 = I_n$ and $R_1 \in \mathbb{R}^{n \times n}$ upper triangular. Then the normal equations $(A^T A)x = A^T b$ are transformed into the linear system $R_1 x = Q_1^T b$, i.e., $x_{LS} = R_1^{-1} Q_1^T b$.

For the case $\mathrm{rank}(A) < n$ the following problems arise:
(i) How to find a solution to the LS problem?
(ii) How to find the unique solution having minimal 2-norm?
(iii) How to compute $x_{LS}$ reliably when $A$ is ill-conditioned?

Definition 3.2.1 Let $A$ be an $m \times n$ matrix with $\mathrm{rank}(A) = r$ ($r \leq m, n$). The factorization $A = BC$ with $B \in \mathbb{R}^{m \times r}$ and $C \in \mathbb{R}^{r \times n}$ is called a full rank factorization, provided that $B$ has full column rank and $C$ has full row rank.

Theorem 3.2.2 If $A = BC$ is a full rank factorization, then

$A^+ = C^+ B^+ = C^T(CC^T)^{-1}(B^T B)^{-1}B^T.$  (3.2.9)

Proof: From the assumption it follows that

$B^+ B = (B^T B)^{-1}B^T B = I_r, \qquad CC^+ = CC^T(CC^T)^{-1} = I_r.$

We verify (3.2.8) with $X = C^+ B^+$:

$A(C^+ B^+)A = BCC^+ B^+ BC = BC = A,$
$(C^+ B^+)A(C^+ B^+) = C^+ B^+ BCC^+ B^+ = C^+ B^+,$
$A(C^+ B^+) = BCC^+ B^+ = BB^+$ symmetric,
$(C^+ B^+)A = C^+ B^+ BC = C^+ C$ symmetric.

These imply that $X = C^+ B^+$ satisfies (3.2.8). It follows that $A^+ = C^+ B^+$.

Unfortunately, if $\mathrm{rank}(A) < n$, then the QR-factorization does not necessarily produce a full rank factorization of $A$. For example,

$A = [a_1, a_2, a_3] = [q_1, q_2, q_3]\begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$

has rank one, yet the factor $[q_1, q_2, q_3]$ has three columns.
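
A numerical check of (3.2.9); here the full rank factorization is built from a truncated SVD, which is one convenient choice among many (a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 7, 5, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # rank r by construction

U, s, Vt = np.linalg.svd(A)
B = U[:, :r] * s[:r]        # m x r, full column rank
C = Vt[:r, :]               # r x n, full row rank
print(np.allclose(B @ C, A))

Aplus = C.T @ np.linalg.inv(C @ C.T) @ np.linalg.inv(B.T @ B) @ B.T   # (3.2.9)
print(np.allclose(Aplus, np.linalg.pinv(A)))
```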

Fortunately, we have the following two methods to produce a full rank factorization of $A$.

3.2.1 Rank deficiency I: QR with column pivoting

The Householder algorithm can be modified in a simple way so as to produce a full rank factorization of $A$:

$A\Pi = QR, \qquad R = \begin{bmatrix} R_{11} & R_{12} \\ 0 & 0 \end{bmatrix},$  (3.2.10)

where $r = \mathrm{rank}(A) < n$ ($m \geq n$), $Q$ is orthogonal, $R_{11}$ is an $r \times r$ nonsingular upper triangular matrix, $R_{12} \in \mathbb{R}^{r \times (n-r)}$, and $\Pi$ is a permutation. Once (3.2.10) is computed, the LS problem can be readily solved:

$\|Ax - b\|_2^2 = \|(Q^T A\Pi)(\Pi^T x) - Q^T b\|_2^2 = \|R_{11}y - (c - R_{12}z)\|_2^2 + \|d\|_2^2,$

where $\Pi^T x = \begin{bmatrix} y \\ z \end{bmatrix}$ with $y \in \mathbb{R}^r$, $z \in \mathbb{R}^{n-r}$, and $Q^T b = \begin{bmatrix} c \\ d \end{bmatrix}$ with $c \in \mathbb{R}^r$, $d \in \mathbb{R}^{m-r}$. Thus if $\|Ax - b\|_2 = \min!$, then we must have

$x = \Pi\begin{bmatrix} R_{11}^{-1}(c - R_{12}z) \\ z \end{bmatrix}.$

If $z$ is set to zero, then we obtain the basic solution

$x_B = \Pi\begin{bmatrix} R_{11}^{-1}c \\ 0 \end{bmatrix}.$

The basic solution is not the solution of minimal 2-norm, unless the submatrix $R_{12}$ is zero, since

$\|x_{LS}\|_2 = \min_{z \in \mathbb{R}^{n-r}}\Bigl\|\, x_B - \Pi\begin{bmatrix} R_{11}^{-1}R_{12} \\ -I_{n-r} \end{bmatrix}z \,\Bigr\|_2.$  (3.2.11)

The LS problem (3.2.11) can in turn be solved by Algorithms 3.2.1 to 3.2.3.

Algorithm 3.2.4 Given $A \in \mathbb{R}^{m \times n}$ with $\mathrm{rank}(A) = r < n$, the following algorithm computes the factorization $A\Pi = QR$ defined by (3.2.10). The element $a_{ij}$ is overwritten by $r_{ij}$ ($i \leq j$). The permutation $\Pi = [e_{c_1}, \ldots, e_{c_n}]$ is determined by choosing the maximal column norm in the current step.

Initialize $c_j := j$ and $r_j := \sum_{i=1}^{m} a_{ij}^2$ for $j = 1, \ldots, n$.
For $k = 1, \ldots, n$:
  Determine $p$ with $k \leq p \leq n$ so that $r_p = \max_{k \leq j \leq n} r_j$.
  If $r_p = 0$, then stop; else:
  Interchange $c_k$ and $c_p$, $r_k$ and $r_p$, and $a_{ik}$ and $a_{ip}$ for $i = 1, \ldots, m$.
  Determine a Householder matrix $\hat{Q}_k$ such that $\hat{Q}_k(a_{kk}, \ldots, a_{mk})^T = (*, 0, \ldots, 0)^T$.
  $A := \mathrm{diag}(I_{k-1}, \hat{Q}_k)A$; $r_j := r_j - a_{kj}^2$ for $j = k+1, \ldots, n$.
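
A dense numpy sketch of Algorithm 3.2.4 (the stopping tolerance is an assumption added here; in exact arithmetic the test is $r_p = 0$):

```python
import numpy as np

def qr_column_pivoting(A, tol=1e-12):
    """A Pi = QR with column pivoting (Algorithm 3.2.4, a dense sketch)."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.eye(m)
    perm = np.arange(n)
    colnorms = (A ** 2).sum(axis=0)            # r_j = sum_i a_ij^2
    for k in range(min(m, n)):
        p = k + int(np.argmax(colnorms[k:]))
        if colnorms[p] <= tol:                 # numerical rank reached
            break
        # interchange columns k and p (and the bookkeeping arrays)
        A[:, [k, p]] = A[:, [p, k]]
        colnorms[[k, p]] = colnorms[[p, k]]
        perm[[k, p]] = perm[[p, k]]
        x = A[k:, k]
        c = -np.sign(x[0]) * np.linalg.norm(x) if x[0] != 0 else np.linalg.norm(x)
        u = x.copy()
        u[0] -= c
        w = u / np.linalg.norm(u)
        A[k:, :] -= 2.0 * np.outer(w, w @ A[k:, :])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ w, w)
        colnorms[k + 1:] -= A[k, k + 1:] ** 2  # downdate r_j := r_j - a_kj^2
    return Q, A, perm                          # A now holds R, with A[:, perm] = Q @ R... 
                                               # more precisely: A_original[:, perm] = Q @ R
```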

This algorithm requires $2mnr - r^2(m + n) + 2r^3/3$ flops. Algorithm 3.2.4 produces the full rank factorization (3.2.10) of $A$, and we have the following important relations:

$|r_{11}| \geq |r_{22}| \geq \cdots \geq |r_{rr}| > 0; \quad r_{jj} = 0, \ j = r+1, \ldots, n; \quad |r_{ii}| \geq |r_{ik}|, \ i = 1, \ldots, r, \ k = i+1, \ldots, n.$  (3.2.12)

Here $r = \mathrm{rank}(A) < n$ and $R = [r_{ij}]$. In the following we show another application of the full rank factorization for solving the LS problem.

Algorithm 3.2.5 (Compute $x_{LS} = A^+ b$ directly)
(i) Compute (3.2.10): $A\Pi = QR$ with $Q = [Q^{(1)}, Q^{(2)}]$, where $Q^{(1)}$ has $r$ columns and $Q^{(2)}$ has $m-r$ columns, and $R_1 = [R_{11}, R_{12}]$, so that $A\Pi = Q^{(1)}R_1$.
(ii) $(A\Pi)^+ = R_1^+ Q^{(1)+} = R_1^+ Q^{(1)T}$.
(iii) Compute $R_1^+$. Either:
$R_1^+ = R_1^T(R_1 R_1^T)^{-1}$ (since $R_1$ has full row rank), whence $(A\Pi)^+ = R_1^T(R_1 R_1^T)^{-1}Q^{(1)T}$;
or: find $\tilde{Q}$ using Householder transformations such that $\tilde{Q}R_1^T = \begin{bmatrix} T \\ 0 \end{bmatrix}$, where $T \in \mathbb{R}^{r \times r}$ is upper triangular. Let $\tilde{Q}^T = [\hat{Q}^{(1)}, \hat{Q}^{(2)}]$. Then $R_1^T = \hat{Q}^{(1)}T$, hence $R_1 = T^T\hat{Q}^{(1)T}$ and

$R_1^+ = (\hat{Q}^{(1)T})^+(T^T)^+ = \hat{Q}^{(1)}(T^T)^{-1}, \qquad (A\Pi)^+ = \hat{Q}^{(1)}(T^T)^{-1}Q^{(1)T}.$

(iv) Since $\min\|Ax - b\|_2 = \min\|A\Pi(\Pi^T x) - b\|_2$, we get $(\Pi^T x)_{LS} = (A\Pi)^+ b$, i.e., $x_{LS} = \Pi(A\Pi)^+ b$.

Remark 3.2.2 Unfortunately, QR with column pivoting is not entirely reliable as a method for detecting near rank deficiency. For example, consider

$T_n(c) = \mathrm{diag}(1, s, \ldots, s^{n-1})\begin{bmatrix} 1 & -c & \cdots & -c \\ & 1 & \ddots & \vdots \\ & & \ddots & -c \\ & & & 1 \end{bmatrix}, \qquad c^2 + s^2 = 1, \quad c, s > 0.$

If $n = 100$ and $c = 0.2$, then $\sigma_n \approx 0.3679 \cdot 10^{-8}$, but this matrix is unaltered by Algorithm 3.2.4. However, the degree of unreliability is somewhat like that of Gaussian elimination with partial pivoting, a method that works very well in practice.
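
The example can be reproduced with numpy and scipy's pivoted QR, which uses the same column-pivoting strategy as Algorithm 3.2.4 (a sketch; the exact figures depend on rounding):

```python
import numpy as np
from scipy.linalg import qr

def kahan(n, c):
    """T_n(c) of Remark 3.2.2: diag(1, s, ..., s^(n-1)) times unit upper triangular with -c."""
    s = np.sqrt(1.0 - c * c)
    T = np.eye(n) - c * np.triu(np.ones((n, n)), 1)
    return np.diag(s ** np.arange(n)) @ T

A = kahan(100, 0.2)
sigma_n = np.linalg.svd(A, compute_uv=False)[-1]
Q, R, piv = qr(A, pivoting=True)
print(sigma_n)              # ~ 3.7e-9: A is nearly rank deficient
print(abs(R[-1, -1]))       # ~ 0.13 (~ s^99): no small r_nn reveals the deficiency
```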

3.2.2 Rank deficiency II: the singular value decomposition

Algorithm 3.2.6 (Householder bidiagonalization) Given $A \in \mathbb{R}^{m \times n}$ ($m \geq n$), the following algorithm overwrites $A$ with $U_B^T A V_B = B$, where $B$ is upper bidiagonal and $U_B$, $V_B$ are orthogonal.

For $k = 1, \ldots, n$:
  Determine a Householder matrix $\tilde{U}_k$ of order $m - k + 1$ such that $\tilde{U}_k(a_{kk}, \ldots, a_{mk})^T = (*, 0, \ldots, 0)^T$;
  $A := \mathrm{diag}(I_{k-1}, \tilde{U}_k)A$;
  If $k \leq n - 2$, determine a Householder matrix $\tilde{V}_k$ of order $n - k$ such that $[a_{k,k+1}, \ldots, a_{kn}]\tilde{V}_k = (*, 0, \ldots, 0)$;
  $A := A\,\mathrm{diag}(I_k, \tilde{V}_k)$.

This algorithm requires $2mn^2 - \frac{2}{3}n^3$ flops.

Algorithm 3.2.7 (R-bidiagonalization) When $m \gg n$ we can use the following faster method of bidiagonalization:
(1) Compute an orthogonal $Q_1 \in \mathbb{R}^{m \times m}$ such that $Q_1^T A = \begin{bmatrix} R_1 \\ 0 \end{bmatrix}$, where $R_1 \in \mathbb{R}^{n \times n}$ is upper triangular.
(2) Apply Algorithm 3.2.6 to $R_1$: $Q_2^T R_1 V_B = B_1$, where $Q_2, V_B \in \mathbb{R}^{n \times n}$ are orthogonal and $B_1 \in \mathbb{R}^{n \times n}$ is upper bidiagonal.
(3) Define $U_B = Q_1\,\mathrm{diag}(Q_2, I_{m-n})$. Then $U_B^T A V_B = \begin{bmatrix} B_1 \\ 0 \end{bmatrix} = B$ is bidiagonal.

This algorithm requires $mn^2 + n^3$ flops. It involves fewer computations than Algorithm 3.2.6 ($2mn^2 - \frac{2}{3}n^3$) whenever $m \geq \frac{5}{3}n$.

Once the bidiagonalization of $A$ has been achieved, the next step in the Golub-Reinsch SVD algorithm is to zero out the superdiagonal elements of $B$. We must defer the discussion of this iteration until Chapter 5, since it requires an understanding of the symmetric QR algorithm for eigenvalues. It computes orthogonal matrices $U_\Sigma$ and $V_\Sigma$ such that $U_\Sigma^T B V_\Sigma = \Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_n)$. By defining $U = U_B U_\Sigma$ and $V = V_B V_\Sigma$, we see that $U^T A V = \Sigma$ is the SVD of $A$.
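
A numpy sketch of Algorithm 3.2.6 (the transformations are applied directly and not accumulated, so only $B$ is returned):

```python
import numpy as np

def house(x):
    """Householder vector w with (I - 2 w w^T) x = c e_1 (helper, real case)."""
    c = -np.sign(x[0]) * np.linalg.norm(x) if x[0] != 0 else np.linalg.norm(x)
    u = x.copy()
    u[0] -= c
    return u / np.linalg.norm(u)

def bidiagonalize(A):
    """U_B^T A V_B = B upper bidiagonal (Algorithm 3.2.6, a sketch; m >= n)."""
    B = np.array(A, dtype=float)
    m, n = B.shape
    for k in range(n):
        w = house(B[k:, k])                        # zero column k below the diagonal
        B[k:, k:] -= 2.0 * np.outer(w, w @ B[k:, k:])
        if k < n - 2:                              # zero row k right of the superdiagonal
            w = house(B[k, k + 1:])
            B[k:, k + 1:] -= 2.0 * np.outer(B[k:, k + 1:] @ w, w)
    return B

A = np.random.rand(6, 4)
B = bidiagonalize(A)
print(np.round(B, 10))    # upper bidiagonal; the singular values of B equal those of A
```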

Table 3.1: Solving the LS problem ($m \geq n$).

  rank(A) = n:
    Algorithm 3.2.1   Normal equations                 $mn^2/2 + n^3/6$
    Algorithm 3.2.2   Householder orthogonalization    $mn^2 - n^3/3$
    Algorithm 3.2.3   Modified Gram-Schmidt            $mn^2$
    Algorithm 3.1.3   Givens orthogonalization         $2mn^2 - 2n^3/3$
    Algorithm 3.2.6   Householder bidiagonalization    $2mn^2 - 2n^3/3$
    Algorithm 3.2.7   R-bidiagonalization              $mn^2 + n^3$
    LINPACK           Golub-Reinsch SVD                $2mn^2 + 4n^3$
  rank(A) < n:
    Algorithm 3.2.5   QR with column pivoting          $2mnr - mr^2 + r^3/3$
    Alg. 3.2.7 + SVD  Chan SVD                         $mn^2 + 11n^3/2$

Remark 3.2.3 If the LINPACK SVD algorithm is applied with eps $= 10^{-17}$ to the matrix $T_{100}(0.2)$ of Remark 3.2.2, then $\hat{\sigma}_n \approx 0.3679 \cdot 10^{-8}$ is computed correctly, in contrast to QR with column pivoting.

Remark 3.2.4 As we mentioned before, when solving the LS problem via the SVD, only $\Sigma$ and $V$ have to be computed (see (3.2.6)). Table 3.1 compares the efficiency of this approach with the other algorithms that we have presented.

3.2.3 The sensitivity of the least squares problem

Corollary 3.2.1 (of Theorem 1.2.3) Let $U = [u_1, \ldots, u_m]$, $V = [v_1, \ldots, v_n]$ and $U^T A V = \Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_r, 0, \ldots, 0)$. If $k < r = \mathrm{rank}(A)$ and $A_k = \sum_{i=1}^{k}\sigma_i u_i v_i^T$, then

$\min_{\mathrm{rank}(B) = k}\|A - B\|_2 = \|A - A_k\|_2 = \sigma_{k+1}.$

Proof: Since $U^T A_k V = \mathrm{diag}(\sigma_1, \ldots, \sigma_k, 0, \ldots, 0)$, it follows that $\mathrm{rank}(A_k) = k$ and

$\|A - A_k\|_2 = \|U^T(A - A_k)V\|_2 = \|\mathrm{diag}(0, \ldots, 0, \sigma_{k+1}, \ldots, \sigma_r, 0, \ldots, 0)\|_2 = \sigma_{k+1}.$

Suppose $B \in \mathbb{R}^{m \times n}$ with $\mathrm{rank}(B) = k$, i.e., there are orthonormal vectors $x_1, \ldots, x_{n-k}$ such that $\mathcal{N}(B) = \mathrm{span}\{x_1, \ldots, x_{n-k}\}$. Since the dimensions of the two subspaces add up to more than $n$, we have

$\mathrm{span}\{x_1, \ldots, x_{n-k}\} \cap \mathrm{span}\{v_1, \ldots, v_{k+1}\} \neq \{0\}.$

Let $z$ be a unit vector in this intersection. Then $Bz = 0$ and $Az = \sum_{i=1}^{k+1}\sigma_i(v_i^T z)u_i$. Thus

$\|A - B\|_2^2 \geq \|(A - B)z\|_2^2 = \|Az\|_2^2 = \displaystyle\sum_{i=1}^{k+1}\sigma_i^2(v_i^T z)^2 \geq \sigma_{k+1}^2,$

since $\sum_{i=1}^{k+1}(v_i^T z)^2 = \|z\|_2^2 = 1$.

3.2.4 Condition number of a rectangular matrix

Let $A \in \mathbb{R}^{m \times n}$ with $\mathrm{rank}(A) = n$, and define $\kappa_2(A) = \sigma_{\max}(A)/\sigma_{\min}(A)$.

(i) The method of normal equations:
(a) $C = A^T A$, $d = A^T b$ (note $\min_{x \in \mathbb{R}^n}\|Ax - b\|_2 \iff A^T Ax = A^T b$);
(b) compute the Cholesky factorization $C = GG^T$;
(c) solve $Gy = d$ and $G^T x_{LS} = y$.
Then the computed solution $\bar{x}_{LS}$ satisfies

$\dfrac{\|\bar{x}_{LS} - x_{LS}\|_2}{\|x_{LS}\|_2} \approx \mathrm{eps}\;\kappa_2(A^T A) = \mathrm{eps}\;\kappa_2(A)^2.$

(Recall the perturbation result for linear systems: if $Ax = b$ and $(A + \varepsilon F)\tilde{x} = b + \varepsilon f$, then

$\dfrac{\|\tilde{x} - x\|}{\|x\|} \leq \varepsilon\,\kappa(A)\Bigl(\dfrac{\|F\|}{\|A\|} + \dfrac{\|f\|}{\|b\|}\Bigr) + O(\varepsilon^2).)$

(ii) LS solution via QR-factorization:

$\|Ax - b\|_2^2 = \|Q^T Ax - Q^T b\|_2^2 = \|R_1 x - c\|_2^2 + \|d\|_2^2, \qquad x_{LS} = R_1^{-1}c, \qquad \rho_{LS} = \|d\|_2.$

Numerically, trouble can be expected whenever $\kappa_2(A) = \kappa_2(R_1) \approx 1/\mathrm{eps}$. This is in contrast to the normal equations: there the Cholesky factorization becomes problematical once $\kappa_2(A)$ is in the neighborhood of $1/\sqrt{\mathrm{eps}}$.

Remark 3.2.5 $\|A\|_2\,\|(A^T A)^{-1}A^T\|_2 = \kappa_2(A)$ and $\|A\|_2^2\,\|(A^T A)^{-1}\|_2 = \kappa_2(A)^2$.
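
Both Corollary 3.2.1 and the identities of Remark 3.2.5 are easy to verify numerically (a sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Corollary 3.2.1: the best rank-k approximation error in 2-norm is sigma_{k+1}
k = 2
Ak = U[:, :k] * s[:k] @ Vt[:k, :]          # A_k = sum_{i<=k} sigma_i u_i v_i^T
print(np.isclose(np.linalg.norm(A - Ak, 2), s[k]))

# Remark 3.2.5
kappa = s[0] / s[-1]
G = np.linalg.inv(A.T @ A)
print(np.isclose(np.linalg.norm(A, 2) * np.linalg.norm(G @ A.T, 2), kappa))
print(np.isclose(np.linalg.norm(A, 2) ** 2 * np.linalg.norm(G, 2), kappa ** 2))
```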

Theorem 3.2.3 Let $A \in \mathbb{R}^{m \times n}$ ($m \geq n$) and $b \neq 0$. Suppose that $x$, $r$, $\tilde{x}$, $\tilde{r}$ satisfy

$\|Ax - b\|_2 = \min!, \qquad r = b - Ax, \qquad \rho_{LS} = \|r\|_2,$
$\|(A + \delta A)\tilde{x} - (b + \delta b)\|_2 = \min!, \qquad \tilde{r} = (b + \delta b) - (A + \delta A)\tilde{x}.$

If

$\varepsilon = \max\Bigl\{\dfrac{\|\delta A\|_2}{\|A\|_2}, \dfrac{\|\delta b\|_2}{\|b\|_2}\Bigr\} < \dfrac{\sigma_n(A)}{\sigma_1(A)}$

and

$\sin\theta = \dfrac{\rho_{LS}}{\|b\|_2} \neq 1,$

then

$\dfrac{\|\tilde{x} - x\|_2}{\|x\|_2} \leq \varepsilon\Bigl\{\dfrac{2\kappa_2(A)}{\cos\theta} + \tan\theta\;\kappa_2(A)^2\Bigr\} + O(\varepsilon^2)$

and

$\dfrac{\|\tilde{r} - r\|_2}{\|b\|_2} \leq \varepsilon\,(1 + 2\kappa_2(A))\min(1, m - n) + O(\varepsilon^2).$

Proof: Let $E = \delta A/\varepsilon$ and $f = \delta b/\varepsilon$. Since $\|\delta A\|_2 < \sigma_n(A)$, it follows from Corollary 3.2.1 that $\mathrm{rank}(A + tE) = n$ for all $t \in [0, \varepsilon]$ (at $t = \varepsilon$ we have $A + tE = A + \delta A$): if $\mathrm{rank}(A + \delta A) = k < n$, then

$\|A - (A + \delta A)\|_2 = \|\delta A\|_2 \geq \min_{\mathrm{rank}(B)=k}\|A - B\|_2 = \|A - A_k\|_2 = \sigma_{k+1} \geq \sigma_n,$

a contradiction. Hence the solution $x(t)$ of

$(A + tE)^T(A + tE)\,x(t) = (A + tE)^T(b + tf)$  (3.2.13)

is continuously differentiable for all $t \in [0, \varepsilon]$, with $x = x(0)$ and $\tilde{x} = x(\varepsilon)$. It follows that $\tilde{x} = x + \varepsilon\dot{x}(0) + O(\varepsilon^2)$ and

$\dfrac{\|\tilde{x} - x\|_2}{\|x\|_2} = \varepsilon\,\dfrac{\|\dot{x}(0)\|_2}{\|x\|_2} + O(\varepsilon^2).$

Differentiating (3.2.13) and setting $t = 0$, we have

$E^T Ax + A^T Ex + A^T A\dot{x}(0) = A^T f + E^T b,$

thus

$\dot{x}(0) = (A^T A)^{-1}A^T(f - Ex) + (A^T A)^{-1}E^T r.$

From $\|f\|_2 \leq \|b\|_2$ and $\|E\|_2 \leq \|A\|_2$ it follows that

$\dfrac{\|\tilde{x} - x\|_2}{\|x\|_2} \leq \varepsilon\Bigl\{\|A\|_2\|(A^T A)^{-1}A^T\|_2\Bigl(\dfrac{\|b\|_2}{\|A\|_2\|x\|_2} + 1\Bigr) + \|A\|_2^2\,\|(A^T A)^{-1}\|_2\,\dfrac{\rho_{LS}}{\|A\|_2\|x\|_2}\Bigr\} + O(\varepsilon^2).$

Since $A^T(Ax_{LS} - b) = 0$, we have $Ax_{LS} \perp Ax_{LS} - b$, hence $\|Ax\|_2^2 + \rho_{LS}^2 = \|b\|_2^2$ and $\|A\|_2^2\|x\|_2^2 \geq \|b\|_2^2 - \rho_{LS}^2$. Thus, using Remark 3.2.5,

$\dfrac{\|\tilde{x} - x\|_2}{\|x\|_2} \leq \varepsilon\Bigl\{\kappa_2(A)\Bigl(\dfrac{1}{\cos\theta} + 1\Bigr) + \kappa_2(A)^2\,\dfrac{\sin\theta}{\cos\theta}\Bigr\} + O(\varepsilon^2).$

Furthermore, since $\sin\theta/\cos\theta = \rho_{LS}/\sqrt{\|b\|_2^2 - \rho_{LS}^2}$, we have for small $\theta$

$\dfrac{\|\tilde{x} - x\|_2}{\|x\|_2} \lesssim \varepsilon\,\bigl(\kappa_2(A) + \kappa_2(A)^2\,\rho_{LS}\bigr).$
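
A small experiment illustrating the first bound of Theorem 3.2.3 (a sketch; the perturbations are scaled so that $\varepsilon$ is exactly the prescribed value):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 20, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x = np.linalg.lstsq(A, b, rcond=None)[0]
rho = np.linalg.norm(b - A @ x)
kappa = np.linalg.cond(A)
theta = np.arcsin(rho / np.linalg.norm(b))

eps = 1e-8
E = rng.standard_normal((m, n))
f = rng.standard_normal(m)
dA = eps * np.linalg.norm(A, 2) * E / np.linalg.norm(E, 2)   # ||dA||/||A|| = eps
db = eps * np.linalg.norm(b) * f / np.linalg.norm(f)         # ||db||/||b|| = eps
xt = np.linalg.lstsq(A + dA, b + db, rcond=None)[0]

lhs = np.linalg.norm(xt - x) / np.linalg.norm(x)
bound = eps * (2 * kappa / np.cos(theta) + np.tan(theta) * kappa ** 2)
print(lhs, bound)     # lhs should respect the first-order bound
```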

Remark 3.2.6 Comparing the error bounds (normal equations: $\mathrm{eps}\;\kappa_2(A)^2$; QR approach: $\mathrm{eps}\,(\kappa_2(A) + \rho_{LS}\,\kappa_2(A)^2)$), we conclude:
(i) If $\rho_{LS}$ is small and $\kappa_2(A)$ is large, then QR is better than the normal equations.
(ii) The normal equations approach involves about half the arithmetic when $m \gg n$ and does not require as much storage.
(iii) The QR approach is applicable to a wider class of matrices, because the Cholesky factorization of $A^T A$ breaks down (already for $\kappa_2(A) \approx 1/\sqrt{\mathrm{eps}}$) before the back substitution with $R$ from $Q^T A = R$ becomes problematic.

3.2.5 Iterative improvement

The LS problem is embedded in the augmented linear system

$\begin{bmatrix} I_m & A \\ A^T & 0 \end{bmatrix}\begin{bmatrix} r \\ x \end{bmatrix} = \begin{bmatrix} b \\ 0 \end{bmatrix},$

i.e., $r + Ax = b$ and $A^T r = 0$, which is equivalent to the normal equations $A^T Ax = A^T b$, i.e., to $\|b - Ax\|_2 = \min!$. Iterative refinement applied to this system reads: compute the residual

$\begin{bmatrix} f^{(k)} \\ g^{(k)} \end{bmatrix} = \begin{bmatrix} b \\ 0 \end{bmatrix} - \begin{bmatrix} I & A \\ A^T & 0 \end{bmatrix}\begin{bmatrix} r^{(k)} \\ x^{(k)} \end{bmatrix},$

solve the correction equation

$\begin{bmatrix} I & A \\ A^T & 0 \end{bmatrix}\begin{bmatrix} p^{(k)} \\ z^{(k)} \end{bmatrix} = \begin{bmatrix} f^{(k)} \\ g^{(k)} \end{bmatrix},$

and update

$\begin{bmatrix} r^{(k+1)} \\ x^{(k+1)} \end{bmatrix} = \begin{bmatrix} r^{(k)} \\ x^{(k)} \end{bmatrix} + \begin{bmatrix} p^{(k)} \\ z^{(k)} \end{bmatrix}.$

The correction equation can be solved cheaply with a QR-factorization. If $A = QR = Q\begin{bmatrix} R_1 \\ 0 \end{bmatrix}$ and we partition

$Q^T p = \begin{bmatrix} h \\ p_2 \end{bmatrix}, \qquad Q^T f = \begin{bmatrix} f_1 \\ f_2 \end{bmatrix},$

then the first block row $p + Az = f$ gives $h + R_1 z = f_1$ and $p_2 = f_2$, while the second block row $A^T p = g$ gives $R_1^T h = g$. Thus: solve $R_1^T h = g$; then

$z = R_1^{-1}(f_1 - h), \qquad p = Q\begin{bmatrix} h \\ f_2 \end{bmatrix}.$
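
A numpy sketch of one refinement step, following the QR-based solution of the correction equation derived above (in exact arithmetic a single step already solves the problem; in practice the residuals $f^{(k)}, g^{(k)}$ would be computed in higher precision):

```python
import numpy as np

def refine_step(Q, R1, r, x, A, b):
    """One step of iterative improvement for min ||Ax - b||_2 via the augmented
    system, using A = Q [R1; 0] (a sketch)."""
    n = R1.shape[0]
    f = b - r - A @ x                  # residual of the first block row
    g = -A.T @ r                       # residual of the second block row
    qtf = Q.T @ f
    f1, f2 = qtf[:n], qtf[n:]
    h = np.linalg.solve(R1.T, g)       # R1^T h = g
    z = np.linalg.solve(R1, f1 - h)    # R1 z = f1 - h
    p = Q @ np.concatenate([h, f2])    # p = Q [h; f2]
    return r + p, x + z

rng = np.random.default_rng(3)
A = rng.standard_normal((10, 4))
b = rng.standard_normal(10)
Q, R = np.linalg.qr(A, mode='complete')
R1 = R[:4, :]
r, x = b.copy(), np.zeros(4)           # start from (r, x) = (b, 0)
for _ in range(3):
    r, x = refine_step(Q, R1, r, x, A, b)
print(np.linalg.norm(A.T @ (b - A @ x)))   # ~ 0: the normal equations hold
```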
