Numerical methods for solving linear systems


Chapter 2

Numerical methods for solving linear systems

Let $A \in \mathbb{C}^{n \times n}$ be a nonsingular matrix. We want to solve the linear system $Ax = b$ by (a) direct methods (finite number of steps) or (b) iterative methods (convergence; see Chapter 4).

2.1 Elementary matrices

Let $X = \mathbb{K}^n$ and $x, y \in X$. Then $y^* x \in \mathbb{K}$ and
$$ xy^* = \begin{bmatrix} x_1 \bar y_1 & \cdots & x_1 \bar y_n \\ \vdots & & \vdots \\ x_n \bar y_1 & \cdots & x_n \bar y_n \end{bmatrix}. $$
The eigenvalues of $xy^*$ are $\{0, \ldots, 0, y^* x\}$, since $\operatorname{rank}(xy^*) = 1$, $(xy^*)z = (y^* z)x$ and $(xy^*)x = (y^* x)x$.

Definition 2.1.1 A matrix of the form
$$ I - \alpha x y^* \qquad (\alpha \in \mathbb{K},\ x, y \in \mathbb{K}^n) \tag{2.1.1} $$
is called an elementary matrix.

The eigenvalues of $I - \alpha x y^*$ are $\{1, 1, \ldots, 1, 1 - \alpha y^* x\}$. Compute
$$ (I - \alpha x y^*)(I - \beta x y^*) = I - (\alpha + \beta - \alpha\beta\, y^* x)\, x y^*. \tag{2.1.2} $$
If $\alpha y^* x - 1 \neq 0$ and we let $\beta = \alpha/(\alpha y^* x - 1)$, then $\alpha + \beta - \alpha\beta\, y^* x = 0$. We have
$$ (I - \alpha x y^*)^{-1} = I - \beta x y^*, \qquad \frac{1}{\alpha} + \frac{1}{\beta} = y^* x. \tag{2.1.3} $$

Example 2.1.1 Let $x \in \mathbb{K}^n$ with $x^* x = 1$. Let $H = \{z : z^* x = 0\}$ and $Q = I - 2xx^*$ ($Q = Q^*$, $Q^{-1} = Q$). Then $Q$ reflects each vector with respect to the hyperplane $H$. Indeed, let $y = \alpha x + w$ with $w \in H$. Then we have
$$ Qy = \alpha Qx + Qw = -\alpha x + w - 2(x^* w)x = -\alpha x + w. $$
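The reflection in Example 2.1.1 is easy to check numerically. The following sketch (assuming numpy; the variable names are mine) verifies, for the real case $\mathbb{K} = \mathbb{R}$, that $Q = I - 2xx^T$ maps $y = \alpha x + w$ to $-\alpha x + w$ and that $Q^{-1} = Q$:

```python
import numpy as np

# Q = I - 2 x x^T reflects vectors across the hyperplane H = {z : x^T z = 0}.
n = 4
rng = np.random.default_rng(0)
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                      # enforce x^T x = 1
Q = np.eye(n) - 2.0 * np.outer(x, x)

y = rng.standard_normal(n)
alpha = x @ y                               # component of y along x
w = y - alpha * x                           # component in H (x^T w = 0)
print(np.allclose(Q @ y, -alpha * x + w))   # True: Qy = -alpha*x + w
print(np.allclose(Q @ Q, np.eye(n)))        # True: Q^{-1} = Q
```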

Example 2.1.2 Let $y = e_i$, the $i$-th column of the identity matrix, and $x = l_i = [0, \ldots, 0, l_{i+1,i}, \ldots, l_{n,i}]^T$. Then
$$ I + l_i e_i^T = \begin{bmatrix} 1 & & & & \\ & \ddots & & & \\ & & 1 & & \\ & & l_{i+1,i} & \ddots & \\ & & \vdots & & \\ & & l_{n,i} & & 1 \end{bmatrix}. \tag{2.1.4} $$
Since $e_i^T l_i = 0$, we have
$$ (I + l_i e_i^T)^{-1} = I - l_i e_i^T. \tag{2.1.5} $$
From the equality
$$ (I + l_1 e_1^T)(I + l_2 e_2^T) = I + l_1 e_1^T + l_2 e_2^T + l_1 (e_1^T l_2) e_2^T = I + l_1 e_1^T + l_2 e_2^T $$
it follows that
$$ (I + l_1 e_1^T) \cdots (I + l_i e_i^T) \cdots (I + l_{n-1} e_{n-1}^T) = I + l_1 e_1^T + l_2 e_2^T + \cdots + l_{n-1} e_{n-1}^T = \begin{bmatrix} 1 & & & O \\ l_{21} & 1 & & \\ \vdots & \ddots & \ddots & \\ l_{n1} & \cdots & l_{n,n-1} & 1 \end{bmatrix}. \tag{2.1.6} $$

Theorem 2.1.1 A lower triangular matrix with ones on the diagonal can be written as the product of $n-1$ elementary matrices of the form (2.1.4).

Remark 2.1.1 $(I + l_1 e_1^T + \cdots + l_{n-1} e_{n-1}^T)^{-1} = (I - l_{n-1} e_{n-1}^T) \cdots (I - l_1 e_1^T)$, which cannot be simplified as in (2.1.6).

2.2 LR-factorization

Definition 2.2.1 Given $A \in \mathbb{C}^{n \times n}$, a lower triangular matrix $L$ and an upper triangular matrix $R$ with $A = LR$: the product $LR$ is called an LR-factorization (or LR-decomposition) of $A$.

Basic problem: Given $b \neq 0$, $b \in \mathbb{K}^n$, find a vector $l_1 = [0, l_{21}, \ldots, l_{n1}]^T$ and $c \in \mathbb{K}$ such that
$$ (I - l_1 e_1^T) b = c e_1. $$
Solution: $b_1 = c$ and $b_i - l_{i1} b_1 = 0$ for $i = 2, \ldots, n$. If $b_1 = 0$, there is no solution (since $b \neq 0$); if $b_1 \neq 0$, then $c = b_1$ and $l_{i1} = b_i / b_1$, $i = 2, \ldots, n$.
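A minimal sketch of the basic problem (assuming numpy; `gauss_vector` is a name I introduce): given $b$ with $b_1 \neq 0$, it forms $l_1$ and checks that $(I - l_1 e_1^T) b = b_1 e_1$.

```python
import numpy as np

# Build l_1 = [0, b_2/b_1, ..., b_n/b_1]^T so that (I - l_1 e_1^T) b = b_1 e_1.
def gauss_vector(b):
    if b[0] == 0:
        raise ValueError("no solution: b_1 = 0")
    l1 = np.zeros_like(b, dtype=float)
    l1[1:] = b[1:] / b[0]
    return l1

b = np.array([2.0, 4.0, -6.0, 1.0])
l1 = gauss_vector(b)
L1 = np.eye(4) - np.outer(l1, np.eye(4)[0])    # I - l_1 e_1^T
print(L1 @ b)                                  # [2, 0, 0, 0] = b_1 * e_1
```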

Construction of the LR-factorization: Let $A = A^{(0)} = [a_1^{(0)} \mid \cdots \mid a_n^{(0)}]$. Apply the basic problem to $a_1^{(0)}$: if $a_{11}^{(0)} \neq 0$, then there exists $L_1 = I - l_1 e_1^T$ such that $(I - l_1 e_1^T) a_1^{(0)} = a_{11}^{(0)} e_1$. Thus
$$ A^{(1)} = L_1 A^{(0)} = [L_1 a_1^{(0)}, \ldots, L_1 a_n^{(0)}] = \begin{bmatrix} a_{11}^{(0)} & a_{12}^{(0)} & \cdots & a_{1n}^{(0)} \\ 0 & a_{22}^{(1)} & \cdots & a_{2n}^{(1)} \\ \vdots & \vdots & & \vdots \\ 0 & a_{n2}^{(1)} & \cdots & a_{nn}^{(1)} \end{bmatrix}. \tag{2.2.1} $$
The $i$-th step:
$$ A^{(i)} = L_i A^{(i-1)} = L_i L_{i-1} \cdots L_1 A^{(0)} = \begin{bmatrix} a_{11}^{(0)} & & \cdots & & a_{1n}^{(0)} \\ 0 & a_{22}^{(1)} & & & a_{2n}^{(1)} \\ \vdots & \ddots & a_{ii}^{(i-1)} & \cdots & a_{in}^{(i-1)} \\ \vdots & & 0 & a_{i+1,i+1}^{(i)} & a_{i+1,n}^{(i)} \\ 0 & \cdots & 0 & a_{n,i+1}^{(i)} & a_{nn}^{(i)} \end{bmatrix}. \tag{2.2.2} $$
If $a_{ii}^{(i-1)} \neq 0$ for $i = 1, \ldots, n-1$, then the method is executable and we have that
$$ A^{(n-1)} = L_{n-1} \cdots L_1 A^{(0)} = R \tag{2.2.3} $$
is an upper triangular matrix. Thus, $A = LR$.

Explicit representation of $L$: $L_i = I - l_i e_i^T$, $L_i^{-1} = I + l_i e_i^T$,
$$ L = L_1^{-1} \cdots L_{n-1}^{-1} = (I + l_1 e_1^T) \cdots (I + l_{n-1} e_{n-1}^T) = I + l_1 e_1^T + \cdots + l_{n-1} e_{n-1}^T \quad \text{(by (2.1.6))}. $$

Theorem 2.2.1 Let $A$ be nonsingular. Then $A$ has an LR-factorization ($A = LR$) if and only if $k_i := \det(A_i) \neq 0$ for $i = 1, \ldots, n$, where $A_i$ is the leading principal submatrix of $A$, i.e.,
$$ A_i = \begin{bmatrix} a_{11} & \cdots & a_{1i} \\ \vdots & & \vdots \\ a_{i1} & \cdots & a_{ii} \end{bmatrix}. $$

Proof: (Necessity) Since $A = LR$, we have
$$ \begin{bmatrix} a_{11} & \cdots & a_{1i} \\ \vdots & & \vdots \\ a_{i1} & \cdots & a_{ii} \end{bmatrix} = \begin{bmatrix} l_{11} & & O \\ \vdots & \ddots & \\ l_{i1} & \cdots & l_{ii} \end{bmatrix} \begin{bmatrix} r_{11} & \cdots & r_{1i} \\ & \ddots & \vdots \\ O & & r_{ii} \end{bmatrix}. $$
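The construction above translates directly into code. The following sketch (numpy assumed; `lr_factor` is my name, and no pivoting is performed, so it stops when some $a_{ii}^{(i-1)} = 0$) collects the multipliers $l_i$ into $L$ and returns $A^{(n-1)} = R$:

```python
import numpy as np

# Apply L_i = I - l_i e_i^T for i = 1, ..., n-1 and collect the multipliers.
def lr_factor(A):
    A = A.astype(float).copy()
    n = A.shape[0]
    L = np.eye(n)
    for i in range(n - 1):
        if A[i, i] == 0:
            raise ZeroDivisionError("a_ii^(i-1) = 0: LR-factorization fails")
        L[i+1:, i] = A[i+1:, i] / A[i, i]              # entries of l_i
        A[i+1:, i:] -= np.outer(L[i+1:, i], A[i, i:])  # A^(i) = L_i A^(i-1)
    return L, np.triu(A)                               # A^(n-1) = R

A = np.array([[4., 3., 2.], [2., 4., 1.], [2., 1., 5.]])
L, R = lr_factor(A)
print(np.allclose(L @ R, A))                           # True
```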

From $\det(A) \neq 0$ follows $\det(L) \neq 0$ and $\det(R) \neq 0$. Thus $l_{jj} \neq 0$ and $r_{jj} \neq 0$ for $j = 1, \ldots, n$. Hence $k_i = l_{11} \cdots l_{ii}\, r_{11} \cdots r_{ii} \neq 0$.

(Sufficiency) From (2.2.2) we have $A^{(0)} = (L_1^{-1} \cdots L_i^{-1}) A^{(i)}$. Consider the $(i+1)$-st leading principal determinant. From (2.2.2) we have
$$ \begin{vmatrix} a_{11} & \cdots & a_{1,i+1} \\ \vdots & & \vdots \\ a_{i+1,1} & \cdots & a_{i+1,i+1} \end{vmatrix} = \det \left( \begin{bmatrix} 1 & & & 0 \\ l_{21} & 1 & & \\ \vdots & \ddots & \ddots & \\ l_{i+1,1} & \cdots & l_{i+1,i} & 1 \end{bmatrix} \begin{bmatrix} a_{11}^{(0)} & a_{12}^{(0)} & & \cdots \\ & a_{22}^{(1)} & & \\ & & \ddots & a_{i,i+1}^{(i-1)} \\ & & 0 & a_{i+1,i+1}^{(i)} \end{bmatrix} \right). $$
Thus, $k_{i+1} = a_{11}^{(0)} a_{22}^{(1)} \cdots a_{i+1,i+1}^{(i)} \neq 0$, which implies $a_{i+1,i+1}^{(i)} \neq 0$. Therefore, the LR-factorization of $A$ exists.

Theorem 2.2.2 If a nonsingular matrix $A$ has an LR-factorization with $A = LR$ and $l_{11} = \cdots = l_{nn} = 1$, then the factorization is unique.

Proof: Let $A = L_1 R_1 = L_2 R_2$. Then $L_2^{-1} L_1 = R_2 R_1^{-1}$ is both unit lower triangular and upper triangular, hence $L_2^{-1} L_1 = R_2 R_1^{-1} = I$.

Corollary 2.2.1 A nonsingular matrix $A$ has a factorization $A = LDR$, where $D$ is diagonal and $L$ and $R^T$ are unit lower triangular (with ones on the diagonal), if and only if $k_i \neq 0$ for $i = 1, \ldots, n$.

Theorem 2.2.3 Let $A$ be nonsingular. Then there exists a permutation $P$ such that $PA$ has an LR-factorization.

Proof: By construction! Consider (2.2.2): there is a permutation $P_i$, which interchanges the $i$-th row with a row of index larger than $i$, such that $a_{ii}^{(i-1)} \neq 0$ (in $P_i A^{(i-1)}$). This procedure is executable for $i = 1, \ldots, n-1$. So we have
$$ L_{n-1} P_{n-1} \cdots L_i P_i \cdots L_1 P_1 A^{(0)} = R. \tag{2.2.4} $$
Let $P$ be a permutation which affects only elements $i+1, \ldots, n$. It holds that
$$ P(I - l_i e_i^T) P^{-1} = I - (P l_i) e_i^T = I - \tilde l_i e_i^T = \tilde L_i \qquad (e_i^T P^{-1} = e_i^T), $$
where $\tilde L_i$ is lower triangular. Hence we have
$$ P L_i = \tilde L_i P. \tag{2.2.5} $$
Now write all $P_i$ in (2.2.4) to the right as
$$ \tilde L_{n-1} \tilde L_{n-2} \cdots \tilde L_1 P_{n-1} \cdots P_1 A^{(0)} = R. $$
Then we have $PA = LR$ with $L^{-1} = \tilde L_{n-1} \tilde L_{n-2} \cdots \tilde L_1$ and $P = P_{n-1} \cdots P_1$.

2.3 Gaussian elimination

2.3.1 Practical implementation

Given a linear system
$$ Ax = b \tag{2.3.1} $$
with $A$ nonsingular. We first assume that $A$ has an LR-factorization, i.e., $A = LR$. Thus $LRx = b$. We then (i) solve $Ly = b$; (ii) solve $Rx = y$. These imply $LRx = Ly = b$. From (2.2.3), we have
$$ L_{n-1} \cdots L_2 L_1 (A \mid b) = (R \mid L^{-1} b). $$

Algorithm 2.3.1 (Gaussian elimination without permutation)
For $k = 1, \ldots, n-1$:
  if $a_{kk} = 0$, then stop (*); else
  $\omega_j := a_{kj}$ ($j = k+1, \ldots, n$);
  for $i = k+1, \ldots, n$:
    $\eta := a_{ik}/a_{kk}$, $a_{ik} := \eta$;
    for $j = k+1, \ldots, n$: $a_{ij} := a_{ij} - \eta \omega_j$;
    $b_i := b_i - \eta b_k$.
For $x$ (back substitution!): $x_n = b_n / a_{nn}$; for $i = n-1, n-2, \ldots, 1$: $x_i = (b_i - \sum_{j=i+1}^n a_{ij} x_j)/a_{ii}$.

Cost of computation (one multiplication + one addition = one flop):
(i) LR-factorization: $n^3/3 - n/3$ flops;
(ii) computation of $y$: $n(n-1)/2$ flops;
(iii) computation of $x$: $n(n+1)/2$ flops.
For $A^{-1}$: $4n^3/3$ flops, since $n^3/3 + kn^2$ with $k = n$ linear systems.

Pivoting: (a) partial pivoting; (b) complete pivoting. From (2.2.2), we have
$$ A^{(k-1)} = \begin{bmatrix} a_{11}^{(0)} & & \cdots & & a_{1n}^{(0)} \\ & \ddots & & & \vdots \\ & & a_{k-1,k-1}^{(k-2)} & \cdots & a_{k-1,n}^{(k-2)} \\ & & a_{kk}^{(k-1)} & \cdots & a_{kn}^{(k-1)} \\ & 0 & \vdots & & \vdots \\ & & a_{nk}^{(k-1)} & \cdots & a_{nn}^{(k-1)} \end{bmatrix}. $$
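A direct transcription of Algorithm 2.3.1 together with back substitution might look as follows (a sketch assuming numpy; as in the algorithm, the multipliers $\eta$ are stored in the strict lower triangle of the working copy of $A$):

```python
import numpy as np

def gauss_solve(A, b):
    A, b, n = A.astype(float), b.astype(float), len(b)   # astype copies
    for k in range(n - 1):
        if A[k, k] == 0:
            raise ZeroDivisionError("zero pivot (*)")
        for i in range(k + 1, n):
            eta = A[i, k] / A[k, k]
            A[i, k] = eta                     # store the multiplier
            A[i, k+1:] -= eta * A[k, k+1:]    # a_ij := a_ij - eta * w_j
            b[i] -= eta * b[k]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):            # back substitution
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[2., 1., 1.], [4., 3., 3.], [8., 7., 9.]])
b = np.array([1., 2., 5.])
print(np.allclose(A @ gauss_solve(A, b), b))  # True
```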

For (a): Find $p \in \{k, \ldots, n\}$ such that
$$ |a_{pk}| = \max_{k \le i \le n} |a_{ik}| \qquad (r_k = p) \tag{2.3.2} $$
and swap $a_{kj}, b_k$ and $a_{pj}, b_p$ respectively ($j = 1, \ldots, n$). Replacing (*) in Algorithm 2.3.1 by (2.3.2), we have a new factorization of $A$ with partial pivoting, i.e., $PA = LR$ (by Theorem 2.2.3) with $|l_{ij}| \le 1$ for $i, j = 1, \ldots, n$. For solving the linear system $Ax = b$, we use
$$ PAx = Pb \implies L(Rx) = Pb =: \tilde b. $$
It needs an extra $n(n-1)/2$ comparisons.

For (b): Find $p, q \in \{k, \ldots, n\}$ such that
$$ |a_{pq}| = \max_{k \le i, j \le n} |a_{ij}| \qquad (r_k := p,\ c_k := q), \tag{2.3.3} $$
swap $a_{kj}, b_k$ and $a_{pj}, b_p$ respectively ($j = k, \ldots, n$), and swap $a_{ik}$ and $a_{iq}$ ($i = 1, \ldots, n$). Replacing (*) in Algorithm 2.3.1 by (2.3.3), we also have a new factorization of $A$ with complete pivoting, i.e., $PA\Pi = LR$ (by Theorem 2.2.3) with $|l_{ij}| \le 1$ for $i, j = 1, \ldots, n$. For solving the linear system $Ax = b$, we use
$$ PA\Pi(\Pi^T x) = Pb \implies LR\tilde x = \tilde b, \quad x = \Pi \tilde x. $$
It needs $n^3/3$ comparisons.

Example 2.3.1 Let $A = \begin{bmatrix} 10^{-4} & 1 \\ 1 & 1 \end{bmatrix}$ in three-decimal-digit floating point arithmetic. Then $\kappa(A) = \|A\|_\infty \|A^{-1}\|_\infty \approx 4$, so $A$ is well-conditioned.

Without pivoting:
$$ L = \begin{bmatrix} 1 & 0 \\ fl(1/10^{-4}) & 1 \end{bmatrix}, \quad fl(1/10^{-4}) = 10^4, \qquad R = \begin{bmatrix} 10^{-4} & 1 \\ 0 & fl(1 - 10^4 \cdot 1) \end{bmatrix}, \quad fl(1 - 10^4) = -10^4, $$
$$ LR = \begin{bmatrix} 1 & 0 \\ 10^4 & 1 \end{bmatrix} \begin{bmatrix} 10^{-4} & 1 \\ 0 & -10^4 \end{bmatrix} = \begin{bmatrix} 10^{-4} & 1 \\ 1 & 0 \end{bmatrix} \neq A. $$
Here $a_{22}$ is entirely lost from the computation. It is numerically unstable. Let $Ax = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$. Then $x \approx \begin{bmatrix} 1 \\ 1 \end{bmatrix}$. But $Ly = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$ gives $y_1 = 1$ and $y_2 = fl(2 - 10^4 \cdot 1) = -10^4$; $R\hat x = y$ gives $\hat x_2 = fl((-10^4)/(-10^4)) = 1$ and $\hat x_1 = fl((1 - 1)/10^{-4}) = 0$. We have an erroneous solution with $\operatorname{cond}(L), \operatorname{cond}(R) \approx 10^8$.

Partial pivoting:
$$ L = \begin{bmatrix} 1 & 0 \\ fl(10^{-4}/1) & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 10^{-4} & 1 \end{bmatrix}, \qquad R = \begin{bmatrix} 1 & 1 \\ 0 & fl(1 - 10^{-4} \cdot 1) \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}. $$
$L$ and $R$ are both well-conditioned.
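Example 2.3.1 can be imitated in IEEE single precision, which here plays the role of the three-decimal-digit arithmetic: with the pivot shrunk to $10^{-8}$, the update $fl(1 - 10^8)$ loses $a_{22}$ in exactly the same way, and exchanging the rows repairs it. A sketch (numpy assumed, names mine):

```python
import numpy as np

eps_pivot = np.float32(1e-8)
A = np.array([[eps_pivot, 1.], [1., 1.]], dtype=np.float32)
b = np.array([1., 2.], dtype=np.float32)

def solve2(A, b):                     # 2x2 elimination without pivoting
    eta = A[1, 0] / A[0, 0]
    r22 = A[1, 1] - eta * A[0, 1]     # 1 - 1e8 rounds to -1e8: a_22 is lost
    y2 = b[1] - eta * b[0]
    x2 = y2 / r22
    x1 = (b[0] - A[0, 1] * x2) / A[0, 0]
    return np.array([x1, x2])

print(solve2(A, b))                   # approx [0, 1]: x_1 is wrong
print(solve2(A[::-1], b[::-1]))       # rows exchanged: approx [1, 1], correct
```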

2.3.2 LDR- and $LL^T$-factorizations

Let $A = LDR$ as in Corollary 2.2.1.

Algorithm 2.3.2 (Crout's factorization or compact method)
For $k = 1, \ldots, n$:
  for $p = 1, 2, \ldots, k-1$: $r_p := d_p a_{pk}$, $\omega_p := a_{kp} d_p$;
  $d_k := a_{kk} - \sum_{p=1}^{k-1} a_{kp} r_p$;
  if $d_k = 0$, then stop; else
  for $i = k+1, \ldots, n$:
    $a_{ik} := (a_{ik} - \sum_{p=1}^{k-1} a_{ip} r_p)/d_k$,
    $a_{ki} := (a_{ki} - \sum_{p=1}^{k-1} \omega_p a_{pi})/d_k$.
Cost: $n^3/3$ flops.

With partial pivoting: see Wilkinson, The Algebraic Eigenvalue Problem, pp. 225 ff. Advantage: one can use double precision for the inner products.

Theorem 2.3.1 If $A$ is nonsingular, real and symmetric, then $A$ has a unique $LDL^T$-factorization, where $D$ is diagonal and $L$ is a unit lower triangular matrix (with ones on the diagonal).

Proof: $A = LDR = A^T = R^T D L^T$. It implies $L = R^T$.

Theorem 2.3.2 If $A$ is symmetric and positive definite, then there exists a lower triangular $G \in \mathbb{R}^{n \times n}$ with positive diagonal elements such that $A = GG^T$.

Proof: $A$ is symmetric positive definite $\iff$ $x^T A x > 0$ for each nonzero vector $x \in \mathbb{R}^n$ $\iff$ $k_i > 0$ for $i = 1, \ldots, n$ $\iff$ all eigenvalues of $A$ are positive. From Corollary 2.2.1 and Theorem 2.3.1 we have $A = LDL^T$. From $L^{-1} A L^{-T} = D$ follows $d_k = (e_k^T L^{-1}) A (L^{-T} e_k) > 0$. Thus $G = L \operatorname{diag}\{d_1^{1/2}, \ldots, d_n^{1/2}\}$ is real, and then $A = GG^T$.

Algorithm 2.3.3 (Cholesky factorization) Let $A$ be symmetric positive definite. To find a lower triangular matrix $G$ such that $A = GG^T$:
For $k = 1, 2, \ldots, n$:
  $a_{kk} := (a_{kk} - \sum_{p=1}^{k-1} a_{kp}^2)^{1/2}$;
  for $i = k+1, \ldots, n$: $a_{ik} := (a_{ik} - \sum_{p=1}^{k-1} a_{ip} a_{kp})/a_{kk}$.
Cost: $n^3/6$ flops.

Remark 2.3.1 For solving symmetric indefinite systems, see Golub/Van Loan, Matrix Computations: $PAP^T = LDL^T$, where $D$ is a block-diagonal matrix with $1 \times 1$ or $2 \times 2$ blocks, $P$ is a permutation and $L$ is lower triangular with ones on the diagonal.
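Algorithm 2.3.3 in code (a sketch assuming numpy): the lower triangle of a symmetric positive definite $A$ is overwritten by $G$ with $A = GG^T$, and the result can be checked by multiplying back.

```python
import numpy as np

def cholesky(A):
    A = A.astype(float)                   # astype copies; A is overwritten
    n = A.shape[0]
    for k in range(n):
        A[k, k] = np.sqrt(A[k, k] - A[k, :k] @ A[k, :k])
        for i in range(k + 1, n):
            A[i, k] = (A[i, k] - A[i, :k] @ A[k, :k]) / A[k, k]
    return np.tril(A)                     # G sits in the lower triangle

A = np.array([[4., 2., 2.], [2., 5., 3.], [2., 3., 6.]])
G = cholesky(A)
print(np.allclose(G @ G.T, A))            # True
```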

2.3.3 Error estimation for linear systems

Consider the linear system
$$ Ax = b \tag{2.3.4} $$
and the perturbed linear system
$$ (A + \delta A)(x + \delta x) = b + \delta b, \tag{2.3.5} $$
where $\delta A$ and $\delta b$ are errors of measurement or of round-off in the factorization.

Definition 2.3.1 Let $\|\cdot\|$ be an operator norm and $A$ be nonsingular. Then $\kappa \equiv \kappa(A) = \|A\| \|A^{-1}\|$ is called the condition number of $A$ corresponding to $\|\cdot\|$.

Theorem 2.3.3 (Forward error bound) Let $x$ be the solution of (2.3.4) and $x + \delta x$ be the solution of the perturbed linear system (2.3.5). If $\|\delta A\| \|A^{-1}\| < 1$, then
$$ \frac{\|\delta x\|}{\|x\|} \le \frac{\kappa}{1 - \kappa \frac{\|\delta A\|}{\|A\|}} \left( \frac{\|\delta A\|}{\|A\|} + \frac{\|\delta b\|}{\|b\|} \right). \tag{2.3.6} $$

Proof: From (2.3.5) we have $(A + \delta A)\delta x + Ax + (\delta A)x = b + \delta b$. Thus,
$$ \delta x = -(A + \delta A)^{-1} [(\delta A)x - \delta b]. \tag{2.3.7} $$
Here $(A + \delta A)^{-1}$ exists, since $\|A^{-1} \delta A\| < 1$ (Neumann series). Now,
$$ \|(A + \delta A)^{-1}\| = \|(I + A^{-1}\delta A)^{-1} A^{-1}\| \le \frac{\|A^{-1}\|}{1 - \|A^{-1}\| \|\delta A\|}. $$
On the other hand, $b = Ax$ implies $\|b\| \le \|A\| \|x\|$. So,
$$ \frac{1}{\|x\|} \le \frac{\|A\|}{\|b\|}. \tag{2.3.8} $$
From (2.3.7) follows $\|\delta x\| \le \frac{\|A^{-1}\|}{1 - \|A^{-1}\| \|\delta A\|} (\|\delta A\| \|x\| + \|\delta b\|)$. By using (2.3.8), the inequality (2.3.6) is proved.

Remark 2.3.2 If $\kappa(A)$ is large, then $A$ (for the linear system $Ax = b$) is called ill-conditioned, else well-conditioned.

2.3.4 Error analysis for the Gaussian algorithm

A computer is characterized by four integers: (a) the machine base $\beta$; (b) the precision $t$; (c) the underflow limit $L$; (d) the overflow limit $U$. Define the set of floating point numbers
$$ F = \{ f = \pm 0.d_1 d_2 \cdots d_t \times \beta^e \mid 0 \le d_i < \beta,\ d_1 \neq 0,\ L \le e \le U \} \cup \{0\}. \tag{2.3.9} $$
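The forward bound (2.3.6) can be checked numerically. The following sketch (numpy assumed, with the $\infty$-norm) perturbs a random system and compares $\|\delta x\|/\|x\|$ with the right-hand side of (2.3.6):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)); b = rng.standard_normal(n)
dA = 1e-8 * rng.standard_normal((n, n)); db = 1e-8 * rng.standard_normal(n)

x = np.linalg.solve(A, b)
dx = np.linalg.solve(A + dA, b + db) - x

norm = lambda M: np.linalg.norm(M, np.inf)
kappa = norm(A) * norm(np.linalg.inv(A))
bound = kappa / (1 - kappa * norm(dA) / norm(A)) \
        * (norm(dA) / norm(A) + norm(db) / norm(b))
print(norm(dx) / norm(x) <= bound)    # True: the perturbation obeys (2.3.6)
```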

Let $G = \{x \in \mathbb{R} \mid m \le |x| \le M\} \cup \{0\}$, where $m = \beta^{L-1}$ and $M = \beta^U(1 - \beta^{-t})$ are the minimal and maximal numbers of $F \setminus \{0\}$ in absolute value, respectively. We define an operator $fl : G \to F$ by
$$ fl(x) = \text{the nearest } c \in F \text{ to } x \text{ by rounding arithmetic.} $$
One can show that $fl$ satisfies
$$ fl(x) = x(1 + \varepsilon), \qquad |\varepsilon| \le eps, \tag{2.3.10} $$
where $eps = \frac{1}{2}\beta^{1-t}$. (If $\beta = 2$, then $eps = 2^{-t}$.) It follows that
$$ fl(a \circ b) = (a \circ b)(1 + \varepsilon) \quad \text{or} \quad fl(a \circ b) = (a \circ b)/(1 + \varepsilon), $$
where $|\varepsilon| \le eps$ and $\circ = +, -, \times, /$.

Algorithm 2.3.4 Given $x, y \in \mathbb{R}^n$. The following algorithm computes $x^T y$ and stores the result in $s$:
$s = 0$; for $k = 1, \ldots, n$: $s = s + x_k y_k$.

Theorem 2.3.4 If $n 2^{-t} \le 0.01$, then
$$ fl\left( \sum_{k=1}^n x_k y_k \right) = \sum_{k=1}^n x_k y_k \left[ 1 + 1.01 (n + 2 - k)\theta_k 2^{-t} \right], \qquad |\theta_k| \le 1. $$

Proof: Let $s_p = fl(\sum_{k=1}^p x_k y_k)$ be the partial sum in Algorithm 2.3.4. Then $s_1 = x_1 y_1 (1 + \delta_1)$ with $|\delta_1| \le eps$, and for $p = 2, \ldots, n$,
$$ s_p = fl[s_{p-1} + fl(x_p y_p)] = [s_{p-1} + x_p y_p (1 + \delta_p)](1 + \varepsilon_p) $$
with $|\delta_p|, |\varepsilon_p| \le eps$. Therefore
$$ fl(x^T y) = s_n = \sum_{k=1}^n x_k y_k (1 + \gamma_k), $$
where $1 + \gamma_k = (1 + \delta_k) \prod_{j=k}^n (1 + \varepsilon_j)$ and $\varepsilon_1 \equiv 0$. Thus,
$$ fl\left( \sum_{k=1}^n x_k y_k \right) = \sum_{k=1}^n x_k y_k \left[ 1 + 1.01 (n + 2 - k)\theta_k 2^{-t} \right]. \tag{2.3.11} $$
The result follows immediately from the following useful lemma.
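Theorem 2.3.4 can be observed in practice by accumulating the sum of Algorithm 2.3.4 in single precision ($t = 24$) and comparing against the double-precision value; a sketch (numpy assumed, names mine):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000                              # n * 2^-24 <= 0.01 as required
x = rng.random(n).astype(np.float32)
y = rng.random(n).astype(np.float32)

s = np.float32(0.0)
for k in range(n):                    # Algorithm 2.3.4 in float32
    s = s + x[k] * y[k]

exact = np.dot(x.astype(np.float64), y.astype(np.float64))
eps = np.float32(2.0) ** -24
k = np.arange(1, n + 1)
# termwise bound |gamma_k| <= 1.01 (n + 2 - k) 2^-t from Theorem 2.3.4
bound = np.sum(np.abs(x * y).astype(np.float64) * 1.01 * (n + 2 - k) * eps)
print(abs(float(s) - exact) <= bound)   # True
```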

Lemma 2.3.5 If $1 + \alpha = \prod_{k=1}^n (1 + \alpha_k)$, where $|\alpha_k| \le 2^{-t}$ and $n 2^{-t} \le 0.01$, then
$$ \prod_{k=1}^n (1 + \alpha_k) = 1 + 1.01\, n \theta 2^{-t} \quad \text{with } |\theta| \le 1. $$

Proof: From the assumption it is easily seen that
$$ (1 - 2^{-t})^n \le \prod_{k=1}^n (1 + \alpha_k) \le (1 + 2^{-t})^n. \tag{2.3.12} $$
Expanding $(1 - x)^n$ by Taylor's theorem for $-1 < x < 1$, we get
$$ (1 - x)^n = 1 - nx + \frac{n(n-1)}{2}(1 - \theta x)^{n-2} x^2 \ge 1 - nx. $$
Hence
$$ (1 - 2^{-t})^n \ge 1 - n 2^{-t}. \tag{2.3.13} $$
Now we estimate the upper bound of $(1 + 2^{-t})^n$. If $0 \le x \le 0.01$, then
$$ 1 + x \le e^x = 1 + x + \frac{x^2}{2}\left(1 + \frac{x}{3} + \frac{x^2}{12} + \cdots\right) \le 1 + x + \frac{x^2}{2} e^x \le 1 + x + x^2 \le 1 + 1.01x \tag{2.3.14} $$
(here we use the fact $e^{0.01} < 2$ in the third inequality, and $x^2 \le 0.01 x$ in the last). Let $x = 2^{-t}$. Then the left inequality of (2.3.14) implies
$$ (1 + 2^{-t})^n \le e^{2^{-t} n}. \tag{2.3.15} $$
Let $x = 2^{-t} n$. Then the second inequality of (2.3.14) implies
$$ e^{2^{-t} n} \le 1 + 1.01\, n 2^{-t}. \tag{2.3.16} $$
From (2.3.15) and (2.3.16) we have
$$ (1 + 2^{-t})^n \le 1 + 1.01\, n 2^{-t}. $$

Let the exact LR-factorization of $A$ be $L$ and $R$ ($A = LR$), and let $\tilde L, \tilde R$ be the LR-factorization of $A$ computed by the Gaussian algorithm (without pivoting). There are two possibilities:
(i) Forward error analysis: estimate $|L - \tilde L|$ and $|R - \tilde R|$.
(ii) Backward error analysis: let $\tilde L \tilde R$ be the exact LR-factorization of a perturbed matrix $\tilde A = A + F$. Then $F$ is to be estimated, i.e., $\|F\| \le\ ?$

2.3.5 A priori error estimate for the backward error bound of the LR-factorization

From (2.2.2) we have
$$ A^{(k+1)} = L_k A^{(k)}, \qquad k = 1, 2, \ldots, n-1 \quad (A^{(1)} = A). $$
Denote the entries of $A^{(k)}$ by $a_{ij}^{(k)}$ and let $l_{ik} = fl(a_{ik}^{(k)}/a_{kk}^{(k)})$, $i \ge k+1$. From (2.2.2) we know that
$$ a_{ij}^{(k+1)} = \begin{cases} 0, & i \ge k+1,\ j = k, \\ fl\big(a_{ij}^{(k)} - fl(l_{ik} a_{kj}^{(k)})\big), & i \ge k+1,\ j \ge k+1, \\ a_{ij}^{(k)}, & \text{otherwise}. \end{cases} \tag{2.3.17} $$
From (2.3.10) we have $l_{ik} = (a_{ik}^{(k)}/a_{kk}^{(k)})(1 + \delta_{ik})$ with $|\delta_{ik}| \le 2^{-t}$. Then
$$ a_{ik}^{(k)} - l_{ik} a_{kk}^{(k)} + a_{ik}^{(k)} \delta_{ik} = 0, \qquad i \ge k+1. \tag{2.3.18} $$
Let $\varepsilon_{ik}^{(k)} = a_{ik}^{(k)} \delta_{ik}$. From (2.3.10) we also have
$$ a_{ij}^{(k+1)} = fl\big(a_{ij}^{(k)} - fl(l_{ik} a_{kj}^{(k)})\big) = \big(a_{ij}^{(k)} - l_{ik} a_{kj}^{(k)} (1 + \delta_{ij})\big)/(1 + \delta'_{ij}) \tag{2.3.19} $$
with $|\delta_{ij}|, |\delta'_{ij}| \le 2^{-t}$. Then
$$ a_{ij}^{(k+1)} = a_{ij}^{(k)} - l_{ik} a_{kj}^{(k)} - l_{ik} a_{kj}^{(k)} \delta_{ij} + a_{ij}^{(k+1)} \delta'_{ij}, \qquad i, j \ge k+1. \tag{2.3.20} $$
Let $\varepsilon_{ij}^{(k)} = -l_{ik} a_{kj}^{(k)} \delta_{ij} + a_{ij}^{(k+1)} \delta'_{ij}$, which is the computational error of $a_{ij}^{(k)}$ in $A^{(k+1)}$. From (2.3.17), (2.3.18) and (2.3.20) we obtain
$$ a_{ij}^{(k+1)} = \begin{cases} a_{ij}^{(k)} - l_{ik} a_{kk}^{(k)} + \varepsilon_{ij}^{(k)}, & i \ge k+1,\ j = k, \\ a_{ij}^{(k)} - l_{ik} a_{kj}^{(k)} + \varepsilon_{ij}^{(k)}, & i \ge k+1,\ j \ge k+1, \\ a_{ij}^{(k)} + \varepsilon_{ij}^{(k)}, & \text{otherwise}, \end{cases} \tag{2.3.21} $$
where
$$ \varepsilon_{ij}^{(k)} = \begin{cases} a_{ij}^{(k)} \delta_{ij}, & i \ge k+1,\ j = k, \\ -l_{ik} a_{kj}^{(k)} \delta_{ij} + a_{ij}^{(k+1)} \delta'_{ij}, & i \ge k+1,\ j \ge k+1, \\ 0, & \text{otherwise}. \end{cases} \tag{2.3.22} $$
Let $E^{(k)}$ be the error matrix with entries $\varepsilon_{ij}^{(k)}$. Then (2.3.21) can be written as
$$ A^{(k+1)} = A^{(k)} - M_k A^{(k)} + E^{(k)}, \tag{2.3.23} $$
where
$$ M_k = \begin{bmatrix} 0 & & & & \\ & \ddots & & & \\ & & 0 & & \\ & & l_{k+1,k} & \ddots & \\ & & \vdots & & \\ & & l_{n,k} & & 0 \end{bmatrix}. \tag{2.3.24} $$

For $k = 1, 2, \ldots, n-1$, we add the $n-1$ equations in (2.3.23) together and get
$$ M_1 A^{(1)} + M_2 A^{(2)} + \cdots + M_{n-1} A^{(n-1)} + A^{(n)} = A^{(1)} + E^{(1)} + \cdots + E^{(n-1)}. $$
From (2.3.17) we know that the $k$-th row of $A^{(k)}$ is equal to the $k$-th row of $A^{(k+1)}, \ldots, A^{(n)}$, respectively, and from (2.3.24) we also have
$$ M_k A^{(k)} = M_k A^{(n)} = M_k R. $$
Thus,
$$ (M_1 + M_2 + \cdots + M_{n-1} + I)\, R = A^{(1)} + E^{(1)} + \cdots + E^{(n-1)}. $$
Then
$$ \tilde L \tilde R = A + E, \tag{2.3.25} $$
where
$$ \tilde L = \begin{bmatrix} 1 & & & O \\ l_{21} & 1 & & \\ \vdots & \ddots & \ddots & \\ l_{n1} & \cdots & l_{n,n-1} & 1 \end{bmatrix} \quad \text{and} \quad E = E^{(1)} + \cdots + E^{(n-1)}. \tag{2.3.26} $$

Now we assume that the partial pivoting in Gaussian elimination is already arranged such that the pivot element $a_{kk}^{(k)}$ has the maximal absolute value. So we have $|l_{ik}| \le 1$. Let
$$ \rho = \max_{i,j,k} |a_{ij}^{(k)}| / \|A\|_\infty. \tag{2.3.27} $$
Then
$$ |a_{ij}^{(k)}| \le \rho \|A\|_\infty. \tag{2.3.28} $$
From (2.3.22) and (2.3.28) follows
$$ |\varepsilon_{ij}^{(k)}| \le \begin{cases} \rho \|A\|_\infty 2^{-t}, & i \ge k+1,\ j = k, \\ 2\rho \|A\|_\infty 2^{-t}, & i \ge k+1,\ j \ge k+1, \\ 0, & \text{otherwise}. \end{cases} \tag{2.3.29} $$
Therefore,
$$ |E^{(k)}| \le \rho \|A\|_\infty 2^{-t} \begin{bmatrix} 0 & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & & \vdots & \vdots & & \vdots \\ 0 & \cdots & 0 & 0 & \cdots & 0 \\ 0 & \cdots & 1 & 2 & \cdots & 2 \\ \vdots & & \vdots & \vdots & & \vdots \\ 0 & \cdots & 1 & 2 & \cdots & 2 \end{bmatrix} \tag{2.3.30} $$
(the nonzero rows are rows $k+1, \ldots, n$, and the column of ones is column $k$). From (2.3.26) we get
$$ |E| \le \rho \|A\|_\infty 2^{-t} \begin{bmatrix} 0 & 0 & 0 & \cdots & 0 & 0 \\ 1 & 2 & 2 & \cdots & 2 & 2 \\ 1 & 3 & 4 & \cdots & 4 & 4 \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 1 & 3 & 5 & \cdots & 2n-4 & 2n-4 \\ 1 & 3 & 5 & \cdots & 2n-3 & 2n-2 \end{bmatrix}. \tag{2.3.31} $$
Hence we have the following theorem.

Theorem 2.3.6 The LR-factors $\tilde L$ and $\tilde R$ of $A$ computed by Gaussian elimination with partial pivoting satisfy
$$ \tilde L \tilde R = A + E, $$
where
$$ \|E\|_\infty \le n^2 \rho \|A\|_\infty 2^{-t}. \tag{2.3.32} $$
Proof:
$$ \|E\|_\infty \le \rho \|A\|_\infty 2^{-t} \left( \sum_{j=1}^{n-1} (2j - 1) + (2n - 2) \right) = (n^2 - 1)\rho \|A\|_\infty 2^{-t} < n^2 \rho \|A\|_\infty 2^{-t}. $$

Now we shall solve the linear system $Ax = b$ by using the computed factorization $\tilde L$ and $\tilde R$, i.e., $\tilde L y = b$ and $\tilde R x = y$.

For $\tilde L y = b$: From Algorithm 2.3.1 we have
$$ y_1 = fl(b_1 / l_{11}), \qquad y_i = fl\left( \frac{-l_{i1} y_1 - l_{i2} y_2 - \cdots - l_{i,i-1} y_{i-1} + b_i}{l_{ii}} \right), \tag{2.3.33} $$
for $i = 2, 3, \ldots, n$. From (2.3.10) we have $y_1 = b_1 / (l_{11}(1 + \delta_{11}))$ with $|\delta_{11}| \le 2^{-t}$ and
$$ y_i = \frac{fl(-l_{i1} y_1 - l_{i2} y_2 - \cdots - l_{i,i-1} y_{i-1}) + b_i}{l_{ii}(1 + \delta_{ii})(1 + \delta'_{ii})}, \qquad |\delta_{ii}|, |\delta'_{ii}| \le 2^{-t}. \tag{2.3.34} $$
Applying Theorem 2.3.4 we get
$$ fl(-l_{i1} y_1 - l_{i2} y_2 - \cdots - l_{i,i-1} y_{i-1}) = -l_{i1}(1 + \delta_{i1}) y_1 - \cdots - l_{i,i-1}(1 + \delta_{i,i-1}) y_{i-1}, $$
where
$$ |\delta_{i1}| \le (i-1)\, 1.01 \cdot 2^{-t} \ \text{ for } i = 2, 3, \ldots, n; \qquad |\delta_{ij}| \le (i + 1 - j)\, 1.01 \cdot 2^{-t} \ \text{ for } \begin{cases} i = 2, 3, \ldots, n, \\ j = 2, 3, \ldots, i-1. \end{cases} \tag{2.3.35} $$
So, (2.3.34) can be written as
$$ \begin{aligned} & l_{11}(1 + \delta_{11}) y_1 = b_1, \\ & l_{i1}(1 + \delta_{i1}) y_1 + \cdots + l_{i,i-1}(1 + \delta_{i,i-1}) y_{i-1} + l_{ii}(1 + \delta_{ii})(1 + \delta'_{ii}) y_i = b_i, \quad i = 2, 3, \ldots, n, \end{aligned} \tag{2.3.36} $$
or
$$ (\tilde L + \delta \tilde L) y = b. \tag{2.3.37} $$

From (2.3.35), (2.3.36) and (2.3.37) it follows that
$$ |\delta \tilde L| \le 1.01 \cdot 2^{-t} \begin{bmatrix} |l_{11}| & & & & \\ |l_{21}| & 2|l_{22}| & & & \\ 2|l_{31}| & 2|l_{32}| & 2|l_{33}| & & \\ 3|l_{41}| & 3|l_{42}| & 2|l_{43}| & \ddots & \\ \vdots & \vdots & \vdots & & \\ (n-1)|l_{n1}| & (n-1)|l_{n2}| & (n-2)|l_{n3}| & \cdots & 2|l_{n,n-1}| & 2|l_{nn}| \end{bmatrix}. \tag{2.3.38} $$
This implies
$$ \|\delta \tilde L\|_\infty \le \frac{n(n+1)}{2}\, 1.01 \cdot 2^{-t} \max_{i,j} |l_{ij}| \le \frac{n(n+1)}{2}\, 1.01 \cdot 2^{-t}. \tag{2.3.39} $$

Theorem 2.3.7 For a lower triangular linear system $Ly = b$, the computed solution $y$ is the exact solution of $(L + \delta L) y = b$, where $\delta L$ satisfies (2.3.38) and (2.3.39).

Applying Theorem 2.3.7 to the linear systems $\tilde L y = b$ and $\tilde R x = y$, respectively, the computed solution $x$ satisfies
$$ (\tilde L + \delta \tilde L)(\tilde R + \delta \tilde R) x = b, $$
or
$$ \big(\tilde L \tilde R + (\delta \tilde L)\tilde R + \tilde L(\delta \tilde R) + (\delta \tilde L)(\delta \tilde R)\big) x = b. \tag{2.3.40} $$
Since $\tilde L \tilde R = A + E$, substituting this equation into (2.3.40) we get
$$ \big[A + E + (\delta \tilde L)\tilde R + \tilde L(\delta \tilde R) + (\delta \tilde L)(\delta \tilde R)\big] x = b. \tag{2.3.41} $$
The entries of $\tilde L$ and $\tilde R$ satisfy $|l_{ij}| \le 1$ and $|r_{ij}| \le \rho \|A\|_\infty$. Therefore, we get
$$ \|\tilde L\|_\infty \le n, \quad \|\tilde R\|_\infty \le n\rho \|A\|_\infty, \quad \|\delta \tilde L\|_\infty \le \frac{n(n+1)}{2}\, 1.01 \cdot 2^{-t}, \quad \|\delta \tilde R\|_\infty \le \frac{n(n+1)}{2}\, 1.01\, \rho \|A\|_\infty 2^{-t}. \tag{2.3.42} $$
In practical implementation we usually have $n^2 2^{-t} \ll 1$. So it holds that
$$ \|\delta \tilde L\|_\infty \|\delta \tilde R\|_\infty \le n^2 \rho \|A\|_\infty 2^{-t}. $$
Let
$$ \delta A = E + (\delta \tilde L)\tilde R + \tilde L(\delta \tilde R) + (\delta \tilde L)(\delta \tilde R). \tag{2.3.43} $$
Then, by (2.3.32) and (2.3.42) we get
$$ \|\delta A\|_\infty \le \|E\|_\infty + \|\delta \tilde L\|_\infty \|\tilde R\|_\infty + \|\tilde L\|_\infty \|\delta \tilde R\|_\infty + \|\delta \tilde L\|_\infty \|\delta \tilde R\|_\infty \le 1.01 (n^3 + 3n^2) \rho \|A\|_\infty 2^{-t}. \tag{2.3.44} $$

Theorem 2.3.8 For a linear system $Ax = b$, the solution $x$ computed by Gaussian elimination with partial pivoting is the exact solution of the equation $(A + \delta A)x = b$, where $\delta A$ satisfies (2.3.43) and (2.3.44).

Remark 2.3.3 The quantity $\rho$ defined by (2.3.27) is called the growth factor. The growth factor measures how large the numbers become during the process of elimination. In practice, $\rho$ is usually of order 10 for partial pivot selection. But it can be as large as $\rho = 2^{n-1}$, when
$$ A = \begin{bmatrix} 1 & & & & 1 \\ -1 & 1 & & & 1 \\ \vdots & \ddots & \ddots & & \vdots \\ -1 & \cdots & -1 & 1 & 1 \\ -1 & \cdots & \cdots & -1 & 1 \end{bmatrix}. $$
Better estimates hold for special types of matrices. For example, in the case of upper Hessenberg matrices, that is, matrices of the form
$$ A = \begin{bmatrix} \times & \cdots & \cdots & \times \\ \times & \ddots & & \vdots \\ & \ddots & \ddots & \vdots \\ 0 & & \times & \times \end{bmatrix}, $$
the bound $\rho \le n - 1$ can be shown. (Hessenberg matrices arise in eigenvalue problems.) For tridiagonal matrices
$$ A = \begin{bmatrix} \alpha_1 & \beta_2 & & 0 \\ \gamma_2 & \alpha_2 & \ddots & \\ & \ddots & \ddots & \beta_n \\ 0 & & \gamma_n & \alpha_n \end{bmatrix} $$
it can even be shown that $\rho \le 2$ holds for partial pivot selection. Hence, Gaussian elimination is quite numerically stable in this case.

For complete pivot selection, Wilkinson (1965) has shown that
$$ |a_{ij}^{(k)}| \le f(k) \max_{i,j} |a_{ij}| $$
with the function
$$ f(k) := k^{1/2} \left[ 2^{1}\, 3^{1/2}\, 4^{1/3} \cdots k^{1/(k-1)} \right]^{1/2}. $$
This function grows relatively slowly with $k$:

  k      10    20    50    100
  f(k)   19    67    530   3300

Even this estimate is too pessimistic in practice. Up until now, no matrix has been found which fails to satisfy
$$ |a_{ij}^{(k)}| \le (k + 1) \max_{i,j} |a_{ij}|, \qquad k = 1, 2, \ldots, n-1, $$
when complete pivot selection is used. This indicates that Gaussian elimination with complete pivot selection is usually a stable process. Despite this, partial pivot selection is preferred in practice, for the most part, because:
(i) Complete pivot selection is more costly than partial pivot selection. (To compute $A^{(i)}$, the maximum from among $(n - i + 1)^2$ elements must be determined instead of $n - i + 1$ elements as in partial pivot selection.)
(ii) Special structures in a matrix, e.g. the band structure of a tridiagonal matrix, are destroyed in complete pivot selection.

2.3.6 Improving and Estimating Accuracy

Iterative improvement: Suppose that the linear system $Ax = b$ has been solved via the LR-factorization $PA = LR$. Now we want to improve the accuracy of the computed solution $x$. We compute
$$ r = b - Ax, \quad Ly = Pr, \quad Rz = y, \quad x_{new} = x + z. $$
Then in exact arithmetic we have
$$ A x_{new} = A(x + z) = (b - r) + Az = b. \tag{2.3.45} $$
Unfortunately, $r = fl(b - Ax)$ renders an $x_{new}$ that is no more accurate than $x$. It is necessary to compute the residual $b - Ax$ with extended precision floating arithmetic.

Algorithm 2.3.5 Compute $PA = LR$ ($t$-digit);
repeat:
  $r := b - Ax$ ($2t$-digit);
  solve $Ly = Pr$ for $y$ ($t$-digit);
  solve $Rz = y$ for $z$ ($t$-digit);
  update $x = x + z$ ($t$-digit).

This is referred to as iterative improvement. From (2.3.45) we have
$$ r_i = b_i - a_{i1} x_1 - a_{i2} x_2 - \cdots - a_{in} x_n. \tag{2.3.46} $$
Now, $r_i$ can be roughly estimated by $2^{-t} \max_j |a_{ij} x_j|$. That is,
$$ \|r\| \approx 2^{-t} \|A\| \|x\|. \tag{2.3.47} $$
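A sketch of Algorithm 2.3.5 (assuming numpy and scipy's `lu_factor`/`lu_solve` for the $PA = LR$ factorization): float32 plays the role of $t$-digit arithmetic and the residual is accumulated in float64, i.e., with $2t$ digits.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(3)
n = 50
A64 = rng.standard_normal((n, n)); b64 = rng.standard_normal(n)
A, b = A64.astype(np.float32), b64.astype(np.float32)

lu, piv = lu_factor(A)                     # P A = L R in working precision
x = lu_solve((lu, piv), b)
for _ in range(5):
    r = b64 - A64 @ x.astype(np.float64)   # residual in extended precision
    z = lu_solve((lu, piv), r.astype(np.float32))
    x = x + z                              # update in working precision
print(np.linalg.norm(A64 @ x.astype(np.float64) - b64))
# residual reduced to the single-precision level of x itself
```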

Let $e = x - A^{-1} b = A^{-1}(Ax - b) = -A^{-1} r$. Then we have
$$ \|e\| \le \|A^{-1}\| \|r\|. \tag{2.3.48} $$
From (2.3.47) follows
$$ \|e\| \le \|A^{-1}\| 2^{-t} \|A\| \|x\| = 2^{-t} \operatorname{cond}(A) \|x\|. $$
Let
$$ \operatorname{cond}(A) = 2^p, \quad 0 < p < t \quad (p \text{ an integer}). \tag{2.3.49} $$
Then we have
$$ \|e\| / \|x\| \approx 2^{-(t-p)}. \tag{2.3.50} $$
From (2.3.50) we know that $x$ has $q = t - p$ correct significant digits. Since $r$ is computed in double precision, we can assume that it has at least $t$ correct significant digits. Therefore, for solving $Az = r$, according to (2.3.50) the solution $z$ (compared with $e = -A^{-1} r$) has $q$-digit accuracy, so that $x_{new} = x + z$ usually has $2q$-digit accuracy.

From the above discussion, the accuracy of $x_{new}$ is improved by about $q$ digits after one iteration. Hence we stop the iteration when the number of iterations $k$ (say) satisfies $kq \ge t$. From the above we have
$$ \|z\| / \|x\| \approx \|e\| / \|x\| \approx 2^{-q} = 2^{-t} 2^{p}. \tag{2.3.51} $$
From (2.3.49) and (2.3.51) we have
$$ \operatorname{cond}(A) \approx 2^t (\|z\| / \|x\|). $$
By (2.3.51) we get
$$ q = \log_2 \left( \frac{\|x\|}{\|z\|} \right) \quad \text{and} \quad k = \frac{t}{\log_2(\|x\|/\|z\|)}. $$

In the following we shall give a further discussion of the convergence of the iterative improvement. From Theorem 2.3.8 we know that $z$ in Algorithm 2.3.5 is computed by $(A + \delta A) z = r$. That is,
$$ A(I + F) z = r, \tag{2.3.52} $$
where $F = A^{-1} \delta A$.

Theorem 2.3.9 Let the sequence of vectors $\{x_k\}$ be the sequence of improved solutions in Algorithm 2.3.5 for solving $Ax = b$, and let $x^* = A^{-1} b$ be the exact solution. Assume that $F_k$ in (2.3.52) satisfies $\|F_k\| \le \sigma < 1/2$ for all $k$. Then $\{x_k\}$ converges to $x^*$, i.e., $\lim_{k \to \infty} \|x_k - x^*\| = 0$.

Proof: From (2.3.52) and $r_k = b - A x_k$ we have
$$ A(I + F_k) z_k = b - A x_k. \tag{2.3.53} $$
Since $A$ is nonsingular, multiplying both sides of (2.3.53) by $A^{-1}$ we get
$$ (I + F_k) z_k = x^* - x_k. $$

From $x_{k+1} = x_k + z_k$ we have $(I + F_k)(x_{k+1} - x_k) = x^* - x_k$, i.e.,
$$ (I + F_k) x_{k+1} = F_k x_k + x^*. \tag{2.3.54} $$
Subtracting both sides of (2.3.54) from $(I + F_k) x^*$ we get
$$ (I + F_k)(x_{k+1} - x^*) = F_k (x_k - x^*). $$
Since $\|F_k\| \le \sigma < 1$, the inverse $(I + F_k)^{-1}$ exists and $\|(I + F_k)^{-1}\| \le 1/(1 - \sigma)$. Hence
$$ x_{k+1} - x^* = (I + F_k)^{-1} F_k (x_k - x^*), $$
$$ \|x_{k+1} - x^*\| \le \frac{\|F_k\|}{1 - \|F_k\|} \|x_k - x^*\| \le \frac{\sigma}{1 - \sigma} \|x_k - x^*\|. $$
Let $\tau = \sigma/(1 - \sigma)$. Then
$$ \|x_k - x^*\| \le \tau^k \|x_0 - x^*\|. $$
But $\sigma < 1/2$ implies $\tau < 1$. This implies convergence of Algorithm 2.3.5.

Corollary 2.3.1 If
$$ 1.01 (n^3 + 3n^2) \rho 2^{-t} \|A\| \|A^{-1}\| < 1/2, $$
then Algorithm 2.3.5 converges.

Proof: From (2.3.52) and (2.3.44) it follows that $\|F_k\| \le 1.01 (n^3 + 3n^2) \rho 2^{-t} \operatorname{cond}(A) < 1/2$.

2.3.7 Special Linear Systems

Toeplitz Systems

Definition 2.3.2 (i) $T \in \mathbb{R}^{n \times n}$ is called a Toeplitz matrix if there exist $r_{-n+1}, \ldots, r_0, \ldots, r_{n-1}$ such that $a_{ij} = r_{j-i}$ for all $i, j$; e.g.,
$$ T = \begin{bmatrix} r_0 & r_1 & r_2 & r_3 \\ r_{-1} & r_0 & r_1 & r_2 \\ r_{-2} & r_{-1} & r_0 & r_1 \\ r_{-3} & r_{-2} & r_{-1} & r_0 \end{bmatrix} \quad (n = 4). $$
(ii) $B \in \mathbb{R}^{n \times n}$ is called a persymmetric matrix if it is symmetric about its northeast-southwest diagonal, i.e., $b_{ij} = b_{n-j+1, n-i+1}$ for all $i, j$. That is, $B = E B^T E$, where $E = [e_n, \ldots, e_1]$.

Given scalars $r_1, \ldots, r_n$ such that the matrices
$$ T_k = \begin{bmatrix} 1 & r_1 & \cdots & r_{k-1} \\ r_1 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & r_1 \\ r_{k-1} & \cdots & r_1 & 1 \end{bmatrix} $$
are all positive definite, for $k = 1, \ldots, n$. Three algorithms will be described:
(a) Durbin's algorithm for the Yule-Walker problem $T_n y = -(r_1, \ldots, r_n)^T$;
(b) Levinson's algorithm for the general right-hand side $T_n x = b$;
(c) Trench's algorithm for computing $B = T_n^{-1}$.

To (a): Let $E_k = [e_k^{(k)}, \ldots, e_1^{(k)}]$ (the $k \times k$ reversal matrix). Suppose the $k$-th order Yule-Walker system $T_k y = -(r_1, \ldots, r_k)^T = -r$ has been solved. The $(k+1)$-st order system
$$ \begin{bmatrix} T_k & E_k r \\ r^T E_k & 1 \end{bmatrix} \begin{bmatrix} z \\ \alpha \end{bmatrix} = -\begin{bmatrix} r \\ r_{k+1} \end{bmatrix} $$
can be solved in $O(k)$ flops. Observe that
$$ z = T_k^{-1}(-r - \alpha E_k r) = y - \alpha T_k^{-1} E_k r \tag{2.3.55} $$
and
$$ \alpha = -r_{k+1} - r^T E_k z. \tag{2.3.56} $$
Since $T_k^{-1}$ is persymmetric, $T_k^{-1} E_k = E_k T_k^{-1}$ and $z = y + \alpha E_k y$. Substituting into (2.3.56) we get
$$ \alpha = -r_{k+1} - r^T E_k (y + \alpha E_k y) = -(r_{k+1} + r^T E_k y)/(1 + r^T y). $$
Here $1 + r^T y$ is positive, because $T_{k+1}$ is positive definite and
$$ \begin{bmatrix} I & E_k y \\ 0 & 1 \end{bmatrix}^T \begin{bmatrix} T_k & E_k r \\ r^T E_k & 1 \end{bmatrix} \begin{bmatrix} I & E_k y \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} T_k & 0 \\ 0 & 1 + r^T y \end{bmatrix}. $$

Algorithm 2.3.6 (Durbin's algorithm, 1960) Let $T_k y^{(k)} = -r^{(k)} = -(r_1, \ldots, r_k)^T$.
$y^{(1)} = -r_1$;
for $k = 1, \ldots, n-1$:
  $\beta_k = 1 + r^{(k)T} y^{(k)}$,
  $\alpha_k = -(r_{k+1} + r^{(k)T} E_k y^{(k)})/\beta_k$,
  $z^{(k)} = y^{(k)} + \alpha_k E_k y^{(k)}$,
  $y^{(k+1)} = \begin{bmatrix} z^{(k)} \\ \alpha_k \end{bmatrix}$.
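Algorithm 2.3.6 in code (a sketch assuming numpy; `durbin` is my name): $E_k y$ is simply the reversal `y[::-1]`, and the result can be compared with a dense solve of the Yule-Walker system.

```python
import numpy as np

# Solve T_n y = -(r_1, ..., r_n)^T for the symmetric positive definite
# Toeplitz matrix T_n built from 1, r_1, ..., r_{n-1}.
def durbin(r):
    n = len(r)
    y = np.array([-r[0]])
    for k in range(1, n):
        beta = 1.0 + r[:k] @ y                       # beta_k
        alpha = -(r[k] + r[:k] @ y[::-1]) / beta     # alpha_k
        y = np.append(y + alpha * y[::-1], alpha)    # [z^(k); alpha_k]
    return y

r = np.array([0.5, 0.2, 0.1])
T = np.array([[1.0, 0.5, 0.2], [0.5, 1.0, 0.5], [0.2, 0.5, 1.0]])
print(np.allclose(durbin(r), np.linalg.solve(T, -r)))   # True
```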

This algorithm requires $\frac{3}{2} n^2$ flops to generate $y = y^{(n)}$.

Further reduction:
$$ \beta_k = 1 + r^{(k)T} y^{(k)} = 1 + [r^{(k-1)T}, r_k] \begin{bmatrix} y^{(k-1)} + \alpha_{k-1} E_{k-1} y^{(k-1)} \\ \alpha_{k-1} \end{bmatrix} = 1 + r^{(k-1)T} y^{(k-1)} + \alpha_{k-1} \big( r^{(k-1)T} E_{k-1} y^{(k-1)} + r_k \big) = \beta_{k-1} + \alpha_{k-1} (-\beta_{k-1} \alpha_{k-1}) = (1 - \alpha_{k-1}^2) \beta_{k-1}. $$

To (b): We want to solve
$$ T_k x = b = (b_1, \ldots, b_k)^T, \qquad 1 \le k \le n, \tag{2.3.57} $$
and consider
$$ \begin{bmatrix} T_k & E_k r \\ r^T E_k & 1 \end{bmatrix} \begin{bmatrix} \nu \\ \mu \end{bmatrix} = \begin{bmatrix} b \\ b_{k+1} \end{bmatrix}, \tag{2.3.58} $$
where $r = (r_1, \ldots, r_k)^T$. Since $\nu = T_k^{-1}(b - \mu E_k r) = x + \mu E_k y$, it follows that
$$ \mu = b_{k+1} - r^T E_k \nu = b_{k+1} - r^T E_k x - \mu r^T y = (b_{k+1} - r^T E_k x)/(1 + r^T y). $$
We can effect the transition from (2.3.57) to (2.3.58) in $O(k)$ flops. We can solve the system $T_n x = b$ by simultaneously solving
$$ T_k x^{(k)} = b^{(k)} = (b_1, \ldots, b_k)^T \quad \text{and} \quad T_k y^{(k)} = -r^{(k)} = -(r_1, \ldots, r_k)^T. $$
It needs $2n^2$ flops. See Algorithm Levinson (1947) in Matrix Computations, pp. 128-129, for details.

To (c): Write
$$ T_n = \begin{bmatrix} A & Er \\ r^T E & 1 \end{bmatrix}, \qquad T_n^{-1} = \begin{bmatrix} B & \nu \\ \nu^T & \gamma \end{bmatrix}, $$
where $A = T_{n-1}$, $E = E_{n-1}$ and $r = (r_1, \ldots, r_{n-1})^T$. From the equation
$$ \begin{bmatrix} A & Er \\ r^T E & 1 \end{bmatrix} \begin{bmatrix} \nu \\ \gamma \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} $$
follows that $A\nu = -\gamma E r = -\gamma E (r_1, \ldots, r_{n-1})^T$ and $\gamma = 1 - r^T E \nu$. If $y$ is the solution of the $(n-1)$-st Yule-Walker system $Ay = -r$, then
$$ \gamma = 1/(1 + r^T y) \quad \text{and} \quad \nu = \gamma E y. $$
Thus the last row and column of $T_n^{-1}$ are readily obtained. Since $AB + Er\nu^T = I_{n-1}$, we have
$$ B = A^{-1} - (A^{-1} E r)\nu^T = A^{-1} + \frac{\nu \nu^T}{\gamma}. $$

Since $A = T_{n-1}$ is nonsingular and Toeplitz, its inverse is persymmetric. Thus
$$ b_{ij} = (A^{-1})_{ij} + \frac{\nu_i \nu_j}{\gamma} = (A^{-1})_{n-j, n-i} + \frac{\nu_i \nu_j}{\gamma} = b_{n-j, n-i} - \frac{\nu_{n-i} \nu_{n-j}}{\gamma} + \frac{\nu_i \nu_j}{\gamma} = b_{n-j, n-i} + \frac{1}{\gamma}\big( \nu_i \nu_j - \nu_{n-i} \nu_{n-j} \big). $$
It needs $\frac{7}{4} n^2$ flops. See Algorithm Trench (1964) in Matrix Computations, p. 132, for details.

Banded Systems

Definition 2.3.3 Let $A$ be an $n \times n$ matrix. $A$ is called a $(p, q)$-banded matrix if $a_{ij} = 0$ whenever $i - j > p$ or $j - i > q$, i.e., the nonzero entries of $A$ are confined to the main diagonal, the $p$ subdiagonals below it, and the $q$ superdiagonals above it; $p$ and $q$ are the lower and upper bandwidths, respectively.

Example 2.3.2 $(1, 1)$: tridiagonal matrix; $(1, n-1)$: upper Hessenberg matrix; $(n-1, 1)$: lower Hessenberg matrix.

Theorem 2.3.10 Let $A$ be a $(p, q)$-banded matrix. Suppose $A$ has an LR-factorization ($A = LR$). Then $L$ is a $(p, 0)$-banded and $R$ a $(0, q)$-banded matrix, respectively.

Algorithm 2.3.7 See Algorithm 4.3.1 in Matrix Computations, p. 150.

Theorem 2.3.11 Let $A$ be a $(p, q)$-banded nonsingular matrix. If Gaussian elimination with partial pivoting is used to compute Gaussian transformations $L_j = I - l_j e_j^T$, for $j = 1, \ldots, n-1$, and permutations $P_1, \ldots, P_{n-1}$ such that $L_{n-1} P_{n-1} \cdots L_1 P_1 A = R$ is upper triangular, then $R$ is a $(0, p+q)$-banded matrix and $l_{ij} = 0$ whenever $i \le j$ or $i > j + p$. (Since the $j$-th column of $L$ is a permutation of the Gaussian vector $l_j$, it follows that $L$ has at most $p + 1$ nonzero elements per column.)
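Theorem 2.3.10 is easy to observe numerically: factor a randomly generated $(p, q)$-banded matrix without pivoting and measure the bandwidths of the factors. A sketch (numpy assumed, names mine; the diagonal is strengthened so that no pivot vanishes):

```python
import numpy as np

def lu_nopivot(A):                    # LR-factorization without pivoting
    A = A.astype(float); n = A.shape[0]; L = np.eye(n)
    for k in range(n - 1):
        L[k+1:, k] = A[k+1:, k] / A[k, k]
        A[k+1:, k:] -= np.outer(L[k+1:, k], A[k, k:])
    return L, np.triu(A)

def bandwidths(M, tol=1e-12):
    i, j = np.nonzero(np.abs(M) > tol)
    return int((i - j).max()), int((j - i).max())   # (lower p, upper q)

rng = np.random.default_rng(4)
n, p, q = 8, 2, 1
A = sum(np.diag(rng.random(n - abs(d)) + 1.0, d) for d in range(-p, q + 1))
A += n * np.eye(n)                    # make A diagonally dominant
L, R = lu_nopivot(A)
print(bandwidths(L), bandwidths(R))   # (2, 0) and (0, 1)
```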

Symmetric Indefinite Systems

Consider the linear system $Ax = b$, where $A \in \mathbb{R}^{n \times n}$ is symmetric but indefinite. There is a method using $n^3/6$ flops, due to Aasen (1971), that computes the factorization $P A P^T = L T L^T$, where $L = [l_{ij}]$ is unit lower triangular, $P$ is a permutation chosen such that $|l_{ij}| \le 1$, and $T$ is tridiagonal.

Rather than the above factorization $P A P^T = L T L^T$, we consider here the calculation of $P A P^T = L D L^T$, where $D$ is block diagonal with $1 \times 1$ and $2 \times 2$ blocks on the diagonal, $L = [l_{ij}]$ is unit lower triangular, and $P$ is a permutation chosen such that $|l_{ij}| \le 1$. Bunch and Parlett (1971) proposed a pivot strategy to do this; $n^3/6$ flops are required. Unfortunately, the overall process requires $n^3/12$ to $n^3/6$ comparisons. A better method, described by Bunch and Kaufman (1977), requires $n^3/6$ flops and only $O(n^2)$ comparisons. For a detailed discussion of this subsection see pp. 159-168 in Matrix Computations.
