7.3 The Jacobi and Gauss-Seidel Iterative Techniques

Problem: solve Ax = b for A ∈ ℝ^{n×n}. Methodology: iteratively approximate the solution x, with no Gaussian elimination with partial pivoting (GEPP).
Matrix splitting: write
\[
A \;\stackrel{\text{def}}{=}\; D - L - U,
\]
where
\[
D = \operatorname{diag}(a_{1,1}, a_{2,2}, \dots, a_{n,n}), \qquad
L = -\begin{pmatrix} 0 & & & \\ a_{2,1} & 0 & & \\ \vdots & \ddots & \ddots & \\ a_{n,1} & \cdots & a_{n,n-1} & 0 \end{pmatrix}, \qquad
U = -\begin{pmatrix} 0 & a_{1,2} & \cdots & a_{1,n} \\ & 0 & \ddots & \vdots \\ & & \ddots & a_{n-1,n} \\ & & & 0 \end{pmatrix}.
\]
Ex: Matrix splitting for a 4×4 matrix A with diagonal entries 10, 11, 10, 8: A = D − L − U with D = diag(10, 11, 10, 8), −L the strictly lower triangular part of A, and −U the strictly upper triangular part.
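A minimal MATLAB sketch of this splitting. The matrix below is a hypothetical stand-in (its off-diagonal entries are not reproduced in these notes; only the diagonal diag(10, 11, 10, 8) is):

```matlab
% Matrix splitting A = D - L - U via diag/tril/triu.
A = [10 -1 2 0; -1 11 -1 3; 2 -1 10 -1; 0 3 -1 8];  % hypothetical 4x4 test matrix
D = diag(diag(A));     % diagonal part of A
L = -tril(A, -1);      % minus the strictly lower triangular part
U = -triu(A,  1);      % minus the strictly upper triangular part
assert(norm(A - (D - L - U)) == 0)   % the splitting reconstructs A exactly
```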
The Jacobi and Gauss-Seidel Methods for solving Ax = b

Jacobi method: with the matrix splitting A = D − L − U, rewrite Ax = b as
\[
x = D^{-1}(L + U)\,x + D^{-1} b.
\]
Jacobi iteration: given x^{(0)},
\[
x^{(k+1)} = D^{-1}(L + U)\,x^{(k)} + D^{-1} b, \qquad k = 0, 1, 2, \dots
\]

Gauss-Seidel method: rewrite Ax = b as
\[
x = (D - L)^{-1} U\,x + (D - L)^{-1} b.
\]
Gauss-Seidel iteration: given x^{(0)},
\[
x^{(k+1)} = (D - L)^{-1} U\,x^{(k)} + (D - L)^{-1} b, \qquad k = 0, 1, 2, \dots
\]
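Both iterations fit one MATLAB sketch built on the splitting; the function and variable names here are mine, not from the slides:

```matlab
% Jacobi ('jacobi') or Gauss-Seidel ('gs') iteration for A x = b.
% Each step solves M x_new = N x + b, with M = D (Jacobi) or M = D - L (G-S).
function [x, k] = jacobi_gs(A, b, method, tol, maxit)
  D = diag(diag(A));  L = -tril(A, -1);  U = -triu(A, 1);
  if strcmp(method, 'jacobi')
      M = D;      N = L + U;    % x <- D \ ((L+U) x + b)
  else
      M = D - L;  N = U;        % x <- (D-L) \ (U x + b)
  end
  x = zeros(size(b));  k = 0;
  while norm(b - A*x) > tol*norm(b) && k < maxit
      x = M \ (N*x + b);        % triangular/diagonal solve, no factorization of A
      k = k + 1;
  end
end
```

Usage: `[x, k] = jacobi_gs(A, b, 'gs', 1e-10, 500)`. Forming `M \ (...)` exploits that M is diagonal (Jacobi) or triangular (Gauss-Seidel), so each step costs O(n²), not O(n³).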
Ex: Jacobi method for Ax = b with the splitting above, A = D − L − U, D = diag(10, 11, 10, 8). Jacobi iteration with x^{(0)} = 0:
\[
x_J^{(k+1)} = D^{-1}(L + U)\,x_J^{(k)} + D^{-1} b, \qquad k = 0, 1, 2, \dots
\]
Ex: Gauss-Seidel method for Ax = b with the same splitting A = D − L − U. Gauss-Seidel iteration with x^{(0)} = 0:
\[
x_{GS}^{(k+1)} = (D - L)^{-1} U\,x_{GS}^{(k)} + (D - L)^{-1} b, \qquad k = 0, 1, 2, \dots
\]
Jacobi vs. Gauss-Seidel on the example above. (Figure: convergence comparison, Jacobi vs. G-S, error norm vs. iteration on a log scale.)
General Iteration Methods

To solve Ax = b with matrix splitting A = D − L − U:
Jacobi method: x_J^{(k+1)} = D^{-1}(L + U) x_J^{(k)} + D^{-1} b.
Gauss-Seidel method: x_{GS}^{(k+1)} = (D − L)^{-1} U x_{GS}^{(k)} + (D − L)^{-1} b.
General iteration method: for k = 0, 1, 2, …,
\[
x^{(k+1)} = T\, x^{(k)} + c.
\]
Next: convergence analysis of the general iteration method.
General iteration: x^{(k+1)} = T x^{(k)} + c for k = 0, 1, 2, …

Thm: The following statements are equivalent:
(a) ρ(T) < 1;
(b) the equation x = T x + c (1) has a unique solution x^*, and {x^{(k)}} converges to x^* from any x^{(0)}.

Proof: Assume ρ(T) < 1. Then (1) has a unique solution x^*, and
\[
x^{(k+1)} - x^* = T\,(x^{(k)} - x^*) = T^2 (x^{(k-1)} - x^*) = \cdots = T^{k+1} (x^{(0)} - x^*) \to 0,
\]
since ρ(T) < 1 implies T^k → 0. The converse is omitted.
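The criterion ρ(T) < 1 can be checked numerically. A sketch, reusing the hypothetical test matrix from the splitting example above:

```matlab
% Spectral radii of the Jacobi and Gauss-Seidel iteration matrices.
A = [10 -1 2 0; -1 11 -1 3; 2 -1 10 -1; 0 3 -1 8];  % hypothetical test matrix
D = diag(diag(A));  L = -tril(A, -1);  U = -triu(A, 1);
rhoJ  = max(abs(eig(D \ (L + U))));   % rho(T_J),  T_J  = D^{-1}(L+U)
rhoGS = max(abs(eig((D - L) \ U)));   % rho(T_GS), T_GS = (D-L)^{-1} U
fprintf('rho(T_J) = %.4f, rho(T_GS) = %.4f\n', rhoJ, rhoGS)
% This A is strictly diagonally dominant, so both radii are < 1
% and both iterations converge.
```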
Jacobi on a random upper triangular matrix A = D − U: here T = D^{-1} U with ρ(T) = 0, so T is nilpotent and the iteration reaches the exact solution in at most n steps. (Figures: sparsity pattern of the randn upper triangular test matrix, nz = 1275; Jacobi and G-S convergence plots on the upper triangular matrix.)
7.4 Relaxation Techniques for Solving Linear Systems

To solve Ax = b with matrix splitting A = D − L − U, rewrite, for any ω:
\[
D\,x = D\,x, \qquad \omega L\,x = \omega (D - U)\,x - \omega b.
\]
Taking the difference of the two equations,
\[
(D - \omega L)\,x = \big( (1 - \omega) D + \omega U \big)\,x + \omega b.
\]
Successive Over-Relaxation (SOR): for k = 0, 1, 2, …,
\[
x_{SOR}^{(k+1)} = (D - \omega L)^{-1} \big( (1-\omega) D + \omega U \big)\, x_{SOR}^{(k)} + \omega\, (D - \omega L)^{-1} b
\;\stackrel{\text{def}}{=}\; T_{SOR}\, x_{SOR}^{(k)} + c_{SOR}.
\]
SOR converges if ρ(T_{SOR}) < 1. A good choice of ω is tricky, but critical for accelerated convergence.
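A minimal MATLAB sketch of the SOR iteration as written above (names are mine):

```matlab
% SOR iteration: x <- (D - w L) \ (((1-w) D + w U) x + w b).
function [x, k] = sor(A, b, omega, tol, maxit)
  D = diag(diag(A));  L = -tril(A, -1);  U = -triu(A, 1);
  M = D - omega*L;                 % lower triangular, applied via backslash
  N = (1 - omega)*D + omega*U;
  x = zeros(size(b));
  for k = 1:maxit
      x = M \ (N*x + omega*b);
      if norm(b - A*x) <= tol*norm(b), break, end
  end
end
```

Setting omega = 1 recovers Gauss-Seidel, since then M = D − L and N = U.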
Optimal SOR parameters

Thm: If A is symmetric positive definite and tridiagonal, then ρ(T_{GS}) = (ρ(T_J))² < 1, and the optimal choice of ω for the SOR method is
\[
\omega_{OPT} = \frac{2}{1 + \sqrt{1 - (\rho(T_J))^2}}, \qquad \text{with} \quad
\rho(T_{SOR}) = \omega_{OPT} - 1 = \left( \frac{\rho(T_J)}{1 + \sqrt{1 - (\rho(T_J))^2}} \right)^{2}.
\]
Ex:
\[
A = \begin{pmatrix} 4 & 3 & 0 \\ 3 & 4 & -1 \\ 0 & -1 & 4 \end{pmatrix}, \qquad
b = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \qquad
x = \begin{pmatrix} 0 \\ 1/3 \\ 1/3 \end{pmatrix}.
\]
A is symmetric positive definite and tridiagonal:
\[
\det(A) = 24 > 0, \qquad \det\begin{pmatrix} 4 & 3 \\ 3 & 4 \end{pmatrix} = 7 > 0, \qquad 4 > 0.
\]
The Jacobi iteration matrix is
\[
T_J = D^{-1}(L + U) = \begin{pmatrix} 0 & -3/4 & 0 \\ -3/4 & 0 & 1/4 \\ 0 & 1/4 & 0 \end{pmatrix},
\qquad \rho(T_J) = \sqrt{5/8} \approx 0.7906.
\]
Optimal ω:
\[
\omega_{OPT} = \frac{2}{1 + \sqrt{1 - (\rho(T_J))^2}} \approx 1.24.
\]
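The formula can be checked numerically; a short sketch for this example:

```matlab
% Optimal SOR parameter for the SPD tridiagonal example.
A  = [4 3 0; 3 4 -1; 0 -1 4];
D  = diag(diag(A));
TJ = D \ (D - A);                       % T_J = D^{-1}(L+U), since L+U = D - A
rhoJ   = max(abs(eig(TJ)));             % ~ 0.7906 = sqrt(5/8)
omega  = 2 / (1 + sqrt(1 - rhoJ^2));    % ~ 1.24
rhoSOR = omega - 1;                     % predicted rho(T_SOR) ~ 0.24
fprintf('rho(T_J)=%.4f  omega=%.4f  rho(T_SOR)=%.4f\n', rhoJ, omega, rhoSOR)
```

Since ρ(T_{GS}) = ρ(T_J)² ≈ 0.625 while ρ(T_{SOR}) ≈ 0.24, optimally tuned SOR shrinks the error much faster per step than Gauss-Seidel on this example.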
(Figure: convergence comparison on the example above, G-S vs. SOR with ω ≈ 1.24, error norm vs. iteration on a log scale; SOR converges faster.)
7.5 Error Bounds and Iterative Refinement

Assume that x̃ is an approximation to the solution x of Ax = b. Residual:
\[
r \stackrel{\text{def}}{=} b - A\tilde{x} = A\,(x - \tilde{x}).
\]
Thus a small error ‖x − x̃‖ implies a small residual ‖r‖. However, a large error ‖x − x̃‖ can still lead to a small residual.

Ex:
\[
\begin{pmatrix} 1 & 2 \\ 1 + 10^{-\tau} & 2 \end{pmatrix} x
= \begin{pmatrix} 3 \\ 3 + 10^{-\tau} \end{pmatrix}.
\]
Exact solution x = (1, 1)^T. The bad approximation x̃ = (3, 0)^T, which satisfies the first equation exactly but lies far from x, has a small residual for large τ:
\[
r = \begin{pmatrix} 3 \\ 3 + 10^{-\tau} \end{pmatrix}
  - \begin{pmatrix} 3 \\ 3 + 3 \cdot 10^{-\tau} \end{pmatrix}
  = \begin{pmatrix} 0 \\ -2 \cdot 10^{-\tau} \end{pmatrix}.
\]
Near Linear Dependence. For τ = 4, the two equations define two nearly parallel lines,
l₁: x₁ + 2x₂ = 3 and l₂: 1.0001 x₁ + 2x₂ = 3.0001.
Exactly parallel lines have no intersection at all; nearly parallel lines have an intersection, but it is poorly determined.
Thm: Let Ax = b with nonsingular A and nonzero b, and assume x̃ is an approximate solution with r = b − A x̃. Then for any natural (induced) norm,
\[
\|x - \tilde{x}\| \le \|A^{-1}\|\, \|r\|, \qquad
\frac{\|x - \tilde{x}\|}{\|x\|} \le \kappa(A)\, \frac{\|r\|}{\|b\|},
\]
where κ(A) ≝ ‖A‖ ‖A^{-1}‖ is the condition number of A.

A is well-conditioned if κ(A) = O(1): a small residual implies a small solution error. A is ill-conditioned if κ(A) ≫ 1: a small residual may still allow a large solution error.
Ex: Condition number for
\[
A = \begin{pmatrix} 1 & 2 \\ 1 + 10^{-\tau} & 2 \end{pmatrix}.
\]
Solution: For τ > 0, ‖A‖_∞ = 3 + 10^{−τ}. Since
\[
A^{-1} = 10^{\tau} \begin{pmatrix} -1 & 1 \\ (1 + 10^{-\tau})/2 & -1/2 \end{pmatrix},
\]
we have ‖A^{-1}‖_∞ = 2 · 10^τ. Thus
\[
\kappa_\infty(A) = \|A\|_\infty\, \|A^{-1}\|_\infty = 6 \cdot 10^{\tau} + 2.
\]
κ(A) grows exponentially in τ; A is ill-conditioned for large τ.
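A quick numerical check of this formula, as a sketch:

```matlab
% Condition number growth: cond_inf(A) should match 6*10^tau + 2.
for tau = 1:6
    A = [1 2; 1 + 10^(-tau) 2];
    fprintf('tau=%d  cond_inf=%.4e  formula=%.4e\n', ...
            tau, cond(A, inf), 6*10^tau + 2);
end
```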
Iterative Refinement (I)

Let Ax = b with nonsingular A and nonzero b. Let F(·) be an inexact equation solver, so that F(b) is an approximate solution of Ax = b. Assume F(·) is accurate enough that there exists a ρ < 1 with
\[
\frac{\| b - A\, F(b) \|}{\| b \|} \le \rho \qquad \text{for any } b \ne 0.
\]
In practice, F(·) could come from an (inexact) LU factorization, F(b) = U^{-1}(L^{-1} b). Inaccuracies in the LU factorization could be due to rounding error, A ≈ LU.
Ex: A = randn(n, n), b = randn(n, 1), n = 3000. LU factorize A to get L, U (LU without pivoting), then:
x0 = U^{-1}(L^{-1} b), r0 = b − A x0,
x1 = U^{-1}(L^{-1} r0), r1 = r0 − A x1, x = x0 + x1.
disp([norm(r0), norm(r1)]) shows norm(r1) on the order of 10^{−16}: a single refinement step, reusing the same L and U, drives the residual down to the double-precision level.
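A runnable sketch of this experiment. MATLAB's dense `lu` always applies partial pivoting, so the no-pivot factorization is coded directly; `n` is reduced here for speed:

```matlab
% One step of iterative refinement after LU without pivoting (a sketch).
n = 500;                            % slide uses n = 3000
rng(1);  A = randn(n);  b = randn(n, 1);
U = A;  L = eye(n);
for j = 1:n-1                       % Gaussian elimination, no pivoting
    L(j+1:n, j) = U(j+1:n, j) / U(j, j);
    U(j+1:n, :) = U(j+1:n, :) - L(j+1:n, j) * U(j, :);
end
U = triu(U);                        % discard roundoff below the diagonal
x0 = U \ (L \ b);                   % inexact initial solve
r0 = b - A*x0;
x1 = U \ (L \ r0);                  % correction from the same L, U factors
x  = x0 + x1;
r1 = r0 - A*x1;
disp([norm(r0), norm(r1)])          % one refinement step shrinks the residual
```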
Iterative Refinement (II)

Given a tolerance τ > 0 and x^{(0)}, initialize r^{(0)} = b − A x^{(0)}. For k = 0, 1, 2, …:
compute Δx^{(k)} = F(r^{(k)}),
x^{(k+1)} = x^{(k)} + Δx^{(k)},
r^{(k+1)} = r^{(k)} − A Δx^{(k)};
if ‖r^{(k+1)}‖ ≤ τ ‖b‖, stop.

Convergence proof:
\[
\|r^{(k+1)}\| \le \rho\, \|r^{(k)}\| \le \rho^2\, \|r^{(k-1)}\| \le \cdots \le \rho^{k+1}\, \|r^{(0)}\| \to 0.
\]
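The loop translates directly into MATLAB; here the inexact solver F is passed as a function handle (a sketch, names are mine):

```matlab
% Iterative refinement with an inexact solver F, e.g. F = @(r) U \ (L \ r)
% for precomputed (possibly inaccurate) LU factors.
function x = it_refine(A, b, F, x, tau)
  r = b - A*x;                   % r^(0)
  while norm(r) > tau * norm(b)
      dx = F(r);                 % approximate solve: A dx = r
      x  = x + dx;               % x^(k+1) = x^(k) + dx^(k)
      r  = r - A*dx;             % r^(k+1) = r^(k) - A dx^(k)
  end
end
```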
Perturbation Theory

Thm: Let x and x̃ be solutions to Ax = b and (A + ΔA) x̃ = b + Δb, with perturbations ΔA and Δb satisfying κ(A) ‖ΔA‖/‖A‖ < 1. Then
\[
\frac{\|x - \tilde{x}\|}{\|x\|} \le
\frac{\kappa(A)}{1 - \kappa(A)\, \|\Delta A\| / \|A\|}
\left( \frac{\|\Delta A\|}{\|A\|} + \frac{\|\Delta b\|}{\|b\|} \right),
\qquad \text{with } \kappa(A) = \|A\|\, \|A^{-1}\|.
\]
7.6 The Conjugate Gradient Method (CG) for Ax = b

Assumption: A is symmetric positive definite (SPD):
A^T = A, x^T A x ≥ 0 for any x, and x^T A x = 0 if and only if x = 0.

Thm: The vector x^* solves the SPD equations Ax = b if and only if it minimizes the function
\[
g(x) \stackrel{\text{def}}{=} x^T A x - 2\, x^T b.
\]
Proof: Let A x^* = b. Then
\[
g(x) = x^T A x - 2\, x^T A x^*
     = (x - x^*)^T A\, (x - x^*) - (x^*)^T A x^*
     = (x - x^*)^T A\, (x - x^*) + g(x^*).
\]
Thus g(x) ≥ g(x^*) for all x, and g(x) = g(x^*) iff x = x^*.
CG for Ax = b. The CG idea: starting from an initial vector x^{(0)}, quickly compute new vectors x^{(1)}, …, x^{(k)}, … with
\[
g(x^{(0)}) > g(x^{(1)}) > g(x^{(2)}) > \cdots > g(x^{(k)}) > \cdots,
\]
so that the sequence {x^{(k)}} converges to x^*.

Search direction and line search. Descent method: given a search direction v^{(k)} at the iterate x^{(k−1)}, the next iterate with step size t_k is
\[
x^{(k)} \stackrel{\text{def}}{=} x^{(k-1)} + t_k\, v^{(k)}, \qquad \text{where } t_k \text{ minimizes } g(x^{(k-1)} + t\, v^{(k)}).
\]
Optimality condition:
\[
0 = \frac{d}{dt}\, g(x^{(k-1)} + t\, v^{(k)})
  = (v^{(k)})^T\, \nabla g(x^{(k-1)} + t\, v^{(k)})
  = (v^{(k)})^T \big( 2A\,(x^{(k-1)} + t\, v^{(k)}) - 2b \big),
\]
which gives
\[
t_k = \frac{(v^{(k)})^T r^{(k-1)}}{(v^{(k)})^T A\, v^{(k)}}, \qquad
r^{(k-1)} \stackrel{\text{def}}{=} b - A x^{(k-1)} \ \text{(residual)}.
\]
Search direction choices. For a small step size t,
\[
g(x^{(k-1)} + t\, v^{(k)}) \approx g(x^{(k-1)}) + t\, (v^{(k)})^T \nabla g(x^{(k-1)}).
\]
Steepest descent: the greatest decrease in the value of g(x^{(k−1)} + t v^{(k)}) comes from v^{(k)} = −∇g(x^{(k−1)}); since ∇g(x) = 2(Ax − b), this direction is a positive multiple of the residual r^{(k−1)}.
A-orthogonal directions: nonzero vectors {v^{(i)}}_{i=1}^n with
\[
(v^{(i)})^T A\, v^{(j)} = 0 \quad \text{for all } i \ne j.
\]
A-orthogonal vectors associated with the positive definite matrix A are linearly independent.
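Steepest descent with the exact line search above is already a complete algorithm; a minimal MATLAB sketch (names are mine):

```matlab
% Steepest descent for SPD A: search direction v = r = b - A x,
% step size t from the optimality condition t = (v'r)/(v'Av).
function x = steepest_descent(A, b, x, tol, maxit)
  r = b - A*x;
  for k = 1:maxit
      if norm(r) <= tol, break, end
      Ar = A*r;
      t  = (r'*r) / (r'*Ar);   % exact line search along r
      x  = x + t*r;
      r  = r - t*Ar;           % updated residual, no extra A*x needed
  end
end
```

Steepest descent can zig-zag badly when A is ill-conditioned; the A-orthogonal directions introduced above are CG's remedy.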
A-orthogonality Craft

Thm: Let nonzero vectors {v^{(k)}} be A-orthogonal with v^{(1)} = r^{(0)}, and for k = 1, …, n let
\[
t_k = \frac{(v^{(k)})^T r^{(k-1)}}{(v^{(k)})^T A\, v^{(k)}}, \qquad
r^{(k-1)} \stackrel{\text{def}}{=} b - A x^{(k-1)} \ \text{(residual)}.
\]
Then for g(x) = x^T A x − 2 x^T b and for k = 1, …, n,
\[
\min_{\tau_1, \dots, \tau_k} g\big(x^{(0)} + \tau_1 v^{(1)} + \cdots + \tau_k v^{(k)}\big)
= g\big(x^{(0)} + t_1 v^{(1)} + \cdots + t_k v^{(k)}\big).
\]

Magic (I): Since n A-orthogonal vectors are linearly independent, x^{(0)} + span{v^{(1)}, …, v^{(n)}} = ℝ^n, so
\[
\min_{\tau_1} g\big(x^{(0)} + \tau_1 v^{(1)}\big) = g\big(x^{(0)} + t_1 v^{(1)}\big),
\]
\[
\min_{\tau_1, \tau_2} g\big(x^{(0)} + \tau_1 v^{(1)} + \tau_2 v^{(2)}\big) = g\big(x^{(0)} + t_1 v^{(1)} + t_2 v^{(2)}\big), \qquad \dots
\]
\[
\min_x g(x) = \min_{\tau_1, \dots, \tau_n} g\big(x^{(0)} + \tau_1 v^{(1)} + \cdots + \tau_n v^{(n)}\big)
= g\big(x^{(0)} + t_1 v^{(1)} + \cdots + t_n v^{(n)}\big).
\]
Thus x^* = x^{(0)} + t_1 v^{(1)} + ⋯ + t_n v^{(n)} is the solution to Ax = b.

Proof (I): Let t = (τ_1, …, τ_k)^T and V = (v^{(1)}, …, v^{(k)}). Then
\[
g\big(x^{(0)} + \tau_1 v^{(1)} + \cdots + \tau_k v^{(k)}\big)
= g(x^{(0)}) + t^T V^T A V\, t - 2\, t^T V^T r^{(0)},
\qquad
\nabla_t\, g = 2\big( V^T A V\, t - V^T r^{(0)} \big),
\]
and the minimum over τ_1, …, τ_k is attained where ∇_t g = 0.

Proof (II): Since the vectors {v^{(k)}} are A-orthogonal, V^T A V is diagonal:
\[
\nabla_t\, g = 2 \Big( \operatorname{diag}\!\big( (v^{(1)})^T A v^{(1)}, \dots, (v^{(k)})^T A v^{(k)} \big)\, t - V^T r^{(0)} \Big),
\]
so ∇_t g = 0 gives
\[
t = \left( \frac{(v^{(1)})^T r^{(0)}}{(v^{(1)})^T A\, v^{(1)}}, \ \dots, \ \frac{(v^{(k)})^T r^{(0)}}{(v^{(k)})^T A\, v^{(k)}} \right)^T.
\]

Proof (III): Since, by A-orthogonality,
\[
(v^{(k)})^T r^{(k-1)} = (v^{(k)})^T \Big( r^{(0)} - \sum_{j=1}^{k-1} t_j\, A v^{(j)} \Big) = (v^{(k)})^T r^{(0)},
\]
we get
\[
t_k = \frac{(v^{(k)})^T r^{(k-1)}}{(v^{(k)})^T A\, v^{(k)}} = \frac{(v^{(k)})^T r^{(0)}}{(v^{(k)})^T A\, v^{(k)}},
\]
matching the minimizer from Proof (II).
A-orthogonal vectors (I)

Thm: Set v^{(1)} = r^{(0)}, and for k = 2, …, n
\[
v^{(k)} = r^{(k-1)} - \sum_{j=1}^{k-1} \frac{(v^{(j)})^T A\, r^{(k-1)}}{(v^{(j)})^T A\, v^{(j)}}\; v^{(j)}.
\]
Assume that the {v^{(k)}} are nonzero. Then they are A-orthogonal.

Induction proof: Assume v^{(1)}, …, v^{(k−1)} are A-orthogonal. For all 1 ≤ i < k,
\[
(v^{(k)})^T A\, v^{(i)}
= (r^{(k-1)})^T A\, v^{(i)} - \sum_{j=1}^{k-1} \frac{(v^{(j)})^T A\, r^{(k-1)}}{(v^{(j)})^T A\, v^{(j)}}\, (v^{(j)})^T A\, v^{(i)}
= (r^{(k-1)})^T A\, v^{(i)} - (v^{(i)})^T A\, r^{(k-1)} = 0,
\]
since only the j = i term of the sum survives and A = A^T.
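This is Gram-Schmidt in the A-inner product; a small MATLAB check (a sketch, with random vectors standing in for the residuals r^{(k−1)}):

```matlab
% A-orthogonalize the columns of a random W against earlier directions;
% afterwards V' * A * V should be (numerically) diagonal.
n = 6;  B = randn(n);  A = B'*B + n*eye(n);   % a random SPD test matrix
W = randn(n);  V = zeros(n);
for k = 1:n
    v = W(:, k);
    for j = 1:k-1
        v = v - (V(:,j)'*A*W(:,k)) / (V(:,j)'*A*V(:,j)) * V(:,j);
    end
    V(:, k) = v;
end
G = V'*A*V;
disp(norm(G - diag(diag(G))))   % ~ 1e-14: the columns are A-orthogonal
```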
A-orthogonal vectors (II)

Thm: With v^{(k)} constructed as above, let x^{(k)} = x^{(0)} + t_1 v^{(1)} + ⋯ + t_k v^{(k)} and r^{(k)} = b − A x^{(k)}. Then
\[
(v^{(j)})^T r^{(k)} = 0, \quad j = 1, \dots, k; \qquad
(r^{(j)})^T r^{(k)} = 0, \quad j = 1, \dots, k-1.
\]
Proof: By the optimality property of x^{(k)}, for all τ and for 1 ≤ j ≤ k,
\[
g(x^{(k)}) \le g\big(x^{(k)} + \tau\, v^{(j)}\big)
= g(x^{(k)}) - 2\tau\, (r^{(k)})^T v^{(j)} + \tau^2\, (v^{(j)})^T A\, v^{(j)}.
\]
This can hold for all τ only when (r^{(k)})^T v^{(j)} = 0.
Residual vector orthogonality follows because r^{(j)} is a linear combination of v^{(1)}, …, v^{(j+1)}.
A-orthogonal vectors (III)

Thm: With v^{(k)}, x^{(k)}, r^{(k)} as above,
\[
(v^{(k)})^T r^{(j)} = (r^{(k-1)})^T r^{(k-1)}, \qquad j = 0, 1, \dots, k-1.
\]
Proof (I): For j = k − 1,
\[
(v^{(k)})^T r^{(k-1)}
= (r^{(k-1)})^T r^{(k-1)} - \sum_{i=1}^{k-1} \frac{(v^{(i)})^T A\, r^{(k-1)}}{(v^{(i)})^T A\, v^{(i)}}\, (v^{(i)})^T r^{(k-1)}
= (r^{(k-1)})^T r^{(k-1)},
\]
since (v^{(i)})^T r^{(k−1)} = 0 for i ≤ k − 1 by part (II).
Proof (II): For j < k − 1, using r^{(j)} − r^{(k−1)} = Σ_{i=j+1}^{k−1} t_i A v^{(i)},
\[
(v^{(k)})^T r^{(j)}
= (v^{(k)})^T r^{(k-1)} + (v^{(k)})^T \big( r^{(j)} - r^{(k-1)} \big)
= (v^{(k)})^T r^{(k-1)} + (v^{(k)})^T \sum_{i=j+1}^{k-1} t_i\, A v^{(i)}
= (r^{(k-1)})^T r^{(k-1)},
\]
where the sum vanishes by A-orthogonality.
A-orthogonality: A Gift from Math God

Set v^{(1)} = r^{(0)}. For k = 2, …, n, expand v^{(k)} in the orthogonal basis {r^{(0)}, …, r^{(k−1)}} and use (v^{(k)})^T r^{(j)} = (r^{(k−1)})^T r^{(k−1)}:
\[
v^{(k)} = \sum_{j=0}^{k-1} \frac{(v^{(k)})^T r^{(j)}}{(r^{(j)})^T r^{(j)}}\; r^{(j)}
        = \sum_{j=0}^{k-1} \frac{(r^{(k-1)})^T r^{(k-1)}}{(r^{(j)})^T r^{(j)}}\; r^{(j)}
\]
\[
= r^{(k-1)} + \frac{(r^{(k-1)})^T r^{(k-1)}}{(r^{(k-2)})^T r^{(k-2)}}
  \sum_{j=0}^{k-2} \frac{(r^{(k-2)})^T r^{(k-2)}}{(r^{(j)})^T r^{(j)}}\; r^{(j)}
= r^{(k-1)} + s_{k-1}\, v^{(k-1)},
\qquad \text{with } s_{k-1} = \frac{(r^{(k-1)})^T r^{(k-1)}}{(r^{(k-2)})^T r^{(k-2)}},
\]
since the remaining sum is exactly the same expansion of v^{(k−1)}. The length-(k−1) Gram-Schmidt sum collapses to a single extra term.
Thm: Let {v^{(i)}}_{i=1}^n be A-orthogonal with v^{(1)} = r^{(0)}, and for k = 1, …, n let
\[
t_k = \frac{(v^{(k)})^T r^{(k-1)}}{(v^{(k)})^T A\, v^{(k)}} = \frac{(r^{(k-1)})^T r^{(k-1)}}{(v^{(k)})^T A\, v^{(k)}},
\qquad x^{(k)} \stackrel{\text{def}}{=} x^{(k-1)} + t_k\, v^{(k)}.
\]
Then A x^{(n)} = b in exact arithmetic.
Conjugate Gradient Algorithm

Thm: Set v^{(1)} = r^{(0)}, and for k = 1, …, n define
\[
v^{(k)} = r^{(k-1)} + s_{k-1}\, v^{(k-1)} \quad \text{with } s_{k-1} = \frac{(r^{(k-1)})^T r^{(k-1)}}{(r^{(k-2)})^T r^{(k-2)}} \quad (k \ge 2),
\]
\[
x^{(k)} = x^{(k-1)} + t_k\, v^{(k)} \quad \text{with } t_k = \frac{(r^{(k-1)})^T r^{(k-1)}}{(v^{(k)})^T A\, v^{(k)}}.
\]
Then the vectors {v^{(k)}} are A-orthogonal and A x^{(n)} = b in exact arithmetic.

The CG Algorithm: C is for Craft, G is for Gift.
Algorithm 1: Conjugate Gradient Algorithm
Input: symmetric positive definite A ∈ ℝ^{n×n}, b ∈ ℝ^n, initial guess x^{(0)} ∈ ℝ^n, and tolerance τ > 0.
Output: approximate solution x.
Initialize: r^{(0)} = b − A x^{(0)}, v^{(1)} = r^{(0)}, k = 1.
while ‖r^{(k−1)}‖₂ > τ do
  t_k = ((r^{(k−1)})^T r^{(k−1)}) / ((v^{(k)})^T A v^{(k)})
  x^{(k)} = x^{(k−1)} + t_k v^{(k)}
  r^{(k)} = r^{(k−1)} − t_k A v^{(k)}
  s_k = ((r^{(k)})^T r^{(k)}) / ((r^{(k−1)})^T r^{(k−1)})
  v^{(k+1)} = r^{(k)} + s_k v^{(k)}
  k = k + 1
end while
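Algorithm 1 transcribes directly into MATLAB; a sketch (one matrix-vector product per iteration, with the inner product (r^{(k−1)})^T r^{(k−1)} cached between steps):

```matlab
% Conjugate Gradient for SPD A (direct transcription of Algorithm 1).
function x = cg(A, b, x, tau)
  r = b - A*x;  v = r;
  rho = r'*r;                    % (r^(k-1))' r^(k-1)
  while sqrt(rho) > tau          % i.e. ||r^(k-1)||_2 > tau
      Av = A*v;                  % the only matrix-vector product per step
      t  = rho / (v'*Av);        % t_k
      x  = x + t*v;
      r  = r - t*Av;
      rho_new = r'*r;
      s  = rho_new / rho;        % s_k
      v  = r + s*v;
      rho = rho_new;
  end
end
```

Usage: `x = cg(A, b, zeros(size(b)), 1e-10)`. In exact arithmetic CG terminates in at most n steps; in floating point it is run as an iterative method until the residual tolerance is met.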
Term Projects
- Solving linear equations with Gaussian elimination
- The QR algorithm for the symmetric eigenvalue problem
- The QR algorithm for the SVD
- Quasi-Newton methods