Tsung-Ming Huang. Matrix Computation, 2016, NTNU


1 Tsung-Ming Huang, Matrix Computation, 2016, NTNU

2 Plan: Gradient method, Conjugate gradient method, Preconditioner.

3 Gradient method

4 Setting: $Ax = b$ with $A$ s.p.d.

Definition. $A$ is symmetric positive definite (s.p.d.) if $A = A^\top$ and $x^\top A x > 0$ for all $x \neq 0$.

Inner product: $\langle x, y\rangle = x^\top y$ for any $x, y \in \mathbb{R}^n$.

Define $g(x) = \langle x, Ax\rangle - 2\langle x, b\rangle = x^\top A x - 2x^\top b$.

Theorem. Let $A$ be s.p.d. Then $x^*$ is the solution of $Ax = b$ if and only if $g(x^*) = \min_{x \in \mathbb{R}^n} g(x)$.

5 Proof. ($\Rightarrow$) Assume $x^*$ is the solution of $Ax = b$, i.e. $Ax^* = b$. Then
$$g(x) = \langle x, Ax\rangle - 2\langle x, b\rangle
= \langle x - x^*, A(x - x^*)\rangle - \langle x^*, Ax^*\rangle + 2\langle x, Ax^*\rangle - 2\langle x, b\rangle
= \langle x - x^*, A(x - x^*)\rangle - \langle x^*, Ax^*\rangle + 2\langle x, Ax^* - b\rangle
= \langle x - x^*, A(x - x^*)\rangle - \langle x^*, Ax^*\rangle.$$
Since $\langle x - x^*, A(x - x^*)\rangle \geq 0$, we get $g(x) \geq -\langle x^*, Ax^*\rangle = g(x^*)$, hence $g(x^*) = \min_{x \in \mathbb{R}^n} g(x)$.

6 Proof. ($\Leftarrow$) Assume $g(x^*) = \min_{x \in \mathbb{R}^n} g(x)$. Fix vectors $x$ and $v$; for any $\alpha \in \mathbb{R}$ define
$$f(\alpha) \equiv g(x + \alpha v) = \langle x + \alpha v, Ax + \alpha Av\rangle - 2\langle x + \alpha v, b\rangle
= g(x) + 2\alpha\langle v, Ax - b\rangle + \alpha^2\langle v, Av\rangle.$$

7 Proof. $f(\alpha) = g(x) + 2\alpha\langle v, Ax - b\rangle + \alpha^2\langle v, Av\rangle$ is a quadratic function of $\alpha$; since $A$ is s.p.d., $f$ attains its minimum where $f'(\alpha) = 0$:
$$f'(\hat\alpha) = 2\langle v, Ax - b\rangle + 2\hat\alpha\langle v, Av\rangle = 0
\quad\Longrightarrow\quad
\hat\alpha = -\frac{\langle v, Ax - b\rangle}{\langle v, Av\rangle} = \frac{\langle v, b - Ax\rangle}{\langle v, Av\rangle}.$$
Then
$$g(x + \hat\alpha v) = f(\hat\alpha)
= g(x) - 2\frac{\langle v, b - Ax\rangle^2}{\langle v, Av\rangle} + \frac{\langle v, b - Ax\rangle^2}{\langle v, Av\rangle}
= g(x) - \frac{\langle v, b - Ax\rangle^2}{\langle v, Av\rangle}.$$

8 Proof. For $v \neq 0$,
$$g(x + \hat\alpha v) = g(x) - \frac{\langle v, b - Ax\rangle^2}{\langle v, Av\rangle},$$
so $g(x + \hat\alpha v) < g(x)$ if $\langle v, b - Ax\rangle \neq 0$, and $g(x + \hat\alpha v) = g(x)$ if $\langle v, b - Ax\rangle = 0$. Now suppose $g(x^*) = \min_{x \in \mathbb{R}^n} g(x)$. Then $g(x^* + \hat\alpha v) \geq g(x^*)$ for every $v$, which forces $\langle v, b - Ax^*\rangle = 0$ for all $v$, hence $Ax^* = b$.

9 With
$$\alpha = \frac{\langle v, b - Ax\rangle}{\langle v, Av\rangle} = \frac{\langle v, r\rangle}{\langle v, Av\rangle}, \qquad r \equiv b - Ax,$$
if $r \neq 0$ and $\langle v, r\rangle \neq 0$, then
$$g(x + \alpha v) = g(x) - \frac{\langle v, r\rangle^2}{\langle v, Av\rangle} < g(x),$$
so $x + \alpha v$ is closer to $x^*$ than $x$ is. This suggests the iteration: given $x^{(0)}$ and $v^{(1)} \neq 0$, for $k = 1, 2, 3, \ldots$
$$\alpha_k = \frac{\langle v^{(k)}, b - Ax^{(k-1)}\rangle}{\langle v^{(k)}, Av^{(k)}\rangle}, \qquad x^{(k)} = x^{(k-1)} + \alpha_k v^{(k)},$$
then choose a new search direction $v^{(k+1)}$.

10 Steepest descent. Question: how should $\{v^{(k)}\}$ be chosen so that $\{x^{(k)}\} \to x^*$ rapidly? Let $\Phi : \mathbb{R}^n \to \mathbb{R}$ be differentiable at $x$. Then
$$\frac{\Phi(x + \varepsilon p) - \Phi(x)}{\varepsilon} = \nabla\Phi(x)^\top p + O(\varepsilon).$$
Neglecting the $O(\varepsilon)$ term, the right-hand side is minimized over all $p$ with $\|p\| = 1$ at
$$p = -\frac{\nabla\Phi(x)}{\|\nabla\Phi(x)\|},$$
i.e., the direction of largest descent.

11 Steepest descent direction of $g$. Denote $x = [x_1, x_2, \ldots, x_n]^\top$. Then
$$g(x) = \langle x, Ax\rangle - 2\langle x, b\rangle = \sum_{i=1}^n\sum_{j=1}^n a_{ij} x_i x_j - 2\sum_{i=1}^n x_i b_i,$$
$$\frac{\partial g}{\partial x_k}(x) = 2\sum_{i=1}^n a_{ki} x_i - 2 b_k = 2\big(A(k,:)\,x - b_k\big),$$
$$\nabla g(x) = \Big[\frac{\partial g}{\partial x_1}(x), \frac{\partial g}{\partial x_2}(x), \ldots, \frac{\partial g}{\partial x_n}(x)\Big]^\top = 2(Ax - b) = -2r.$$
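As a quick sanity check on $\nabla g(x) = 2(Ax - b)$, the following sketch (not part of the original slides; the random s.p.d. test matrix and NumPy usage are my own choices) compares the analytic gradient with a central finite-difference approximation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)          # random symmetric positive definite test matrix
b = rng.standard_normal(n)
x = rng.standard_normal(n)

def g(x):
    """g(x) = <x, Ax> - 2 <x, b>."""
    return x @ A @ x - 2 * x @ b

# analytic gradient from the slide: grad g(x) = 2 (A x - b)
grad_analytic = 2 * (A @ x - b)

# central finite differences, one coordinate at a time
eps = 1e-6
grad_fd = np.zeros(n)
for k in range(n):
    e = np.zeros(n); e[k] = eps
    grad_fd[k] = (g(x + e) - g(x - e)) / (2 * eps)

print(np.max(np.abs(grad_analytic - grad_fd)))   # should be ~1e-7 or smaller
```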

12 Steepest descent method (gradient method). Given $x^{(0)}$. For $k = 1, 2, 3, \ldots$: compute $r_{k-1} = b - Ax^{(k-1)}$; if $r_{k-1} = 0$, stop; else
$$\alpha_k = \frac{\langle r_{k-1}, r_{k-1}\rangle}{\langle r_{k-1}, Ar_{k-1}\rangle}, \qquad x^{(k)} = x^{(k-1)} + \alpha_k r_{k-1}.$$
Convergence Theorem. Let $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n > 0$ be the eigenvalues of $A$, $x^{(k)}, x^{(k-1)}$ the approximate solutions, and $x^*$ the exact solution. Then
$$\|x^{(k)} - x^*\|_A \leq \frac{\lambda_1 - \lambda_n}{\lambda_1 + \lambda_n}\,\|x^{(k-1)} - x^*\|_A, \qquad \text{where } \|x\|_A = \sqrt{x^\top A x}.$$
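A minimal NumPy sketch of the iteration above (not from the slides; the name `steepest_descent` and the parameters `tol`, `maxit` are illustrative, and a residual-norm tolerance replaces the exact test $r_{k-1} = 0$).

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, maxit=10_000):
    """Solve A x = b for s.p.d. A with the gradient (steepest descent) method."""
    x = x0.copy()
    for k in range(1, maxit + 1):
        r = b - A @ x                      # r_{k-1} = b - A x^(k-1)
        if np.linalg.norm(r) < tol:        # stop when the residual is small
            break
        alpha = (r @ r) / (r @ (A @ r))    # alpha_k = <r, r> / <r, A r>
        x = x + alpha * r                  # x^(k) = x^(k-1) + alpha_k r
    return x, k

# small s.p.d. test problem
rng = np.random.default_rng(1)
n = 50
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)
b = rng.standard_normal(n)
x, its = steepest_descent(A, b, np.zeros(n))
print(its, np.linalg.norm(b - A @ x))
```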

13 Conjugate gradient method

14 A-orthogonality. If $\kappa(A) = \lambda_1/\lambda_n$ is large, then $(\lambda_1 - \lambda_n)/(\lambda_1 + \lambda_n) \approx 1$ and convergence is very slow, so the gradient method is not recommended. Improvement: choose A-orthogonal search directions.

Definition. $p, q \in \mathbb{R}^n$ are called A-orthogonal (A-conjugate) if $p^\top A q = 0$.

15 Lemma. If $v_1, \ldots, v_n \neq 0$ are pairwise A-conjugate, then $v_1, \ldots, v_n$ are linearly independent.
Proof. Suppose $0 = \sum_{j=1}^n c_j v_j$. Then for each $k$,
$$0 = (v_k)^\top A \sum_{j=1}^n c_j v_j = \sum_{j=1}^n c_j (v_k)^\top A v_j = c_k (v_k)^\top A v_k,$$
so $c_k = 0$ for $k = 1, \ldots, n$, and $v_1, \ldots, v_n$ are linearly independent.

16 Theorem. Let $A$ be symmetric positive definite, $v_1, \ldots, v_n \in \mathbb{R}^n \setminus \{0\}$ pairwise A-conjugate, and $x_0$ given. For $k = 1, \ldots, n$, let
$$\alpha_k = \frac{\langle v_k, b - Ax_{k-1}\rangle}{\langle v_k, Av_k\rangle}, \qquad x_k = x_{k-1} + \alpha_k v_k.$$
Then $Ax_n = b$ and $\langle b - Ax_k, v_j\rangle = 0$ for $j = 1, 2, \ldots, k$.

17 Proof. From $x_k = x_{k-1} + \alpha_k v_k$,
$$Ax_n = Ax_{n-1} + \alpha_n Av_n = (Ax_{n-2} + \alpha_{n-1} Av_{n-1}) + \alpha_n Av_n = \cdots = Ax_0 + \alpha_1 Av_1 + \alpha_2 Av_2 + \cdots + \alpha_n Av_n.$$
Hence, for each $k$ (by A-conjugacy only the $k$-th term of the sum survives),
$$\langle Ax_n - b, v_k\rangle = \langle Ax_0 - b, v_k\rangle + \alpha_1\langle Av_1, v_k\rangle + \cdots + \alpha_n\langle Av_n, v_k\rangle
= \langle Ax_0 - b, v_k\rangle + \alpha_k\langle v_k, Av_k\rangle
= \langle Ax_0 - b, v_k\rangle + \frac{\langle v_k, b - Ax_{k-1}\rangle}{\langle v_k, Av_k\rangle}\langle v_k, Av_k\rangle
= \langle Ax_0 - b, v_k\rangle + \langle v_k, b - Ax_{k-1}\rangle.$$

18 Proof.
$$\langle Ax_n - b, v_k\rangle = \langle Ax_0 - b, v_k\rangle + \langle v_k, b - Ax_{k-1}\rangle
= \langle Ax_0 - b, v_k\rangle + \langle v_k, b - Ax_0 + Ax_0 - Ax_1 + \cdots + Ax_{k-2} - Ax_{k-1}\rangle$$
$$= \langle Ax_0 - b, v_k\rangle + \langle v_k, b - Ax_0\rangle + \langle v_k, Ax_0 - Ax_1\rangle + \cdots + \langle v_k, Ax_{k-2} - Ax_{k-1}\rangle
= \langle v_k, Ax_0 - Ax_1\rangle + \cdots + \langle v_k, Ax_{k-2} - Ax_{k-1}\rangle.$$
Since $x_i = x_{i-1} + \alpha_i v_i$ implies $Ax_i = Ax_{i-1} + \alpha_i Av_i$, i.e., $Ax_{i-1} - Ax_i = -\alpha_i Av_i$, we obtain
$$\langle Ax_n - b, v_k\rangle = -\alpha_1\langle v_k, Av_1\rangle - \cdots - \alpha_{k-1}\langle v_k, Av_{k-1}\rangle = 0,$$
hence $Ax_n = b$.

19 Proof. It remains to show $\langle b - Ax_k, v_j\rangle = 0$ for $j = 1, 2, \ldots, k$. Assume, as induction hypothesis, that $\langle r_{k-1}, v_j\rangle = \langle b - Ax_{k-1}, v_j\rangle = 0$ for $j = 1, \ldots, k-1$. Since
$$r_k = b - Ax_k = b - A(x_{k-1} + \alpha_k v_k) = r_{k-1} - \alpha_k Av_k,$$
we have
$$\langle r_k, v_k\rangle = \langle r_{k-1}, v_k\rangle - \alpha_k\langle Av_k, v_k\rangle
= \langle r_{k-1}, v_k\rangle - \frac{\langle v_k, b - Ax_{k-1}\rangle}{\langle v_k, Av_k\rangle}\langle Av_k, v_k\rangle = 0.$$
For $j = 1, \ldots, k-1$, by the induction hypothesis and A-conjugacy,
$$\langle r_k, v_j\rangle = \langle r_{k-1}, v_j\rangle - \alpha_k\langle Av_k, v_j\rangle = 0,$$
which completes the proof by mathematical induction.

20 Method of conjugate directions. Given $x^{(0)}$ and $v_1, \ldots, v_n \in \mathbb{R}^n \setminus \{0\}$ pairwise A-orthogonal, set $r_0 = b - Ax^{(0)}$. For $k = 1, \ldots, n$:
$$\alpha_k = \frac{\langle v_k, r_{k-1}\rangle}{\langle v_k, Av_k\rangle}, \qquad x^{(k)} = x^{(k-1)} + \alpha_k v_k, \qquad r_k = r_{k-1} - \alpha_k Av_k = b - Ax^{(k)}.$$
Question: how to find A-orthogonal search directions?

21 A-orthogonalization. Ordinary orthogonalization: $\tilde v_2 = v_2 - \alpha v_1$ with $\tilde v_2 \perp v_1$ requires
$$0 = v_1^\top \tilde v_2 = v_1^\top v_2 - \alpha\, v_1^\top v_1 \quad\Longrightarrow\quad \alpha = \frac{v_1^\top v_2}{v_1^\top v_1}.$$
A-orthogonalization: $\tilde v_2 = v_2 - \alpha v_1$ with $\tilde v_2$ A-orthogonal to $v_1$ requires
$$0 = v_1^\top A\tilde v_2 = v_1^\top A v_2 - \alpha\, v_1^\top A v_1 \quad\Longrightarrow\quad \alpha = \frac{v_1^\top A v_2}{v_1^\top A v_1}.$$

22 A-orthogonalization. With
$$\tilde v_2 = v_2 - \frac{v_1^\top A v_2}{v_1^\top A v_1}\, v_1,$$
the set $\{v_1, v_2\}$ is replaced by the A-orthogonal set $\{v_1, \tilde v_2\}$. Similarly, $\{v_1, v_2, v_3\}$ is replaced by the A-orthogonal set $\{v_1, \tilde v_2, \tilde v_3\}$ with
$$\tilde v_3 = v_3 - \alpha_1 v_1 - \alpha_2 \tilde v_2,$$
where
$$0 = v_1^\top A\tilde v_3 = v_1^\top A v_3 - \alpha_1 v_1^\top A v_1 \;\Longrightarrow\; \alpha_1 = \frac{v_1^\top A v_3}{v_1^\top A v_1},
\qquad
0 = \tilde v_2^\top A\tilde v_3 = \tilde v_2^\top A v_3 - \alpha_2 \tilde v_2^\top A\tilde v_2 \;\Longrightarrow\; \alpha_2 = \frac{\tilde v_2^\top A v_3}{\tilde v_2^\top A\tilde v_2}.$$
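The following sketch (not from the slides; the function names are illustrative) makes a given set of directions pairwise A-orthogonal by the Gram-Schmidt-type process above and then runs the conjugate-direction iteration of slide 20 with them.

```python
import numpy as np

def a_orthogonalize(V, A):
    """Make the directions in V pairwise A-orthogonal (Gram-Schmidt in the A-inner product)."""
    W = []
    for v in V:
        w = v.copy()
        for u in W:
            w = w - (u @ (A @ w)) / (u @ (A @ u)) * u   # subtract the A-projection onto u
        W.append(w)
    return W

def conjugate_directions(A, b, x0, V):
    """Conjugate-direction method: one exact line search along each A-orthogonal direction."""
    x = x0.copy()
    r = b - A @ x
    for v in V:
        alpha = (v @ r) / (v @ (A @ v))
        x = x + alpha * v
        r = r - alpha * (A @ v)
    return x

rng = np.random.default_rng(2)
n = 8
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)
b = rng.standard_normal(n)
V = a_orthogonalize(list(np.eye(n)), A)       # A-orthogonalize the standard basis
x = conjugate_directions(A, b, np.zeros(n), V)
print(np.linalg.norm(b - A @ x))              # exact (up to rounding) after n steps
```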

23 Practical implementation. Given $x^{(0)}$, compute $r_0 = b - Ax^{(0)}$ and take $v_1 = r_0$ (the steepest descent direction). Then
$$\alpha_1 = \frac{\langle v_1, r_0\rangle}{\langle v_1, Av_1\rangle}, \qquad x^{(1)} = x^{(0)} + \alpha_1 v_1, \qquad r_1 = r_0 - \alpha_1 Av_1.$$
The set $\{v_1, r_1\}$ is in general not A-orthogonal, so construct an A-orthogonal vector
$$v_2 = r_1 + \beta_1 v_1, \qquad \beta_1 = -\frac{\langle v_1, Ar_1\rangle}{\langle v_1, Av_1\rangle},$$
and continue:
$$\alpha_2 = \frac{\langle v_2, r_1\rangle}{\langle v_2, Av_2\rangle}, \qquad x^{(2)} = x^{(1)} + \alpha_2 v_2, \qquad r_2 = r_1 - \alpha_2 Av_2.$$

24 Construct an A-orthogonal vector from $\{v_1, v_2, r_2\}$. A priori two coefficients are needed:
$$v_3 = r_2 + \beta_{21} v_1 + \beta_{22} v_2, \qquad \beta_{21} = -\frac{v_1^\top A r_2}{v_1^\top A v_1}, \quad \beta_{22} = -\frac{v_2^\top A r_2}{v_2^\top A v_2}.$$
From $r_1 = r_0 - \alpha_1 Av_1$ we get
$$v_1^\top A r_2 = r_2^\top A v_1 = \frac{1}{\alpha_1}\big(r_2^\top r_0 - r_2^\top r_1\big).$$
Moreover
$$v_1^\top r_1 = v_1^\top r_0 - \alpha_1 v_1^\top A v_1 = v_1^\top r_0 - \frac{\langle v_1, r_0\rangle}{\langle v_1, Av_1\rangle}\, v_1^\top A v_1 = 0,
\qquad
v_2^\top r_1 = (r_1 + \beta_1 v_1)^\top r_1 = r_1^\top r_1 + \beta_1 v_1^\top r_1 = r_1^\top r_1.$$

25 With $r_2 = r_1 - \alpha_2 Av_2$ and $\alpha_2 = \langle v_2, r_1\rangle / \langle v_2, Av_2\rangle$:
$$\langle v_2, r_2\rangle = \langle v_2, r_1\rangle - \alpha_2\langle v_2, Av_2\rangle = 0,
\qquad
\langle r_2, v_1\rangle = \langle r_1, v_1\rangle - \alpha_2\langle Av_2, v_1\rangle = 0,$$
$$\langle r_2, r_0\rangle = \langle r_2, v_1\rangle = 0,
\qquad
\langle r_2, r_1\rangle = \langle r_2, v_2 - \beta_1 v_1\rangle = 0,$$
so
$$v_1^\top A r_2 = \frac{1}{\alpha_1}\big(r_2^\top r_0 - r_2^\top r_1\big) = 0
\;\Longrightarrow\;
\beta_{21} = -\frac{v_1^\top A r_2}{v_1^\top A v_1} = 0.$$
Therefore only a two-term recurrence is needed:
$$v_3 = r_2 + \beta_2 v_2, \qquad \beta_2 = -\frac{v_2^\top A r_2}{v_2^\top A v_2}.$$

26 In the general case, if $r_{k-1} \neq 0$, take
$$v_k = r_{k-1} + \beta_{k-1} v_{k-1}.$$
Requiring $\langle v_{k-1}, Av_k\rangle = 0$ gives
$$0 = \langle v_{k-1}, Av_k\rangle = \langle v_{k-1}, Ar_{k-1}\rangle + \beta_{k-1}\langle v_{k-1}, Av_{k-1}\rangle
\;\Longrightarrow\;
\beta_{k-1} = -\frac{\langle v_{k-1}, Ar_{k-1}\rangle}{\langle v_{k-1}, Av_{k-1}\rangle}.$$
Theorem. (i) $\{r_0, r_1, \ldots, r_{k-1}\}$ is an orthogonal set. (ii) $\{v_1, \ldots, v_k\}$ is an A-orthogonal set.

27 Reformulation of $\alpha_k$ and $\beta_k$. With $v_k = r_{k-1} + \beta_{k-1} v_{k-1}$ and $\langle v_{k-1}, r_{k-1}\rangle = 0$,
$$\alpha_k = \frac{\langle v_k, r_{k-1}\rangle}{\langle v_k, Av_k\rangle}
= \frac{\langle r_{k-1} + \beta_{k-1} v_{k-1}, r_{k-1}\rangle}{\langle v_k, Av_k\rangle}
= \frac{\langle r_{k-1}, r_{k-1}\rangle}{\langle v_k, Av_k\rangle} + \beta_{k-1}\frac{\langle v_{k-1}, r_{k-1}\rangle}{\langle v_k, Av_k\rangle}
= \frac{\langle r_{k-1}, r_{k-1}\rangle}{\langle v_k, Av_k\rangle},$$
so $\langle r_{k-1}, r_{k-1}\rangle = \alpha_k\langle v_k, Av_k\rangle$. From $r_k = r_{k-1} - \alpha_k Av_k$ and $\langle r_{k-1}, r_k\rangle = 0$,
$$\langle r_k, r_k\rangle = \langle r_{k-1}, r_k\rangle - \alpha_k\langle Av_k, r_k\rangle = -\alpha_k\langle r_k, Av_k\rangle.$$
Hence
$$\beta_k = -\frac{\langle v_k, Ar_k\rangle}{\langle v_k, Av_k\rangle}
= -\frac{\langle r_k, Av_k\rangle}{\langle v_k, Av_k\rangle}
= \frac{\langle r_k, r_k\rangle/\alpha_k}{\langle r_{k-1}, r_{k-1}\rangle/\alpha_k}
= \frac{\langle r_k, r_k\rangle}{\langle r_{k-1}, r_{k-1}\rangle}.$$

28 Algorithm (Conjugate Gradient Method). Given $x^{(0)}$, compute $r_0 = b - Ax^{(0)} = v_0$. For $k = 0, 1, \ldots$:
$$\alpha_k = \frac{\langle r_k, r_k\rangle}{\langle v_k, Av_k\rangle}, \qquad x^{(k+1)} = x^{(k)} + \alpha_k v_k, \qquad r_{k+1} = r_k - \alpha_k Av_k.$$
If $r_{k+1} = 0$, stop; else
$$\beta_k = \frac{\langle r_{k+1}, r_{k+1}\rangle}{\langle r_k, r_k\rangle}, \qquad v_{k+1} = r_{k+1} + \beta_k v_k.$$
Theorem. In exact arithmetic $Ax_n = b$. In floating-point arithmetic: for a well-conditioned $A$, $\|r_k\| < \mathrm{tol}$ is reached for some $k \leq n$; for an ill-conditioned $A$, reaching $\|r_k\| < \mathrm{tol}$ may require $k > n$ iterations.
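A direct NumPy transcription of the CG algorithm above (not from the slides; `tol` and `maxit` replace the exact stopping test $r_{k+1} = 0$).

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, maxit=None):
    """Conjugate gradient method for s.p.d. A, following slide 28."""
    n = len(b)
    maxit = maxit or 10 * n
    x = x0.copy()
    r = b - A @ x
    v = r.copy()
    rr = r @ r
    for k in range(maxit):
        Av = A @ v
        alpha = rr / (v @ Av)          # alpha_k = <r_k, r_k> / <v_k, A v_k>
        x = x + alpha * v              # x^(k+1) = x^(k) + alpha_k v_k
        r = r - alpha * Av             # r_{k+1} = r_k - alpha_k A v_k
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:      # stop when the residual is small
            break
        beta = rr_new / rr             # beta_k = <r_{k+1}, r_{k+1}> / <r_k, r_k>
        v = r + beta * v               # v_{k+1} = r_{k+1} + beta_k v_k
        rr = rr_new
    return x, k + 1

rng = np.random.default_rng(3)
n = 200
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)
b = rng.standard_normal(n)
x, its = conjugate_gradient(A, b, np.zeros(n))
print(its, np.linalg.norm(b - A @ x))
```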

29 Conjugate Gradient Method: Convergence Theorem. Let $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n > 0$ be the eigenvalues of $A$, $\{x^{(k)}\}$ the iterates produced by the CG method, and $x^*$ the exact solution. Then
$$\|x^{(k)} - x^*\|_A \leq 2\left(\frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}\right)^{k} \|x^{(0)} - x^*\|_A, \qquad \kappa = \frac{\lambda_1}{\lambda_n}.$$
For comparison, the iterates $\{x_G^{(k)}\}$ produced by the gradient method satisfy
$$\|x_G^{(k)} - x^*\|_A \leq \left(\frac{\lambda_1 - \lambda_n}{\lambda_1 + \lambda_n}\right)^{k} \|x_G^{(0)} - x^*\|_A = \left(\frac{\kappa - 1}{\kappa + 1}\right)^{k} \|x_G^{(0)} - x^*\|_A.$$
Since $\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1} \leq \frac{\kappa-1}{\kappa+1}$, CG is much better than the gradient method.
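To make the comparison concrete, this short sketch (not from the slides; $\kappa = 100$ is just an illustrative value) evaluates the two contraction factors and the number of iterations each bound predicts for a fixed error reduction.

```python
import numpy as np

kappa = 100.0
rho_grad = (kappa - 1) / (kappa + 1)                   # gradient-method factor
rho_cg = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)   # CG factor (ignoring the factor 2 in the bound)

# iterations predicted by each bound to reduce the A-norm error by a factor 1e6
target = 1e-6
print(rho_grad, np.ceil(np.log(target) / np.log(rho_grad)))   # ~0.980 -> ~691 iterations
print(rho_cg, np.ceil(np.log(target) / np.log(rho_cg)))       # ~0.818 -> ~69 iterations
```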

30 Preconditioner

31 Transform $Ax = b$ into $\tilde A\tilde x = \tilde b$ with
$$\tilde A = C^{-1} A C^{-\top}, \qquad \tilde x = C^{\top} x, \qquad \tilde b = C^{-1} b,
\qquad\text{i.e.}\qquad
C^{-1} A C^{-\top}\,(C^{\top} x) = C^{-1} b.$$
Goal: choose $C$ such that $\kappa(C^{-1} A C^{-\top}) < \kappa(A)$. One could apply the CG method to $\tilde A\tilde x = \tilde b$, get $\tilde x$, and then solve $x = C^{-\top}\tilde x$; this is nothing new. Question: can the CG method be applied to $\tilde A\tilde x = \tilde b$ so as to get $x$ directly?

32 Algorithm (Conjugate Gradient Method applied to $\tilde A\tilde x = \tilde b$). Given $\tilde x^{(0)}$, compute $\tilde r_0 = \tilde b - \tilde A\tilde x^{(0)} = \tilde v_0$. For $k = 0, 1, \ldots$:
$$\tilde\alpha_k = \frac{\langle \tilde r_k, \tilde r_k\rangle}{\langle \tilde v_k, \tilde A\tilde v_k\rangle}, \qquad \tilde x^{(k+1)} = \tilde x^{(k)} + \tilde\alpha_k\tilde v_k, \qquad \tilde r_{k+1} = \tilde b - \tilde A\tilde x^{(k+1)}.$$
If $\tilde r_{k+1} = 0$, stop; else
$$\tilde\beta_k = \frac{\langle \tilde r_{k+1}, \tilde r_{k+1}\rangle}{\langle \tilde r_k, \tilde r_k\rangle}, \qquad \tilde v_{k+1} = \tilde r_{k+1} + \tilde\beta_k\tilde v_k.$$
Let $\tilde v_k = C^{\top} v_k$ and $w_k = C^{-1} r_k$. Then
$$\tilde r_{k+1} = C^{-1} b - (C^{-1} A C^{-\top})\, C^{\top} x^{(k+1)} = C^{-1}(b - Ax^{(k+1)}) = C^{-1} r_{k+1},$$
so
$$\tilde\beta_k = \frac{\langle C^{-1} r_{k+1}, C^{-1} r_{k+1}\rangle}{\langle C^{-1} r_k, C^{-1} r_k\rangle} = \frac{\langle w_{k+1}, w_{k+1}\rangle}{\langle w_k, w_k\rangle}.$$

33 Algorithm (continued). With $\tilde v_k = C^{\top} v_k$ and $w_k = C^{-1} r_k$,
$$\tilde\alpha_k = \frac{\langle \tilde r_k, \tilde r_k\rangle}{\langle \tilde v_k, \tilde A\tilde v_k\rangle}
= \frac{\langle C^{-1} r_k, C^{-1} r_k\rangle}{\langle C^{\top} v_k, C^{-1} A C^{-\top} C^{\top} v_k\rangle}
= \frac{\langle w_k, w_k\rangle}{\langle C^{\top} v_k, C^{-1} A v_k\rangle}
= \frac{\langle w_k, w_k\rangle}{v_k^{\top} C C^{-1} A v_k}
= \frac{\langle w_k, w_k\rangle}{\langle v_k, Av_k\rangle}.$$

34 Algorithm (continued). The updates translate back to the original variables:
$$\tilde x^{(k+1)} = \tilde x^{(k)} + \tilde\alpha_k\tilde v_k
\;\Longleftrightarrow\; C^{\top} x^{(k+1)} = C^{\top} x^{(k)} + \tilde\alpha_k C^{\top} v_k
\;\Longleftrightarrow\; x^{(k+1)} = x^{(k)} + \tilde\alpha_k v_k,$$
$$\tilde r_{k+1} = \tilde r_k - \tilde\alpha_k\tilde A\tilde v_k
\;\Longleftrightarrow\; C^{-1} r_{k+1} = C^{-1} r_k - \tilde\alpha_k C^{-1} A C^{-\top} C^{\top} v_k
\;\Longleftrightarrow\; r_{k+1} = r_k - \tilde\alpha_k Av_k,$$
$$\tilde v_{k+1} = \tilde r_{k+1} + \tilde\beta_k\tilde v_k
\;\Longleftrightarrow\; C^{\top} v_{k+1} = C^{-1} r_{k+1} + \tilde\beta_k C^{\top} v_k
\;\Longleftrightarrow\; v_{k+1} = C^{-\top} C^{-1} r_{k+1} + \tilde\beta_k v_k = C^{-\top} w_{k+1} + \tilde\beta_k v_k.$$

35 Algorithm (continued). The iteration now only needs $w_k = C^{-1} r_k$ and $v_k$:
$$\tilde\alpha_k = \frac{\langle w_k, w_k\rangle}{\langle v_k, Av_k\rangle}, \qquad x^{(k+1)} = x^{(k)} + \tilde\alpha_k v_k, \qquad r_{k+1} = r_k - \tilde\alpha_k Av_k.$$
If $r_{k+1} = 0$, stop; else solve $Cw_{k+1} = r_{k+1}$ and set
$$\tilde\beta_k = \frac{\langle w_{k+1}, w_{k+1}\rangle}{\langle w_k, w_k\rangle}, \qquad v_{k+1} = C^{-\top} w_{k+1} + \tilde\beta_k v_k.$$
To start, we need $w_0 = C^{-1} r_0 = C^{-1}(b - Ax^{(0)})$ and $v_0 = C^{-\top} w_0$.

36 Algorithm (CG Method with preconditioner $C$). Given $C$ and $x^{(0)}$, compute $r_0 = b - Ax^{(0)}$; solve $Cw_0 = r_0$ and $C^{\top} v_0 = w_0$. For $k = 0, 1, \ldots$:
$$\alpha_k = \frac{\langle w_k, w_k\rangle}{\langle v_k, Av_k\rangle}, \qquad x^{(k+1)} = x^{(k)} + \alpha_k v_k, \qquad r_{k+1} = r_k - \alpha_k Av_k.$$
If $r_{k+1} = 0$, stop; else solve $Cw_{k+1} = r_{k+1}$ and $C^{\top} z_{k+1} = w_{k+1}$, and set
$$\beta_k = \frac{\langle w_{k+1}, w_{k+1}\rangle}{\langle w_k, w_k\rangle}, \qquad v_{k+1} = z_{k+1} + \beta_k v_k.$$
Writing $M = CC^{\top}$, we have $r_{k+1} = CC^{\top} z_{k+1} = Mz_{k+1}$, and the coefficients can be rewritten as
$$\alpha_k = \frac{\langle C^{-1} r_k, C^{-1} r_k\rangle}{\langle C^{\top} v_k, C^{-1} Av_k\rangle} = \frac{\langle z_k, r_k\rangle}{\langle v_k, Av_k\rangle},
\qquad
\beta_k = \frac{\langle C^{-1} r_{k+1}, C^{-1} r_{k+1}\rangle}{\langle C^{-1} r_k, C^{-1} r_k\rangle} = \frac{\langle z_{k+1}, r_{k+1}\rangle}{\langle z_k, r_k\rangle}.$$

37 Algorithm (CG Method with preconditioner $M$). Given $M$ and $x^{(0)}$, compute $r_0 = b - Ax^{(0)}$; solve $Mz_0 = r_0$ and set $v_0 = z_0$. For $k = 0, 1, \ldots$:
compute $\alpha_k = \langle z_k, r_k\rangle / \langle v_k, Av_k\rangle$;
compute $x^{(k+1)} = x^{(k)} + \alpha_k v_k$;
compute $r_{k+1} = r_k - \alpha_k Av_k$;
if $r_{k+1} = 0$, stop;
solve $Mz_{k+1} = r_{k+1}$;
compute $\beta_k = \langle z_{k+1}, r_{k+1}\rangle / \langle z_k, r_k\rangle$;
compute $v_{k+1} = z_{k+1} + \beta_k v_k$.
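A NumPy sketch of the preconditioned CG algorithm above (not from the slides). Passing the preconditioner as a callable `solve_M` that applies $M^{-1}$ is an implementation choice, not something specified on the slide, and `tol` again replaces the exact test $r_{k+1} = 0$.

```python
import numpy as np

def pcg(A, b, x0, solve_M, tol=1e-10, maxit=None):
    """Preconditioned CG (slide 37): solve_M(r) must return z with M z = r."""
    n = len(b)
    maxit = maxit or 10 * n
    x = x0.copy()
    r = b - A @ x
    z = solve_M(r)
    v = z.copy()
    zr = z @ r
    for k in range(maxit):
        Av = A @ v
        alpha = zr / (v @ Av)           # alpha_k = <z_k, r_k> / <v_k, A v_k>
        x = x + alpha * v
        r = r - alpha * Av
        if np.linalg.norm(r) < tol:
            break
        z = solve_M(r)                  # solve M z_{k+1} = r_{k+1}
        zr_new = z @ r
        beta = zr_new / zr              # beta_k = <z_{k+1}, r_{k+1}> / <z_k, r_k>
        v = z + beta * v
        zr = zr_new
    return x, k + 1

# example: Jacobi preconditioner M = diag(A)
rng = np.random.default_rng(4)
n = 100
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)
b = rng.standard_normal(n)
d = np.diag(A)
x, its = pcg(A, b, np.zeros(n), solve_M=lambda r: r / d)
print(its, np.linalg.norm(b - A @ x))
```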

38 Choices of $M$ (criteria): $\operatorname{cond}(M^{-1/2} A M^{-1/2})$ should be close to 1, i.e. $M^{-1/2} A M^{-1/2} \approx I$, i.e. $A \approx M$; the linear system $Mz = r$ must be easily solved (e.g. $M = LL^{\top}$); and $M$ must be symmetric positive definite.

39 Preconditioner $M$. Jacobi method: $A = D + (L + U)$, $M = D$,
$$x_{k+1} = -D^{-1}(L + U)x_k + D^{-1} b = -D^{-1}(A - D)x_k + D^{-1} b = x_k + D^{-1} r_k.$$
Gauss-Seidel: $A = (D + L) + U$, $M = D + L$,
$$x_{k+1} = -(D + L)^{-1} U x_k + (D + L)^{-1} b = (D + L)^{-1}\big[(D + L) - A\big]x_k + (D + L)^{-1} b = x_k + (D + L)^{-1} r_k.$$

40 Preconditioner $M$. SOR with parameter $\omega$: $\omega A = (D + \omega L) - \big((1 - \omega)D - \omega U\big) \equiv M - N$,
$$x_{k+1} = (D + \omega L)^{-1}\big[(1 - \omega)D - \omega U\big]x_k + \omega(D + \omega L)^{-1} b
= (D + \omega L)^{-1}\big[(D + \omega L) - \omega A\big]x_k + \omega(D + \omega L)^{-1} b$$
$$= \big[I - \omega(D + \omega L)^{-1} A\big]x_k + \omega(D + \omega L)^{-1} b
= x_k + \omega(D + \omega L)^{-1} r_k.$$
SSOR:
$$M(\omega) = \frac{1}{\omega(2 - \omega)}\,(D + \omega L)\, D^{-1}\, (D + \omega L)^{\top}.$$
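As a concrete illustration of these choices (not from the slides; the tridiagonal test matrix and the value $\omega = 1.5$ are arbitrary), the sketch below builds the Jacobi and SSOR preconditioners for a 1D Laplacian-type matrix and compares the condition number of $M^{-1}A$ with that of $A$.

```python
import numpy as np

n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # s.p.d. tridiagonal test matrix

D = np.diag(np.diag(A))                                 # diagonal part
L = np.tril(A, -1)                                      # strictly lower triangular part

M_jacobi = D
omega = 1.5
M_ssor = (D + omega * L) @ np.linalg.inv(D) @ (D + omega * L).T / (omega * (2 - omega))

def cond_precond(M, A):
    """Condition number of M^{-1} A (its eigenvalues are real and positive for s.p.d. M, A)."""
    ev = np.linalg.eigvals(np.linalg.solve(M, A)).real
    return ev.max() / ev.min()

print(np.linalg.cond(A))            # large: grows like n^2 for this matrix
print(cond_precond(M_jacobi, A))    # Jacobi: D = 2 I here, so no improvement for this A
print(cond_precond(M_ssor, A))      # SSOR: typically smaller; best with a well-chosen omega
```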
