Math 1080: Numerical Linear Algebra
Chapter 5: Solving Ax = b by Optimization

M. M. Sussman, sussmanm@math.pitt.edu
Office Hours: MW 1:45PM-2:45PM, Thack 622
March 2015

Some definitions

A matrix A is SPD (Symmetric Positive Definite) if
1. A = A^T
2. ⟨x, Ax⟩ = x^T A x > 0 for x ≠ 0

- A is negative definite if −A is SPD.
- A is skew-symmetric if A^T = −A.
- A is indefinite if it is symmetric and neither SPD nor negative definite.
- Two symmetric matrices satisfy A > B if (A − B) is SPD.

Important facts

- All eigenvalues of a symmetric matrix are real.
- A symmetric matrix has an orthogonal basis of eigenvectors.
- A symmetric matrix A is positive definite if and only if all λ(A) > 0.

A-inner product

If A is SPD then

    ⟨x, y⟩_A = x^T A y

is a weighted inner product on R^n, the A-inner product, and

    ‖x‖_A = sqrt(⟨x, x⟩_A) = sqrt(x^T A x)

is a weighted norm, the A-norm.
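A quick numerical illustration of the A-inner product and A-norm (a minimal MATLAB/Octave sketch; the particular matrix and vectors are made up for illustration, they are not from the slides):

    % Sketch: A-inner product and A-norm for an SPD matrix (illustrative data).
    A = [4 1; 1 3];                  % symmetric with positive eigenvalues, hence SPD
    x = [1; -2];  y = [3; 1];
    ip  = x' * A * y;                % <x,y>_A
    nrm = sqrt(x' * A * x);          % ||x||_A
    assert(isequal(A, A') && all(eig(A) > 0));   % check that A really is SPD
    fprintf('<x,y>_A = %g, ||x||_A = %g\n', ip, nrm);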

Proof

The A-inner product is bilinear, positive, and symmetric:

- Bilinear: ⟨u+v, y⟩_A = (u+v)^T A y = u^T A y + v^T A y = ⟨u, y⟩_A + ⟨v, y⟩_A, and
  ⟨αu, y⟩_A = (αu)^T A y = α (u^T A y) = α ⟨u, y⟩_A.
- Positive: ⟨x, x⟩_A = x^T A x > 0 for x ≠ 0, since A is SPD.
- Symmetric: ⟨x, y⟩_A = x^T A y = (x^T A y)^T = y^T A^T x = y^T A x = ⟨y, x⟩_A.

The A-norm is a weighted 2-norm

The norm induced by a 2×2 SPD matrix A is just a weighted 2-norm.

Proof: Let (λ1, φ1) and (λ2, φ2) be the two eigenvalue/eigenvector pairs of A, with the
eigenvectors taken orthonormal (possible because A is symmetric). Any vector can be written
v = α1 φ1 + α2 φ2, and

    ‖v‖_A² = (α1 φ1 + α2 φ2)^T A (α1 φ1 + α2 φ2)
           = (α1 φ1 + α2 φ2)^T (α1 λ1 φ1 + α2 λ2 φ2)
           = α1² λ1 + α2² λ2.

Hence ⟨x, x⟩_A induces a norm ‖x‖_A = sqrt(⟨x, x⟩_A).

Homework

Text, Exercises 258, 259.

Finding a solution = finding a minimum

For A SPD, the unique solution of Ax = b is also the unique minimizer of the functional

    J(x) = (1/2) ⟨x, Ax⟩ − ⟨x, b⟩ = (1/2) x^T A x − x^T b.

Proof: Take partial derivatives and set them to zero:

    ∂J/∂x_i = (1/2) ((Ax)_i + (A^T x)_i) − b_i = (Ax)_i − b_i = 0,

so the critical point satisfies Ax = b.
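A small sanity check of this equivalence (a sketch; the matrix and right-hand side are again illustrative, not from the text):

    % Sketch: the solution of Ax = b minimizes J(x) = 0.5*x'*A*x - x'*b for SPD A.
    A = [4 1; 1 3];  b = [1; 2];
    J = @(v) 0.5 * (v' * A * v) - v' * b;
    xstar = A \ b;                       % exact solution
    for k = 1:5                          % compare against a few random perturbations
        y = xstar + randn(2, 1);
        assert(J(y) > J(xstar));         % J(x* + y) = J(x*) + 0.5*y'*A*y > J(x*)
    end
    fprintf('J at the solution: %g\n', J(xstar));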

Notation

x = A^{-1} b is the argument that minimizes J(y):

    x = arg min_{y ∈ R^N} J(y).

Theorem 260

Let A be SPD. The solution of Ax = b is the unique minimizer of J(x). For any y ≠ 0,

    J(x + y) = J(x) + (1/2) y^T A y > J(x),

and for any y,

    ‖x − y‖_A² = 2 (J(y) − J(x)).

Proof of first claim

    J(x + y) = (1/2)(x + y)^T A (x + y) − (x + y)^T b
             = (1/2) x^T A x + y^T A x + (1/2) y^T A y − x^T b − y^T b
             = ((1/2) x^T A x − x^T b) + (y^T A x − y^T b) + (1/2) y^T A y
             = J(x) + y^T (Ax − b) + (1/2) y^T A y
             = J(x) + (1/2) y^T A y > J(x),

since A SPD and y ≠ 0 imply y^T A y > 0.

Proof of second claim

LHS:

    ‖x − y‖_A² = (x − y)^T A (x − y)
               = x^T A x − y^T A x − x^T A y + y^T A y
               = (since Ax = b)  x^T b − y^T b − y^T b + y^T A y
               = y^T A y − 2 y^T b + x^T b.

RHS:

    2 (J(y) − J(x)) = y^T A y − 2 y^T b − x^T A x + 2 x^T b
                    = (since Ax = b)  y^T A y − 2 y^T b − x^T (Ax − b) + x^T b
                    = y^T A y − 2 y^T b + x^T b.
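The second identity is easy to confirm numerically (a sketch with made-up data):

    % Sketch: check the Theorem 260 identity ||x - y||_A^2 = 2*(J(y) - J(x)).
    A = [4 1; 1 3];  b = [1; 2];               % illustrative SPD system
    J = @(v) 0.5 * (v' * A * v) - v' * b;
    x = A \ b;                                 % the minimizer
    y = [0.7; -1.1];                           % an arbitrary point
    lhs = (x - y)' * A * (x - y);              % ||x - y||_A^2
    rhs = 2 * (J(y) - J(x));
    fprintf('lhs = %.12f, rhs = %.12f\n', lhs, rhs);   % agree to rounding error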

The 2×2 case

2×2 system Ax = b with

    A = [ a  c
          c  d ].

Eigenvalues: (a − λ)(d − λ) − c² = 0, so

    λ = ( (a + d) ± sqrt( (a + d)² − 4(ad − c²) ) ) / 2.

Positive definite (both λ > 0)  ⟺  (a + d) > 0 and c² − ad < 0.

The 2×2 case cont'd

Write x = [x1, x2]^T:

    J(x1, x2) = (1/2) [x1, x2] [ a  c ; c  d ] [x1; x2] − [x1, x2] [b1; b2]
              = (1/2) (a x1² + 2c x1 x2 + d x2²) − (b1 x1 + b2 x2).

The surface z = J(x1, x2) is a paraboloid opening upward if and only if a > 0, d > 0, and
c² − ad < 0, i.e. if and only if the Hessian matrix (which is A) is SPD (Theorem 260).

[Figure: the surface z = J(x, y), a paraboloid opening upward.]

Paraboloid opening upward (again)

Consider b1 = b2 = 0, so x1, x2 are components of the error:

    J(x1, x2) = (1/2) (a x1² + 2c x1 x2 + d x2²).

Set x1 = (r/√a) cos θ and x2 = (r/√d) sin θ. Then

    2J = r² cos² θ + r² sin² θ + 2 r² (c/√(ad)) sin θ cos θ
       = r² ( 1 + (c/√(ad)) sin 2θ ),

so J > 0 for every r ≠ 0 and every θ if and only if |c|/√(ad) < 1, i.e. c² < ad.

General Descent Method, Algorithm 263

Given
1. Ax = b
2. a quadratic functional J(·) that is minimized at x = A^{-1} b
3. a maximum number of iterations itmax
4. an initial guess x^0:

    Compute r^0 = b − A x^0
    for n = 1:itmax
        (*)  Choose a direction vector d^n
        (**) Find α = α_n by solving the 1-d minimization problem:
                 α_n = arg min_α J(x^n + α d^n)
        x^{n+1} = x^n + α_n d^n
        r^{n+1} = b − A x^{n+1}
        if converged, exit, end
    end
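A minimal MATLAB sketch of Algorithm 263. The function name descent and the dirfun interface are my own; the default direction is the steepest-descent choice discussed on the next slides, and the line-search formula is the one derived there.

    % Sketch of the general descent method (Algorithm 263).
    % dirfun(r, n) returns the search direction d^n; by default d^n = r^n (steepest descent).
    function x = descent(A, b, x, itmax, tol, dirfun)
        if nargin < 6, dirfun = @(r, n) r; end
        r = b - A * x;
        for n = 1:itmax
            d = dirfun(r, n);                  % (*)  choose a direction vector d^n
            alpha = (d' * r) / (d' * A * d);   % (**) exact 1-d minimization of J along d
            x = x + alpha * d;
            r = b - A * x;
            if norm(r) < tol, break, end       % converged
        end
    end

For example, descent(A, b, zeros(size(b)), 100, 1e-8) runs plain steepest descent.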

Steepest descent

Functional: J(x) = (1/2) x^T A x − x^T b.
Descent direction: d^n = −∇J(x^n).

∇J = −r when J(x) = (1/2) x^T A x − x^T b

    ∂J/∂x_i = (1/2) (Ax)_i + (1/2) (x^T A)_i − b_i.

But (1/2)(x^T A)_i = (1/2)(A^T x)_i = (1/2)(Ax)_i since A is symmetric, so

    ∂J/∂x_i = (Ax)_i − b_i = −r_i,   i.e.   ∇J = −r.

Formula for α

The proof of the following is a homework problem. For an arbitrary direction d^n,

    α_n = (d^n)^T r^n / ( (d^n)^T A d^n ).

Hint: set d/dα ( J(x + α d) ) = 0 and solve.

Homework

Text, 264.
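A finite-difference check that the gradient of J really is Ax − b = −r (a sketch with illustrative data):

    % Sketch: centered-difference check that grad J(x) = A*x - b = -r for the quadratic J.
    A = [4 1; 1 3];  b = [1; 2];  x = [0.5; -0.3];
    J = @(v) 0.5 * (v' * A * v) - v' * b;
    g = zeros(2, 1);  h = 1e-6;
    for i = 1:2
        e = zeros(2, 1);  e(i) = 1;
        g(i) = (J(x + h*e) - J(x - h*e)) / (2*h);   % centered difference
    end
    disp([g, A*x - b]);    % the two columns agree to O(h^2)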

Notation

Using the ⟨·, ·⟩ notation, α for steepest descent becomes

    α_n = ⟨d^n, r^n⟩ / ⟨d^n, A d^n⟩ = ⟨d^n, r^n⟩ / ⟨d^n, d^n⟩_A.

Descent demonstration

descentdemo.m

Different descent methods: descent direction d^n

- Steepest descent direction: d^n = −∇J(x^n).
- Random directions: d^n is a randomly chosen vector.
- Gauss-Seidel-like descent: d^n cycles through the standard basis of unit vectors
  e_1, e_2, ..., e_N and repeats if necessary.
- Conjugate directions: d^n cycles through an A-orthogonal set of vectors.

Different descent methods: choice of functional J

- If A is SPD the most common choice is J(x) = (1/2) x^T A x − x^T b.
- Minimum residual methods: for general A, J(x) = (1/2) ⟨b − Ax, b − Ax⟩.
- Various combinations, such as residuals plus updates:
  J(x) = (1/2) ⟨b − Ax, b − Ax⟩ + (1/2) ⟨x − x^n, x − x^n⟩.

Homework

Text, 265, 268.

Stationary Iterative Methods

Split A = M − N. The iterative method is

    M x^{n+1} = b + N x^n,

or equivalently

    M (x^{n+1} − x^n) = b − A x^n.

Examples

- FOR: M = ρI, N = ρI − A.
- Jacobi: M = diag(A).
- Gauss-Seidel: M = D + L (the lower triangular part of A).

Householder lemma (269)

Let A be SPD, let x^n be given by M (x^{n+1} − x^n) = b − A x^n, and let e^n = x − x^n. Then

    ⟨e^n, A e^n⟩ − ⟨e^{n+1}, A e^{n+1}⟩ = ⟨(x^{n+1} − x^n), P (x^{n+1} − x^n)⟩,

where P = M + M^T − A.
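A minimal sketch of the splitting iteration; the function name splitting_iter is mine, and the commented splittings correspond to the examples above:

    % Sketch: stationary iteration M*(x_{n+1} - x_n) = b - A*x_n for a splitting A = M - N.
    function x = splitting_iter(A, b, M, x, itmax, tol)
        for n = 1:itmax
            r = b - A * x;
            x = x + M \ r;               % solve M*dx = r, then update
            if norm(r) < tol, break, end
        end
    end

    % Example splittings:
    %   M_jacobi = diag(diag(A));        % Jacobi
    %   M_gs     = tril(A);              % Gauss-Seidel: D + L
    % x = splitting_iter(A, b, tril(A), zeros(size(b)), 200, 1e-10);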

Convergence of FOR, Jacobi, GS and SOR

Corollary 270: For A SPD, P SPD ⟹ convergence is monotonic in the A-norm:

    ‖e^n‖_A > ‖e^{n+1}‖_A > ... → 0   as n → ∞.

Proof

The Householder lemma in this case gives

    ⟨e^n, A e^n⟩ − ⟨e^{n+1}, A e^{n+1}⟩ = ⟨(x^{n+1} − x^n), P (x^{n+1} − x^n)⟩ ≥ 0,

so ⟨e^n, A e^n⟩ > ⟨e^{n+1}, A e^{n+1}⟩, i.e. ‖e^{n+1}‖_A < ‖e^n‖_A.

‖e^n‖_A ≥ 0 and is monotone decreasing, so it has a limit. By the Cauchy criterion,

    ⟨e^n, A e^n⟩ − ⟨e^{n+1}, A e^{n+1}⟩ → 0,

and the Householder lemma then gives ‖x^{n+1} − x^n‖_P → 0. Hence

    M (x^{n+1} − x^n) = b − A x^n → 0,

so r^n → 0 and e^n = A^{-1} r^n → 0.

Monotone convergence

1. FOR converges monotonically in ‖·‖_A if P = M + M^T − A = 2ρI − A > 0, i.e. if ρ > (1/2) λ_max(A).
2. Jacobi converges monotonically in the A-norm if diag(A) > (1/2) A.
3. Gauss-Seidel converges monotonically in the A-norm for SPD A in all cases.
4. SOR converges monotonically in the A-norm if 0 < ω < 2.

Proofs

For Jacobi, M = diag(A), so

    P = M + M^T − A = 2 diag(A) − A > 0   if diag(A) > (1/2) A.

For GS, A = D + L + L^T (since A is SPD), so

    P = M + M^T − A = (D + L) + (D + L)^T − (D + L + L^T) = D > 0.

For SOR, M = ω^{-1} D + L and N = M − A, so (using U = L^T for symmetric A)

    P = M + M^T − A = M^T + N = ω^{-1} D + L^T + ((1 − ω)/ω) D − U
      = ((2 − ω)/ω) D > 0   for 0 < ω < 2.

Convergence of FOR in the A-norm is left as an exercise.
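The formulas for P are easy to confirm numerically (a sketch with an illustrative SPD tridiagonal matrix):

    % Sketch: check P = M + M' - A for the Gauss-Seidel and SOR splittings of an SPD matrix.
    A = [4 1 0; 1 4 1; 0 1 4];           % illustrative SPD matrix
    D = diag(diag(A));  L = tril(A, -1);

    M_gs = D + L;                        % Gauss-Seidel
    P_gs = M_gs + M_gs' - A;
    disp(norm(P_gs - D));                % 0: P equals D

    omega = 1.5;                         % SOR with 0 < omega < 2
    M_sor = D / omega + L;
    P_sor = M_sor + M_sor' - A;
    disp(norm(P_sor - (2 - omega)/omega * D));   % 0: P equals ((2-omega)/omega)*D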

Homework

Text, Exercises 275, 278.

In Exercise 278, you are to:
1. Give details of the proof that P > 0 implies convergence of FOR.
2. Prove the Householder lemma. Hint: expand the right side using P = M + M^T − A and the
   identities x^{n+1} − x^n = e^n − e^{n+1} and ⟨y, Wz⟩ = ⟨W^T y, z⟩.

Picking ρ for FOR

- FOR: ρ_optimal = (λ_max + λ_min)/2.
- This gives the fastest overall convergence for a fixed ρ.
- It is hard to calculate.
- What about choosing ρ_n = the best possible value for this step?

Variable ρ FOR

Given x^0 and a maximum number of iterations itmax:

    for n = 0:itmax
        Compute r^n = b − A x^n
        Compute ρ_n via a few auxiliary calculations
        x^{n+1} = x^n + (ρ_n)^{-1} r^n
        if converged, exit, end
    end

FOR again

Residuals r^n = b − A x^n for FOR satisfy

    r^{n+1} = r^n − ρ^{-1} A r^n.

Proof: x^{n+1} = x^n + ρ^{-1} r^n. Multiply by −A and add b to both sides:

    b − A x^{n+1} = b − A x^n − ρ^{-1} A r^n.
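A tiny check that the recursive residual formula agrees with the directly computed residual (illustrative data):

    % Sketch: the FOR residual recurrence r_{n+1} = r_n - (1/rho)*A*r_n equals b - A*x_{n+1}.
    A = [4 1; 1 3];  b = [1; 2];  x = [0; 0];  rho = 3;
    r = b - A * x;
    xnew = x + r / rho;                 % one FOR step
    r_direct    = b - A * xnew;
    r_recursive = r - (A * r) / rho;
    disp(norm(r_direct - r_recursive)); % 0 up to rounding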

Homework

Text, Exercise 281. Easy! Preparation for the following work.

Option 1: Residual minimization

Pick ρ_n to minimize ‖r^{n+1}‖²:

    ‖r^{n+1}‖² = ‖r^n − ρ^{-1} A r^n‖² = ⟨r^n − ρ^{-1} A r^n, r^n − ρ^{-1} A r^n⟩.

Since r^n is fixed, this is a simple function of ρ:

    J(ρ) = ⟨r^n − ρ^{-1} A r^n, r^n − ρ^{-1} A r^n⟩
         = ⟨r^n, r^n⟩ − 2 ⟨r^n, A r^n⟩ ρ^{-1} + ⟨A r^n, A r^n⟩ ρ^{-2}.

Setting J'(ρ) = 0 and solving for ρ = ρ_optimal gives

    ρ_optimal = ⟨A r^n, A r^n⟩ / ⟨r^n, A r^n⟩.

The cost of using this optimal value at each step: two extra dot products per step.

Option 2: J minimization

Minimize J(x^{n+1}):

    φ(ρ) = J(x^n + ρ^{-1} r^n)
         = (1/2) ⟨x^n + ρ^{-1} r^n, A (x^n + ρ^{-1} r^n)⟩ − ⟨x^n + ρ^{-1} r^n, b⟩.

Expanding, setting φ'(ρ) = 0 and solving, as before, gives

    ρ_optimal = ⟨r^n, A r^n⟩ / ⟨r^n, r^n⟩.

Option 2 requires SPD A. It is faster than Option 1.

Algorithm for Option 2

Given x^1, the matrix A, and a maximum number of iterations itmax:

    r^1 = b − A x^1
    for n = 1:itmax
        ρ_n = ⟨A r^n, r^n⟩ / ⟨r^n, r^n⟩
        x^{n+1} = x^n + (1/ρ_n) r^n
        if satisfied, exit, end
        r^{n+1} = b − A x^{n+1}
    end
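A sketch of variable-ρ FOR with both choices of ρ_n, in the spirit of Exercise 283 on the next page (the function name and interface are mine, not from the text):

    % Sketch: variable-rho FOR with the two choices of rho_n (Options 1 and 2).
    % option = 1 minimizes the next residual norm; option = 2 minimizes J (needs SPD A).
    function [x, nits] = variable_rho_for(A, b, x, itmax, tol, option)
        r = b - A * x;
        for nits = 1:itmax
            Ar = A * r;
            if option == 1
                rho = (Ar' * Ar) / (r' * Ar);   % residual minimization
            else
                rho = (r' * Ar) / (r' * r);     % J minimization (A must be SPD)
            end
            x = x + r / rho;
            r = b - A * x;
            if norm(r) < tol, return, end
        end
    end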

Homework

Text, Exercise 283. Implement and test Options 1 and 2.

The normal equations

Ae = r, so ‖r‖_2² = ‖Ae‖_2² = ‖e‖²_{A^T A}.

A^T A is SPD, so minimizing ‖r‖_2 is the same as minimizing the error in the A^T A-norm;
equivalently, apply the SPD machinery to the normal equations

    (A^T A) x = A^T b.

Both the bandwidth and the condition number of A^T A are much larger than those of A, but
A^T A is SPD.

Condition number of normal equations (284)

Let A be N×N and invertible. Then A^T A is SPD. If A is SPD then λ(A^T A) = λ(A)² and
cond_2(A^T A) = [cond_2(A)]².

Proof:
- Symmetry: (A^T A)^T = A^T A^{TT} = A^T A.
- Positivity: ⟨x, A^T A x⟩ = ⟨Ax, Ax⟩ = ‖Ax‖² > 0 for x ≠ 0, since A is invertible.
- Condition number: for A SPD,

      cond_2(A^T A) = cond_2(A²) = λ_max(A²)/λ_min(A²) = (λ_max(A)/λ_min(A))² = (cond_2(A))².

Squaring the condition number greatly increases the number of iterations.

Steepest descent

For A SPD: reduce J as much as possible in the gradient direction, using the Option 2 step
length.
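A quick check of the condition-number result (a sketch; the SPD matrix is generated randomly just for illustration):

    % Sketch: for SPD A, cond_2(A'*A) = cond_2(A)^2.
    n = 50;
    B = randn(n);  A = B' * B + n * eye(n);        % a convenient way to build an SPD matrix
    fprintf('cond(A)   = %.3e\n', cond(A));
    fprintf('cond(A''A) = %.3e\n', cond(A' * A));  % approximately cond(A)^2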

Steepest descent algorithm

Given SPD A, x^0, r^0 = b − A x^0, and a maximum number of iterations itmax:

    for n = 0:itmax
        r^n = b − A x^n
        α_n = ⟨r^n, r^n⟩ / ⟨r^n, A r^n⟩
        x^{n+1} = x^n + α_n r^n
        if converged, exit, end
    end

α_n was called 1/ρ_n earlier.

Example: It is still slow!

N = 2, x = (x1, x2)^T,

    A = [ 2   0
          0  50 ],     b = [ 2
                             0 ].

    J(x) = (1/2) [x1, x2] [ 2 0 ; 0 50 ] [x1; x2] − [x1, x2] [2; 0]
         = (1/2) (2 x1² + 50 x2²) − 2 x1
         = x1² − 2 x1 + 25 x2²
         = (x1 − 1)² + ( x2 / (1/5) )² − 1,

so the level sets of J are ellipses centered at the solution (1, 0) with semi-axes in the
ratio 1 : 1/5.

Steepest descent illustration

[Figure: steepest descent zig-zagging along the narrow ellipses; limit = (1, 0),
x^0 = (11, 1), x^4 = (6.37, 0.54).]

SD is no faster than FOR (287)

Let A be SPD and κ = λ_max(A)/λ_min(A). The steepest descent method converges to the
solution of Ax = b for any x^0. The error x − x^n satisfies

    ‖x − x^n‖_A ≤ ( (κ − 1)/(κ + 1) )^n ‖x − x^0‖_A

and

    J(x^n) − J(x) ≤ ( (κ − 1)/(κ + 1) )^n ( J(x^0) − J(x) ).

But no eigenvalue estimates are required!
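Steepest descent applied to this 2×2 example (a sketch; the slides' own demonstration is descentdemo.m, which is not reproduced here):

    % Sketch: steepest descent on the ill-conditioned example A = diag([2,50]), b = [2;0].
    A = [2 0; 0 50];  b = [2; 0];
    x = [11; 1];                             % starting guess used on the slide
    for n = 1:500
        r = b - A * x;
        if norm(r) < 1e-8, break, end
        alpha = (r' * r) / (r' * A * r);     % exact line search along the gradient
        x = x + alpha * r;
    end
    fprintf('n = %d, x = (%.4f, %.4f)\n', n, x(1), x(2));   % slow approach to (1, 0)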

Proof

    ⟨x − x^n, A(x − x^n)⟩ = ⟨x, b⟩ + 2 J(x^n),

so minimizing J is the same as minimizing ‖x − x^n‖_A.

- Steepest descent reduces J maximally.
- FOR reduces ‖x − x^n‖_A with a known bound.

Doing one iteration of each, starting from the same x^{n−1},

    ‖x − x^n_SD‖_A ≤ ‖x − x^n_FOR‖_A ≤ ( (κ − 1)/(κ + 1) ) ‖x − x^{n−1}‖_A.

Theorem 288: inequality is sharp!

Consider Ax = b. If

    x^0 = x − (λ_2 φ_1 ± λ_1 φ_2),

where φ_1 and φ_2 are the (orthonormal) eigenvectors of λ_1 = λ_min(A) and λ_2 = λ_max(A)
respectively, then the convergence rate is precisely

    (λ_2 − λ_1)/(λ_2 + λ_1) = (κ − 1)/(κ + 1).

Proof

    x^0 = x − (λ_2 φ_1 ± λ_1 φ_2)
    e^0 = λ_2 φ_1 ± λ_1 φ_2
    r^0 = A e^0 = λ_1 λ_2 φ_1 ± λ_1 λ_2 φ_2
    A r^0 = λ_1² λ_2 φ_1 ± λ_1 λ_2² φ_2

    α_0 = ⟨r^0, r^0⟩ / ⟨r^0, A r^0⟩
        = (λ_1² λ_2² + λ_1² λ_2²) / (λ_1³ λ_2² + λ_1² λ_2³)
        = 2 / (λ_1 + λ_2)

    x^1 = x^0 + α_0 r^0
        = x − λ_2 ( 1 − 2λ_1/(λ_1 + λ_2) ) φ_1 ∓ λ_1 ( 1 − 2λ_2/(λ_1 + λ_2) ) φ_2
        = x − λ_2 ( (λ_2 − λ_1)/(λ_2 + λ_1) ) φ_1 ± λ_1 ( (λ_2 − λ_1)/(λ_2 + λ_1) ) φ_2
        = x − ( (λ_2 − λ_1)/(λ_2 + λ_1) ) ( λ_2 φ_1 ∓ λ_1 φ_2 ),

so ‖e^1‖_A = ( (λ_2 − λ_1)/(λ_2 + λ_1) ) ‖e^0‖_A, and the same contraction repeats at every
step.

Homework

Text, Exercise.
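Theorem 288 can be checked numerically: build an SPD matrix, start from x^0 = x − (λ_2 φ_1 + λ_1 φ_2), take one steepest descent step, and compare the error ratio with (κ − 1)/(κ + 1). The matrix below is illustrative only:

    % Sketch: one steepest descent step from the Theorem 288 starting vector.
    A = [4 1; 1 3];
    [V, D] = eig(A);
    [lam, idx] = sort(diag(D));  V = V(:, idx);   % lam(1) = lambda_min, lam(2) = lambda_max
    xstar = [1; 2];  b = A * xstar;               % manufacture a problem with known solution

    x = xstar - (lam(2) * V(:,1) + lam(1) * V(:,2));   % x^0 = x - (l2*phi1 + l1*phi2)
    Anorm = @(v) sqrt(v' * A * v);
    e0 = Anorm(xstar - x);
    r = b - A * x;
    x = x + ((r' * r) / (r' * A * r)) * r;        % one steepest descent step
    e1 = Anorm(xstar - x);
    kappa = lam(2) / lam(1);
    fprintf('observed %.6f, (kappa-1)/(kappa+1) = %.6f\n', e1/e0, (kappa-1)/(kappa+1));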

Exercise 290: Convection-Diffusion Equation

    −ε ∆u + c · ∇u = f   inside Ω = [0, 1] × [0, 1],
    u = g                on ∂Ω,

with

    c = [ 1
          0 ].

The convection term is discretized as

    c · ∇u = ∂u/∂x ≈ ( u(i+1, j) − u(i−1, j) ) / (2h).

Modify jacobi2d.m.

Comments about CDE

- c is like a wind, blowing the solution to the right.
- Interesting behavior for ε < |c|.
- Pe = cell Péclet number. The discretization becomes unstable as Pe becomes large.

Real problem

First, debugging:
- Add in the convection term. It ends up with a factor of h in the numerator.
- Debug with f = 1, g = x − y. The exact solution is u = g, and it is also the exact
  discrete solution! The diffusion part is debugged already.
- Debug with N = 5: you can see the solution on the screen, and h = 0.2, so you can
  recognize solution values on sight.
- Start from the exact solution: you must get the answer in 1 iteration.
- Start from the exact solution and run many iterations: it must stay exact.
- Start from zero: do you get the exact solution? If not, program in the exact solution and
  print the norm of the error at each iteration.
- Double-check the original code!

Then the real problem:
- N = 50, f = x + y, g = 0.
- Jacobi, steepest descent, residual minimization.
- ε = 1.0, 0.1, 0.01, 0.001.
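One way the modified Jacobi sweep might look for −ε ∆u + u_x = f under the centered-difference discretization above (a sketch only; jacobi2d.m itself is not reproduced here, and the function and variable names are mine):

    % Sketch: one Jacobi sweep for -epsilon*Laplacian(u) + u_x = f, centered differences,
    % on a uniform grid with spacing h; Dirichlet boundary values are already stored in u.
    function unew = jacobi_cde_sweep(u, f, epsilon, h)
        unew = u;                                % boundary values are kept
        n = size(u, 1);
        for i = 2:n-1                            % first index plays the role of x
            for j = 2:n-1
                % -epsilon*(uE+uW+uN+uS-4u)/h^2 + (uE-uW)/(2h) = f, solved for u(i,j):
                unew(i,j) = ( h^2 * f(i,j) ...
                            + epsilon * (u(i+1,j) + u(i-1,j) + u(i,j+1) + u(i,j-1)) ...
                            - (h/2) * (u(i+1,j) - u(i-1,j)) ) / (4 * epsilon);
            end
        end
    end

Note the factor of h (not h²) multiplying the convection difference in the numerator, as remarked in the debugging notes above.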

[Figures: computed solutions for ε = 1.0, ε = 0.1, ε = 0.01, and ε = 0.001.]

Iterative methods

Numbers of iterations:

[Table: iteration counts for Jacobi, steepest descent, and residual minimization at each
value of ε; Jacobi is marked "diverges" for at least one case. The individual counts are
not recoverable from the transcription.]
