High Performance Nonlinear Solvers

Michael McCourt
Mathematics and Computer Science Division, Argonne National Laboratory
IIT Meshfree Seminar, September 19, 2011

What is a nonlinear system?

Every nonlinear system of equations can be described as F(u) = 0 for u ∈ R^N and F : R^N → R^N. F is often referred to as a residual function.

This includes:
  x + 2 = 3
  Ax = b
  x^3 = 3^x

This does not include:
  x + 2 < 3
  min_t ‖Ax - b(t)‖
  x^3 = 3^x, x ∈ Z

What can become a nonlinear system?

Consider the problem

  F(u) = α - ∫_0^u e^{-t^2} dt = 0,  α ∈ R.

This is a nonlinear equation, but because e^{-t^2} has no elementary antiderivative, there is no way to evaluate F(u) exactly.

Solution: Approximate the integral with, e.g., the trapezoid rule, Gauss quadrature, or Monte Carlo, and call that discretization Ĩ. Then define F̃(u) = α - Ĩ(u).

What can become a nonlinear system?

Consider the problem

  u_t(t) - f(u, t) = 0,  u(0) = u_0.

In trying to solve for u, what does it mean to apply d/dt?

Solution: Among other possible options, we could discretize the solution on a grid and solve for u(t) at specific times (labeled u^{k+1}), with a finite difference approximation to u_t(t) involving u^k:

  (1/Δt)(u^{k+1} - u^k) - f(u^{k+1}, t) = 0,  k = 0, 1, ...
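Returning to the quadrature example above, here is a minimal sketch (not from the talk; the integrand e^{-t^2}, the node count, and the use of bisection are illustrative assumptions): discretize the integral with the trapezoid rule and root-find on the resulting residual F̃.

    import numpy as np

    def residual(u, alpha, n=200):
        # F~(u) = alpha - I~(u): trapezoid rule for the integral of
        # exp(-t^2) over [0, u] on n+1 uniformly spaced nodes
        t = np.linspace(0.0, u, n + 1)
        w = np.exp(-t**2)
        I_tilde = (t[1] - t[0]) * (w.sum() - 0.5 * (w[0] + w[-1]))
        return alpha - I_tilde

    # Bisection on the discretized residual for alpha = 0.5; the integral
    # grows with u, so residual > 0 means the root lies to the right
    lo, hi = 0.0, 5.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if residual(mid, 0.5) > 0.0:
            lo = mid
        else:
            hi = mid
    print(lo)   # ~0.551, since the exact integral is (sqrt(pi)/2) erf(u)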

What can become a nonlinear system?

Consider the problem

  min_{u ∈ Ω ⊂ R^N} G(u).

As mentioned earlier, optimization problems are not nonlinear systems because there is no residual function to evaluate.

Solution: A technique referred to as Quasi-Newton leverages the fact that local minima are reached when ∇G(u) = 0. By discretizing the gradient ∇G we can define F(u) = (∇G)(u).

How do we solve nonlinear systems?

Picard iteration: u^{k+1} = f(u^k). Also called fixed point iteration, or nonlinear Richardson. [Charles Émile Picard]

Limitations of Picard include:
  Must be able to write F(u) = u - f(u) such that ‖f'‖ < 1 near the solution.
  May need a good initial guess u^0.
  Convergence may be slow.

How do we solve nonlinear systems?

Stochastic search: F(u) = 0 becomes min_u ‖F(u)‖. Reformulate the nonlinear system as an optimization problem and solve it with optimization techniques. [Nicholas Metropolis]

Limitations of stochastic search include:
  Produces a solution in distribution.
  Computationally costly; may require extra memory.
  Less rigorous mathematics (‖F(u)‖ may not have smooth derivatives).

How do we solve nonlinear systems?

Newton's method: u^{k+1} = u^k - J(F)(u^k)^{-1} F(u^k). A quadratically convergent algorithm from back in the day. [Sir Isaac Newton]

Limitations of Newton's method include:
  A good initial guess is needed.
  Requires Jacobian knowledge.
  A linear solve is required at each nonlinear iteration.
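A minimal sketch of Picard iteration (illustrative, not code from the talk; the scalar test problem u = cos(u) is my assumption, chosen because cos is a contraction near its fixed point, so the ‖f'‖ < 1 condition above holds):

    import numpy as np

    def picard(f, u0, tol=1e-10, maxit=200):
        # Fixed-point (Picard) iteration u_{k+1} = f(u_k); converges when
        # f is a contraction (|f'| < 1) near the solution
        u = u0
        for k in range(maxit):
            u_next = f(u)
            if abs(u_next - u) < tol:
                return u_next, k + 1
            u = u_next
        raise RuntimeError("Picard iteration did not converge")

    root, its = picard(np.cos, 1.0)
    print(root, its)   # ~0.739085 after a few dozen iterations: slow convergence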

Derivation of Newton's Method

Where does the iteration u^{k+1} = u^k - J(F)(u^k)^{-1} F(u^k), k = 0, 1, ..., to solve F(u) = 0 come from?

Taylor series: Assume you are at step u^k and the solution is u*, meaning Δu^k = u* - u^k. Then

  F(u^k + Δu^k) = F(u^k) + J(F)(u^k) Δu^k + O(‖Δu^k‖^2),

where the left-hand side is F(u*) = 0 and the last term is neglected, leaving

  0 ≈ F(u^k) + J(F)(u^k) Δu^k.

When the steps Δu^k get small enough, u^k → u*.

Making Newton's method practical

Quadratic convergence makes Newton's method the optimal choice, if we can circumvent the limitations. For Newton's method to be practical we need:
  Globalization - How bad can our initial guess be and still see convergence?
  Linear solvers - Can we efficiently invert the Jacobian?
  Jacobian computation - How can we efficiently evaluate the Jacobian? Can we make do with a cheap approximation to the Jacobian?

Globalization

Why does a bad initial guess prevent convergence? Recall the Taylor expansion

  F(u^k + Δu^k) = F(u^k) + J(F)(u^k) Δu^k + O(‖Δu^k‖^2),

so that

  -J(F)(u^k)^{-1} F(u^k) = Δu^k + O(‖J(F)(u^k)^{-1}‖ ‖Δu^k‖^2).

If Δu^k is too large, the assumption that the O(‖Δu^k‖^2) term is negligible is invalid. This means that the linear system solution -J(F)(u^k)^{-1} F(u^k) is a poor approximation to Δu^k.

Globalization

How do we implement Newton's method for a bad initial guess? Line search - take a shorter step in the Newton direction and make sure to reduce the residual norm.

Why does that make sense? Newton's method converges quadratically with a decent initial guess. As long as we are reducing the norm, we will eventually get close enough for Newton's method to converge as it should.
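The derivation translates directly into code. Below is a minimal dense-algebra sketch of Newton's method with the backtracking line search just described: halve the step until the residual norm decreases. The 2x2 test system (the unit circle intersected with u_0 = u_1) is an assumption for illustration, not the talk's implementation.

    import numpy as np

    def newton(F, J, u0, tol=1e-10, maxit=50):
        # Newton's method with backtracking line search: shrink the step in
        # the Newton direction until the residual norm is reduced
        u = np.array(u0, dtype=float)
        for _ in range(maxit):
            r = F(u)
            if np.linalg.norm(r) < tol:
                return u
            du = np.linalg.solve(J(u), -r)            # Newton direction
            lam = 1.0
            while np.linalg.norm(F(u + lam * du)) >= np.linalg.norm(r):
                lam *= 0.5                            # globalization: shorter step
                if lam < 1e-10:
                    raise RuntimeError("line search stalled")
            u = u + lam * du
        raise RuntimeError("Newton did not converge")

    # Assumed test system: F(u) = [u_0^2 + u_1^2 - 1, u_0 - u_1]
    F = lambda u: np.array([u[0]**2 + u[1]**2 - 1.0, u[0] - u[1]])
    J = lambda u: np.array([[2.0*u[0], 2.0*u[1]], [1.0, -1.0]])
    print(newton(F, J, [2.0, 0.5]))   # -> [0.70710678, 0.70710678]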

Globalization

How do we implement Newton's method for a bad initial guess? Trust region - prevent the iteration from entering a region with unacceptable values.

Why does that make sense? If you have physical knowledge about the system, use it to restrict the steps when possible. Example - Pressure cannot be negative, so if the iteration produces a negative value, take a smaller step.

Globalization

How do we implement Newton's method for a bad initial guess? Pseudotransient continuation - solve an equivalent time-dependent system whose steady state is the desired solution.

Why does that make sense? This one is a little more difficult to understand. In trying to solve F(u) = 0, we can find the steady-state solution to

  u_t(x, t) = F(u(x, t)),  u(x, 0) = u_0(x).

This time-dependent system at steady state is independent of the initial condition. It is much better conditioned, although we're not interested in why here.

Linear Solvers

How do we find the Newton step J(F)(u^k) Δu^k = -F(u^k) efficiently?

Question: Do we even need the exact inverse J(F)(u^k)^{-1} F(u^k)? Actually, no. It turns out that inexact Newton, which only requires

  ‖J(F)(u^k) Δu^k + F(u^k)‖ < ε,

will also converge quadratically (provided the tolerance ε is tightened as the residual shrinks). This means an iterative solver can be used. Furthermore, what's the point in exactly solving the linear system if a globalization technique (e.g., line search) is being used?

Linear Solvers

Now that we know an iterative solver can be used to find the Newton step, new opportunities are available: the Jacobian no longer needs to be computed - only the action J(F)(u)v. How can we take advantage of this?

Finite differences: From

  F(u + hv) = F(u) + h J(F)(u)v + O(h^2),

we get

  J(F)(u)v ≈ (1/h)(F(u + hv) - F(u)).

Jacobian-vector products can be approximated by finite differences at the cost of 1 extra function evaluation. This does not require computing the full Jacobian.
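A sketch of the finite-difference Jacobian-vector product (the step-size heuristic below is one common choice, my assumption rather than the talk's):

    import numpy as np

    def jacvec_fd(F, u, v):
        # Matrix-free action J(F)(u) v ~ (F(u + h v) - F(u)) / h, costing one
        # extra function evaluation; h balances truncation and rounding error
        nv = np.linalg.norm(v)
        if nv == 0.0:
            return np.zeros_like(u)
        h = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u)) / nv
        return (F(u + h * v) - F(u)) / h

    # Assumed test function: Jacobian is [[2 u_0, 1], [0, cos(u_1)]]
    F = lambda u: np.array([u[0]**2 + u[1], np.sin(u[1])])
    u, v = np.array([1.0, 2.0]), np.array([1.0, 0.0])
    print(jacvec_fd(F, u, v))   # ~[2, 0], the first column of the true Jacobian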

Linear Solvers

Now that we know an iterative solver can be used to find the Newton step, new opportunities are available: the Jacobian no longer needs to be computed - only the action J(F)(u)v. How can we take advantage of this?

Complex derivatives: To avoid cancellation from finite differences, take a complex step:

  F(u + ihv) = F(u) + ih J(F)(u)v + O(h^2),

so that

  Re(F(u + ihv)) ≈ F(u),  (1/h) Im(F(u + ihv)) ≈ J(F)(u)v.

Function evaluations and Jacobian-vector products can be computed simultaneously, given a real function F that is overloaded to accept complex arguments.

Linear Solvers

After choosing a linear solver tolerance ε,

  ‖J(F)(u^k) Δu^k + F(u^k)‖ < ε

can be solved via GMRES or some other iterative method without ever computing the true Jacobian. This introduces the Krylov into Newton-Krylov-Schwarz.

Unfortunately, most problems of interest are rather ill-conditioned, meaning that an iterative solver will converge very slowly.

Preconditioning: To combat this, it is common to use a preconditioner. Unfortunately, since we don't have the true Jacobian, we have no idea what a good preconditioner looks like.

Jacobian Computation

Recall the iterative approach to solving systems: unpreconditioned methods for Ax = b form the Krylov space

  K_n = span{b, Ab, ..., A^n b}.

We only have the ability to conduct matrix-vector products and do not have access to the true Jacobian. Since the Jacobian-vector products are being approximated via finite differences, the true Jacobian is not necessary.

Recall the structure of a preconditioned Krylov subspace for the problem (A M^{-1})(M x) = b:

  K_n = span{b, A M^{-1} b, ..., (A M^{-1})^n b}.

How can we approximate a Jacobian matrix with which to create a preconditioner? (Hint: it doesn't need to be perfect...)

Jacobian Computation

Approximating the Jacobian can be done via finite differences:

  J(F)(u)v ≈ (1/h)(F(u + hv) - F(u)).

If v is set to the k-th column of the identity matrix I_N, J(F)(u)v will be the k-th column of J(F)(u):

  J(F)(u) I_N = J(F)(u).

Approximating the Jacobian with this approach will require N function evaluations, which is unacceptably high.
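Two of the ideas above combine naturally: a complex-step Jacobian-vector product wrapped as a LinearOperator lets SciPy's GMRES solve the Newton system with no assembled Jacobian. A hedged sketch; the residual function is an assumption and must accept complex arguments.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def jacvec_cs(F, u, v, h=1e-20):
        # Complex-step product: Im(F(u + i h v)) / h = J(F)(u) v with no
        # subtractive cancellation, so h may be taken extremely small
        return np.imag(F(u + 1j * h * v)) / h

    # Assumed complex-safe residual; solve J(F)(u) du = -F(u) matrix-free
    F = lambda u: np.array([u[0]**2 + u[1], u[1]**3 - u[0]])
    u = np.array([1.0, 2.0])
    A = LinearOperator((2, 2), matvec=lambda v: jacvec_cs(F, u, v))
    du, info = gmres(A, -F(u))
    print(du, info)   # info == 0 signals convergence of the inexact Newton step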

Jacobian Computation

Approximating the Jacobian via finite differences is practical when working with a sparse matrix. The nonzero structure of the matrix may produce columns which are structurally orthogonal; those columns can all be computed with a single function evaluation. (Grouping the columns this way is a graph coloring problem.)

Jacobian Computation

Approximating the Jacobian can also be done via automatic differentiation (AD). This computes derivatives of functions without the loss of accuracy from cancellation or truncation present in finite differences. AD likely requires access to the source code, which may be unreasonable in some cases.

Where are we now?

We have the following steps to solve F(u) = 0:
  1. Use Newton's method to iterate from an initial guess u^0 to the solution u*.
  2. Find the next iterate by solving ‖J(F)(u^k) Δu^k + F(u^k)‖ < ε iteratively.
  3. Precondition the iterative method using an approximate Jacobian.
  4. Apply line search to the Newton iterate to improve convergence.

Preconditioners

Now that we have an approximate Jacobian via coloring, how can we precondition our system? There are literally thousands of preconditioners that exist for solving systems; there is a cottage industry for every application where a specialized preconditioner could exist. The most common preconditioners are:
  LU - Use the full inverse of M.
  ILU - Cheaply approximate the full inverse while controlling memory costs.
  Multigrid - Multilevel solvers are much more complicated but helpful for many problems.
  Schwarz - Domain decomposition techniques help reduce parallel communication and improve scalability.
  FFT - Some systems respond well to transforms.
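To make the coloring idea concrete, here is a sketch for a problem whose Jacobian is tridiagonal (the residual, the 3-color scheme, and the hard-coded sparsity pattern are all my assumptions for illustration): columns j, j+3, j+6, ... never share a row, so 3 function evaluations recover the whole Jacobian instead of one per column.

    import numpy as np

    def F(u):
        # Assumed 1-D nonlinear residual with a tridiagonal Jacobian
        n = len(u)
        r = np.empty(n)
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            r[i] = left - 2.0 * u[i] + right + u[i] ** 2
        return r

    def fd_jacobian_colored(F, u, colors, h=1e-7):
        # Perturb all columns of one color at once; structural orthogonality
        # lets each difference be unscrambled back into individual columns
        n = len(u)
        J = np.zeros((n, n))
        F0 = F(u)
        for c in np.unique(colors):
            dF = (F(u + h * (colors == c)) - F0) / h
            for j in np.where(colors == c)[0]:
                rows = [i for i in (j - 1, j, j + 1) if 0 <= i < n]  # tridiagonal
                J[rows, j] = dF[rows]
        return J

    n = 9
    u = np.linspace(0.1, 0.9, n)
    J = fd_jacobian_colored(F, u, np.arange(n) % 3)  # 3 F evaluations, not 9
    print(np.round(J, 3))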

Preconditioners

What does "I'll use preconditioner [pick one]" mean? When we compute M via coloring we get a matrix M ≈ J(F)(u^k). This matrix is not necessarily the matrix which is inverted in A M^{-1} b. What's going on here? In order to make M^{-1} easier to compute, some values are often discarded from M before computing M^{-1}.

Components in preconditioned GMRES:
  J(F)(u^k)v products are approximated via finite differences.
  M is an approximate Jacobian computed via finite differences with coloring.
  M^{-1} is applied efficiently by dumping some values in M. Note that inverting the result does not recover M, because some values are lost.

Preconditioners

For example, consider a simple Schwarz preconditioner called block Jacobi on 2 processors. Each processor retains only the M values which it owns, and ignores the rest. The blocks of M are inverted by LU:

  M = [ M_1  M_2 ]        M^{-1} ≈ [ M_1^{-1}     0     ]
      [ M_3  M_4 ]                 [    0      M_4^{-1} ]

Even though the full matrix M may have been computed, some terms were dumped to speed up the computation and application of M^{-1}.

Preconditioners

To allow for a speedy solve, the preconditioner has to be tailored to the physics of the system:
  1. If the system is well-conditioned, ILU may be used in place of LU.
  2. If the system is elliptic, Multigrid will be effective.
  3. If you need a large system solved, Schwarz methods will allow you to reduce communication between processors.
  4. When the system is very ill-conditioned, sometimes all you can use is LU.
The more you know about the system, the better your preconditioner can be...

Example of preconditioning

The neutral terms make the system so ill-conditioned that the LU preconditioner needs to be used.
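A serial NumPy/SciPy sketch of the 2-block Jacobi application above (the matrix and split point are illustrative assumptions; on 2 processors each block solve would happen locally with no communication):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def block_jacobi_setup(M, split):
        # Keep only the diagonal blocks M_1 and M_4; the coupling blocks M_2
        # and M_3 are dropped entirely, and each kept block is LU-factored once
        return [(lo, hi, lu_factor(M[lo:hi, lo:hi]))
                for lo, hi in ((0, split), (split, M.shape[0]))]

    def block_jacobi_apply(factors, r):
        # Apply the approximate inverse: independent local solves per block
        z = np.empty_like(r)
        for lo, hi, blk in factors:
            z[lo:hi] = lu_solve(blk, r[lo:hi])
        return z

    # Assumed toy 4x4 matrix, split into two 2x2 diagonal blocks
    M = np.array([[4.0, 1.0, 0.1, 0.0],
                  [1.0, 3.0, 0.0, 0.1],
                  [0.1, 0.0, 5.0, 1.0],
                  [0.0, 0.1, 1.0, 4.0]])
    pc = block_jacobi_setup(M, split=2)
    print(block_jacobi_apply(pc, np.ones(4)))   # ~ M^{-1} r, coupling ignored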

Example of preconditioning

The LU preconditioner shows poor scalability. What can we do? What if we used a targeted approach: solving the ill-conditioned neutral velocity terms with LU, the elliptic neutral density terms with Multigrid, and the well-conditioned plasma terms with a Schwarz method?

Example of preconditioning

By targeting the preconditioning, the solver can be sped up significantly because unnecessary work is removed from the process.

Conclusion

Today we have gone over techniques to make Newton's method a practical solver for nonlinear systems F(u) = 0:
  Line search is a common approach to allow for bad initial guesses.
  Iterative solvers may be used to find Newton directions.
  Jacobian-vector products can be approximated via finite differences.
  A preconditioning matrix can be computed with graph coloring.
  Targeting your preconditioner to your system can speed it up significantly.

Other cool stuff

There are other things which may be important in speeding up your nonlinear solver, including:
  Jacobian lagging - Recompute the preconditioner less frequently, since the matrix-vector products are independent of the matrix M.
  Variable linear tolerance (the Eisenstat-Walker trick) - Some of your linear solves can be crummy and you can still reach the solution; see the sketch after this section.
  Nonlinear preconditioning - Is there an F̃ which you can apply as F̃(F(u)) = 0 to make your system easier to solve?
  High order finite differences - Will more accurate Jacobian-vector products speed the solution?

Other bad stuff

There are problems I didn't talk about today:
  Jacobian coloring - How does your choice of coloring hurt the accuracy of the finite difference approximation?
  Line search - Can this trap you in a local minimum?
  Preconditioning - How do I pick a good preconditioner? Note: this is the main impediment for people not using implicit methods.
  Storage - Newton-Krylov-Schwarz can demand a lot of memory that simpler nonlinear schemes don't demand.
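As referenced above, a sketch of the Eisenstat-Walker variable linear tolerance ("choice 2" forcing term); the constants and the safeguard below are the commonly cited defaults, stated from memory rather than taken from the talk:

    def eisenstat_walker(normF, normF_prev, eta_prev,
                         gamma=0.9, alpha=2.0, eta_max=0.9):
        # Linear solve tolerance eta_k tracks the observed nonlinear residual
        # reduction: loose, cheap solves early; tight solves near the solution
        eta = gamma * (normF / normF_prev) ** alpha
        if gamma * eta_prev ** alpha > 0.1:   # safeguard against a sudden drop
            eta = max(eta, gamma * eta_prev ** alpha)
        return min(eta, eta_max)

    # Example: the nonlinear residual halved over the last Newton step
    print(eisenstat_walker(0.5, 1.0, eta_prev=0.1))   # 0.9 * 0.25 = 0.225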
