Handout on Newton's Method for Systems


The following summarizes the main points of our class discussion of Newton's method for approximately solving a system of nonlinear equations $F(x) = 0$, $F : \mathbb{R}^n \to \mathbb{R}^n$.

Conventions: Notation for the nonlinear system is

$$
x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}, \qquad
F(x) = \begin{pmatrix} f_1(x) \\ \vdots \\ f_n(x) \end{pmatrix}, \qquad
F'(x) = \begin{pmatrix}
\partial f_1(x)/\partial x_1 & \cdots & \partial f_1(x)/\partial x_n \\
\vdots & & \vdots \\
\partial f_n(x)/\partial x_1 & \cdots & \partial f_n(x)/\partial x_n
\end{pmatrix}.
$$

Subscripts are used to denote vector components and matrix entries. Superscripts are used to denote members of sequences, e.g., $x^{(0)}$ is the initial member of $\{x^{(k)}\}$. The norm $\|\cdot\|$ is an arbitrary norm of interest. The phrase "$x$ is sufficiently near $x_*$" means that $\|x - x_*\|$ is sufficiently small. Similarly, "$x$ is near $x_*$" means that $\|x - x_*\|$ is appropriately small.

Newton's method. The basic method is

Newton's Method: Given an initial $x$,
    Iterate: $x \leftarrow x - F'(x)^{-1} F(x)$.

A more appropriate framework for practical implementation is

Newton's Method: Given an initial $x$, evaluate $F(x)$.
    Iterate:
        Evaluate $F'(x)$ and solve $F'(x)\,s = -F(x)$.
        Update $x \leftarrow x + s$ and evaluate $F(x)$.

The following is our basic local convergence theorem for Newton's method.

Theorem 1: Suppose that $F$ is continuously differentiable near $x_* \in \mathbb{R}^n$ such that $F(x_*) = 0$ and $F'(x_*)$ is non-singular. Then whenever $x^{(0)}$ is sufficiently near $x_*$, the Newton iterates $\{x^{(k)}\}$ converge to $x_*$ superlinearly, i.e.,

$$\|x^{(k+1)} - x_*\| \le \beta_k\,\|x^{(k)} - x_*\|, \qquad k = 0, 1, \ldots,$$

where $\beta_k \to 0$. If $F'$ also satisfies an inequality

$$\|F'(x) - F'(x_*)\| \le L\,\|x - x_*\| \tag{1}$$

for $x$ near $x_*$, then the convergence is quadratic, i.e.,

$$\|x^{(k+1)} - x_*\| \le \beta\,\|x^{(k)} - x_*\|^2, \qquad k = 0, 1, \ldots,$$

for a constant $\beta$ independent of $k$.

Remark: The property (1) is called Lipschitz continuity of $F'$ at $x_*$. A proof of local quadratic convergence assuming (1) is given in [1]. With only a little effort, this can be extended to a proof of local superlinear convergence assuming only continuity of $F'$ near $x_*$.
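To fix ideas, here is a minimal sketch of the practical framework above in Python/NumPy. The function names, the stopping test, and the use of a dense direct solve for $F'(x)\,s = -F(x)$ are illustrative choices, not part of the handout.

```python
import numpy as np

def newton(F, J, x, tol=1e-10, max_iter=50):
    """Basic Newton iteration for F(x) = 0 (illustrative sketch).

    F : callable returning F(x) as a length-n array
    J : callable returning the Jacobian F'(x) as an n-by-n array
    x : initial approximate solution
    """
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= tol:   # illustrative stopping test
            break
        s = np.linalg.solve(J(x), -Fx)  # solve F'(x) s = -F(x)
        x = x + s                       # update x <- x + s
        Fx = F(x)                       # evaluate F at the new x
    return x
```

By Theorem 1, once such iterates enter a sufficiently small neighborhood of an $x_*$ with $F'(x_*)$ non-singular, the residual norms collapse at least superlinearly, and quadratically under (1).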

Newton's method with backtracking. Here we augment Newton's method with a globalization procedure that tests each step for adequate progress toward a solution and, if necessary, modifies it to obtain a step that does give adequate progress. The globalization considered here is backtracking: at the current approximate solution $x$, the procedure begins with the Newton step $s^N = -F'(x)^{-1} F(x)$ and shortens it, if necessary, to obtain an acceptable step $s = \lambda s^N$ for some $\lambda \in (0, 1]$.

Our test for adequate progress is based on the actual reduction in $\|F\|$ and the predicted reduction in $\|F\|$, given, respectively, by

$$\mathrm{ared} = \|F(x)\| - \|F(x+s)\|, \qquad \mathrm{pred} = \|F(x)\| - \|F(x) + F'(x)\,s\|.$$

We accept a step $s$ from the current approximate solution $x$ if

$$\mathrm{ared} \ge t\,\mathrm{pred} > 0 \tag{2}$$

for a prescribed $t \in (0, 1)$. The following proposition confirms that a sufficiently short step obtained by backtracking will be acceptable.

Proposition 2: If $F$ is differentiable at $x$ and $F(x) \ne 0$, then a step $s = \lambda s^N$ satisfies (2) for all sufficiently small $\lambda > 0$.

Proof. Note that if $s = \lambda s^N$ and $0 < \lambda \le 1$, then

$$
\begin{aligned}
\mathrm{pred} &= \|F(x)\| - \|F(x) + F'(x)\,s\| = \|F(x)\| - \|F(x) + F'(x)\,(\lambda s^N)\| \\
&= \|F(x)\| - \|(1-\lambda)\,F(x) + \lambda\,[F(x) + F'(x)\,s^N]\| \\
&= \|F(x)\| - (1-\lambda)\,\|F(x)\| \\
&= \lambda\,\|F(x)\|.
\end{aligned} \tag{3}
$$

To justify the third line in (3), we note that $F(x) + F'(x)\,s^N = 0$ since $s^N = -F'(x)^{-1} F(x)$, and that $\|(1-\lambda)\,F(x)\| = (1-\lambda)\,\|F(x)\|$ since $1 - \lambda \ge 0$. Then

$$
\begin{aligned}
\mathrm{ared} &= \|F(x)\| - \|F(x+s)\| = \|F(x)\| - \|F(x) + F'(x)\,s + o(\|s\|)\| \\
&\ge \|F(x)\| - \|F(x) + F'(x)\,s\| - o(\|s\|) \\
&= \mathrm{pred} - o(\|\lambda s^N\|) = \lambda\,\|F(x)\| - o(\lambda).
\end{aligned}
$$

It follows that if $F(x) \ne 0$ and $t \in (0, 1)$, then $(1-t)\,\lambda\,\|F(x)\|$ dominates the $o(\lambda)$ term for small $\lambda$, so $\mathrm{ared} \ge t\,(\lambda\,\|F(x)\|) = t\,\mathrm{pred}$ for all sufficiently small $\lambda > 0$. $\square$
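In code, test (2) is only a few lines. The sketch below assumes the Euclidean norm and uses helper names of our own choosing; `Fx` and `Jx` are the precomputed values $F(x)$ and $F'(x)$.

```python
import numpy as np

def step_acceptable(F, x, Fx, Jx, s, t=1e-4):
    """Acceptance test (2): ared >= t * pred > 0, Euclidean norm."""
    ared = np.linalg.norm(Fx) - np.linalg.norm(F(x + s))     # actual reduction
    pred = np.linalg.norm(Fx) - np.linalg.norm(Fx + Jx @ s)  # predicted reduction
    # Python's chained comparison checks both ared >= t*pred and t*pred > 0.
    return ared >= t * pred > 0
```

For a step $s = \lambda s^N$, (3) gives $\mathrm{pred} = \lambda\,\|F(x)\|$, so the positivity requirement holds automatically whenever $F(x) \ne 0$.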

Our first method is the following somewhat general formulation.

Newton's Method with Backtracking: Given $t \in (0, 1)$, $0 < \theta_{\min} < \theta_{\max} < 1$, and an initial $x$, evaluate $F(x)$.
    Iterate:
        Evaluate $F'(x)$ and solve $F'(x)\,s = -F(x)$ for the Newton step $s = s^N$.
        Evaluate $F(x+s)$.
        While $\mathrm{ared} < t\,\mathrm{pred}$ do:
            Choose $\theta \in [\theta_{\min}, \theta_{\max}]$.
            Update $s \leftarrow \theta s$ and re-evaluate $F(x+s)$.
        Update $x \leftarrow x + s$ and $F(x) \leftarrow F(x+s)$.

The backtracking globalization is implemented in the while loop. At each pass through the loop, the step $s$ is shortened by a factor $\theta \in [\theta_{\min}, \theta_{\max}]$, where $0 < \theta_{\min} < \theta_{\max} < 1$. This is known as safeguarded backtracking. The requirement $\theta \le \theta_{\max} < 1$ ensures that at each pass the step length is reduced to at most the fraction $\theta_{\max}$ of its previous value, and it follows from Proposition 2 that an acceptable step will be determined after at most a finite number of passes through the loop. The requirement $0 < \theta_{\min} \le \theta$ ensures that step lengths will not be reduced so much that the iterates cannot converge to a solution. (A code sketch of this formulation follows the convergence discussion below.)

The following is the global convergence result for the method.

Theorem 3 [2, Cor. 6.2]: Suppose that $F$ is continuously differentiable and that $\{x^{(k)}\}$ is a sequence of iterates produced by the method. If $x_*$ is a limit point¹ of $\{x^{(k)}\}$ such that $F'(x_*)$ is non-singular, then $F(x_*) = 0$, $x^{(k)} \to x_*$, and

$$s^{(k)} \equiv x^{(k+1)} - x^{(k)} = -F'(x^{(k)})^{-1} F(x^{(k)})$$

for all sufficiently large $k$.

Note that the theorem does not guarantee that the iterates will always converge to a solution. (Indeed, there can be no such guarantee; some problems have no solutions!) Rather, it only asserts that the iterates will behave about as desirably as the function $F$ will allow. Another way of stating the result, which may offer additional insight, is that exactly one of the following must hold:

(i) $\|x^{(k)}\| \to \infty$;

(ii) $\{x^{(k)}\}$ has one or more limit points, and $F'$ is singular at each of them;

(iii) $\{x^{(k)}\}$ converges to a solution $x_*$ such that $F'(x_*)$ is non-singular, and the iterates are ultimately those of Newton's method.

In the case of alternative (i), the iterates diverge. In the case of (ii), the iterates may or may not converge, depending on additional properties of $F$. Alternative (iii) is the desirable outcome; in this case the iterates converge to a solution, ultimately with the speed of Newton iterates (at least superlinearly and typically quadratically).

¹ We say $x_*$ is a limit point of $\{x^{(k)}\}$ if, for every $\delta > 0$, there are infinitely many $x^{(k)}$ such that $\|x^{(k)} - x_*\| < \delta$. Note that if $\{x^{(k)}\}$ is bounded, i.e., there exists an $M$ such that $\|x^{(k)}\| \le M$ for all $k$, then $\{x^{(k)}\}$ converges to $x_*$ if and only if $x_*$ is the only limit point of $\{x^{(k)}\}$.
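Here is the promised sketch of the general formulation. In it the choice $\theta = 1/2$ is simply fixed (any rule keeping $\theta \in [\theta_{\min}, \theta_{\max}]$ is admissible); the Euclidean norm and all names are our own illustrative choices.

```python
import numpy as np

def newton_backtracking(F, J, x, t=1e-4, theta=0.5, tol=1e-10, max_iter=50):
    """Newton's method with safeguarded backtracking (illustrative sketch)."""
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= tol:
            break
        Jx = J(x)
        s = np.linalg.solve(Jx, -Fx)   # full Newton step s^N
        Fxs = F(x + s)
        # While ared < t * pred, shorten the step: s <- theta * s.
        # Proposition 2 guarantees this loop terminates when F(x) != 0.
        while (np.linalg.norm(Fx) - np.linalg.norm(Fxs)
               < t * (np.linalg.norm(Fx) - np.linalg.norm(Fx + Jx @ s))):
            s = theta * s
            Fxs = F(x + s)
        x, Fx = x + s, Fxs             # accept the step
    return x
```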

We now work toward a more refined version of the method. With $s = \lambda s^N$ for $\lambda \in (0, 1]$ and with $\mathrm{pred} = \lambda\,\|F(x)\|$ by (3), the condition $\mathrm{ared} < t\,\mathrm{pred}$ can be simplified to

$$\|F(x+s)\| \,/\, \|F(x)\| > 1 - t\lambda.$$

Also, we can make a sophisticated choice of each $\theta \in [\theta_{\min}, \theta_{\max}]$ in an important (and common) special case: that in which the norm is an inner-product norm, i.e., $\|v\| = \langle v, v\rangle^{1/2}$ for all $v \in \mathbb{R}^n$, where $\langle \cdot, \cdot \rangle$ is an inner product on $\mathbb{R}^n$.² Then, in the while loop, we can choose each $\theta$ to minimize over $[\theta_{\min}, \theta_{\max}]$ a quadratic $p(\theta) = a + b\theta + c\theta^2$ that satisfies

$$p(0) = \|F(x)\|^2, \qquad p(1) = \|F(x+s)\|^2, \qquad p'(0) = \frac{d}{d\theta}\left(\|F(x+\theta s)\|^2\right)\Big|_{\theta=0} = 2\,\langle F(x),\, F'(x)\,s\rangle.$$

The quadratic satisfying these conditions is

$$p(\theta) = \|F(x)\|^2 + 2\,\langle F(x),\, F'(x)\,s\rangle\,\theta + \left\{\|F(x+s)\|^2 - \|F(x)\|^2 - 2\,\langle F(x),\, F'(x)\,s\rangle\right\}\theta^2.$$

Writing $s = \lambda s^N$ and noting $F'(x)\,s = \lambda\,F'(x)\,s^N = -\lambda\,F(x)$, we have $\langle F(x),\, F'(x)\,s\rangle = -\lambda\,\|F(x)\|^2$ and

$$p(\theta) = \|F(x)\|^2\left[\,1 - 2\lambda\theta + \left\{\|F(x+s)\|^2/\|F(x)\|^2 - 1 + 2\lambda\right\}\theta^2\,\right].$$

We have that $p'(\theta) = 0$ if and only if

$$\theta = \lambda \,/\, \left\{\|F(x+s)\|^2/\|F(x)\|^2 - 1 + 2\lambda\right\},$$

and this $\theta$ minimizes $p$ if

$$p''(\theta) = 2\,\|F(x)\|^2 \left\{\|F(x+s)\|^2/\|F(x)\|^2 - 1 + 2\lambda\right\} > 0.$$

These observations lead to the following more refined method.

² An inner product on $\mathbb{R}^n$ is a function $\langle \cdot, \cdot \rangle$ from pairs of vectors $(u, v)$ to scalars in $\mathbb{R}$ that satisfies (a) $\langle v, v\rangle \ge 0$ for all $v \in \mathbb{R}^n$, with $\langle v, v\rangle = 0$ if and only if $v = 0$; and, for all $u \in \mathbb{R}^n$ and $v \in \mathbb{R}^n$, (b) $\langle u, v\rangle = \langle v, u\rangle$, (c) $\langle \alpha u, v\rangle = \alpha\,\langle u, v\rangle$ for all $\alpha \in \mathbb{R}$, and (d) $\langle u + v, w\rangle = \langle u, w\rangle + \langle v, w\rangle$ for all $w \in \mathbb{R}^n$. The most familiar example is the Euclidean inner product (the usual dot product), given by $\langle u, v\rangle = \sum_{i=1}^{n} u_i v_i$.
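Before the formal statement below, note that this choice of $\theta$ packages into a small helper. The sketch assumes the Euclidean norm, with $\rho = \|F(x+s)\|/\|F(x)\|$; the names are ours.

```python
def choose_theta(rho, lam, theta_min=0.1, theta_max=0.5):
    """Minimize p(theta) = ||F(x)||^2 (1 - 2*lam*theta + delta*theta^2)
    over [theta_min, theta_max], where delta = rho**2 - 1 + 2*lam."""
    delta = rho**2 - 1.0 + 2.0 * lam
    if delta <= 0.0:
        # p''(theta) <= 0: p is decreasing for theta >= 0, so the
        # minimum over the interval is at theta_max.
        return theta_max
    theta = lam / delta  # unconstrained minimizer, where p'(theta) = 0
    return min(max(theta, theta_min), theta_max)  # safeguard into the interval
```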

Newton's Method with Backtracking (refined form): Given $t \in (0, 1)$, $0 < \theta_{\min} < \theta_{\max} < 1$, and an initial $x$, evaluate $F(x)$.
    Iterate:
        Evaluate $F'(x)$ and solve $F'(x)\,s = -F(x)$ for the Newton step $s = s^N$.
        Evaluate $F(x+s)$ and set $\lambda = 1$.
        While $\rho \equiv \|F(x+s)\|/\|F(x)\| > 1 - t\lambda$ do:
            If $\delta \equiv \rho^2 - 1 + 2\lambda \le 0$, set $\theta = \theta_{\max}$.
            Else do:
                Set $\theta = \lambda/\delta$.
                If $\theta > \theta_{\max}$, $\theta \leftarrow \theta_{\max}$; if $\theta < \theta_{\min}$, $\theta \leftarrow \theta_{\min}$.
            Update $s \leftarrow \theta s$, $\lambda \leftarrow \theta\lambda$, and re-evaluate $F(x+s)$.
        Update $x \leftarrow x + s$ and $F(x) \leftarrow F(x+s)$.

Remarks: Common practical recommendations are to take $t = 10^{-4}$, $\theta_{\min} = 1/10$, and $\theta_{\max} = 1/2$. An additional refinement can be added to the backtracking, as follows: after the first step-length reduction in the while loop, there is enough information about $\|F(x+\theta s)\|$ to construct a cubic interpolating polynomial, and one can choose $\theta$ to minimize this cubic over $[\theta_{\min}, \theta_{\max}]$. See [1, Ch. 6] for details.

References.

1. J. E. Dennis, Jr., and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Classics in Applied Mathematics, SIAM, Philadelphia, 1996; originally published in the Series in Automatic Computation, Prentice-Hall, Englewood Cliffs, NJ.

2. S. C. Eisenstat and H. F. Walker, Globally convergent inexact Newton methods, SIAM J. Optimization, 4 (1994).
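To close, the pieces above assemble into an end-to-end sketch of the refined method. Everything here is illustrative rather than prescribed by the handout: the Euclidean norm, the names, the stopping test, and the small made-up test problem (the unit circle intersected with the line $x_1 = x_2$).

```python
import numpy as np

def newton_backtracking_refined(F, J, x, t=1e-4, theta_min=0.1,
                                theta_max=0.5, tol=1e-10, max_iter=50):
    """Refined backtracking: theta minimizes the quadratic model p."""
    Fx = F(x)
    for _ in range(max_iter):
        normFx = np.linalg.norm(Fx)
        if normFx <= tol:
            break
        s = np.linalg.solve(J(x), -Fx)       # full Newton step, lambda = 1
        Fxs, lam = F(x + s), 1.0
        while np.linalg.norm(Fxs) / normFx > 1.0 - t * lam:
            rho = np.linalg.norm(Fxs) / normFx
            delta = rho**2 - 1.0 + 2.0 * lam
            theta = (theta_max if delta <= 0.0
                     else min(max(lam / delta, theta_min), theta_max))
            s, lam = theta * s, theta * lam  # shorten the step and step length
            Fxs = F(x + s)
        x, Fx = x + s, Fxs
    return x

# Made-up test: unit circle intersected with the line x1 = x2;
# one solution is (1/sqrt(2), 1/sqrt(2)).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
print(newton_backtracking_refined(F, J, np.array([2.0, 0.5])))
```

On easy problems like this one the full Newton step is typically accepted at once; the backtracking loop engages only when the initial step fails the progress test.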
