Matrix Secant Methods
Equation Solving: $g(x) = 0$, where $g : \mathbb{R}^n \to \mathbb{R}^n$.
Newton-Like Iterations: $x_{k+1} := x_k - J_k g(x_k)$, where $J_k \approx g'(x_k)^{-1}$.

Thousands of years ago the Babylonians used the secant method in 1D:
$$J_k = \frac{x_k - x_{k-1}}{g(x_k) - g(x_{k-1})}.$$

This gives the error
$$g'(x_k)^{-1} - J_k = \frac{g(x_{k-1}) - [g(x_k) + g'(x_k)(x_{k-1} - x_k)]}{g'(x_k)\,[g(x_{k-1}) - g(x_k)]}.$$

If $g'(\bar{x}) \neq 0$, then there is an $\alpha > 0$ such that
$$\left| g'(x_k)^{-1} - J_k \right| \le \frac{(L/2)\,|x_{k-1} - x_k|^2}{\alpha\,|g'(x_k)|\,|x_{k-1} - x_k|} \le K\,|x_{k-1} - x_k|$$
by the Quadratic Bound Lemma, so $x_k \to \bar{x}$ at a 2-step quadratic rate.
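As a concrete illustration (not from the original slides), here is a minimal sketch of the 1D secant iteration; the test function, starting points, and tolerance are assumptions chosen for the example.

```python
def secant(g, x0, x1, tol=1e-12, max_iter=50):
    """1D secant method: J_k = (x_k - x_{k-1}) / (g(x_k) - g(x_{k-1})),
    then x_{k+1} = x_k - J_k * g(x_k)."""
    g0, g1 = g(x0), g(x1)
    for _ in range(max_iter):
        J = (x1 - x0) / (g1 - g0)  # secant approximation to g'(x_k)^{-1}
        x0, g0 = x1, g1            # shift the two-point window
        x1 = x1 - J * g1           # Newton-like step with J_k
        g1 = g(x1)
        if abs(g1) < tol:
            break
    return x1

# Illustrative use: the Babylonian problem of computing a square root,
# here sqrt(2) as the positive solution of g(x) = x^2 - 2 = 0.
print(secant(lambda x: x * x - 2.0, 1.0, 2.0))  # ~1.4142135623730951
```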
Can we apply the secant method in dimensions higher than 1? The 1D formula
$$J_k = \frac{x_k - x_{k-1}}{g(x_k) - g(x_{k-1})}$$
does not carry over directly, since we cannot divide by the vector $g(x_k) - g(x_{k-1})$. Multiplying on the right-hand side by $g(x_k) - g(x_{k-1})$ gives
$$J_k \left( g(x_k) - g(x_{k-1}) \right) = x_k - x_{k-1}.$$
This is called the matrix secant equation (MSE), or quasi-Newton equation. Here the matrix $J_k$ is unknown ($n^2$ unknowns) and satisfies the $n$ linear equations of the MSE. There are not enough equations to nail down all of the entries of $J_k$, so further conditions are required. What should they be?
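The underdetermination is easy to see numerically. Below is a small check (the vectors are made up for illustration): for $n = 2$, two different matrices both satisfy the MSE for the same data.

```python
import numpy as np

# Hypothetical data: s = x_k - x_{k-1} and y = g(x_k) - g(x_{k-1}).
s = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

# Two different matrices, both satisfying the MSE  J y = s:
J1 = np.outer(s, y) / (y @ y)               # a rank-one choice
J2 = J1 + np.outer([1.0, 1.0], [1.0, 3.0])  # add C with C y = 0
print(J1 @ y, J2 @ y)                       # both equal s = [1. 2.]
```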
To better understand what further conditions on $J_k$ are sensible, we revert to discussing the matrices $B_k = J_k^{-1}$, so the MSE becomes
$$B_k (x_k - x_{k-1}) = g(x_k) - g(x_{k-1}).$$

Key Idea: Think of the $B_k$'s as part of an iteration scheme $\{x_k, B_k\}$. As the iteration proceeds, it is hoped that $B_k$ becomes a better approximation to $g'(\bar{x})$. With this in mind, we assume that, in the long run, $B_k$ is already close to $g'(\bar{x})$, so $B_{k+1}$ should not differ from $B_k$ by too much, i.e. $B_{k+1}$ should be close to $B_k$.
What do we mean by "close" in this context? Should $\|B_k - B_{k+1}\|$ be small?

We take "close" to mean algebraically close, in the sense that $B_{k+1}$ should be easy to compute from $B_k$ and yet satisfy the MSE.

Let's start by assuming that $B_{k+1}$ is a rank one update to $B_k$. That is, there exist $u, v \in \mathbb{R}^n$ such that
$$B_{k+1} = B_k + u v^T.$$
Set $s_k := x_{k+1} - x_k$ and $y_k := g(x_{k+1}) - g(x_k)$. The MSE becomes $B_{k+1} s_k = y_k$.

Multiplying the rank one update by $s_k$ gives
$$y_k = B_{k+1} s_k = B_k s_k + u v^T s_k.$$
Hence, if $v^T s_k \neq 0$, we obtain
$$u = \frac{y_k - B_k s_k}{v^T s_k}
\quad\text{and}\quad
B_{k+1} = B_k + \frac{(y_k - B_k s_k)\, v^T}{v^T s_k}.$$
This formula determines a whole class of rank one updates that satisfy the MSE: choose any $v \in \mathbb{R}^n$ with $v^T s_k \neq 0$. One choice is $v = s_k$, giving
$$B_{k+1} = B_k + \frac{(y_k - B_k s_k)\, s_k^T}{s_k^T s_k}.$$
This is known as Broyden's update. It turns out that the Broyden update is also analytically close to $B_k$.
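A quick numerical sanity check (with made-up data) that Broyden's update really does satisfy the MSE $B_{k+1} s_k = y_k$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))  # stand-in for the current B_k
s = rng.standard_normal(n)       # stand-in for s_k
y = rng.standard_normal(n)       # stand-in for y_k

# Broyden's update: B_{k+1} = B_k + (y_k - B_k s_k) s_k^T / (s_k^T s_k)
B_next = B + np.outer(y - B @ s, s) / (s @ s)
print(np.allclose(B_next @ s, y))  # True: the MSE holds
```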
Lemma. Let $A \in \mathbb{R}^{n \times n}$ and $s, y \in \mathbb{R}^n$ with $s \neq 0$. Then for any matrix norms $\|\cdot\|$ and $\|\cdot\|'$ such that $\|AB\| \le \|A\|\,\|B\|'$ and $\left\| \frac{v v^T}{v^T v} \right\|' \le 1$, a solution to
$$\min_{B \in \mathbb{R}^{n \times n}} \left\{ \|B - A\| : Bs = y \right\}$$
is
$$A_+ = A + \frac{(y - As)\, s^T}{s^T s}.$$
In particular, $A_+$ solves the given optimization problem when $\|\cdot\|$ is the $\ell_2$ matrix norm, and $A_+$ is the unique solution when $\|\cdot\|$ is the Frobenius norm.
Proof: Let $B \in \{B \in \mathbb{R}^{n \times n} : Bs = y\}$. Since $Bs = y$, we have $y - As = (B - A)s$, and therefore
$$\|A_+ - A\| = \left\| \frac{(y - As)\, s^T}{s^T s} \right\| = \left\| (B - A)\, \frac{s s^T}{s^T s} \right\| \le \|B - A\| \left\| \frac{s s^T}{s^T s} \right\|' \le \|B - A\|.$$
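The minimal-distance property can also be checked numerically. The sketch below (an illustration, not part of the proof) samples feasible matrices $B = A_+ + C$ with $Cs = 0$ and confirms that none is closer to $A$ in the Frobenius norm.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
s = rng.standard_normal(n)
y = rng.standard_normal(n)

A_plus = A + np.outer(y - A @ s, s) / (s @ s)  # the update from the lemma

# Every feasible B has the form A_+ + C with C s = 0; build such C's
# by right-multiplying random matrices with a projector that kills s.
P = np.eye(n) - np.outer(s, s) / (s @ s)       # P s = 0
for _ in range(5):
    B = A_plus + rng.standard_normal((n, n)) @ P
    assert np.allclose(B @ s, y)               # B is feasible
    assert np.linalg.norm(A_plus - A) <= np.linalg.norm(B - A) + 1e-12
print("A_+ is closest to A among the sampled feasible matrices")
```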
Broyden's Method

Algorithm: Broyden's Method
Initialization: $x_0 \in \mathbb{R}^n$, $B_0 \in \mathbb{R}^{n \times n}$.
Having $(x_k, B_k)$, compute $(x_{k+1}, B_{k+1})$ as follows:
Solve $B_k s_k = -g(x_k)$ for $s_k$ and set
$$x_{k+1} := x_k + s_k, \qquad
y_k := g(x_{k+1}) - g(x_k), \qquad
B_{k+1} := B_k + \frac{(y_k - B_k s_k)\, s_k^T}{s_k^T s_k}.$$
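A minimal runnable sketch of this algorithm (the test system, the choice $B_0 = I$, and the stopping rule are assumptions for illustration):

```python
import numpy as np

def broyden(g, x0, B0=None, tol=1e-10, max_iter=100):
    """Broyden's method: solve B_k s_k = -g(x_k), step, rank-one update."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x)) if B0 is None else B0.copy()
    gx = g(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -gx)                 # B_k s_k = -g(x_k)
        x_new = x + s                               # x_{k+1} = x_k + s_k
        g_new = g(x_new)
        y = g_new - gx                              # y_k
        B = B + np.outer(y - B @ s, s) / (s @ s)    # Broyden's update
        x, gx = x_new, g_new
        if np.linalg.norm(gx) < tol:
            break
    return x

# Illustrative system: x1^2 + x2^2 = 4 and x1*x2 = 1.
g = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] * x[1] - 1.0])
print(broyden(g, np.array([2.0, 0.5])))
```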
Sherman-Morrison-Woodbury Formula for Matrix Inversion

Suppose $A \in \mathbb{R}^{n \times n}$ and $U, V \in \mathbb{R}^{n \times k}$ are such that both $A^{-1}$ and $(I + V^T A^{-1} U)^{-1}$ exist. Then
$$(A + U V^T)^{-1} = A^{-1} - A^{-1} U \left( I + V^T A^{-1} U \right)^{-1} V^T A^{-1}.$$
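A quick numerical verification of the formula with random (made-up) data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 5, 2
A = rng.standard_normal((n, n)) + 5.0 * np.eye(n)  # comfortably invertible
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))

Ainv = np.linalg.inv(A)
middle = np.linalg.inv(np.eye(k) + V.T @ Ainv @ U)
rhs = Ainv - Ainv @ U @ middle @ V.T @ Ainv  # Sherman-Morrison-Woodbury
lhs = np.linalg.inv(A + U @ V.T)             # direct inversion
print(np.allclose(lhs, rhs))                 # True
```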
Inverse Broyden Update

If $B_k^{-1} = J_k$ exists and $s_k^T J_k y_k = s_k^T B_k^{-1} y_k \neq 0$, then
$$J_{k+1} = \left[ B_k + \frac{(y_k - B_k s_k)\, s_k^T}{s_k^T s_k} \right]^{-1}
= B_k^{-1} + \frac{(s_k - B_k^{-1} y_k)\, s_k^T B_k^{-1}}{s_k^T B_k^{-1} y_k}
= J_k + \frac{(s_k - J_k y_k)\, s_k^T J_k}{s_k^T J_k y_k}.$$
This follows from the Sherman-Morrison-Woodbury formula applied to the rank one update with $u = (y_k - B_k s_k)/s_k^T s_k$ and $v = s_k$.

Under suitable hypotheses, $x_k \to \bar{x}$ superlinearly.
Broyden Updates

Algorithm: Broyden's Method (Inverse Updating)
Initialization: $x_0 \in \mathbb{R}^n$, $J_0 \in \mathbb{R}^{n \times n}$.
Having $(x_k, J_k)$, compute $(x_{k+1}, J_{k+1})$ as follows:
$$s_k := -J_k g(x_k), \qquad
x_{k+1} := x_k + s_k, \qquad
y_k := g(x_{k+1}) - g(x_k), \qquad
J_{k+1} := J_k + \frac{(s_k - J_k y_k)\, s_k^T J_k}{s_k^T J_k y_k}.$$
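A runnable sketch of the inverse-updating variant, which replaces the linear solve with matrix-vector products (the test system and the choice $J_0 = I$ are the same illustrative assumptions as before):

```python
import numpy as np

def broyden_inverse(g, x0, J0=None, tol=1e-10, max_iter=100):
    """Broyden's method, inverse updating: maintain J_k ~ g'(x_k)^{-1}."""
    x = np.asarray(x0, dtype=float)
    J = np.eye(len(x)) if J0 is None else J0.copy()
    gx = g(x)
    for _ in range(max_iter):
        s = -J @ gx                                  # s_k = -J_k g(x_k)
        x_new = x + s
        g_new = g(x_new)
        y = g_new - gx                               # y_k
        Jy = J @ y
        J = J + np.outer(s - Jy, s @ J) / (s @ Jy)   # inverse Broyden update
        x, gx = x_new, g_new
        if np.linalg.norm(gx) < tol:
            break
    return x

# Same illustrative system as before.
g = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] * x[1] - 1.0])
print(broyden_inverse(g, np.array([2.0, 0.5])))
```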