Introduction to Nonlinear Optimization Paul J. Atzberger
Introduction

We shall discuss in these notes a brief introduction to nonlinear optimization concepts, with wide applicability in finance and other applied fields. The basic problem is to solve

$$\min_x f(x) \qquad (1)$$

where typically $x \in \mathbb{R}^n$, possibly subject to constraints. To numerically approximate the solution (if it exists), we shall construct a sequence $\{x_k\}_{k=1}^{\infty}$ such that $x_k \to x^*$, where $f(x^*) = \min_x f(x)$. There are many algorithms which attempt to construct such a sequence, with different approaches appropriate depending on the problem. A good reference on nonlinear optimization methods is Numerical Optimization by Nocedal and Wright. In these notes, we shall discuss primarily line search methods and handle any constraints that arise using penalty terms.

In line search algorithms the sequence $\{x_k\}_{k=1}^{\infty}$ is constructed iteratively, at each step choosing a search direction $p_k$ and attempting to minimize the objective function along the line (ray) in this direction. This essentially reduces the problem to a sequence of one-dimensional problems, where $x_k$ is given by the basic recurrence

$$x_{k+1} = x_k + \alpha_k p_k \qquad (2)$$

where the step-length $\alpha_k > 0$ must be estimated. The key to having a sequence $x_k$ which converges rapidly is to construct an algorithm which makes effective use of the information about $f(x)$ from the previous iterates to choose a good direction $p_k$ and step-length $\alpha_k$.

To ensure that progress can be made each iteration, a natural condition to impose on $p_k$ is that the function decrease, at least locally, in this direction. We call $p_k$ a descent direction if $\nabla f_k^T p_k < 0$. Typically, the descent direction has the form $p_k = -B_k^{-1} \nabla f_k$. In the case that $B_k$ is positive definite this ensures that $p_k$ is a descent direction, since $\nabla f_k^T p_k = -\nabla f_k^T B_k^{-1} \nabla f_k < 0$. Steepest descent sets $B_k = I$, the identity matrix, so that $p_k = -\nabla f_k$. Newton's method sets $B_k = \nabla^2 f_k$, the Hessian, so that $p_k = -(\nabla^2 f_k)^{-1} \nabla f_k$. It is important to note that $p_k$ is only ensured to be a descent direction if the Hessian is positive definite. The Hessian, however, is often expensive to compute numerically. Quasi-Newton methods construct a $B_k$ which approximates the Hessian by making use of the previous evaluations of $f$ and $\nabla f$; see Nocedal and Wright for a further discussion.

In designing a good numerical optimization method some care must be taken in choosing not only the direction $p_k$ but also the step-length $\alpha_k$, and this will usually depend on the function $f(x)$ being optimized. We shall assume that $p_k$ is some chosen descent direction and now discuss how to choose the step-length $\alpha_k$.
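Before turning to the step-length, the following short sketch (an illustration added here, not part of the original notes) computes the steepest descent and Newton directions for a simple test objective and checks the descent condition $\nabla f_k^T p_k < 0$; the test function and all names in it are assumptions made only for this example.

```python
import numpy as np

# Test objective f(x) = 0.5 x^T A x - b^T x + 0.25 ||x||^4 (chosen only for illustration)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])

def grad_f(x):
    return A @ x - b + (x @ x) * x

def hess_f(x):
    return A + 2.0 * np.outer(x, x) + (x @ x) * np.eye(x.size)

x = np.array([2.0, 1.0])
g = grad_f(x)

p_sd = -g                                    # steepest descent direction: B_k = I
p_newton = -np.linalg.solve(hess_f(x), g)    # Newton direction: B_k = Hessian

for name, p in [("steepest descent", p_sd), ("Newton", p_newton)]:
    print(name, "direction:", p, "descent condition g^T p < 0:", g @ p < 0)
```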
Wolfe Conditions: Choosing a Step-Length $\alpha_k$

Let $\phi(\alpha) = f(x_k + \alpha p_k)$. One choice of $\alpha_k$ which might seem ideal would be the global minimizer of $\phi(\alpha)$ over all $\alpha > 0$. However, it is typically computationally expensive to find the global optimum even of this one-dimensional problem. Since the ray along $p_k$ will not typically contain the minimizer of the full problem, multiple iterations will almost always be required. This suggests that in designing an algorithm there should be a balance between making progress in minimizing the objective function $f$ along each search direction $p_k$ and iterating over a number of different search directions. To ensure our algorithm converges we must be more mathematically precise about what we mean by making progress along each search direction.

The simplest criterion we might consider to determine if progress has been made along a particular search direction is that the function be reduced, $f(x_k + \alpha_k p_k) < f(x_k)$; however, this alone is not sufficient to ensure convergence to the minimizer $x^*$. For example, consider $f(x, y) = x^2 + y^2$ and suppose the algorithm produces $x_{k+1} = x_k + \alpha_k p_k$ with $\alpha_k = 1/2^{k+1}$ and $p_k = [-1, 0]^T$. Then $f(x_{k+1}) < f(x_k)$, but $x_{k+1} = x_0 - \sum_{j=0}^{k} 2^{-(j+1)} [1, 0]^T$, so the total displacement of the iterates is bounded by $1$. If the first component satisfies $[x_0]_1 > 2$ then $x_k \not\to 0$. In other words, to summarize, if the progress made each iteration is not sufficiently large, the sequence may converge prematurely to a non-optimal value.
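A small numerical illustration of this failure mode (a sketch added here; the step sizes $\alpha_k = 2^{-(k+1)}$ follow the example above): the objective decreases at every iteration, yet the iterates stall away from the minimizer at the origin.

```python
import numpy as np

f = lambda x: x[0] ** 2 + x[1] ** 2   # f(x, y) = x^2 + y^2, minimized at the origin

x = np.array([3.0, 1.0])              # first component larger than 2
p = np.array([-1.0, 0.0])             # fixed search direction

for k in range(30):
    alpha = 2.0 ** (-(k + 1))         # step lengths shrink too quickly
    x_new = x + alpha * p
    assert f(x_new) < f(x)            # simple decrease holds at every iteration ...
    x = x_new

print(x)                              # ... but the iterates approach (2, 1), not (0, 0)
```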
To ensure that sufficient progress is made each iteration of the line search we shall require the following two conditions to hold.

Wolfe Conditions:
(i) (sufficient decrease) $f(x_k + \alpha p_k) \leq f(x_k) + c_1 \alpha \nabla f_k^T p_k$, where $c_1 \in (0, 1)$.
(ii) (curvature condition) $\nabla f(x_k + \alpha p_k)^T p_k \geq c_2 \nabla f_k^T p_k$, where $c_2 \in (c_1, 1)$.

[Figure 1: Sufficient decrease condition -- the new point must lie below this line.]
[Figure 2: Curvature condition -- the new slope must be greater than this line.]

(See Figures 1 and 2 for a geometric interpretation.) We now prove that step lengths satisfying the Wolfe conditions exist.

Convergence of Line Search Optimization Methods

Lemma: Let $f : \mathbb{R}^n \to \mathbb{R}$ be continuously differentiable and let $p_k$ be a descent direction at $x_k$. If $f$ is bounded below along the ray $\{x_k + \alpha p_k \mid \alpha > 0\}$, then there exist intervals of step lengths $[\alpha^{(1)}, \alpha^{(2)}]$ on which the Wolfe conditions are satisfied.

Proof: Since $\phi(\alpha) = f(x_k + \alpha p_k)$ is bounded below for all $\alpha > 0$ and $0 < c_1 < 1$, the line $\ell(\alpha) = f(x_k) + \alpha c_1 \nabla f_k^T p_k$ must intersect the graph of $\phi(\alpha)$ at least once. Let $\alpha' > 0$ be the smallest such $\alpha$, that is,

$$f(x_k + \alpha' p_k) = f(x_k) + \alpha' c_1 \nabla f_k^T p_k.$$

The sufficient decrease condition (i) then clearly holds for all $\alpha < \alpha'$. By the differentiability of the function, we have from the mean value theorem that there exists $\alpha'' \in (0, \alpha')$ such that

$$f(x_k + \alpha' p_k) - f(x_k) = \alpha' \nabla f(x_k + \alpha'' p_k)^T p_k.$$

Substituting the expression above (the sufficient decrease equality) for $f(x_k + \alpha' p_k)$, we obtain

$$\nabla f(x_k + \alpha'' p_k)^T p_k = c_1 \nabla f_k^T p_k > c_2 \nabla f_k^T p_k,$$

since $c_1 < c_2$ and $\nabla f_k^T p_k < 0$. Thus $\alpha''$ satisfies the curvature condition (ii). Therefore, since $\alpha'' < \alpha'$, in a neighborhood of $\alpha''$ both Wolfe conditions (i) and (ii) hold simultaneously.
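As a quick illustration of conditions (i) and (ii), the sketch below (added here; the helper name and test problem are assumptions) checks whether a candidate step length satisfies both Wolfe conditions for a given descent direction.

```python
import numpy as np

def satisfies_wolfe(f, grad_f, x, p, alpha, c1=1e-4, c2=0.9):
    """Check the sufficient decrease and curvature conditions for a trial step alpha."""
    slope0 = grad_f(x) @ p                  # phi'(0) = grad f(x_k)^T p_k < 0 for a descent direction
    x_new = x + alpha * p
    sufficient_decrease = f(x_new) <= f(x) + c1 * alpha * slope0
    curvature = grad_f(x_new) @ p >= c2 * slope0
    return sufficient_decrease and curvature

# Example: f(x) = 0.5 ||x||^2 with the steepest descent direction
f = lambda x: 0.5 * x @ x
grad_f = lambda x: x
x = np.array([1.0, 2.0])
p = -grad_f(x)
print([(alpha, satisfies_wolfe(f, grad_f, x, p, alpha)) for alpha in (0.05, 1.0, 1.9, 2.5)])
```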
We now discuss the convergence of line search algorithms when the Wolfe conditions are satisfied.

Theorem (Zoutendijk's condition): Assume $\{x_k\}$ is generated by a line search algorithm satisfying the Wolfe conditions. Suppose that $f : \mathbb{R}^n \to \mathbb{R}$ is bounded below and continuously differentiable with Lipschitz continuous gradient, $\|\nabla f(x) - \nabla f(y)\| \leq L \|x - y\|$. Then

$$\sum_{k \geq 0} \cos^2(\theta_k) \, \|\nabla f_k\|^2 < \infty,$$

where $\cos(\theta_k) = \dfrac{-\nabla f_k^T p_k}{\|\nabla f_k\| \, \|p_k\|}$. We remark that $\theta_k$ is the angle between the steepest descent direction and the search direction $p_k$.

Proof: From the Wolfe conditions (i) and (ii) we have

$$(\nabla f_{k+1} - \nabla f_k)^T p_k \geq (c_2 - 1) \nabla f_k^T p_k.$$

From Lipschitz continuity we have

$$(\nabla f_{k+1} - \nabla f_k)^T p_k \leq \alpha_k L \|p_k\|^2.$$

By combining both relations we have

$$\alpha_k \geq \frac{c_2 - 1}{L} \, \frac{\nabla f_k^T p_k}{\|p_k\|^2}.$$

By substituting this into (i) we have

$$f_{k+1} \leq f_k - \frac{c_1 (1 - c_2)}{L} \, \frac{(\nabla f_k^T p_k)^2}{\|p_k\|^2}.$$

From the definition of $\cos(\theta_k)$ we have

$$f_{k+1} \leq f_k - c \cos^2(\theta_k) \|\nabla f_k\|^2, \quad \text{where } c = \frac{c_1 (1 - c_2)}{L}.$$

By summing over all indices less than or equal to $k$ we have

$$f_{k+1} \leq f_0 - c \sum_{j=0}^{k} \cos^2(\theta_j) \|\nabla f_j\|^2.$$

Since $f(x)$ is bounded below we must have

$$\sum_{j=0}^{\infty} \cos^2(\theta_j) \|\nabla f_j\|^2 < \infty.$$

Using the standard facts about convergence of series, we have $\cos^2(\theta_k) \|\nabla f_k\|^2 \to 0$ as $k \to \infty$.

This result, while appearing somewhat technical, is actually quite useful. For if we construct a line search method such that $\cos(\theta_k) \geq \delta > 0$, then the theorem above implies that $\lim_{k \to \infty} \|\nabla f_k\| = 0$. Thus if the line search sequence $\{x_k\}$ remains bounded there will be a limit point $x^*$ which is a critical point, $\nabla f(x^*) = 0$. Given the sufficient decrease condition this point will likely be a minimizer, although one must be careful to check this if no special structure is assumed about $f$.

To further describe the convergence, we shall say that a method converges in $f$ with rate $p$ if $f(x_{k+1}) - f(x^*) \leq c \, (f(x_k) - f(x^*))^p$. We say that a method converges in $x$ with rate $p$ if $\|x_{k+1} - x^*\| \leq c \, \|x_k - x^*\|^p$. We shall now discuss the convergence of two common optimization methods, namely, steepest descent and Newton's method.

Convergence of Steepest Descent

Steepest descent uses $p_k = -\nabla f_k$ at each step. Let us consider a problem which we can readily analyze, with an objective function of the form

$$f(x) = \tfrac{1}{2} x^T Q x - b^T x,$$

where $Q = Q^T$ is positive definite. The minimizer $x^*$ of this problem solves $Q x = b$. In this example, we can explicitly compute the step-length $\alpha_k$ which minimizes $\phi(\alpha)$ at each iteration. Writing $g_k = \nabla f_k = Q x_k - b$, we have

$$\phi(\alpha) = f(x_k - \alpha g_k) = \tfrac{1}{2} (x_k - \alpha g_k)^T Q (x_k - \alpha g_k) - b^T (x_k - \alpha g_k).$$

Setting $\phi'(\alpha) = -g_k^T \left( Q (x_k - \alpha g_k) - b \right) = 0$ and using $g_k = Q x_k - b$ gives

$$\alpha_k = \frac{g_k^T g_k}{g_k^T Q g_k} = \frac{\nabla f_k^T \nabla f_k}{\nabla f_k^T Q \nabla f_k},$$

so that the next iterate (when minimizing exactly in the line search) is

$$x_{k+1} = x_k - \left( \frac{\nabla f_k^T \nabla f_k}{\nabla f_k^T Q \nabla f_k} \right) \nabla f_k.$$

From the expression above it can be shown that

$$f(x_{k+1}) - f(x^*) = \left[ 1 - \frac{(\nabla f_k^T \nabla f_k)^2}{(\nabla f_k^T Q \nabla f_k)(\nabla f_k^T Q^{-1} \nabla f_k)} \right] \left( f(x_k) - f(x^*) \right).$$

This can be expressed in terms of the eigenvalues of $Q$ as

$$f(x_{k+1}) - f(x^*) \leq \left( \frac{\lambda_n - \lambda_1}{\lambda_n + \lambda_1} \right)^2 \left( f(x_k) - f(x^*) \right),$$

where $0 < \lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n$ are the eigenvalues of $Q$. (The proof is given elsewhere; see Luenberger 1984.) We remark that for nonlinear objective functions we can approximate the function near a minimizer using a truncation of the Taylor expansion at the quadratic term. In this case, the analysis above will be applicable up to this truncation error, with $Q$ set to the Hessian, $Q = \nabla^2 f$. From the analysis we find that steepest descent has a rate of convergence in $f$ of first order (a linear rate), in the ideal case.
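The following sketch (added here; the matrix $Q$ and starting point are assumptions for the example) runs steepest descent with the exact step length $\alpha_k = g_k^T g_k / g_k^T Q g_k$ on a small quadratic and prints the per-iteration error ratio, which stays below the bound $((\lambda_n - \lambda_1)/(\lambda_n + \lambda_1))^2$.

```python
import numpy as np

Q = np.diag([1.0, 10.0])                     # eigenvalues lambda_1 = 1, lambda_n = 10
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(Q, b)               # minimizer solves Q x = b
f = lambda x: 0.5 * x @ Q @ x - b @ x
bound = ((10.0 - 1.0) / (10.0 + 1.0)) ** 2   # ((lambda_n - lambda_1)/(lambda_n + lambda_1))^2

x = np.array([5.0, 2.0])
for k in range(10):
    g = Q @ x - b                            # gradient of the quadratic
    alpha = (g @ g) / (g @ Q @ g)            # exact minimizer along the ray x - alpha g
    x_new = x - alpha * g
    ratio = (f(x_new) - f(x_star)) / (f(x) - f(x_star))
    print(f"k={k}  ratio={ratio:.4f}  bound={bound:.4f}")
    x = x_new
```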
Convergence of Newton's Method

Let us now consider the case when the search direction is $p_k^{(N)} = -(\nabla^2 f_k)^{-1} \nabla f_k$. As we mentioned above, we must be a little careful, since the Hessian $\nabla^2 f_k$ may not be positive definite and the search direction may then fail to be a descent direction. For the example above with $Q$ positive definite, the search direction is $p_k^{(N)} = -Q^{-1} \nabla f_k$. Analysis similar to that done in the case of steepest descent can be carried out to show that the rate of convergence in $f$ is quadratic. In particular, in the case of a quadratic objective function it can be shown that for each iteration the optimal step length is $\alpha_k = 1$. Taking unit steps, we have

$$x_k + p_k^{(N)} - x^* = x_k - x^* - \nabla^2 f_k^{-1} \nabla f_k = \nabla^2 f_k^{-1} \left[ \nabla^2 f_k (x_k - x^*) - (\nabla f_k - \nabla f^*) \right],$$

where $\nabla f^* = \nabla f(x^*) = 0$. Now using

$$\nabla f_k - \nabla f^* = \int_0^1 \nabla^2 f(x_k + t(x^* - x_k)) \, (x_k - x^*) \, dt,$$

we have

$$\left\| \nabla^2 f(x_k)(x_k - x^*) - (\nabla f_k - \nabla f(x^*)) \right\|
= \left\| \int_0^1 \left[ \nabla^2 f(x_k) - \nabla^2 f(x_k + t(x^* - x_k)) \right] (x_k - x^*) \, dt \right\|$$
$$\leq \int_0^1 \left\| \nabla^2 f(x_k) - \nabla^2 f(x_k + t(x^* - x_k)) \right\| \, \|x_k - x^*\| \, dt
\leq \|x_k - x^*\|^2 \int_0^1 L t \, dt = \tfrac{L}{2} \|x_k - x^*\|^2,$$

where $L$ is the Lipschitz constant for $\nabla^2 f(x)$ for $x$ near $x^*$. Therefore

$$\|x_k + p_k^{(N)} - x^*\| \leq \tfrac{L}{2} \, \|\nabla^2 f(x_k)^{-1}\| \, \|x_k - x^*\|^2.$$

Therefore, the method converges in $x$ quadratically, $p = 2$.
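To see the quadratic contraction in $\|x_k - x^*\|$ numerically, the sketch below (added here) applies pure Newton steps with $\alpha_k = 1$ to the Rosenbrock function starting near its minimizer $x^* = (1, 1)$; the choice of test function and starting point are assumptions made for the example.

```python
import numpy as np

# Rosenbrock function f(x) = (1 - x1)^2 + 100 (x2 - x1^2)^2, minimizer x* = (1, 1)
def grad(x):
    return np.array([-2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0] ** 2),
                     200.0 * (x[1] - x[0] ** 2)])

def hess(x):
    return np.array([[2.0 - 400.0 * (x[1] - 3.0 * x[0] ** 2), -400.0 * x[0]],
                     [-400.0 * x[0], 200.0]])

x_star = np.array([1.0, 1.0])
x = np.array([1.2, 1.2])                     # start close enough that the Hessian is positive definite
for k in range(6):
    p = -np.linalg.solve(hess(x), grad(x))   # Newton direction with unit step alpha_k = 1
    x = x + p
    print(f"k={k}  ||x_k - x*|| = {np.linalg.norm(x - x_star):.3e}")
```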
A Line Search Algorithm

We now discuss an algorithm to find a step-length $\alpha$ which satisfies the Wolfe conditions. The basic strategy is to use interpolation together with a bisection-style search, determining which part of an interval contains points satisfying the Wolfe conditions. We state a general algorithm here in pseudo-code.

Algorithm: LineSearch (Input: $x_k$, $p_k$, $\alpha_{\max}$. Output: $\alpha^*$.)
  $\alpha_0 \leftarrow 0$; specify $\alpha_1 > 0$ and $\alpha_{\max}$; $i \leftarrow 1$
  repeat:
    Evaluate $\phi(\alpha_i)$;
    if $\phi(\alpha_i) > \phi(0) + c_1 \alpha_i \phi'(0)$, or [$\phi(\alpha_i) \geq \phi(\alpha_{i-1})$ and $i > 1$]:
      $\alpha^* \leftarrow$ zoom$(\alpha_{i-1}, \alpha_i)$ and stop;
    Evaluate $\phi'(\alpha_i)$;
    if $|\phi'(\alpha_i)| \leq -c_2 \phi'(0)$: set $\alpha^* \leftarrow \alpha_i$ and stop;
    if $\phi'(\alpha_i) \geq 0$: set $\alpha^* \leftarrow$ zoom$(\alpha_i, \alpha_{i-1})$ and stop;
    Choose $\alpha_{i+1} \in (\alpha_i, \alpha_{\max})$;
    $i \leftarrow i + 1$;
  end (repeat)

Algorithm: Zoom (Input: $\alpha_{lo}$, $\alpha_{hi}$. Output: $\alpha^*$.)
  repeat:
    Interpolate (using quadratic, cubic, or bisection) to find a trial step length $\alpha_j$ between $\alpha_{lo}$ and $\alpha_{hi}$;
    Evaluate $\phi(\alpha_j)$;
    if $\phi(\alpha_j) > \phi(0) + c_1 \alpha_j \phi'(0)$, or $\phi(\alpha_j) \geq \phi(\alpha_{lo})$:
      $\alpha_{hi} \leftarrow \alpha_j$;
    else:
      Evaluate $\phi'(\alpha_j)$;
      if $|\phi'(\alpha_j)| \leq -c_2 \phi'(0)$: set $\alpha^* \leftarrow \alpha_j$ and stop;
      if $\phi'(\alpha_j)(\alpha_{hi} - \alpha_{lo}) \geq 0$: $\alpha_{hi} \leftarrow \alpha_{lo}$;
      $\alpha_{lo} \leftarrow \alpha_j$;
  end (repeat)

For a more in-depth discussion and motivation for these algorithms see Nocedal and Wright, Numerical Optimization, pages 58-61.
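A compact Python transcription of the two routines above is sketched next; it is not from the original notes, and for brevity it uses pure bisection inside Zoom and a simple doubling rule for the trial steps $\alpha_{i+1}$, where the pseudo-code allows interpolation.

```python
import numpy as np

def line_search(phi, dphi, alpha_max, c1=1e-4, c2=0.9, max_iter=50):
    """Find a step length satisfying the Wolfe conditions.

    phi(alpha) = f(x_k + alpha p_k) and dphi(alpha) = grad f(x_k + alpha p_k)^T p_k.
    Bisection-only zoom and step doubling are simplifications for this sketch."""
    phi0, dphi0 = phi(0.0), dphi(0.0)

    def zoom(alpha_lo, alpha_hi):
        for _ in range(max_iter):
            alpha_j = 0.5 * (alpha_lo + alpha_hi)        # bisection trial step
            phi_j = phi(alpha_j)
            if phi_j > phi0 + c1 * alpha_j * dphi0 or phi_j >= phi(alpha_lo):
                alpha_hi = alpha_j
            else:
                dphi_j = dphi(alpha_j)
                if abs(dphi_j) <= -c2 * dphi0:           # curvature condition satisfied
                    return alpha_j
                if dphi_j * (alpha_hi - alpha_lo) >= 0:
                    alpha_hi = alpha_lo
                alpha_lo = alpha_j
        return alpha_lo

    alpha_prev, alpha_i = 0.0, min(1.0, alpha_max)
    for i in range(1, max_iter + 1):
        phi_i = phi(alpha_i)
        if phi_i > phi0 + c1 * alpha_i * dphi0 or (i > 1 and phi_i >= phi(alpha_prev)):
            return zoom(alpha_prev, alpha_i)
        dphi_i = dphi(alpha_i)
        if abs(dphi_i) <= -c2 * dphi0:
            return alpha_i
        if dphi_i >= 0:
            return zoom(alpha_i, alpha_prev)
        alpha_prev, alpha_i = alpha_i, min(2.0 * alpha_i, alpha_max)
    return alpha_i

# Usage on f(x) = 0.5 x^T Q x - b^T x along the steepest descent direction
Q, b = np.diag([1.0, 10.0]), np.array([1.0, 1.0])
x = np.array([5.0, 2.0]); g = Q @ x - b; p = -g
alpha = line_search(lambda a: 0.5 * (x + a * p) @ Q @ (x + a * p) - b @ (x + a * p),
                    lambda a: (Q @ (x + a * p) - b) @ p, alpha_max=10.0)
print("alpha =", alpha)
```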
Solving Nonlinear Unconstrained Optimization Problems

For unconstrained optimization problems of the form

$$\min_x f(x)$$

the basic line search algorithm is as follows:

Unconstrained Line Search Optimization Algorithm: (Input: $x_0$, $\alpha_{\max}$, $\epsilon$. Output: $x^*$.)
  $k \leftarrow 0$; $x_k \leftarrow x_0$;
  Evaluate $\nabla f_k \leftarrow \nabla f(x_k)$;
  repeat:
    $p_k \leftarrow -B_k^{-1} \nabla f_k$;
    $\alpha_k \leftarrow$ LineSearch$(x_k, p_k, \alpha_{\max})$;
    $x_{k+1} \leftarrow x_k + \alpha_k p_k$;
    Evaluate $\nabla f_{k+1} \leftarrow \nabla f(x_{k+1})$;
    if $\|\nabla f_{k+1}\| < \epsilon$ then $x^* \leftarrow x_{k+1}$ and stop;
    $k \leftarrow k + 1$;
  end (repeat)
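A minimal Python driver following this pseudo-code is sketched below; it assumes $B_k = I$ (steepest descent) for simplicity and reuses the line_search sketch from the previous code block.

```python
import numpy as np

def unconstrained_optimization(f, grad_f, x0, alpha_max=10.0, eps=1e-6, max_iter=500):
    """Basic line search driver with B_k = I (steepest descent); a sketch only."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < eps:                 # stopping test ||grad f_k|| < eps
            break
        p = -g                                      # descent direction with B_k = I
        phi = lambda a: f(x + a * p)
        dphi = lambda a: grad_f(x + a * p) @ p
        alpha = line_search(phi, dphi, alpha_max)   # Wolfe step from the sketch above
        x = x + alpha * p
    return x

# Usage on the quadratic model problem: the minimizer solves Q x = b
Q, b = np.diag([1.0, 10.0]), np.array([1.0, 1.0])
x_min = unconstrained_optimization(lambda x: 0.5 * x @ Q @ x - b @ x,
                                   lambda x: Q @ x - b, x0=[5.0, 2.0])
print(x_min, np.linalg.solve(Q, b))
```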
Solving Nonlinear Constrained Optimization Problems

For constrained optimization problems of the form

$$\min_x f(x) \quad \text{subject to} \quad c_i(x) = 0, \; i \in \mathcal{E}, \qquad c_i(x) \geq 0, \; i \in \mathcal{I},$$

a line search algorithm can also be constructed. To handle the constraints we shall formulate an unconstrained optimization problem which incorporates the constraints through penalty terms. The strategy is then to solve each of the unconstrained problems with penalty parameter $\mu_j$ up to a tolerance $\epsilon_j$. By repeatedly solving this problem using as the starting point the solution $x_j^*$ of the previous iteration, and by successively reducing the values of $\mu_j$ and $\epsilon_j$, a sequence of solutions $x_j^* \to x^*$ can be generated. One must take some care in how $\mu_j$ and $\epsilon_j$ are reduced each iteration to ensure convergence and efficient use of computational resources. These issues are further discussed in Nocedal and Wright. The basic algorithm is:

A Constrained Line Search Optimization Algorithm: (Input: $x_0$, $\alpha_{\max}$, $\epsilon$, $\delta$. Output: $x^*$.)
  $j \leftarrow 0$; $x_j^* \leftarrow x_0$; $\mu_j \leftarrow \mu_0$;
  repeat:
    Let $F(x, \mu_j) = f(x) + \dfrac{1}{2 \mu_j} \sum_{i \in \mathcal{E}} c_i^2(x) - \mu_j \sum_{i \in \mathcal{I}} \log(c_i(x))$;
    $x_j^* \leftarrow$ UnconstrainedOptimization$(x_j^*, \alpha_{\max}, \epsilon_j)$;
    Evaluate $\nabla F_j^* \leftarrow \nabla_x F(x_j^*, \mu_j)$;
    if $\|\nabla F_j^*\| < \epsilon$ and $\mu_j < \delta$ then $x^* \leftarrow x_j^*$ and stop;
    $j \leftarrow j + 1$;
    reduce the value of $\mu_j$;
    reduce the value of $\epsilon_j$;
  end (repeat)
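To illustrate the outer loop, the sketch below (added here) applies the quadratic-penalty part of this scheme to a small equality-constrained quadratic program; as a simplification, each penalty subproblem is solved in closed form via its linear stationarity condition rather than by calling the unconstrained line search solver, and the problem data are assumptions made for the example.

```python
import numpy as np

# Illustrative problem: minimize 0.5 x^T Q x - b^T x  subject to  c(x) = a^T x - 1 = 0
Q = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
a = np.array([1.0, 1.0])

mu = 1.0
x = np.zeros(2)
for j in range(12):
    # Penalty subproblem F(x, mu) = f(x) + (1/(2 mu)) c(x)^2; its gradient is linear in x,
    # so grad F = 0 gives (Q + a a^T / mu) x = b + a / mu.
    x = np.linalg.solve(Q + np.outer(a, a) / mu, b + a / mu)
    print(f"j={j}  mu={mu:.1e}  constraint violation = {a @ x - 1.0:+.2e}")
    mu *= 0.5                                  # reduce the penalty parameter each outer iteration

# Exact constrained minimizer from the KKT system, for comparison
KKT = np.block([[Q, a[:, None]], [a[None, :], np.zeros((1, 1))]])
print("penalty solution:", x, " exact:", np.linalg.solve(KKT, np.append(b, 1.0))[:2])
```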