Stationarity results for generating set search for linearly constrained optimization
1 Stationarity results for generating set search for linearly constrained optimization. Virginia Torczon, College of William & Mary. Collaborators: Tammy Kolda, Sandia National Laboratories; Robert Michael Lewis, College of William & Mary.
2 The problem: minimize $f(x)$ subject to $Ax \le b$, where $f : \mathbb{R}^n \to \mathbb{R}$, $x \in \mathbb{R}^n$, $A$ is an $m \times n$ matrix, and $b \in \mathbb{R}^m$. We use $\Omega$ to denote the feasible region: $\Omega = \{\, x \in \mathbb{R}^n : Ax \le b \,\}$. We assume that $f$ is continuously differentiable on $\Omega$ but that gradient information is not computationally available. We do not assume that the constraints are nondegenerate.
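To make the setup concrete, here is a minimal feasibility test for $\Omega$; this is a sketch only, in Python (used for all code sketches added to this transcription), with an illustrative function name and tolerance that are not from the talk.

```python
import numpy as np

def is_feasible(A, b, x, tol=1e-12):
    """Return True if A x <= b componentwise (up to a small tolerance)."""
    return bool(np.all(A @ x <= b + tol))
```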
3 The stationarity measure:¹ $$\chi(x) \;\equiv\; \max_{\substack{x + w \,\in\, \Omega \\ \|w\| \le 1}} -\nabla f(x)^T w,$$ where $\chi$ is a continuous function with the property that, for $x \in \Omega$, $\chi(x) = 0$ if and only if $x$ is a KKT point of the linearly constrained problem. ¹ A. R. Conn, N. Gould, A. Sartenaer, and P. L. Toint, Convergence properties of an augmented Lagrangian algorithm for optimization with a combination of general equality and linear constraints, SIOPT, 1996.
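The measure $\chi(x)$ can be evaluated numerically as a diagnostic. The sketch below is hedged in two ways: the talk does not pin down the norm, so the ball $\|w\| \le 1$ is replaced by the box $\|w\|_\infty \le 1$ to make the subproblem a linear program, and the gradient is passed in explicitly even though GSS methods never compute it; the function serves only to check stationarity of candidate solutions after the fact.

```python
from scipy.optimize import linprog

def chi(A, b, x, grad):
    """chi(x) = max { -grad^T w : A(x + w) <= b, ||w||_inf <= 1 }."""
    # max -grad^T w == -( min grad^T w ), subject to A w <= b - A x.
    # For feasible x, w = 0 is feasible, so the returned value is >= 0.
    res = linprog(c=grad, A_ub=A, b_ub=b - A @ x,
                  bounds=[(-1.0, 1.0)] * x.size, method="highs")
    return -res.fun
```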
4 First-order stationarity. Main result: once $\Delta_k$ is small enough, the subsequence $\{x_k\}$, $k \in U \subseteq \{0, 1, 2, \ldots\}$, produced by a Generating Set Search (GSS) method for linearly constrained optimization satisfies $\chi(x_k) = O(\Delta_k)$. Observe: a careful specification of a GSS method for linearly constrained optimization ensures, at a minimum, that $\liminf_{k \to \infty} \Delta_k = 0$, so first-order stationarity for this subsequence of iterates is immediate. But that is not the primary motivation for this investigation, since global convergence to KKT points for linearly constrained problems has already been established [see May, 1974; Yu/Li, 1981; Lewis/Torczon, 2000; Lucidi/Sciandrone/Tseng, 2002; ...].
5 What are GSS methods for linearly constrained optimization? Look at one simple example applied to the problem: minimize over $x \in \mathbb{R}^2$ the function $$f(x) = \left| (3 - 2x_1)x_1 - 2x_2 + 1 \right|^{7/3} + \left| (3 - 2x_2)x_2 - x_1 + 1 \right|^{7/3},$$ the modified Broyden tridiagonal function, augmented with three linear constraints. Use a feasible-iterates approach.
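A sketch of this objective, under the assumption that the reconstruction above (with the $7/3$ exponent used in the Kolda/Lewis/Torczon papers) is the intended function; the three linear constraints of the example are not specified in this transcription, so they are omitted here.

```python
def f(x):
    """Modified Broyden tridiagonal function in two variables (assumed form)."""
    x1, x2 = x
    r1 = (3.0 - 2.0 * x1) * x1 - 2.0 * x2 + 1.0
    r2 = (3.0 - 2.0 * x2) * x2 - x1 + 1.0
    return abs(r1) ** (7.0 / 3.0) + abs(r2) ** (7.0 / 3.0)
```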
6 The initial set of search directions:
7 Identify feasible improvement:
8 Move Northeast and keep the set of search directions
9 Identify feasible improvement:
10 Move Northeast and change the set of search directions
11 No feasible improvement:
12 Contract and change the set of search directions
13 Identify feasible improvement:
14 Move Northeast and change the set of search directions
15 No feasible improvement:
16 Contract and change the set of search directions
17 Critical: when close to the boundary of the feasible region, the set of directions must conform to the geometry of the nearby constraints. Essential features (assembled in the sketch below and detailed on the slides that follow): identifying the nearby constraints; obtaining a set of search directions; finding a step of an appropriate length; accepting a step.
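The snippet below assembles the four features into one illustrative GSS iteration. It is a sketch, not the authors' algorithm: the helper names (`identify_active`, `tangent_generators`, `max_feasible_step`, `rho`) are hypothetical stand-ins sketched under the corresponding slides below, and the expansion and contraction factors are arbitrary legal choices.

```python
def gss_step(f, A, b, x, delta, eps_max, beta_max, rho, phi=2.0, theta=0.5):
    """One iteration of a linearly constrained GSS method (illustrative sketch)."""
    eps = min(eps_max, beta_max * delta)        # one of the choices on slide 24
    active = identify_active(A, b, x, eps)      # (1) nearby constraints
    G = tangent_generators(A, active)           # (2) search directions
    fx = f(x)
    for d in G:
        # (3) full step when feasible; truncation at the boundary is one of
        # the options the analysis allows (slide 33).
        step = min(delta, max_feasible_step(A, b, x, d))
        trial = x + step * d
        if f(trial) < fx - rho(delta):          # (4) sufficient decrease
            return trial, phi * delta           # successful iteration: k in S
    return x, theta * delta                     # unsuccessful iteration: k in U
```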
18 Identifying the nearby constraints: find the outward-pointing normals within distance $\varepsilon$ of the current iterate. [Figure: three feasible regions $\Omega$, each showing an iterate ($x_1$, $x_2$, $x_3$) and the outward-pointing normals ($a_1$, $a_2$, $a_3$) of the constraints within distance $\varepsilon$.] The conditions on $\varepsilon$ depend on the convergence analysis in effect.
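A sketch of this identification step, assuming the rows $a_i^T$ of $A$ are the outward-pointing normals: for feasible $x$, the distance from $x$ to the hyperplane $a_i^T y = b_i$ is $(b_i - a_i^T x)/\|a_i\|$.

```python
import numpy as np

def identify_active(A, b, x, eps):
    """Indices of constraints whose boundary lies within distance eps of x."""
    norms = np.linalg.norm(A, axis=1)      # ||a_i|| for each row
    dist = (b - A @ x) / norms             # distance from x to each face
    return np.where(dist <= eps)[0]
```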
19 Obtaining a set of search directions, Part I: translate the outward-pointing normals within distance $\varepsilon$ of the current iterate $x$ to obtain the $\varepsilon$-normal cone $N(x, \varepsilon)$ and its polar, the $\varepsilon$-tangent cone $T(x, \varepsilon)$. [Figure: the cones $N(x, \varepsilon_i)$ and $T(x, \varepsilon_i)$ at $x$ for three values $\varepsilon_1$, $\varepsilon_2$, $\varepsilon_3$.]
20 Obtaining a set of search directions, Part II: the set of search directions must contain generators for the $\varepsilon$-tangent cone $T(x, \varepsilon)$. [Figure: generators of $T(x, \varepsilon_i)$ for the three values of $\varepsilon$ from the previous slide.] Conditioning is important!
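One way to produce such generators in the nondegenerate case (linearly independent $\varepsilon$-active normals) follows the classical construction used by Lewis/Torczon, 2000: with $V$ holding the active normals as columns, take the columns of $-V(V^T V)^{-1}$ together with $\pm$ a basis of the null space of $V^T$. The sketch below assumes that nondegenerate case; degenerate active sets require cone-enumeration machinery instead.

```python
import numpy as np
from scipy.linalg import null_space

def tangent_generators(A, active):
    """Generators of T(x, eps), assuming linearly independent active normals."""
    n = A.shape[1]
    if len(active) == 0:                   # no nearby constraints: T = R^n,
        I = np.eye(n)                      # so the coordinate directions suffice
        return list(np.vstack([I, -I]))
    V = A[active].T                        # eps-active outward normals, as columns
    W = -V @ np.linalg.inv(V.T @ V)        # one generator per active constraint
    Z = null_space(V.T)                    # basis of the lineality space of T
    G = np.hstack([W, Z, -Z])
    G = G / np.linalg.norm(G, axis=0)      # unit lengths: beta_min = beta_max = 1
    return [G[:, j] for j in range(G.shape[1])]
```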
21 Conditioning: concern. When $T(x, \varepsilon)$ is a lineality space or a half-space, there is freedom in choosing the generators. [Figure: three choices of generators $d^{(1)}, d^{(2)}, d^{(3)}$ for $T(x, \varepsilon)$ relative to $-\nabla f(x)$, labeled poor, good, and ideal.]
22 Conditioning: requirement that must be enforced. There exists a constant $\kappa_{\min} > 0$, independent of $k$, such that for all $k$ there exists a set of generators $G$ for $T(x_k, \varepsilon_k)$ with $$\kappa(G) \;\equiv\; \min_{\substack{v \in \mathbb{R}^n \\ v_K \neq 0}} \; \max_{d \in G} \; \frac{v^T d}{\|v_K\|\,\|d\|} \;\ge\; \kappa_{\min},$$ where $v_K$ denotes the projection of $v$ onto the cone $K$ generated by $G$.
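For intuition, $\kappa(G)$ is a worst-case cosine between directions in $K$ and their best-aligned generator. The sketch below gives a Monte Carlo estimate in the special case $K = \mathbb{R}^n$ (so $v_K = v$); the general case would additionally require projecting each sample onto $K$, which is omitted here.

```python
import numpy as np

def kappa_estimate(G, samples=200_000, seed=0):
    """Monte Carlo estimate of kappa(G) when K = R^n (so v_K = v)."""
    D = np.column_stack(G)
    D = D / np.linalg.norm(D, axis=0)                # unit generators
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((samples, D.shape[0]))
    V /= np.linalg.norm(V, axis=1, keepdims=True)    # unit test vectors
    return float((V @ D).max(axis=1).min())          # min over v, max over d
```

For example, for $G = \{\pm e_1, \pm e_2\}$ in $\mathbb{R}^2$ the estimate approaches $\cos 45^\circ \approx 0.707$.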
23 Conditioning: gratis. There exists a constant $\nu_{\min} > 0$, independent of $k$, such that for all $k$ there exists a set of generators $\mathcal{A}$ for $N(x_k, \varepsilon_k)$ with $\kappa(\mathcal{A}) \ge \nu_{\min}$. The "for free" is because each $N(x, \varepsilon)$ is generated by at most $m$ rows of the constraint matrix $A$.
24 Obtaining a set of search directions, Part III: what precisely the set of search directions contains depends on the convergence analysis in effect. Choices: at a minimum, generators for the $\varepsilon$-tangent cones $T(x_k, \varepsilon)$ for all $\varepsilon \in [0, \varepsilon_*]$ [Lewis/Torczon, 2000]; only generators for the $\varepsilon$-tangent cone $T(x_k, \varepsilon_k)$ [Lucidi/Sciandrone/Tseng, 2002]; at a minimum, generators for the $\varepsilon$-tangent cone $T(x_k, \varepsilon_k)$, where $\varepsilon_k = \min\{\varepsilon_{\max}, \beta_{\max} \Delta_k\}$ [Kolda/Torczon/Lewis, 2004]. The sets used for the example contained generators for both $T(x_k, \varepsilon_k)$ and $N(x_k, \varepsilon_k)$.
25 The initial set of search directions:
26 Move Northeast and keep the set of search directions
27 Move Northeast and change the set of search directions
28 Contract and change the set of search directions
29 Move Northeast and change the set of search directions
30 Contract and change the set of search directions
31 Finding steps of an appropriate length. First: bound the lengths of the direction vectors. Specifically, there must exist $\beta_{\min}$ and $\beta_{\max}$, independent of $k$, such that for all $k$: $$\beta_{\min} \le \|d\| \le \beta_{\max} \quad \text{for all } d \in G_k.$$
32 Finding steps of an appropriate length. Second: once the lengths of the direction vectors are bounded for all $k$, tie the lengths of the steps tried to a step-length control parameter $\Delta_k$. Requirement: if $x_k + \Delta_k d_k^{(i)} \in \Omega$, then the actual step taken along $d_k^{(i)}$ must be of length $\Delta_k$, just as for unconstrained generating set search methods. Question: what to do when $x_k + \Delta_k d_k^{(i)} \notin \Omega$?
33 Finding feasible steps of an appropriate length. [Figure: three panels illustrating options for handling an infeasible trial step from $x_k$.] Once again, the analysis allows multiple options.
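One of those options is to truncate the trial step at the boundary of $\Omega$. A sketch via the standard ratio test: $x + t\,d \in \Omega$ requires $t\,(a_i^T d) \le b_i - a_i^T x$ for every row, so the largest feasible $t$ is the minimum of those ratios over rows with $a_i^T d > 0$.

```python
import numpy as np

def max_feasible_step(A, b, x, d, tol=1e-12):
    """Largest t >= 0 with x + t*d in Omega (np.inf if d is a feasible ray)."""
    Ad = A @ d
    slack = b - A @ x                      # nonnegative for feasible x
    blocking = Ad > tol                    # only these rows limit the step
    if not np.any(blocking):
        return np.inf
    return float(np.min(slack[blocking] / Ad[blocking]))
```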
34 Accepting a step. If $x_k + \Delta_k d_k \in \Omega$ and $f(x_k + \Delta_k d_k) < f(x_k) - \rho(\Delta_k)$, then $k \in S$ (a successful iteration), $x_{k+1} = x_k + \Delta_k d_k$, and $\Delta_{k+1} = \phi_k \Delta_k$ for a choice of $\phi_k \ge 1$. Otherwise, $k \in U$ (an unsuccessful iteration), $x_{k+1} = x_k$, and $\Delta_{k+1} = \theta_k \Delta_k$ for some choice of $\theta_k \in (0, 1)$.
35 The forcing function:
1. The function $\rho(\cdot)$ is a nonnegative continuous function on $[0, +\infty)$.
2. The function $\rho(\cdot)$ is $o(t)$ as $t \downarrow 0$; i.e., $\lim_{t \downarrow 0} \rho(t)/t = 0$.
3. The function $\rho(\cdot)$ is nondecreasing; i.e., $\rho(t_1) \le \rho(t_2)$ if $t_1 \le t_2$.
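A common concrete choice satisfying all three conditions is $\rho(t) = c\,t^2$ for a small $c > 0$; the constant below is illustrative only.

```python
def rho(t, c=1e-4):
    """Forcing function rho(t) = c t^2: nonnegative and continuous on [0, inf),
    o(t) as t -> 0 since rho(t)/t = c t, and nondecreasing for t >= 0."""
    return c * t * t
```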
36 The stationarity result: if the set $\mathcal{F} = \{x \in \Omega : f(x) \le f(x_0)\}$ is bounded and the gradient of $f$ is Lipschitz continuous with constant $M$ on $\mathcal{F}$, then there exists $\gamma > 0$ such that $\|\nabla f(x)\| < \gamma$ for all $x \in \mathcal{F}$. Further, for all $k \in U$, if $\varepsilon_k = \beta_{\max} \Delta_k$, then $$\chi(x_k) \;\le\; \left( \frac{M}{\kappa_{\min}} + \frac{\gamma}{\nu_{\min}} \right) \Delta_k \beta_{\max} \;+\; \frac{1}{\kappa_{\min} \beta_{\min}} \, \frac{\rho(\Delta_k)}{\Delta_k}.$$ (With a forcing function such as $\rho(t) = c\,t^2$, the second term is also $O(\Delta_k)$, which gives the $\chi(x_k) = O(\Delta_k)$ bound of slide 4.)
37 Conclusions: $\Delta_k$ can be used to assess progress toward a KKT point. The bound on $\chi(x_k)$ illuminates which algorithmic parameters can and should be monitored to assure the effectiveness of an implementation. Our analysis yields an estimate that makes it possible to use linearly constrained GSS methods with the augmented Lagrangian approach of Conn/Gould/Sartenaer/Toint to handle problems with general constraints.