Continuous Optimisation, Chpt 6: Solution methods for Constrained Optimisation
1 Continuous Optimisation, Chpt 6: Solution methods for Constrained Optimisation

Peter J.C. Dickinson
DMMP, University of Twente
version: 06/11/17, Monday 6th November 2017
2 Problem

$$\min_x\ f(x) \quad \text{s.t.}\quad g_j(x) \le 0 \ \text{for all}\ j = 1,\dots,m,\quad x \in \mathbb{R}^n. \tag{C}$$

Here $f, g_1, \dots, g_m \in C^1$ with $f, g_1, \dots, g_m : \mathbb{R}^n \to \mathbb{R}$, and the feasible set is

$$F := \{x \in \mathbb{R}^n : g_j(x) \le 0 \ \text{for all}\ j = 1,\dots,m\}.$$

We will not make any convexity assumptions.
3 Table of Contents

1 Introduction
2 Feasible descent method
  - Basic idea
  - Naive choice of direction
  - Alternative choice of direction
3 Unconstrained optimisation
4 Penalty method
5 Barrier method
4 Basic idea

1. Start at a point $x^0 \in F$ ($k = 0$).
2. If $x^k$ is a John point then STOP.
3. If it is not a John point then there is a strictly feasible descent direction $d^k$.
4. Line search: find $\lambda_k = \arg\min_\lambda \{f(x^k + \lambda d^k) : \lambda \in \mathbb{R},\ x^k + \lambda d^k \in F\}$ (or just require $f(x^k + \lambda_k d^k) < f(x^k)$ and $x^k + \lambda_k d^k \in F$).
5. Let $x^{k+1} = x^k + \lambda_k d^k \in F$ and $k \leftarrow k + 1$.
6. If the stopping criteria are satisfied then STOP, else go to step 2.
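This loop translates almost line-for-line into code. Below is a minimal sketch (my own illustration, not from the slides): `direction` stands in for a direction-finding oracle such as the LPs on the next two slides, and the exact line search of step 4 is replaced by simple backtracking.

```python
import numpy as np

def feasible_descent(f, feasible, direction, x0, max_iter=100, tol=1e-8):
    """Schematic feasible descent method. direction(x) returns (d, z),
    where z >= -tol signals that no strictly feasible descent direction
    was found (x is then taken to be a John point)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d, z = direction(x)
        if z >= -tol:                                  # step 2: stop at a John point
            break
        lam = 1.0                                      # step 4: backtracking line
        while lam > 1e-12 and not (feasible(x + lam * d)        # search, keeping
                                   and f(x + lam * d) < f(x)):  # feasibility
            lam *= 0.5
        x = x + lam * d                                # step 5
    return x
```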
5 Choosing $d^k$: Naive method

If there is a strictly feasible descent direction, then the following problem will provide one:

$$\min_{d,z}\ z \quad \text{s.t.}\quad \nabla f(x^k)^T d \le z, \quad \nabla g_i(x^k)^T d \le z \ \text{for all } i \text{ s.t. } g_i(x^k) = 0, \quad -1 \le d_j \le 1 \ \text{for all } j = 1,\dots,n.$$

Remark 6.1
(+) This is a relatively simple method for choosing $d^k$.
(−) It ignores constraints with $g_i(x^k) < 0$ but $g_i(x^k) \approx 0$. This can lead to bad convergence, and possibly even convergence to points which are not John points.
6 Choosing $d^k$: Topkis and Veinott method

Ex. 6.1
Consider the following optimisation problem:

$$\min_{d,z}\ z \quad \text{s.t.}\quad \nabla f(x^k)^T d \le z, \quad \nabla g_i(x^k)^T d \le z - g_i(x^k) \ \text{for all } i = 1,\dots,m, \quad -1 \le d_j \le 1 \ \text{for all } j = 1,\dots,n.$$

1. Prove that if $(d^*, z^*)$ is an optimal solution to the problem above with $z^* < 0$, then $d^*$ is a strictly feasible descent direction in (C).
2. Prove that if there is a strictly feasible descent direction in (C), then the optimal value of the problem above is strictly negative.

(+) Relatively simple method for choosing $d^k$.
(+) All constraints are taken into account.
(+) If there is an $\bar{x} \in F$ such that a subsequence of the solutions tends towards $\bar{x}$, then $\bar{x}$ is a John point. [FKS, Th.12.5]
(−) This gives a first order method (only gradients are taken into account), and such methods generally have slow convergence.
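Both direction-finding problems are linear programs in $(d, z)$, so an off-the-shelf LP solver suffices. A minimal sketch using scipy.optimize.linprog (the function name and data layout are my own; the naive variant of the previous slide is obtained by keeping only the rows of active constraints, with right-hand side 0):

```python
import numpy as np
from scipy.optimize import linprog

def tv_direction(grad_f, grad_g, g_vals):
    """Topkis-Veinott LP:  min z  s.t.  grad_f^T d - z <= 0,
    grad_g_i^T d - z <= -g_i(x) for all i,  -1 <= d_j <= 1.
    grad_g is an (m, n) array of constraint gradients at x."""
    n, m = grad_f.size, len(g_vals)
    c = np.zeros(n + 1)
    c[-1] = 1.0                                        # objective: minimise z
    A_ub = np.vstack([np.append(grad_f, -1.0),
                      np.hstack([grad_g, -np.ones((m, 1))])])
    b_ub = np.concatenate([[0.0], -np.asarray(g_vals)])
    bounds = [(-1.0, 1.0)] * n + [(None, None)]        # box on d, z free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n], res.x[-1]                        # z < 0 => descent direction

```

Plugging `tv_direction` (with the problem's gradients evaluated at the current iterate) into the `feasible_descent` sketch above gives a complete, if slow, first-order method.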
7 Table of Contents

1 Introduction
2 Feasible descent method
3 Unconstrained optimisation
  - Newton's method
  - Interpretations
4 Penalty method
5 Barrier method
8 Newton's method

To minimise $f : C \to \mathbb{R}$, $f \in C^2$, we do the following:

1. Start at a point $x^0 \in C$ ($k = 0$).
2. If $\nabla f(x^k) = 0$ then STOP.
3. Assuming $\nabla^2 f(x^k) \succ O$, let $h^k = -(\nabla^2 f(x^k))^{-1} \nabla f(x^k)$.
4. Let $x^{k+1} = x^k + h^k$ and $k \leftarrow k + 1$.
5. If the stopping criteria are satisfied then STOP, else go to step 2.

Remark 6.2
We could penalise moving too far away from $x^k$ by exchanging $f(x)$ for $f_{k,\mu}(x) = f(x) + \mu \|x - x^k\|_2^2$, with parameter $\mu > 0$. Then

$$\nabla f_{k,\mu}(x) = \nabla f(x) + 2\mu(x - x^k), \qquad \nabla f_{k,\mu}(x^k) = \nabla f(x^k),$$
$$\nabla^2 f_{k,\mu}(x) = \nabla^2 f(x) + 2\mu I, \qquad \nabla^2 f_{k,\mu}(x^k) = \nabla^2 f(x^k) + 2\mu I.$$

For $\mu$ high enough we then have $\nabla^2 f_{k,\mu}(x) \succ O$.
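A direct transcription of these five steps (my own sketch; the tolerance, iteration cap, and example function are illustrative assumptions, not from the slides):

```python
import numpy as np

def newton(grad, hess, x0, tol=1e-10, max_iter=100):
    """Minimise a C^2 function given callables for its gradient and Hessian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:          # step 2: stationary point found
            break
        h = np.linalg.solve(hess(x), -g)      # step 3: Newton step (Hessian PD)
        x = x + h                             # step 4
    return x

# Example: f(x) = x1^4 + x2^2, minimised at the origin.
grad = lambda x: np.array([4 * x[0]**3, 2 * x[1]])
hess = lambda x: np.array([[12 * x[0]**2, 0.0], [0.0, 2.0]])
print(newton(grad, hess, [1.0, 1.0]))
```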
9 Interpretations

Interpretation 1
We want to find $h$ in order to minimise $f(x^k + h)$. We have

$$f(x^k + h) \approx f(x^k) + \nabla f(x^k)^T h + \tfrac{1}{2} h^T \nabla^2 f(x^k) h.$$

Assuming $\nabla^2 f(x^k) \succ O$ and considering the RHS of the above as a function of $h$, it is minimised at $h = -(\nabla^2 f(x^k))^{-1} \nabla f(x^k)$.

Interpretation 2
We want to find $h$ such that $\nabla f(x^k + h) = 0$. We have

$$\nabla f(x^k + h) \approx \nabla f(x^k) + \nabla^2 f(x^k) h.$$

Assuming $\nabla^2 f(x^k)$ is nonsingular, the RHS of the above is equal to $0$ if and only if $h = -(\nabla^2 f(x^k))^{-1} \nabla f(x^k)$.
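To see why the two interpretations agree (a one-line check, my own working): the quadratic model of Interpretation 1 is minimised where its $h$-gradient vanishes,

$$\nabla_h \Big( f(x^k) + \nabla f(x^k)^T h + \tfrac{1}{2} h^T \nabla^2 f(x^k) h \Big) = \nabla f(x^k) + \nabla^2 f(x^k) h = 0,$$

which is exactly the linearised stationarity condition of Interpretation 2, with the unique solution $h = -(\nabla^2 f(x^k))^{-1} \nabla f(x^k)$ when $\nabla^2 f(x^k) \succ O$.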
10 Table of Contents

1 Introduction
2 Feasible descent method
3 Unconstrained optimisation
4 Penalty method
  - Basic idea
  - Basic results
  - (Dis)advantages
  - Choices for p
  - Example
  - Implementation
5 Barrier method
11 Basic idea

Definition 6.3
$p : \mathbb{R}^n \to \mathbb{R}$ is a penalty function with respect to $F$ if

- $p \in C^0$,
- $p(x) = 0$ for all $x \in F$,
- $p(x) > 0$ for all $x \in \mathbb{R}^n \setminus F$.

Penalty method
In the penalty method we solve the following unconstrained optimisation problem (for a suitable parameter $r > 0$ and penalty function $p$):

$$\min_x \{f(x) + r\,p(x)\}.$$
12 Basic results

Lemma 6.4
For $r > 0$ we have

$$\min_x \{f(x) + r\,p(x)\} \le \min_x \{f(x) + r\,p(x) : x \in F\} = \mathrm{val}(C).$$

If $F \cap \arg\min_x \{f(x) + r\,p(x)\} \neq \emptyset$ then we have equality above.

Theorem 6.5
Suppose we have $\{r_k : k \in \mathbb{N}\} \subseteq \mathbb{R}_{++}$ with $\lim_{k\to\infty} r_k = \infty$ and $x^k \in \arg\min_x \{f(x) + r_k\,p(x)\}$ for all $k \in \mathbb{N}$, such that $\bar{x} = \lim_{k\to\infty} x^k$ for some $\bar{x} \in \mathbb{R}^n$. Then $\bar{x} \in F$ and $\bar{x}$ is a global minimiser of (C).
13 (Dis)advantages

(+) This is an unconstrained problem, and thus we can use our methods from unconstrained optimisation.
(+) The optimal value of the penalty problem for a given $r > 0$ gives a lower bound on the optimal value of (C).
(+) If for some $r > 0$ we have an optimal solution to the penalty problem in $F$, then this is also an optimal solution to the original problem. (Under some conditions we can guarantee this happens for $r$ large enough.)
(+) If $\bar{x}$ is a limit point of a subsequence of optimal solutions $x_r$ as $r \to \infty$, then $\bar{x} \in F$ and $\bar{x}$ is a global minimiser of (C).
(−) In general we will get optimal solutions $x_r \notin F$.
14 Choice of p

Letting $g_j^+(x) = \max\{0, g_j(x)\}$, two common choices are:

$$p(x) = \sum_{j=1}^m g_j^+(x), \qquad \text{and} \qquad p(x) = \sum_{j=1}^m \big(g_j^+(x)\big)^2.$$

Ex. 6.2
Show that:
1. if $g$ is convex then $g^+$ is also convex;
2. if $g$ is convex then $(g^+(x))^2$ is also convex.

If $g \in C^1$ then $(g^+(x))^2$ also has a continuous derivative. In general $g^+ \notin C^1$.

If LICQ is satisfied at a local minimiser $x^* \in F$ of (C), and $y^* \in \mathbb{R}^m_+$ are the KKT multipliers, then for $p(x) = \sum_{j=1}^m g_j^+(x)$ and $r > \max\{y_j^* : j \in J_{x^*}\}$, we have that $x^*$ is a local minimiser of the penalty problem. [FKS, Th.12.10]
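In code the two choices are one-liners. A small illustration (my own; the constraint functions are passed as callables with the convention $g_j(x) \le 0$):

```python
def p_l1(x, gs):
    """p(x) = sum_j max(0, g_j(x)) -- exact but in general not C^1."""
    return sum(max(0.0, g(x)) for g in gs)

def p_quad(x, gs):
    """p(x) = sum_j max(0, g_j(x))^2 -- C^1 whenever each g_j is C^1."""
    return sum(max(0.0, g(x))**2 for g in gs)

# Feasible set {x : x >= 1}, i.e. g(x) = 1 - x <= 0:
gs = [lambda x: 1.0 - x]
print(p_l1(0.5, gs), p_quad(0.5, gs))   # 0.5 0.25 (both are zero for x >= 1)
```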
15 Example

Ex. 6.3
Consider the problems

$$\min_x \{x : x \ge 1\}, \qquad \min_x \{x^3 : x \ge 1\}.$$

For each of these problems:
1. What is the global minimiser, denoted $x^*$, and the optimal value of this problem?
2. For $p(x) = \sum_{j=1}^m g_j^+(x)$ and $p(x) = \sum_{j=1}^m (g_j^+(x))^2$:
   1. For $r > 0$, is the derivative of $f_r(x) := f(x) + r\,p(x)$ with respect to $x$ continuous or not?
   2. Find all the stationary points of $f_r(x)$, as a function of $r$.
   3. Find the optimal value and solution to $\min_x \{f_r(x) : x \in \mathbb{R}\}$ as a function of $r > 0$.
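As a partial check on the first problem (my own working, for the quadratic penalty only): writing the constraint as $g(x) = 1 - x \le 0$,

$$f_r(x) = x + r\big(\max\{0, 1-x\}\big)^2, \qquad f_r'(x) = \begin{cases} 1 - 2r(1-x), & x < 1, \\ 1, & x \ge 1, \end{cases}$$

so $f_r'$ is continuous and the unique stationary point is $x_r = 1 - \tfrac{1}{2r}$, with $f_r(x_r) = 1 - \tfrac{1}{4r}$. For every $r > 0$ the penalty minimiser is infeasible, as warned on the previous slide, but $x_r \to x^* = 1$ and the optimal value approaches $\mathrm{val}(C) = 1$ from below, as Lemma 6.4 predicts.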
16 Implementation

One implementation would be to solve the penalty problem once for $r$ very large. Alternatively, we could note that we are only interested in the limit as $r \to \infty$, and not in the solutions to the penalty problem for any fixed $r > 0$. We could thus use something like Newton's method to attempt to find a solution to the penalty problem, and in each iteration increase $r$.
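A minimal sketch of the second strategy (my own; scipy's general-purpose minimiser stands in for Newton's method, and the tenfold growth of $r$ is an arbitrary schedule), applied to $\min_x\{x : x \ge 1\}$ from Ex. 6.3:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]                                 # min x  s.t.  x >= 1
p = lambda x: max(0.0, 1.0 - x[0])**2              # quadratic penalty

x = np.array([0.0])
r = 1.0
for _ in range(10):
    res = minimize(lambda y: f(y) + r * p(y), x)   # unconstrained subproblem
    x, r = res.x, 10.0 * r                         # warm start, increase r
print(x)   # approaches x* = 1 from outside the feasible region
```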
17 Table of Contents

1 Introduction
2 Feasible descent method
3 Unconstrained optimisation
4 Penalty method
5 Barrier method
  - Basic idea
  - Basic results
  - (Dis)advantages
  - Frisch's barrier function
  - Implementation
18 Basic idea

We will let $F^\circ = \{x \in \mathbb{R}^n : g_i(x) < 0 \ \text{for all}\ i\}$ and assume $F = \mathrm{cl}\, F^\circ$.

Lemma 6.6
$$\inf_x \{f(x) : x \in F\} = \inf_x \{f(x) : x \in F^\circ\}.$$

Definition 6.7
$b : F^\circ \to \mathbb{R}$, $b \in C^0$, is a barrier function for (C) if for all $\bar{x} \in \mathrm{bd}\, F^\circ$ we have $\lim_{x \to \bar{x}} b(x) = \infty$.

Barrier method
In the barrier method we solve the following unconstrained optimisation problem (for a suitable parameter $\rho > 0$ and a suitable barrier function $b$):

$$\min_x \{f(x) + \rho\, b(x) : x \in F^\circ\}.$$
19 Basic results

Lemma 6.8
We have $F^\circ \subseteq F$, and thus for all $\bar{x} \in F^\circ$ we get an upper bound of $f(\bar{x})$ on the optimal value of (C).

Theorem 6.9
Suppose we have $\{\rho_k : k \in \mathbb{N}\} \subseteq \mathbb{R}_{++}$ with $\lim_{k\to\infty} \rho_k = 0$ and $x^k \in \arg\min_x \{f(x) + \rho_k\, b(x)\}$ for all $k \in \mathbb{N}$, such that $\bar{x} = \lim_{k\to\infty} x^k$ for some $\bar{x} \in \mathbb{R}^n$. Then $\bar{x} \in F$ and $\bar{x}$ is a global minimiser of (C).
20 (Dis)advantages

(+) This is an unconstrained problem, and thus we can use our methods from unconstrained optimisation.
(+) $F^\circ \subseteq F$, and thus all feasible points for this problem are feasible for (C).
(+) If $\bar{x}$ is a limit point of a subsequence of optimal solutions $x_\rho$ as $\rho \to 0^+$, then $\bar{x} \in F$ and $\bar{x}$ is a global minimiser of (C).
21 Frisch's barrier function

Frisch's barrier function:
$$b(x) = -\sum_{i=1}^m \ln(-g_i(x)).$$

Ex. 6.4
Consider $g \in C^2$ and $b : \{x \in \mathbb{R}^n : g(x) < 0\} \to \mathbb{R}$, $b(x) = -\ln(-g(x))$. Find $\nabla^2 b(x)$ and, using this, show that if $g$ is a convex function then so is $b$.
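For reference, differentiating directly (my own working; it also yields the gradient used in Theorem 6.10 below):

$$\nabla b(x) = -\frac{1}{g(x)} \nabla g(x), \qquad \nabla^2 b(x) = -\frac{1}{g(x)} \nabla^2 g(x) + \frac{1}{g(x)^2} \nabla g(x)\nabla g(x)^T.$$

Since $g(x) < 0$ we have $-1/g(x) > 0$, so if $g$ is convex both terms are positive semidefinite, and hence $b$ is convex.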
22 Parameterised KKT conditions

Theorem 6.10
For Frisch's barrier function we have

$$\nabla\big(f(x) + \rho\, b(x)\big) = \nabla f(x) + \sum_{i=1}^m \frac{-\rho}{g_i(x)} \nabla g_i(x).$$

We have that $x$ is a stationary point of the barrier function if and only if its gradient is zero, or equivalently (taking $\lambda_i = -\rho / g_i(x) > 0$) if and only if there exists $\lambda \in \mathbb{R}^m$ such that:

$$x \in \mathbb{R}^n, \quad \lambda \in \mathbb{R}^m_+, \quad 0 = \nabla f(x) + \sum_{i=1}^m \lambda_i \nabla g_i(x), \quad g_i(x) < 0, \quad \lambda_i g_i(x) = -\rho \ \text{for all}\ i = 1,\dots,m.$$

This system is known as the parameterised KKT conditions.
23 Parameterised KKT conditions continued

Theorem 6.11
Suppose we have $\{\rho_k : k \in \mathbb{N}\} \subseteq \mathbb{R}_{++}$ with $\lim_{k\to\infty} \rho_k = 0$, and $(x^k, \lambda^k)$ are solutions to the parameterised KKT conditions (with $\rho = \rho_k$). Then $x^k \in F^\circ$ and $\lambda^k \in \mathbb{R}^m_+$, implying that

$$\psi(\lambda^k) \le \mathrm{val}(C) \le f(x^k) \quad \text{for all}\ k \in \mathbb{N}.$$

If $(\bar{x}, \bar{\lambda}) = \lim_{k\to\infty} (x^k, \lambda^k)$ for some $(\bar{x}, \bar{\lambda}) \in \mathbb{R}^n \times \mathbb{R}^m$, then $\bar{x}$ is a KKT point for (C) with multipliers $\bar{\lambda}$.

Recall that if (C) is convex, this implies that $(\bar{x}, \bar{\lambda})$ is a saddle point of the Lagrangian function, and thus $\bar{x}$ is an optimal solution to the primal problem, whilst $\bar{\lambda}$ is an optimal solution to the dual problem.
24 Example

Ex. 6.5
Consider the problems

$$\min_x \{x : x \ge 1\}, \qquad \min_x \{x : (x - 1)\exp(x^2) \le 0\}.$$

For each of these problems:
1. What is the global minimiser, denoted $x^*$, and the optimal value of this problem?
2. For Frisch's barrier function, determine the optimal value of the barrier problem as a function of $\rho$.
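As a partial check on the first problem (my own working): with $g(x) = 1 - x$, Frisch's barrier problem is $\min_{x > 1} \{x - \rho \ln(x - 1)\}$, and

$$\frac{d}{dx}\big(x - \rho \ln(x-1)\big) = 1 - \frac{\rho}{x-1} = 0 \iff x_\rho = 1 + \rho,$$

giving the optimal value $1 + \rho - \rho \ln \rho \to 1 = \mathrm{val}(C)$ as $\rho \to 0^+$. In contrast to the penalty method, $x_\rho \in F^\circ$ for every $\rho > 0$, and $f(x_\rho) = 1 + \rho$ is an upper bound on $\mathrm{val}(C)$, as in Lemma 6.8.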
25 Implementation

One implementation would be to solve the barrier problem once for $\rho > 0$ very small. Alternatively, we could note that we are only interested in the limit as $\rho \to 0$, and not in the solutions to the barrier problem for any fixed $\rho > 0$. We could thus use something like Newton's method to attempt to find a solution to the barrier problem, and in each iteration decrease $\rho$.
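A minimal sketch of this strategy for $\min_x\{x : x \ge 1\}$ from Ex. 6.5 (my own; a bounded scalar minimiser stands in for Newton's method, and halving $\rho$ is an arbitrary schedule):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rho = 1.0
for _ in range(20):
    # inner solve of the barrier problem  min_x  x - rho*ln(x - 1);
    # the bracket keeps iterates strictly inside the feasible region
    res = minimize_scalar(lambda y: y - rho * np.log(y - 1.0),
                          bounds=(1.0 + 1e-12, 10.0), method='bounded')
    rho /= 2.0                       # decrease rho towards the limit rho -> 0
print(res.x)                         # close to x* = 1, approached from inside F
```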