Algorithms for constrained local optimization


1 Algorithms for constrained local optimization. Fabio Schoen.

2 Feasible direction methods

3 Frank-Wolfe method. Let $X$ be a convex set and consider the problem $\min_{x \in X} f(x)$. Given $x_k \in X$, choosing a feasible direction $d_k$ corresponds to choosing a point $\bar x \in X$: $d_k = \bar x - x_k$. Steepest descent choice: $\min_{\bar x \in X} \nabla f(x_k)^T (\bar x - x_k)$ (a linear objective with convex constraints, usually easy to solve). Let $\hat x_k$ be an optimal solution of this problem.

4 Frank-Wolfe. If $\nabla f(x_k)^T (\hat x_k - x_k) = 0$, then $\nabla f(x_k)^T d \geq 0$ for every feasible direction $d$, so the first order necessary conditions hold. Otherwise, letting $d_k = \hat x_k - x_k$, this is a descent direction along which a step $\alpha_k \in (0, 1]$ might be chosen according to Armijo's rule.

5 Convergence of the Frank-Wolfe method. Under mild conditions the method converges to a point satisfying first order necessary conditions. However, it is usually extremely slow (convergence may be sublinear). It might find applications in very large scale problems in which solving the sub-problem for direction determination is very easy (e.g. when $X$ is a polytope).
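
Below is a minimal Frank-Wolfe sketch in Python (an illustration, not code from the slides): on a box $l \le x \le u$ the linear subproblem has a closed-form solution, and the step length is chosen by Armijo backtracking. The function names and the test problem are assumptions made for this example.

```python
# A minimal Frank-Wolfe sketch for min f(x) over a box l <= x <= u.
# On a box the linearized subproblem min_v grad^T v has the closed-form
# solution v_j = l_j if grad_j > 0 else u_j.
import numpy as np

def frank_wolfe_box(grad_f, f, l, u, x0, max_iter=500, tol=1e-8):
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad_f(x)
        v = np.where(g > 0, l, u)       # solve min_{l<=v<=u} g^T v
        d = v - x                       # feasible direction
        if g @ d > -tol:                # first order conditions (approx.)
            break
        # Armijo backtracking on alpha in (0, 1].
        alpha, beta, sigma = 1.0, 0.5, 1e-4
        while f(x + alpha * d) > f(x) + sigma * alpha * (g @ d):
            alpha *= beta
        x = x + alpha * d
    return x

# Example: min (x1-2)^2 + (x2-2)^2 over [0,1]^2; solution (1,1).
f = lambda x: (x[0] - 2)**2 + (x[1] - 2)**2
grad_f = lambda x: np.array([2*(x[0] - 2), 2*(x[1] - 2)])
print(frank_wolfe_box(grad_f, f, np.zeros(2), np.ones(2), np.array([0.5, 0.5])))
```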

6 Gradient Projection methods. Generic iteration: $x_{k+1} = x_k + \alpha_k (\bar x_k - x_k)$, where the direction $d_k = \bar x_k - x_k$ is obtained by finding $\bar x_k = [x_k - s_k \nabla f(x_k)]^+$, with $s_k \in \mathbb{R}_+$ and $[\cdot]^+$ representing projection onto the feasible set.

7 The method is slightly faster than Frank-Wolfe, with a linear convergence rate similar to that of (unconstrained) steepest descent. It might be applied when projection is relatively cheap, e.g. when the feasible set is a box. A point $x_k$ satisfies the first order necessary conditions $\nabla f(x_k)^T d \geq 0$ for every feasible direction $d$ iff $x_k = [x_k - s_k \nabla f(x_k)]^+$.
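
A minimal gradient projection sketch for a box feasible set, where the projection $[\cdot]^+$ is componentwise clipping and the fixed-point test doubles as the stationarity check stated above. The test problem is again an illustrative assumption, with $\alpha_k = 1$ (a line search on $\alpha_k$ could be added as on the previous slides).

```python
# Gradient projection on a box: xbar_k = [x_k - s_k grad f(x_k)]^+,
# where the projection is componentwise clipping.
import numpy as np

def projected_gradient_box(grad_f, l, u, x0, s=0.1, max_iter=1000, tol=1e-10):
    x = np.clip(x0.astype(float), l, u)
    for _ in range(max_iter):
        xbar = np.clip(x - s * grad_f(x), l, u)   # [x_k - s_k grad f(x_k)]^+
        if np.linalg.norm(xbar - x) < tol:        # fixed point <=> stationarity
            break
        x = xbar
    return x

# Same illustrative problem as above; converges to (1, 1).
grad_f = lambda x: np.array([2*(x[0] - 2), 2*(x[1] - 2)])
print(projected_gradient_box(grad_f, np.zeros(2), np.ones(2), np.array([0.5, 0.5])))
```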

8 Lagrange Multiplier Algorithms

9 Barrier Methods. Consider $\min f(x)$ subject to $g_j(x) \leq 0$, $j = 1,\ldots,r$. A barrier is a continuous function which tends to $+\infty$ whenever $x$ approaches the boundary of the feasible region. Examples of barrier functions: $B(x) = -\sum_j \log(-g_j(x))$ (logarithmic barrier); $B(x) = -\sum_j 1/g_j(x)$ (inverse barrier).

10 Barrier Method. Let $\varepsilon_k \downarrow 0$ and let $x_0$ be strictly feasible, i.e. $g_j(x_0) < 0 \ \forall j$. Then let $x_k = \arg\min_{x \in \mathbb{R}^n} \left( f(x) + \varepsilon_k B(x) \right)$. Proposition: every limit point of $\{x_k\}$ is a global minimum of the constrained optimization problem.

11 Analysis of Barrier methods. Special case: a single constraint (might be generalized). Let $\bar x$ be a limit point of $\{x_k\}$ (a global minimum). If the KKT conditions hold, then there exists a unique $\lambda \geq 0$ such that $\nabla f(\bar x) + \lambda \nabla g(\bar x) = 0$, with $\lambda g(\bar x) = 0$. The point $x_k$, being a solution of the barrier problem $\min f(x) + \varepsilon_k B(x)$ subject to $g(x) < 0$, satisfies $\nabla f(x_k) + \varepsilon_k \nabla B(x_k) = 0$.

12 ... If $B(x) = \varphi(g(x))$, this reads $\nabla f(x_k) + \varepsilon_k \varphi'(g(x_k)) \nabla g(x_k) = 0$. In the limit, for $k \to \infty$: $\lim_k \varepsilon_k \varphi'(g(x_k)) \nabla g(x_k) = \lambda \nabla g(\bar x)$. If $\lim_k g(x_k) < 0$, then $\varphi'(g(x_k)) \nabla g(x_k) \to K$ (finite) and $K \varepsilon_k \to 0$. If $\lim_k g(x_k) = 0$, then (thanks to the uniqueness of the Lagrange multiplier) $\lambda = \lim_k \varepsilon_k \varphi'(g(x_k))$.

13 Difficulties in Barrier Methods. Strong numerical instability: the condition number of the Hessian matrix grows as $\varepsilon_k \to 0$. Need for an initial strictly feasible point $x_0$. (Partial) remedy: $\varepsilon_k$ is decreased very slowly, and the solution of the $(k+1)$-th problem is obtained by starting an unconstrained optimization from $x_k$.

14 Example. $\min (x-1)^2 + (y-1)^2$ subject to $x + y \leq 1$. Logarithmic barrier problem: $\min (x-1)^2 + (y-1)^2 - \varepsilon_k \log(1 - x - y)$, with $x + y < 1$. Gradient: $\left( 2(x-1) + \frac{\varepsilon_k}{1-x-y},\ 2(y-1) + \frac{\varepsilon_k}{1-x-y} \right)$. Stationary points: $x = y = \frac{3}{4} \pm \frac{\sqrt{1+4\varepsilon_k}}{4}$ (only the $-$ solution is acceptable); as $\varepsilon_k \to 0$ it tends to the constrained minimum $(1/2, 1/2)$.
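
A quick numerical check of this example (a sketch using scipy, not code from the slides): minimize the barrier function for a decreasing sequence $\varepsilon_k$, warm-starting each solve at the previous solution, as prescribed on the previous slide; the iterates approach $(1/2, 1/2)$.

```python
# Barrier method on the example above, warm-started across eps values.
import numpy as np
from scipy.optimize import minimize

def barrier_obj(v, eps):
    x, y = v
    slack = 1.0 - x - y
    if slack <= 0:                      # infinite barrier outside the region
        return np.inf
    return (x - 1)**2 + (y - 1)**2 - eps * np.log(slack)

v = np.array([0.2, 0.2])               # strictly feasible starting point
for eps in [1.0, 0.1, 0.01, 0.001]:
    v = minimize(barrier_obj, v, args=(eps,), method='Nelder-Mead').x
    print(eps, v)                       # -> tends to (0.5, 0.5)
```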

15 Barrier methods and L.P. $\min c^T x$ subject to $Ax = b$, $x \geq 0$. Logarithmic barrier on $x \geq 0$: $\min c^T x - \varepsilon \sum_j \log x_j$ subject to $Ax = b$, $x > 0$.

16 The central path. The starting point is usually associated with $\varepsilon = \infty$ and is the unique solution of $\min -\sum_j \log x_j$ subject to $Ax = b$, $x > 0$ (the analytic center of the feasible region). The trajectory $x(\varepsilon)$ of solutions to the barrier problem is called the central path and leads to an optimal solution of the LP.

17 Penalty Methods. Penalized problem: $\min f(x) + \rho P(x)$, where $\rho > 0$ and $P(x) \geq 0$, with $P(x) = 0$ iff $x$ is feasible. Example: for $\min f(x)$ subject to $h_i(x) = 0$, $i = 1,\ldots,m$, a penalized problem might be $\min f(x) + \rho \sum_i h_i^2(x)$.

18 Convergence of the quadratic penalty method (for equality constrained problems). Let $P(x;\rho) = f(x) + \rho \sum_i h_i^2(x)$. Given $\rho_0 > 0$, $x_0 \in \mathbb{R}^n$, $k = 0$: let $x_{k+1} = \arg\min P(x;\rho_k)$ (found with an iterative method initialized at $x_k$); let $\rho_{k+1} > \rho_k$, $k := k + 1$. If each $x_{k+1}$ is a global minimizer of $P(\cdot;\rho_k)$ and $\rho_k \to \infty$, then every limit point of $\{x_k\}$ is a global optimum of the constrained problem.
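
A quadratic-penalty sketch following this scheme (illustrative code; the test problem $\min x_1 + x_2$ s.t. $x_1^2 + x_2^2 - 2 = 0$, whose solution is $(-1,-1)$, is an assumption made for the example):

```python
# Quadratic penalty loop: minimize P(x; rho_k) from the previous
# solution, then increase rho.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] + x[1]
h = lambda x: x[0]**2 + x[1]**2 - 2.0

def P(x, rho):
    return f(x) + rho * h(x)**2

x = np.array([0.0, 0.0])
rho = 1.0
for _ in range(8):
    x = minimize(P, x, args=(rho,)).x   # warm start at previous x_k
    rho *= 10.0
print(x, h(x))                          # -> approximately (-1, -1), h ~ 0
```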

19 Exact penalties. Exact penalties: there exists a finite penalty parameter value such that the optimal solution of the penalized problem is the optimal solution of the original one. $\ell_1$ penalty function: $P_1(x;\rho) = f(x) + \rho \sum_i |h_i(x)|$.
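
A tiny worked illustration of exactness (my example, not from the slides): for $\min x^2$ s.t. $x = 1$, the quadratic penalty $x^2 + \rho(x-1)^2$ has minimizer $\rho/(1+\rho)$, feasible only in the limit, while the $\ell_1$ penalty $x^2 + \rho|x-1|$ has minimizer exactly $x = 1$ for every $\rho \geq 2$ (the multiplier magnitude $|\lambda^\star| = 2$).

```python
# Quadratic vs l1 penalty on min x^2 s.t. x = 1.
from scipy.optimize import minimize_scalar

for rho in [1.0, 10.0, 100.0]:
    xq = minimize_scalar(lambda x: x**2 + rho * (x - 1)**2).x
    xl = minimize_scalar(lambda x: x**2 + rho * abs(x - 1)).x
    print(rho, xq, xl)   # xq = rho/(1+rho) -> 1 only in the limit;
                         # xl ~ 1 exactly once rho >= 2
```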

20 Exact penalties for inequality constrained problems: for $\min f(x)$ subject to $h_i(x) = 0$, $g_j(x) \leq 0$, the penalized problem is $P_1(x;\rho) = f(x) + \rho \sum_i |h_i(x)| + \rho \sum_j \max(0, g_j(x))$.

21 Augmented Lagrangian method. Given an equality constrained problem, reformulate it as: $\min f(x) + \frac{\rho}{2} \|h(x)\|^2$ subject to $h(x) = 0$. The Lagrange function of this problem is called the Augmented Lagrangian: $L_\rho(x;\lambda) = f(x) + \frac{\rho}{2} \|h(x)\|^2 + \lambda^T h(x)$.

22 Motivation. Consider $\min_x f(x) + \frac{\rho}{2} \|h(x)\|^2 + \lambda^T h(x)$. Then $\nabla_x L_\rho(x,\lambda) = \nabla f(x) + \sum_i \lambda_i \nabla h_i(x) + \rho \nabla h(x)^T h(x) = \nabla_x L(x,\lambda) + \rho \nabla h(x)^T h(x)$, and $\nabla^2_{xx} L_\rho(x,\lambda) = \nabla^2 f(x) + \sum_i \lambda_i \nabla^2 h_i(x) + \rho \sum_i h_i(x) \nabla^2 h_i(x) + \rho \nabla h(x)^T \nabla h(x) = \nabla^2_{xx} L(x,\lambda) + \rho \sum_i h_i(x) \nabla^2 h_i(x) + \rho \nabla h(x)^T \nabla h(x)$, where $\nabla h(x)$ denotes the Jacobian of $h$.

23 Motivation... Let $(x^\star,\lambda^\star)$ be an optimal (primal and dual) solution. Necessarily $\nabla_x L(x^\star,\lambda^\star) = 0$; moreover $h(x^\star) = 0$, thus $\nabla_x L_\rho(x^\star,\lambda^\star) = \nabla_x L(x^\star,\lambda^\star) + \rho \nabla h(x^\star)^T h(x^\star) = 0$: $(x^\star,\lambda^\star)$ is a stationary point of the augmented Lagrangian.

24 Motivation... Observe that at $(x^\star,\lambda^\star)$, since $h(x^\star) = 0$: $\nabla^2_{xx} L_\rho(x^\star,\lambda^\star) = \nabla^2_{xx} L(x^\star,\lambda^\star) + \rho \sum_i h_i(x^\star) \nabla^2 h_i(x^\star) + \rho \nabla h(x^\star)^T \nabla h(x^\star) = \nabla^2_{xx} L(x^\star,\lambda^\star) + \rho \nabla h(x^\star)^T \nabla h(x^\star)$. Assume that the sufficient optimality conditions hold: $v^T \nabla^2_{xx} L(x^\star,\lambda^\star) v > 0$ for all $v \neq 0$ such that $\nabla h(x^\star) v = 0$.

25 ... Let $v \neq 0$ with $\nabla h(x^\star) v = 0$. Then $v^T \nabla^2_{xx} L_\rho(x^\star,\lambda^\star) v = v^T \nabla^2_{xx} L(x^\star,\lambda^\star) v + \rho \|\nabla h(x^\star) v\|^2 = v^T \nabla^2_{xx} L(x^\star,\lambda^\star) v > 0$.

26 ... Let $v \neq 0$ with $\nabla h(x^\star) v \neq 0$. Then $v^T \nabla^2_{xx} L_\rho(x^\star,\lambda^\star) v = v^T \nabla^2_{xx} L(x^\star,\lambda^\star) v + \rho \|\nabla h(x^\star) v\|^2$, where the first term might be negative. However, the second term grows with $\rho > 0$: if $\rho \geq \bar\rho$, then $v^T \nabla^2_{xx} L_\rho(x^\star,\lambda^\star) v > 0$. Thus, if $\rho$ is large enough, the Hessian of the augmented Lagrangian is positive definite and $x^\star$ is a (strict) local minimum of $L_\rho(\cdot,\lambda^\star)$.
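
This suggests the method of multipliers: minimize $L_\rho(\cdot;\lambda)$ in $x$, then update $\lambda \leftarrow \lambda + \rho h(x)$ (the standard first order multiplier update for the $\rho/2$ convention used above), with $\rho$ kept moderate instead of driven to infinity. A sketch under these assumptions, on the same illustrative test problem as before:

```python
# Method of multipliers for min f(x) s.t. h(x) = 0.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] + x[1]
h = lambda x: x[0]**2 + x[1]**2 - 2.0

def L_aug(x, lam, rho):
    return f(x) + lam * h(x) + 0.5 * rho * h(x)**2

x, lam, rho = np.array([0.0, 0.0]), 0.0, 10.0
for _ in range(20):
    x = minimize(L_aug, x, args=(lam, rho)).x
    lam = lam + rho * h(x)              # first order multiplier update
print(x, lam)                           # -> (-1, -1), lam -> 0.5
```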

27 Inequality constraints. $\min f(x)$ subject to $g(x) \leq 0$. Nonlinear transformation of the inequalities into equalities: $\min_{x,s} f(x)$ subject to $g_j(x) + s_j^2 = 0$, $j = 1,\ldots,p$.

28 Given the problem $\min f(x)$ subject to $h_i(x) = 0$, $i = 1,\ldots,m$, and $g_j(x) \leq 0$, $j = 1,\ldots,p$, an Augmented Lagrangian problem might be defined as $\min_{x,z} L_\rho(x,z;\lambda,\mu) = \min_{x,z} f(x) + \lambda^T h(x) + \frac{\rho}{2} \|h(x)\|^2 + \sum_j \mu_j (g_j(x) + z_j^2) + \frac{\rho}{2} \sum_j (g_j(x) + z_j^2)^2$.

29 ... Consider the minimization with respect to the $z$ variables: $\min_z \sum_j \mu_j (g_j(x) + z_j^2) + \frac{\rho}{2} \sum_j (g_j(x) + z_j^2)^2 = \min_{u \geq 0} \sum_j \mu_j (g_j(x) + u_j) + \frac{\rho}{2} (g_j(x) + u_j)^2$ (a quadratic minimization over the nonnegative orthant). Solution: $u_j = \max\{0, \bar u_j\}$, where $\bar u_j$ is the unconstrained optimum: $\mu_j + \rho (g_j(x) + \bar u_j) = 0$.

30 ... Thus $u_j = \max\{0, -\frac{\mu_j}{\rho} - g_j(x)\}$. Substituting: $L_\rho(x;\lambda,\mu) = f(x) + \lambda^T h(x) + \frac{\rho}{2} \|h(x)\|^2 + \frac{1}{2\rho} \sum_j \left( (\max\{0, \mu_j + \rho g_j(x)\})^2 - \mu_j^2 \right)$. This is an Augmented Lagrangian for inequality constrained problems.
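
Minimizing this $L_\rho$ in $x$ and updating the multipliers via $\mu_j \leftarrow \max\{0, \mu_j + \rho g_j(x)\}$ (the update implied by the formula above) gives an augmented Lagrangian iteration for inequalities. A sketch on the illustrative problem $\min (x-2)^2$ s.t. $x \leq 1$, whose solution is $x^\star = 1$ with $\mu^\star = 2$:

```python
# Augmented Lagrangian iteration for a single inequality constraint.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2)**2
g = lambda x: x[0] - 1.0                # constraint g(x) <= 0

def L_aug(x, mu, rho):
    t = max(0.0, mu + rho * g(x))
    return f(x) + (t**2 - mu**2) / (2.0 * rho)

x, mu, rho = np.array([0.0]), 0.0, 5.0
for _ in range(30):
    x = minimize(L_aug, x, args=(mu, rho), method='Nelder-Mead').x
    mu = max(0.0, mu + rho * g(x))      # multiplier update
print(x, mu)                            # -> x ~ 1, mu ~ 2
```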

31 Sequential Quadratic Programming. $\min f(x)$ subject to $h_i(x) = 0$. Idea: apply Newton's method to solve the KKT equations. Lagrangian function: $L(x;\lambda) = f(x) + \sum_i \lambda_i h_i(x)$. Let $H(x) = [h_i(x)]$ be the vector of constraints and $\nabla H(x) = [\nabla h_i(x)^T]$ its Jacobian. KKT conditions: $F[x;\lambda] = \begin{bmatrix} \nabla f(x) + \nabla H^T(x) \lambda \\ H(x) \end{bmatrix} = 0$.

32 Newton step for SQP. Jacobian of the KKT system: $F'(x,\lambda) = \begin{bmatrix} \nabla^2_{xx} L(x;\lambda) & \nabla H^T(x) \\ \nabla H(x) & 0 \end{bmatrix}$. Newton step: $\begin{bmatrix} x_{k+1} \\ \lambda_{k+1} \end{bmatrix} = \begin{bmatrix} x_k \\ \lambda_k \end{bmatrix} + \begin{bmatrix} d_k \\ \delta_k \end{bmatrix}$, where $\begin{bmatrix} \nabla^2_{xx} L(x_k;\lambda_k) & \nabla H^T(x_k) \\ \nabla H(x_k) & 0 \end{bmatrix} \begin{bmatrix} d_k \\ \delta_k \end{bmatrix} = -\begin{bmatrix} \nabla f(x_k) + \nabla H^T(x_k) \lambda_k \\ H(x_k) \end{bmatrix}$.

33 Existence. The Newton step exists if: the Jacobian of the constraints $\nabla H(x_k)$ has full row rank; the Hessian $\nabla^2_{xx} L(x_k;\lambda_k)$ is positive definite. In this case the Newton step is the unique solution of $\nabla^2_{xx} L(x_k;\lambda_k) d_k + \nabla H^T(x_k) \delta_k + \nabla f(x_k) + \nabla H^T(x_k) \lambda_k = 0$, $\nabla H(x_k) d_k + H(x_k) = 0$.
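
A sketch of the pure Newton iteration on this KKT system with numpy (illustrative problem and names, no globalization, full steps): assemble the block matrix and solve for $(d_k, \delta_k)$.

```python
# Newton on the KKT system of min x1 + x2 s.t. x1^2 + x2^2 - 2 = 0,
# for which L(x; lambda) = f(x) + lambda * h(x).
import numpy as np

grad_f = lambda x: np.array([1.0, 1.0])
h = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0])
jac_h = lambda x: np.array([[2*x[0], 2*x[1]]])          # nabla H(x)
hess_L = lambda x, lam: lam[0] * 2.0 * np.eye(2)        # nabla^2_xx L

x, lam = np.array([-1.5, -0.5]), np.array([0.4])
for _ in range(10):
    A, m = jac_h(x), 1
    K = np.block([[hess_L(x, lam), A.T],
                  [A, np.zeros((m, m))]])               # KKT Jacobian F'
    rhs = -np.concatenate([grad_f(x) + A.T @ lam, h(x)])
    step = np.linalg.solve(K, rhs)
    x, lam = x + step[:2], lam + step[2:]               # full Newton step
print(x, lam)                                           # -> (-1, -1), 0.5
```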

34 Alternative view: SQP. Consider the quadratic program $\min_d f(x_k) + \nabla f(x_k)^T d + \frac{1}{2} d^T \nabla^2_{xx} L(x_k;\lambda_k) d$ subject to $\nabla H(x_k) d + H(x_k) = 0$. Its KKT conditions are $\nabla^2_{xx} L(x_k;\lambda_k) d + \nabla f(x_k) + \nabla H^T(x_k) \Lambda = 0$ and $\nabla H(x_k) d + H(x_k) = 0$. Under the same conditions as before this QP has a unique solution $d_k$, with Lagrange multipliers $\Lambda_k = \lambda_{k+1}$.

35 Alternative view: SQP. Alternatively, consider $\min_d L(x_k,\lambda_k) + \nabla_x L(x_k,\lambda_k)^T d + \frac{1}{2} d^T \nabla^2_{xx} L(x_k;\lambda_k) d$ subject to $\nabla H(x_k) d + H(x_k) = 0$. KKT conditions: $\nabla^2_{xx} L(x_k;\lambda_k) d + \nabla f(x_k) + \nabla H^T(x_k) \lambda_k + \nabla H^T(x_k) \Lambda = 0$, $\nabla H(x_k) d + H(x_k) = 0$. Under the same conditions as before this QP has a unique solution $d_k$, with Lagrange multipliers $\Lambda_k = \delta_k = \lambda_{k+1} - \lambda_k$.

36 Thus SQP can be seen as a method which minimizes a quadratic approximation of the Lagrangian subject to a first order approximation of the constraints.

37 Inequalities. If the original problem is $\min f(x)$ subject to $h_i(x) = 0$, $g_j(x) \leq 0$, then the SQP iteration solves $\min_d f_k + \nabla f(x_k)^T d + \frac{1}{2} d^T \nabla^2_{xx} L(x_k,\lambda_k) d$ subject to $\nabla h_i(x_k)^T d + h_i(x_k) = 0$ and $\nabla g_j(x_k)^T d + g_j(x_k) \leq 0$.

38 Filter Methods. Basic idea: $\min f(x)$ subject to $g(x) \leq 0$ can be considered as a problem with two objectives: minimize $f(x)$, and minimize the violation of the constraints $g(x) \leq 0$ (the second objective has priority over the first).

39 Filter. Given the problem $\min f(x)$ subject to $g_j(x) \leq 0$, $j = 1,\ldots,k$, let us consider the bi-criteria optimization problem: $\min f(x)$, $\min h(x)$, where $h(x) = \sum_j \max\{g_j(x), 0\}$.

40 Let {f k,h k,k = 1, 2,...} the observed values of f and h at points x 1,x 2,... A pair (f k,h k ) dominates a pair (f l,h l ) iff f k f l and h k h l A filter is a list of pairs which are non-dominated by the others Algorithms for constrained local optimization p. 4

41 [Figure: a filter in the $(h(x), f(x))$ plane.]

42 Trust region SQP. Consider a trust-region SQP method: $\min_d f_k + \nabla_x L(x_k;\lambda_k)^T d + \frac{1}{2} d^T \nabla^2_{xx} L(x_k;\lambda_k) d$ subject to $\nabla g_j(x_k)^T d + g_j(x_k) \leq 0$, $\|d\|_\infty \leq \rho$ (the $\infty$-norm is used here in order to keep the problem a QP). Traditional (unconstrained) trust region methods: if the current step is a failure, reduce the trust region; eventually the step will become a pure gradient step, hence convergence.

43 Trust region SQP. Here, shrinking the trust region radius might lead to infeasible QPs: if $x_k$ violates $g_j(x) \leq 0$, the linearized constraints $\nabla g_j(x_k)^T d + g_j(x_k) \leq 0$ may admit no solution with $\|d\|_\infty \leq \rho$ for small $\rho$. [Figure: an infeasible point $x_k$ and the region where $g_j(x) \leq 0$.]

44 Filter methods.

Data: x_0: starting point, rho; k = 0
while convergence criterion not satisfied do
    if the QP is infeasible then
        find x_{k+1} minimizing the constraint violation
    else
        solve the QP to get a step d_k; try setting x_{k+1} = x_k + d_k
    end
    if (f_{k+1}, h_{k+1}) is acceptable to the filter then
        accept x_{k+1} and add (f_{k+1}, h_{k+1}) to the filter;
        remove dominated points from the filter; possibly increase rho
    else
        reject the step; reduce rho
    end
    set k = k + 1
end

45 Comparison with other methods. [Figure: in the $(h(x), f(x))$ plane, steps rejected and accepted by the filter, compared with the acceptance region of a "classical" method.]
