Optimization Problems with Constraints - Introduction to Theory, Numerical Methods and Applications


1 Optimization Problems with Constraints - Introduction to Theory, Numerical Methods and Applications
Dr. Abebe Geletu
Ilmenau University of Technology
Department of Simulation and Optimal Processes (SOP)

2 Optimization problems with constraints

A general form of a constrained optimization problem is

    (NLP)    \min_x  f(x)
             s.t.    h_i(x) = 0,   i = 1, 2, ..., p;
                     g_j(x) \le 0,  j = 1, 2, ..., m,

where f, h_i, g_j : R^n -> R are at least once differentiable functions and x \in R^n.

Feasible set of NLP: x \in R^n is a feasible point of NLP if h_i(x) = 0, i = 1, 2, ..., p, and g_j(x) \le 0, j = 1, 2, ..., m. The set of all feasible points of NLP is the set

    F := { x \in R^n | h_i(x) = 0, i = 1, ..., p;  g_j(x) \le 0, j = 1, ..., m },

known as the feasible set of NLP.

3 Optimization problems with constraints ...

Sometimes it is convenient to write the constraints of NLP in the compact form

    h(x) = 0,   g(x) \le 0,

where

    h(x) = ( h_1(x), h_2(x), ..., h_p(x) )^T    and    g(x) = ( g_1(x), g_2(x), ..., g_m(x) )^T.

Example:

    (NLP1)   \min_x  { (1/2) x_1^2 + x_1 x_2^2 }
             s.t.    x_1 x_2^2 - 1 = 0,
                     -x_1^2 + x_2 \le 0,
                     -x_2 \le 0.

4 Optimization problems with constraints ...

In the example problem NLP1 above there is only one equality constraint, h_1(x) = x_1 x_2^2 - 1, and two inequality constraints, g_1(x) = -x_1^2 + x_2 \le 0 and g_2(x) = -x_2 \le 0.

Observe that the point x = (1, 1) is a feasible point, while the point (0, 0) is not feasible (infeasible); i.e., x = (0, 0) does not belong to the feasible set

    F = { (x_1, x_2) \in R^2 | x_1 x_2^2 - 1 = 0,  -x_1^2 + x_2 \le 0,  -x_2 \le 0 }.

Optimal solution: A point x^* \in R^n is an optimal solution of the constrained problem NLP if and only if
(i)  x^* is feasible for NLP, i.e. x^* \in F, and
(ii) f(x^*) \le f(x) for all x \in F.
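
To make the feasibility check concrete, here is a minimal Python/NumPy sketch (the function name and tolerance are our own illustrative choices) that evaluates the reconstructed constraints of NLP1 at the two points discussed above:

    import numpy as np

    def is_feasible_nlp1(x, tol=1e-8):
        """Feasibility for NLP1: h(x) = x1*x2^2 - 1 = 0,
        g1(x) = -x1^2 + x2 <= 0, g2(x) = -x2 <= 0."""
        x1, x2 = x
        h = x1 * x2**2 - 1.0
        g = np.array([-x1**2 + x2, -x2])
        return abs(h) <= tol and bool(np.all(g <= tol))

    print(is_feasible_nlp1([1.0, 1.0]))  # True:  (1, 1) is feasible
    print(is_feasible_nlp1([0.0, 0.0]))  # False: (0, 0) violates h(x) = 0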

5 Optimization problems with constraints ...

For the example problem NLP1, the point x^* = (1, 1) is an optimal solution. In general, it is not easy to determine an optimal solution of a constrained optimization problem like NLP.

Questions:
(Q1) How do we verify that a given point x \in R^n is an optimal solution of NLP? (This is about optimality criteria.)
(Q2) What methods are available to find an optimal solution of NLP? (This is about the computation of optimal solutions.)

Definition (Active Constraints): Let x be a feasible point of NLP. An inequality constraint g_i(x) \le 0 of NLP is said to be active at x if g_i(x) = 0. The set

    A(x) = { i \in {1, 2, ..., m} | g_i(x) = 0 }

denotes the index set of active constraints at x.
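
As a small companion to the definition, a sketch (using the NLP1 constraints as reconstructed above; the function name is ours) that computes the active index set A(x):

    import numpy as np

    def active_set_nlp1(x, tol=1e-8):
        """Indices (1-based) of the active inequality constraints of NLP1 at x."""
        x1, x2 = x
        g = np.array([-x1**2 + x2, -x2])   # constraints g_j(x) <= 0
        return {j + 1 for j, gj in enumerate(g) if abs(gj) <= tol}

    print(active_set_nlp1([1.0, 1.0]))  # {1}: only g_1 is active at (1, 1)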

6 Optimization with constraints ... Optimality criteria

For the example problem NLP1, the constraint g_1(x) = -x_1^2 + x_2 is active at the point x = (1, 1), while g_2(x) = -x_2 is not active at x. Hence, A(x) = {1}.

Descent direction: A vector d is a descent direction for the objective function f at the point x if f(x + d) < f(x); moving from the point x in the direction d decreases the function f. Any vector d that satisfies d^T \nabla f(x) < 0 is a descent direction. To verify this, recall the first-order Taylor approximation of f at the point x:

    f(x + d) \approx f(x) + d^T \nabla f(x).

It follows that

    f(x + d) - f(x) \approx d^T \nabla f(x) < 0,

so f(x + d) < f(x). Therefore, d is a descent direction.

NB: If d is a descent direction, then \tilde{d} = \alpha d, \alpha > 0, is also a descent direction.
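
The algebraic criterion d^T \nabla f(x) < 0 is easy to test numerically; a minimal sketch for the NLP1 objective (gradient computed by hand, names illustrative):

    import numpy as np

    def is_descent_direction(grad_f, x, d):
        """d is a descent direction at x if d^T grad f(x) < 0."""
        return float(np.dot(d, grad_f(x))) < 0.0

    # Objective of NLP1: f(x) = (1/2) x1^2 + x1 x2^2
    grad_f = lambda x: np.array([x[0] + x[1]**2, 2.0 * x[0] * x[1]])

    x = np.array([1.0, 1.0])                                       # grad f(x) = (2, 2)
    print(is_descent_direction(grad_f, x, np.array([-1.0, 0.0])))  # True
    print(is_descent_direction(grad_f, x, np.array([1.0, 1.0])))   # False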

7 Optimization with constraints ... Optimality criteria

Feasible direction: Let x be a feasible point (i.e. x \in F) and let d be a vector in R^n. If
(i)  h_i(x + d) = 0, i = 1, ..., p, and
(ii) g_j(x + d) \le 0, j = 1, ..., m,
then d is said to be a feasible direction for NLP at x.

Let x be a feasible point. If a vector d satisfies d^T \nabla h_i(x) = 0, i = 1, ..., p, and d^T \nabla g_j(x) < 0, j \in A(x), then \tilde{d} = \alpha d is a feasible direction at x, for some \alpha > 0. To verify this, use the first-order Taylor approximations at the point x:

    i = 1, ..., p:    h_i(x + \alpha d) \approx h_i(x) + \alpha d^T \nabla h_i(x) = 0,
                      hence h_i(x + \alpha d) = 0  (both terms on the right vanish);

    j \in A(x):       g_j(x + \alpha d) \approx g_j(x) + \alpha d^T \nabla g_j(x) < 0,
                      hence g_j(x + \alpha d) \le 0  (g_j(x) = 0 and d^T \nabla g_j(x) < 0);

    j \notin A(x):    g_j(x + \alpha d) \approx g_j(x) + \alpha d^T \nabla g_j(x) \le 0
                      for 0 < \alpha \le -g_j(x) / (d^T \nabla g_j(x)) if d^T \nabla g_j(x) > 0
                      (for the non-active constraints, g_j(x) < 0).

8 Optimization with constraints ... Optimality criteria

Optimality condition: If x^* is an optimal solution of NLP, then there is no vector d \in R^n which is both a descent direction for f and a feasible direction at x^*. That is, if x^* is an optimal solution of NLP, then the system of inequalities

    d^T \nabla f(x^*) < 0,
    d^T \nabla h_i(x^*) = 0,  i = 1, ..., p;
    d^T \nabla g_j(x^*) < 0,  j \in A(x^*),

equivalently

    [-\nabla f(x^*)]^T d > 0,
    [\nabla h_i(x^*)]^T d = 0,  i = 1, ..., p;
    [\nabla g_j(x^*)]^T d < 0,  j \in A(x^*),                                  (1)

has no solution d.

Farkas' Theorem: Given any set of vectors c, a_i, b_j \in R^n, i = 1, ..., p; j = 1, ..., l, exactly one of the following two systems has a solution:

    System I:   c^T d > 0,   a_i^T d = 0, i = 1, ..., p;   b_j^T d < 0, j = 1, ..., l.

    System II:  c = \sum_{i=1}^{p} \lambda_i a_i + \sum_{j=1}^{l} \mu_j b_j,   \mu \ge 0.

Now, if we let c = -\nabla f(x^*), a_i = \nabla h_i(x^*), i = 1, ..., p, and b_j = \nabla g_j(x^*), j \in A(x^*), then, since x^* is an optimal point, only System II has a solution.

9 Optimization with constraints ... Optimality criteria

Hence, if x^* is an optimal solution of NLP, then there exist vectors \lambda \in R^p, \lambda = (\lambda_1, \lambda_2, ..., \lambda_p), and \mu \in R^l, \mu = (\mu_1, \mu_2, ..., \mu_l) \ge 0, such that

    -\nabla f(x^*) = \sum_{i=1}^{p} \lambda_i \nabla h_i(x^*) + \sum_{j \in A(x^*)} \mu_j \nabla g_j(x^*),

where l = #A(x^*). If we set \mu_j = 0 for j \in {1, ..., m} \ A(x^*), then we can write

    -\nabla f(x^*) = \sum_{i=1}^{p} \lambda_i \nabla h_i(x^*) + \sum_{j=1}^{m} \mu_j \nabla g_j(x^*).

The Karush-Kuhn-Tucker (KKT) optimality conditions: If x^* is a minimum point of NLP, then there are \lambda^* \in R^p and \mu^* \in R^m, \mu^* \ge 0, such that the following hold true:

    \nabla f(x^*) + \sum_{i=1}^{p} \lambda_i^* \nabla h_i(x^*) + \sum_{j=1}^{m} \mu_j^* \nabla g_j(x^*) = 0    (optimality)
    h(x^*) = 0,   g(x^*) \le 0                                                                                 (feasibility)
    \mu^* \ge 0                                                                                                (nonnegativity)
    \mu_j^* g_j(x^*) = 0,  j = 1, ..., m                                                                       (complementarity)
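
In code, the KKT conditions become four residuals that should all vanish at a candidate solution; a generic sketch (jac_h and jac_g are assumed to return the Jacobian matrices of h and g, and the function name is ours):

    import numpy as np

    def kkt_residuals(grad_f, jac_h, jac_g, h, g, x, lam, mu):
        """KKT residuals at a candidate (x, lam, mu); all should be (close to)
        zero, and mu >= 0 must be checked separately."""
        stationarity = grad_f(x) + jac_h(x).T @ lam + jac_g(x).T @ mu
        feasibility_h = h(x)                     # equality violation
        feasibility_g = np.maximum(g(x), 0.0)    # violation of g(x) <= 0
        complementarity = mu * g(x)
        return stationarity, feasibility_h, feasibility_g, complementarity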

10 Optimization with constraints ... Optimality criteria

The function

    L(x, \lambda, \mu) = f(x) + \sum_{i=1}^{p} \lambda_i h_i(x) + \sum_{j=1}^{m} \mu_j g_j(x)

is called the Lagrange function associated with NLP.

Example: Solve the following optimization problem:

    (NLP2)   \min  f(x) = x_1^2 - x_2^2
             s.t.  x_1 + 2 x_2 + 1 = 0,
                   x_1 - x_2 \le 3.

Solution: Lagrange function:

    L(x, \lambda, \mu) = (x_1^2 - x_2^2) + \lambda (x_1 + 2 x_2 + 1) + \mu (x_1 - x_2 - 3).

11 Optimization with constraints ... Optimality criteria

Optimality condition:

    \partial L / \partial x_1 = 0  =>   2 x_1 + \lambda + \mu = 0    =>  x_1 = -(1/2)(\lambda + \mu)
    \partial L / \partial x_2 = 0  =>  -2 x_2 + 2\lambda - \mu = 0   =>  x_2 = (1/2)(2\lambda - \mu)

Feasibility:

    h(x) = 0  =>  x_1 + 2 x_2 + 1 = 0  =>  -(1/2)(\lambda + \mu) + (2\lambda - \mu) + 1 = 0  =>  \lambda = \mu - 2/3.

Complementarity:

    \mu g(x) = 0  =>  \mu (x_1 - x_2 - 3) = 0
                  =>  \mu ( -(1/2)(\lambda + \mu) - (1/2)(2\lambda - \mu) - 3 ) = 0
                  =>  \mu ( -(3/2)\lambda - 3 ) = 0
                  =>  \mu = 0  or  \lambda = -2.

If \lambda = -2, then \mu = \lambda + 2/3 = -4/3 < 0, which is not allowed. Hence, \mu = 0 is the only possibility. As a result, \lambda = \mu - 2/3 = -2/3.

12 Optimization with constraints ... Optimality criteria

Now, using \mu^* = 0 and \lambda^* = -2/3:

    x_1^* = -(1/2)(\lambda^* + \mu^*) = -(1/2)(-2/3 + 0) = 1/3,
    x_2^* = (1/2)(2\lambda^* - \mu^*) = (1/2)(2(-2/3) - 0) = -2/3.

The point x^* = (1/3, -2/3) is a candidate for optimality. Observe that the inequality constraint g(x) = x_1 - x_2 - 3 \le 0 is not active at x^*, since g(x^*) = 1/3 + 2/3 - 3 = -2 < 0.

In general, the KKT conditions above are only necessary optimality conditions.
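
The candidate point can be cross-checked numerically; a sketch using SciPy's SLSQP solver (note that SciPy's convention for inequality constraints is fun(x) >= 0, so g(x) <= 0 is passed with flipped sign):

    import numpy as np
    from scipy.optimize import minimize

    # NLP2: min x1^2 - x2^2  s.t.  x1 + 2 x2 + 1 = 0,  x1 - x2 <= 3
    constraints = [
        {"type": "eq",   "fun": lambda x: x[0] + 2 * x[1] + 1},
        {"type": "ineq", "fun": lambda x: 3 - x[0] + x[1]},
    ]
    res = minimize(lambda x: x[0]**2 - x[1]**2, x0=[0.0, 0.0],
                   method="SLSQP", constraints=constraints)
    print(res.x)  # approximately [ 0.3333, -0.6667 ], i.e. (1/3, -2/3)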

13 Optimization with constraints ... Optimality criteria

Sufficient optimality conditions: Suppose the functions f, h_i, i = 1, ..., p, and g_j, j = 1, ..., m, are twice differentiable, and let x^* be a feasible point of NLP. If there are Lagrange multipliers \lambda^* and \mu^* \ge 0 such that
(i)  the KKT conditions are satisfied at (x^*, \lambda^*, \mu^*), and
(ii) the Hessian of the Lagrangian

    \nabla_x^2 L(x^*, \lambda^*, \mu^*) = \nabla^2 f(x^*) + \sum_{i=1}^{p} \lambda_i^* \nabla^2 h_i(x^*) + \sum_{j=1}^{m} \mu_j^* \nabla^2 g_j(x^*)

is positive definite (i.e. d^T \nabla_x^2 L(x^*, \lambda^*, \mu^*) d > 0) for all d \ne 0 from the subspace

    S = { d \in R^n | d^T \nabla h_i(x^*) = 0, i = 1, ..., p;  d^T \nabla g_j(x^*) = 0 for j \in A(x^*) with \mu_j^* > 0 },

then x^* is an optimal solution of NLP.

For the problem NLP2, since g(x^*) < 0, we have

    S = { d \in R^2 | d^T \nabla h(x^*) = 0 } = { d \in R^2 | d_1 + 2 d_2 = 0 } = { d \in R^2 | d_1 = -2 d_2 },

and

    \nabla_x^2 L(x^*, \lambda^*, \mu^*) = [ 2   0 ]
                                          [ 0  -2 ].

Let d = (d_1, d_2) \in S, d \ne 0 (note that if d_1 \ne 0, then d_2 \ne 0 and vice versa). Then

    d^T \nabla_x^2 L(x^*, \lambda^*, \mu^*) d = 2 d_1^2 - 2 d_2^2 = 2 (-2 d_2)^2 - 2 d_2^2 = 6 d_2^2 > 0.

Therefore, x^* = (1/3, -2/3) is an optimal solution of NLP2.
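
The subspace condition can also be verified numerically by projecting the Hessian of the Lagrangian onto a null-space basis of the active constraint gradients; a sketch for NLP2 (assuming SciPy's null_space):

    import numpy as np
    from scipy.linalg import null_space

    H = np.array([[2.0, 0.0],
                  [0.0, -2.0]])   # Hessian of L at x* (h is linear, mu* = 0)
    A = np.array([[1.0, 2.0]])    # row: grad h(x*)^T
    Z = null_space(A)             # columns span S = {d : grad h(x*)^T d = 0}
    reduced_H = Z.T @ H @ Z       # here a 1x1 matrix with entry 6/5
    print(np.all(np.linalg.eigvalsh(reduced_H) > 0))  # True: x* is optimal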

14 Numerical methods for constrained optimization problems

Except in some trivial cases, solving a constrained optimization problem by hand is not an easy task. As a result, there are today several numerical methods to determine approximate solutions of constrained optimization problems. Some well-known gradient-based methods are:
(I)   the sequential quadratic programming (SQP) method,
(II)  the interior point method,
(III) penalty function methods,
(IV)  the augmented Lagrangian method, etc.

SQP-based algorithms are highly favored for the solution of constrained nonlinear optimization problems. In the SQP method, a sequence of iterates x^{k+1} = x^k + \alpha_k d^k is generated, where the search directions d^k are determined by solving quadratic optimization (programming) subproblems; a minimal sketch of one such step follows.
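
A minimal sketch of one SQP step for the equality-constrained case (B approximates the Hessian of the Lagrangian; all names are illustrative, and a practical method additionally needs a line search for \alpha_k and handling of inequality constraints):

    import numpy as np

    def sqp_step(grad_f, B, h, jac_h, x):
        """Search direction for min f(x) s.t. h(x) = 0: solve the KKT system of
        the local QP  min_d (1/2) d^T B d + grad_f(x)^T d  s.t. jac_h(x) d + h(x) = 0."""
        g, A, c = grad_f(x), np.atleast_2d(jac_h(x)), np.atleast_1d(h(x))
        n, p = len(g), len(c)
        K = np.block([[B, A.T], [A, np.zeros((p, p))]])
        sol = np.linalg.solve(K, np.concatenate([-g, -c]))
        return sol[:n], sol[n:]   # direction d^k and multiplier estimate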

15 Quadratic optimization (programming) problems

Quadratic optimization problems appear in several engineering applications, e.g. least-squares approximation (regression), model predictive control, and the SQP method for NLP.

(I) Quadratic programming problems with equality constraints:

    (QP)   \min_x  { f(x) = (1/2) x^T Q x + q^T x }
           s.t.    A x = b,

where:
- Q \in R^{n x n} is a symmetric (i.e. Q^T = Q) positive definite matrix;
- x \in R^n is the (unknown) optimization variable;
- q \in R^n is a given vector;
- A \in R^{m x n} and b \in R^m are given, i.e. there are m equality constraints;
- rank(A) = m.

16 Quadratic optimization ...

The Lagrange function for (QP):

    L(x, \lambda) = (1/2) x^T Q x + q^T x + \lambda^T (A x - b).

Karush-Kuhn-Tucker conditions:

    \partial L / \partial x = 0        =>  Q x + A^T \lambda = -q
    \partial L / \partial \lambda = 0  =>  A x = b.

This implies

    [ Q  A^T ] [ x       ]   [ -q ]
    [ A   0  ] [ \lambda ] = [  b ].

If Q is positive definite and rank(A) = m, then the matrix [ Q  A^T; A  0 ] is invertible, and

    x^* = -Q^{-1} q + Q^{-1} A^T ( A Q^{-1} A^T )^{-1} ( b + A Q^{-1} q ),
    \lambda^* = -( A Q^{-1} A^T )^{-1} ( b + A Q^{-1} q ).
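
The block KKT system translates directly into code; a minimal NumPy sketch that solves the linear system instead of forming Q^{-1} explicitly (the function name is ours):

    import numpy as np

    def solve_eq_qp(Q, q, A, b):
        """Solve min (1/2) x^T Q x + q^T x s.t. Ax = b via the KKT system
        [Q A^T; A 0] [x; lam] = [-q; b]  (Q positive definite, rank(A) = m)."""
        n, m = Q.shape[0], A.shape[0]
        K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
        sol = np.linalg.solve(K, np.concatenate([-q, b]))
        return sol[:n], sol[n:]   # x*, lambda*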

17 Quadratic optimization (programming) problems

Example: A material flow from two reservoirs R1 and R2 flows into a third reservoir R3. The rate of flow from R1 is measured to be F_1^M = 6.5 m^3/h and from R2 to be F_2^M = 14.7 m^3/h. As a result of these two inflows, the volume in R3 is measured to change at a rate of F_3^M = 19.8 m^3/h. The flow-rate measurement devices of the reservoirs are known to incur measurement errors with standard deviations \sigma_1 = 0.1 m^3/h, \sigma_2 = 0.2 m^3/h and \sigma_3 = 0.3 m^3/h, respectively. State an optimization problem to determine the actual flow rates.

Solution: Let F_1, F_2 and F_3 be the true flow rates. Note that F_i = F_i^M \pm \eta_i \sigma_i, i = 1, 2, 3, where \pm \eta_i \sigma_i = F_i - F_i^M represents the measurement error. To minimize all these measurement errors:

    \min_{(F_1, F_2, F_3)}  { ( (F_1 - F_1^M)/\sigma_1 )^2 + ( (F_2 - F_2^M)/\sigma_2 )^2 + ( (F_3 - F_3^M)/\sigma_3 )^2 }
    s.t.  F_1 + F_2 = F_3.
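
Writing the objective as (1/2) F^T Q F + q^T F with Q = diag(2/\sigma_i^2) and q_i = -2 F_i^M / \sigma_i^2 (constant terms dropped), the reconciliation problem is exactly an equality-constrained QP and can be solved with the solve_eq_qp sketch above:

    import numpy as np

    FM = np.array([6.5, 14.7, 19.8])    # measured flow rates [m^3/h]
    sigma = np.array([0.1, 0.2, 0.3])   # measurement standard deviations
    Q = np.diag(2.0 / sigma**2)         # from sum_i ((F_i - FM_i)/sigma_i)^2
    q = -2.0 * FM / sigma**2
    A = np.array([[1.0, 1.0, -1.0]])    # mass balance F1 + F2 - F3 = 0
    b = np.array([0.0])

    F, lam = solve_eq_qp(Q, q, A, b)
    print(F)  # approximately [ 6.4, 14.3, 20.7]: the 1.4 m^3/h imbalance is
              # distributed according to the measurement variances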

18 Quadratic optimization (programming) problems

(II) Quadratic programming problems with equality and inequality constraints:

    (QP)   \min_x  { f(x) = (1/2) x^T Q x + q^T x }
           s.t.    a_i^T x = b_i,    i = 1, 2, ..., p,
                   a_i^T x \le b_i,  i = p + 1, ..., m,

where Q is symmetric positive definite. If we set

    A = [ a_1, a_2, ..., a_p ]^T,             a = ( b_1, b_2, ..., b_p )^T,
    B = [ a_{p+1}, a_{p+2}, ..., a_m ]^T,     b = ( b_{p+1}, b_{p+2}, ..., b_m )^T,

then (QP) can be written compactly as:

    (QP)   \min_x  { f(x) = (1/2) x^T Q x + q^T x }
           s.t.    A x = a,
                   B x \le b.

In general, inequality-constrained QPs are not easy to solve analytically.

19 Quadratic optimization ... Active set method

Active set method strategy:
- Start from an arbitrary feasible point x^0.
- Find the next iterate by setting x^{k+1} = x^k + \alpha_k d^k, where \alpha_k is a step length and d^k is a search direction.

Questions: How to determine the search direction d^k? How to determine the step length \alpha_k?

(A) Determination of the search direction: At the current iterate x^k, determine the index set of active constraints

    A_k = { i | a_i^T x^k - b_i = 0, i = p + 1, ..., m } \cup { 1, 2, ..., p }.

21 Quadratic optimization ... Active set method

Solve the direction-finding problem

    \min_d  { (1/2) (x^k + d)^T Q (x^k + d) + q^T (x^k + d) }
    s.t.    a_i^T (x^k + d) = b_i,  i \in A_k.

Expand:

    (1/2) (x^k + d)^T Q (x^k + d) + q^T (x^k + d)
        = (1/2) d^T Q d + d^T Q x^k + (1/2) (x^k)^T Q x^k + q^T d + q^T x^k,

    a_i^T (x^k + d) = b_i  =>  a_i^T d = b_i - a_i^T x^k = 0.

Simplify these expressions and drop the constant terms to obtain:

    \min_d  { (1/2) d^T Q d + [ Q x^k + q ]^T d }
    s.t.    a_i^T d = 0,  i \in A_k.

Set A = [ ..., a_i, ... ]^T, i \in A_k. Solve this problem using the KKT conditions to obtain d^k.

22 Quadratic optimization ... Active set method

(B) Determination of the step length: Once d^k is obtained, compute \alpha_k as

    \alpha_k = \min { 1,  \min_{i \notin A_k, a_i^T d^k > 0} ( b_i - a_i^T x^k ) / ( a_i^T d^k ) }.
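
Putting the direction-finding subproblem and the ratio test together, a simplified sketch of one active-set iteration (it omits the multiplier test used to drop constraints from the working set, so it is not a complete solver; names are ours):

    import numpy as np

    def active_set_iteration(Q, q, A_rows, b_vals, x, Ak):
        """One iteration: solve the equality-constrained direction-finding QP on
        the working set Ak, then apply the ratio test to inactive constraints."""
        A = A_rows[sorted(Ak)]
        nA, n = A.shape[0], len(x)
        K = np.block([[Q, A.T], [A, np.zeros((nA, nA))]])
        rhs = np.concatenate([-(Q @ x + q), np.zeros(nA)])
        d = np.linalg.solve(K, rhs)[:n]

        alpha = 1.0                                  # step length
        for i in range(len(b_vals)):
            if i not in Ak and A_rows[i] @ d > 1e-12:
                alpha = min(alpha, (b_vals[i] - A_rows[i] @ x) / (A_rows[i] @ d))
        return x + alpha * d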

23 Quadratic optimization ... Active set method

Example (application problem): A parallel-flow heat exchanger of a given length is to be designed. The conducting tubes, all of the same diameter, are enclosed in an outer shell of diameter D = 1 m (see figure). The diameter of each of the conducting tubes should not exceed 60 mm. Determine the number of tubes and their diameter for the largest total surface area.
