8 Barrier Methods for Constrained Optimization
IOE 519: NLP, Winter 2012. © Marina A. Epelman

In this section we restrict our attention to instances of the constrained problem (P) that have inequality constraints only:
$$S = \{x \in \mathbb{R}^n : g_i(x) \le 0, \; i = 1, \ldots, m\}.$$
The algorithm we develop can easily be extended to problems that also have linear equality constraints, in the spirit of the previous section. Barrier methods can also be applied to problems with general equality constraints; however, an appropriate problem transformation is required, and different implementations of barrier methods deal with this issue differently.

Barrier methods (also known as interior point methods) are designed to solve (P) by instead solving a sequence of specially constructed unconstrained optimization problems. In a barrier method we presume that we are given a point $x^0$ that lies in the interior of the feasible region $S$, and we impose a very large cost on feasible points that lie ever closer to the boundary of $S$, thereby creating a barrier to exiting the feasible region.

A barrier function for (P) is any continuous function $b(x)$ defined on the interior of the feasible set $S$ such that $b(x) \to +\infty$ as $\max_{i=1,\ldots,m} g_i(x) \to 0^-$. Two common examples of barrier functions are
$$b(x) = -\sum_{i=1}^m \ln(-g_i(x)) \qquad \text{and} \qquad b(x) = -\sum_{i=1}^m \frac{1}{g_i(x)}.$$
The idea in a barrier method is to dissuade points $x$ from ever approaching the boundary of the feasible region. We consider solving
$$B(\mu): \quad \text{minimize } f(x) + \mu\, b(x) \quad \text{s.t. } g(x) < 0, \; x \in \mathbb{R}^n$$
for a sequence of parameters $\mu_k > 0$ such that $\mu_k \to 0$. Note that the problem $B(\mu)$ is unconstrained, with the inequalities $g(x) < 0$ defining the open domain of the objective function.

The next result concerns convergence of the barrier method. Recall that $N_\varepsilon(x)$ denotes the ball of radius $\varepsilon$ centered at the point $x$.

Theorem 8.1 (Barrier Convergence Theorem). Suppose $f(x)$, $g(x)$, and $b(x)$ are continuous functions, and suppose $B(\mu)$ has a solution for any $\mu > 0$. Let the sequence $\{\mu_k\}$ satisfy $0 < \mu_{k+1} < \mu_k$ and $\mu_k \to$
$0$ as $k \to \infty$, and let $x^k$ denote the exact solution to $B(\mu_k)$. Suppose there exists an optimal solution $x^*$ of (P) for which $N_\varepsilon(x^*) \cap \{x : g(x) < 0\} \ne \emptyset$ for every $\varepsilon > 0$. Then any limit point $\bar{x}$ of $\{x^k\}$ solves (P).

Proof: Let $\bar{x}$ be any limit point of the sequence $\{x^k\}$. Without loss of generality, assume that $\bar{x}$ is in fact the limit of the sequence. From the continuity of $f(x)$ and $g(x)$, $\lim_{k\to\infty} f(x^k) = f(\bar{x})$ and $g(\bar{x}) = \lim_{k\to\infty} g(x^k) \le 0$. Thus $\bar{x}$ is feasible for (P).

If $g(\bar{x}) < 0$, then $\lim_{k\to\infty} \mu_k b(x^k) = 0$. If $\bar{x}$ lies on the boundary of $S$, then, by our assumption, $\lim_{k\to\infty} b(x^k) = +\infty$. In either case, $\liminf_{k\to\infty} \mu_k b(x^k) \ge 0$,
which implies that
$$\liminf_{k\to\infty} \left\{ f(x^k) + \mu_k b(x^k) \right\} = f(\bar{x}) + \liminf_{k\to\infty} \mu_k b(x^k) \ge f(\bar{x}).$$
If $\bar{x}$ were not a global minimizer, there would exist a feasible vector $x^*$ with $f(x^*) < f(\bar{x})$ that can be approached arbitrarily closely through the interior of the feasible set. That is, there would exist a point $\hat{x}$ in the domain of the barrier function such that $f(\hat{x}) < f(\bar{x})$. By the definition of $x^k$, $f(x^k) + \mu_k b(x^k) \le f(\hat{x}) + \mu_k b(\hat{x})$ for all $k$, which by taking the limit as $k \to \infty$ implies that $f(\bar{x}) \le f(\hat{x})$. This is a contradiction, therefore proving that $\bar{x}$ is a global minimizer of (P).

Advantages of barrier methods:
- They avoid the combinatorial issues associated with identifying the active constraints.
- They converge under mild conditions.
- They provide estimates of the KKT/Lagrangian multipliers at the optimum (see below).
- For many barrier functions (including the two examples above), the barrier function is convex if (P) is a convex problem.

Barrier problems are typically solved via Newton's method, which converges rapidly if initialized close to the solution (of the barrier problem). Since for sufficiently small $\mu$ the solution of $B(\mu)$ is sufficiently close to the optimal solution of (P), one may wonder why one should bother with solving a sequence of barrier problems instead of solving just one, with a small value of $\mu$. However, the Hessian $\nabla^2_{xx}\,[f(x) + \mu\, b(x)]$ of the barrier objective tends to become ill-conditioned when $\mu$ is very small, making the problem difficult to solve. Solving a sequence of barrier problems provides a remedy: when the solution $x^{k-1}$ of the previous barrier problem is used as the initial point for solving the current barrier problem, the problem tends to be easier to solve if $\mu_k$ is not much smaller than $\mu_{k-1}$, since then, under reasonable assumptions, $x^{k-1}$ is close to $x^k$.

Suppose, more generally, that $b(x) = \varphi(g(x))$, where $\varphi(y) : \mathbb{R}^m \to \mathbb{R}$, and assume that $\varphi(y)$ is continuously differentiable for all $y < 0$.
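Before continuing, the scheme just described — solving $B(\mu_k)$ for a decreasing sequence $\mu_k$, warm-starting each subproblem at the previous solution — can be sketched in a few lines. The toy problem below is my own illustrative choice, not from the notes: minimize $x^2$ subject to $g(x) = 1 - x \le 0$, whose optimum $x^* = 1$ lies on the boundary. The subproblems use the logarithmic barrier and a domain-damped Newton step.

```python
import math

# Toy problem (illustrative, not from the notes): minimize f(x) = x^2
# subject to g(x) = 1 - x <= 0, i.e. x >= 1.  Log-barrier objective:
# F(x) = x^2 - mu * ln(x - 1), defined on the open domain x > 1.

def g(x):  return 1.0 - x
def dg(x): return -1.0

def newton_on_barrier(x, mu, iters=50):
    """Minimize f(x) + mu*b(x), b(x) = -ln(-g(x)), by damped 1-D Newton."""
    for _ in range(iters):
        grad = 2.0 * x - mu * dg(x) / g(x)        # F'(x)
        hess = 2.0 + mu * (dg(x) / g(x)) ** 2     # F''(x)
        step = grad / hess
        t = 1.0
        while g(x - t * step) >= 0.0:             # damp to stay strictly interior
            t *= 0.5
        x -= t * step
    return x

x = 2.0                           # strictly feasible start: g(2) = -1 < 0
for k in range(30):
    mu = 0.5 ** k                 # mu_k decreasing to 0
    x = newton_on_barrier(x, mu)  # warm start at the previous solution
print(round(x, 6))                # x(mu_k) approaches x* = 1 from the interior
```

Each subproblem is easy because the warm start $x^{k-1}$ is close to the minimizer of the next subproblem when $\mu_k$ is only modestly smaller than $\mu_{k-1}$, exactly as the discussion above suggests.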
Note that to be a valid barrier function, $\varphi(y)$ should satisfy $\frac{\partial \varphi(y)}{\partial y_i} \ge 0$ for values of $y < 0$ such that $y_i$ is close to $0$. Then
$$\nabla b(x) = \sum_{i=1}^m \frac{\partial \varphi(g(x))}{\partial y_i} \nabla g_i(x),$$
and if $x^k$ solves $B(\mu_k)$, then $\nabla f(x^k) + \mu_k \nabla b(x^k) = 0$, that is,
$$\nabla f(x^k) + \mu_k \sum_{i=1}^m \frac{\partial \varphi(g(x^k))}{\partial y_i} \nabla g_i(x^k) = 0. \qquad (7)$$
Let us define
$$u_i^k = \mu_k \frac{\partial \varphi(g(x^k))}{\partial y_i}. \qquad (8)$$
Then (7) becomes
$$\nabla f(x^k) + \sum_{i=1}^m u_i^k \nabla g_i(x^k) = 0. \qquad (9)$$
Therefore we can interpret $u^k$ as a sort of vector of Karush-Kuhn-Tucker multipliers. In fact, we have:

Lemma 8.2 (Karush-Kuhn-Tucker Multipliers in Barrier Methods). Let (P) satisfy the conditions of the Barrier Convergence Theorem. Suppose $\varphi(y)$ is continuously differentiable, and let $u^k$ be defined by (8). Then if $x^k \to \bar{x}$, and $\bar{x}$ satisfies the linear independence condition for the gradient vectors of the active constraints, then $u^k \to \bar{u}$, where $\bar{u}$ is a vector of Karush-Kuhn-Tucker multipliers for the optimal solution $\bar{x}$ of (P).

Proof: Let $x^k \to \bar{x}$ and let $I = \{i : g_i(\bar{x}) = 0\}$ and $N = \{i : g_i(\bar{x}) < 0\}$. For all $i \in N$, $u_i^k = \mu_k \frac{\partial \varphi(g(x^k))}{\partial y_i} \to 0$ as $k \to \infty$, since $\mu_k \to 0$, $g_i(x^k) \to g_i(\bar{x}) < 0$, and $\frac{\partial \varphi(g(\bar{x}))}{\partial y_i}$ is finite (since $\varphi(\cdot)$ is continuously differentiable). Also, $u_i^k \ge 0$ for all $i \in I$ and $k$ sufficiently large.

Suppose $u^k \to \bar{u}$ as $k \to \infty$. Then $\bar{u} \ge 0$, and $\bar{u}_i = 0$ for all $i \in N$. From the continuity of all functions involved, (9) implies that
$$\nabla f(\bar{x}) + \sum_{i=1}^m \bar{u}_i \nabla g_i(\bar{x}) = 0, \qquad \bar{u} \ge 0, \qquad \bar{u}^T g(\bar{x}) = 0.$$
Thus $\bar{u}$ is a vector of Karush-Kuhn-Tucker multipliers. It remains to show that $u^k \to \bar{u}$ for some unique $\bar{u}$.

Suppose $\{u^k\}_{k=1}^\infty$ has no accumulation point. Then $\|u^k\| \to \infty$. But then define
$$v^k = \frac{u^k}{\|u^k\|},$$
so that $\|v^k\| = 1$ for all $k$, and hence the sequence $\{v^k\}_{k=1}^\infty$ has some accumulation point $\bar{v}$ satisfying $\|\bar{v}\| = 1$. Notice that for all $i \in N$ we have $\bar{v}_i = 0$. Using (9), we have
$$\sum_{i=1}^m v_i^k \nabla g_i(x^k) = \sum_{i=1}^m \frac{u_i^k}{\|u^k\|} \nabla g_i(x^k) = -\frac{\nabla f(x^k)}{\|u^k\|}$$
for all $k$. As $k \to \infty$, we have $x^k \to \bar{x}$, $v^k \to \bar{v}$ (along a subsequence, with $\bar{v}_i = 0$ for $i \in N$ and $\|\bar{v}\| = 1$), and $\|u^k\| \to \infty$, and so the above becomes
$$\sum_{i \in I} \bar{v}_i \nabla g_i(\bar{x}) = 0,$$
which violates the linear independence condition. Therefore $\{u^k\}$ is a bounded sequence, and so has at least one accumulation point.

Now suppose that $\{u^k\}$ has two accumulation points, $\tilde{u}$ and $\bar{u}$. Note that $\bar{u}_i = 0$ and $\tilde{u}_i = 0$ for $i \in N$, and so
$$\sum_{i \in I} \bar{u}_i \nabla g_i(\bar{x}) = -\nabla f(\bar{x}) = \sum_{i \in I} \tilde{u}_i \nabla g_i(\bar{x}),$$
so that
$$\sum_{i \in I} (\bar{u}_i - \tilde{u}_i) \nabla g_i(\bar{x}) = 0.$$
But by the linear independence condition, $\bar{u}_i - \tilde{u}_i = 0$ for all $i \in I$, and so $\bar{u}_i = \tilde{u}_i$. This then implies that $\bar{u} = \tilde{u}$.
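To make the multiplier estimates of Lemma 8.2 concrete, here is a small numeric check on a toy problem of my own (not from the notes): minimize $x^2$ subject to $g(x) = 1 - x \le 0$, whose KKT multiplier at $x^* = 1$ is $u^* = 2$. For the logarithmic barrier $b(x) = -\ln(-g(x))$ we have $\varphi(y) = -\ln(-y)$ and $\varphi'(y) = -1/y$, so (8) gives $u^k = -\mu_k / g(x^k)$.

```python
import math

# Toy problem (illustrative): min x^2 s.t. g(x) = 1 - x <= 0.  The KKT
# conditions give 2x - u = 0 at x* = 1, so the multiplier is u* = 2.

def barrier_minimizer(mu):
    # Closed-form minimizer of x^2 - mu*ln(x - 1): setting the derivative
    # to zero gives 2x(x - 1) = mu, i.e. the root of 2x^2 - 2x - mu = 0
    # with x > 1.
    return (1.0 + math.sqrt(1.0 + 2.0 * mu)) / 2.0

us = []
for k in range(1, 9):
    mu = 10.0 ** (-k)
    xk = barrier_minimizer(mu)
    us.append(-mu / (1.0 - xk))    # u^k = mu_k * phi'(g(x^k)) = -mu_k / g(x^k)

print([round(u, 4) for u in us])   # the estimates approach u* = 2
```

Since the multiplier estimates come for free from the barrier subproblems, an implementation gets approximate dual information at every outer iteration.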
9 Penalty Methods for Constrained Optimization

Consider the constrained optimization problem (P):
$$\text{(P)}: \quad \min_x f(x) \quad \text{s.t. } g_i(x) \le 0, \; i = 1, \ldots, m, \quad x \in \mathbb{R}^n.$$
We will write $g(x) := (g_1(x), \ldots, g_m(x))^T$ for convenience.

Definition: A function $p(x) : \mathbb{R}^n \to \mathbb{R}$ is called a penalty function for (P) if $p(x)$ satisfies:
- $p(x) = 0$ if $g(x) \le 0$, and
- $p(x) > 0$ if $g(x) \not\le 0$.

Example: $p(x) = \sum_{i=1}^m (\max\{0, g_i(x)\})^2$.

We then consider solving the following penalty program:
$$P(c): \quad \min_x f(x) + c\, p(x) \quad \text{s.t. } x \in \mathbb{R}^n$$
for an increasing sequence of constants $c \to +\infty$. Note that in the program $P(c)$ we are assigning a penalty to the violated constraints. The scalar quantity $c$ is called the penalty parameter.

Let $c_k \ge 0$, $k = 1, \ldots, \infty$, be a sequence of penalty parameters satisfying $c_{k+1} > c_k$ for all $k$ and $\lim_{k\to\infty} c_k = +\infty$. Let $q(c, x) = f(x) + c\, p(x)$, let $x^k$ be the exact solution to the program $P(c_k)$, and let $x^*$ denote any optimal solution of (P). The following lemma presents some basic properties of penalty methods.

Lemma 9.1 (Penalty Lemma).
(i) $q(c_k, x^k) \le q(c_{k+1}, x^{k+1})$
(ii) $p(x^k) \ge p(x^{k+1})$
(iii) $f(x^k) \le f(x^{k+1})$
(iv) $f(x^*) \ge q(c_k, x^k) \ge f(x^k)$

Proof: (i) We have
$$q(c_{k+1}, x^{k+1}) = f(x^{k+1}) + c_{k+1} p(x^{k+1}) \ge f(x^{k+1}) + c_k p(x^{k+1}) \ge f(x^k) + c_k p(x^k) = q(c_k, x^k).$$
(ii) We have
$$f(x^k) + c_k p(x^k) \le f(x^{k+1}) + c_k p(x^{k+1})$$
and
$$f(x^{k+1}) + c_{k+1} p(x^{k+1}) \le f(x^k) + c_{k+1} p(x^k).$$
Adding these two inequalities and rearranging gives $(c_{k+1} - c_k)\, p(x^k) \ge (c_{k+1} - c_k)\, p(x^{k+1})$, whereby $p(x^k) \ge p(x^{k+1})$.

(iii) We have $f(x^{k+1}) + c_k p(x^{k+1}) \ge f(x^k) + c_k p(x^k)$. But $p(x^k) \ge p(x^{k+1})$ from (ii), which implies that $f(x^{k+1}) \ge f(x^k)$.

(iv) $f(x^k) \le f(x^k) + c_k p(x^k) \le f(x^*) + c_k p(x^*) = f(x^*)$.

The next result concerns convergence of the penalty method.

Theorem 9.2 (Penalty Convergence Theorem). Suppose that $f(x)$, $g(x)$, and $p(x)$ are continuous functions. Let $\{x^k\}$, $k = 1, \ldots, \infty$, be a sequence of solutions to $P(c_k)$. Then any limit point $\bar{x}$ of $\{x^k\}$ solves (P).

Proof: Let $\bar{x}$ be a limit point of $\{x^k\}$. From the continuity of the functions involved, $\lim_{k\to\infty} f(x^k) = f(\bar{x})$. Also, from the Penalty Lemma,
$$q^* := \lim_{k\to\infty} q(c_k, x^k) \le f(x^*).$$
Thus
$$\lim_{k\to\infty} c_k\, p(x^k) = \lim_{k\to\infty} \left[ q(c_k, x^k) - f(x^k) \right] = q^* - f(\bar{x}).$$
But $c_k \to \infty$, which implies from the above that $\lim_{k\to\infty} p(x^k) = 0$. Therefore, from the continuity of $p(x)$ and $g(x)$, $p(\bar{x}) = 0$, and so $g(\bar{x}) \le 0$; that is, $\bar{x}$ is a feasible solution of (P). Finally, from the Penalty Lemma, $f(x^k) \le f(x^*)$ for all $k$, and so $f(\bar{x}) \le f(x^*)$, which implies that $f(\bar{x}) = f(x^*)$, and therefore $\bar{x}$ is an optimal solution of (P).

An often-used class of penalty functions is
$$p(x) = \sum_{i=1}^m [\max\{0, g_i(x)\}]^q, \quad \text{where } q \ge 1. \qquad (10)$$
We note the following:
- If $q = 1$, $p(x)$ in (10) is called the linear penalty function. This function may not be differentiable at points where $g_i(x) = 0$ for some $i$.
- Setting $q = 2$ is the most common form of (10) used in practice, and is called the quadratic penalty function.

Notice that the penalty function $p(x)$ is often a function only of $g^+(x)$, where $g_i^+(x) = \max\{0, g_i(x)\}$ (the nonnegative part of $g_i(x)$), $i = 1, \ldots, m$. Then we can write $p(x) = \gamma(g^+(x))$, where $\gamma(y)$ is a function of $y \in \mathbb{R}^m_+$. Two examples of this type of penalty function are $\gamma(y) = \sum_{i=1}^m y_i$,
which corresponds to the linear penalty function, and $\gamma(y) = \sum_{i=1}^m y_i^2$, which corresponds to the quadratic penalty function.

Note that even if $\gamma(y)$ is continuously differentiable, $p(x)$ might not be continuously differentiable, since $g^+(x)$ is not differentiable at points $x$ where $g_i^+(x) = 0$ for some $i$. However, if we assume that
$$\frac{\partial \gamma(y)}{\partial y_i} = 0 \ \text{ at } \ y_i = 0, \quad i = 1, \ldots, m, \qquad (11)$$
then $p(x)$ is differentiable whenever the functions $g_i(x)$ are differentiable, $i = 1, \ldots, m$, and we can write
$$\nabla p(x) = \sum_{i=1}^m \frac{\partial \gamma(g^+(x))}{\partial y_i} \nabla g_i(x). \qquad (12)$$
Now let $x^k$ solve $P(c_k)$. Then $x^k$ will satisfy
$$\nabla f(x^k) + c_k \nabla p(x^k) = 0,$$
that is,
$$\nabla f(x^k) + c_k \sum_{i=1}^m \frac{\partial \gamma(g^+(x^k))}{\partial y_i} \nabla g_i(x^k) = 0.$$
Let us define
$$u_i^k = c_k \frac{\partial \gamma(g^+(x^k))}{\partial y_i}. \qquad (13)$$
Then
$$\nabla f(x^k) + \sum_{i=1}^m u_i^k \nabla g_i(x^k) = 0,$$
and so we can interpret $u^k$ as a sort of vector of Karush-Kuhn-Tucker multipliers. In fact, we have:

Lemma 9.3. Suppose $\gamma(y)$ is monotone in $y$, is continuously differentiable, and satisfies (11), and that $f(x)$ and $g(x)$ are differentiable. Let $u^k$ be defined by (13). Then if $x^k \to \bar{x}$, and $\bar{x}$ satisfies the linear independence condition for the gradient vectors of the active constraints, then $u^k \to \bar{u}$, where $\bar{u}$ is a vector of Karush-Kuhn-Tucker multipliers for the optimal solution $\bar{x}$ of (P).

Proof: From the Penalty Convergence Theorem, $\bar{x}$ is an optimal solution of (P). Let $I = \{i : g_i(\bar{x}) = 0\}$ and $N = \{i : g_i(\bar{x}) < 0\}$. For $i \in N$, $g_i(x^k) < 0$ for all $k$ sufficiently large, so $u_i^k = 0$ for all $k$ sufficiently large, whereby $\bar{u}_i = 0$ for $i \in N$. From (13) and the definition of a penalty function, it follows that $u_i^k \ge 0$ for $i \in I$, for all $k$ sufficiently large.

Suppose $u^k \to \bar{u}$ as $k \to \infty$. Then $\bar{u}_i = 0$ for $i \in N$. From the continuity of all functions involved,
$$\nabla f(x^k) + \sum_{i=1}^m u_i^k \nabla g_i(x^k) = 0$$
implies
$$\nabla f(\bar{x}) + \sum_{i=1}^m \bar{u}_i \nabla g_i(\bar{x}) = 0.$$
From the above remarks, we also have $\bar{u} \ge 0$ and $\bar{u}_i = 0$ for all $i \in N$. Thus $\bar{u}$ is a vector of Karush-Kuhn-Tucker multipliers. It therefore remains to show that $u^k \to \bar{u}$ for some unique $\bar{u}$, which is done along the same lines as in Lemma 8.2.

Remark: The quadratic penalty function satisfies condition (11), but the linear penalty function does not.

9.1 Exact Penalty Methods

The idea in an exact penalty method is to choose a penalty function $p(x)$ and a constant $c$ so that the optimal solution $\bar{x}$ of $P(c)$ is also an optimal solution of the original problem (P).

Theorem 9.4. Suppose (P) is a convex program for which the Karush-Kuhn-Tucker conditions are necessary. Suppose that $p(x) := \sum_{i=1}^m (g_i(x))^+$. Then as long as $c$ is chosen sufficiently large, the sets of optimal solutions of $P(c)$ and (P) coincide. In fact, it suffices to choose $c > \max_i \{u_i^*\}$, where $u^*$ is a vector of Karush-Kuhn-Tucker multipliers.

Proof: Suppose $\hat{x}$ solves (P). For any $x \in \mathbb{R}^n$ we have
$$q(c, x) = f(x) + c \sum_{i=1}^m (g_i(x))^+ \ge f(x) + \sum_{i=1}^m (u_i^* g_i(x))^+ \ge f(x) + \sum_{i=1}^m u_i^* g_i(x)$$
$$\ge f(x) + \sum_{i=1}^m u_i^* \left( g_i(\hat{x}) + \nabla g_i(\hat{x})^T (x - \hat{x}) \right) = f(x) + \sum_{i=1}^m u_i^* \nabla g_i(\hat{x})^T (x - \hat{x})$$
$$= f(x) - \nabla f(\hat{x})^T (x - \hat{x}) \ge f(\hat{x}) = f(\hat{x}) + c \sum_{i=1}^m (g_i(\hat{x}))^+ = q(c, \hat{x}).$$
(Here the third inequality uses the convexity of each $g_i$, the next equality uses complementary slackness $u_i^* g_i(\hat{x}) = 0$, the one after it uses the KKT condition $\nabla f(\hat{x}) + \sum_i u_i^* \nabla g_i(\hat{x}) = 0$, and the final inequality uses the convexity of $f$.) Thus $q(c, \hat{x}) \le q(c, x)$ for all $x$, and therefore $\hat{x}$ solves $P(c)$.

Next suppose that $\bar{x}$ solves $P(c)$. Then if $\hat{x}$ solves (P), we have
$$f(\bar{x}) + c \sum_{i=1}^m (g_i(\bar{x}))^+ \le f(\hat{x}) + c \sum_{i=1}^m (g_i(\hat{x}))^+ = f(\hat{x}),$$
and so
$$f(\bar{x}) \le f(\hat{x}) - c \sum_{i=1}^m (g_i(\bar{x}))^+. \qquad (14)$$
However, if $\bar{x}$ is not feasible for (P), then
$$f(\bar{x}) \ge f(\hat{x}) + \nabla f(\hat{x})^T (\bar{x} - \hat{x}) = f(\hat{x}) - \sum_{i=1}^m u_i^* \nabla g_i(\hat{x})^T (\bar{x} - \hat{x})$$
$$\ge f(\hat{x}) + \sum_{i=1}^m u_i^* \left( g_i(\hat{x}) - g_i(\bar{x}) \right) = f(\hat{x}) - \sum_{i=1}^m u_i^* g_i(\bar{x}) > f(\hat{x}) - c \sum_{i=1}^m (g_i(\bar{x}))^+,$$
which contradicts (14). Thus $\bar{x}$ is feasible for (P). That being the case, (14) gives
$$f(\bar{x}) \le f(\hat{x}) - c \sum_{i=1}^m (g_i(\bar{x}))^+ = f(\hat{x}),$$
and so $\bar{x}$ solves (P).

9.2 Penalty Methods for Inequality and Equality Constraints

The presentation of penalty methods so far has assumed that the problem (P) has no equality constraints. If the problem has equality constraints $h(x) = 0$, we could convert them into inequality constraints by writing $h(x) \le 0$ and $-h(x) \le 0$, but such a conversion usually violates good judgment in that it unnecessarily complicates the problem. Furthermore, it can cause the linear independence condition to be automatically violated at every feasible solution. Therefore, let us instead consider the constrained optimization problem (P) with both inequality and equality constraints:
$$\text{(P)}: \quad \min_x f(x) \quad \text{s.t. } g(x) \le 0, \; h(x) = 0, \quad x \in \mathbb{R}^n,$$
where $g(x)$ and $h(x)$ are vector-valued functions; that is, $g(x) := (g_1(x), \ldots, g_m(x))^T$ and $h(x) := (h_1(x), \ldots, h_l(x))^T$ for notational convenience.

Definition: A function $p(x) : \mathbb{R}^n \to \mathbb{R}$ is called a penalty function for (P) if $p(x)$ satisfies:
- $p(x) = 0$ if $g(x) \le 0$ and $h(x) = 0$, and
- $p(x) > 0$ if $g(x) \not\le 0$ or $h(x) \ne 0$.

The main class of penalty functions for this general problem is of the form
$$p(x) = \sum_{i=1}^m [\max\{0, g_i(x)\}]^q + \sum_{i=1}^l |h_i(x)|^q, \quad \text{where } q \ge 1.$$
All of the results of this section extend naturally to problems with equality constraints and penalty functions of the above form. For example, in the analogue of Theorem 9.4, it suffices to choose $c > \max\{u_1^*, \ldots, u_m^*, |v_1^*|, \ldots, |v_l^*|\}$.

9.3 Augmented Lagrangian Methods

A modification of the classical penalty methods involves explicitly incorporating the KKT multipliers into the method.
This complicates the algorithm, but has the potential of ameliorating the ill-conditioning encountered in the penalty method and of improving the rate of convergence.
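To see the ill-conditioning concretely, consider a pure quadratic-penalty treatment of the equality-constrained example analyzed below, $\min \tfrac{1}{2}(x_1^2 + x_2^2)$ s.t. $x_1 = 1$. The closed forms in this sketch are easily derived by hand; the code is purely illustrative.

```python
# Quadratic penalty for min (1/2)(x1^2 + x2^2) s.t. h(x) = x1 - 1 = 0:
# minimize (1/2)(x1^2 + x2^2) + (c/2)(x1 - 1)^2.  Setting the gradient
# to zero, x1 + c*(x1 - 1) = 0, gives the unique minimizer
# x1 = c/(c + 1), x2 = 0.

def penalty_minimizer(c):
    return c / (c + 1.0), 0.0

for c in (1.0, 10.0, 100.0, 1000.0):
    x1, x2 = penalty_minimizer(c)
    viol = abs(x1 - 1.0)        # equals 1/(c+1): vanishes only as c -> infinity
    cond = 1.0 + c              # condition number of the Hessian diag(1+c, 1)
    print(c, round(x1, 4), round(viol, 4), cond)
```

The constraint violation shrinks only like $1/(c+1)$, so $c$ must be driven to infinity, while the subproblem Hessian's condition number $1 + c$ blows up in tandem; the augmented Lagrangian removes this tension by updating a multiplier estimate instead of inflating $c$.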
Consider a problem of the form
$$\min f(x) \quad \text{s.t. } h(x) = 0.$$
The Lagrangian of this problem is defined to be
$$L(x, v) = f(x) + v^T h(x).$$
If the KKT conditions are necessary at a local minimizer $x^*$, then for $v = v^*$ (the vector of KKT multipliers) $x^*$ is a stationary point of the Lagrangian, but usually not a minimizer of it. It is, however, a minimizer of the problem
$$\min f(x) + (v^*)^T h(x) \quad \text{s.t. } h(x) = 0.$$
Applying the penalty approach to the latter problem, consider
$$\min_x A(x, v, c) = f(x) + v^T h(x) + \frac{c}{2}\, h(x)^T h(x)$$
as a means of obtaining $x^*$. The function $A(x, v, c)$ is referred to as the augmented Lagrangian. Intuitively, a minimizer of $A(x, v, c)$ is close to the solution $x^*$ if $v$ is close to $v^*$ and $c > 0$ is sufficiently large, or if $c$ is very large.

Example: Consider
$$\min f(x) = \tfrac{1}{2}(x_1^2 + x_2^2) \quad \text{subject to } x_1 = 1.$$
The optimal solution is $x^* = (1, 0)^T$ with $v^* = -1$. The augmented Lagrangian is
$$\min_x A(x, v, c) = \tfrac{1}{2}(x_1^2 + x_2^2) + v(x_1 - 1) + \frac{c}{2}(x_1 - 1)^2.$$
By setting its gradient to zero, the unique unconstrained minimizer is found to be
$$x_1(v, c) = \frac{c - v}{c + 1}, \qquad x_2(v, c) = 0.$$
Thus, for all $c > 0$,
$$\lim_{v \to v^*} x_1(v, c) = 1, \qquad \lim_{v \to v^*} x_2(v, c) = 0.$$
On the other hand, for all $v$,
$$\lim_{c \to \infty} x_1(v, c) = 1, \qquad \lim_{c \to \infty} x_2(v, c) = 0.$$
An augmented Lagrangian algorithm is initialized with some $x^0$, $v^0$, and $c_0 > 0$, and proceeds as follows:
1. If $\nabla L(x^k, v^k) = 0$, then stop.
2. Compute $x^{k+1}$ by solving $\min_x A(x, v^k, c_k)$.
3. Determine $v^{k+1}$ and $c_{k+1}$; for example, take $v^{k+1} = v^k + c_k\, h(x^{k+1})$ and $c_{k+1} \ge c_k$.
It can be shown that if $v^k = v^*$ and $c$ is sufficiently large that the Hessian $\nabla^2_{xx} A(x^*, v^*, c)$ is positive definite, then the unconstrained minimizer of the augmented Lagrangian is a minimizer of the original problem. Thus, if we knew the Lagrange multipliers at the solution, the problem could be solved in one iteration. Moreover, the closer $v^k$ is to $v^*$, the closer the next iterate $x^{k+1}$ will be to $x^*$ (assuming $c_k$ is sufficiently large).
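The iteration above can be run exactly on the worked example, $\min \tfrac{1}{2}(x_1^2 + x_2^2)$ s.t. $x_1 = 1$, since the inner minimization has the closed form $x_1(v, c) = (c - v)/(c + 1)$, $x_2 = 0$. A minimal sketch, with the multiplier update $v \leftarrow v + c\, h(x)$ and $c$ held fixed at a modest value:

```python
# Augmented Lagrangian iteration for min (1/2)(x1^2 + x2^2) s.t. x1 = 1,
# whose solution is x* = (1, 0) with multiplier v* = -1.  The inner
# problem min_x A(x, v, c) has the closed form x1 = (c - v)/(c + 1), x2 = 0.

def inner_minimizer(v, c):
    return (c - v) / (c + 1.0), 0.0

v, c = 0.0, 1.0                   # v0 and a fixed, modest penalty parameter
for k in range(30):
    x1, x2 = inner_minimizer(v, c)
    v = v + c * (x1 - 1.0)        # multiplier update: v <- v + c_k * h(x^{k+1})

print(round(x1, 6), round(v, 6))  # converges to x1 = 1 and v = v* = -1
```

Here $v$ converges to $v^* = -1$ at a linear rate even though $c$ stays at $1$: no ill-conditioned subproblems are needed, in contrast with the pure penalty method, where the constraint residual shrinks only as $c \to \infty$.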
More informationNONLINEAR. (Hillier & Lieberman Introduction to Operations Research, 8 th edition)
NONLINEAR PROGRAMMING (Hillier & Lieberman Introduction to Operations Research, 8 th edition) Nonlinear Programming g Linear programming has a fundamental role in OR. In linear programming all its functions
More informationNumerical optimization. Numerical optimization. Longest Shortest where Maximal Minimal. Fastest. Largest. Optimization problems
1 Numerical optimization Alexander & Michael Bronstein, 2006-2009 Michael Bronstein, 2010 tosca.cs.technion.ac.il/book Numerical optimization 048921 Advanced topics in vision Processing and Analysis of
More informationMVE165/MMG631 Linear and integer optimization with applications Lecture 13 Overview of nonlinear programming. Ann-Brith Strömberg
MVE165/MMG631 Overview of nonlinear programming Ann-Brith Strömberg 2015 05 21 Areas of applications, examples (Ch. 9.1) Structural optimization Design of aircraft, ships, bridges, etc Decide on the material
More informationOutline. Scientific Computing: An Introductory Survey. Optimization. Optimization Problems. Examples: Optimization Problems
Outline Scientific Computing: An Introductory Survey Chapter 6 Optimization 1 Prof. Michael. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction
More informationGradient Descent. Dr. Xiaowei Huang
Gradient Descent Dr. Xiaowei Huang https://cgi.csc.liv.ac.uk/~xiaowei/ Up to now, Three machine learning algorithms: decision tree learning k-nn linear regression only optimization objectives are discussed,
More informationCONVERGENCE ANALYSIS OF AN INTERIOR-POINT METHOD FOR NONCONVEX NONLINEAR PROGRAMMING
CONVERGENCE ANALYSIS OF AN INTERIOR-POINT METHOD FOR NONCONVEX NONLINEAR PROGRAMMING HANDE Y. BENSON, ARUN SEN, AND DAVID F. SHANNO Abstract. In this paper, we present global and local convergence results
More informationScientific Computing: Optimization
Scientific Computing: Optimization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 March 8th, 2011 A. Donev (Courant Institute) Lecture
More informationStructural and Multidisciplinary Optimization. P. Duysinx and P. Tossings
Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be
More informationLectures 9 and 10: Constrained optimization problems and their optimality conditions
Lectures 9 and 10: Constrained optimization problems and their optimality conditions Coralia Cartis, Mathematical Institute, University of Oxford C6.2/B2: Continuous Optimization Lectures 9 and 10: Constrained
More informationand to estimate the quality of feasible solutions I A new way to derive dual bounds:
Lagrangian Relaxations and Duality I Recall: I Relaxations provide dual bounds for the problem I So do feasible solutions of dual problems I Having tight dual bounds is important in algorithms (B&B), and
More information10 Numerical methods for constrained problems
10 Numerical methods for constrained problems min s.t. f(x) h(x) = 0 (l), g(x) 0 (m), x X The algorithms can be roughly divided the following way: ˆ primal methods: find descent direction keeping inside
More informationBindel, Spring 2017 Numerical Analysis (CS 4220) Notes for So far, we have considered unconstrained optimization problems.
Consider constraints Notes for 2017-04-24 So far, we have considered unconstrained optimization problems. The constrained problem is minimize φ(x) s.t. x Ω where Ω R n. We usually define x in terms of
More informationLagrange Relaxation and Duality
Lagrange Relaxation and Duality As we have already known, constrained optimization problems are harder to solve than unconstrained problems. By relaxation we can solve a more difficult problem by a simpler
More informationMATH 4211/6211 Optimization Basics of Optimization Problems
MATH 4211/6211 Optimization Basics of Optimization Problems Xiaojing Ye Department of Mathematics & Statistics Georgia State University Xiaojing Ye, Math & Stat, Georgia State University 0 A standard minimization
More informationConvex Optimization Boyd & Vandenberghe. 5. Duality
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationIntroduction to Machine Learning Lecture 7. Mehryar Mohri Courant Institute and Google Research
Introduction to Machine Learning Lecture 7 Mehryar Mohri Courant Institute and Google Research mohri@cims.nyu.edu Convex Optimization Differentiation Definition: let f : X R N R be a differentiable function,
More informationminimize x subject to (x 2)(x 4) u,
Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for
More informationSolutions Chapter 5. The problem of finding the minimum distance from the origin to a line is written as. min 1 2 kxk2. subject to Ax = b.
Solutions Chapter 5 SECTION 5.1 5.1.4 www Throughout this exercise we will use the fact that strong duality holds for convex quadratic problems with linear constraints (cf. Section 3.4). The problem of
More informationConvex Optimization & Lagrange Duality
Convex Optimization & Lagrange Duality Chee Wei Tan CS 8292 : Advanced Topics in Convex Optimization and its Applications Fall 2010 Outline Convex optimization Optimality condition Lagrange duality KKT
More informationIE 5531: Engineering Optimization I
IE 5531: Engineering Optimization I Lecture 12: Nonlinear optimization, continued Prof. John Gunnar Carlsson October 20, 2010 Prof. John Gunnar Carlsson IE 5531: Engineering Optimization I October 20,
More informationCONSTRAINED OPTIMALITY CRITERIA
5 CONSTRAINED OPTIMALITY CRITERIA In Chapters 2 and 3, we discussed the necessary and sufficient optimality criteria for unconstrained optimization problems. But most engineering problems involve optimization
More informationCONSTRAINT QUALIFICATIONS, LAGRANGIAN DUALITY & SADDLE POINT OPTIMALITY CONDITIONS
CONSTRAINT QUALIFICATIONS, LAGRANGIAN DUALITY & SADDLE POINT OPTIMALITY CONDITIONS A Dissertation Submitted For The Award of the Degree of Master of Philosophy in Mathematics Neelam Patel School of Mathematics
More informationConstrained Optimization Theory
Constrained Optimization Theory Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Constrained Optimization Theory IMA, August
More informationUC Berkeley Department of Electrical Engineering and Computer Science. EECS 227A Nonlinear and Convex Optimization. Solutions 5 Fall 2009
UC Berkeley Department of Electrical Engineering and Computer Science EECS 227A Nonlinear and Convex Optimization Solutions 5 Fall 2009 Reading: Boyd and Vandenberghe, Chapter 5 Solution 5.1 Note that
More informationInterior Point Methods in Mathematical Programming
Interior Point Methods in Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Brazil Journées en l honneur de Pierre Huard Paris, novembre 2008 01 00 11 00 000 000 000 000
More informationApplied Lagrange Duality for Constrained Optimization
Applied Lagrange Duality for Constrained Optimization February 12, 2002 Overview The Practical Importance of Duality ffl Review of Convexity ffl A Separating Hyperplane Theorem ffl Definition of the Dual
More informationDifferentiable exact penalty functions for nonlinear optimization with easy constraints. Takuma NISHIMURA
Master s Thesis Differentiable exact penalty functions for nonlinear optimization with easy constraints Guidance Assistant Professor Ellen Hidemi FUKUDA Takuma NISHIMURA Department of Applied Mathematics
More informationLecture: Duality of LP, SOCP and SDP
1/33 Lecture: Duality of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2017.html wenzw@pku.edu.cn Acknowledgement:
More informationChapter 2. Optimization. Gradients, convexity, and ALS
Chapter 2 Optimization Gradients, convexity, and ALS Contents Background Gradient descent Stochastic gradient descent Newton s method Alternating least squares KKT conditions 2 Motivation We can solve
More informationConstrained optimization: direct methods (cont.)
Constrained optimization: direct methods (cont.) Jussi Hakanen Post-doctoral researcher jussi.hakanen@jyu.fi Direct methods Also known as methods of feasible directions Idea in a point x h, generate a
More informationPrimal-Dual Interior-Point Methods for Linear Programming based on Newton s Method
Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Robert M. Freund March, 2004 2004 Massachusetts Institute of Technology. The Problem The logarithmic barrier approach
More informationA SHIFTED PRIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OPTIMIZATION
A SHIFTED RIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OTIMIZATION hilip E. Gill Vyacheslav Kungurtsev Daniel. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-18-1 February 1, 2018
More informationConvex Optimization M2
Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization
More informationThe general programming problem is the nonlinear programming problem where a given function is maximized subject to a set of inequality constraints.
1 Optimization Mathematical programming refers to the basic mathematical problem of finding a maximum to a function, f, subject to some constraints. 1 In other words, the objective is to find a point,
More informationPrimal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization
Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department
More informationConstrained optimization
Constrained optimization In general, the formulation of constrained optimization is as follows minj(w), subject to H i (w) = 0, i = 1,..., k. where J is the cost function and H i are the constraints. Lagrange
More informationA Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties
A Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties André L. Tits Andreas Wächter Sasan Bahtiari Thomas J. Urban Craig T. Lawrence ISR Technical
More informationSECTION C: CONTINUOUS OPTIMISATION LECTURE 11: THE METHOD OF LAGRANGE MULTIPLIERS
SECTION C: CONTINUOUS OPTIMISATION LECTURE : THE METHOD OF LAGRANGE MULTIPLIERS HONOUR SCHOOL OF MATHEMATICS OXFORD UNIVERSITY HILARY TERM 005 DR RAPHAEL HAUSER. Examples. In this lecture we will take
More informationCE 191: Civil & Environmental Engineering Systems Analysis. LEC 17 : Final Review
CE 191: Civil & Environmental Engineering Systems Analysis LEC 17 : Final Review Professor Scott Moura Civil & Environmental Engineering University of California, Berkeley Fall 2014 Prof. Moura UC Berkeley
More informationARE202A, Fall Contents
ARE202A, Fall 2005 LECTURE #2: WED, NOV 6, 2005 PRINT DATE: NOVEMBER 2, 2005 (NPP2) Contents 5. Nonlinear Programming Problems and the Kuhn Tucker conditions (cont) 5.2. Necessary and sucient conditions
More information