A Computationally Fast and Approximate Method for Karush-Kuhn-Tucker Proximity Measure


A Computationally Fast and Approximate Method for Karush-Kuhn-Tucker Proximity Measure

Kalyanmoy Deb and Mohamed Abouhawwash¹
Department of Electrical and Computer Engineering
Computational Optimization and Innovation (COIN) Laboratory
Michigan State University, East Lansing, MI 48824, USA
deb@egr.msu.edu, mhawwash@msu.edu
COIN Report Number

Abstract

Karush-Kuhn-Tucker (KKT) optimality conditions are used by theoretical and applied optimization researchers to check whether a solution obtained by an optimization algorithm is truly an optimal solution or not. When a point is not the true optimum, a simple violation measure of the KKT optimality conditions cannot indicate anything about its proximity to the optimal solution. A few past studies by the first author and his collaborators suggested a KKT proximity measure (KKTPM) that is able to identify the relative closeness of any point to the theoretical optimum point without actually knowing the exact location of the optimum point. In this paper, we suggest several computationally fast methods for computing an approximate KKTPM value, so that a convergence measure for iteration-wise best solutions of an optimization algorithm can be quantified for terminating an optimization run. The KKTPM value can also be used to isolate less-converged solutions in a population-based optimization algorithm so that they can be specially modified for an overall faster execution of the optimization run. The approximate KKTPM values are evaluated against the original exact KKTPM value on standard single-objective, multi-objective and many-objective optimization problems. In all cases, our proposed estimated approximate method is found to achieve a strong correlation with the exact KKTPM values and to achieve such results in two or three orders of magnitude smaller computational time. These results are extremely motivating for launching further studies that use the proposed estimated KKTPM procedure for establishing termination criteria and for developing other modified optimization procedures.

Keywords: Multi-objective optimization, Evolutionary optimization, KKT optimality conditions.

¹ Also Department of Mathematics, Faculty of Science, Mansoura University, Mansoura 35516, Egypt. saleh1284@mans.edu.eg.

1. Introduction

Karush-Kuhn-Tucker (KKT) optimality conditions [1, 2] are a necessary requirement for an optimal solution of a single- or multi-objective optimization problem to satisfy [3, 4, 5]. These conditions require first-order derivatives of the objective function(s) and constraint functions, although extensions that handle non-smooth problems using subdifferentials exist [6, 4]. Although KKT conditions are guaranteed to be satisfied at optimal solutions, the mathematical optimization literature is mostly silent about any regularity in the violation of these conditions in the vicinity of optimal solutions. This lack of literature prevents optimization algorithmists, particularly those working with approximation-based algorithms that at best find a near-optimal solution, from checking or evaluating the closeness of a solution to the theoretical optimal solution using KKT optimality conditions. However, recent studies have suggested interesting new definitions of neighboring and approximate KKT points [7, 8, 9], which have resulted in a KKT proximity measure [1, 7]. That study clearly showed that a naive measure of the extent of violation of the KKT optimality conditions cannot provide a proximity measure to the theoretical optimum; however, the KKT proximity measure (KKTPM), which uses the approximate KKT point definition, does provide a measure of closeness of a solution to the KKT point. This is a remarkable achievement in its own right, as KKTPM allows a way to know the proximity of a solution to a KKT point without actually knowing the location of the KKT point. A recent study has extended the KKTPM procedure developed for single-objective optimization problems so that it can be applied to multi-objective optimization problems [11]. Since a multi-objective optimization problem with conflicting objectives possesses multiple Pareto-optimal solutions, the KKTPM procedure can be applied to each obtained solution to obtain a proximity measure of that solution from its nearest Pareto-optimal solution.

The KKTPM computation opens up several new and promising avenues for further development of evolutionary single-objective [12] and multi-objective optimization algorithms [13, 14]: (i) KKTPM can be used as a reliable convergence measure for terminating a run, (ii) the KKTPM value can be used to identify poorly converged non-dominated solutions in a multi-objective optimization problem, (iii) a KKTPM-based local search procedure can be invoked in an algorithm to speed up convergence, and (iv) the dynamics of KKTPM variation from start to end of an optimization run can provide useful information about multi-modality and other attractors in the search space. However, since the exact KKTPM computation involves a nested optimization task within an algorithm, the computational effort is a bottleneck for these developments. In this paper, we analyze the KKTPM computation procedure and suggest three approximate algorithmic procedures for a fast computation of KKTPM. For certain points, including the KKT point or optimal solution, the proposed approximate KKTPM values are identical to the exact KKTPM value, and we are able to find an implementable condition under which this happens. In other cases, we find two approximate methods that are guaranteed to bracket the exact KKTPM value and another approximate method that obtains a value closer to the exact KKTPM.
We then demonstrate the use of the approximate KKTPM methods, show the accuracy of the proposed estimated KKTPM values compared to the exact KKTPM values, and importantly report two or three orders of magnitude faster computation with the approximate method. In the remainder of the paper, we first discuss the importance of computing KKTPM in Section 2. Then, in Section 3, we describe the principle of the KKT proximity measure for single and multi-objective optimization problems based on the achievement scalarizing function (ASF) concept proposed in the multiple criterion decision making (MCDM) literature. In Section 4, we propose three fast yet approximate methods for computing the KKT proximity measure without using any explicit optimization procedure.

Section 5 suggests replacing the ASF with its augmented version so as to avoid weak Pareto-optimal solutions. Section 6 compares all three proposed approximate methods with previously published exact KKTPM values on standard single-, multi- and many-objective optimization problems, including a few engineering design problems. Finally, conclusions are drawn in Section 7.

2. Need for Computing KKTPM

The earlier study by the authors [11] clearly demonstrated that the recently proposed KKT proximity measure (KKTPM) provides a quantitative measure of the proximity of population-best solutions in an evolutionary algorithm to theoretical optimal solutions, which can then be used as a termination condition. Although the study did not simulate a classical point-by-point optimization method, the proposed KKTPM is also applicable to classical optimization methods. A recent study [15] demonstrated that evolutionary multi-objective optimization (EMO) methods constitute a parallel search that finds multiple trade-off solutions in a computationally quicker manner compared to point-by-point generative approaches [16, 17]. Such exploratory and application studies are making EMO methods popular and useful in practice.

For solving single-objective optimization problems, the best solution at every generation (or iteration) can be recorded and its KKTPM value can be computed. As demonstrated in the original studies [7, 11], the KKTPM value reduces almost monotonically to zero as the iterate approaches the optimal solution (or KKT point). The study also showed the correlation between the distance from the known optimal solution and the corresponding KKTPM value. Thus, a threshold on the KKTPM value can be enforced for a suitable termination of an optimization run. This avoids any guesswork in choosing a pre-specified number of generations or function evaluations, or in judging the stability of the objective value of the current best solution over the past few generations. Besides allowing an automatic termination, it also provides confidence in the final solution, as a small value of KKTPM guarantees the solution to be close to a KKT point.

For multi- and many-objective optimization problems, the non-dominated solutions at every generation of an EMO run can be recorded and their KKTPM values can be computed. Although these solutions are non-dominated with respect to each other, their closeness to the true Pareto-optimal front is likely to be different. Thus, the respective KKTPM value for each non-dominated solution at a generation is expected to be different from the others. A few statistics of these KKTPM values (such as the smallest, median and largest) can be computed and analyzed for termination. Although the smallest and largest KKTPM values may not be good indicators of convergence of the entire set, the median KKTPM value can be checked against a threshold value. In such a case, a termination then means that at least 50% of the trade-off non-dominated solutions have come close to the Pareto-optimal front within the specified threshold value.

There exist a few other reasons for pursuing the computation of KKTPM, which we highlight here.

1. It is important to realize that not all non-dominated solutions are likely to be close to the Pareto-optimal front. Figure 1 shows that, with respect to a few true Pareto-optimal solutions (with a KKTPM value equal to zero for each of them) shown as filled circles, two types of solutions can be non-dominated.
First, there can be solutions that are close to the Pareto-optimal front, but are not truly Pareto-optimal. These points are shown as open circles. Second, there can be solutions that are not even close to the Pareto-optimal front, but are non-dominated with respect to the rest of the trade-off solutions. These solutions are marked as open boxes. A computation of KKTPM for each of these solutions reveals the properties of each of the three types of non-dominated solutions.

Figure 1: A set of non-dominated points may have widely different KKT proximity measure values.

For true Pareto-optimal solutions (in the theoretical sense), the KKTPM value is guaranteed to be zero. For near Pareto-optimal solutions, the KKTPM value will be small, and for far-away solutions, the KKTPM value will be large. For the first time, the above observation provides unique information about individual non-dominated population members to an EMO algorithm. So far, the only ways to differentiate non-dominated solutions in an EMO procedure have been based on their neighborhood information [18, 19] or their closeness to a reference direction [2, 21]. For the first time, the KKTPM computation opens up a unique way to differentiate non-dominated solutions in an EMO population based on their relative proximity to the true Pareto-optimal front. Special action can then be taken to fix the non-dominated solutions that are away from the true Pareto-optimal front so as to obtain a faster overall convergence. We do not address these additional and special efforts in this paper, but before we are able to do so, we need a faster way to compute the KKTPM value, which is the main goal of this paper.

2. In many problems, objective values near the optimal or Pareto-optimal region can take more or less similar values (making the landscape somewhat 'flat'). In such problems, evolutionary optimization algorithms lack an adequate search power to converge close to the true optimal or Pareto-optimal solutions quickly; rather, populations wander around them as generations elapse. For problem SRN, Figure 2 shows the NSGA-II population after 25 generations. While the true Pareto-optimal solutions lie along the vertical line at x1 = -2.5, NSGA-II solutions wander around the optimal line until 5 generations. Since evolutionary optimization methods do not use any gradient information, there is no real way to salvage any second-order information to attain better convergence. Since it has been established that KKTPM has a direct correlation with the distance of a point from its nearest optimum, it can provide the additional information needed to distinguish a solution that is relatively close to an optimum from one that is far away. Thereafter, an appropriate local search method can be employed to make the solution converge to the optimal solution. Again, we do not address this aspect of fine-tuning near-optimal solutions in this paper; we are currently pursuing it in a separate study and shall report results in a later publication.

Figure 2: NSGA-II population members wander around Pareto-optimal solutions even after 25 generations.

It is amply clear from the above discussion that, besides using KKTPM to aid an automatic and more confident termination of an optimization run, KKTPM can be used to expedite the search process. However, there are a few bottlenecks in computing and using KKTPM, which we discuss next.

1. The original study [11] proposed an optimization procedure for computing the exact KKTPM value. This optimization procedure uses Lagrange multipliers (one for each constraint or each variable bound) and two additional parameters (the KKTPM itself and a slack variable for converting a non-differentiable problem into a smooth problem) as variables. The problem also has two main inequality constraints, one quadratic and one linear in the Lagrange multipliers. The solution of this optimization problem can be time-consuming, particularly if the KKTPM value needs to be computed for every population member in each generation. In this paper, we address this issue and avoid solving the optimization problem by suggesting a computationally fast and approximate method without sacrificing much of the accuracy of the KKTPM value. With this ability, we believe that the KKTPM approach paves the way for its use in the above-mentioned algorithmic modifications for a faster overall approach.

2. Second, KKTPM requires gradient information of the original objective functions and constraints at the point at which the KKTPM value needs to be computed. The original study used exact gradients for most examples, but demonstrated, by working with numerical gradient computation (forward difference method) on a single problem, that the induced error is small and is proportional to the step size used for the numerical gradient computation. This is encouraging, and we plan to pursue this aspect in more detail in a later study. Other interval-based KKT optimality conditions [22] will also be explored.

3. Third, a zero KKTPM value can come from both local and global optimal solutions. However, this is not a big issue: in addition to the KKTPM value, a solution's objective value can be used to differentiate local and global solutions, and a hierarchical selection operator with the KKTPM value followed by the objective value (or non-domination level) can be used to steer the search towards the global optimal solution.

4. Fourth, the KKTPM value indicates a measure of a solution's proximity to the respective optimal solution. In multi-objective optimization, in addition to convergence, diversity preservation among non-dominated solutions is another equally important goal. Thus, a termination criterion, as well as the other modifications suggested above, must involve a careful consideration of both convergence and diversity of solutions. Fortunately, the convergence issue can now be dealt with reliably using KKTPM, and further research is needed to combine it with a diversity measure for achieving a fast and reliable optimization.

Having set the stage by highlighting the importance and role of KKTPM in enhancing performance and in reliably terminating an optimization algorithm from the convergence point of view, we now return to the main goal of this paper: a fast and approximate computation of the KKT proximity measure. We highlight the fact that if we are able to achieve a fast computation of KKTPM without sacrificing much accuracy, it will open up a number of avenues for algorithmic improvements, as discussed above. In the following, we first present a review of the philosophy behind computing KKTPM and then suggest the proposed fast but approximate procedure.

3. Review of KKTPM

The KKT proximity measure was first proposed elsewhere [7] for single-objective optimization problems. Since the KKT error measure, derived from the violation of the KKT equilibrium conditions, has a singularity property at any KKT point and is unable to relate the proximity of a solution to a KKT point, the development of the KKT proximity measure (KKTPM) was a seminal achievement, and for the first time it demonstrated its importance in arriving at a suitable termination condition. Recently, the single-objective KKT proximity measure has been extended to multi-objective optimization problems [11]. In this section, we give a brief introduction to both studies, so that we can build up a fast computational method in the next section.

3.1. KKTPM for Single-Objective Optimization

Dutta et al. [7] defined an approximate KKT solution to compute a KKT proximity measure for any iterate x^{(k)} for a single-objective optimization problem of the following type:

\begin{aligned}
\text{Minimize}_{x} \quad & f(x), \\
\text{subject to} \quad & g_j(x) \le 0, \quad j = 1, 2, \ldots, J.
\end{aligned} \qquad (1)

After a long theoretical calculation, they suggested the following procedure for computing the KKT proximity measure for an iterate x^{(k)}:

\begin{aligned}
\text{Minimize}_{(\epsilon, \mathbf{u})} \quad & \epsilon \\
\text{subject to} \quad & \left\| \nabla f(x^{(k)}) + \sum_{j=1}^{J} u_j \nabla g_j(x^{(k)}) \right\|^2 \le \epsilon, \\
& \sum_{j=1}^{J} u_j g_j(x^{(k)}) \ge -\epsilon, \\
& u_j \ge 0, \quad \forall j.
\end{aligned} \qquad (2)

However, that study restricted the computation of the KKT proximity measure to feasible solutions only. For an infeasible iterate, they simply ignored the computation of the KKT proximity measure.
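To make the above formulation concrete, here is a minimal sketch of how the optimization problem of equation 2 could be solved numerically for a given feasible iterate. It is only an illustration under our own assumptions: the function name, the use of SciPy's SLSQP solver, and the zero starting point are ours and are not part of the original study.

```python
import numpy as np
from scipy.optimize import minimize

def kktpm_single_objective(grad_f, grads_g, g_vals):
    """Solve the problem of eq. (2) for a feasible iterate x^(k).

    grad_f  : (n,) gradient of f at x^(k)
    grads_g : (J, n) gradients of the J inequality constraints at x^(k)
    g_vals  : (J,) constraint values g_j(x^(k)) (all <= 0 for a feasible point)
    Returns the optimal epsilon, i.e. the KKT proximity measure of x^(k).
    """
    J = len(g_vals)

    def objective(v):            # v = (eps, u_1, ..., u_J)
        return v[0]

    def c_gradient(v):           # eps - ||grad f + sum_j u_j grad g_j||^2 >= 0
        eps, u = v[0], v[1:]
        r = grad_f + grads_g.T @ u
        return eps - r @ r

    def c_complementarity(v):    # eps + sum_j u_j g_j >= 0
        eps, u = v[0], v[1:]
        return eps + u @ g_vals

    res = minimize(objective, np.zeros(J + 1), method="SLSQP",
                   bounds=[(0.0, None)] * (J + 1),
                   constraints=[{"type": "ineq", "fun": c_gradient},
                                {"type": "ineq", "fun": c_complementarity}])
    return res.fun
```

Applied to generation-wise best points, such a routine produces the kind of monotonically decreasing KKTPM trace discussed below.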

For the above optimization problem, the variables are (ε, u). The KKT proximity measure (KKTPM) was then defined as follows:

\text{KKTPM}(x^{(k)}) = \epsilon_k^*, \qquad (3)

where ε_k^* is the optimal objective value of the problem stated in equation 2. To illustrate the working of KKTPM, we consider a two-variable constrained minimization problem:

\begin{aligned}
\text{Minimize} \quad & f(x_1, x_2) = x_1^2 + x_2^2, \\
\text{s.t.} \quad & g_1(x_1, x_2) \equiv 3x_1 - x_2 + 1 \le 0, \\
& g_2(x_1, x_2) \equiv x_1^2 + (x_2 - 2)^2 - 1 \le 0.
\end{aligned} \qquad (4)

Figure 3 shows the feasible space and the optimum point C at x^* = (0, 1)^T.

Figure 3: Feasible search space of the example problem.

Figure 4: Variation of the KKTPM value along line AC from point A for the example problem.

For points along line AC, Figure 4 shows the variation of KKTPM. It is clear that as points get closer to the optimal point, the KKTPM value reduces monotonically and finally becomes zero at the optimal point.

3.2. Exact KKTPM for Multi-Objective Optimization

The theory for the development of the exact KKTPM for multi-objective optimization problems was presented in an earlier study by the authors [11]. Here, we summarize the main results for completeness. For an n-variable, M-objective optimization problem with J inequality constraints:

\begin{aligned}
\text{Minimize}_{x} \quad & \{ f_1(x), f_2(x), \ldots, f_M(x) \}, \\
\text{subject to} \quad & g_j(x) \le 0, \quad j = 1, 2, \ldots, J,
\end{aligned} \qquad (5)

the Karush-Kuhn-Tucker (KKT) optimality conditions are given as follows [6, 17]:

\begin{aligned}
& \sum_{m=1}^{M} u_m \nabla f_m(x^*) + \sum_{j=1}^{J} u_j \nabla g_j(x^*) = 0, && (6) \\
& g_j(x^*) \le 0, \quad j = 1, 2, \ldots, J, && (7) \\
& u_j\, g_j(x^*) = 0, \quad j = 1, 2, \ldots, J, && (8) \\
& u_j \ge 0, \quad j = 1, 2, \ldots, J, && (9) \\
& u_m \ge 0, \quad m = 1, 2, \ldots, M, \quad \text{and} \quad \mathbf{u} \neq \mathbf{0}. && (10)
\end{aligned}

The multipliers u_m are non-negative, but at least one of them must be non-zero. The parameter u_j is called the Lagrange multiplier for the j-th inequality constraint; these are also non-negative. Any solution x^* that satisfies all the above conditions is called a KKT point [4]. Variable bounds of the type x_i^{(L)} \le x_i \le x_i^{(U)} can be split into two inequality constraints: g_{J+2i-1}(x) = x_i^{(L)} - x_i \le 0 and g_{J+2i}(x) = x_i - x_i^{(U)} \le 0. Thus, in the presence of all n pairs of specified variable bounds, there are a total of J + 2n inequality constraints for the above problem.

For a given iterate (solution) x^{(k)}, the original study [11] formulated an achievement scalarizing function (ASF) optimization problem [23]:

\begin{aligned}
\text{Minimize}_{x} \quad & \text{ASF}(x, z, w) = \max_{m=1}^{M} \left( \frac{f_m(x) - z_m}{w_m} \right), \\
\text{subject to} \quad & g_j(x) \le 0, \quad j = 1, 2, \ldots, J.
\end{aligned} \qquad (11)

The reference point z \in R^M was taken to be a utopian point, and the weight vector w \in R^M is computed for x^{(k)} as follows:

w_i = \frac{f_i(x^{(k)}) - z_i}{\sqrt{\sum_{m=1}^{M} \left( f_m(x^{(k)}) - z_m \right)^2}}. \qquad (12)

Thereafter, the KKT proximity measure was computed by applying the KKTPM computation procedure developed for single-objective optimization problems, stated in equation 2, to the ASF formulation given above. Since the ASF formulation makes the objective function non-differentiable, a smooth transformation of the ASF problem was first made by introducing a slack variable x_{n+1} and reformulating the original problem as follows:

\begin{aligned}
\text{Minimize} \quad & F(x, x_{n+1}) = x_{n+1}, \\
\text{subject to} \quad & \frac{f_i(x) - z_i}{w_i} - x_{n+1} \le 0, \quad i = 1, 2, \ldots, M, \\
& g_j(x) \le 0, \quad j = 1, 2, \ldots, J.
\end{aligned} \qquad (13)

Now, the KKTPM optimization problem for the above smooth single-objective problem, for y = (x; x_{n+1}), can be written as follows:

\begin{aligned}
\text{Minimize}_{(\epsilon, x_{n+1}, \mathbf{u})} \quad & \epsilon + \left( \sum_{j=1}^{J} u_{M+j}\, g_j(x^{(k)}) \right)^2, \\
\text{subject to} \quad & \left\| \nabla F(y) + \sum_{j=1}^{M+J} u_j \nabla G_j(y) \right\|^2 \le \epsilon, \\
& \sum_{j=1}^{M+J} u_j G_j(y) \ge -\epsilon, \\
& \frac{f_j(x) - z_j}{w_j} - x_{n+1} \le 0, \quad j = 1, \ldots, M, \\
& u_j \ge 0, \quad j = 1, 2, \ldots, (M + J).
\end{aligned} \qquad (14)

The added term in the objective function acts as a penalty associated with the violation of the complementary slackness condition [11]. The constraints G_j(y) are given below:

\begin{aligned}
& G_j(y) = \frac{f_j(x) - z_j}{w_j} - x_{n+1}, \quad j = 1, \ldots, M, && (15) \\
& G_{M+j}(y) = g_j(x), \quad j = 1, 2, \ldots, J. && (16)
\end{aligned}

The optimal objective value (ε_k^*) of the above problem corresponds to the exact KKT proximity measure. It is observed that ε_k^* \le 1 for feasible solutions; hence the exact KKTPM was defined as follows:

\text{Exact KKTPM}(x^{(k)}) =
\begin{cases}
\epsilon_k^*, & \text{if } x^{(k)} \text{ is feasible}, \\
1 + \sum_{j=1}^{J} \left\langle g_j(x^{(k)}) \right\rangle^2, & \text{otherwise},
\end{cases} \qquad (17)

where \langle \cdot \rangle denotes the constraint violation, that is, \langle \alpha \rangle = \alpha if \alpha > 0 and zero otherwise.

4. Proposed Fast and Approximate Computation of KKTPM

The above exact KKTPM computation procedure requires an optimization problem (equation 14) to be solved for every iterate x^{(k)}. Since there are (M + J + 2) variables for this optimization task, namely (ε, x_{n+1}, u), the computational time needed to compute the KKTPM value for every non-dominated solution at every generation of an EMO run can be large. To obtain a fast computation of an approximation of the KKTPM value, we suggest a few fast but approximate methods here.

4.1. Direct KKTPM Computation Method

There are two main constraints in the optimization task of equation 14. The first constraint requires the computation of the gradients of the F and G functions and can be written as:

\epsilon \ge \left\| \sum_{j=1}^{M} \frac{u_j}{w_j} \nabla f_j(x^{(k)}) + \sum_{j=1}^{J} u_{M+j} \nabla g_j(x^{(k)}) \right\|^2 + \left( 1 - \sum_{j=1}^{M} u_j \right)^2. \qquad (18)

The second constraint in equation 14 can be written as follows:

\epsilon \ge - \sum_{j=1}^{M} u_j \left( \frac{f_j(x^{(k)}) - z_j}{w_j} - x_{n+1} \right) - \sum_{j=1}^{J} u_{M+j}\, g_j(x^{(k)}). \qquad (19)

Since x_{n+1} is a slack variable, the third set of constraints gets satisfied by setting

x_{n+1} = \max_{j=1}^{M} \frac{f_j(x^{(k)}) - z_j}{w_j}. \qquad (20)

For any feasible solution x^{(k)}, f_j(x^{(k)}) is always greater than the respective j-th component of any utopian point z. Hence, x_{n+1} is also non-negative. Thus, the constraint set of the problem given in equation 14 reduces to the first and second constraints and the non-negativity constraints on the Lagrange multipliers u = (u_M, u_J), where u_M and u_J are the M-dimensional and J-dimensional vectors of Lagrange multipliers, respectively.
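As a small illustration of the quantities entering these constraints, the sketch below computes the weight vector of equation 12 and the slack variable of equation 20 directly from the objective values of an iterate. The function and variable names are ours, chosen only for illustration:

```python
import numpy as np

def asf_weights_and_slack(f_vals, z):
    """Weight vector w of eq. (12) and slack x_{n+1} of eq. (20) for an iterate x^(k).

    f_vals : (M,) objective values f_m(x^(k))
    z      : (M,) utopian reference point (z_m < f_m for any feasible x)
    """
    d = f_vals - z
    w = d / np.linalg.norm(d)   # eq. (12): each w_i = (f_i - z_i) / ||f(x^(k)) - z||
    x_np1 = np.max(d / w)       # eq. (20); every ratio equals ||f(x^(k)) - z||
    return w, x_np1
```

Note that every ratio (f_i - z_i)/w_i equals the Euclidean distance ||f(x^{(k)}) - z||, which is why the condition in equation 31 below takes the form it does; the resulting w is also what scales the a_ij entries used in the next step.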

Let us consider the first constraint alone. It is clear that it is a quadratic function of the Lagrange multipliers u and can be written as follows:

\epsilon \ge \sum_{j=1}^{n} \left( \sum_{i=1}^{M+J} u_i\, a_{ij} \right)^2 + \left( 1 - \sum_{i=1}^{M} u_i \right)^2, \qquad (21)

where a_{ij} can be written in terms of the derivatives of the f and g functions:

a_{ij} =
\begin{cases}
\dfrac{1}{w_i} \dfrac{\partial f_i}{\partial x_j} \Big|_{x^{(k)}}, & \text{for } i = 1, 2, \ldots, M, \\[2mm]
\dfrac{\partial g_{(i-M)}}{\partial x_j} \Big|_{x^{(k)}}, & \text{for } i = M+1, M+2, \ldots, M+J,
\end{cases}
\qquad \text{for } j = 1, 2, \ldots, n.

Writing the above constraint in terms of the vectors u_M and u_J, we have

\epsilon \ge \left( A_M^T u_M + A_J^T u_J \right)^T \left( A_M^T u_M + A_J^T u_J \right) + \left( 1 - \mathbf{1}_M^T u_M \right)^2, \qquad (22)

where \mathbf{1}_M is an (M \times 1) vector of ones, A_M is an (M \times n) matrix and A_J is a (J \times n) matrix, extracted from the overall ((M+J) \times n) matrix A = [a_{ij}] = \begin{bmatrix} A_M \\ A_J \end{bmatrix}.

In our first approximation method (which we refer to here as the direct method), we assume that the second constraint given in equation 19 is inactive and the first constraint is active at the optimal solution of the KKTPM optimization problem given in equation 14. Thus, the problem becomes:

\begin{aligned}
\text{Min.}_{(\epsilon, u_M, u_J)} \quad & \epsilon + u_J^T G G^T u_J, \\
\text{subject to} \quad & \epsilon \ge \left( A_M^T u_M + A_J^T u_J \right)^T \left( A_M^T u_M + A_J^T u_J \right) + \left( 1 - \mathbf{1}_M^T u_M \right)^2, \\
& u_M \ge 0, \quad u_J \ge 0,
\end{aligned} \qquad (23)

where G is the J-dimensional constraint vector (g_1(x^{(k)}), g_2(x^{(k)}), \ldots, g_J(x^{(k)}))^T. Since ε is an independent variable, the above optimization problem can be simplified as follows:

\begin{aligned}
\text{Min.}_{(u_M, u_J)} \quad & \left( A_M^T u_M + A_J^T u_J \right)^T \left( A_M^T u_M + A_J^T u_J \right) + \left( 1 - \mathbf{1}_M^T u_M \right)^2 + u_J^T G G^T u_J, \\
\text{subject to} \quad & u_M \ge 0, \quad u_J \ge 0.
\end{aligned} \qquad (24)

The above objective function is a quadratic function of u_M and u_J. By differentiating the objective function with respect to u_M and u_J, we have

\begin{aligned}
& A_M A_M^T u_M + A_M A_J^T u_J - \left( 1 - \mathbf{1}_M^T u_M \right) \mathbf{1}_M = \mathbf{0}_{M \times 1}, && (25) \\
& A_J A_M^T u_M + A_J A_J^T u_J + G G^T u_J = \mathbf{0}_{J \times 1}. && (26)
\end{aligned}

Rearranging the above two vector equations, we have

\begin{bmatrix}
A_M A_M^T + \mathbf{1}_M \mathbf{1}_M^T & A_M A_J^T \\
A_J A_M^T & A_J A_J^T + G G^T
\end{bmatrix}
\begin{pmatrix} u_M \\ u_J \end{pmatrix}
=
\begin{pmatrix} \mathbf{1}_{M \times 1} \\ \mathbf{0}_{J \times 1} \end{pmatrix}. \qquad (27)
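To make the procedure concrete, the following sketch assembles the matrices of equation 27 from the gradients at an iterate, applies the simple zero-and-re-solve strategy for negative multipliers described next (the exact bookkeeping used in the original implementation may differ), and reads off the direct estimate of equation 30 derived below. All names are ours:

```python
import numpy as np

def direct_kktpm(grad_F, grads_g, g_vals, w):
    """Direct KKTPM estimate (eq. 30, derived below) via the linear system of eq. 27.

    grad_F  : (M, n) objective gradients at x^(k)
    grads_g : (J, n) constraint gradients at x^(k)
    g_vals  : (J,)   constraint values g_j(x^(k)) (<= 0 for a feasible iterate)
    w       : (M,)   ASF weights from eq. 12
    """
    A_M = grad_F / w[:, None]            # rows (1/w_i) grad f_i  (a_ij, i <= M)
    A_J = np.asarray(grads_g, float)     # rows grad g_j          (a_ij, i > M)
    G = np.asarray(g_vals, float)
    M, J = A_M.shape[0], A_J.shape[0]
    ones = np.ones(M)

    # Block system of eq. 27 and its right-hand side.
    H = np.block([[A_M @ A_M.T + np.outer(ones, ones), A_M @ A_J.T],
                  [A_J @ A_M.T, A_J @ A_J.T + np.outer(G, G)]])
    rhs = np.concatenate([ones, np.zeros(J)])

    # Solve; any negative multiplier is fixed at zero and the reduced system is
    # re-solved until all remaining multipliers are non-negative.
    free = np.arange(M + J)
    u = np.zeros(M + J)
    while free.size:
        sol = np.linalg.solve(H[np.ix_(free, free)], rhs[free])
        if np.all(sol >= 0.0):
            u[free] = sol
            break
        free = free[sol >= 0.0]
    u_M, u_J = u[:M], u[M:]
    eps_D = 1.0 - ones @ u_M - (G @ u_J) ** 2        # eq. 30
    return eps_D, u_M, u_J
```

The same multipliers are reused below for the adjusted, projected and estimated variants.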

Interestingly, the above equations are a set of linear equations, which can be solved to obtain the vector (ū_M, ū_J)^T. However, it is not necessary that each component of this vector be non-negative. To satisfy the non-negativity constraints, if any component of ū_M or ū_J is calculated to be negative, we set it to zero, eliminate that equation from the above set of linear equations, and re-solve the remaining equations. We continue this process until the non-negativity of u_M and u_J is established. Let us say that the final vector is (u_M^D, u_J^D)^T. By transposing equation 25 and post-multiplying it by u_M^D, and doing the same for equation 26 with u_J^D, we obtain

\begin{aligned}
& \left( A_M^T u_M^D + A_J^T u_J^D \right)^T A_M^T u_M^D = \left( 1 - \mathbf{1}_M^T u_M^D \right) \mathbf{1}_M^T u_M^D, && (28) \\
& \left( A_M^T u_M^D + A_J^T u_J^D \right)^T A_J^T u_J^D = - \left( u_J^D \right)^T G G^T u_J^D. && (29)
\end{aligned}

Substituting these left-hand terms into the expression for ε and simplifying, we have the direct approximated KKTPM value arising from the first (quadratic) constraint alone:

\begin{aligned}
\epsilon_k^D &= \left( 1 - \mathbf{1}_M^T u_M^D \right) \mathbf{1}_M^T u_M^D - \left( u_J^D \right)^T G G^T u_J^D + \left( 1 - \mathbf{1}_M^T u_M^D \right)^2 \\
&= 1 - \mathbf{1}_M^T u_M^D - \left( G^T u_J^D \right)^2. && (30)
\end{aligned}

Since u_M^D \ge 0, u_J^D \ge 0 and (G^T u_J^D)^2 \ge 0, it can be concluded that ε_k^D \le 1 for any feasible iterate x^{(k)}. This result also supports our earlier definition of the exact KKTPM value for infeasible solutions, described in equation 17.

The above-computed ε_k^D value corresponds to the exact ε_k^* value if, at the above (u_M^D, u_J^D)^T vector, the second constraint is also satisfied. Using equations 12 and 13, we obtain the following condition for this to be true:

x_{n+1} \ge \sqrt{ \sum_{m=1}^{M} \left( f_m(x^{(k)}) - z_m \right)^2 }. \qquad (31)

Since x_{n+1} is an independent variable, the minimization of the objective function in equation 14 forces x_{n+1} to take the smallest positive value. Hence, the above relation is satisfied with the equality sign. This simplifies equation 19 as follows:

\epsilon \ge -G^T u_J. \qquad (32)

Therefore, ε_k^D = ε_k^* happens only when the following condition is true at the (u_M^D, u_J^D)^T vector:

1 - \mathbf{1}_M^T u_M^D - \left( G^T u_J^D \right)^2 \ge -G^T u_J^D, \qquad (33)

or,

\mathbf{1}_M^T u_M^D + \left( G^T u_J^D - 1 \right) \left( G^T u_J^D \right) \le 1. \qquad (34)

Here is the summary of our approach so far. After finding the feasible solution of the linear system of equation 27 with the non-negativity restriction on the u^D-vector, we can use it to check the above condition. If the condition is satisfied, ε_k^D is identical to the exact KKTPM value (ε_k^*). If this happens, we are able to avoid a complete optimization run to compute ε_k^* and have managed to find a computationally faster way of computing it. This scenario is illustrated with a sketch in Figure 5 for a single Lagrange multiplier u_j. The first constraint is quadratic in u_j (equation 18) and the second constraint is a linear function of u_j (equation 32).

Figure 5: The first constraint governs at an iterate x^{(k)} that is not a KKT point (A) and at a KKT point (B).

Figure 6: The second constraint governs at an iterate x^{(k)} that is not a KKT point.

In a generic case satisfying condition 34, the second constraint becomes redundant in the KKTPM optimization process and the use of the first constraint alone is adequate to find the optimal ε_k^*. Also, since only a set of linear equations needs to be solved, the computation of ε_k^* becomes faster.

An interesting scenario to consider is the case when the iterate x^{(k)} is a KKT point (or an optimal point). In this case, the second constraint passes through the minimal point of the first constraint function and condition 34 becomes

\mathbf{1}_M^T u_M^D + \left( G^T u_J^D - 1 \right) \left( G^T u_J^D \right) = 1. \qquad (35)

At the KKT or optimal point, the complementary slackness condition is satisfied, making G^T u_J^D = 0. With this condition, equations 25 and 26 yield \mathbf{1}_M^T u_M^D = 1. These two conditions satisfy the above identity, meaning that at the optimal or KKT point, the identity (equation 35) is always true.

The above brings out an interesting property of a KKT or optimal point through our derivation in this paper: the respective Lagrange multipliers for a KKT or optimal point can be scaled so that the sum of the Lagrange multipliers u_m for the objective functions is one. Although the original KKT conditions for an optimal or KKT point x^* may not make \sum_{m=1}^{M} u_m equal to one, our formulation of the KKT proximity measure forces this condition to be true. This can be explained as follows. Let us consider the KKT optimality conditions given in equations 6 to 10 again. By dividing these equations by the non-negative quantity \sum_{m=1}^{M} u_m and writing the KKT conditions for the normalized Lagrange multipliers \bar{u}_m = u_m / \sum_{k=1}^{M} u_k and \bar{u}_j = u_j / \sum_{k=1}^{M} u_k, we have the following modified KKT conditions:

\begin{aligned}
& \sum_{m=1}^{M} \bar{u}_m \nabla f_m(x^*) + \sum_{j=1}^{J} \bar{u}_j \nabla g_j(x^*) = 0, && (36) \\
& g_j(x^*) \le 0, \quad j = 1, 2, \ldots, J, && (37) \\
& \bar{u}_j\, g_j(x^*) = 0, \quad j = 1, 2, \ldots, J, && (38) \\
& \bar{u}_j \ge 0, \quad j = 1, 2, \ldots, J, && (39) \\
& \bar{u}_m \ge 0, \quad m = 1, 2, \ldots, M, \quad \text{and} \quad \bar{\mathbf{u}} \neq \mathbf{0}. && (40)
\end{aligned}

Since a KKT or optimal point x^* must also satisfy the above conditions, the sum of the modified Lagrange multipliers \bar{u}_m can be written as follows:

\sum_{m=1}^{M} \bar{u}_m = \frac{\sum_{m=1}^{M} u_m}{\sum_{k=1}^{M} u_k} = 1. \qquad (41)

Let us now get back to the more generic scenario in which condition 34 is not satisfied for an iterate x^{(k)}. In this scenario, ε_k^* ≠ ε_k^D and, in fact, ε_k^* > ε_k^D. The scenario is illustrated in Figure 6. The second constraint (line) now dictates the location of the optimal ε_k^*, which is marked as the point O in the figure. The feasible region due to both constraints is shown shaded. The minimum ε occurs at the point O. The only way to find this point exactly is to solve the optimization problem given in equation 14, which can be computationally demanding. Note that for a single u_j, the intersection of the quadratic and linear constraint functions occurs at only two points, as shown in the figure, and choosing the intersection point with the smaller ε value will ensure finding ε_k^*. However, for more than one u_j element, the intersection between the quadratic and linear constraint functions produces an ellipsoidal intersecting surface with infinitely many points. The use of a minimization method to choose the minimum ε value over all intersection points will produce ε_k^*; hence the need for an optimization procedure. Note that this optimum ε occurs at the intersection of a quadratic function and a linear function. Thus, a quadratic programming (QP) method cannot be used; rather, a sequential quadratic programming (SQP) method must be used. This is why, in the original study [11], we used Matlab's fmincon routine, in which SQP is an option. There is no other computationally simpler way to avoid the optimization run for computing ε_k^* in this scenario. However, we can find approximate values of ε_k^* using computationally faster approaches, which we discuss in the next few subsections.

4.2. Adjusted KKTPM Computation Method

From the direct solution u^D and the corresponding ε_k^D (point D in Figure 6), we compute an adjusted point A (marked in the figure) by simply computing the ε value of the second constraint boundary at u = u^D, as follows:

\epsilon_k^{Adj} = -G^T u_J^D. \qquad (42)

From the geometry, it is easy to realize that this adjusted KKTPM value, ε_k^{Adj}, will always overestimate the optimal ε_k^*. Moreover, since ε_k^D is the minimum of the quadratic constraint, the following relationship is always true:

\epsilon_k^D \le \epsilon_k^* \le \epsilon_k^{Adj}. \qquad (43)

4.3. Projected KKTPM Computation Method

Next, we consider another approximation method using u_J^D. This time, we make a projection from the direct solution (point D), (u^D, ε_k^D), onto the second constraint boundary and obtain the projected point P on the second constraint. Let us say that the projected point has coordinates (u^P, ε_k^P). The linear (second) constraint boundary can be written as ε_k^P + G^T u_J^P = 0. The perpendicular distance d_{PD} from point D to the linear constraint boundary is given as

d_{PD} = \frac{\epsilon_k^D + G^T u_J^D}{\sqrt{1 + G^T G}}.

The unit vector along line PD is proportional to (1, G^T)^T. Writing the vector PD in two different ways and equating them, we have

\begin{pmatrix} \epsilon_k^P - \epsilon_k^D \\ u_J^P - u_J^D \end{pmatrix}
= - \frac{d_{PD}}{\sqrt{1 + G^T G}} \begin{pmatrix} 1 \\ G \end{pmatrix}
= - \frac{\epsilon_k^D + G^T u_J^D}{1 + G^T G} \begin{pmatrix} 1 \\ G \end{pmatrix}.

From the first element of the above vector equation, we estimate the projected KKTPM value as follows:

\begin{aligned}
\epsilon_k^P &= \epsilon_k^D - \frac{\epsilon_k^D + G^T u_J^D}{1 + G^T G} \\
&= \frac{G^T \left( \epsilon_k^D G - u_J^D \right)}{1 + G^T G}. && (44)
\end{aligned}

While ε_k^P is always smaller than ε_k^{Adj}, ε_k^P can be larger or smaller than ε_k^* depending on the shape and location of the two constraints. The projected KKTPM value can thus be either smaller or larger than the exact KKTPM value, but it is always guaranteed to lie within the ε_k^D and ε_k^{Adj} values.

4.4. Estimated KKTPM Computation Method

Due to the uncertainty in the range of the bracketing ε values (ε_k^D and ε_k^{Adj}) around the optimal ε_k^*, and due to the fact that ε_k^P always lies within these two bounds, we suggest taking the average of the three approximated ε values as our proposed estimated KKTPM value:

\epsilon_k^{est} = \frac{1}{3} \left( \epsilon_k^D + \epsilon_k^P + \epsilon_k^{Adj} \right). \qquad (45)

Due to the above properties, this measure can be regarded as a more reliable approximation of the exact KKTPM value than any of the three approximate KKTPM values alone. It is intuitive to realize from Figure 5 that for a KKT point or the optimal iterate x^{(k)}, the following condition is always true:

\epsilon_k^D = \epsilon_k^{Adj} = \epsilon_k^P = \epsilon_k^{est} = \epsilon_k^* = 0. \qquad (46)
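The three approximations above are cheap to evaluate once the direct solution is available. The sketch below, which reuses the direct_kktpm routine introduced earlier (again with names of our own choosing), computes the adjusted, projected and estimated values of equations 42, 44 and 45 and also reports whether condition 34 certifies the direct value as exact:

```python
import numpy as np

def approximate_kktpm(eps_D, u_M, u_J, G):
    """Adjusted (eq. 42), projected (eq. 44) and estimated (eq. 45) KKTPM values,
    computed from the direct solution (eps_D, u_M, u_J) and the constraint vector G."""
    GtU = G @ u_J
    GtG = G @ G
    # Condition (34): when it holds, eps_D already equals the exact KKTPM value.
    direct_is_exact = np.sum(u_M) + (GtU - 1.0) * GtU <= 1.0
    eps_adj = -GtU                                  # eq. (42)
    eps_p = (eps_D * GtG - GtU) / (1.0 + GtG)       # eq. (44)
    eps_est = (eps_D + eps_adj + eps_p) / 3.0       # eq. (45)
    return {"direct": eps_D, "adjusted": eps_adj, "projected": eps_p,
            "estimated": eps_est, "direct_is_exact": bool(direct_is_exact)}
```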

5. KKTPM with Augmented ASF

The ASF procedure, computed from an ideal or utopian point, may result in a weak Pareto-optimal solution [13]. To avoid this, an augmented version of the ASF was proposed [17]:

\begin{aligned}
\text{Minimize}_{x} \quad & \text{ASF}(x, z, w) = \max_{i=1}^{M} \left( \frac{f_i(x) - z_i}{w_i} \right) + \rho \sum_{i=1}^{M} \left( \frac{f_i(x) - z_i}{w_i} \right), \\
\text{subject to} \quad & g_j(x) \le 0, \quad j = 1, 2, \ldots, J.
\end{aligned} \qquad (47)

Here, the parameter ρ takes a small positive value (10^{-4}). The approximate KKT proximity measure computation procedures (ε_k^D from equation 30, ε_k^{Adj} from equation 42, ε_k^P from equation 44, and ε_k^{est} from equation 45) can easily be extended using the above augmented ASF formulation. For brevity, we do not show the calculations here. For all simulations in this paper, we have used the augmented ASF function, so that weak Pareto-optimal solutions have a non-zero KKTPM value.

6. Results

In this section, we consider single-objective, multi-objective and many-objective test problems to demonstrate the working of the proposed approximate KKT proximity measures on solutions obtained using different evolutionary optimization algorithms. Single-objective problems are solved using the elite-preserving real-coded genetic algorithm [24] with p_c = 0.9 and the polynomial mutation operator [13] with p_m = 1/n (where n is the number of variables) and η_m = 2. A C-programming code is available from the authors' website. For bi-objective problems, we use NSGA-II [18], and for three- or many-objective problems we use the recently proposed NSGA-III procedure [21, 25]. For problems solved using NSGA-II and NSGA-III, we use the simulated binary crossover (SBX) operator [24] with p_c = 0.9 and η_c = 3, and the polynomial mutation operator [13] with p_m = 1/n (where n is the number of variables).

6.1. Single-Objective Optimization Problems

We start with results on a simple problem and then present results on the standard G-series of constrained test problems [26].

6.1.1. A Proof-of-Principle Example Problem

First, we consider a single-variable, single-constraint problem to make the different approximate methods proposed in this study clear:

\begin{aligned}
\text{Minimize} \quad & f(x) = x^2, \\
\text{subject to} \quad & g(x) \equiv 0.5 - x \le 0.
\end{aligned} \qquad (48)

The optimal solution is x^* = 0.5, and by theory, ε^* = 0 at this solution. Noting that f'(x) = 2x and g'(x) = -1, the first constraint (equation 18) at any iterate x^{(k)} becomes:

\epsilon \ge \left( u_1 f'(x^{(k)}) + u_2 g'(x^{(k)}) \right)^2 + (1 - u_1)^2, \quad \text{or} \quad \epsilon \ge \left( 2 u_1 x^{(k)} - u_2 \right)^2 + (1 - u_1)^2.

The second constraint at the iterate x^{(k)} becomes

\epsilon \ge -u_2 \left( 0.5 - x^{(k)} \right). \qquad (49)

Let us now take two x^{(k)} values and investigate the difference between the exact and approximated KKTPM values. For x^{(k)} = 0.5 (the optimal solution of the above optimization problem), the two constraint surfaces are given as follows:

\epsilon \ge (u_1 - u_2)^2 + (1 - u_1)^2, \qquad (50)

\epsilon \ge 0. \qquad (51)

Figure 7 shows these two constraint surfaces; the z-axis indicates ε.

Figure 7: Two constraint surfaces for x^{(k)} = 0.5 (the optimal solution).

Figure 8: Two constraint surfaces for x^{(k)} = 1.0 (a non-optimal feasible solution).

It is clear that the first constraint value is never smaller than the second constraint value at any point (u_1, u_2)^T. Hence, we have a scenario similar to that discussed in Figure 5. Interestingly, the two constraint surfaces meet at (u_1, u_2) = (1, 1)^T, and the corresponding ε = 0. First, we use the direct method and formulate the optimization problem given in equation 24 using the first constraint, as follows:

\begin{aligned}
\text{Min.}_{(u_1, u_2)} \quad & (u_1 - u_2)^2 + (1 - u_1)^2, \\
\text{subject to} \quad & u_1 \ge 0, \quad u_2 \ge 0.
\end{aligned} \qquad (52)

The solution of the above problem is u_1^D = u_2^D = 1 with ε_k^D = 0. Thus, the direct method makes an accurate estimation of the optimal ε_k^*. It is also interesting to note that, since G = (0.5 - 0.5) = 0 for this solution, equations 42 and 44 give ε_k^{Adj} = ε_k^P = 0 and ε_k^{est} = 0, thereby supporting our conclusion about the exact and approximated KKTPM values (equation 46) for optimal solutions.

Next, we consider a feasible but non-optimal solution, x^{(k)} = 1.0. At this point, the solution of the KKT proximity measure optimization problem yields the optimal KKTPM value ε_k^* = 0.3441. Let us now estimate the different approximate KKTPM values. First, we compute ε_k^D using the direct method. For this purpose, we formulate the following quadratic problem:

\begin{aligned}
\text{Min.}_{(u_1, u_2)} \quad & (2 u_1 (1) - u_2)^2 + (1 - u_1)^2 + u_2^2 (0.5 - 1)^2, \\
\text{subject to} \quad & u_1 \ge 0, \quad u_2 \ge 0.
\end{aligned} \qquad (53)

The solution of this problem is u_1^D = 0.556, u_2^D = 0.889 and the corresponding ε_k^D = 0.2469, which is smaller than the exact KKTPM value (ε_k^* = 0.3441). Figure 8 marks the quadratic constraint function and the corresponding ε_k^D value.
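As a quick numerical check, the direct_kktpm sketch given in Section 4.1 reproduces these numbers for the x^{(k)} = 1.0 iterate when the single objective is treated with a unit ASF weight (the call below is our own illustration, not code from the original study):

```python
import numpy as np

# x^(k) = 1.0: df/dx = 2x = 2, dg/dx = -1, g(1) = 0.5 - 1 = -0.5.
eps_D, u_M, u_J = direct_kktpm(grad_F=np.array([[2.0]]),
                               grads_g=np.array([[-1.0]]),
                               g_vals=np.array([-0.5]),
                               w=np.array([1.0]))
print(u_M, u_J, eps_D)   # approximately [0.556], [0.889], 0.2469, matching eq. (53)
```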

Next, we observe that the second constraint function is as follows:

\epsilon \ge 0.5\, u_2. \qquad (54)

Figure 8 shows that the ε_k^D point lies below this linear constraint surface at u_J^D. Substituting the u_M^D and u_J^D values found above into condition 34, we find that the left-hand side value is 1.197, which is greater than 1, thereby violating the condition. Since condition 34 is violated, ε_k^D ≠ ε_k^*. Using equations 42, 44 and 45, we then obtain the following estimates of KKTPM: ε_k^{Adj} = 0.4444, ε_k^P = 0.4049 and ε_k^{est} = 0.3654. Note that the exact KKTPM (0.3441) is bounded by the direct KKTPM (ε_k^D = 0.2469) and the adjusted KKTPM (ε_k^{Adj} = 0.4444). Moreover, due to the use of ε_k^P in the averaging process, the estimated KKTPM of 0.3654 is closer to the exact KKTPM value of 0.3441 than the two bracketing KKTPM values. This example clearly illustrates the relationship of each approximated KKTPM value with the exact KKTPM value.

6.1.2. G-Series Constrained Problems

Now we are ready to present results of our proposed approximation methods in comparison with exact KKTPM values on the standard single-objective constrained optimization problems known as g-functions [26]. We apply an elitist RGA [24] and record the population-best solution x^{(k)} at every generation. Thereafter, we compute and compare the exact and approximate KKTPM values.

Figure 9 is for problem g1 from [26], which has 35 constraints and 13 variables. We run an elite-preserving real-coded genetic algorithm (RGA) [27, 28] for 2 generations with a population size of 1. The KKT proximity measure of the population-best solution is calculated using the exact (shown with a solid line), direct, projected, adjusted and estimated (shown with upside-down triangles) methods discussed above. The proposed estimated values are close to the exact KKTPM value. It can also be observed that ε_k^* is always bounded by the two approximated KKTPM values ε_k^D (pink line) and ε_k^{Adj} (green line). For this problem, the RGA is able to find the true optimum (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1)^T with f = at generation 94. The exact KKTPM value ε_k^* is found to be exactly zero there. The inset figure shows this fact better in a semi-log plot. At this generation, all the approximated KKTPM values are also zero. The population-best solution for this run is infeasible until generation 10; then, in generation 11, the first feasible solution is found. Notice how all KKTPM measures, including the approximated ones, have an identical and large value (more than one) until generation 10; thereafter, when feasible solutions are found, the approximated KKTPM values differ from the exact KKTPM value. But the estimated KKTPM matches well with the exact KKTPM value. Importantly, the relative change in the exact KKTPM values over generations is in agreement with the changes in the estimated KKTPM values. We compute the correlation coefficient between ε_k^* and ε_k^{est} and observe a value of 0.9897, meaning that there is a very strong positive correlation between the changes in the two sets of KKTPM values.

Next, we solve the g2 problem using the elitist RGA with a population size of 1 and 2 generations. There are 2 variables and the optimal function value is f = . The RGA improves its best solution slowly towards the end of the run and, after 2 generations, finds its best solution having f = , indicating that the RGA is unable to find the true optimal solution for this problem. Figure 10 shows the variation of the exact and approximate KKTPM values of the population-best solution with generations.

Figure 9: Generation-wise KKT proximity measure of the population-best RGA solution for problem g1.

Figure 10: Generation-wise KKT proximity measure of the population-best RGA solution for problem g2.

To show that the true optimal solution has a zero KKTPM value, we replace the final-generation RGA solution with the true optimal solution. The figure shows that all the KKTPM values drop to zero at that generation. In this problem, the projected KKTPM approximation is closer to the exact KKTPM values. The correlation coefficient between ε_k^* and ε_k^{est} is found to be 0.976, meaning a strong positive correlation in their pattern of change with generations.

Next we consider the g4 problem, which has five variables and six constraints besides 10 variable bounds. Figure 11 shows that the elitist RGA, with a population size of 2, run for 5 generations, is able to quickly reduce the population-best function value close to the known optimum (f = ), but gets stuck at a near-optimal solution having f = at the 71-st generation. The exact KKTPM value for this solution is 0.271. However, when the known optimal solution is evaluated for its KKTPM value, it is found to be exactly zero. The estimated KKTPM (ε_k^{est}) value at the final RGA solution is 0.18. All approximated KKTPM values are found to be close to the exact KKTPM values. It is also interesting to note that the optimal KKTPM value is always bounded by the ε_k^D and ε_k^{Adj} values. The correlation coefficient between ε_k^* and ε_k^{Adj} is found to be .

The problem g6 contains two variables and six constraints, including four variable bounds which are treated as constraints. The elitist RGA is applied with 1, population members and is run for 2, generations. The reason for these large numbers is that for this problem we wanted to achieve high accuracy. The optimal solution for this problem has a function value of f = . Figure 12 shows the variation of the different KKTPM values with generations. The elitist RGA finds a solution with f = in 674 generations (having an exact KKTPM value of ) and a solution with f = at the 1,898-th generation having an exact KKTPM value of 0.59. The corresponding estimated KKTPM values at these two solutions are and 0.73, respectively. The correlation coefficient between ε_k^* and ε_k^{Adj} is found to be .

Problem g7 is a 10-variable, 28-constraint problem. The elitist RGA is run with 1, population members and for 1, generations. The optimal solution of this problem has a function value f = . Figure 13 shows the variation of the different KKTPM values with generations. The elitist RGA converges to a near-optimal solution having f = .

Figure 11: Generation-wise KKT proximity measure of the population-best RGA solution for problem g4.

Figure 12: Generation-wise KKT proximity measure of the population-best RGA solution for problem g6.

This solution has an exact KKTPM value of 0.661. The corresponding estimated KKTPM value is 0.857. The correlation coefficient between ε_k^* and ε_k^{Adj} is found to be .

The problem g8 has two variables and 1 constraints including variable bounds. The elitist RGA is applied to this problem using 2 population members for 1 generations. Figure 14 shows the variation of the different KKTPM values with generations. The elitist RGA finds the exact optimum (having f = ) at 48 generations. All KKTPM values at this generation are zero. The correlation coefficient between the ε_k^* and ε_k^{Adj} values from all generations of an RGA run is found to be 0.9985, as is also evident from the closeness of these two KKTPM values in the figure.

Figure 13: Generation-wise KKT proximity measure of the population-best RGA solution for problem g7.

Figure 14: Generation-wise KKT proximity measure of the population-best RGA solution for problem g8.

The next problem, g9, is a seven-variable, 18-constraint problem. The elitist RGA is applied using 1, population members and is run for 5 generations. The optimal function value is f = . As seen from Figure 15, the elitist RGA converges gradually toward this optimal solution with generations, but cannot find the true optimal solution even at generation 499.

It gets to a solution having f = , which has an exact KKTPM value of 0.9192, whereas the optimal solution has a KKTPM value of zero. Figure 15 adds the known optimal solution at the final generation to show this fact. Interestingly, for this problem, all approximated KKTPM values are almost identical to each other and to the exact KKTPM value. This can also be explained by the fact that the correlation coefficient of ε_k^* and ε_k^{Adj} is found to be one.

Figure 15: Generation-wise KKT proximity measure of the population-best RGA solution for problem g9.

Figure 16: Generation-wise KKT proximity measure of the population-best RGA solution for problem g10.

The next problem, g10, is a challenging eight-variable, 22-constraint problem. The optimal function value for this problem is f = . There is a scaling issue among the constraint functions of this problem, which makes it difficult to solve to optimality. As shown in Figure 16, the elitist RGA with 2 population members fails to converge near the optimum solution. The KKTPM values also stay away from zero even after 5 generations. The best RGA solution has a function value f = (11.3% worse than the optimal function value). The exact KKTPM value at this point is , as shown in the figure, whereas when the optimal solution is deliberately added at the final generation for KKTPM computation, its value is calculated to be zero. The corresponding estimated KKTPM value at the best RGA solution is , which is very close to the exact KKTPM value. For this problem as well, all approximated KKTPM values are very close to each other. It is not surprising that the correlation coefficient between ε_k^* and ε_k^{Adj} is found to be one.

Problem g18 is a nine-variable, 31-constraint problem. The optimal function value is f = . The elitist RGA is run with 1 population members. Figure 17 shows that g18 needs 3 generations to find a feasible solution. At the 31-st generation, the best RGA solution has a function value f = with an exact KKTPM value of . At generation 199, the population-best near-optimal solution with f = has a KKTPM value of . The corresponding estimated KKTPM values are and 0.1734, respectively. The correlation coefficient between ε_k^* and ε_k^{Adj} is found to be .

The final problem, g24, is a two-variable, six-constraint problem. The optimal solution has a function value f = . The elitist RGA is applied with a population size of 2 and is run for 1 generations. Figure 18 shows that the KKT proximity measure reduces with generations and quickly reaches a near-optimal solution. The best RGA solution after 1 generations has a function value of f = with an exact KKTPM value of . The corresponding estimated KKTPM value is . The exact and estimated KKTPM values follow a similar pattern, and the correlation coefficient between them over all generation-wise best solutions is found to be 0.9925, meaning a strong positive correlation between them.

Figure 17: Generation-wise KKT proximity measure of the population-best RGA solution for problem g18.

Figure 18: Generation-wise KKT proximity measure of the population-best RGA solution for problem g24.

6.1.3. Computational Time

The extensive results on the above constrained problems indicate that the correlation between the exact KKTPM and the proposed estimated KKTPM is quite strong and close to one. This means that the patterns of variation of these two KKTPM quantities are similar to each other, indicating that the estimated KKTPM can be used to indicate how closely the population-best solution has converged to the theoretical optimal solution, without knowing the true optimal solution. As mentioned earlier, the purpose of computing the approximated KKTPM, instead of the exact KKTPM, is the computational advantage, which we demonstrate in Table 1. The computational time needed to solve each of the G-series constrained problems, with the exact and approximated KKTPM values computed at every generation, is presented in seconds of CPU time. It is clear from the table that the optimization-based KKTPM computation takes two or three orders of magnitude more time than the approximated methods. The results in the above figures and the computational times shown in this table together indicate that the proposed estimated KKTPM procedure allows a near-exact KKTPM value to be computed with a fraction of the computational effort. The final column indicates the computational advantage of the proposed estimated KKTPM compared to the exact KKTPM computational time. As the table shows, the computational advantage varies from about 31 to 781. This is encouraging for the use of KKTPM in modifying evolutionary or classical optimization methods. While we do not address the possible modifications in this paper, in the next subsection we present results on multi-objective optimization problems.

6.2. Two-Objective ZDT Problems

First, we consider the commonly used two-objective ZDT problems [29]. ZDT1 has 30 variables and the efficient solutions occur for x_i = 0 for i = 2, 3, ..., 30. NSGA-II is applied with


More information

Survey of NLP Algorithms. L. T. Biegler Chemical Engineering Department Carnegie Mellon University Pittsburgh, PA

Survey of NLP Algorithms. L. T. Biegler Chemical Engineering Department Carnegie Mellon University Pittsburgh, PA Survey of NLP Algorithms L. T. Biegler Chemical Engineering Department Carnegie Mellon University Pittsburgh, PA NLP Algorithms - Outline Problem and Goals KKT Conditions and Variable Classification Handling

More information

2.3 Linear Programming

2.3 Linear Programming 2.3 Linear Programming Linear Programming (LP) is the term used to define a wide range of optimization problems in which the objective function is linear in the unknown variables and the constraints are

More information

A Brief Introduction to Multiobjective Optimization Techniques

A Brief Introduction to Multiobjective Optimization Techniques Università di Catania Dipartimento di Ingegneria Informatica e delle Telecomunicazioni A Brief Introduction to Multiobjective Optimization Techniques Maurizio Palesi Maurizio Palesi [mpalesi@diit.unict.it]

More information

KKT Examples. Stanley B. Gershwin Massachusetts Institute of Technology

KKT Examples. Stanley B. Gershwin Massachusetts Institute of Technology Stanley B. Gershwin Massachusetts Institute of Technology The purpose of this note is to supplement the slides that describe the Karush-Kuhn-Tucker conditions. Neither these notes nor the slides are a

More information

NONLINEAR. (Hillier & Lieberman Introduction to Operations Research, 8 th edition)

NONLINEAR. (Hillier & Lieberman Introduction to Operations Research, 8 th edition) NONLINEAR PROGRAMMING (Hillier & Lieberman Introduction to Operations Research, 8 th edition) Nonlinear Programming g Linear programming has a fundamental role in OR. In linear programming all its functions

More information

Numerical optimization. Numerical optimization. Longest Shortest where Maximal Minimal. Fastest. Largest. Optimization problems

Numerical optimization. Numerical optimization. Longest Shortest where Maximal Minimal. Fastest. Largest. Optimization problems 1 Numerical optimization Alexander & Michael Bronstein, 2006-2009 Michael Bronstein, 2010 tosca.cs.technion.ac.il/book Numerical optimization 048921 Advanced topics in vision Processing and Analysis of

More information

Evolutionary Multiobjective. Optimization Methods for the Shape Design of Industrial Electromagnetic Devices. P. Di Barba, University of Pavia, Italy

Evolutionary Multiobjective. Optimization Methods for the Shape Design of Industrial Electromagnetic Devices. P. Di Barba, University of Pavia, Italy Evolutionary Multiobjective Optimization Methods for the Shape Design of Industrial Electromagnetic Devices P. Di Barba, University of Pavia, Italy INTRODUCTION Evolutionary Multiobjective Optimization

More information

An Application of Multiobjective Optimization in Electronic Circuit Design

An Application of Multiobjective Optimization in Electronic Circuit Design An Application of Multiobjective Optimization in Electronic Circuit Design JAN MÍCHAL JOSEF DOBEŠ Department of Radio Engineering, Faculty of Electrical Engineering Czech Technical University in Prague

More information

On sequential optimality conditions for constrained optimization. José Mario Martínez martinez

On sequential optimality conditions for constrained optimization. José Mario Martínez  martinez On sequential optimality conditions for constrained optimization José Mario Martínez www.ime.unicamp.br/ martinez UNICAMP, Brazil 2011 Collaborators This talk is based in joint papers with Roberto Andreani

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

Constrained optimization: direct methods (cont.)

Constrained optimization: direct methods (cont.) Constrained optimization: direct methods (cont.) Jussi Hakanen Post-doctoral researcher jussi.hakanen@jyu.fi Direct methods Also known as methods of feasible directions Idea in a point x h, generate a

More information

10 Numerical methods for constrained problems

10 Numerical methods for constrained problems 10 Numerical methods for constrained problems min s.t. f(x) h(x) = 0 (l), g(x) 0 (m), x X The algorithms can be roughly divided the following way: ˆ primal methods: find descent direction keeping inside

More information

Optimality, Duality, Complementarity for Constrained Optimization

Optimality, Duality, Complementarity for Constrained Optimization Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear

More information

minimize x subject to (x 2)(x 4) u,

minimize x subject to (x 2)(x 4) u, Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for

More information

Convex Optimization and Modeling

Convex Optimization and Modeling Convex Optimization and Modeling Duality Theory and Optimality Conditions 5th lecture, 12.05.2010 Jun.-Prof. Matthias Hein Program of today/next lecture Lagrangian and duality: the Lagrangian the dual

More information

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44 Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)

More information

Optimization Problems with Constraints - introduction to theory, numerical Methods and applications

Optimization Problems with Constraints - introduction to theory, numerical Methods and applications Optimization Problems with Constraints - introduction to theory, numerical Methods and applications Dr. Abebe Geletu Ilmenau University of Technology Department of Simulation and Optimal Processes (SOP)

More information

Following The Central Trajectory Using The Monomial Method Rather Than Newton's Method

Following The Central Trajectory Using The Monomial Method Rather Than Newton's Method Following The Central Trajectory Using The Monomial Method Rather Than Newton's Method Yi-Chih Hsieh and Dennis L. Bricer Department of Industrial Engineering The University of Iowa Iowa City, IA 52242

More information

Multi-objective approaches in a single-objective optimization environment

Multi-objective approaches in a single-objective optimization environment Multi-objective approaches in a single-objective optimization environment Shinya Watanabe College of Information Science & Engineering, Ritsumeikan Univ. -- Nojihigashi, Kusatsu Shiga 55-8577, Japan sin@sys.ci.ritsumei.ac.jp

More information

AN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING

AN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING AN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING XIAO WANG AND HONGCHAO ZHANG Abstract. In this paper, we propose an Augmented Lagrangian Affine Scaling (ALAS) algorithm for general

More information

Optimization Tutorial 1. Basic Gradient Descent

Optimization Tutorial 1. Basic Gradient Descent E0 270 Machine Learning Jan 16, 2015 Optimization Tutorial 1 Basic Gradient Descent Lecture by Harikrishna Narasimhan Note: This tutorial shall assume background in elementary calculus and linear algebra.

More information

CONSTRAINED OPTIMALITY CRITERIA

CONSTRAINED OPTIMALITY CRITERIA 5 CONSTRAINED OPTIMALITY CRITERIA In Chapters 2 and 3, we discussed the necessary and sufficient optimality criteria for unconstrained optimization problems. But most engineering problems involve optimization

More information

A GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE

A GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE A GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-14-1 June 30,

More information

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Carlos Humes Jr. a, Benar F. Svaiter b, Paulo J. S. Silva a, a Dept. of Computer Science, University of São Paulo, Brazil Email: {humes,rsilva}@ime.usp.br

More information

Optimality Conditions

Optimality Conditions Chapter 2 Optimality Conditions 2.1 Global and Local Minima for Unconstrained Problems When a minimization problem does not have any constraints, the problem is to find the minimum of the objective function.

More information

Support Vector Machine (SVM) and Kernel Methods

Support Vector Machine (SVM) and Kernel Methods Support Vector Machine (SVM) and Kernel Methods CE-717: Machine Learning Sharif University of Technology Fall 2014 Soleymani Outline Margin concept Hard-Margin SVM Soft-Margin SVM Dual Problems of Hard-Margin

More information

CONSTRAINED NONLINEAR PROGRAMMING

CONSTRAINED NONLINEAR PROGRAMMING 149 CONSTRAINED NONLINEAR PROGRAMMING We now turn to methods for general constrained nonlinear programming. These may be broadly classified into two categories: 1. TRANSFORMATION METHODS: In this approach

More information

Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming

Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming Zhaosong Lu October 5, 2012 (Revised: June 3, 2013; September 17, 2013) Abstract In this paper we study

More information

Lecture 18: Optimization Programming

Lecture 18: Optimization Programming Fall, 2016 Outline Unconstrained Optimization 1 Unconstrained Optimization 2 Equality-constrained Optimization Inequality-constrained Optimization Mixture-constrained Optimization 3 Quadratic Programming

More information

CS-E4830 Kernel Methods in Machine Learning

CS-E4830 Kernel Methods in Machine Learning CS-E4830 Kernel Methods in Machine Learning Lecture 3: Convex optimization and duality Juho Rousu 27. September, 2017 Juho Rousu 27. September, 2017 1 / 45 Convex optimization Convex optimisation This

More information

CHAPTER 10: Numerical Methods for DAEs

CHAPTER 10: Numerical Methods for DAEs CHAPTER 10: Numerical Methods for DAEs Numerical approaches for the solution of DAEs divide roughly into two classes: 1. direct discretization 2. reformulation (index reduction) plus discretization Direct

More information

8 Barrier Methods for Constrained Optimization

8 Barrier Methods for Constrained Optimization IOE 519: NL, Winter 2012 c Marina A. Epelman 55 8 Barrier Methods for Constrained Optimization In this subsection, we will restrict our attention to instances of constrained problem () that have inequality

More information

Machine Learning and Data Mining. Support Vector Machines. Kalev Kask

Machine Learning and Data Mining. Support Vector Machines. Kalev Kask Machine Learning and Data Mining Support Vector Machines Kalev Kask Linear classifiers Which decision boundary is better? Both have zero training error (perfect training accuracy) But, one of them seems

More information

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods AM 205: lecture 19 Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods Quasi-Newton Methods General form of quasi-newton methods: x k+1 = x k α

More information

Statistical Machine Learning from Data

Statistical Machine Learning from Data Samy Bengio Statistical Machine Learning from Data 1 Statistical Machine Learning from Data Support Vector Machines Samy Bengio IDIAP Research Institute, Martigny, Switzerland, and Ecole Polytechnique

More information

Motivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem:

Motivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem: CDS270 Maryam Fazel Lecture 2 Topics from Optimization and Duality Motivation network utility maximization (NUM) problem: consider a network with S sources (users), each sending one flow at rate x s, through

More information

Pacific Journal of Optimization (Vol. 2, No. 3, September 2006) ABSTRACT

Pacific Journal of Optimization (Vol. 2, No. 3, September 2006) ABSTRACT Pacific Journal of Optimization Vol., No. 3, September 006) PRIMAL ERROR BOUNDS BASED ON THE AUGMENTED LAGRANGIAN AND LAGRANGIAN RELAXATION ALGORITHMS A. F. Izmailov and M. V. Solodov ABSTRACT For a given

More information

Constrained Real-Parameter Optimization with Generalized Differential Evolution

Constrained Real-Parameter Optimization with Generalized Differential Evolution 2006 IEEE Congress on Evolutionary Computation Sheraton Vancouver Wall Centre Hotel, Vancouver, BC, Canada July 16-21, 2006 Constrained Real-Parameter Optimization with Generalized Differential Evolution

More information

Numerical Tests of Center Series

Numerical Tests of Center Series Numerical Tests of Center Series Jorge L. delyra Department of Mathematical Physics Physics Institute University of São Paulo April 3, 05 Abstract We present numerical tests of several simple examples

More information

Support Vector Machine (SVM) and Kernel Methods

Support Vector Machine (SVM) and Kernel Methods Support Vector Machine (SVM) and Kernel Methods CE-717: Machine Learning Sharif University of Technology Fall 2015 Soleymani Outline Margin concept Hard-Margin SVM Soft-Margin SVM Dual Problems of Hard-Margin

More information

More on Lagrange multipliers

More on Lagrange multipliers More on Lagrange multipliers CE 377K April 21, 2015 REVIEW The standard form for a nonlinear optimization problem is min x f (x) s.t. g 1 (x) 0. g l (x) 0 h 1 (x) = 0. h m (x) = 0 The objective function

More information

PIBEA: Prospect Indicator Based Evolutionary Algorithm for Multiobjective Optimization Problems

PIBEA: Prospect Indicator Based Evolutionary Algorithm for Multiobjective Optimization Problems PIBEA: Prospect Indicator Based Evolutionary Algorithm for Multiobjective Optimization Problems Pruet Boonma Department of Computer Engineering Chiang Mai University Chiang Mai, 52, Thailand Email: pruet@eng.cmu.ac.th

More information

Machine Learning. Support Vector Machines. Manfred Huber

Machine Learning. Support Vector Machines. Manfred Huber Machine Learning Support Vector Machines Manfred Huber 2015 1 Support Vector Machines Both logistic regression and linear discriminant analysis learn a linear discriminant function to separate the data

More information

Behavior of EMO Algorithms on Many-Objective Optimization Problems with Correlated Objectives

Behavior of EMO Algorithms on Many-Objective Optimization Problems with Correlated Objectives H. Ishibuchi N. Akedo H. Ohyanagi and Y. Nojima Behavior of EMO algorithms on many-objective optimization problems with correlated objectives Proc. of 211 IEEE Congress on Evolutionary Computation pp.

More information

Interactive Evolutionary Multi-Objective Optimization and Decision-Making using Reference Direction Method

Interactive Evolutionary Multi-Objective Optimization and Decision-Making using Reference Direction Method Interactive Evolutionary Multi-Objective Optimization and Decision-Making using Reference Direction Method ABSTRACT Kalyanmoy Deb Department of Mechanical Engineering Indian Institute of Technology Kanpur

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

Multi Objective Optimization

Multi Objective Optimization Multi Objective Optimization Handout November 4, 2011 (A good reference for this material is the book multi-objective optimization by K. Deb) 1 Multiple Objective Optimization So far we have dealt with

More information

The Proximal Gradient Method

The Proximal Gradient Method Chapter 10 The Proximal Gradient Method Underlying Space: In this chapter, with the exception of Section 10.9, E is a Euclidean space, meaning a finite dimensional space endowed with an inner product,

More information

On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming

On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming Altuğ Bitlislioğlu and Colin N. Jones Abstract This technical note discusses convergence

More information

A Penalty-Interior-Point Algorithm for Nonlinear Constrained Optimization

A Penalty-Interior-Point Algorithm for Nonlinear Constrained Optimization Noname manuscript No. (will be inserted by the editor) A Penalty-Interior-Point Algorithm for Nonlinear Constrained Optimization Fran E. Curtis February, 22 Abstract Penalty and interior-point methods

More information

Homework 3. Convex Optimization /36-725

Homework 3. Convex Optimization /36-725 Homework 3 Convex Optimization 10-725/36-725 Due Friday October 14 at 5:30pm submitted to Christoph Dann in Gates 8013 (Remember to a submit separate writeup for each problem, with your name at the top)

More information

A Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties

A Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties A Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties André L. Tits Andreas Wächter Sasan Bahtiari Thomas J. Urban Craig T. Lawrence ISR Technical

More information

Generalization of Dominance Relation-Based Replacement Rules for Memetic EMO Algorithms

Generalization of Dominance Relation-Based Replacement Rules for Memetic EMO Algorithms Generalization of Dominance Relation-Based Replacement Rules for Memetic EMO Algorithms Tadahiko Murata 1, Shiori Kaige 2, and Hisao Ishibuchi 2 1 Department of Informatics, Kansai University 2-1-1 Ryozenji-cho,

More information

AM 205: lecture 19. Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods

AM 205: lecture 19. Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods AM 205: lecture 19 Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods Optimality Conditions: Equality Constrained Case As another example of equality

More information

Scientific Computing: Optimization

Scientific Computing: Optimization Scientific Computing: Optimization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 March 8th, 2011 A. Donev (Courant Institute) Lecture

More information

Support Vector Machines. CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington

Support Vector Machines. CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington Support Vector Machines CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington 1 A Linearly Separable Problem Consider the binary classification

More information

OPRE 6201 : 3. Special Cases

OPRE 6201 : 3. Special Cases OPRE 6201 : 3. Special Cases 1 Initialization: The Big-M Formulation Consider the linear program: Minimize 4x 1 +x 2 3x 1 +x 2 = 3 (1) 4x 1 +3x 2 6 (2) x 1 +2x 2 3 (3) x 1, x 2 0. Notice that there are

More information

A STABILIZED SQP METHOD: GLOBAL CONVERGENCE

A STABILIZED SQP METHOD: GLOBAL CONVERGENCE A STABILIZED SQP METHOD: GLOBAL CONVERGENCE Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-13-4 Revised July 18, 2014, June 23,

More information

The Squared Slacks Transformation in Nonlinear Programming

The Squared Slacks Transformation in Nonlinear Programming Technical Report No. n + P. Armand D. Orban The Squared Slacks Transformation in Nonlinear Programming August 29, 2007 Abstract. We recall the use of squared slacks used to transform inequality constraints

More information

Inter-Relationship Based Selection for Decomposition Multiobjective Optimization

Inter-Relationship Based Selection for Decomposition Multiobjective Optimization Inter-Relationship Based Selection for Decomposition Multiobjective Optimization Ke Li, Sam Kwong, Qingfu Zhang, and Kalyanmoy Deb Department of Electrical and Computer Engineering Michigan State University,

More information

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The

More information

The Simplex Method: An Example

The Simplex Method: An Example The Simplex Method: An Example Our first step is to introduce one more new variable, which we denote by z. The variable z is define to be equal to 4x 1 +3x 2. Doing this will allow us to have a unified

More information

Branch-and-cut Approaches for Chance-constrained Formulations of Reliable Network Design Problems

Branch-and-cut Approaches for Chance-constrained Formulations of Reliable Network Design Problems Branch-and-cut Approaches for Chance-constrained Formulations of Reliable Network Design Problems Yongjia Song James R. Luedtke August 9, 2012 Abstract We study solution approaches for the design of reliably

More information

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL) Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where

More information

DESIGN OF OPTIMUM CROSS-SECTIONS FOR LOAD-CARRYING MEMBERS USING MULTI-OBJECTIVE EVOLUTIONARY ALGORITHMS

DESIGN OF OPTIMUM CROSS-SECTIONS FOR LOAD-CARRYING MEMBERS USING MULTI-OBJECTIVE EVOLUTIONARY ALGORITHMS DESIGN OF OPTIMUM CROSS-SECTIONS FOR LOAD-CARRING MEMBERS USING MULTI-OBJECTIVE EVOLUTIONAR ALGORITHMS Dilip Datta Kanpur Genetic Algorithms Laboratory (KanGAL) Deptt. of Mechanical Engg. IIT Kanpur, Kanpur,

More information

Support Vector Machines for Regression

Support Vector Machines for Regression COMP-566 Rohan Shah (1) Support Vector Machines for Regression Provided with n training data points {(x 1, y 1 ), (x 2, y 2 ),, (x n, y n )} R s R we seek a function f for a fixed ɛ > 0 such that: f(x

More information

Support Vector Machine (continued)

Support Vector Machine (continued) Support Vector Machine continued) Overlapping class distribution: In practice the class-conditional distributions may overlap, so that the training data points are no longer linearly separable. We need

More information

A Trust Funnel Algorithm for Nonconvex Equality Constrained Optimization with O(ɛ 3/2 ) Complexity

A Trust Funnel Algorithm for Nonconvex Equality Constrained Optimization with O(ɛ 3/2 ) Complexity A Trust Funnel Algorithm for Nonconvex Equality Constrained Optimization with O(ɛ 3/2 ) Complexity Mohammadreza Samadi, Lehigh University joint work with Frank E. Curtis (stand-in presenter), Lehigh University

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

N. L. P. NONLINEAR PROGRAMMING (NLP) deals with optimization models with at least one nonlinear function. NLP. Optimization. Models of following form:

N. L. P. NONLINEAR PROGRAMMING (NLP) deals with optimization models with at least one nonlinear function. NLP. Optimization. Models of following form: 0.1 N. L. P. Katta G. Murty, IOE 611 Lecture slides Introductory Lecture NONLINEAR PROGRAMMING (NLP) deals with optimization models with at least one nonlinear function. NLP does not include everything

More information

MVE165/MMG631 Linear and integer optimization with applications Lecture 13 Overview of nonlinear programming. Ann-Brith Strömberg

MVE165/MMG631 Linear and integer optimization with applications Lecture 13 Overview of nonlinear programming. Ann-Brith Strömberg MVE165/MMG631 Overview of nonlinear programming Ann-Brith Strömberg 2015 05 21 Areas of applications, examples (Ch. 9.1) Structural optimization Design of aircraft, ships, bridges, etc Decide on the material

More information

NLPJOB : A Fortran Code for Multicriteria Optimization - User s Guide -

NLPJOB : A Fortran Code for Multicriteria Optimization - User s Guide - NLPJOB : A Fortran Code for Multicriteria Optimization - User s Guide - Address: Prof. K. Schittkowski Department of Computer Science University of Bayreuth D - 95440 Bayreuth Phone: (+49) 921 557750 E-mail:

More information

IV. Violations of Linear Programming Assumptions

IV. Violations of Linear Programming Assumptions IV. Violations of Linear Programming Assumptions Some types of Mathematical Programming problems violate at least one condition of strict Linearity - Deterministic Nature - Additivity - Direct Proportionality

More information

A New Sequential Optimality Condition for Constrained Nonsmooth Optimization

A New Sequential Optimality Condition for Constrained Nonsmooth Optimization A New Sequential Optimality Condition for Constrained Nonsmooth Optimization Elias Salomão Helou Sandra A. Santos Lucas E. A. Simões November 23, 2018 Abstract We introduce a sequential optimality condition

More information