R 676   Philips Res. Repts 23, 408-423, 1968

CONSTRAINED OPTIMIZATION VIA PENALTY FUNCTIONS

by F. A. LOOTSMA

Abstract

This paper is concerned with a generalized combination of the interior-point methods and the outside-in methods for solving constrained-optimization problems. Primal and dual convergence are established, and a basis for extrapolation is presented which reflects the nature of the singularities introduced into the penalty function. A discussion of the applications and a numerical example conclude the paper.

1. Introduction

Throughout this paper we shall be dealing with a number of methods for solving the constrained-minimization problem:

minimize $f(x)$ subject to the constraints $g_i(x) \geq 0$, $i = 1, \dots, m$,   (1.1)

where $x$ denotes an $n$-dimensional vector. A common feature of the methods under consideration is that a minimum solution of (1.1) is obtained by sequential unconstrained minimization of a penalty function incorporating the objective function $f$ and the constraint functions $g_1, \dots, g_m$ in a particular way.

Interior-point methods

The interior-point methods are concerned with penalty functions of the form

$$f(x) - r \sum_{i=1}^{m} \varphi[g_i(x)],$$   (1.2)

where $r$ is a positive controlling parameter. The function $\varphi$ is a function of one variable $y$; it is defined for $y > 0$ and

$$\lim_{y \downarrow 0} \varphi(y) = -\infty.$$

Hence, (1.2) is defined on the interior $R_0$ of the constraint set

$$R = \{x \mid g_i(x) \geq 0;\ i = 1, \dots, m\},$$   (1.3)

but it is singular at every boundary point of $R$. Let $x(r)$ denote a point minimizing (1.2) over $R_0$ for a fixed, positive value of $r$, and let $\bar{x}$ be a minimum solution of (1.1). Under mild conditions we have

$$\lim_{r \downarrow 0} f[x(r)] = f(\bar{x}).$$
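By way of illustration (this sketch is ours, not part of the original paper), the interior-point scheme amounts to a short loop; here in Python, with the logarithmic choice $\varphi(y) = \ln y$ discussed below, and scipy's Nelder-Mead standing in for the unconstrained minimizer:

```python
import numpy as np
from scipy.optimize import minimize

def interior_point_path(f, g, x0, r_values):
    """Minimize the barrier function (1.2) with phi(y) = ln y for a
    decreasing sequence of r-values, warm-starting each minimization
    at the previous minimizer x(r). x0 must lie in the interior R0."""
    x = np.asarray(x0, dtype=float)
    path = []
    for r in r_values:
        def penalty(xx):
            gv = [gi(xx) for gi in g]
            if any(v <= 0 for v in gv):
                return np.inf          # (1.2) is singular on the boundary of R
            return f(xx) - r * sum(np.log(v) for v in gv)
        x = minimize(penalty, x, method='Nelder-Mead').x
        path.append((r, x.copy()))     # x(r) approaches a minimum solution as r -> 0
    return path
```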

This convergence result provides the framework of several methods for solving (1.1). Frisch 11) and Parisot 15) have suggested logarithmic programming, using $\varphi(y) = \ln y$. This version has been worked out previously 13,14). The sequential-unconstrained-minimization technique (SUMT), using $\varphi(y) = -y^{-1}$, has been studied thoroughly by Fiacco and McCormick 4,5,6). In using these methods one has to find points $x(r_1), x(r_2), \dots$ minimizing (1.2) for positive, decreasing values $r_1, r_2, \dots$ of the controlling parameter $r$. One can then approximate $\bar{x}$ by means of an extrapolation device which is based on the behaviour of $x(r)$ as a function of $r$ in a neighbourhood of $r = 0$. Fiacco and McCormick 6) have demonstrated that SUMT provides a vector function $x(r)$ which can be expanded in a power series about $r = 0$ in terms of $\sqrt{r}$, provided that the objective function $f$ and the constraint functions $g_1, \dots, g_m$ satisfy a number of suitable conditions implying, among other things, that a minimum solution $\bar{x}$ of (1.1) is uniquely determined. Recently, we have shown 14) that under similar conditions the vector function $x(r)$ originating from the logarithmic-programming method has a series expansion about $r = 0$ in terms of $r$.

The method of centres proposed by Huard 12) does not completely fit into the above scheme. In a previous paper 14), however, we have shown that a close relationship exists between the method of centres and logarithmic programming. In fact, it is a parameter-free version of logarithmic programming; it does not work explicitly with a controlling parameter $r$. Similarly, Fiacco and McCormick 8) have discovered a parameter-free version of SUMT. Furthermore, it can easily be shown that any interior-point method using a penalty function (1.2) has associated with it a parameter-free version.

Outside-in methods

A second class of methods to be studied here are mostly referred to as outside-in methods. They handle penalty functions of the type

$$f(x) - \frac{1}{r} \sum_{i=1}^{m} \psi[g_i(x)],$$   (1.4)

with a positive controlling parameter $r$. The function $\psi(y)$ vanishes for all $y \geq 0$ and is negative for all $y < 0$, so that, if we define the loss term

$$L(x) = -\sum_{i=1}^{m} \psi[g_i(x)],$$

we obtain

$$L(x) = 0 \quad \text{for all } x \in R, \qquad L(x) > 0 \quad \text{for all } x \notin R.$$
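The outside-in scheme admits an equally short sketch (again ours, with the quadratic loss $\psi(y) = -[\min(0,y)]^2$, the $p = 2$ case of the family mentioned below); note that the starting point may violate the constraints:

```python
import numpy as np
from scipy.optimize import minimize

def outside_in_path(f, g, x0, r_values):
    """Minimize the penalty function (1.4) with psi(y) = -min(0, y)**2
    (quadratic loss) for decreasing r; the minimizers x(r) approach the
    constraint set R from the outside."""
    x = np.asarray(x0, dtype=float)
    for r in r_values:
        loss = lambda xx: sum(min(0.0, gi(xx)) ** 2 for gi in g)  # L(x) >= 0
        x = minimize(lambda xx: f(xx) + loss(xx) / r, x,
                     method='Nelder-Mead').x
    return x   # feasible in the limit; f(x(r)) <= f(x-bar) along the way
```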

Let $x(r)$ here denote a point minimizing (1.4) over the $n$-dimensional vector space for a fixed, positive $r$. It can readily be seen that

$$f[x(r)] \leq f[x(r)] + \frac{1}{r} L[x(r)] \leq f(\bar{x}) + \frac{1}{r} L(\bar{x}) = f(\bar{x}).$$

Hence, $x(r)$ is a minimum solution of (1.1) as soon as it is feasible, and indeed, under certain convexity conditions we have

$$\lim_{r \downarrow 0} L[x(r)] = 0,$$

which proves the convergence of $x(r)$ to a minimum solution of (1.1) and clarifies the name "outside-in". A particular class of outside-in methods is given by Zangwill 17), who has been working with the function

$$\psi(y) = -[-\min(0, y)]^p, \quad p \geq 1.$$

Substituting $p = 2$ one obtains precisely the slacked-unconstrained-minimization technique (SLUMT) of Fiacco and McCormick 7).

Purpose of the present paper

Outside-in methods offer the striking advantage that unconstrained minimization of the penalty function (1.4) can start from any point. For interior-point methods, however, special starting procedures have to be developed which can be applied if an interior starting point is not available. On the other hand, this reveals one of the possible traps of outside-in methods: theoretically, the objective function and the constraint functions have to be defined on the $n$-dimensional vector space. Furthermore, the controlling parameter $r$ has to be given a value which is sufficiently small in order that a minimizing point $x(r)$ exists. Interior-point methods do not suffer from these disadvantages. The penalty function (1.2) contains a boundary-repulsion term which generates a barrier in order to prevent an unconstrained-minimization procedure from attaining a point outside the constraint set $R$; a minimizing point $x(r)$ exists under mild conditions for any $r > 0$.

A natural way out of this dilemma seems to be a combination of these methods, reducing the disadvantage of working on an unbounded set but preserving the easy starting facilities. We are therefore led to a study of the penalty function

$$f(x) - r \sum_{i \in I_1} \varphi[g_i(x)] - \frac{1}{s} \sum_{i \in I_2} \psi[g_i(x)],$$   (1.5)

where $I_1$ and $I_2$ denote disjoint index sets whose union is $\{1, \dots, m\}$. There are many ways of partitioning the constraints, but it is reasonable that the starting point $x(0)$ of a computational procedure should indicate which constraint will be assigned to $I_1$ and which to $I_2$. One could think of the constraints as classified in such a way that

$$I_1 = \{i \mid g_i[x(0)] > 0;\ 1 \leq i \leq m\},$$
$$I_2 = \{i \mid g_i[x(0)] \leq 0;\ 1 \leq i \leq m\}.$$

We shall be assuming that the set

$$E = \{x \mid g_i(x) \geq 0;\ i \in I_1\}$$   (1.6)

is bounded. Here we have a situation which is very common in practice: one frequently comes across constrained-minimization problems where it is easy to find a point satisfying a number of (relatively simple) constraints generating a bounded subset of the $n$-dimensional vector space. The penalty function defined by (1.5) contains two controlling parameters $r$ and $s$, which is clearly justified by the different treatment of the corresponding constraints.

The author has recently discovered that the above combination of interior-point and outside-in methods has also been discussed by Fiacco 9) from a more general point of view. The dual convergence, however, and the subsequent basis for extrapolation presented here seem to be new results.

2. Problem conditions

We shall start off by imposing the following conditions on problem (1.1).

Condition 2.1. The functions $f, -g_1, \dots, -g_m$ are convex and have continuous second-order partial derivatives on an open, convex subset $V$ of the $n$-dimensional vector space.

Condition 2.2. The set $E$ defined by (1.6) is a bounded subset of $V$. The constraint set $R$ defined by (1.3) has a non-empty interior $R_0$.

As a consequence of these conditions, problem (1.1) is that of minimizing a convex function $f$ over the convex, compact set $R$. It can easily be seen that the problem has a minimum solution, to be denoted by $\bar{x}$. Furthermore, a positive number $U$ and a number $L$ can be found such that

$$g_i(x) \leq U,\ i \in I_1; \qquad f(x) \geq L \qquad \text{for any } x \in E.$$   (2.1)
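Before turning to the optimality apparatus, the partition (1.6) and the mixed penalty function (1.5) can be made concrete in a short sketch (ours; it fixes the pair $\varphi(y) = \ln y$, $\psi(y) = -[\min(0,y)]^2$ that reappears in secs 6 and 7):

```python
import numpy as np

def partition_constraints(g, x0):
    """Partition (1.6): I1 holds the constraints strictly satisfied at the
    starting point x0, I2 the violated or active ones."""
    I1 = [i for i, gi in enumerate(g) if gi(x0) > 0]
    I2 = [i for i, gi in enumerate(g) if gi(x0) <= 0]
    return I1, I2

def make_mixed_penalty(f, g, I1, I2, r, s):
    """Penalty function (1.5) with phi(y) = ln y and psi(y) = -min(0,y)**2,
    defined on the interior E0 (all g_i > 0 for i in I1)."""
    def Q(x):
        if any(g[i](x) <= 0 for i in I1):
            return np.inf                                 # boundary repulsion
        barrier = sum(np.log(g[i](x)) for i in I1)        # Phi(x) of sec. 3
        loss = sum(-min(0.0, g[i](x)) ** 2 for i in I2)   # P(x) of sec. 3, <= 0
        return f(x) - r * barrier - loss / s
    return Q
```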

Kuhn-Tucker relations and uniqueness conditions

Let $\nabla f$ denote the gradient of $f$; a similar notation will be employed for the gradients of the constraint functions. Under the above conditions $\bar{x}$ is a minimum solution of (1.1) if, and only if, there is an $m$-vector $u$ with non-negative components $u_1, \dots, u_m$ such that the equations

$$\nabla f(x) - \sum_{i=1}^{m} u_i\,\nabla g_i(x) = 0,$$
$$u_i\,g_i(x) = 0; \quad i = 1, \dots, m$$   (2.2)

are satisfied for $x = \bar{x}$, $u_i = \bar{u}_i$ ($i = 1, \dots, m$). These are the Kuhn-Tucker relations for minima of convex-programming problems, constituting a system of $(m + n)$ non-linear equations. Let $J$ denote the Jacobian matrix of (2.2) evaluated at the point $(\bar{x}, \bar{u})$. If $J$ is non-singular, it must be true by the inverse-function theorem 1,16) that a neighbourhood $N$ of $(\bar{x}, \bar{u})$ exists such that $(\bar{x}, \bar{u})$ is the unique solution of (2.2) in $N$. Then $\bar{x}$ is the unique minimum solution of problem (1.1), since we are dealing with a convex-programming problem, and $\bar{u}$ is also uniquely determined. We shall accordingly supply a set of conditions which, if satisfied, guarantee that $J$ is non-singular.

For notational convenience we shall employ the following symbols. The Hessian matrix of $f$ will be represented by $\nabla^2 f$, etc. Let

$$D(x, u) = \nabla^2 f(x) - \sum_{i=1}^{m} u_i\,\nabla^2 g_i(x).$$

We shall think of the constraints as arranged in such a way that

$$g_i(\bar{x}) = 0, \quad i = 1, \dots, \alpha; \qquad g_i(\bar{x}) > 0, \quad i = \alpha + 1, \dots, m,$$   (2.3)

whence

$$\bar{u}_i \geq 0, \quad i = 1, \dots, \alpha; \qquad \bar{u}_i = 0, \quad i = \alpha + 1, \dots, m.$$

Now, a set of sufficient conditions for $J$ to be non-singular is given by

Condition 2.3. The gradients $\nabla g_i(\bar{x})$ ($i = 1, \dots, \alpha$) are linearly independent.

Condition 2.4. The multipliers $\bar{u}_i$ ($i = 1, \dots, \alpha$) are positive.

Condition 2.5. Either $D(\bar{x}, \bar{u})$ is positive definite or $\alpha = n$.

We may note that $D(\bar{x}, \bar{u})$ is already positive semi-definite in virtue of conditions 2.1 and 2.2. If $D(\bar{x}, \bar{u})$ is singular, as in the linear-programming case, it is sufficient to stipulate exactly $n$ active constraints at $\bar{x}$ satisfying conditions 2.3 and 2.4. The proof that $J$ is non-singular under conditions 2.3 to 2.5 will be omitted here. For more details the reader is referred to a previous paper 14).
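The relations (2.2) also furnish a practical optimality check: given a candidate point $x$ and multipliers $u$, one measures how nearly the stationarity and complementary-slackness equations hold. A minimal sketch (names ours; gradients supplied by the caller):

```python
import numpy as np

def kkt_residual(grad_f, grad_g, g, x, u):
    """Residual of the Kuhn-Tucker relations (2.2): the norm of the
    Lagrangian gradient and the largest complementarity defect. Small
    residuals, together with u >= 0 and g(x) >= 0, indicate an
    approximate solution of (1.1)."""
    stationarity = grad_f(x) - sum(ui * dgi(x) for ui, dgi in zip(u, grad_g))
    complementarity = np.array([ui * gi(x) for ui, gi in zip(u, g)])
    return np.linalg.norm(stationarity), np.abs(complementarity).max()
```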

The dual problem

Problem (1.1), which from now on will be referred to as the primal problem, has associated with it a dual problem, which can be formulated as the maximizing of the dual objective function

$$F(x, u) = f(x) - \sum_{i=1}^{m} u_i\,g_i(x)$$

subject to the dual constraints

$$\nabla f(x) - \sum_{i=1}^{m} u_i\,\nabla g_i(x) = 0, \qquad u_i \geq 0, \quad i = 1, \dots, m.$$

Using the convexity properties of the functions involved one can easily show that any point $\tilde{x} \in R$ (a primal-feasible solution) and any point $(x, u)$ satisfying the dual constraints (a dual-feasible solution) are connected by the inequality

$$F(x, u) \leq f(\tilde{x}).$$   (2.4)

Consequently, the maximum of the dual problem cannot exceed the minimum of the primal. Furthermore, the point $(\bar{x}, \bar{u})$ satisfying (2.2) is a dual-feasible solution possessing the property

$$F(\bar{x}, \bar{u}) = f(\bar{x}),$$

which implies that $(\bar{x}, \bar{u})$ is a maximum solution of the dual problem.

3. Penalty conditions

Let us now consider the penalty function defined by (1.5). We shall impose the conditions listed below on the functions $\varphi$ and $\psi$ appearing in (1.5).

Condition 3.1. The function $\varphi(y)$ is concave and analytic for every $y > 0$.

Condition 3.2. Its derivative $\varphi'(y)$ is positive for every $y > 0$ and has a pole of order $\lambda$ at $y = 0$.

Condition 3.3. The function $\psi(y)$ is concave for every $y$ and $\psi(y) = 0$ for every $y \geq 0$. There is a function $\omega(y)$ and a positive $\varepsilon$ such that $\omega(y)$ is analytic for all $y < \varepsilon$ and $\omega(y) = \psi(y)$ for all $y \leq 0$.

Condition 3.4. The derivative $\omega'(y)$ of $\omega(y)$ is positive for every $y < 0$ and has a zero of order $\mu$ at $y = 0$.

It can readily be seen that $\varphi$ is monotonically increasing in the interval $(0, \infty)$; furthermore, it has a logarithmic singularity or, if $\lambda > 1$, a pole of order $\lambda - 1$ at $y = 0$. This can only be true if

$$\lim_{y \downarrow 0} \varphi(y) = -\infty.$$   (3.1)

Let

$$\lim_{y \to \infty} \varphi(y) = b,$$   (3.2)

where $b$ is finite or infinite. Then the inverse $\varphi^{-1}$ of $\varphi$ is defined unambiguously in the interval $(-\infty, b)$, valued positively and increasing monotonically. We shall find it convenient to employ the following symbols for the boundary-repulsion term and the loss term respectively:

$$\Phi(x) = \sum_{i \in I_1} \varphi[g_i(x)],$$   (3.3)
$$P(x) = \sum_{i \in I_2} \psi[g_i(x)],$$   (3.4)

so that

$$Q_{rs}(x) = f(x) - r\,\Phi(x) - \frac{1}{s}\,P(x).$$   (3.5)

Let $E_0$ denote the interior of $E$. From (2.1) we can infer that a number $B$ exists such that

$$\Phi(x) \leq B \quad \text{for any } x \in E.$$   (3.6)

Moreover,

$$P(x) = 0 \quad \text{for any } x \in R, \qquad P(x) < 0 \quad \text{for any } x \in E,\ x \notin R.$$   (3.7)

Conditions 3.1 and 3.3 imply that $Q_{rs}$ is defined and convex over $E_0$. The following condition is merely imposed in order to ensure the uniqueness of a point minimizing $Q_{rs}$ over $E_0$.

Condition 3.5. The penalty function $Q_{rs}$ is strictly convex over $E_0$ for any fixed $r > 0$ and $s > 0$.

The properties of $\varphi'$ and $\omega'$ at $y = 0$ (a pole and a zero respectively) will be used extensively in sec. 6, where a basis for extrapolation is established. Hence, a discussion of conditions 3.2 and 3.4 will be postponed until that subject is reached.

4. Existence and primal convergence

Theorem 4.1 (existence theorem). For any $r > 0$ and $s > 0$ a point $x(r,s) \in E_0$ exists minimizing $Q_{rs}$ over $E_0$.

Proof. Consider an arbitrary $x(0) \in E_0$ and let $Q_{rs}[x(0)]$ be denoted by $M_0$. Any point $x \in E_0$ with the property $Q_{rs}(x) \leq M_0$ is governed by the inequality

$$\Phi(x) \geq \frac{L - M_0}{r}$$

and hence

$$\varphi[g_i(x)] \geq \frac{L - M_0}{r} - (m_1 - 1)\,\varphi(U) \overset{\text{def}}{=} M, \quad i \in I_1,$$

where $m_1$ denotes the number of elements in $I_1$. This result certainly implies that

$$\varphi[g_i(x)] \geq \min(M, \tfrac{1}{2} b) \overset{\text{def}}{=} K, \quad i \in I_1,$$

where $b$ is the number defined by (3.2). The right-hand member $K$ belongs to the interval $(-\infty, b)$ and we can accordingly write

$$g_i(x) \geq \varphi^{-1}(K), \quad i \in I_1.$$   (4.1)

It can readily be verified that the set $S$, consisting of all points $x \in E_0$ satisfying (4.1), is a non-empty (since $x(0) \in S$), compact subset of $E_0$. The penalty function is continuous on $S$, so that a point $x(r,s)$ exists minimizing $Q_{rs}$ over $S$. But from the construction of $S$ it follows that $x(r,s)$ is a point minimizing $Q_{rs}$ over $E_0$. Condition 3.5 implies that $x(r,s)$ is uniquely determined for any $r > 0$ and $s > 0$.

Theorem 4.2 (primal-convergence theorem). Any convergent sequence $\{x(r_k, s_k)\}$, where $\{r_k\}$ and $\{s_k\}$ denote monotonic, decreasing null sequences as $k \to \infty$, converges to a minimum solution of problem (1.1).

Proof. First of all, we may note that such a convergent sequence exists, since $E$ is a compact set. Choose a positive $\delta$ and a point $\tilde{x} \in R_0$ such that $f(\tilde{x}) < f(\bar{x}) + \delta$. According to theorem 4.1 we have

$$Q_{rs}[x(r,s)] \leq Q_{rs}(\tilde{x}),$$   (4.2)

which together with (3.7) leads to the inequality

$$f[x(r,s)] - r\,\Phi[x(r,s)] \leq f(\tilde{x}) - r\,\Phi(\tilde{x}).$$   (4.3)

From (3.6) and (4.3) we obtain

$$f[x(r,s)] \leq f(\tilde{x}) - r\,\Phi(\tilde{x}) + r B < f(\bar{x}) + 3\delta$$   (4.4)

for any $s > 0$ and $0 < r < \varrho_\delta$, with $\varrho_\delta$ such that

$$-\varrho_\delta\,\Phi(\tilde{x}) < \delta, \qquad \varrho_\delta B < \delta.$$

From (4.2) we can also infer

$$-\frac{1}{s}\,P[x(r,s)] \leq f(\tilde{x}) - r\,\Phi(\tilde{x}) - f[x(r,s)] + r\,\Phi[x(r,s)],$$   (4.5)

which can easily be reduced to

$$0 \leq -\frac{1}{s}\,P[x(r,s)] \leq f(\tilde{x}) - r\,\Phi(\tilde{x}) - L + r B.$$

Hence, we have

$$\lim_{s \downarrow 0} P[x(r,s)] = 0$$

for any $r > 0$. Now, let $z$ denote the limit of a convergent sequence as mentioned in the statement of the theorem. Then $P(z) = 0$, so that $z$ is feasible and $f(z) \geq f(\bar{x})$. From (4.4) we obtain $f(z) \leq f(\bar{x})$, which proves the primal-convergence theorem.

Corollary 4.1.

$$\lim_{r,s \downarrow 0} f[x(r,s)] = f(\bar{x}),$$   (4.6)
$$\lim_{r,s \downarrow 0} r\,\Phi[x(r,s)] = 0,$$   (4.7)
$$\lim_{r,s \downarrow 0} \frac{1}{s}\,P[x(r,s)] = 0.$$   (4.8)

Proof. Formula (4.6) follows immediately from the preceding theorem. Hence, for every $\delta > 0$ positive numbers $r_\delta < \varrho_\delta$ and $s_\delta$ can be found such that $|f[x(r,s)] - f(\bar{x})| < \delta$ for every $0 < r < r_\delta$ and $0 < s < s_\delta$. From (3.6) and (4.3) we can infer

$$r B \geq r\,\Phi[x(r,s)] \geq f[x(r,s)] - f(\tilde{x}) + r\,\Phi(\tilde{x}),$$

so that

$$\delta > r\,\Phi[x(r,s)] > -3\delta$$

for any $0 < r < r_\delta$ and $0 < s < s_\delta$, which proves (4.7). Lastly, formula (4.8) follows easily from (4.5).

Corollary 4.2.

$$\lim_{r,s \downarrow 0} x(r,s) = \bar{x}.$$   (4.9)

Proof. This is a direct consequence of the preceding theorem and of conditions 2.3 to 2.5, implying that problem (1.1) has a unique solution $\bar{x}$.

5. Dual convergence

The penalty function $Q_{rs}$ is minimized at $x(r,s)$ over the open set $E_0$. This implies that the gradient of $Q_{rs}$ vanishes at $x(r,s)$, whence

$$\nabla f[x(r,s)] - \sum_{i=1}^{m} u_i(r,s)\,\nabla g_i[x(r,s)] = 0,$$   (5.1)

with

$$u_i(r,s) = r\,\varphi'\{g_i[x(r,s)]\}, \quad i \in I_1,$$   (5.2)
$$u_i(r,s) = \frac{1}{s}\,\psi'\{g_i[x(r,s)]\}, \quad i \in I_2.$$   (5.3)

Let $u(r,s)$ denote the $m$-vector with components $u_i(r,s)$, $i = 1, \dots, m$. Then it follows from (5.1) and from conditions 3.2 and 3.4 that $[x(r,s), u(r,s)]$ satisfies the dual constraints of problem (1.1). On the basis of (2.4) we can accordingly write

$$F[x(r,s), u(r,s)] \leq f(\bar{x}).$$   (5.4)

Theorem 5.1 (dual-convergence theorem). There is a convergent sequence

$$\{[x(r_k, s_k), u(r_k, s_k)]\},$$   (5.5)

where $\{r_k\}$ and $\{s_k\}$ denote monotonic, decreasing null sequences as $k \to \infty$. The limit of any such sequence is a maximum solution of the dual problem.

Proof. Using (4.7) one can readily show that

$$\lim_{r,s \downarrow 0} \sum_{i \in I_1} u_i(r,s)\,g_i[x(r,s)] = 0.$$

From (5.3) we obtain

$$u_i(r,s)\,g_i[x(r,s)] \leq 0, \quad i \in I_2.$$   (5.6)

Hence, (4.6), (5.4) and (5.6) lead to

$$\lim_{r,s \downarrow 0} \sum_{i \in I_2} u_i(r,s)\,g_i[x(r,s)] = 0$$   (5.7)

and

$$\lim_{r,s \downarrow 0} F[x(r,s), u(r,s)] = f(\bar{x}).$$   (5.8)

We still have to answer the question of whether a convergent sequence (5.5) exists. First of all we may note that $x(r,s)$ is a point minimizing

$$f(x) - \sum_{i=1}^{m} u_i(r,s)\,g_i(x)$$   (5.9)

over $E$, so that, if we consider an arbitrary $x^* \in R_0$, we find that

$$F[x(r,s), u(r,s)] \leq f(x^*) - \sum_{i=1}^{m} u_i(r,s)\,g_i(x^*),$$

whence

$$u_i(r,s)\,g_i(x^*) \leq f(x^*) - F[x(r,s), u(r,s)], \quad i = 1, \dots, m,$$

for all positive $r$ and $s$. It then follows from the strict inequalities $g_i(x^*) > 0$ ($i = 1, \dots, m$) that $u(r,s)$ is bounded in a neighbourhood of the point $(r,s) = (0,0)$. This implies the existence of a convergent sequence (5.5).

Corollary 5.1.

$$\lim_{r,s \downarrow 0} u(r,s) = \bar{u}.$$   (5.10)

Proof. Let us consider the sequence (5.5) and let $(\tilde{x}, \tilde{u})$ denote the limit point of (5.5) as $k \to \infty$. On the basis of (4.9) we have $\tilde{x} = \bar{x}$, and it will immediately be clear that $\tilde{u}_i = 0$ for $i = \alpha + 1, \dots, m$, where $\alpha$ is defined by (2.3). Using (5.1) and the Kuhn-Tucker relations (2.2), and taking the limit, we find that

$$\sum_{i=1}^{\alpha} (\tilde{u}_i - \bar{u}_i)\,\nabla g_i(\bar{x}) = 0,$$

and now the corollary follows directly from condition 2.3.

Corollary 5.2. A positive $\varrho$ and a positive $\sigma$ exist such that for all $0 < r < \varrho$ and $0 < s < \sigma$

$$g_i[x(r,s)] < 0,\ u_i(r,s) > 0 \quad \text{for } 1 \leq i \leq \alpha,\ i \in I_2,$$   (5.11)
$$g_i[x(r,s)] > 0,\ u_i(r,s) = 0 \quad \text{for } \alpha < i \leq m,\ i \in I_2.$$   (5.12)

Proof. These formulas follow directly from (4.9), (5.3), (5.10), and from the strict inequalities

$$\bar{u}_i > 0, \quad i = 1, \dots, \alpha; \qquad g_i(\bar{x}) > 0, \quad i = \alpha + 1, \dots, m.$$
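In computational terms, (5.2), (5.3) and corollary 5.1 say that Lagrange-multiplier estimates come free with every penalty minimization. A minimal sketch (names ours) for the pair $\varphi(y) = \ln y$, $\psi(y) = -[\min(0,y)]^2$ used in the examples below:

```python
def multiplier_estimates(g, x_rs, I1, I2, r, s):
    """Dual estimates (5.2)-(5.3) at a minimizer x(r,s) of Q_rs, for
    phi(y) = ln y   (phi'(y) = 1/y) and
    psi(y) = -min(0, y)**2   (psi'(y) = -2*min(0, y))."""
    u = [0.0] * len(g)
    for i in I1:
        u[i] = r / g[i](x_rs)                     # r * phi'(g_i), positive on E0
    for i in I2:
        u[i] = -2.0 * min(0.0, g[i](x_rs)) / s    # (1/s) * psi'(g_i), >= 0
    return u   # converges to the Kuhn-Tucker vector u-bar as r, s -> 0
```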

6. Basis for extrapolation

Let us now turn to the question of how the vector function

$$[x(r,s), u(r,s)]$$   (6.1)

behaves as a function of $r$ and $s$ in a neighbourhood of $(r,s) = (0,0)$. From (5.1) to (5.3) we can infer that (6.1) solves the system

$$\nabla f(x) - \sum_{i=1}^{m} u_i\,\nabla g_i(x) = 0,$$
$$u_i - r\,\varphi'\{g_i(x)\} = 0, \quad i \in I_1,$$
$$s\,u_i - \psi'\{g_i(x)\} = 0, \quad i \in I_2,$$   (6.2)

for any $r > 0$ and $s > 0$. Let us now restrict ourselves to pairs $(r,s)$ such that $0 < r < \varrho$ and $0 < s < \sigma$, $\varrho$ and $\sigma$ as defined in corollary 5.2. Furthermore, let us think of the constraints which are inactive at $\bar{x}$ as arranged in such a way that

$$i \in I_1 \quad \text{for all } \alpha + 1 \leq i \leq \beta, \qquad i \in I_2 \quad \text{for all } \beta + 1 \leq i \leq m,$$

and let $A_2 = \{i \mid i \in I_2,\ 1 \leq i \leq \alpha\}$. Then we have for all $0 < r < \varrho$ and $0 < s < \sigma$ that

$$u_i(r,s) = 0,\ \psi'\{g_i[x(r,s)]\} = 0, \quad i = \beta + 1, \dots, m,$$
$$g_i[x(r,s)] < 0, \quad i \in A_2.$$

In what follows, the constraints numbered from $\beta + 1$ to $m$ will be dropped from consideration. Then we only have to deal with the behaviour of $\psi(y)$ for $y \leq 0$, so that $\psi$ can be replaced by the function $\omega$ named in conditions 3.3 and 3.4. On the basis of conditions 3.2 and 3.4 we can write

$$\varphi'(y) = y^{-\lambda}\,\xi(y),$$   (6.3)
$$\omega'(y) = y^{\mu}\,\theta(y).$$   (6.4)

The functions $\xi(y)$ and $\theta(y)$ are analytic in an open interval containing $\{y \mid y \geq 0\}$ and $\{y \mid y \leq 0\}$ respectively. Furthermore, we have

$$\xi(0) > 0; \qquad \theta(0) > 0 \text{ if } \mu \text{ is even}, \quad \theta(0) < 0 \text{ if } \mu \text{ is odd},$$

since $\varphi'(y) > 0$ for all $y > 0$, and $\omega'(y) > 0$ for all $y < 0$. Employing these notations we replace the system (6.2) by the (slightly reduced) system

$$\nabla f(x) - \sum_{i=1}^{\beta} u_i\,\nabla g_i(x) = 0,$$
$$u_i\,g_i^{\lambda}(x) - r\,\xi\{g_i(x)\} = 0, \quad i \in I_1,$$
$$s\,u_i - g_i^{\mu}(x)\,\theta\{g_i(x)\} = 0, \quad i \in A_2,$$   (6.5)

a solution of which is given by

$$[x(r,s), u_1(r,s), \dots, u_\beta(r,s)]$$   (6.6)

for all $0 < r < \varrho$ and $0 < s < \sigma$. Furthermore, it can readily be verified that $(\bar{x}, \bar{u}_1, \dots, \bar{u}_\beta)$ solves (6.5) for $r = 0$ and $s = 0$, so that, if we define

$$x(0,0) = \bar{x}, \qquad u(0,0) = \bar{u},$$

we obtain straightaway that (6.6) is a solution of (6.5) for any $0 \leq r < \varrho$ and $0 \leq s < \sigma$. With the additional definitions

$$q = r^{1/\lambda}, \qquad t = s^{1/\mu},$$
$$v_i = u_i^{1/\lambda},\ \bar{v}_i = \bar{u}_i^{1/\lambda}, \quad i \in I_1; \qquad v_i = u_i^{1/\mu},\ \bar{v}_i = \bar{u}_i^{1/\mu}, \quad i \in A_2,$$

the system (6.5) can be rewritten as

$$\nabla f(x) - \sum_{i \in I_1} v_i^{\lambda}\,\nabla g_i(x) - \sum_{i \in A_2} v_i^{\mu}\,\nabla g_i(x) = 0,$$
$$v_i\,g_i(x) - q\,[\xi\{g_i(x)\}]^{1/\lambda} = 0, \quad i \in I_1,$$
$$t\,v_i - g_i(x)\,[\theta\{g_i(x)\}]^{1/\mu} = 0, \quad i \in A_2.$$   (6.7)

For any $0 \leq q < \varrho^{1/\lambda}$ and $0 \leq t < \sigma^{1/\mu}$ a solution of (6.7) is given by

$$[x(q^{\lambda}, t^{\mu}), v(q^{\lambda}, t^{\mu})],$$   (6.8)

where $v$ denotes the $\beta$-vector with components $v_1, \dots, v_\beta$. Let $\bar{J}$ denote the Jacobian matrix of (6.7) with respect to $x$ and $v$, evaluated for $x = \bar{x}$, $v = \bar{v}$, $q = 0$ and $t = 0$. One can then readily verify that $\bar{J}$ is non-singular under conditions 2.3 to 2.5. Furthermore, the functions in (6.7) have continuous first-order partial derivatives. Then, by the implicit-function theorem 1,16), a neighbourhood of $(q,t) = (0,0)$ exists such that $x$ and $v$ can be solved uniquely from (6.7) in terms of $q$ and $t$. Hence, (6.8) is the unique solution of (6.7) in this neighbourhood and it has continuous first-order partial derivatives at the point $(q,t) = (0,0)$. If we assume that the problem functions $f, g_1, \dots, g_m$ have continuous $(k+1)$th-order partial derivatives, we can invoke the implicit-function theorem in order to show that the vector function (6.8) has continuous $k$th-order partial derivatives at $(q,t) = (0,0)$. Hence, (6.8) can be expanded in a Taylor series about the point $(q,t) = (0,0)$, which implies that (6.1) can be expanded in a series in terms of $r^{1/\lambda}$ and $s^{1/\mu}$ about $(r,s) = (0,0)$. This provides a basis for extrapolation according to the Richardson-Romberg principle 2).

Let us move on to an example of the penalty function (1.5) studied in this paper. Substituting

$$\varphi(y) = \ln y$$

and

$$\psi(y) = -[\min(0, y)]^2, \qquad \text{so that} \qquad \omega(y) = -y^2,$$

we obtain a vector function of the type (6.1) which can be expanded in a series in terms of $r$ and $s$ about $(r,s) = (0,0)$, since $\varphi'$ has a pole and $\omega'$ a zero at $y = 0$, both of order 1.
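For this $\lambda = \mu = 1$ case, $x(r,s)$ is to first order linear in $r$ and $s$, so a single Richardson step cancels the leading error term. A one-parameter sketch (ours), for minimizers computed at $r$ and $r/c$:

```python
import numpy as np

def richardson_step(x_r, x_rc, c):
    """One Richardson-extrapolation step for a vector function with a
    series x(r) = x_bar + a*r + O(r**2): combine minimizers computed at
    r and r/c (c > 1) to cancel the first-order term."""
    x_r, x_rc = np.asarray(x_r), np.asarray(x_rc)
    return (c * x_rc - x_r) / (c - 1.0)
```

For the pair $\varphi(y) = -y^{-1}$, $\psi(y) = [\min(0,y)]^3$ below, the same step would be applied in the variable $q = \sqrt{r}$ instead.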

Similarly, one finds a series expansion of (6.1) in terms of $\sqrt{r}$ and $\sqrt{s}$ after substituting

$$\varphi(y) = -y^{-1}$$

and

$$\psi(y) = [\min(0, y)]^3$$

into the penalty function (1.5).

7. Examples

We shall start with an example where interior-point methods and outside-in methods fail to provide a solution. Let us consider the problem of minimizing $\sqrt{x_1} + x_2$ subject to the constraints $x_1 \geq 0$ and $x_2 \geq 0$. Starting from the point $(1, -1)$, a solution can only be found if a mixed penalty function of the type (1.5) is utilized, for example

$$\sqrt{x_1} + x_2 - r \ln x_1 + \frac{1}{s}\,[\min(0, x_2)]^2,$$

which leads to

$$x_1(r,s) = 4 r^2, \qquad x_2(r,s) = -\tfrac{1}{2} s.$$
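This small example is easily checked numerically; the following sketch (ours, with scipy's Nelder-Mead as the unconstrained minimizer) recovers the analytic minimizer $(4r^2, -s/2)$ from the infeasible start $(1, -1)$:

```python
import numpy as np
from scipy.optimize import minimize

def mixed_penalty(x, r, s):
    """Mixed penalty for: minimize sqrt(x1) + x2 s.t. x1 >= 0, x2 >= 0,
    with x1 treated as an interior-point (I1) constraint and x2 as an
    outside-in (I2) constraint."""
    x1, x2 = x
    if x1 <= 0.0:                      # barrier keeps x1 strictly positive
        return np.inf
    return np.sqrt(x1) + x2 - r * np.log(x1) + min(0.0, x2) ** 2 / s

r, s = 0.1, 0.1
res = minimize(mixed_penalty, x0=[1.0, -1.0], args=(r, s),
               method='Nelder-Mead')
print(res.x)             # close to the analytic minimizer (4*r**2, -s/2)
print(4 * r**2, -s / 2)  # (0.04, -0.05)
```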

Lastly, we shall discuss a cubic problem, formulated as minimizing the function

$$f(x) = x_1^3 - 6 x_1^2 + 11 x_1 + x_3$$

subject to the constraints

$$g_1(x) = -x_1^2 - x_2^2 + x_3^2 \geq 0,$$
$$g_2(x) = x_1^2 + x_2^2 + x_3^2 - 4 \geq 0,$$
$$g_3(x) = -x_3 + 5 \geq 0,$$
$$g_4(x) = x_1 \geq 0,$$
$$g_5(x) = x_2 \geq 0,$$
$$g_6(x) = x_3 \geq 0.$$

This example has been discussed by Fiacco and McCormick 4) and by the author 13,14). The theoretical solution is given by $\bar{x}_1 = 0$, $\bar{x}_2 = \bar{x}_3 = \sqrt{2}$. In solving the cubic problem we have used the penalty function

$$Q_r(x) = f(x) - r \sum_{i \in I_1} \ln g_i(x) + \frac{1}{r} \sum_{i \in I_2} \{\min[0, g_i(x)]\}^2,$$   (7.1)

so that the corresponding vector function $x(r)$ can be expanded in a series in terms of $r$. The starting point $x(0)$ was chosen as $x_1(0) = 0$, $x_2(0) = x_3(0) = 1$. Then

$$g_1[x(0)] = g_4[x(0)] = 0, \qquad g_2[x(0)] < 0,$$

whereas the remaining constraints are initially satisfied with strict inequality. Hence $I_1 = \{3, 5, 6\}$ and $I_2 = \{1, 2, 4\}$.

TABLE I

Computer solution of the cubic problem. Column headers: $r$, iterations, $f[x(r)]$, $|\nabla Q|$, $x_1(r)$, $x_2(r)$, $x_3(r)$. (The numerical entries, showing $f[x(r)] \to \sqrt{2} \approx 1.41421$, are not recoverable from this reproduction.)

Table I shows the computer results. Minimization of the penalty function (7.1) was carried out in accordance with the algorithm of Davidon 3) as described by Fletcher and Powell 10). Successive values of $r$ for which the penalty function was minimized are displayed in column 1, whereas the word "extrapolation" in this column announces the results obtained from an extrapolation procedure using the preceding $r$-minima. Column 2 lists the number of iterations required in order to minimize the penalty function with the preceding $r$-minimum as a starting point. Column 4 gives the length of the gradient of the penalty function at its computed minimum. The remaining columns show the steadily improving approximation of the minimum solution.

It is interesting to note that the same method fails to produce a solution if one starts from the point $(1, 1, -3)$. On the other hand, an outside-in method using the penalty function

$$f(x) + \frac{1}{r} \sum_{i=1}^{m} \{\min[0, g_i(x)]\}^2$$

converges to the point $(0, \sqrt{2}, -\sqrt{1.75})$, which is infeasible. Both failures are mainly due to the first and the second constraint implying $x_3^2 \geq 2$, a non-convex requirement.
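A rough reconstruction of this computation may look as follows (the code and the $r$-schedule are ours, the objective is the one stated above, and scipy's Nelder-Mead stands in for the Davidon-Fletcher-Powell routine):

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**3 - 6*x[0]**2 + 11*x[0] + x[2]
g = [lambda x: -x[0]**2 - x[1]**2 + x[2]**2,
     lambda x: x[0]**2 + x[1]**2 + x[2]**2 - 4,
     lambda x: -x[2] + 5,
     lambda x: x[0],
     lambda x: x[1],
     lambda x: x[2]]
I1, I2 = [2, 4, 5], [0, 1, 3]         # constraints 3,5,6 and 1,2,4 (0-based)

def Q(x, r):
    """Penalty function (7.1): logarithmic barrier on I1, quadratic loss on I2."""
    gi = [gj(x) for gj in g]
    if any(gi[i] <= 0 for i in I1):
        return np.inf                 # outside the set E
    barrier = sum(np.log(gi[i]) for i in I1)
    loss = sum(min(0.0, gi[i])**2 for i in I2)
    return f(x) - r * barrier + loss / r

x = np.array([0.0, 1.0, 1.0])         # starting point x(0)
for r in [1.0, 0.1, 0.01, 0.001]:     # decreasing controlling parameter
    x = minimize(Q, x, args=(r,), method='Nelder-Mead',
                 options={'xatol': 1e-10, 'fatol': 1e-12}).x
print(x, f(x))   # should approach (0, sqrt(2), sqrt(2)), f = sqrt(2)
```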

Acknowledgement

I am grateful to Prof. Dr J. F. Benders (Technological University, Eindhoven) for the inspiring discussions, criticisms and suggestions. I am also indebted to Prof. Dr G. W. Veltkamp (Technological University, Eindhoven) and Dr J. A. Zonneveld (Philips Research Laboratories), particularly for their encouragement to pursue a basis for extrapolation.

Eindhoven, March 1968

REFERENCES

1) T. M. Apostol, Mathematical analysis, Addison-Wesley, Reading, Mass., 1957.
2) R. Bulirsch and J. Stoer, Num. Math. 6, 1964.
3) W. C. Davidon, Variable metric method for minimization, AEC Research and Development report ANL-5990, 1959.
4) A. V. Fiacco and G. P. McCormick, Programming under nonlinear constraints by unconstrained minimization: a primal-dual method, Research Analysis Corporation, RAC-TP-96, 1963.
5) A. V. Fiacco and G. P. McCormick, Management Science 10, 1964.
6) A. V. Fiacco and G. P. McCormick, Management Science 12, 1966.
7) A. V. Fiacco and G. P. McCormick, SIAM J. appl. Math. 15, 1967.
8) A. V. Fiacco and G. P. McCormick, Operations Research 15, 1967.
9) A. V. Fiacco, Sequential unconstrained minimization methods for nonlinear programming, Thesis, Northwestern University, Evanston, Illinois, June 1967.
10) R. Fletcher and M. J. D. Powell, The Computer Journal 6, 1963.
11) R. Frisch, The logarithmic potential method for solving linear-programming problems, The University Institute of Economics, Oslo, Memorandum, May 7, 1955.
12) P. Huard, in J. Abadie (ed.), Nonlinear programming, North-Holland Publishing Company, Amsterdam, 1967.
13) F. A. Lootsma, Philips Res. Repts 22, 1967.
14) F. A. Lootsma, Philips Res. Repts 23, 1968.
15) G. R. Parisot, Revue fr. de Rech. opérationnelle 20, 1961.
16) Ch.-J. de la Vallée Poussin, Cours d'analyse infinitésimale, Dover Publ., New York.
17) W. J. Zangwill, Management Science 13, 1967.
