
Journal of Mathematical Analysis and Applications 220, 125-131 (1998), Article No. AY975824

Symmetric and Asymmetric Duality

Massimo Pappalardo

Department of Mathematics, Via Buonarroti 2, 56127 Pisa, Italy

Submitted by Augustine O. Esogbue

Received June 4, 1990

In this paper we show in a constructive way that the two duality schemes, Lagrangian and symmetric, are equivalent in a suitable sense; moreover, we analyze the possibility of obtaining other duality results. © 1998 Academic Press

1. INTRODUCTION

Given a constrained extremum problem (primal), the Lagrange multiplier theory allows us to formulate an unconstrained extremum problem (dual) satisfying the so-called weak duality theorem (i.e., the dual optimum is less than or equal to the primal one). Such a Lagrangian dual is asymmetric, in the sense that the dual feasible region is contained in a space whose dimension is greater than that of the primal one. When the data functions are differentiable, another classical formulation [4] is possible, perhaps more elegant but, unfortunately, satisfying the weak duality property only under suitable assumptions (for example, regularity and convexity). In [4], in fact, a symmetric duality scheme is formulated (symmetric in the sense that the feasible regions of the two problems, primal and dual, are contained in spaces of the same dimension), and a weak duality theorem is established under hypotheses of convexity and differentiability of the functions involved. This scheme has been generalized to the pseudoconvex [7] and to the pseudoinvex [9] case.

We are interested in the following constrained extremum problem,

    v_P := \min_{x \in R} \varphi(x),    (1.1)

where $R := \{x \in \mathbb{R}^n : x \ge 0,\ g(x) \le 0\}$, $\varphi : \mathbb{R}^n \to \mathbb{R}$, and $g : \mathbb{R}^n \to \mathbb{R}^m$.
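As a concrete illustration of (1.1) and of the Lagrangian dual scheme recalled above (formally introduced as (1.2) and (1.3) below), the following minimal Python sketch evaluates, by grid search, a hypothetical instance that is not taken from the paper: $\varphi(x) = x^2$ and $g(x) = 1 - x$, for which Slater's condition holds and the primal and dual optima coincide.

    import numpy as np

    # Hypothetical instance (not from the paper): phi(x) = x^2, g(x) = 1 - x,
    # so R = {x >= 0 : g(x) <= 0} = [1, +inf) and v_P = 1, attained at x = 1.
    xs = np.linspace(0.0, 5.0, 2001)   # grid for x >= 0
    ws = np.linspace(0.0, 5.0, 2001)   # grid for omega >= 0
    phi = xs**2
    g = 1.0 - xs

    v_P = phi[g <= 0].min()            # primal optimum over the feasible grid

    # Lagrangian (asymmetric) dual, cf. (1.2) below:
    #   v_D = max_{w >= 0} min_{x >= 0} [phi(x) + w * g(x)]
    L = phi[None, :] + ws[:, None] * g[None, :]   # L[i, j] = L(x_j, w_i)
    v_D = L.min(axis=1).max()

    # For the restricted (Wolfe-type) dual, cf. (1.3) below: stationarity
    # 2x - w = 0 gives the value 2x - x^2 along w = 2x, maximized at x = 1,
    # so v_RD = 1 as well.
    print(f"v_P = {v_P:.3f}, v_D = {v_D:.3f}")    # both print 1.000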

Definition 1.1. A couple of minimum problems is called equivalent iff:

(i) both minima exist and are equal;

(ii) there exists a one-to-one correspondence between minimum points.

Definition 1.2. A couple of equivalent minimum problems is called strictly equivalent iff the minimum points of the two problems are the same.

Problem (1.1) will be called primal, while the following one is called the Lagrangian (or asymmetric) dual,

    v_D := \max_{\omega \ge 0} \min_{x \ge 0} L(x, \omega),    (1.2)

where $L(x, \omega) := \varphi(x) + \omega^T g(x)$ is the Lagrangian function. If the functions $\varphi$ and $g$ are differentiable, we can define the following extremum problem,

    v_{RD} := \sup_{(x,\omega) \in R'} L(x, \omega),    (1.3)

where $R' := \{(x, \omega) \in \mathbb{R}^{n+m} : \nabla\varphi(x) + \nabla g(x)^T \omega = 0,\ \omega \ge 0\}$ and $\nabla$ denotes the gradient. It is well known that the duality relationship $v_D \le v_P$ always holds, while if $\varphi$ and $g$ are convex and a regularity condition holds for problem (1.1) (for example, the Mangasarian–Fromovitz condition), then $v_P = v_{RD}$.

Let us now consider the symmetric duality scheme of [4]. Let $K(x, \omega)$ be a continuously differentiable function from $\mathbb{R}^n \times \mathbb{R}^m$ to $\mathbb{R}$ and define

    K_x := \Big(\frac{\partial K}{\partial x_1}, \dots, \frac{\partial K}{\partial x_n}\Big), \qquad
    K_\omega := \Big(\frac{\partial K}{\partial \omega_1}, \dots, \frac{\partial K}{\partial \omega_m}\Big),

    S := \{(x, \omega) \in \mathbb{R}^n \times \mathbb{R}^m : x \ge 0,\ \omega \ge 0,\ K_\omega(x, \omega) \le 0\},

    S' := \{(x, \omega) \in \mathbb{R}^n \times \mathbb{R}^m : x \ge 0,\ \omega \ge 0,\ K_x(x, \omega) \ge 0\},

    v_{SP} := \min_{(x,\omega) \in S} \{K(x, \omega) - \omega^T K_\omega(x, \omega)\},    (1.4)

    v_{SD} := \max_{(x,\omega) \in S'} \{K(x, \omega) - x^T K_x(x, \omega)\}.    (1.5)

Problems (1.4) and (1.5) are called, respectively, the symmetric primal and the symmetric dual. In [4] it is shown that if $K$ is convex in $x$ and concave in $\omega$, then the following duality result holds:

    v_{SP} \ge v_{SD}.    (1.6)
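Before the main result is stated, the bridge between the two schemes can be seen by direct computation (a worked check of ours, anticipating the construction used in Lemma 2.2 below). With the Lagrangian kernel $K(x, \omega) := \varphi(x) + \omega^T g(x)$ one has $K_\omega(x, \omega) = g(x)$, hence

    K(x, \omega) - \omega^T K_\omega(x, \omega) = \varphi(x), \qquad
    S = \{(x, \omega) : x \ge 0,\ \omega \ge 0,\ g(x) \le 0\},

so (1.4) is precisely (1.1) with a redundant variable $\omega$. On the dual side, $K_x(x, \omega) = \nabla\varphi(x) + \nabla g(x)^T \omega$, so (1.5) is a Wolfe-type problem in the spirit of (1.3), with the stationarity equation relaxed to the inequality $K_x(x, \omega) \ge 0$.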

The aim of this paper is to show the equivalence between Lagrangian and symmetric duality in the sense expressed by the following theorem, which we prove in the next section.

Main Theorem. (i) Suppose that $v_P$ and $v_D$ exist. Then problems (1.1) and (1.2) are equivalent to (1.4) and (1.5), respectively.

(ii) Suppose that $v_{SP}$ and $v_{SD}$ exist. Then there exists a function $g$ such that problems (1.4) and (1.5) are equivalent to (1.1) and (1.2), respectively.

2. PROOF OF THE MAIN THEOREM

To prove the theorem it is necessary to define a new couple of constrained extremum problems. With the notation of Section 1 we define

    v_{HP} := \min_{(x,\omega) \in S} K(x, \omega),    (2.1)

    v_{HD} := \max_{(x,\omega) \in S'} K(x, \omega).    (2.2)

Let us now prove three technical lemmata.

Lemma 2.1. (i) If $v_{SP}$ and $v_{SD}$ exist, then problems (1.4) and (1.5) are strictly equivalent to (2.1) and (2.2), respectively.

(ii) If $v_{HP}$ and $v_{HD}$ exist, then problems (2.1) and (2.2) are strictly equivalent to (1.4) and (1.5), respectively.

Proof. First of all we prove that if $(\bar x, \bar\omega)$ is a minimum point for (2.1), then $\bar\omega^T K_\omega(\bar x, \bar\omega) = 0$. This is trivial if, for every $i = 1, \dots, m$, either $\bar\omega_i = 0$ or $[K_\omega(\bar x, \bar\omega)]_i = 0$; otherwise, if there exists an index $\bar\imath \in \{1, \dots, m\}$ such that $\bar\omega_{\bar\imath} > 0$ and $[K_\omega(\bar x, \bar\omega)]_{\bar\imath} < 0$, then the function $\omega_{\bar\imath} \mapsto K(\bar x, \omega)$ is strictly decreasing in a neighborhood of $\bar\omega_{\bar\imath}$ and, since small changes of $\omega_{\bar\imath}$ preserve feasibility ($\bar\omega_{\bar\imath} > 0$), the point $(\bar x, \bar\omega)$ could not be a minimum for (2.1).

Now suppose that $(\bar x, \bar\omega) \in S$ is a minimum point for (2.1) but not for (1.4). Then there exists $(\hat x, \hat\omega) \in S$ such that

    K(\hat x, \hat\omega) - \hat\omega^T K_\omega(\hat x, \hat\omega) < K(\bar x, \bar\omega) - \bar\omega^T K_\omega(\bar x, \bar\omega) = K(\bar x, \bar\omega).    (2.3)

But $(\hat x, \hat\omega) \in S$ implies $\hat\omega^T K_\omega(\hat x, \hat\omega) \le 0$, and then we see from (2.3) that $K(\hat x, \hat\omega) < K(\bar x, \bar\omega)$, which is a contradiction.

Vice versa, suppose that $(\bar x, \bar\omega) \in S$ is a minimum point for (1.4) but not for (2.1). In this case there exists $(\hat x, \hat\omega)$, optimal for (2.1), such that

    K(\hat x, \hat\omega) < K(\bar x, \bar\omega) \le K(\bar x, \bar\omega) - \bar\omega^T K_\omega(\bar x, \bar\omega).

But $\hat\omega^T K_\omega(\hat x, \hat\omega) = 0$, as shown at the beginning, and thus we find

    K(\hat x, \hat\omega) - \hat\omega^T K_\omega(\hat x, \hat\omega) < K(\bar x, \bar\omega) - \bar\omega^T K_\omega(\bar x, \bar\omega),

and this is absurd. The equality of the minima comes from the fact, shown at the beginning, that if $(\bar x, \bar\omega)$ is a minimum point for (2.1) then $\bar\omega^T K_\omega(\bar x, \bar\omega) = 0$. We can argue analogously for the couple of problems (1.5) and (2.2).

Lemma 2.2. Suppose that $v_P$ and $v_D$ exist, that $\varphi$ and $g$ are convex and continuously differentiable, and that the Slater regularity condition holds. Then there exists a function $K : \mathbb{R}^{n+m} \to \mathbb{R}$ such that problems (1.1) and (1.2) are equivalent to (1.4) and (1.5), respectively.

Proof. We set $K(x, \omega) := \varphi(x) + \omega^T g(x)$, which is convex in $x$, concave in $\omega$, and continuously differentiable. Problem (1.1) can be written as

    \min_{x \ge 0} \sup_{\omega \ge 0} K(x, \omega).    (2.4)

Because of the definition of the function $K$ and the well-known Lagrangian duality results, problem (2.4) is equivalent to problem (2.1), and then, exploiting Lemma 2.1, we obtain the thesis. An analogous reasoning holds for the dual problems.

Remark 2.1. Under the same assumptions as in Lemma 2.2 we have

    \min_{x \ge 0} \sup_{\omega \ge 0} K(x, \omega) = \max_{\omega \ge 0} \min_{x \ge 0} K(x, \omega).

In fact, if $(\bar x, \bar\omega)$ is an optimum of (2.4), a well-known minimax result gives

    K(\bar x, \bar\omega) = \min_{x \ge 0} \sup_{\omega \ge 0} K(x, \omega) = \sup_{\omega \ge 0} \min_{x \ge 0} K(x, \omega),

and the last sup can be replaced by a maximum, because we are supposing that $v_D$ exists. On the other hand, this was already known because, when problem (1.1) is convex (i.e., $\varphi$ and $g$ convex), we have $v_P = v_D$.
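For the same Lagrangian kernel, a one-line computation (ours, for illustration) identifies where $\sup_{\omega \ge 0} K(x, \omega)$ is finite, which explains the rewriting (2.4) of problem (1.1) and anticipates the set $A$ used in Lemma 2.3 below:

    \sup_{\omega \ge 0} [\varphi(x) + \omega^T g(x)] =
      \begin{cases} \varphi(x) & \text{if } g(x) \le 0, \\ +\infty & \text{otherwise,} \end{cases}

so the finiteness set is exactly the feasible region $\{x \in \mathbb{R}^n : g(x) \le 0\}$ of (1.1).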

Lemma 2.3. Suppose that $v_{SP}$ and $v_{SD}$ exist and that $K(x, \omega)$ is convex in $x$, concave in $\omega$, and continuously differentiable. Then there exist two functions $\varphi : \mathbb{R}^n \to \mathbb{R}$ and $g : \mathbb{R}^n \to \mathbb{R}^m$ such that problems (1.1) and (1.2) are equivalent to (1.4) and (1.5), respectively.

Proof. Let $(\bar x, \bar\omega)$ be an optimum point of (1.4), define $\varphi(x) := K(x, \bar\omega)$, and set $A := \{x \in \mathbb{R}^n : \sup_{\omega \ge 0} K(x, \omega) < +\infty\}$. Observe that the set $A$ is nonempty, and let $g$ be any convex function from $\mathbb{R}^n$ to $\mathbb{R}^m$ such that $g(x) \le 0$ iff $x \in A$. We shall prove that problem (1.4) is equivalent to (1.1), and we shall make explicit the correspondence between the minima.

If $(\hat x, \hat\omega)$ is optimal for (1.4), then $\hat x$ is optimal for (1.1); in fact, taking into account Lemma 2.1 and Lemma 2.2, $(\hat x, \hat\omega)$ optimal for (1.4) implies that it is optimal for (2.1) and then also for (2.4). Recalling the definition of the set $A$, we then obtain the thesis. Vice versa, if $\hat x$ is optimal for (1.1), then $(\hat x, \bar\omega)$ is optimal for (2.4), by virtue of the definitions of $\varphi$ and $g$. Taking into account the equivalence between (1.4) and (2.4), we obtain the thesis for the part regarding the primal problems. We can argue analogously for the dual, recalling that the dual of (1.1) is equivalent to the dual of (2.4). Taking into account Remark 2.1, we also have the equality of the optima.

Proof of the Main Theorem. (i) Putting together Lemmata 2.1, 2.2, and 2.3, we have the thesis if we define $K(x, \omega) := \varphi(x) + \omega^T g(x)$.

(ii) Let $(\bar x, \bar\omega)$ be optimal for (1.4) and define $\varphi(x) := K(x, \bar\omega)$, the set $A := \{x \in \mathbb{R}^n : \sup_{\omega \ge 0} K(x, \omega) < +\infty\}$, and $g$ as any function which is nonpositive iff $x \in A$. Then we have the thesis by applying Lemmata 2.1, 2.2, and 2.3.

3. OTHER DUALITY RESULTS

We recall that (1.6) has been generalized in several ways. In Bazaraa [3] the set $S$ becomes

    S = \{(x, \omega) \in \mathbb{R}^{n+m} : (x, \omega) \in C_1 \times C_2 \text{ and } -K_\omega(x, \omega) \in C_2^0\},

where $C_1$ and $C_2$ are closed convex cones with nonempty interior and $^0$ denotes the positive polar. In [7] the convexity assumptions are weakened to pseudoconvexity, but another feasibility assumption is added. This result cannot be generalized to the quasiconvex case: indeed, for $K(x, \omega) = (x - 1)^3$, which is quasiconvex but not pseudoconvex in $x$, we obtain $v_{SP} = -1$ and $v_{SD} = 0$, so that (1.6) fails. More recently, symmetric duality has also been extended to pseudoinvex problems [9]. Moreover, we recall that the possibility of extending the symmetric duality scheme to mixed-integer programming has been studied in [1, 8]. An analysis analogous to that of the continuous case could be carried out with small further effort.
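The quasiconvex counterexample above can be verified numerically; the following sketch (ours, under the stated reading $K(x, \omega) = (x - 1)^3$ with $n = m = 1$) reproduces $v_{SP} = -1$ and $v_{SD} = 0$ by grid search.

    import numpy as np

    # K(x, w) = (x - 1)^3 is quasiconvex in x and trivially concave in w.
    xs = np.linspace(0.0, 10.0, 100001)
    K = (xs - 1.0)**3
    K_x = 3.0 * (xs - 1.0)**2          # K_w is identically zero

    # (1.4): K_w <= 0 holds everywhere, so S is the whole nonnegative
    # orthant and the objective K - w^T K_w reduces to K itself.
    v_SP = K.min()                      # attained at x = 0

    # (1.5): K_x >= 0 holds everywhere; the objective is K - x * K_x.
    v_SD = (K - xs * K_x).max()         # attained at x = 1

    print(f"v_SP = {v_SP:.3f}, v_SD = {v_SD:.3f}")   # -1.000 and 0.000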

The theorem stated in Section 1 and proved in Section 2 allows us to obtain other results concerning weak duality in the symmetric scheme. To this end it is necessary to recall that if the Lagrangian function $L(\cdot, \omega)$ is pseudoconvex in $x$ for every $\omega$, and a regularity condition holds, then $v_P = v_{RD}$. From this fact the following proposition descends.

Proposition 3.1. Suppose that $v_{SP}$ and $v_{SD}$ exist and that $K(x, \omega)$ is pseudoconvex in $x$, concave in $\omega$, and continuously differentiable. If $K(x, \omega)$ is bounded in $\omega$ for every $x$, then $v_{SP} \ge v_{SD}$.

Proof. It is an immediate consequence of Lemma 2.3, taking into account that in this case the set $A$ defined in that proof is all of $\mathbb{R}^n$, so that it is possible to choose $g(x) \equiv 0$.

We shall now examine, briefly, how the duality schemes may be treated in the nondifferentiable case. Let $K(x, \omega)$ be convex in $x \in \mathbb{R}^n$ for each $\omega \in \mathbb{R}^m$, concave in $\omega$ for each $x \in \mathbb{R}^n$, and not necessarily differentiable, and consider the following two constrained extremum problems, where $\partial_x$ denotes the classical subdifferential of convex analysis and $\partial_y$ the corresponding superdifferential in the concave variable:

    \min\ F(x, y, t) := K(x, y) - y^T t \quad \text{s.t. } (x, y, t) \in R_P,

where

    R_P := \{(x, y, t) \in \mathbb{R}^{n+m+m} : t \in \partial_y K(x, y),\ x \ge 0,\ y \ge 0,\ t \le 0\},

and

    \max\ G(x, y, w) := K(x, y) - x^T w \quad \text{s.t. } (x, y, w) \in R_D,

where

    R_D := \{(x, y, w) \in \mathbb{R}^{n+m+n} : w \in \partial_x K(x, y),\ x \ge 0,\ y \ge 0,\ w \ge 0\}.

Then we can prove the following duality result.

Theorem 3.1. Suppose that $K(x, y)$ is convex in $x$ and concave in $y$. Then, for every $(x, y, t) \in R_P$ and every $(u, v, w) \in R_D$, we have

    F(x, y, t) \ge G(u, v, w).

Proof. By the convexity and concavity assumptions on $K$, a well-known property of convex analysis ensures the two inequalities

    K(x, v) \ge K(u, v) + \langle x - u, w \rangle, \qquad w \in \partial_x K(u, v),

    K(x, v) \le K(x, y) + \langle v - y, t \rangle, \qquad t \in \partial_y K(x, y).

Subtracting and rearranging, we get

    K(x, y) - \langle y, t \rangle - [K(u, v) - \langle u, w \rangle] \ge \langle x, w \rangle - \langle v, t \rangle \ge 0,

where the last inequality holds because $x, w \ge 0$, $v \ge 0$, and $t \le 0$; this is the desired thesis.

Remark 3.1. We have confined ourselves to the convex case in Theorem 3.1 (as in the Main Theorem) for the sake of simplicity, but it would be possible to extend the proofs to generalized notions of convexity without great changes.
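A small sanity check of Theorem 3.1 (our example, not from the paper): take $K(x, y) = |x| - |y|$ with $n = m = 1$, which is convex in $x$, concave in $y$, and nondifferentiable at the origin; subgradients and supergradients are chosen explicitly below.

    import numpy as np

    rng = np.random.default_rng(0)

    def F(x, y, t):                 # primal objective K(x, y) - y * t
        return abs(x) - abs(y) - y * t

    def G(u, v, w):                 # dual objective K(u, v) - u * w
        return abs(u) - abs(v) - u * w

    for _ in range(1000):
        x, y = rng.uniform(0.0, 5.0, 2)
        u, v = rng.uniform(0.0, 5.0, 2)
        # t in the superdifferential of y -> -|y|, with t <= 0:
        t = -1.0 if y > 0 else rng.uniform(-1.0, 0.0)
        # w in the subdifferential of u -> |u|, with w >= 0:
        w = 1.0 if u > 0 else rng.uniform(0.0, 1.0)
        assert F(x, y, t) >= G(u, v, w) - 1e-12

    print("F >= G held on 1000 random feasible pairs")

In this instance every primal value satisfies $F \ge 0$ and every dual value satisfies $G \le 0$, so the duality gap closes at $x = 0$, $v = 0$.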

REFERENCES

1. E. Balas, A duality theorem and an algorithm for (mixed)-integer nonlinear programming, Linear Algebra Appl. 4 (1971), 341-352.
2. E. Balas, Duality for mixed integer programming, in "Integer and Nonlinear Programming" (J. Abadie, Ed.), North-Holland, Amsterdam, 1970.
3. M. S. Bazaraa, On symmetric duality in nonlinear programming, Oper. Res. 21, No. 1 (1973), 1-9.
4. G. B. Dantzig, E. Eisenberg, and R. Cottle, Symmetric dual nonlinear programs, Pacific J. Math. 15, No. 3 (1965), 809-812.
5. T. L. Magnanti, Fenchel and Lagrange duality are equivalent, Math. Programming 7 (1974), 253-258.
6. O. L. Mangasarian, "Nonlinear Programming," McGraw-Hill, New York, 1969.
7. M. S. Mishra, S. Nanda, and D. Acharya, Pseudoconvexity and symmetric duality in nonlinear programming problems, Opsearch 23, No. 1 (1986), 15-18.
8. M. S. Mishra, S. Nanda, and D. Acharya, A note on a pair of nonlinear mixed integer programming problems, European J. Oper. Res. 28 (1986), 89-92.
9. S. Nanda and L. N. Das, Pseudoconvexity and symmetric duality in nonlinear programming, Optimization 28 (1994), 267-273.