GEOMETRIC PROGRAMMING: A UNIFIED DUALITY THEORY FOR QUADRATICALLY CONSTRAINED QUADRATIC PROGRAMS AND $l_p$-CONSTRAINED $l_p$-APPROXIMATION PROBLEMS


GEOMETRIC PROGRAMMING: A UNIFIED DUALITY THEORY FOR QUADRATICALLY CONSTRAINED QUADRATIC PROGRAMS AND $l_p$-CONSTRAINED $l_p$-APPROXIMATION PROBLEMS¹

BY ELMOR L. PETERSON AND J. G. ECKER

Communicated by L. Cesari, August 31, 1967

The duality theory of geometric programming as developed by Duffin, Peterson, and Zener [1] is based on abstract properties shared by certain classical inequalities, such as Cauchy's arithmetic-geometric mean inequality and Hölder's inequality. Inequalities with these abstract properties have been termed "geometric inequalities" ([1, p. 195]).

We have found a new geometric inequality, which we state below, and have used it to extend the "refined duality theory" of geometric programming developed by Duffin and Peterson ([2] and [1, Chapter VI]). This extended duality theory treats both quadratically-constrained quadratic programs and $l_p$-constrained $l_p$-approximation problems. By a quadratically-constrained quadratic program we mean: to minimize a positive semidefinite quadratic function, subject to inequality constraints expressed in terms of the same type of functions. By an $l_p$-constrained $l_p$-approximation problem we mean: to minimize the $l_p$ norm of the difference between a fixed vector and a variable linear combination of other fixed vectors, subject to inequality constraints expressed by means of $l_p$ norms.

Both the classical unsymmetrical duality theorems for linear programming (Gale, Kuhn and Tucker [3], and Dantzig and Orden [4]) and the unsymmetrical duality theorems for linearly-constrained quadratic programs (Dennis [5], Dorn [6], [7], Wolfe [8], Hanson [9], Mangasarian [10], Huard [11], and Cottle [12]) can be derived from the extended duality theorems that we state below and have proved on the basis of the new geometric inequality.

The new geometric inequality is

$$\sum_{i=1}^{N+1} x_i y_i \;\le\; y_{N+1}\left[\sum_{i=1}^{N} \frac{1}{p_i}\,|x_i - b_i|^{p_i} + (x_{N+1} - b_{N+1})\right] + \sum_{i=1}^{N} \frac{1}{q_i}\,\frac{|y_i|^{q_i}}{y_{N+1}^{\,q_i-1}} + \sum_{i=1}^{N+1} b_i y_i,$$

¹ Research partially supported by U.S. Army Research Office (Durham), Grant 07701, at the University of Michigan.
316

which is valid for each $x$ in $E_{N+1}$ and each $y$ in the cone

$$T = \{\, y \in E_{N+1} \mid y_{N+1} \ge 0, \text{ and } y_{N+1} = 0 \text{ only if } y = 0 \,\},$$

with the understanding that $\sum_{1}^{N} (1/q_i)\,|y_i|^{q_i}/y_{N+1}^{\,q_i-1}$ is defined to be zero when $y = 0$. Here $b = (b_1, b_2, \ldots, b_{N+1})$ is an arbitrary, but fixed, vector in $E_{N+1}$, and the $p_i$ and $q_i$ are arbitrary, but fixed, real numbers that satisfy the conditions $p_i, q_i > 1$ and $1/p_i + 1/q_i = 1$, $i = 1, 2, \ldots, N$.

Every quadratically-constrained quadratic program and every $l_p$-constrained $l_p$-approximation problem is a special case of the following program.

PRIMAL PROGRAM A. Find the infimum of $G_0(x)$ subject to the following constraints on $x$:
(1) $G_k(x) \le 0$ for each $k$ in $\{1, 2, \ldots, r\}$.
(2) $x \in \mathscr{P}$.

Here

$$G_k(x) = \sum_{i \in [k]} \frac{1}{p_i}\,|x_i - b_i|^{p_i} + (x_{]k[} - b_{]k[}), \qquad k = 0, 1, 2, \ldots, r,$$

where

$$[k] = \{m_k, m_k + 1, \ldots, n_k\} \quad\text{and}\quad ]k[\, = n_k + 1, \qquad k = 0, 1, 2, \ldots, r,$$

and $m_0 = 1$, $m_1 = n_0 + 2$, $m_2 = n_1 + 2$, $\ldots$, $m_r = n_{r-1} + 2$, $n_r + 1 = n$. The fixed vector $b = (b_1, b_2, \ldots, b_n)$ is an arbitrary, but fixed, vector in $E_n$, and the arbitrary, but fixed, constants $p_i$ satisfy the condition $p_i > 1$ for each $i$ in $[k]$, $k = 0, 1, 2, \ldots, r$. The set $\mathscr{P}$ is a fixed, but arbitrary, vector subspace of $E_n$.

To put an arbitrary quadratically-constrained quadratic program with $m$ independent variables $z_1, \ldots, z_m$ into the form of primal program A, first observe that each positive semidefinite quadratic function $\tfrac{1}{2}z'Cz + c'z$ can be factored as $\tfrac{1}{2}(Dz)'(Dz) + c'z$, where $D$ is an appropriate $m \times m$ matrix. In particular, the objective function for such a program can be factored to give a matrix $D_0$ and a row vector $c_0'$. Correspondingly, the $k$th constraint can be factored to give a matrix $D_k$ and a row vector $c_k'$. Let $M$ be the matrix obtained by stacking $D_0, c_0', D_1, c_1', \ldots, D_r, c_r'$ vertically, and specialize primal program A by letting

$n_k = (k + 1)m + k$, $k = 0, 1, \ldots, r$,
$p_i = 2$ for each $i \in [k]$, $k = 0, 1, \ldots, r$,
$b_i = 0$ for each $i \in [k]$, $k = 0, 1, \ldots, r$.
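The bookkeeping above can be sanity-checked numerically. The sketch below (not part of the original paper; all names are illustrative) spot-checks the new geometric inequality at random points, then builds the matrix $M$ for a single positive semidefinite quadratic objective, using an eigendecomposition for the factorization $C = D'D$, and confirms that $G_0$ evaluated at $x = Mz$ reproduces $\tfrac{1}{2}z'Cz + c'z$ (taking $b_{]0[} = 0$ as well):

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Spot-check the new geometric inequality (a numeric sketch, not a proof).
N = 3
p = np.array([2.5, 3.0, 1.5])
q = p / (p - 1.0)                            # conjugate exponents: 1/p_i + 1/q_i = 1
b = rng.standard_normal(N + 1)
x = rng.standard_normal(N + 1)
y = np.append(rng.standard_normal(N), 0.7)   # y_{N+1} > 0, so y lies in the cone T

lhs = x @ y
rhs = (y[N] * (np.sum(np.abs(x[:N] - b[:N]) ** p / p) + (x[N] - b[N]))
       + np.sum(np.abs(y[:N]) ** q / (q * y[N] ** (q - 1.0)))
       + b @ y)
assert lhs <= rhs + 1e-12

# --- Check the quadratic specialization: with p_i = 2 and b_i = 0 on [0],
# G_0(Mz) should equal (1/2) z'Cz + c'z.
m = 4
A = rng.standard_normal((m, m))
C = A.T @ A                                  # positive semidefinite
c = rng.standard_normal(m)

w, V = np.linalg.eigh(C)
D = np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T   # any D with D'D = C works
M = np.vstack([D, c])                        # rows [0] are D; row ]0[ is c'

z = rng.standard_normal(m)
xq = M @ z                                   # a point of the column space of M
G0 = 0.5 * np.sum(xq[:m] ** 2) + xq[m]       # G_0 with b = 0 on this block
assert np.isclose(G0, 0.5 * z @ C @ z + c @ z)
```

The check exercises only the index bookkeeping of the specialization, not the duality theorems themselves.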

Finally, identify $\mathscr{P}$ with the column space of the above matrix $M$; that is, let $x = Mz$. With these specializations, primal program A is equivalent to the quadratically-constrained quadratic program. All linearly-constrained quadratic programs can be obtained from the most general quadratically-constrained quadratic program by choosing the $i$th row vector of $M$ equal to $0$ for each $i$ in $[k]$, $k = 1, 2, \ldots, r$. Moreover, all linear programs can be obtained by further restricting $M$ so that its $i$th row vector equals $0$ for each $i$ in $[0]$.

To obtain from primal program A the most general $l_p$-constrained $l_p$-approximation problem with $m$ spanning vectors, choose $p_i = p_k$ for each $i$ in $[k]$, $k = 0, 1, \ldots, r$, and identify $\mathscr{P}$ with the column space of an arbitrary $n \times m$ matrix $M$ whose $]k[$th row vector equals $0$ for $k = 0, 1, 2, \ldots, r$.

The dual program corresponding to primal program A is

DUAL PROGRAM B. Find the supremum of $v(y)$ subject to the following constraints on $y$:
(1) $y_{]0[} = 1$.
(2) For each integer $k$ in $\{1, 2, \ldots, r\}$ the vector component $y_{]k[} \ge 0$, and $y_{]k[} = 0$ only if $y_i = 0$ for each $i$ in $[k]$.
(3) $y \in \mathscr{D}$.

Here

$$v(y) = -\sum_{k=0}^{r} \left\{ \sum_{i \in [k]} \left( \frac{1}{q_i}\,\frac{|y_i|^{q_i}}{y_{]k[}^{\,q_i-1}} + b_i y_i \right) + b_{]k[}\, y_{]k[} \right\},$$

where $[k]$, $]k[$, and $r$ are as defined in primal program A. The fixed vector $b = (b_1, b_2, \ldots, b_n)$ is identical to the vector $b$ of primal program A, and the constants $q_i$ are determined from the constants $p_i$ of primal program A by the condition $1/p_i + 1/q_i = 1$ for each $i$ in $[k]$, $k = 0, 1, 2, \ldots, r$. The subset $\mathscr{D}$ of $E_n$ is the orthogonal complement of the vector subspace $\mathscr{P}$ of primal program A.

If for primal program A we take any linearly-constrained quadratic program, or any linear program, then correspondingly dual program B reduces to the well-known dual program. To recognize this, two recollections and two elementary observations are needed.
First, recall that appropriate row vectors of the matrix $M$ must be set equal to $0$, and then observe that the dual variable $y_i$ corresponding to such a row vector is essentially unconstrained. Second, recall that $b_i = 0$

for each $i$ in $[k]$, $k = 0, 1, 2, \ldots, r$, and then observe from the form of the resulting dual objective function $v$ that when the variable $y_i$ is essentially unconstrained for some $i$ in some $[k]$, this variable $y_i$ can be set equal to zero without changing the constrained supremum of $v(y)$. In the linearly-constrained quadratic case $v$ reduces to the quadratic function

$$v(y) = -\frac{1}{2}\sum_{i \in [0]} y_i^2 \;-\; b_{]0[} \;-\; \sum_{k=1}^{r} b_{]k[}\, y_{]k[}.$$

In the completely linear case $v$ further reduces to the linear function

$$v(y) = -\,b_{]0[} \;-\; \sum_{k=1}^{r} b_{]k[}\, y_{]k[}.$$

We shall use the following standard terminology in stating our duality theorems. The objective function for a program is the function to be optimized ($G_0$ or $v$ in our theory). A program is consistent if there is at least one point that satisfies its constraints. Each such point is said to be a feasible solution to the program. The constrained infimum (supremum) of the objective function for a consistent program is termed the infimum (supremum) of the program. A feasible solution to a program is optimal if the resulting value of the objective function is actually equal to the infimum (supremum) of the program. The infimum (supremum) of a program with an optimal feasible solution is said to be the minimum (maximum) of the program. Thus a program can have an infimum (supremum) without having a minimum (maximum), but not conversely.

The following existence theorem relates primal program A and its dual program B.

THEOREM 1. If primal program A is consistent, then it has a finite infimum $M_A$ if, and only if, its dual program B is consistent. If dual program B is consistent, then it has a finite supremum $M_B$ if, and only if, its primal program A is consistent.

Unlike Theorem 1, the following duality theorem is not symmetrical relative to primal program A and its dual program B.

THEOREM 2.
If primal program A and its dual program B are both consistent, then
(I) Program A has a finite infimum $M_A$ and program B has a finite supremum $M_B$, with $M_A = M_B$.
(II) The infimum $M_A$ of program A is actually a minimum.

(III) The supremum $M_B$ of program B is actually a maximum if, and only if, there are nonnegative Kuhn-Tucker (Lagrange) multipliers that solve the saddle-point problem for program A.

Conclusion III can be strengthened so as to make Theorem 2 symmetrical relative to primal program A and its dual program B, if the class of programs is restricted to those programs for which primal program A has linear constraints. This strengthening is due to the well-known fact that each convex program with linear constraints has Kuhn-Tucker multipliers if it has a minimum.

The following duality theorem characterizes the sets of optimal feasible solutions.

THEOREM 3. Given an optimal feasible solution $x'$ to primal program A, a feasible solution $y$ to dual program B is optimal if, and only if, $y$ satisfies the conditions

$$y_{]k[}\, G_k(x') = 0, \qquad k = 1, 2, \ldots, r,$$

and

$$y_i = y_{]k[}\,(\operatorname{sgn}[x_i' - b_i])\,|x_i' - b_i|^{p_i - 1}, \qquad i \in [k],\ k = 0, 1, \ldots, r.$$

For such an optimal feasible solution $y'$ the numbers $y_{]k[}'$, $k = 1, 2, \ldots, r$, are Kuhn-Tucker multipliers for primal program A.

Given an optimal feasible solution $y'$ to dual program B, a feasible solution $x$ to primal program A is optimal if, and only if, $x$ satisfies the conditions

$$y_{]k[}'\, G_k(x) = 0, \qquad k = 1, 2, \ldots, r,$$

and

$$x_i = (\operatorname{sgn} y_i')\left(\frac{|y_i'|}{y_{]k[}'}\right)^{q_i - 1} + b_i, \qquad i \in [k],\ k \in P,$$

where $P = \{\, k \mid y_{]k[}' > 0 \,\}$.

Proofs for the preceding theorems will be given elsewhere. Sensitivity analyses and a computational algorithm that takes advantage of the essentially linear nature of the dual constraints will be described in forthcoming papers.

REFERENCES

1. R. J. Duffin, E. L. Peterson and C. Zener, Geometric programming, Wiley, New York, 1967.
2. R. J. Duffin and E. L. Peterson, Duality theory for geometric programming, SIAM J. Appl. Math. 14 (1966), 1307-1349.

3. D. Gale, H. W. Kuhn and A. W. Tucker, Linear programming and the theory of games, Cowles Commission Monograph No. 13 (1951).
4. G. B. Dantzig and A. Orden, Duality theorems, RAND Report RM-1265, The RAND Corporation, Santa Monica, Calif., October 1953.
5. J. B. Dennis, Mathematical programming and electrical networks, Wiley, New York, 1959.
6. W. S. Dorn, Duality in quadratic programming, Quart. Appl. Math. 18 (1960-1961), 155-162.
7. W. S. Dorn, A duality theorem for convex programs, IBM Journal 4 (1960), 407-413.
8. P. Wolfe, A duality theorem for nonlinear programming, Quart. Appl. Math. 19 (1961), 239-244.
9. M. A. Hanson, A duality theorem in nonlinear programming with nonlinear constraints, Australian J. Statistics 3 (1961), 64-72.
10. O. L. Mangasarian, Duality in nonlinear programming, Quart. Appl. Math. 20 (1962-1963), 300-302.
11. P. Huard, "Dual programs," in Recent advances in mathematical programming, McGraw-Hill, New York, 1963.
12. R. W. Cottle, Symmetric dual quadratic programs, Quart. Appl. Math. 21 (1963-1964), 237-243.

UNIVERSITY OF MICHIGAN