A Note on Nonconvex Minimax Theorem with Separable Homogeneous Polynomials

G. Y. Li

Communicated by Harold P. Benson

Abstract. The minimax theorem for a convex-concave bifunction is a fundamental theorem in optimization and convex analysis, and has many applications in economics. In the last two decades, nonconvex extensions of this minimax theorem have been studied under various generalized convexity assumptions. In this note, by exploiting the hidden convexity (joint range convexity) of separable homogeneous polynomials, we establish a nonconvex minimax theorem involving separable homogeneous polynomials. Our result complements the existing study of nonconvex minimax theorems by providing easily verifiable conditions under which the nonconvex minimax theorem holds.

Key words: Minimax theorem, Separable homogeneous polynomial, Generalized convexity, Joint range convexity.

AMS subject classification: 65H10, 90C26

The author is grateful to the referees and the associate editor for their helpful comments and valuable suggestions, which have contributed to the final preparation of the paper. The author would also like to thank Professor Jeyakumar for valuable suggestions and stimulating discussions. Research was partially supported by a grant from the Australian Research Council.

Department of Applied Mathematics, University of New South Wales, Sydney 2052, Australia. E-mail: g.li@unsw.edu.au

1 Introduction

The minimax theorem for a convex-concave bifunction is a fundamental theorem in optimization and convex analysis, and has many applications in economics. Extensions of the classical minimax theorem to the nonconvex case have been studied extensively in the last two decades under various generalized convexity assumptions (see, for example, [1,2,3]). However, much of this work has been devoted to obtaining ever more general relaxed conditions rather than explicit and easily verifiable conditions. The purpose of this note is to provide a nonconvex minimax theorem with easily verifiable conditions. In particular, by exploiting the hidden convexity (joint range convexity) of separable homogeneous polynomials, we establish a nonconvex minimax theorem involving separable homogeneous polynomials. (Similar ideas have been successfully employed to obtain theorems of the alternative for special nonconvex quadratic systems; see [4,5].) Our result complements the existing study of nonconvex minimax theorems by providing easily verifiable conditions under which the nonconvex minimax theorem holds.

The organization of this paper is as follows. In Section 2, we establish the convexity of the joint range mapping of separable homogeneous polynomials. In Section 3, we provide a nonconvex minimax theorem involving separable homogeneous polynomials. Finally, as a direct application, we establish in Section 4 a zero duality gap result for nonconvex separable homogeneous polynomial programming with bounded constraints.

2 Separable Homogeneous Polynomials: Joint Range Convexity

Throughout, $\mathbb{R}^m$ denotes the $m$-dimensional Euclidean space. For $x, y \in \mathbb{R}^m$, the inner product of $x$ and $y$ is defined by $\langle x, y \rangle = \sum_{i=1}^m x_i y_i$, where $x = (x_1, \ldots, x_m)$ and $y = (y_1, \ldots, y_m)$.

Recall that $f : \mathbb{R}^m \to \mathbb{R} \cup \{+\infty\}$ is said to be convex iff $f((1-\mu)x + \mu y) \le (1-\mu)f(x) + \mu f(y)$ for all $\mu \in [0,1]$ and all $x, y \in \mathbb{R}^m$. A set $C$ is said to be convex iff $\mu c_1 + (1-\mu)c_2 \in C$ for all $\mu \in [0,1]$ and all $c_1, c_2 \in C$. We say that $f$ is a homogeneous polynomial of degree $q$ iff $f$ is a polynomial and $f(\alpha x) = \alpha^q f(x)$ for all $\alpha \ge 0$ and all $x \in \mathbb{R}^m$. The function $f : \mathbb{R}^m \to \mathbb{R}$ is said to be a separable and homogeneous polynomial of degree $q$ iff $f(x) = \sum_{j=1}^m f_j(x_j)$ for all $x = (x_1, \ldots, x_m)$, where each $f_j(\cdot)$ is a homogeneous polynomial of degree $q$ on $\mathbb{R}$.

Let $f_i$, $i = 1, \ldots, p$, be (possibly nonconvex) separable and homogeneous polynomials on $\mathbb{R}^m$ of degree $q$, where $q \in \mathbb{N}$. Let $\Delta$ be a compact box, i.e., $\Delta := \prod_{j=1}^m \Delta_j$, where each $\Delta_j$ is a compact interval of $\mathbb{R}$. Consider the joint range of $\{f_1, \ldots, f_p\}$ over $\Delta$, defined by
$$R_\Delta(f_1, \ldots, f_p) := \{(f_1(x), \ldots, f_p(x)) : x \in \Delta\}.$$
Below, we present a lemma showing that $R_\Delta(f_1, \ldots, f_p)$ is always convex. This hidden convexity lemma will play an important role in our nonconvex minimax theorem.

Lemma 2.1. Let $\Delta$ be a compact box in $\mathbb{R}^m$. Let $f_i$, $i = 1, \ldots, p$, be separable and homogeneous polynomials on $\mathbb{R}^m$ of degree $q$ ($q \in \mathbb{N}$). Then $R_\Delta(f_1, \ldots, f_p)$ is a convex set in $\mathbb{R}^p$.

Proof. Since $\Delta$ is a compact box in $\mathbb{R}^m$, we can write $\Delta = \prod_{j=1}^m \Delta_j$, where $\Delta_j$, $j = 1, \ldots, m$, are compact intervals in $\mathbb{R}$. Moreover, since each $f_i$, $i = 1, \ldots, p$, is a separable and homogeneous polynomial on $\mathbb{R}^m$ of degree $q$, we can express $f_i(x) = \sum_{j=1}^m f_{ij}(x_j)$ for all $x = (x_1, \ldots, x_m)$, where each $f_{ij} : \mathbb{R} \to \mathbb{R}$ is given by $f_{ij}(x) := a_i^j x^q$ for some $a_i^j \in \mathbb{R}$, $i = 1, \ldots, p$, $j = 1, \ldots, m$. We first show that
$$R_\Delta(f_1, \ldots, f_p) = \sum_{j=1}^m \{(f_{1j}(x_j), \ldots, f_{pj}(x_j)) : x_j \in \Delta_j\}. \quad (1)$$

To see (1), take $(u_1, \ldots, u_p) \in R_\Delta(f_1, \ldots, f_p)$. Then $(u_1, \ldots, u_p) \in \{(f_1(x), \ldots, f_p(x)) : x \in \prod_{j=1}^m \Delta_j\}$, and so there exists $x = (x_1, \ldots, x_m) \in \prod_{j=1}^m \Delta_j$ such that $u_i = f_i(x) = \sum_{j=1}^m f_{ij}(x_j)$, $i = 1, \ldots, p$. Thus, $(u_1, \ldots, u_p) \in \sum_{j=1}^m \{(f_{1j}(x_j), \ldots, f_{pj}(x_j)) : x_j \in \Delta_j\}$, and so $R_\Delta(f_1, \ldots, f_p) \subseteq \sum_{j=1}^m \{(f_{1j}(x_j), \ldots, f_{pj}(x_j)) : x_j \in \Delta_j\}$. The converse inclusion can be verified in a similar way.

Now, by (1), it suffices to show that, for each $j = 1, \ldots, m$,
$$\{(f_{1j}(z), \ldots, f_{pj}(z)) : z \in \Delta_j\} \ \text{is a convex set.} \quad (2)$$
(Indeed, suppose that (2) is true. Since a sum of convex sets is again a convex set, the conclusion follows from (1).)

To see (2), fix an arbitrary $j \in \{1, \ldots, m\}$. Since $\Delta_j$ is a convex compact set in $\mathbb{R}$, we may assume that $\Delta_j = [\alpha_j, \beta_j]$. Then
$$\{(f_{1j}(z), \ldots, f_{pj}(z)) : z \in \Delta_j\} = \{(a_1^j z^q, \ldots, a_p^j z^q) : z \in [\alpha_j, \beta_j]\}.$$
Since $z \mapsto z^q$ is a continuous map on $\mathbb{R}$ and $[\alpha_j, \beta_j]$ is a compact connected set in $\mathbb{R}$, the set $C_j = \{z^q : z \in [\alpha_j, \beta_j]\}$ is also compact and connected in $\mathbb{R}$, and hence $C_j$ is a compact interval. This, together with
$$\{(a_1^j z^q, \ldots, a_p^j z^q) : z \in [\alpha_j, \beta_j]\} = \bigcup_{t \in C_j} t\,\{(a_1^j, \ldots, a_p^j)\} = \{t\,(a_1^j, \ldots, a_p^j) : t \in C_j\},$$
implies that $\{(a_1^j z^q, \ldots, a_p^j z^q) : z \in [\alpha_j, \beta_j]\}$ is a convex set (a line segment in $\mathbb{R}^p$). Therefore, for each $j = 1, \ldots, m$, $\{(f_{1j}(z), \ldots, f_{pj}(z)) : z \in \Delta_j\}$ is a convex set. This proves (2) and completes the proof.
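The following Python sketch illustrates Lemma 2.1 on a small instance; the quartic coefficient matrix, the box $[-1,1]^2$ and the membership test below are illustrative choices and are not part of the original argument. It samples convex combinations of points of the joint range of two separable homogeneous quartics and checks that they remain in the range, using the description of the range suggested by the proof (here $C_j = \{z^4 : z \in [-1,1]\} = [0,1]$, so the range is the image of $[0,1]^2$ under the coefficient matrix).

```python
import numpy as np

# Illustrative data (not from the paper): two separable homogeneous quartics
#   f1(x) =  x1^4 + 0.5*x2^4,   f2(x) = -2*x1^4 + x2^4,
# over the box Delta = [-1,1]^2, i.e. f_i(x) = sum_j A[i, j] * x_j^4.
A = np.array([[1.0, 0.5],
              [-2.0, 1.0]])
q = 4

def F(x):
    """Joint range map x -> (f1(x), f2(x))."""
    return A @ (np.asarray(x) ** q)

def in_range(u, tol=1e-9):
    """Membership test for R_Delta = { A t : t in [0,1]^2 } (A is invertible here)."""
    t = np.linalg.solve(A, u)
    return bool(np.all(t >= -tol) and np.all(t <= 1 + tol))

rng = np.random.default_rng(0)
for _ in range(10_000):
    x, y = rng.uniform(-1, 1, size=2), rng.uniform(-1, 1, size=2)
    lam = rng.uniform()
    u = lam * F(x) + (1 - lam) * F(y)     # convex combination of two range points
    assert in_range(u), "convexity of the joint range would be violated"

print("All sampled convex combinations of range points lie in R_Delta, as Lemma 2.1 predicts.")
```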

Definition 2.1. Let $q \in \mathbb{N}$. We define $S_q$, the set of all separable homogeneous polynomials of degree $q$ (up to an additive constant), by
$$S_q = \Big\{ f : f(x) = \sum_{j=1}^m a^j x_j^q + b, \ a^j, b \in \mathbb{R}, \ j = 1, \ldots, m \Big\}.$$
Note that translation preserves convexity. Thus, the following corollary follows immediately from the preceding lemma (Lemma 2.1).

Corollary 2.1. Let $\Delta$ be a compact box in $\mathbb{R}^m$. Let $q \in \mathbb{N}$ and $f_i \in S_q$, $i = 1, \ldots, p$. Then $R_\Delta(f_1, \ldots, f_p)$ is a convex set in $\mathbb{R}^p$.

3 Nonconvex Minimax Theorem

Using the joint range convexity of separable homogeneous polynomials, we now present our promised nonconvex minimax theorem. Our proof follows a similar line to the classical proof of the minimax theorem for convex-concave bifunctions presented in [6]. However, for the convenience of the reader, we present a complete and self-contained proof here.

Theorem 3.1. Let $\Delta$ be a compact box in $\mathbb{R}^m$. Let $q \in \mathbb{N}$ and let $A$ be a convex subset of $\mathbb{R}^n$. Consider a bifunction $f : \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ such that
(1) for each fixed $y \in A$, $f(\cdot, y) \in S_q$;
(2) for each fixed $x \in \Delta$, $f(x, \cdot)$ is a convex function.
Then
$$\inf_{y \in A} \max_{x \in \Delta} f(x, y) = \max_{x \in \Delta} \inf_{y \in A} f(x, y).$$

Proof. Since $\inf_{y \in A} \max_{x \in \Delta} f(x, y) \ge \max_{x \in \Delta} \inf_{y \in A} f(x, y)$ always holds, it suffices to show that
$$\inf_{y \in A} \max_{x \in \Delta} f(x, y) \le \max_{x \in \Delta} \inf_{y \in A} f(x, y).$$
To see this, let $\alpha$ be such that $\max_{x \in \Delta} \inf_{y \in A} f(x, y) < \alpha$. Then, for each $x \in \Delta$, there exists $y_x \in A$ such that $f(x, y_x) < \alpha$.

Since $f(\cdot, y_x)$ is continuous, there exists an open neighbourhood $V_x$ of $x$ such that
$$f(u, y_x) < \alpha \quad \text{for all } u \in V_x. \quad (3)$$
Since $\Delta$ is compact and $\Delta \subseteq \bigcup_{x \in \Delta} V_x$, we can find $x_1, \ldots, x_p \in \Delta$ such that $\Delta \subseteq \bigcup_{i=1}^p V_{x_i}$. Let $y_i = y_{x_i}$ and consider the sets
$$C_1 := \mathrm{conv}\,\{(f(x, y_1) - \alpha, \ldots, f(x, y_p) - \alpha) : x \in \Delta\} \quad \text{and} \quad C_2 := \mathbb{R}^p_+,$$
where $\mathrm{conv}\,P$ denotes the convex hull of a set $P$. Clearly, $C_1$ and $C_2$ are both convex sets and $\mathrm{int}\, C_2 \neq \emptyset$.

Next, we show that $C_1 \cap \mathrm{int}\, C_2 = \emptyset$. Otherwise, there exists $(u_1, \ldots, u_p) \in \mathrm{int}\, \mathbb{R}^p_+$ with $(u_1, \ldots, u_p) \in C_1 = \mathrm{conv}\,\{(f(x, y_1) - \alpha, \ldots, f(x, y_p) - \alpha) : x \in \Delta\}$. Thus, there exist $\bar{x}_1, \ldots, \bar{x}_s \in \Delta$, $s \in \mathbb{N}$, and $\lambda_j \ge 0$, $j = 1, \ldots, s$, with $\sum_{j=1}^s \lambda_j = 1$ such that, for each $i = 1, \ldots, p$,
$$0 < u_i = \sum_{j=1}^s \lambda_j \big(f(\bar{x}_j, y_i) - \alpha\big) = \sum_{j=1}^s \lambda_j f(\bar{x}_j, y_i) - \alpha. \quad (4)$$
Let $f_i(x) = f(x, y_i)$, $i = 1, \ldots, p$. By our assumption, each $f_i \in S_q$, $i = 1, \ldots, p$. This, together with Corollary 2.1, implies that $R_\Delta(f_1, \ldots, f_p) := \{(f_1(x), \ldots, f_p(x)) : x \in \Delta\}$ is a convex set in $\mathbb{R}^p$. Note that, for each $j = 1, \ldots, s$,
$$\big(f(\bar{x}_j, y_1), f(\bar{x}_j, y_2), \ldots, f(\bar{x}_j, y_p)\big) = \big(f_1(\bar{x}_j), f_2(\bar{x}_j), \ldots, f_p(\bar{x}_j)\big) \in R_\Delta(f_1, \ldots, f_p).$$
Thus, the convex combination satisfies
$$\sum_{j=1}^s \lambda_j \big(f(\bar{x}_j, y_1), f(\bar{x}_j, y_2), \ldots, f(\bar{x}_j, y_p)\big) \in R_\Delta(f_1, \ldots, f_p),$$
and hence there exists $x_0 \in \Delta$ such that
$$\sum_{j=1}^s \lambda_j f(\bar{x}_j, y_i) = f_i(x_0) = f(x_0, y_i), \quad i = 1, \ldots, p.$$

This, together with (4), gives
$$f(x_0, y_i) > \alpha \quad \text{for all } i = 1, \ldots, p. \quad (5)$$
On the other hand, since $x_0 \in \Delta$ and $\Delta \subseteq \bigcup_{i=1}^p V_{x_i}$, there exists some $i_0 \in \{1, \ldots, p\}$ such that $x_0 \in V_{x_{i_0}}$. Let $y_{i_0} = y_{x_{i_0}}$. This, together with (3), implies that
$$f(x_0, y_{i_0}) < \alpha.$$
This contradicts (5), and so $C_1 \cap \mathrm{int}\, C_2 = \emptyset$.

Thus, by the convex separation theorem, there exist $\mu_i \in \mathbb{R}$, $i = 1, \ldots, p$, with $\sum_{i=1}^p \mu_i = 1$ such that
$$\sum_{i=1}^p \mu_i \big(f(x, y_i) - \alpha\big) \le \sum_{i=1}^p \mu_i u_i \quad \text{for all } u_i \ge 0 \text{ and all } x \in \Delta.$$
By letting $u_i \to +\infty$ if necessary, we see that each $\mu_i \ge 0$, $i = 1, \ldots, p$. Taking $u = 0$ then gives
$$\sum_{i=1}^p \mu_i f(x, y_i) \le \alpha \quad \text{for all } x \in \Delta.$$
Let $y_0 := \sum_{i=1}^p \mu_i y_i \in A$ (thanks to the convexity of $A$). Then, as $f(x, \cdot)$ is convex for all $x \in \Delta$, we have
$$f(x, y_0) \le \sum_{i=1}^p \mu_i f(x, y_i) \le \alpha \quad \text{for all } x \in \Delta.$$
Thus,
$$\inf_{y \in A} \max_{x \in \Delta} f(x, y) \le \max_{x \in \Delta} f(x, y_0) \le \alpha.$$
Since $\alpha$ was an arbitrary number with $\max_{x \in \Delta} \inf_{y \in A} f(x, y) < \alpha$, the conclusion follows.
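As a quick numerical illustration of Theorem 3.1 (the bifunction $f(x,y) = (x_1^4 - 3x_2^4)\,y + y^2$, the box $\Delta = [-1,1]^2$ and the set $A = [-2,2]$ below are illustrative choices, not taken from the text), the following Python sketch brute-forces both sides of the minimax identity on grids; for each fixed $y$ the map $f(\cdot,y)$ lies in $S_4$ and for each fixed $x$ the map $f(x,\cdot)$ is convex, so the theorem applies and both values should agree up to grid resolution.

```python
import numpy as np

# Illustrative instance: f(x, y) = (x1^4 - 3*x2^4) * y + y^2 on Delta = [-1,1]^2, A = [-2,2].
def f(x1, x2, y):
    return (x1**4 - 3.0 * x2**4) * y + y**2

xs = np.linspace(-1.0, 1.0, 101)    # grid over the box Delta
ys = np.linspace(-2.0, 2.0, 201)    # grid over the convex set A

X1, X2, Y = np.meshgrid(xs, xs, ys, indexing="ij")
vals = f(X1, X2, Y)

inf_max = vals.max(axis=(0, 1)).min()   # inf_y max_x f(x, y)
max_inf = vals.min(axis=2).max()        # max_x inf_y f(x, y)

print(f"inf_y max_x f = {inf_max:.6f}")
print(f"max_x inf_y f = {max_inf:.6f}")
assert abs(inf_max - max_inf) < 1e-2    # equal up to grid discretization
```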

Next, we provide three corollaries which give easily verifiable conditions for the minimax theorem to hold. In particular, the last one is a version of the famous von Neumann minimax theorem.

Corollary 3.1. Let $\Delta$ be a compact box in $\mathbb{R}^m$. Let $q \in \mathbb{N}$ and let $A$ be a convex subset of $\mathbb{R}^n$. Let $f_1 : \mathbb{R}^m \to \mathbb{R}$ be a separable and homogeneous polynomial of degree $q$, and let $f_2 : \mathbb{R}^n \to \mathbb{R}$ be an affine function. Then
$$\inf_{y \in A} \max_{x \in \Delta} f_1(x) f_2(y) = \max_{x \in \Delta} \inf_{y \in A} f_1(x) f_2(y).$$

Proof. Consider the bifunction $f : \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}$ defined by $f(x, y) = f_1(x) f_2(y)$. Note that, for each fixed $y \in \mathbb{R}^n$, $f(\cdot, y)$ is a separable and homogeneous polynomial of degree $q$, and, for each fixed $x \in \mathbb{R}^m$, $f(x, \cdot)$ is an affine function. Thus, the conclusion follows from Theorem 3.1.

Corollary 3.2. Let $\Delta$ be a compact box in $\mathbb{R}^m$. Let $q \in \mathbb{N}$ and let $A$ be a convex subset of $\mathbb{R}^n$. Let $f_1 : \mathbb{R}^m \to \mathbb{R}$ be a non-negative, separable and homogeneous polynomial of degree $q$, and let $f_2 : \mathbb{R}^n \to \mathbb{R}$ be a convex function. Then
$$\inf_{y \in A} \max_{x \in \Delta} f_1(x) f_2(y) = \max_{x \in \Delta} \inf_{y \in A} f_1(x) f_2(y).$$

Proof. Consider the bifunction $f : \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}$ defined by $f(x, y) = f_1(x) f_2(y)$. Note that, for each fixed $y \in \mathbb{R}^n$, $f(\cdot, y)$ is a separable and homogeneous polynomial of degree $q$, and, for each fixed $x \in \mathbb{R}^m$, $f(x, \cdot)$ is a convex function (since $f_1$ is non-negative and $f_2$ is convex). Thus, the conclusion follows from Theorem 3.1.

Corollary 3.3. Let $m, n \in \mathbb{N}$. Let $\Delta = \{x = (x_1, \ldots, x_m) \in \mathbb{R}^m : |x_i| \le 1\}$ and let $U \in \mathbb{R}^{m \times n}$. Then
$$\inf_{y \in \mathbb{R}^n} \max_{x \in \Delta} \langle x, Uy \rangle = \max_{x \in \Delta} \inf_{y \in \mathbb{R}^n} \langle x, Uy \rangle.$$

Proof. Let $A = \mathbb{R}^n$. Consider the bifunction $f : \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}$ defined by $f(x, y) = \langle x, Uy \rangle$. For each fixed $y \in \mathbb{R}^n$, $f(\cdot, y)$ is a linear function and, for each fixed $x \in \Delta$, $f(x, \cdot)$ is also a linear function. Thus, the conclusion follows from Theorem 3.1, since any linear function is in particular convex and belongs to the set $S_1$.
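As a quick check of Corollary 3.3 (this verification is an added illustration, not part of the original text), both sides can be evaluated in closed form for any $U$:
$$\max_{x \in \Delta} \langle x, Uy \rangle = \|Uy\|_1 \quad \text{for every } y \in \mathbb{R}^n, \qquad \text{so} \qquad \inf_{y \in \mathbb{R}^n} \max_{x \in \Delta} \langle x, Uy \rangle = 0 \ \ (\text{attained at } y = 0),$$
while
$$\inf_{y \in \mathbb{R}^n} \langle x, Uy \rangle = \inf_{y \in \mathbb{R}^n} \langle U^\top x, y \rangle = \begin{cases} 0, & \text{if } U^\top x = 0, \\ -\infty, & \text{otherwise,} \end{cases}$$
so that $\max_{x \in \Delta} \inf_{y \in \mathbb{R}^n} \langle x, Uy \rangle = 0$ as well, the maximum being attained, for example, at $x = 0$.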

Next, we present an example illustrating Corollary 3.1.

Example 3.1. Let $m = 2$ and $n = 1$. Let $\Delta = [-1, 1] \times [-1, 1]$ and $A = \mathbb{R}$. Consider the bifunction $f : \mathbb{R}^2 \times \mathbb{R} \to \mathbb{R}$,
$$f(x, y) := (x_1^4 - x_2^4)(y - 1).$$
Then it can be verified that
$$\max_{(x_1, x_2) \in \Delta} (x_1^4 - x_2^4)(y - 1) = |y - 1| \quad \text{for all } y \in \mathbb{R},$$
and so
$$\inf_{y \in A} \max_{x \in \Delta} f(x, y) = 0.$$
Moreover,
$$\inf_{y \in A} (x_1^4 - x_2^4)(y - 1) = \begin{cases} -\infty, & \text{if } x_1^4 - x_2^4 < 0,\ |x_1| \le 1,\ |x_2| \le 1, \\ 0, & \text{if } x_1^4 - x_2^4 = 0,\ |x_1| \le 1,\ |x_2| \le 1, \\ -\infty, & \text{if } x_1^4 - x_2^4 > 0,\ |x_1| \le 1,\ |x_2| \le 1, \end{cases}$$
and hence
$$\max_{x \in \Delta} \inf_{y \in A} f(x, y) = 0.$$
Thus we see that
$$\inf_{y \in A} \max_{x \in \Delta} f(x, y) = \max_{x \in \Delta} \inf_{y \in A} f(x, y).$$
On the other hand, this equality can also be seen from Corollary 3.1, since, for each fixed $y \in \mathbb{R}$, $f(\cdot, y)$ is a homogeneous and separable polynomial of degree 4 and, for each fixed $x \in \mathbb{R}^2$, $f(x, \cdot)$ is affine.
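The computation in Example 3.1 can also be checked numerically; the brute-force sketch below (an added illustration with grids chosen for convenience) truncates the unbounded set $A = \mathbb{R}$ to $[-5,5]$, which does not affect either optimal value since both are attained well inside this interval.

```python
import numpy as np

# Numerical check of Example 3.1: f(x, y) = (x1^4 - x2^4) * (y - 1) on Delta = [-1,1]^2, A = R.
def f(x1, x2, y):
    return (x1**4 - x2**4) * (y - 1.0)

xs = np.linspace(-1.0, 1.0, 101)
ys = np.linspace(-5.0, 5.0, 201)    # truncation of A = R; both optima occur inside [-5, 5]

X1, X2, Y = np.meshgrid(xs, xs, ys, indexing="ij")
vals = f(X1, X2, Y)

inf_max = vals.max(axis=(0, 1)).min()   # inf_y max_x f  -> 0, attained at y = 1
max_inf = vals.min(axis=2).max()        # max_x inf_y f  -> 0, attained when x1^4 = x2^4

print(f"inf_y max_x f = {inf_max:.6f}")   # ~0.0
print(f"max_x inf_y f = {max_inf:.6f}")   # ~0.0
```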

4 Application

Consider the following nonconvex separable homogeneous polynomial programming problem with bounded box constraints:
$$(P) \qquad \min_{x \in \mathbb{R}^n} \ p(x) \quad \text{s.t.} \quad x \in \prod_{i=1}^n [-1, 1],$$
where $p$ is a separable homogeneous (in general nonconvex) polynomial of degree $2q$ ($q \in \mathbb{N}$). In this section, as a direct application of our nonconvex minimax theorem, we obtain a zero duality gap result for problem (P). (For other approaches to establishing zero duality gap results, one may consult [7,8,9,10,11,12].)

Note that the constraint can be equivalently rewritten as $x_i^{2q} \le 1$, $i = 1, \ldots, n$. Thus, the Lagrangian dual of (P) can be formulated as
$$(DP) \qquad \sup_{y \in \mathbb{R}^n_+} \inf_{x \in \mathbb{R}^n} \Big\{ p(x) + \sum_{i=1}^n y_i (x_i^{2q} - 1) \Big\}.$$
As a corollary of Theorem 3.1, we now show that a zero duality gap holds between (P) and its Lagrangian dual (DP).

Theorem 4.1. For the dual pair (P) and (DP), the following zero duality gap result holds:
$$\min_{x \in \prod_{i=1}^n [-1,1]} p(x) = \sup_{y \in \mathbb{R}^n_+} \inf_{x \in \mathbb{R}^n} \Big\{ p(x) + \sum_{i=1}^n y_i (x_i^{2q} - 1) \Big\}.$$

Proof. Let $A = \mathbb{R}^n_+$. For each $t > 1$, denote $\Delta_t = \prod_{i=1}^n [-t, t]$. Consider the bifunction $f : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ defined by
$$f(x, y) = -p(x) - \sum_{i=1}^n y_i (x_i^{2q} - 1),$$
where $x = (x_1, \ldots, x_n)$ and $y = (y_1, \ldots, y_n)$. Clearly, for each fixed $y$, $f(\cdot, y) \in S_{2q}$, and, for each fixed $x$, $f(x, \cdot)$ is affine (hence convex). Then, from Theorem 3.1, we have, for each $t > 1$,
$$\inf_{y \in A} \max_{x \in \Delta_t} f(x, y) = \max_{x \in \Delta_t} \inf_{y \in A} f(x, y).$$
It can be verified that
$$\inf_{y \in A} \max_{x \in \Delta_t} f(x, y) = -\sup_{y \in \mathbb{R}^n_+} \min_{x \in \prod_{i=1}^n [-t, t]} \Big\{ p(x) + \sum_{i=1}^n y_i (x_i^{2q} - 1) \Big\}.$$
Moreover, for each $x \in \Delta_t = \prod_{i=1}^n [-t, t]$,
$$\inf_{y \in A} f(x, y) = \inf_{y \in A} \Big\{ -p(x) - \sum_{i=1}^n y_i (x_i^{2q} - 1) \Big\} = \begin{cases} -p(x), & \text{if } x \in \prod_{i=1}^n [-1, 1], \\ -\infty, & \text{else.} \end{cases}$$

Thus,
$$\max_{x \in \Delta_t} \inf_{y \in A} f(x, y) = \max_{x \in \prod_{i=1}^n [-1,1]} \{-p(x)\}.$$
It follows that, for each $t > 1$,
$$\min_{x \in \prod_{i=1}^n [-1,1]} p(x) = \sup_{y \in \mathbb{R}^n_+} \min_{x \in \prod_{i=1}^n [-t,t]} \Big\{ p(x) + \sum_{i=1}^n y_i (x_i^{2q} - 1) \Big\}. \quad (6)$$
Let $p(x) = \sum_{i=1}^n a_i x_i^{2q}$. Note that there exists $t_0 > 1$ such that, for each $y \in \mathbb{R}^n_+$ with $y_i \ge -a_i$ for all $i$,
$$\mathrm{argmin}_{x \in \mathbb{R}^n} \Big\{ p(x) + \sum_{i=1}^n y_i (x_i^{2q} - 1) \Big\} \cap \Big( \prod_{i=1}^n [-t_0, t_0] \Big) \neq \emptyset,$$
and, if there exists some $i_0 \in \{1, \ldots, n\}$ such that $y_{i_0} < -a_{i_0}$, then
$$\inf_{x \in \mathbb{R}^n} \Big\{ p(x) + \sum_{i=1}^n y_i (x_i^{2q} - 1) \Big\} = -\infty.$$
Thus,
$$\sup_{y \in \mathbb{R}^n_+} \inf_{x \in \mathbb{R}^n} \Big\{ p(x) + \sum_{i=1}^n y_i (x_i^{2q} - 1) \Big\} = \sup_{y \in \mathbb{R}^n_+} \min_{x \in \prod_{i=1}^n [-t_0, t_0]} \Big\{ p(x) + \sum_{i=1}^n y_i (x_i^{2q} - 1) \Big\},$$
and so, by (6), we have
$$\sup_{y \in \mathbb{R}^n_+} \inf_{x \in \mathbb{R}^n} \Big\{ p(x) + \sum_{i=1}^n y_i (x_i^{2q} - 1) \Big\} = \min_{x \in \prod_{i=1}^n [-1,1]} p(x).$$
This completes the proof.
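The zero duality gap of Theorem 4.1 can be observed on a small instance; in the Python sketch below, the polynomial $p(x) = -2x_1^4 + 3x_2^4$ (so $n = 2$, $q = 2$) and the multiplier grid are illustrative choices, and the dual function is evaluated coordinate-wise using the fact that $\inf_{x_i \in \mathbb{R}} (a_i + y_i) x_i^{2q}$ is $0$ when $a_i + y_i \ge 0$ and $-\infty$ otherwise.

```python
import itertools
import numpy as np

# Illustrative instance: p(x) = -2*x1^4 + 3*x2^4, minimized over [-1,1]^2 (primal),
# versus its Lagrangian dual (DP) from Theorem 4.1.
a = np.array([-2.0, 3.0])
two_q = 4

# Primal value by brute force over a grid of the box.
grid = np.linspace(-1.0, 1.0, 201)
primal = min(sum(a * np.array(x) ** two_q) for x in itertools.product(grid, grid))

def dual_fn(y):
    """g(y) = inf_x { p(x) + sum_i y_i (x_i^{2q} - 1) } = -sum(y) if a + y >= 0, else -inf."""
    return -np.inf if np.any(a + y < 0) else -np.sum(y)

# Dual value by brute force over a grid of multipliers y >= 0.
ygrid = np.linspace(0.0, 5.0, 251)
dual = max(dual_fn(np.array(y)) for y in itertools.product(ygrid, ygrid))

print(f"primal value  min_box p      = {primal:.4f}")   # expected -2
print(f"dual value    sup_y inf_x L  = {dual:.4f}")      # expected -2 (zero duality gap)
```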

References

1. Craven, B.D., Jeyakumar, V.: Equivalence of a Ky Fan type minimax theorem and a Gordan type alternative theorem. Oper. Res. Lett. 5, 99-102 (1986).
2. Frenk, J.B., Kassay, G.: Lagrangian duality and cone convexlike functions. J. Optim. Theory Appl. 134, 207-222 (2007).
3. Jeyakumar, V.: A generalization of a minimax theorem of Fan via a theorem of the alternative. J. Optim. Theory Appl. 48, 525-533 (1986).
4. Jeyakumar, V., Huy, N.Q., Li, G.: Necessary and sufficient conditions for S-lemma and nonconvex quadratic optimization. Optim. Eng. 10, 491-503 (2009).
5. Jeyakumar, V., Lee, G.M., Li, G.: Alternative theorems for quadratic inequality systems and global quadratic optimization. SIAM J. Optim. 20, 983-1001 (2009).
6. Zalinescu, C.: Convex Analysis in General Vector Spaces. World Scientific, Singapore (2002).
7. Giannessi, F., Mastroeni, G.: Separation of sets and Wolfe duality. J. Global Optim. 42, 401-412 (2008).
8. Jeyakumar, V., Li, G.: Stable zero duality gaps in convex programming: complete dual characterisations with applications to semidefinite programs. J. Math. Anal. Appl. 360, 156-167 (2009).
9. Jeyakumar, V., Li, G.: New dual constraint qualifications characterizing zero duality gaps of convex programs and semidefinite programs. Nonlinear Anal. 71, 2239-2249 (2009).
10. Li, G., Jeyakumar, V.: Qualification-free optimality conditions for convex programs with separable inequality constraints. J. Convex Anal. 16, 845-856 (2009).
11. Li, G., Ng, K.F.: On extension of Fenchel duality and its application. SIAM J. Optim. 19, 1489-1509 (2008).
12. Mastroeni, G.: Some applications of the image space analysis to the duality theory for constrained extremum problems. J. Global Optim., to appear.