CONVEX OPTIMIZATION VIA LINEARIZATION
Miguel A. Goberna, Universidad de Alicante
Iberian Conference on Optimization, Coimbra, 16-18 November 2006

Notation. X denotes a locally convex Hausdorff topological vector space and X* its topological dual endowed with the weak*-topology. ℝ^(T)_+ := {λ : T → ℝ_+ : |supp λ| < +∞}, with supp λ := {t ∈ T : λ_t ≠ 0}. The asymptotic (or recession) cone of C ⊂ X is
C_∞ = {z ∈ X : C + z ⊂ C}
    = {z ∈ X : ∃c ∈ C such that c + λz ∈ C ∀λ ≥ 0}
    = {z ∈ X : c + λz ∈ C ∀c ∈ C and ∀λ ≥ 0}.

For a set D ⊂ X, the normal cone of D at x is N_D(x) = {u ∈ X* : u(y − x) ≤ 0 for all y ∈ D} if x ∈ D, and N_D(x) = ∅ if x ∉ D. The effective domain, the graph, and the epigraph of h : X → ℝ ∪ {+∞} are denoted by dom h, gph h and epi h. The subdifferential of h at a point x ∈ dom h is ∂h(x) = {u ∈ X* : h(y) ≥ h(x) + u(y − x) ∀y ∈ X}.

The conjugate of h is h*(v) = sup{v(x) − h(x) : x ∈ dom h}. If h is a proper l.s.c. convex function, then h* is also a proper l.s.c. convex function and its conjugate (the biconjugate of h) satisfies h** = h. In particular, if f(x) = a'x + b on ℝⁿ, then f*(u) = sup_{x∈ℝⁿ} {(u − a)'x − b} = δ_{{a}}(u) − b, i.e., f* = δ_{{a}} − b.
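A small numerical sketch (not part of the slides; it uses the smooth convex function h(x) = x², whose conjugate is h*(v) = v²/4) illustrating the Legendre-Fenchel transform and the biconjugation h** = h on a grid:

```python
import numpy as np

# Grids for the primal variable x and the dual variable v.
x = np.linspace(-10.0, 10.0, 2001)
v = np.linspace(-10.0, 10.0, 2001)

h = x**2
# Discrete Legendre-Fenchel transform: h*(v) = sup_x { v*x - h(x) }.
h_star = (v[:, None] * x[None, :] - h[None, :]).max(axis=1)
# Biconjugate: h**(x) = sup_v { v*x - h*(v) }.
h_bistar = (x[:, None] * v[None, :] - h_star[None, :]).max(axis=1)

inner = np.abs(x) <= 3.0   # stay away from the grid truncation
print(np.abs(h_star[np.abs(v) <= 3.0] - v[np.abs(v) <= 3.0]**2 / 4).max())  # ~0: h*(v) = v^2/4
print(np.abs(h_bistar[inner] - h[inner]).max())                             # ~0: h** = h
```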

The asymptotic function of h is the function h_∞ such that epi h_∞ = (epi h)_∞. The indicator function of D ⊂ X is δ_D(x) = 0 if x ∈ D, and δ_D(x) = +∞ if x ∉ D. If D is closed and convex, then δ_D is a proper l.s.c. convex function. The support function of D is δ_D*(u) = δ_{cl(conv D)}*(u) = sup_{x∈D} u(x), u ∈ X*. In particular, δ_{ℝⁿ}*(u) = sup_{x∈ℝⁿ} u'x = δ_{{0_n}}(u), i.e., δ_{ℝⁿ}* = δ_{{0_n}}.

LINEARIZING CONVEX SYSTEMS
We consider σ := {f_t(x) ≤ 0, t ∈ T; x ∈ C}, where T is an arbitrary (possibly infinite) index set, C is a nonempty closed convex subset of X, and f_t : X → ℝ ∪ {+∞} is a proper l.s.c. convex function, t ∈ T. In many applications C = X, in which case we write σ := {f_t(x) ≤ 0, t ∈ T}.

Given t ∈ T,
f_t(x) ≤ 0 ⟺ f_t**(x) ≤ 0
⟺ u_t(x) − f_t*(u_t) ≤ 0, ∀u_t ∈ dom f_t*
⟺ u_t(x) ≤ f_t*(u_t), ∀u_t ∈ dom f_t*
⟺ u_t(x) ≤ f_t*(u_t) + α, ∀u_t ∈ dom f_t*, ∀α ∈ ℝ_+.
Analogously,
x ∈ C ⟺ δ_C(x) ≤ 0
⟺ u(x) ≤ δ_C*(u), ∀u ∈ dom δ_C*
⟺ u(x) ≤ δ_C*(u) + β, ∀u ∈ dom δ_C*, ∀β ∈ ℝ_+.

Consequently, the following linear systems are equivalent to σ:
{u_t(x) ≤ f_t*(u_t), u_t ∈ dom f_t*, t ∈ T; u(x) ≤ δ_C*(u), u ∈ dom δ_C*}
and
{u_t(x) ≤ f_t*(u_t) + α, u_t ∈ dom f_t*, t ∈ T, α ∈ ℝ_+; u(x) ≤ δ_C*(u) + β, u ∈ dom δ_C*, β ∈ ℝ_+}.
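A minimal numerical sketch of this linearization (my own one-dimensional example, f(x) = x² − 1 with conjugate f*(u) = u²/4 + 1 and C = ℝ, not from the slides): the convex inequality f(x) ≤ 0 holds exactly when every linear inequality u·x ≤ f*(u) does, because sup_u {u·x − f*(u)} = f(x).

```python
import numpy as np

def f(x):                    # proper l.s.c. convex function
    return x**2 - 1.0

def f_star(u):               # its conjugate: f*(u) = u**2/4 + 1
    return u**2 / 4.0 + 1.0

u = np.linspace(-20.0, 20.0, 4001)          # sample of dom f* = R
for x in [-2.0, -1.0, -0.3, 0.0, 0.7, 1.0, 3.0]:
    sup_affine = np.max(u * x - f_star(u))  # sup over the linearized inequalities at x
    print(x, f(x) <= 0, sup_affine <= 1e-9, np.isclose(sup_affine, f(x)))
```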

EXISTENCE THEOREMS
For linear systems [Chu (1966), Goberna et al. (1995)], the following statements are equivalent:
(i) {a_t(x) ≤ b_t, t ∈ T} is consistent;
(ii) (0, −1) ∉ cl cone {(a_t, b_t), t ∈ T};
(iii) cl cone {(a_t, b_t), t ∈ T; (0, 1)} ≠ cl cone {a_t, t ∈ T} × ℝ.

Associating with σ the convex cones
M = cone {∪_{t∈T} dom f_t* ∪ dom δ_C*},
N = cone {∪_{t∈T} gph f_t* ∪ gph δ_C*},
K = cone {∪_{t∈T} epi f_t* ∪ epi δ_C*},
P = cone {∪_{t∈T} epi f_t*} + epi δ_C*,
we get the equivalence of the following statements:
(i) σ is consistent;
(ii) (0, −1) ∉ cl K (respectively, cl N, cl P);
(iii) cl K ≠ cl M × ℝ.

1. Short history of these cones:
K: Chu (1966), in linear inequality systems.
M, N and K: Charnes, Cooper & Kortanek (1965-1969), in LSIP.
P: Jeyakumar, Dinh & Lee (2004), in CP.
2. Closedness:
P is weak*-closed ⟸ K is weak*-closed ⟸ N is weak*-closed and σ is consistent.
The converse statements are not true, and the consistency of σ is not superfluous.

Example 1: C = X = ℝ and σ = {f_1(x) = x ≤ 0}. Since dom f_1* = {1} and dom δ_ℝ* = {0}, epi δ_ℝ* = ℝ_+(0, 1) and epi f_1* + epi δ_ℝ* = (1, 0) + ℝ_+(0, 1).

Example 2: C = X = ℝ² and σ = {f_t(x) = tx_1 + t²x_2 + 1 ≤ 0, t ∈ [−1, 1]}. Since gph δ_{ℝ²}* = {(0, 0, 0)} and gph f_t* = {(t, t², −1)} for all t, N = cone {∪_{t∈[−1,1]} gph f_t* ∪ gph δ_{ℝ²}*} is closed whereas K is non-closed.

The following recession condition was introduced by Borwein (1981):
(RC) C_∞ ∩ {x ∈ X : (f_t)_∞(x) ≤ 0, t ∈ T} = {0}.
Generalized Fan's theorem: if either
(a) K (respectively N) is weak*-closed, or
(b) (RC) holds, and K (respectively N) is solid when X is infinite dimensional,
then σ is consistent iff for every λ ∈ ℝ^(T)_+ there exists x_λ ∈ C such that Σ_{t∈T} λ_t f_t(x_λ) ≤ 0.
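A toy numerical illustration of Fan's consistency condition (my own finite examples on C = ℝ, not from the slides): for the inconsistent system {x + 1 ≤ 0, −x + 1 ≤ 0} the multiplier λ = (1, 1) makes Σλ_t f_t strictly positive everywhere, certifying inconsistency, while for the consistent system {x − 1 ≤ 0, −x − 1 ≤ 0} every λ ≥ 0 admits a point where Σλ_t f_t is nonpositive.

```python
import numpy as np

xs = np.linspace(-10.0, 10.0, 2001)
rng = np.random.default_rng(0)

# Inconsistent system: x <= -1 and x >= 1.  With lambda = (1, 1) the sum is identically 2 > 0.
print(np.min(1.0 * (xs + 1.0) + 1.0 * (-xs + 1.0)))        # 2.0 > 0: Fan's condition fails

# Consistent system: -1 <= x <= 1.  For any lambda >= 0 the sum is nonpositive at x = 0,
# where it equals -(lambda_1 + lambda_2), in line with Fan's characterization.
for _ in range(3):
    lam = rng.uniform(0.0, 5.0, size=2)
    print(lam, np.min(lam[0] * (xs - 1.0) + lam[1] * (-xs - 1.0)) <= 0.0)
```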

Some previous versions.
Under closedness conditions: Bohnenblust, Karlin & Shapley (1950), with X = ℝⁿ and C compact; Fan (1957), assuming that f_t : X → ℝ for all t ∈ T and C is compact; Shioji & Takahashi (1988), with C compact.
Under recession conditions: Rockafellar (1970), with X = ℝⁿ.

ASYMPTOTIC FARKAS LEMMA
From now on we assume that σ is consistent, with solution set A. Given v ∈ X* and α ∈ ℝ, v(x) ≤ α is a consequence of the consistent system {a_t(x) ≤ b_t, t ∈ T} iff (v, α) ∈ cl cone {(a_t, b_t), t ∈ T; (0, 1)}. Applying this result (Chu, 1966) to the linearization of σ we get the Asymptotic Farkas Lemma for linear inequalities: given v ∈ X* and α ∈ ℝ, v(x) ≤ α is a consequence of σ iff (v, α) ∈ cl K.
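A finite-dimensional sketch of the linear Farkas statement (my own example in ℝ², not from the slides): for the system {x_1 ≤ 1, x_2 ≤ 1, −x_1 − x_2 ≤ 0}, the consequence x_1 + x_2 ≤ 2 corresponds to membership of (v, α) = (1, 1, 2) in cone {(a_t, b_t); (0, 1)}, which we test as a small linear feasibility problem (the cone is finitely generated, hence closed).

```python
import numpy as np
from scipy.optimize import linprog

# System sigma: x1 <= 1, x2 <= 1, -x1 - x2 <= 0, written as a_t(x) <= b_t.
gens = np.array([[1.0, 0.0, 1.0],      # (a_1, b_1)
                 [0.0, 1.0, 1.0],      # (a_2, b_2)
                 [-1.0, -1.0, 0.0],    # (a_3, b_3)
                 [0.0, 0.0, 1.0]]).T   # (0, 1): the trivially valid inequality 0 <= 1

def in_cone(v, alpha):
    # (v, alpha) in cone{(a_t, b_t); (0, 1)}  <=>  some nonnegative combination matches it.
    target = np.append(v, alpha)
    res = linprog(c=np.zeros(gens.shape[1]), A_eq=gens, b_eq=target,
                  bounds=[(0, None)] * gens.shape[1])
    return res.success

print(in_cone(np.array([1.0, 1.0]), 2.0))  # True:  x1 + x2 <= 2 is a consequence
print(in_cone(np.array([1.0, 1.0]), 1.0))  # False: x1 + x2 <= 1 fails at the feasible point (1, 1)
```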

From now on, f : X → ℝ ∪ {+∞} denotes a proper l.s.c. convex function. Another consequence is the Asymptotic Farkas Lemma for convex inequalities: f(x) ≤ α is a consequence of σ iff (0, α) + epi f* ⊂ cl K. From here we get the following characterization of the set containment convex-convex:
A ⊂ {x ∈ X : h_w(x) ≤ 0, w ∈ W} (h_w as f_t) iff ∪_{w∈W} epi h_w* ⊂ cl K.

Precedents. For the Farkas Lemma: see Jeyakumar (2001). For set containment: Goberna & López (1998), with C = X = ℝⁿ and f_t, h_w affine for all t ∈ T, w ∈ W; Mangasarian (2002), with C = X = ℝⁿ, |T| < ∞ and |W| < ∞; Jeyakumar (2003), with C = X = ℝⁿ and h_w affine for all w ∈ W; Goberna, Jeyakumar & Dinh (2006), with C = X = ℝⁿ.

FARKAS-MINKOWSKI SYSTEMS
The following concept was introduced by Charnes, Cooper & Kortanek (1965) in LSIP: σ is FM if K is weak*-closed. Since cl K = epi δ_A*, {δ_A(x) ≤ 0} is a FM representation of A. If σ is FM, then every continuous linear consequence of σ is also a consequence of a finite subsystem of σ. The converse statement holds if σ is linear (but not if σ is convex).
Example 3: Let X = C = ℝⁿ and σ = {f_1(x) := ½‖x‖² ≤ 0}. Since f_1*(v) = ½‖v‖², K = (ℝⁿ × ℝ_{++}) ∪ {0} is not closed.
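A small sketch of why K fails to be closed in Example 3 (taking n = 1 for concreteness, an assumption of mine): the points (1, 1/k) = k·(1/k, 1/k²) belong to K because (1/k, 1/k²) ∈ epi f_1*, yet their limit (1, 0) does not, since the only point of K with last coordinate 0 is the origin.

```python
# f_1(x) = x**2 / 2 on R, so f_1*(v) = v**2 / 2 and K = cone(epi f_1* U epi delta_R*).
for k in [1, 10, 100, 1000]:
    v, r = 1.0 / k, 1.0 / k**2
    assert r >= v**2 / 2       # (v, r) lies in epi f_1*
    print((k * v, k * r))      # k*(v, r) = (1, 1/k) in K, converging to (1, 0), which is not in K
```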

Non-asymptotic Farkas lemma for linear inequalities: let σ be FM, v ∈ X* \ {0} and α ∈ ℝ. Then the following are equivalent:
(i) v(x) ≤ α is a consequence of σ;
(ii) (v, α) ∈ K;
(iii) there exists λ ∈ ℝ^(T)_+ such that v(x) − Σ_{t∈T} λ_t f_t(x) ≤ α for all x ∈ C.
Non-asymptotic Farkas Lemma for convex inequalities: if σ is FM, then f(x) ≤ α is a consequence of σ iff (0, α) + epi f* ⊂ K.

Asymptotic Farkas Lemma for reverse-convex inequalities: if σ is FM, then f(x) ≥ α is a consequence of σ iff (0, −α) ∈ cl (epi f* + K). From here we get the following characterization of the set containment convex-reverse convex:
A ⊂ {x ∈ X : h_w(x) ≥ 0, w ∈ W} (h_w as f_t) iff 0 ∈ ∩_{w∈W} cl (epi h_w* + K).
Precedents: Jeyakumar (2003) and Bot & Wanka (2005), in CSISs.

The following closedness condition was introduced by Burachik & Jeyakumar (2005):
(CC) epi f* + cl K is weak*-closed.
Each of the following conditions implies (CC):
(i) epi f* + K is weak*-closed;
(ii) σ is FM and f is linear;
(iii) σ is FM and f is continuous at some point of A.

Non-asymptotic Farkas Lemma for reverse-convex inequalities: if σ is FM, (CC) holds, and α ∈ ℝ, then the following are equivalent:
(i) f(x) ≥ α is a consequence of σ;
(ii) (0, −α) ∈ epi f* + K;
(iii) there exists λ ∈ ℝ^(T)_+ such that f(x) + Σ_{t∈T} λ_t f_t(x) ≥ α for all x ∈ C.
Precedents: Gwinner (1987) and Dinh, Jeyakumar & Lee (2005), under stronger assumptions.

FM SYSTEMS IN CONVEX OPTIMIZATION
From now on we consider the CP problem
(P) Minimize f(x) s.t. f_t(x) ≤ 0, t ∈ T; x ∈ C.
Solvability theorem: if X = ℝⁿ and σ satisfies (RC), then (P) is solvable. This is not true in reflexive Banach spaces, unless f + δ_A is coercive (Zalinescu, 2002).

Example 4: let X = l² (Hilbert space), f(x) := −Σ_{n=1}^∞ ξ_n/n (so f ∈ X*), and
C := {x = (ξ_n) ∈ l² : |ξ_n| ≤ n, n ∈ ℕ}.
C is a closed convex set which is not bounded (because ne_n ∈ C for every n ∈ ℕ) and such that C_∞ = {0}; thus (RC) holds. Consider c^k := (γ_n^k)_{n≥1}, k = 1, 2, ..., with
γ_n^k := n if n ≤ k, and γ_n^k := 0 if n > k.
We have {c^k} ⊂ C and f(c^k) = −k, k ∈ ℕ, so that f is not bounded from below on C and no minimizer exists.
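A numerical sketch of Example 4 using finite truncations (restricting to the first k coordinates is my own device for illustration): each c^k satisfies the constraints |ξ_n| ≤ n, yet f(c^k) = −k, so the objective is unbounded below on C.

```python
import numpy as np

def f(xi):                              # f(x) = -sum_n xi_n / n, a continuous linear functional
    n = np.arange(1, len(xi) + 1)
    return -np.sum(xi / n)

for k in [1, 5, 25, 125]:
    n = np.arange(1, k + 1)
    c_k = n.astype(float)               # c^k = (1, 2, ..., k, 0, 0, ...)
    assert np.all(np.abs(c_k) <= n)     # c^k is feasible: |xi_n| <= n for every n
    print(k, f(c_k))                    # f(c^k) = -k  ->  -infinity
```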

KKT optimality theorem: assume that σ is FM, that (CC) holds, and let a ∈ A ∩ dom f. Then a is a minimizer of (P) iff there exists λ ∈ ℝ^(T)_+ such that
(i) ∂f_t(a) ≠ ∅, t ∈ supp λ;
(ii) λ_t f_t(a) = 0, t ∈ T; and
(iii) 0 ∈ ∂f(a) + Σ_{t∈T} λ_t ∂f_t(a) + N_C(a).
Precedent: without the FM property, the optimality condition is 0 ∈ ∂f(a) + N_A(a) (Burachik & Jeyakumar, 2005).
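A finite-dimensional toy check of these KKT conditions (my own example, not from the slides): minimize f(x) = x subject to f_1(x) = x² − 1 ≤ 0 on C = ℝ, where the minimizer is a = −1 with multiplier λ_1 = 1/2 and N_C(a) = {0}.

```python
a, lam = -1.0, 0.5

print(a**2 - 1 <= 0)           # a is feasible
print(lam * (a**2 - 1) == 0)   # (ii) complementary slackness: lambda_1 * f_1(a) = 0
print(1 + lam * 2 * a == 0)    # (iii) stationarity: 0 = f'(a) + lambda_1 * f_1'(a)
```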

Now we consider the parametric problem (P_u), for u ∈ ℝ^T, with feasible set A_u:
(P_u) Minimize f(x) subject to f_t(x) ≤ u_t, t ∈ T; x ∈ C.
Defining ψ(x, u) := f(x) + δ_{A_u}(x), ψ : X × ℝ^T → ℝ ∪ {+∞}, we can write
(P_u) Minimize ψ(x, u), x ∈ X.

Then (P) ≡ (P_0): Minimize ψ(x, 0), x ∈ X. The dual problem of (P) is
(D) Maximize −ψ*(0, −λ), λ ∈ ℝ^(T)_+.
Duality theorem: if (P) is bounded, σ is FM, and (CC) holds, then v(D) = v(P) and (D) is solvable.
Precedents: Rockafellar (1974) and Bonnans & Shapiro (2000).

Consider the Lagrange function L : X × ℝ^(T) → ℝ ∪ {+∞}, where
L(x, λ) := f(x) + Σ_{t∈T} λ_t f_t(x), if x ∈ C and λ ∈ ℝ^(T)_+; L(x, λ) := +∞, otherwise.
Lagrange optimality theorem: suppose that σ is FM and that (CC) holds. Then a point a ∈ A is a minimizer of (P) iff there exists λ_0 ∈ ℝ^(T)_+ such that (a, λ_0) is a saddle point of the Lagrangian function L, i.e.,
L(a, λ) ≤ L(a, λ_0) ≤ L(x, λ_0), ∀λ ∈ ℝ^(T)_+, ∀x ∈ C.
In that case, λ_0 is a maximizer of (D).
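Continuing the toy problem from the previous sketch (minimize x subject to x² − 1 ≤ 0 on C = ℝ, my own example), the Lagrangian is L(x, λ) = x + λ(x² − 1); the pair (a, λ_0) = (−1, 1/2) is a saddle point and v(D) = v(P) = −1:

```python
import numpy as np

L = lambda x, lam: x + lam * (x**2 - 1.0)

xs = np.linspace(-3.0, 3.0, 6001)
lams = np.linspace(0.0, 3.0, 3001)

print(np.min(L(xs, 0.5)))      # inf_x L(x, 1/2) = -1, attained at a = -1
print(np.max(L(-1.0, lams)))   # sup_{lam >= 0} L(-1, lam) = -1 (L(-1, .) is constant)
print(np.max([np.min(L(xs, l)) for l in lams]))  # v(D) = max_lam inf_x L = -1 = v(P)
```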

Denote by v(u) the value of (P_u), so that v(P) = v(0). The following stability concepts (Laurent, 1972) involve the value function v : ℝ^T → [−∞, +∞], whose directional derivative at 0 in the direction u is denoted by v'(0, u). (P) is called:
inf-stable if v(0) ∈ ℝ and v is l.s.c. at 0;
inf-dif-stable if v(0) ∈ ℝ and there exists λ_0 ∈ ℝ^(T) such that v'(0, u) ≥ λ_0(u) for all u ∈ ℝ^T.

These concepts are related as follows:
(P) inf-stable ⟺ v(D) = v(P) ∈ ℝ (called normality in Zalinescu, 2002).
(P) inf-dif-stable ⟺ ∂v(0) ≠ ∅ (called calmness in Clarke, 1976) ⟺ v(D) = v(P) and (D) is solvable.

Thus, (P) inf-dif-stable ⟹ (P) inf-stable.
Stability Theorem: if (P) is bounded, σ is FM, and (CC) holds, then (P) is inf-dif-stable.

LOCALLY FARKAS-MINKOWSKI SYSTEMS
The following local c.q. was introduced by Puente & Vera de Serio (1999) in LSIP. It is also equivalent to the so-called basic c.q. in CP (Hiriart-Urruty & Lemaréchal, 1993) if x ∉ int A and sup_{t∈T} f_t is continuous at x:
σ is LFM at x ∈ A if N_A(x) ⊂ N_C(x) + cone (∪_{t∈T(x)} ∂f_t(x)), where T(x) := {t ∈ T : f_t(x) = 0}.
σ is said to be LFM if it is LFM at every feasible point x ∈ A.

As a consequence of the optimality theorem, σ FM ⟹ σ LFM. If σ is LFM at x ∈ A, then every continuous linear consequence of σ which is binding at x is also a consequence of a finite subsystem of σ. The converse statement holds if σ is linear (but not if σ is convex). σ is LFM at a iff the KKT optimality theorem holds for any l.s.c. convex function f such that a ∈ dom f and f is continuous at some point of A.
Precedent: Li & Ng (2005), with real-valued functions and the basic c.q.

References
Bohnenblust, Karlin & Shapley, Games with continuous pay-off, Annals of Math. Studies 24 (1950) 181-192
Bonnans & Shapiro, Perturbation Analysis of Optimization Problems, Springer, 2000
Borwein, The limiting Lagrangian as a consequence of Helly's theorem, JOTA 33 (1981) 497-513
Burachik & Jeyakumar, A dual condition for the convex subdifferential sum formula with applications, J. Convex Anal. 12 (2005) 279-290
Charnes, Cooper & Kortanek, On representations of semi-infinite programs which have no duality gaps, Manag. Sci. 12 (1965) 113-121
Chu, Generalization of some fundamental theorems on linear inequalities, Acta Math. Sinica 16 (1966) 25-40

Clarke, A new approach to Lagrange multipliers, MOR 2 (1976) 165-174
Dinh, Goberna & López, From linear to convex systems: consistency, Farkas lemma and applications, J. Convex Anal. 13 (2006) 279-290
Dinh, Goberna & López, New Farkas-type constraint qualifications in convex infinite programming, ESAIM: COCV, to appear
Dinh, Jeyakumar & Lee, Sequential Lagrangian conditions for convex programs with applications to semidefinite programming, JOTA 125 (2005) 85-112
Fan, Existence theorems and extreme solutions for inequalities concerning convex functions or linear transformations, Math. Zeitschrift 68 (1957) 205-217
Goberna, Jeyakumar & Dinh, Dual characterizations of set containments with strict convex inequalities, JOGO 34 (2006) 33-54

Goberna & López, Linear Semi-infinite Optimization, Wiley, 1998
Goberna, López, Mira & Valls, On the existence of solutions for linear inequality systems, JMAA 192 (1995) 133-150
Gwinner, On results of Farkas type, Numer. Funct. Anal. Optim. 9 (1987) 471-520
Hiriart-Urruty & Lemaréchal, Convex Analysis and Minimization Algorithms I, Springer, 1993
Jeyakumar, Farkas lemma: Generalizations, Encyclopedia of Optimization, Kluwer, II (2001) 87-91
Jeyakumar, Characterizing set containments involving infinite convex constraints and reverse-convex constraints, SIAM J. Optim. 13 (2003) 947-959

Jeyakumar, Lee & Dinh, A new closed cone constraint qualification for convex optimization, manuscript
Laurent, Approximation et optimisation, Hermann, 1972
Mangasarian, Set containment characterization, JOGO 24 (2002) 473-480
Puente & Vera de Serio, Locally Farkas-Minkowski linear semi-infinite systems, TOP 7 (1999) 103-121
Rockafellar, Convex Analysis, Princeton U.P., 1970
Rockafellar, Conjugate Duality and Optimization, CBMS-NSF Reg. Conf. Series in Appl. Math. 16, SIAM, 1974
Shioji & Takahashi, Fan's theorem concerning systems of convex inequalities and its applications, JOTA 135 (1988) 383-398
Zălinescu, Convex Analysis in General Vector Spaces, World Scientific, 2002