Some Perturbation Theory for Linear Programming

James Renegar
School of Operations Research and Industrial Engineering
Cornell University
Ithaca, New York 14853
e-mail: renegar@orie.cornell.edu

October 1992

Research was supported by NSF Grant CCR-9103285 and IBM.

1 Introduction

This paper examines a few relations between solution characteristics of an LP and the amount by which the LP must be perturbed to obtain either a primal infeasible LP or a dual infeasible LP. We consider such solution characteristics as the size of the optimal solution and the sensitivity of the optimal value to data perturbations. We show, for example, that an LP has a large optimal solution, or has a sensitive optimal value, only if the instance is nearly primal infeasible or nearly dual infeasible. The results are not particularly surprising, but they do formalize an interesting viewpoint which apparently has not been made explicit in the linear programming literature.

The results are rather general; several of them are valid for linear programs defined in arbitrary real normed spaces. The main tool employed in the analysis is a Hahn-Banach theorem: given a closed convex set in a normed vector space and a point in the space but not in the set, there exists a continuous linear functional strictly separating the set from the point.

We introduce notation, then the results. Let $X, Y$ denote real vector spaces, each with a norm. We use the same notation (i.e., $\|\cdot\|$) for all norms, it being clear from context which norm is referred to. Let $X^*$ denote the dual space of $X$; this is the space of all continuous linear functionals $c^* \colon X \to \mathbb{R}$ (continuous with respect to the norm topology). Endow $X^*$ with the operator norm; if $c^* \in X^*$ then
$$\|c^*\| := \sup\{c^* x : \|x\| = 1\}.$$
Define $Y^*$ and its norm analogously.

Let $X^{**}$ denote the dual space of $X^*$. Note that $X$ can be viewed as a subset of $X^{**}$; $x \in X$ induces the continuous linear functional on $X^*$ given by $c^* \mapsto c^* x$. If $X = X^{**}$ then $X$ is said to be reflexive. To make this introductory section expositionally clean we assume throughout it that $X$ is reflexive; no such restriction is placed on $Y$. In later sections the requirement that $X$ be reflexive is sometimes removed. Many important normed spaces are reflexive, e.g., finite-dimensional spaces (regardless of the norm) and Hilbert spaces.

Let $L(X, Y)$ denote the space of bounded (i.e., continuous) linear operators from $X$ to $Y$. Endow this space with the usual operator norm; if $A \in L(X, Y)$ then
$$\|A\| := \sup\{\|Ax\| : \|x\| = 1\}.$$

Let $C_X \subseteq X$, $C_Y \subseteq Y$ be convex cones, each with vertex at the origin, i.e., each is closed under multiplication by non-negative scalars and under addition. The cone $C_X$ induces an "ordering" on $X$ by
$$x' \succeq x'' \iff x' - x'' \in C_X.$$
(It is easily verified that we obtain a partial ordering iff the cone $C_X$ is pointed; we do not assume pointedness.) Similarly, $C_Y$ induces an "ordering" on $Y$. In this introductory section we assume $C_X$ and $C_Y$ are closed.

Given $A \in L(X, Y)$, $b \in Y$, $c^* \in X^*$, we define the LP instance $d := (A, b, c^*)$ by
$$\sup_{x \in X} \; c^* x \quad \text{s.t.} \quad Ax \preceq b, \;\; x \succeq 0.$$
Many researchers have studied linear programming in this generality (cf. [1], [2], [3], [5]). Although linear programming from this vantage point is generally referred to by names such as "infinite linear programming," we prefer the phrase "analytic linear programming" because of close connections to functional analysis.

Although we use the symbol "$\preceq$," the reader should note that all common forms of LP are covered by the general setting. For example, what one typically writes as "$Ax = b$" is obtained by letting $C_Y = \{0\}$. Similarly, what one typically expresses as "no non-negativity constraints" is obtained by letting $C_X = X$.

The LP instance $d$ has a natural dual, which we now define. First, the cone $C_X$ has a dual cone in $X^*$ defined by
$$C_X^* := \{\tilde c^* \in X^* : x \in C_X \Rightarrow \tilde c^* x \geq 0\}.$$
Define $C_Y^*$ analogously. The closed cones $C_X^*$ and $C_Y^*$ induce orderings on $X^*$ and $Y^*$ just as $C_X$ and $C_Y$ induce orderings on $X$ and $Y$. Relying on these orderings, the dual $d^*$ of $d := (A, b, c^*)$ is defined as the following LP:
$$\inf_{y^* \in Y^*} \; y^* b \quad \text{s.t.} \quad y^* A \succeq c^*, \;\; y^* \succeq 0.$$
(Here, $y^* A$ is the linear functional $x \mapsto y^*(Ax)$. The linear transformation $y^* \mapsto y^* A$ is an element of $L(Y^*, X^*)$; it is the dual operator of $A$.)

Let $\mathrm{val}(d)$, $\mathrm{val}(d^*)$ denote the optimal values of $d$, $d^*$; if $d$ is infeasible define $\mathrm{val}(d) = -\infty$; if $d^*$ is infeasible define $\mathrm{val}(d^*) = \infty$.
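To make the primal-dual pair concrete in the most familiar special case, here is a minimal numerical sketch (my illustration, not from the paper), taking $X = Y = \mathbb{R}^2$ with $C_X$, $C_Y$ the non-negative orthants so that $\preceq$ and $\succeq$ are componentwise inequalities; the data and the use of SciPy are assumptions made purely for illustration.

```python
# A minimal sketch of the primal-dual pair above in finite dimensions,
# with C_X and C_Y the non-negative orthants.  Illustrative data only.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([1.0, 1.0])

# Primal d: sup c*x  s.t.  Ax <= b, x >= 0  (linprog minimizes, so negate c).
primal = linprog(-c, A_ub=A, b_ub=b, bounds=(0, None))

# Dual d*: inf y*b  s.t.  y*A >= c*, y* >= 0  (rewrite >= as <= for linprog).
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=(0, None))

val_d, val_d_star = -primal.fun, dual.fun
print(val_d, val_d_star)  # weak duality guarantees val(d) <= val(d*)
```

In this polyhedral, finite-dimensional setting strong duality holds, so the two printed values agree (both are $2.6$ here); the point of Section 2 is that in the general normed-space setting equality can fail.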

It is a simple exercise to verify the weak duality relation $\mathrm{val}(d) \leq \mathrm{val}(d^*)$: if $x$ is feasible for $d$ and $y^*$ is feasible for $d^*$, then $c^* x \leq (y^* A)x = y^*(Ax) \leq y^* b$. However, unlike finite-dimensional polyhedral linear programming, strong duality may be absent in analytic linear programming; it can happen that $\mathrm{val}(d) < \mathrm{val}(d^*)$ even when $d$ and/or $d^*$ is feasible. Conditions guaranteeing that no such "duality gap" occurs are of interest.

Consider the space $\mathcal{D}$ consisting of all instances $d = (A, b, c^*)$; we view the cones $C_X$, $C_Y$ (and hence the orderings) as fixed, independently of $d$. For $d = (A, b, c^*) \in \mathcal{D}$ define the norm of $d$ to be the value
$$\|d\| := \max\{\|A\|, \|b\|, \|c^*\|\}.$$

Let $\mathrm{Pri}_\emptyset$ denote the set of all $d \in \mathcal{D}$ which are (primal) infeasible. Let $\mathrm{Dual}_\emptyset$ denote the set of all $d \in \mathcal{D}$ for which the dual $d^*$ is infeasible. Given $d \in \mathcal{D}$ define
$$\mathrm{dist}(d, \mathrm{Pri}_\emptyset) := \inf\{\|d - \tilde d\| : \tilde d \in \mathrm{Pri}_\emptyset\},$$
the distance from $d$ to the set of primal infeasible LP's. Similarly, define
$$\mathrm{dist}(d, \mathrm{Dual}_\emptyset) := \inf\{\|d - \tilde d\| : \tilde d \in \mathrm{Dual}_\emptyset\}.$$

Let $\mathrm{Feas}(d)$ denote the set of feasible points for $d$, and let $\mathrm{Opt}(d)$ denote the optimal solution set. It can happen that $\mathrm{Opt}(d) = \emptyset$ even when $\mathrm{val}(d)$ is finite and the cones $C_X$ and $C_Y$ are closed; in general, "sup" cannot be replaced by "max" in specifying the objective of $d$.

Theorem 1.1. Assume $X$ is reflexive and $C_X$ and $C_Y$ are closed. Assume $d = (A, b, c^*) \in \mathcal{D}$. If $d$ satisfies $\mathrm{dist}(d, \mathrm{Pri}_\emptyset) > 0$ then statements (1) and (2) are true:

(1) There exists $x \in \mathrm{Feas}(d)$ satisfying
$$\|x\| \leq \frac{\|b\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}.$$

(2) If $x' \in \mathrm{Feas}(d + \Delta d)$ where $\Delta d := (0, \Delta b, 0)$ (i.e., a perturbation of $b$ alone), then there exists $x \in \mathrm{Feas}(d)$ satisfying
$$\|x - x'\| \leq \|\Delta b\| \, \frac{\max\{1, \|x'\|\}}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}.$$

If $d$ satisfies both $\mathrm{dist}(d, \mathrm{Pri}_\emptyset) > 0$ and $\mathrm{dist}(d, \mathrm{Dual}_\emptyset) > 0$ then statements (3), (4) and (5) are true:

(3) $$-\frac{\|b\| \, \|c^*\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)} \;\leq\; \mathrm{val}(d) = \mathrm{val}(d^*) \;\leq\; \frac{\|b\| \, \|c^*\|}{\mathrm{dist}(d, \mathrm{Dual}_\emptyset)}.$$

(4) $\mathrm{Opt}(d) \neq \emptyset$; moreover, if $x \in \mathrm{Opt}(d)$ then
$$\|x\| \leq \frac{\|b\|}{\mathrm{dist}(d, \mathrm{Dual}_\emptyset)} \cdot \frac{\|d\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}.$$

(5) If $\Delta d := (\Delta A, \Delta b, \Delta c^*)$ satisfies both $\|\Delta d\| < \mathrm{dist}(d, \mathrm{Pri}_\emptyset)$ and $\|\Delta d\| < \mathrm{dist}(d, \mathrm{Dual}_\emptyset)$, then
$$|\mathrm{val}(d + \Delta d) - \mathrm{val}(d)| \;\leq\; \|\Delta A\| \, \frac{\|b\| + \|\Delta b\|}{\mathrm{dist}(d, \mathrm{Dual}_\emptyset) - \|\Delta d\|} \cdot \frac{\|d\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset \cup \mathrm{Dual}_\emptyset)} \cdot \frac{\|c^*\| + \|\Delta c^*\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset) - \|\Delta d\|}$$
$$+\; \|\Delta b\| \, \frac{\|c^*\| + \|\Delta c^*\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset) - \|\Delta d\|} \cdot \frac{\|d\|}{\mathrm{dist}(d, \mathrm{Dual}_\emptyset)} \;+\; \|\Delta c^*\| \, \frac{\|b\| + \|\Delta b\|}{\mathrm{dist}(d, \mathrm{Dual}_\emptyset) - \|\Delta d\|} \cdot \frac{\|d\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}.$$

The propositions in Section 3 provide bounds which are more detailed; several of the bounds allow both $X$ and $Y$ to be arbitrary normed vector spaces.

Note that the first-order terms of the quantity on the right of the inequality in assertion (5) of the theorem are bounded above by $2$ times
$$\frac{\|\Delta A\|}{d_1 d_2 d_3} + \frac{\|\Delta b\|}{d_1 d_2} + \frac{\|\Delta c^*\|}{d_1 d_2} \tag{1.1}$$
where
$$d_1 := \frac{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}{\|d\|}, \qquad d_2 := \frac{\mathrm{dist}(d, \mathrm{Dual}_\emptyset)}{\|d\|}, \qquad d_3 := \min\{d_1, d_2\}.$$
This bound depends cubically on the inverses of the relative distances $d_1$ and $d_2$. In Section 5 we show by way of examples that this bound cannot be improved in general; similarly for the other bounds in the theorem. However, the results of Section 3 can be used to obtain better bounds for many special cases; e.g., if $|\mathrm{val}(d)| \leq \|d\|$ then one obtains a bound analogous to (1.1) depending quadratically on the inverses of $d_1$ and $d_2$ rather than cubically.
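To get a feel for the cubic dependence, consider made-up values (my illustration, not from the paper): if $d_1 = d_2 = 10^{-1}$, so that $d_3 = 10^{-1}$ as well, then the first-order bound $2 \times (1.1)$ evaluates to
$$2\left(10^{3}\,\|\Delta A\| + 10^{2}\,\|\Delta b\| + 10^{2}\,\|\Delta c^*\|\right);$$
halving the relative distances multiplies the first-order sensitivity to perturbations of $A$ by $8$, but the sensitivity to perturbations of $b$ and $c^*$ only by $4$.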

The theorem focuses on solution characteristics of $d$ rather than $d^*$; analogous results pertaining to $d^*$ are discussed in Section 3.

Assertions (1) and (2) in the theorem are similar to results one finds in the literature on linear equations, i.e., when the cones $C_X$ and $C_Y$ are subspaces (see [11]). In this restricted context the term "$\max\{1, \|x'\|\}$" occurring in assertion (2) can be replaced simply with "$1$"; in fact, this replacement is valid for arbitrary closed cones $C_X$ and $C_Y$ if $d = (A, b, c^*)$ has the property that
$$\mathrm{dist}((A, tb, c^*), \mathrm{Pri}_\emptyset) \tag{1.2}$$
is independent of $t$ satisfying $0 < t \leq 1$ (as it is if $C_X$ and $C_Y$ are closed subspaces).¹

Assertion (2) of the theorem can easily be extended to allow perturbations in $A$ as well as $b$. If $x' \in \mathrm{Feas}(d + \Delta d)$ where $\Delta d = (\Delta A, \Delta b, 0)$, then $x' \in \mathrm{Feas}(d + \Delta' d)$ where $\Delta' d := (0, \Delta b - (\Delta A)x', 0)$, and hence assertion (2) implies there exists $x \in \mathrm{Feas}(d)$ satisfying
$$\|x - x'\| \leq (\|\Delta b\| + \|\Delta A\| \, \|x'\|) \, \frac{\max\{1, \|x'\|\}}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}.$$

The bounds asserted by the theorem will be useful in developing a complexity theory for linear programming in which problem instance "size" is defined using quantities similar to condition numbers; see Renegar [8] and Vera [12], [13] for work in this direction. Others have studied perturbations of linear programs, but not in terms of the quantities $\mathrm{dist}(d, \mathrm{Pri}_\emptyset)$ and $\mathrm{dist}(d, \mathrm{Dual}_\emptyset)$; cf. Hoffman [4], Mangasarian [6], [7] and Robinson [9].

In the sections that follow we do not assume $X$ or $Y$ is reflexive unless stated. However, we do assume $X$ and $Y$ are indeed normed, as is natural for perturbation theory. Whenever we write "cone" we mean "convex cone with vertex at the origin." We do not assume the cones $C_X$ and $C_Y$ are closed unless stated.

2 Duality Gaps

In this section we prove that if $X$ is reflexive, $C_X$ and $C_Y$ are closed, $\mathrm{dist}(d, \mathrm{Pri}_\emptyset) > 0$ and $\mathrm{dist}(d, \mathrm{Dual}_\emptyset) > 0$, then $\mathrm{val}(d) = \mathrm{val}(d^*)$, i.e., there is no duality gap. We begin with well-known propositions from which the proof follows easily. For completeness we include short proofs of the well-known propositions.

¹ To see this fact, note we may assume that $\|x'\| > 1$. Let $t := 1/\|x'\|$ and observe that $x/\|x'\|$ is feasible for $(A, tb, c^*)$ and $x'/\|x'\|$ is feasible for $(A, t(b + \Delta b), c^*)$. Applying assertion (2) to these "scaled" LP's and using the assumption that (1.2) is independent of $0 < t \leq 1$ gives the desired conclusion.

The exposition throughout the paper allows the reader to skip all proofs yet still follow the main thread of the development.

Fix $A \in L(X, Y)$ and define
$$C(A) := \{b \in Y : \exists\, x \succeq 0 \text{ such that } Ax \preceq b\}.$$
It is easily verified that $C(A)$ is a cone. Let $\overline{C(A)}$ denote the closure of $C(A)$; the closure is also a cone. If $X$ and $Y$ are finite dimensional and if the cones $C_X$ and $C_Y$ which define the orderings are polyhedral, then $C(A)$ is polyhedral and hence $C(A) = \overline{C(A)}$. In other cases the closure may contain additional points.

A system of inequalities
$$Ax \preceq b, \qquad x \succeq 0$$
is said to be asymptotically consistent if $b \in \overline{C(A)}$, i.e., if it can be made consistent by an arbitrarily slight perturbation of $b$.

The following proposition relies only on the local convexity of the normed space $Y$. In the case of finite-dimensional polyhedral linear programming the proposition is Farkas' lemma.

Proposition 2.1 (Duffin [2]). Assume $A \in L(X, Y)$, $b \in Y$. Consider the following two systems:
$$Ax \preceq b, \quad x \succeq 0 \qquad\qquad\qquad y^* A \succeq 0, \quad y^* \succeq 0, \quad y^* b < 0.$$
The first system is asymptotically consistent if and only if the second is inconsistent.

Proof. Let $C(A)^*$ denote the set of all functionals $y^* \in Y^*$ satisfying $y^* \tilde b \geq 0$ for all $\tilde b \in C(A)$. Since in a normed space any closed convex set can be strictly separated from a point not in the set by a continuous linear functional (separation version of the Hahn-Banach Theorem; cf. [10], Theorem 3.4(b)), and since $\overline{C(A)}$ is a convex set (because $C(A)$ is), it easily follows that
$$\overline{C(A)} = \{\tilde b \in Y : y^* \in C(A)^* \Rightarrow y^* \tilde b \geq 0\}. \tag{2.1}$$
Noting that
$$C(A) = \{\tilde b : \exists\, x \succeq 0,\ \hat b \succeq 0 \text{ such that } \tilde b = Ax + \hat b\},$$
and recalling that $\{x : x \succeq 0\}$ and $\{\hat b : \hat b \succeq 0\}$ are cones, we have
$$C(A)^* = \{y^* \in Y^* : (x \succeq 0 \Rightarrow y^* A x \geq 0) \text{ and } (\hat b \succeq 0 \Rightarrow y^* \hat b \geq 0)\},$$
that is,
$$C(A)^* = \{y^* \in Y^* : (y^* A \succeq 0) \text{ and } (y^* \succeq 0)\}. \tag{2.2}$$
The proposition follows immediately from (2.1) and (2.2). □

Shortly, we state a dual analog of the proposition. Before doing so we digress to present a simple technical lemma important for establishing the analog.

Assuming $A \in L(X, Y)$, the system
$$Ax \preceq b \iff b - Ax \in C_Y, \qquad x \succeq 0 \iff x \in C_X \tag{2.3}$$
is naturally associated with a system which we will call its "double-dual extension":
$$b - A^{**} x^{**} \in C_Y^{**}, \qquad x^{**} \in C_X^{**} \tag{2.4}$$
where $x^{**} \in X^{**}$, $C_X^{**}$ is the dual cone of $C_X^*$, $C_Y^{**}$ is the dual cone of $C_Y^*$, and $A^{**}$ is the dual operator of the dual operator of $A$. Viewing $X$ as a subset of $X^{**}$ and $Y$ as a subset of $Y^{**}$, it is trivial to see that $C_X \subseteq C_X^{**}$ and $C_Y \subseteq C_Y^{**}$; also, if $x \in X$ then $A^{**} x = Ax$. Hence, any solution of the original system is also a solution of the double-dual extension. However, the double-dual extension can have solutions which are not solutions of the original system; it can have solutions in $X^{**} \setminus X$. This possibility is a well-known obstruction to the development of a symmetric duality theory for analytic linear programming. To gain symmetry we occasionally impose additional assumptions; ironically, these are not symmetric, being restrictive for $X$ but not for $Y$. The following lemma becomes important.

Lemma 2.2. Assume $X$ is reflexive, $C_X$ and $C_Y$ are closed. Assume $A \in L(X, Y)$, $b \in Y$. Then the solutions of the system (2.3) are identical with those of the double-dual extension (2.4).

Proof. It is a simple exercise using the separation version of the Hahn-Banach Theorem to prove that $C_Y = C_Y^{**} \cap Y$ since $C_Y$ is closed. Similarly, $C_X = C_X^{**} \cap X^{**} = C_X^{**}$, using $X = X^{**}$. Since $Ax = A^{**} x$ for all $x \in X$, the lemma follows. □
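In the finite-dimensional polyhedral case, where asymptotic consistency coincides with consistency (since $C(A)$ is then closed), Proposition 2.1 can be checked numerically. The following sketch is my illustration, with $C_X$, $C_Y$ the non-negative orthants and made-up data; SciPy's linprog is an assumption of the example, not part of the paper.

```python
# Farkas-type alternative of Proposition 2.1, polyhedral case:
# exactly one of the systems {Ax <= b, x >= 0} and
# {y^T A >= 0, y >= 0, y^T b < 0} is consistent.  Illustrative data.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0],
              [-1.0, 0.0]])
b = np.array([1.0, -2.0])   # forces x1 <= 1 and x1 >= 2: primal inconsistent

# System I: is there x >= 0 with Ax <= b?  (feasibility check, zero objective)
sys1 = linprog(np.zeros(2), A_ub=A, b_ub=b, bounds=(0, None))

# System II: is there y >= 0 with A^T y >= 0 and b^T y < 0?  Minimize b^T y
# over the box [0,1]^2 (y may be scaled freely, so the box loses nothing).
sys2 = linprog(b, A_ub=-A.T, b_ub=np.zeros(2), bounds=(0, 1))

print("system I consistent: ", sys1.status == 0)
print("system II consistent:", sys2.status == 0 and sys2.fun < -1e-9)
```

With the data above, system I is inconsistent and system II is consistent (e.g., $y = (1,1)$); replacing $b$ with $(1, 2)$ flips both answers, as the proposition predicts.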

Corollary 2.3. Assume $X$ is reflexive, $C_X$ and $C_Y$ are closed. Assume $A \in L(X, Y)$, $c^* \in X^*$. Consider the following two systems:
$$Ax \preceq 0, \quad x \succeq 0, \quad c^* x > 0 \qquad\qquad\qquad y^* A \succeq c^*, \quad y^* \succeq 0.$$
The first system is inconsistent if and only if the second is asymptotically consistent (meaning that it can be made consistent by an arbitrarily slight perturbation of $c^*$).

Proof. Replacing the first system of Proposition 2.1 with the second system of the corollary, note that the appropriate second system for the proposition will then have the same solutions as the first system of the corollary, by Lemma 2.2. □

Letting $d = (A, b, c^*)$ denote an LP instance, the asymptotic optimal value $\overline{\mathrm{val}}(d)$ is defined to be the supremum of the optimal objective values of LP's obtained from $d$ by perturbing $b$ by an arbitrarily small amount, that is,
$$\overline{\mathrm{val}}(d) := \lim_{\epsilon \downarrow 0} \; \sup_{\tilde b \,:\, \|\tilde b - b\| < \epsilon} \mathrm{val}(\tilde d)$$
where $\tilde d := (A, \tilde b, c^*)$. Of course $\overline{\mathrm{val}}(d) \geq \mathrm{val}(d)$. Recall that $d^*$ is the dual LP for $d$. The following proposition is well known. The condition $\overline{\mathrm{val}}(d) \neq -\infty$ simply means that if $d$ is not itself feasible, then it can be made feasible by an arbitrarily slight perturbation of $b$.

Proposition 2.4. If $\overline{\mathrm{val}}(d) \neq -\infty$ then $\overline{\mathrm{val}}(d) = \mathrm{val}(d^*)$.

Proof. Let $k \in \mathbb{R}$ and consider the following two systems:
$$\text{I)} \qquad Ax \preceq b, \qquad -c^* x \leq -k, \qquad x \succeq 0;$$
$$\text{II)} \qquad y^* A - t c^* \succeq 0, \qquad y^* \succeq 0, \; t \geq 0, \qquad y^* b - t k < 0,$$
where $t \in \mathbb{R}$. Note the following three facts:

i) I is asymptotically consistent iff II is inconsistent (by Proposition 2.1, replacing $Y$ there with $Y \times \mathbb{R}$, the operator $x \mapsto Ax$ with $x \mapsto (Ax, -c^* x)$, etc.);

ii) If $y^*$ is feasible for $d^*$ then $(y^*, 1)$ is feasible for all constraints in II except perhaps $y^* b - t k < 0$;

iii) If $(y^*, 0)$ is feasible for II then $\overline{\mathrm{val}}(d) = -\infty$ (because if $(y^*, 0)$ is feasible for II, then $y^*$ is feasible for the second system of Proposition 2.1, and hence that proposition implies the system $Ax \preceq b$, $x \succeq 0$ is not asymptotically consistent).

If $k \leq \overline{\mathrm{val}}(d)$ then I is asymptotically consistent and hence, by (i), II is inconsistent. Thus, by (ii), if $y^*$ is feasible for $d^*$ then $y^* b \geq k$. It follows that $\overline{\mathrm{val}}(d) \leq \mathrm{val}(d^*)$.

If $-\infty < \overline{\mathrm{val}}(d) < k$ then I is not asymptotically consistent and hence, by (i), II is consistent. Assume $(y^*, t)$ is feasible for II. By (iii), $t \neq 0$. Hence $y^*/t$ is feasible for $d^*$ and satisfies $(y^*/t) b < k$. It follows that $\mathrm{val}(d^*) \leq \overline{\mathrm{val}}(d)$. □

Continuing to let $d = (A, b, c^*)$ denote an LP instance, and $d^*$ its dual, define
$$\underline{\mathrm{val}}(d^*) := \lim_{\epsilon \downarrow 0} \; \inf_{\tilde c^* \,:\, \|\tilde c^* - c^*\| < \epsilon} \mathrm{val}(\tilde d^*)$$
where $\tilde d^*$ is the dual of $\tilde d := (A, b, \tilde c^*)$.

Proposition 2.5. Assume $X$ is reflexive, $C_X$ and $C_Y$ are closed. If $\underline{\mathrm{val}}(d^*) \neq \infty$ then $\underline{\mathrm{val}}(d^*) = \mathrm{val}(d)$.

Proof. The proof is analogous to that of Proposition 2.4, using Corollary 2.3 rather than Proposition 2.1, and noting that $X \times \mathbb{R}$ is reflexive if $X$ is. □

Proposition 2.6. Assume $X$ is reflexive, $C_X$ and $C_Y$ are closed. If $\mathrm{dist}(d, \mathrm{Pri}_\emptyset) > 0$ and $\mathrm{dist}(d, \mathrm{Dual}_\emptyset) > 0$ then $\mathrm{val}(d) = \mathrm{val}(d^*)$.

Proof. We know $\mathrm{val}(d) \leq \mathrm{val}(d^*)$ by weak duality. It thus suffices to prove that if $d$ is any LP instance satisfying
$$-\infty < \mathrm{val}(d) < \mathrm{val}(d^*) < \infty \tag{2.5}$$
then either $\mathrm{dist}(d, \mathrm{Pri}_\emptyset) = 0$ or $\mathrm{dist}(d, \mathrm{Dual}_\emptyset) = 0$.

So assume $d = (A, b, c^*)$ satisfies (2.5). Proposition 2.4 implies there exists a sequence of pairs $\{(x_i, b_i)\}$ such that
$$A x_i \preceq b_i, \qquad x_i \succeq 0,$$
$\|b_i - b\| \to 0$ and $c^* x_i \to \mathrm{val}(d^*)$. Similarly, Proposition 2.5 implies there exists a sequence of pairs $\{(y_i^*, c_i^*)\}$ such that
$$y_i^* A \succeq c_i^*, \qquad y_i^* \succeq 0,$$
$\|c_i^* - c^*\| \to 0$ and $y_i^* b \to \mathrm{val}(d)$.

We claim that either $\|x_i\| \to \infty$ or $\|y_i^*\| \to \infty$. For assume otherwise. Then, restricting to a subsequence if necessary,
$$\lim c_i^* x_i = \lim c^* x_i = \mathrm{val}(d^*), \tag{2.6}$$
$$\lim y_i^* b_i = \lim y_i^* b = \mathrm{val}(d). \tag{2.7}$$
Consider the LP instances $d_i := (A, b_i, c_i^*)$. Noting that $x_i$ is feasible for $d_i$ and $y_i^*$ is feasible for $d_i^*$, we have by weak duality
$$c_i^* x_i \leq \mathrm{val}(d_i) \leq \mathrm{val}(d_i^*) \leq y_i^* b_i. \tag{2.8}$$
From (2.6), (2.7) and (2.8) we find that $\mathrm{val}(d^*) \leq \mathrm{val}(d)$, contradicting (2.5).

Assuming $\|x_i\| \to \infty$, $x_i \neq 0$, we now prove that $\mathrm{dist}(d, \mathrm{Dual}_\emptyset) = 0$. In doing so we re-use notation. By the extension version of the Hahn-Banach Theorem there exists $c_i^* \in X^*$ such that $\|c_i^*\| = 1$ and $c_i^* x_i = \|x_i\|$. Let $\epsilon > 0$ and consider the LP instances
$$d_i := \left( A - \frac{1}{\|x_i\|}\, b_i c_i^*, \;\; 0, \;\; c^* - \frac{c^* x_i}{\|x_i\|}\, c_i^* + \epsilon\, c_i^* \right).$$
Note that $x_i$ is feasible for $d_i$; in fact, $t x_i$ is feasible for $d_i$ for all $t \geq 0$. Since the objective value of $d_i$ at $x_i$ is positive, it follows that $\mathrm{val}(d_i) = \infty$. Hence $d_i \in \mathrm{Dual}_\emptyset$ by weak duality.

Let $d_i'$ be obtained from $d_i$ by replacing the RHS vector $0$ with $b$. Then $d_i' \in \mathrm{Dual}_\emptyset$ since $d_i \in \mathrm{Dual}_\emptyset$. Noting that $\limsup \|d_i' - d\| \leq \epsilon$ (since $\|x_i\| \to \infty$ and $c^* x_i \to \mathrm{val}(d^*) < \infty$) and that $\epsilon > 0$ was arbitrary, we finally have $\mathrm{dist}(d, \mathrm{Dual}_\emptyset) = 0$.

In similar fashion one proves that if $\|y_i^*\| \to \infty$ then $\mathrm{dist}(d, \mathrm{Pri}_\emptyset) = 0$. □

3 Bounds

We prove various bounds depending on the quantities $\mathrm{dist}(d, \mathrm{Pri}_\emptyset)$ and $\mathrm{dist}(d, \mathrm{Dual}_\emptyset)$. For most propositions presented in this section a dual analog is also presented; of the two, we first present the one with fewest assumptions.

We always assume $X$ and $Y$ are normed spaces, $C_X \subseteq X$ and $C_Y \subseteq Y$ are cones, $A \in L(X, Y)$, $b \in Y$, $c^* \in X^*$. No other assumptions are made unless stated.

Given an LP $d = (A, b, c^*)$, we continue to let $\mathrm{Feas}(d)$ refer to the set of feasible points for $d$ and $\mathrm{Opt}(d)$ to the set of optimal points; similarly we have $\mathrm{Feas}(d^*)$ and $\mathrm{Opt}(d^*)$ with respect to the dual $d^*$. In general one can have $\mathrm{Opt}(d) = \emptyset$ and $-\infty < \mathrm{val}(d) < \infty$, i.e., the supremum, although finite, is not attained. Similarly, one can have $\mathrm{Opt}(d^*) = \emptyset$ and $-\infty < \mathrm{val}(d^*) < \infty$.

Lemma 3.1. Assume $d = (A, b, c^*)$ satisfies $\mathrm{dist}(d, \mathrm{Pri}_\emptyset) > 0$. If $y^* \in \mathrm{Feas}(d^*)$ then
$$\|y^*\| \leq \frac{\max\{\|c^*\|, y^* b\}}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}.$$

Proof. Let $\delta \in \mathbb{R}$ be such that $0 \leq \delta < 1$ and there exists $\tilde b \in Y$ satisfying $y^* \tilde b = (1 - \delta)\|y^*\|$, $\|\tilde b\| = 1$; note that $\delta$ can be chosen arbitrarily near zero if not zero itself. Let $\delta' \in \mathbb{R}$ satisfy $\delta' > 0$. Consider the LP
$$d + \Delta d := (A + \Delta A, b + \Delta b, c^*)$$
where
$$\Delta A := -\frac{1}{(1-\delta)\|y^*\|}\, \tilde b c^*, \qquad \Delta b := -\frac{\max\{0, y^* b\} + \delta'}{(1-\delta)\|y^*\|}\, \tilde b.$$
Note that since $y^* \in \mathrm{Feas}(d^*)$,
$$y^*(A + \Delta A) \succeq 0, \qquad y^* \succeq 0, \qquad y^*(b + \Delta b) < 0.$$
Proposition 2.1 thus implies $d + \Delta d \in \mathrm{Pri}_\emptyset$. Since
$$\|\Delta d\| \leq \frac{\max\{\|c^*\|, y^* b + \delta'\}}{(1-\delta)\|y^*\|}$$
and since $\delta$ can be chosen arbitrarily near zero, the lemma follows by letting $\delta' \downarrow 0$. □

Lemma 3.2. Assume $X$ is reflexive, $C_X$ and $C_Y$ are closed. Assume $d = (A, b, c^*)$ satisfies $\mathrm{dist}(d, \mathrm{Dual}_\emptyset) > 0$. If $x \in \mathrm{Feas}(d)$ then
$$\|x\| \leq \frac{\max\{\|b\|, -c^* x\}}{\mathrm{dist}(d, \mathrm{Dual}_\emptyset)}.$$

Proof. Analogous to the proof of Lemma 3.1, using Corollary 2.3; begin by noting that the extension form of the Hahn-Banach Theorem implies there exists $\tilde c^* \in X^*$ satisfying $\tilde c^* x = \|x\|$ and $\|\tilde c^*\| = 1$. □

Proposition 3.3. Assume $d = (A, b, c^*)$ satisfies $\mathrm{dist}(d, \mathrm{Pri}_\emptyset) > 0$. If $\{y_i^*\} \subseteq \mathrm{Feas}(d^*)$ satisfies $y_i^* b \to \mathrm{val}(d^*)$, then
$$\limsup \|y_i^*\| \leq \frac{\max\{\|c^*\|, \mathrm{val}(d^*)\}}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}. \tag{3.1}$$

Proof. Immediate from Lemma 3.1. □

Proposition 3.4. Assume $Y$ is reflexive. Assume $d = (A, b, c^*)$ satisfies $\mathrm{dist}(d, \mathrm{Pri}_\emptyset) > 0$ and $\mathrm{Feas}(d^*) \neq \emptyset$. Then $\mathrm{Opt}(d^*) \neq \emptyset$; moreover, if $y^* \in \mathrm{Opt}(d^*)$ then
$$\|y^*\| \leq \frac{\max\{\|c^*\|, \mathrm{val}(d^*)\}}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}.$$

Proof. Proposition 3.3 shows that an additional constraint of the form $\|y^*\| \leq R$ can be added to $d^*$ without changing the optimal value. The resulting feasible region is bounded, closed and convex. Since $Y^*$ is reflexive (equivalent to $Y$ being reflexive), the Banach-Alaoglu Theorem (cf. [10], Theorem 3.15) implies the optimal value is attained; thus $\mathrm{Opt}(d^*) \neq \emptyset$. The final bound is immediate from Proposition 3.3. □

Proposition 3.5. Assume $X$ is reflexive, $C_X$ and $C_Y$ are closed. Assume $d = (A, b, c^*)$ satisfies $\mathrm{dist}(d, \mathrm{Dual}_\emptyset) > 0$ and $\mathrm{Feas}(d) \neq \emptyset$. Then $\mathrm{Opt}(d) \neq \emptyset$; moreover, if $x \in \mathrm{Opt}(d)$ then
$$\|x\| \leq \frac{\max\{\|b\|, -\mathrm{val}(d)\}}{\mathrm{dist}(d, \mathrm{Dual}_\emptyset)}.$$

Proof. Analogous to those of Propositions 3.3 and 3.4, using Lemma 3.2. □

Proposition 3.6. Assume $d = (A, b, c^*)$ satisfies $\mathrm{dist}(d, \mathrm{Pri}_\emptyset) > 0$. Then
$$-\frac{\|b\| \, \|c^*\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)} \leq \mathrm{val}(d^*).$$

Proof. We may assume $\mathrm{val}(d^*) < 0$. Letting $\{y_i^*\} \subseteq \mathrm{Feas}(d^*)$ satisfy $y_i^* b \to \mathrm{val}(d^*)$, Proposition 3.3 implies
$$-\frac{\|b\| \, \|c^*\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)} \leq -\|b\| \limsup \|y_i^*\| \leq \mathrm{val}(d^*). \;\; \Box$$

Proposition 3.7. Assume $X$ is reflexive, $C_X$ and $C_Y$ are closed. Assume $d = (A, b, c^*)$ satisfies $\mathrm{dist}(d, \mathrm{Dual}_\emptyset) > 0$. Then
$$\mathrm{val}(d) \leq \frac{\|b\| \, \|c^*\|}{\mathrm{dist}(d, \mathrm{Dual}_\emptyset)}.$$

Proof. Analogous to the proof of Proposition 3.6, using Proposition 3.5. □

Proposition 3.8. Assume $X$ is reflexive, $C_X$ and $C_Y$ are closed. Assume $d = (A, b, c^*)$ satisfies $\mathrm{dist}(d, \mathrm{Pri}_\emptyset) > 0$ and $\mathrm{dist}(d, \mathrm{Dual}_\emptyset) > 0$. Then
$$-\frac{\|b\| \, \|c^*\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)} \leq \mathrm{val}(d) = \mathrm{val}(d^*) \leq \frac{\|b\| \, \|c^*\|}{\mathrm{dist}(d, \mathrm{Dual}_\emptyset)}.$$

Proof. Combine Propositions 2.6, 3.6 and 3.7. □

Lemma 3.9. Assume $d = (A, b, c^*)$ satisfies $\mathrm{dist}(d, \mathrm{Pri}_\emptyset) > 0$ and $\mathrm{Feas}(d^*) \neq \emptyset$. Assume $\Delta d := (0, \Delta b, 0)$. Then
$$\mathrm{val}([d + \Delta d]^*) - \mathrm{val}(d^*) \leq \|\Delta b\| \, \frac{\max\{\|c^*\|, \mathrm{val}(d^*)\}}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}.$$

Proof. Assume $\{y_i^*\} \subseteq \mathrm{Feas}(d^*)$ satisfies $y_i^* b \to \mathrm{val}(d^*)$. Note that $\{y_i^*\} \subseteq \mathrm{Feas}([d + \Delta d]^*)$. Hence,
$$\mathrm{val}([d + \Delta d]^*) \leq \liminf y_i^*(b + \Delta b) = \mathrm{val}(d^*) + \liminf y_i^*(\Delta b) \leq \mathrm{val}(d^*) + \|\Delta b\| \limsup \|y_i^*\|.$$
Substituting the bound of Proposition 3.3 completes the proof. □

A dual analog of Lemma 3.9 is obtained by an analogous proof using Proposition 3.5. We leave this to the reader.

Proposition 3.10. Assume $X$ is reflexive, $C_X$ and $C_Y$ are closed. Assume $d = (A, b, c^*)$ satisfies $\mathrm{dist}(d, \mathrm{Pri}_\emptyset) > 0$ and $\mathrm{dist}(d, \mathrm{Dual}_\emptyset) > 0$. Assume $\Delta d := (\Delta A, \Delta b, \Delta c^*)$ satisfies $\mathrm{dist}(d + \Delta d, \mathrm{Pri}_\emptyset) > 0$ and $\mathrm{dist}(d + \Delta d, \mathrm{Dual}_\emptyset) > 0$. Then
$$\mathrm{val}(d + \Delta d) - \mathrm{val}(d) \leq \|\Delta A\| \left[ \frac{\max\{\|c^*\|, \mathrm{val}(d)\}}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)} \right] \left[ \frac{\max\{\|b + \Delta b\|, -\mathrm{val}(d + \Delta d)\}}{\mathrm{dist}(d + \Delta d, \mathrm{Dual}_\emptyset)} \right]$$
$$+\; \|\Delta b\| \left[ \frac{\max\{\|c^*\|, \mathrm{val}(d)\}}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)} \right] + \|\Delta c^*\| \left[ \frac{\max\{\|b + \Delta b\|, -\mathrm{val}(d + \Delta d)\}}{\mathrm{dist}(d + \Delta d, \mathrm{Dual}_\emptyset)} \right].$$

An analogous lower bound on $\mathrm{val}(d + \Delta d) - \mathrm{val}(d)$ is obtained by interchanging the roles of $d$ and $d + \Delta d$.

Remarks. The value $-\mathrm{val}(d + \Delta d)$ occurring on the right side of the inequality can be replaced with $-\mathrm{val}(d)$; this follows immediately from the inequality by considering the two cases $\mathrm{val}(d + \Delta d) \leq \mathrm{val}(d)$ and $\mathrm{val}(d) \leq \mathrm{val}(d + \Delta d)$. Similarly, the value $\mathrm{val}(d + \Delta d)$ appearing in the analogous lower bound can be replaced with $\mathrm{val}(d)$.

Proof. Proposition 3.5 implies $d + \Delta d$ has an optimal solution $x$. Let $\Delta' d := (0, \Delta' b, 0)$ where $\Delta' b := \Delta b - (\Delta A)x$. Note that $x$ is feasible for $d + \Delta' d$ and hence $c^* x \leq \mathrm{val}(d + \Delta' d)$. Thus,
$$\mathrm{val}(d + \Delta d) - \mathrm{val}(d + \Delta' d) \leq (\Delta c^*) x \leq \|\Delta c^*\| \, \|x\|,$$
and hence
$$\mathrm{val}(d + \Delta d) - \mathrm{val}(d) \leq \|\Delta c^*\| \, \|x\| + [\mathrm{val}(d + \Delta' d) - \mathrm{val}(d)].$$
Noting that $\|\Delta' b\| \leq \|\Delta b\| + \|\Delta A\| \, \|x\|$, the proof is now easily completed by substituting the bounds of Proposition 3.5 and Lemma 3.9, using the facts that $\mathrm{val}(d) = \mathrm{val}(d^*)$ by Proposition 2.6 and $\mathrm{val}(d + \Delta' d) \leq \mathrm{val}([d + \Delta' d]^*)$ by weak duality. □

The following two propositions regard perturbations of the feasible region; the objective functional $c^*$ plays no role. For consistency we retain the same notation, although $c^*$ is irrelevant to the results.

Proposition 3.11. Assume $X$ is reflexive, $C_X$ and $C_Y$ are closed. Assume $d = (A, b, c^*)$ satisfies $\mathrm{dist}(d, \mathrm{Pri}_\emptyset) > 0$. Then there exists $x \in \mathrm{Feas}(d)$ satisfying
$$\|x\| \leq \frac{\|b\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}.$$

Proposition 3.12. Assume $X$ is reflexive, $C_X$ and $C_Y$ are closed. Assume $d = (A, b, c^*)$ satisfies $\mathrm{dist}(d, \mathrm{Pri}_\emptyset) > 0$. Assume $x' \in \mathrm{Feas}(d + \Delta d)$ where $\Delta d := (0, \Delta b, 0)$. Then there exists $x \in \mathrm{Feas}(d)$ satisfying
$$\|x - x'\| \leq \|\Delta b\| \, \frac{\max\{1, \|x'\|\}}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}.$$

The two propositions are proven via the following lemma.

Lemma 3.13. Assume $X$ is reflexive, $C_X$ and $C_Y$ are closed, $x' \in X$. Assume $d = (A, b, c^*)$ satisfies $\overline{\mathrm{val}}(d) \neq -\infty$ and
$$\mathrm{Feas}(d) \cap \{x : \|x - x'\| \leq r\} = \emptyset. \tag{3.2}$$
Then there exists $\tilde c^* \in X^*$ such that the dual $\tilde d^*$ of $\tilde d := (A, b, \tilde c^*)$ satisfies
$$\mathrm{val}(\tilde d^*) < \tilde c^* x' - \|\tilde c^*\| r. \tag{3.3}$$

Proof. There exists $r' > r$ such that (3.2) remains valid if $r'$ is substituted for $r$. For otherwise there would exist a sequence $\{x_i\} \subseteq \mathrm{Feas}(d)$ such that $\|x_i - x'\| \downarrow r$; the sequence would have a weakly convergent subsequence by the Eberlein-Shmulyan Theorem (cf. [14]); because the weak closure of a convex set in a normed space is identical to the strong closure (by the Hahn-Banach Theorem), it is easily argued that the weak limit $x$ of the subsequence would satisfy $\|x - x'\| \leq r$, $x \in C_X$ and $b - Ax \in C_Y$, contradicting (3.2).

There exists $\epsilon' > 0$ such that
$$\|b' - b\| \leq \epsilon' \;\Rightarrow\; \mathrm{Feas}(d') \cap \{x : \|x - x'\| \leq r'\} = \emptyset,$$
where $d' := (A, b', c^*)$. For otherwise there would exist a sequence of pairs $\{(x_i, b_i)\}$ such that $\|b_i - b\| \downarrow 0$, $\|x_i - x'\| \leq r'$ and $x_i \in \mathrm{Feas}(d_i)$ where $d_i = (A, b_i, c^*)$; the sequence $\{x_i\}$ would have a weakly convergent subsequence by the Eberlein-Shmulyan Theorem; the weak limit $x$ of the subsequence would easily be argued to satisfy both $x \in \mathrm{Feas}(d)$ and $\|x - x'\| \leq r'$, contradicting the definition of $r'$.

Let $K$ denote the closed, convex set consisting of all points of distance at most $r$ from the convex set
$$\bigcup_{b' \,:\, \|b' - b\| \leq \epsilon'} \mathrm{Feas}(d').$$
By definition of $\epsilon'$, $x' \notin K$; thus, by the Hahn-Banach Theorem there exists $\tilde c^* \in X^*$ such that
$$\sup\{\tilde c^* x : x \in K\} < \tilde c^* x'$$
and hence
$$\|b' - b\| \leq \epsilon' \;\Rightarrow\; \sup\{\tilde c^* x : x \in \mathrm{Feas}(d')\} < \tilde c^* x' - \|\tilde c^*\| r.$$
Letting $\tilde d := (A, b, \tilde c^*)$, it follows that the asymptotic optimal value $\overline{\mathrm{val}}(\tilde d)$ satisfies
$$\overline{\mathrm{val}}(\tilde d) < \tilde c^* x' - \|\tilde c^*\| r.$$
Hence, by Proposition 2.4 and the fact that $\overline{\mathrm{val}}(\tilde d) \neq -\infty$ (because $\overline{\mathrm{val}}(d) \neq -\infty$), (3.3) is valid. □

Proof of Proposition 3.11. Assume the conclusion is not true, i.e., assume
$$\mathrm{Feas}(d) \cap \{x : \|x - x'\| \leq r\} = \emptyset$$
where
$$x' := 0 \quad \text{and} \quad r := \frac{\|b\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}.$$
Then $d$, $x'$ and $r$ satisfy the assumptions of Lemma 3.13. Let $\tilde d$ be as in the conclusion of that lemma; thus
$$\mathrm{val}(\tilde d^*) < \tilde c^* x' - \|\tilde c^*\| r = -\frac{\|b\| \, \|\tilde c^*\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}.$$
However, this contradicts Proposition 3.6 applied to $\tilde d$, since $\mathrm{dist}(\tilde d, \mathrm{Pri}_\emptyset) = \mathrm{dist}(d, \mathrm{Pri}_\emptyset)$. □

Proof of Proposition 3.12. Assume the conclusion is not true, i.e., assume
$$\mathrm{Feas}(d) \cap \{x : \|x - x'\| \leq r\} = \emptyset$$
where
$$r := \|\Delta b\| \, \frac{\max\{1, \|x'\|\}}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}.$$
Then $d$, $x'$ and $r$ satisfy the assumptions of Lemma 3.13. Let $\tilde d$ be as in the conclusion of that lemma; thus
$$\mathrm{val}(\tilde d^*) < \tilde c^* x' - \|\tilde c^*\| r = \tilde c^* x' - \|\Delta b\| \, \|\tilde c^*\| \, \frac{\max\{1, \|x'\|\}}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}. \tag{3.4}$$
Note that $x' \in \mathrm{Feas}(\tilde d + \Delta d)$, implying, by weak duality,
$$\tilde c^* x' \leq \mathrm{val}([\tilde d + \Delta d]^*). \tag{3.5}$$
Combining (3.4) and (3.5),
$$\mathrm{val}([\tilde d + \Delta d]^*) - \mathrm{val}(\tilde d^*) > \|\Delta b\| \, \|\tilde c^*\| \, \frac{\max\{1, \|x'\|\}}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}. \tag{3.6}$$
Since $\mathrm{dist}(\tilde d, \mathrm{Pri}_\emptyset) = \mathrm{dist}(d, \mathrm{Pri}_\emptyset) > 0$, (3.6) and Lemma 3.9 yield
$$\|\tilde c^*\| \max\{1, \|x'\|\} < \max\{\|\tilde c^*\|, \mathrm{val}(\tilde d^*)\}.$$
This gives a contradiction, since $\mathrm{val}(\tilde d^*) \leq \|\tilde c^*\| \, \|x'\|$ by (3.4). □

4 Proof of Theorem 1.1

We now collect our results to prove the theorem. As mentioned in the introduction, the results of Section 3 provide, for many special cases, better bounds than those asserted by the theorem.

Proposition 3.11 establishes (1) of the theorem, Proposition 3.12 establishes (2), and Proposition 3.8 establishes (3).

Proposition 3.5, together with the lower bound provided by (3) of the theorem, shows that in the setting of (4) we have $\mathrm{Opt}(d) \neq \emptyset$ and
$$x \in \mathrm{Opt}(d) \;\Rightarrow\; \|x\| \leq \frac{\|b\|}{\mathrm{dist}(d, \mathrm{Dual}_\emptyset)} \max\left\{1, \frac{\|c^*\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}\right\}.$$
Since $\|c^*\| \leq \|d\|$, to establish (4) it thus suffices to show that, under the assumptions of (4), at least one of the following two relations is true:
$$\frac{\|d\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)} \geq 1, \tag{4.1}$$
$$\mathrm{Opt}(d) = \{0\}. \tag{4.2}$$
Assume (4.1) is not true. Then $\mathrm{dist}(0, \mathrm{Pri}_\emptyset)$ (i.e., the distance from $\mathrm{Pri}_\emptyset$ to the identically zero LP) satisfies $\mathrm{dist}(0, \mathrm{Pri}_\emptyset) > 0$ and hence, since $\mathrm{Pri}_\emptyset$ is closed under multiplication by positive scalars, $\mathrm{Pri}_\emptyset = \emptyset$. Note that $\mathrm{Pri}_\emptyset = \emptyset$ implies $\{\tilde b \in Y : \tilde b \succeq 0\} = Y$; otherwise the RHS of the identically zero LP could be perturbed to obtain an infeasible LP, a contradiction. Hence, $\mathrm{Pri}_\emptyset = \emptyset$ implies that the feasible region of every LP is precisely the closed cone $\{x : x \succeq 0\}$. Consequently, if (4.1) and (4.2) are not true, then a slight perturbation of $c^*$ in $d = (A, b, c^*)$ creates an LP with unbounded optimal value, contradicting $\mathrm{dist}(d, \mathrm{Dual}_\emptyset) > 0$, as is assumed in (4) of the theorem. In all, either (4.1) or (4.2) is true, concluding the proof of (4).

Towards proving (5), first note we may assume that
$$\frac{\|d\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset \cup \mathrm{Dual}_\emptyset)} \geq 1. \tag{4.3}$$
Otherwise, by arguments analogous to the preceding, we deduce that $\{\tilde c^* \in X^* : \tilde c^* \succeq 0\} = X^*$ (hence $\{x \in X : x \succeq 0\} = \{0\}$) and $\{\tilde b \in Y : \tilde b \succeq 0\} = Y$. Consequently $0$ is an optimal solution for all LP's; then (5) follows trivially. So we may assume (4.3).

Assertion (5) is established with (3), Proposition 3.10 and tedious consideration of several cases. For example, consider the case
$$|\mathrm{val}(d + \Delta d) - \mathrm{val}(d)| = \mathrm{val}(d + \Delta d) - \mathrm{val}(d), \tag{4.4}$$
$$\mathrm{val}(d + \Delta d) < 0. \tag{4.5}$$
Then the factor for $\|\Delta A\|$ in the bound of Proposition 3.10 is
$$\left[ \frac{\|c^*\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)} \right] \left[ \frac{\max\{\|b + \Delta b\|, -\mathrm{val}(d + \Delta d)\}}{\mathrm{dist}(d + \Delta d, \mathrm{Dual}_\emptyset)} \right]. \tag{4.6}$$
The crucial point is that at most one of the two quantities $\mathrm{val}(d)$ and $\mathrm{val}(d + \Delta d)$ appears in this expression; the same is true in all cases. As noted in the remark following the statement of Proposition 3.10, the value $-\mathrm{val}(d + \Delta d)$ in (4.6) can be replaced with $-\mathrm{val}(d)$; in turn, $-\mathrm{val}(d)$ can be replaced with the upper bound $\|b\| \, \|c^*\| / \mathrm{dist}(d, \mathrm{Pri}_\emptyset)$ (provided by statement (3) of the theorem), which clearly does not exceed
$$\frac{(\|b\| + \|\Delta b\|) \, \|d\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset \cup \mathrm{Dual}_\emptyset)}. \tag{4.7}$$
Substituting (4.7) for $-\mathrm{val}(d + \Delta d)$ in (4.6) and using (4.3), one arrives at a quantity which is obviously bounded from above by the factor for $\|\Delta A\|$ in (5). Similarly, one argues that the other factors are correct in the case that (4.4) and (4.5) are valid. The other cases are handled with equally tedious and obvious arguments. □

5 Examples

In this section we display simple examples indicating that the bounds of Theorem 1.1 cannot be improved in general without relying on additional parameters. The examples form a family of two-variable LP's depending on parameters $s$ and $t$:
$$\max \; x_1 \quad \text{s.t.} \quad (st)x_1 + x_2 \leq s, \quad t x_1 + x_2 \leq 1, \quad x_1, x_2 \geq 0.$$
We use the notation $d(s,t) = (A(s,t), b(s,t), c^*(s,t))$ when referring to the family. We assume $0 < s \leq 1$ and $0 < t \leq 1$.

Endow the domain $X = \mathbb{R}^2$ with the $\ell_1$-norm and endow the range $Y = \mathbb{R}^2$ with the $\ell_\infty$-norm. So $X^* = \mathbb{R}^2$ is given the $\ell_\infty$-norm and $Y^* = \mathbb{R}^2$ is given the $\ell_1$-norm.

Lemma 5.1.
$$\|A(s,t)\| = \|b(s,t)\| = \|c^*(s,t)\| = 1,$$
$$\mathrm{dist}(d(s,t), \mathrm{Pri}_\emptyset) = s, \qquad \mathrm{dist}(d(s,t), \mathrm{Dual}_\emptyset) = t.$$

Proof. A straightforward exercise left to the reader. (For the norms, note that the operator norm of a matrix mapping $\ell_1$ to $\ell_\infty$ is its largest absolute entry, and every entry of $A(s,t)$, $b(s,t)$ and $c^*(s,t)$ lies in $[0,1]$, with at least one entry equal to $1$.) □

Lemma 5.1 elucidates the roles of the parameters $s$ and $t$; they allow us to choose the values
$$\frac{\|d\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}, \qquad \frac{\|d\|}{\mathrm{dist}(d, \mathrm{Dual}_\emptyset)}$$
independently between $1$ and $\infty$. In light of this, the following lemma indicates the "optimality" of several of the bounds of Theorem 1.1 or their dual versions.

Lemma 5.2.

(1) If $y^*$ is feasible for the dual $d(s,t)^*$ then $\|y^*\| \geq \frac{1}{t}$.

(2) Let $d(r,s,t)$ denote the LP obtained from $d(s,t)$ by replacing the first coefficient of $b(s,t)$ with $s - r$. Letting $x' = (\frac{1}{t}, 0)$, which is feasible for $d(s,t)$, and assuming $r \geq 0$, all feasible points $x$ for $d(r,s,t)$ satisfy
$$\|x - x'\| \geq \frac{r}{st}.$$

(3) $\mathrm{val}(d(s,t)) = \frac{1}{t}$.

(4) There exists $y^* \in \mathrm{Opt}(d^*(s,t))$ such that $\|y^*\| = \frac{1}{st}$ (namely, $y^* = (\frac{1}{st}, 0)$).

(5a) If $\tilde d(r,s,t)$ is the LP obtained from $d(s,t)$ by replacing the $(1,1)$ coefficient of $A(s,t)$ with $st + r$, then
$$\lim_{r \downarrow 0} \frac{|\mathrm{val}(\tilde d(r,s,t)) - \mathrm{val}(d(s,t))|}{\|\tilde A(r,s,t) - A(s,t)\|} = \frac{1}{st^2}.$$

(5b) If $d(r,s,t)$ is as defined in (2), then
$$\lim_{r \downarrow 0} \frac{|\mathrm{val}(d(r,s,t)) - \mathrm{val}(d(s,t))|}{\|b(r,s,t) - b(s,t)\|} = \frac{1}{st}.$$

Proof. A straightforward exercise left to the reader. □

The "optimality" of the bounds of Theorem 1.1 not addressed by Lemma 5.2, and the "optimality" of the remaining dual-version bounds, are verified by considering the dual of $d(s,t)$ as the primal, multiplying the objective by $-1$ to obtain a maximization problem. The reader might be confused as to why Lemma 5.2(2) indicates the optimality of Theorem 1.1(2); in the notation of the theorem, let $d = d(r,s,t)$, $d + \Delta d = d(s,t)$, and consider $r \downarrow 0$, so that $\mathrm{dist}(d, \mathrm{Pri}_\emptyset) \to s$. Several parts of Lemma 5.2 can also be checked numerically; see the sketch that follows.
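As a sanity check, the following sketch (my illustration; SciPy and the sample parameter values are assumptions, not part of the paper) numerically confirms parts (3), (4) and (5b) of Lemma 5.2:

```python
# Numerical check of Lemma 5.2 (3), (4), (5b) for the family d(s,t).
# Sample parameters; any 0 < s, t <= 1 should behave the same way.
import numpy as np
from scipy.optimize import linprog

s, t, r = 0.5, 0.25, 1e-6

A = np.array([[s * t, 1.0],
              [t, 1.0]])
b = np.array([s, 1.0])
c = np.array([1.0, 0.0])

def val(rhs):
    """val of (A, rhs, c*): max x1 s.t. A x <= rhs, x >= 0."""
    return -linprog(-c, A_ub=A, b_ub=rhs, bounds=(0, None)).fun

# (3): val(d(s,t)) = 1/t.
print(val(b), 1 / t)

# (4): y* = (1/(st), 0) is dual feasible, attains the optimal value 1/t,
# and has ||y*||_1 = 1/(st).
y_star = np.array([1 / (s * t), 0.0])
assert np.all(A.T @ y_star >= c - 1e-12)  # dual feasibility: A^T y* >= c*
print(y_star @ b, np.abs(y_star).sum())   # objective 1/t, norm 1/(st)

# (5b): perturbing the first coefficient of b by r changes val at rate 1/(st).
b_r = np.array([s - r, 1.0])
print((val(b) - val(b_r)) / r, 1 / (s * t))
```

With $s = 0.5$, $t = 0.25$ the printed values are $4$ (i.e., $1/t$); then $4$ and $8$ (i.e., $1/t$ and $1/(st)$); and finally two numbers agreeing with $1/(st) = 8$ up to solver tolerance, exhibiting the tightness of the corresponding bounds (e.g., the norm of $y^*$ attains the bound of Proposition 3.4: $\max\{\|c^*\|, \mathrm{val}(d^*)\}/\mathrm{dist}(d, \mathrm{Pri}_\emptyset) = (1/t)/s$).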

References

[1] E.J. Anderson and P. Nash, Linear Programming in Infinite-Dimensional Spaces: Theory and Applications, Wiley, Chichester, 1987.

[2] R.J. Duffin, "Infinite programs," in Linear Inequalities and Related Systems, H.W. Kuhn and A.W. Tucker, eds., Princeton University Press, Princeton, 1956, 157-170.

[3] A.V. Fiacco and K.O. Kortanek, eds., Semi-Infinite Programming and Applications, Lecture Notes in Economics and Mathematical Systems 215, Springer-Verlag, New York, 1983.

[4] A.J. Hoffman, "On approximate solutions of systems of linear inequalities," Journal of Research of the National Bureau of Standards 49, 1952, 263-265.

[5] C. Kallina and A.C. Williams, "Linear programming in reflexive spaces," SIAM Review 13, 1971, 350-376.

[6] O.L. Mangasarian, "A condition number for linear inequalities and linear programs," in Methods of Operations Research 43, Proceedings of the 6th Symposium on Operations Research, Universität Augsburg, G. Bamberg and O. Opitz, eds., Verlagsgruppe Athenäum/Hain/Scriptor/Hanstein, Königstein, 1981, 3-15.

[7] O.L. Mangasarian, "Lipschitz continuity of solutions of linear inequalities, programs and complementarity problems," SIAM Journal on Control and Optimization 25(3), May 1987.

[8] J. Renegar, "Incorporating condition measures into the complexity theory of linear programming," preprint, School of Operations Research and Industrial Engineering, Cornell University (renegar@orie.cornell.edu).

[9] S.M. Robinson, "Bounds for error in the solution set of a perturbed linear program," Linear Algebra and Its Applications 6, 1973, 69-81.

[10] W. Rudin, Functional Analysis, McGraw-Hill, New York, 1973.

[11] G.W. Stewart and J. Sun, Matrix Perturbation Theory, Academic Press, Boston, 1990.

[12] J. Vera, "Ill-posedness and the computation of solutions to linear programs with approximate data," preprint, Dept. Ingeniería Industrial, Universidad de Chile, República 701, Casilla 2777, Santiago, Chile (jvera@uchcecvm.bitnet).

[13] J. Vera, "Ill-posedness and the complexity of deciding existence of solutions to linear programs," preprint, Dept. Ingeniería Industrial, Universidad de Chile, República 701, Casilla 2777, Santiago, Chile (jvera@uchcecvm.bitnet).

[14] K. Yosida, Functional Analysis, Springer-Verlag, New York, 1966.