NONLINEAR PERTURBATION OF LINEAR PROGRAMS*


SIAM J. CONTROL AND OPTIMIZATION, Vol. 17, No. 6, November 1979. © 1979 Society for Industrial and Applied Mathematics. 0363-0129/79/1706-0006 $01.00/0

O. L. MANGASARIAN AND R. R. MEYER

Abstract. The objective function of any solvable linear program can be perturbed by a differentiable, convex or Lipschitz continuous function in such a way that (a) a solution of the original linear program is also a Karush-Kuhn-Tucker point, local or global solution of the perturbed program, or (b) each global solution of the perturbed problem is also a solution of the linear program.

We are concerned here with the linear program

(1)   Minimize $px$ subject to $Ax \ge b$,

where $p$ and $b$ are given vectors in $R^n$ and $R^m$ respectively and $A$ is a given $m \times n$ real matrix. We shall assume throughout this work that this problem has a nonempty optimal solution set $\bar S \subset S = \{x \mid Ax \ge b\}$. We shall be interested in the perturbed problem $P(\varepsilon)$ defined as follows:

(2)   Minimize $px + \varepsilon f(x)$ subject to $Ax \ge b$,

where $f\colon R^n \to R$ and $\varepsilon$ is a nonnegative real number. For convenience we denote the optimal solution set of (2) by $\bar S(\varepsilon)$. Note that $\bar S(0) = \bar S$.

Perturbed problems such as (2) are considered in [3], [4]. In [3] it was shown that if (1) has a unique solution $\bar x$ and $f$ is differentiable at $\bar x$, then there exists a positive $\bar\varepsilon$ such that for all $\varepsilon$ in $[0, \bar\varepsilon]$, $\bar x$ satisfies the Karush-Kuhn-Tucker conditions [1], [2] for the perturbed problem (2). By considering the specific quadratic perturbation $f(x) = x^Tx$, an iterative technique for solving linear programming problems is proposed in [4]. In this work we show that, under suitable conditions on $f$, there exists a positive number $\bar\varepsilon$ such that some solution of the linear program is a Karush-Kuhn-Tucker point or a local or global solution of the perturbed problem (2) for $\varepsilon$ in the interval $[0, \bar\varepsilon]$.
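As a concrete illustration of (1) and (2), the following sketch solves a small linear program whose solution set $\bar S$ is a whole segment, and then the perturbed problem $P(\varepsilon)$ with a strictly convex perturbation. The instance (the data $p$, $A$, $b$ and the perturbation $f(x) = \frac{1}{2}x^Tx$) is hypothetical and not from the paper; scipy is used as the solver.

```python
import numpy as np
from scipy.optimize import linprog, minimize

# Hypothetical instance (not from the paper): minimize p x subject to A x >= b,
# whose solution set S-bar is the whole segment {x | x1 + x2 = 1, x >= 0}.
p = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 0.0, 0.0])

# Problem (1): linprog expects A_ub x <= b_ub, so negate A x >= b.
lp = linprog(p, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 2)
assert lp.status == 0  # optimal value 1, attained on the whole segment

def solve_perturbed(eps):
    """Problem (2) with the strictly convex perturbation f(x) = ||x||^2 / 2."""
    cons = {"type": "ineq", "fun": lambda x: A @ x - b, "jac": lambda x: A}
    res = minimize(lambda x: p @ x + eps * 0.5 * x @ x,
                   x0=np.array([2.0, 2.0]),
                   jac=lambda x: p + eps * x,
                   constraints=[cons], method="SLSQP")
    return res.x

# For each eps > 0 the perturbed solution stays in S-bar (p x = 1) and is the
# least-norm point (1/2, 1/2) of the segment.
solutions = {eps: solve_perturbed(eps) for eps in (1.0, 0.1)}
for eps, x in solutions.items():
    print(eps, x, p @ x)
```

For every $\varepsilon > 0$ tried, the perturbed solution remains in $\bar S$ and singles out the least-norm point of the segment, in the spirit of the quadratic perturbation studied in [4].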
In Theorem 1 we show that if $f$ is differentiable and has a bounded level set on $\bar S$, then there exists a Karush-Kuhn-Tucker point of the perturbed problem (2) which also solves the original linear program (1). In Theorem 2 we indicate how the same type of perturbation applies to a nonlinear programming problem. The rest of the paper is again devoted to the perturbed linear program. In Theorem 3 we show that if $f$ satisfies a local Lipschitz or local convexity property, then there exists a solution of the linear program (1) which is a local solution of the perturbed problem (2). Among other things, Theorem 4 globalizes the result of Theorem 3 and shows that for sufficiently small $\varepsilon \ge 0$ the set of optimal solutions of the perturbed problem is actually a subset of the solutions of (1). Corollary 1 deals with the case when the linear program (1) has a unique solution, while Corollary 2 treats the case when the perturbation function $f$ is strictly convex on $R^n$.

* Received by the editors November 8, 1978, and in revised form March 25, 1979. Computer Sciences and Industrial Engineering Departments, University of Wisconsin-Madison, Madison, Wisconsin 53706. This research was supported in part by National Science Foundation Grant MCS74-20584 A02.

We begin with the first result.

THEOREM 1. Let $f$ be a function from $R^n$ into $R$ which is differentiable on the nonempty solution set $\bar S$ of (1). Either let the level set $L = \{x \mid x \in \bar S,\ f(x) \le \gamma\}$ be nonempty and bounded for some real number $\gamma$, or let $\bar\theta$ be the minimum value of (1) and let the nonlinear program

(3)   Minimize $f(x)$ subject to $Ax \ge b$, $px \le \bar\theta$

have a Karush-Kuhn-Tucker point. Then there exist an $\bar x$ in $\bar S$ and an $\bar\varepsilon > 0$ such that

for each $\varepsilon$ in $[0, \bar\varepsilon]$ there exists a $\bar u(\varepsilon)$ in $R^m$ such that $(\bar x, \bar u(\varepsilon))$ is a Karush-Kuhn-Tucker point of the perturbed problem (2), and $\bar x$ is also a solution of the linear program (1). If in addition $f$ is convex or pseudoconvex at $\bar x$, then $\bar x$ solves the perturbed problem (2) for $\varepsilon$ in $[0, \bar\varepsilon]$.

Proof. By explicit assumption, or by the boundedness of the level set $L$, problem (3) has a Karush-Kuhn-Tucker point $(\bar x, \bar u, \bar\eta)$ in $R^{n+m+1}$, which satisfies

(4)   $\nabla f(\bar x) - A^T\bar u + \bar\eta p = 0$,   $A\bar x \ge b$,   $p\bar x \le \bar\theta$,   $\bar u(A\bar x - b) = 0$,   $\bar u, \bar\eta \ge 0$.

Since $A\bar x \ge b$ and $p\bar x \le \bar\theta$ force $p\bar x = \bar\theta$, the point $\bar x$ is also a solution of the linear program (1), and hence there exists a $\bar v$ in $R^m$ such that

(5)   $-A^T\bar v + p = 0$,   $A\bar x \ge b$,   $\bar v(A\bar x - b) = 0$,   $\bar v \ge 0$.

Case 1: $\bar\eta = 0$. From (4) and (5) we have that for any $\varepsilon \ge 0$

$\varepsilon\nabla f(\bar x) - A^T(\bar v + \varepsilon\bar u) + p = 0$,   $A\bar x \ge b$,   $(\bar v + \varepsilon\bar u)(A\bar x - b) = 0$,   $\bar v + \varepsilon\bar u \ge 0$.

Hence $(\bar x, \bar v + \varepsilon\bar u)$ is a Karush-Kuhn-Tucker point of (2) for any $\varepsilon \ge 0$.

Case 2: $\bar\eta > 0$. When $\bar\eta > 0$, it follows from (4) that $(\bar x, \bar u/\bar\eta)$ is a Karush-Kuhn-Tucker point of (2) with $\varepsilon = \bar\varepsilon = 1/\bar\eta$. From (4) and (5) we have, for $\bar\eta > 0$ and $\lambda \in [0, 1]$, that

$(\lambda/\bar\eta)\nabla f(\bar x) - A^T((1-\lambda)\bar v + \lambda\bar u/\bar\eta) + p = 0$,   $A\bar x \ge b$,   $((1-\lambda)\bar v + \lambda\bar u/\bar\eta)(A\bar x - b) = 0$,   $(1-\lambda)\bar v + \lambda\bar u/\bar\eta \ge 0$.

Hence $(\bar x, (1-\lambda)\bar v + \lambda\bar u/\bar\eta)$ is a Karush-Kuhn-Tucker point of (2) for $\varepsilon = \lambda\bar\varepsilon = \lambda/\bar\eta$ and $\lambda \in [0, 1]$.

The last statement of the theorem follows from the standard sufficiency theory of nonlinear programming [2, Thm. 10.1.2]. ∎

We can apply the same proof technique to a considerably more general problem than (1), namely to the nonlinear programming problem

(6)   Minimize $\theta(x)$ subject to $g(x) \le 0$, $h(x) = 0$,

where $\theta$, $g$ and $h$ are functions from $R^n$ into $R$, $R^m$ and $R^k$ respectively. However, because of a constraint qualification restriction, the results apply only to a narrow class of problems outside linear programs. Hence we shall merely state the result and omit the proof,

which is quite similar to the proof of Theorem 1. We shall again associate with (6) a perturbed problem, namely, for some $\varepsilon \ge 0$,

(7)   Minimize $\theta(x) + \varepsilon f(x)$ subject to $g(x) \le 0$, $h(x) = 0$,

where $f$ is a function from $R^n$ into $R$. We shall assume that (6) has a local solution at $\bar x$ with minimum value $\bar\theta = \theta(\bar x)$ and that $B$ is an open ball with center $\bar x$ such that $\theta(\bar x) \le \theta(x)$ for all $x$ in $B$ satisfying the constraints $g(x) \le 0$ and $h(x) = 0$. We further admit the possibility of the nonuniqueness of $\bar x$ and define $\bar S = \{x \mid \theta(x) = \bar\theta,\ g(x) \le 0,\ h(x) = 0,\ x \in B\}$.

THEOREM 2. Let (6) have a nonempty set $\bar S$ of local optimal solutions satisfying a constraint qualification. Let $\theta$, $g$, $h$ and $f$ be differentiable on $\bar S$ and let the nonlinear program

(8)   Minimize $f(x)$ subject to $g(x) \le 0$, $h(x) = 0$, $\theta(x) \le \bar\theta$, $x \in B$

have a Karush-Kuhn-Tucker point $(\bar x, \bar u, \bar v, \bar\eta)$ in $R^{n+m+k+1}$. Then there exist an $\bar\varepsilon > 0$ and a mapping $(\bar u, \bar v)\colon [0, \bar\varepsilon] \to R^{m+k}$ such that for each $\varepsilon$ in $[0, \bar\varepsilon]$, $(\bar x, \bar u(\varepsilon), \bar v(\varepsilon))$ is a Karush-Kuhn-Tucker point for the perturbed problem (7), and $\bar x$ is also a local solution of the nonlinear program (6). In fact it is possible to take $\bar\varepsilon = 1/\bar\eta$ when $\bar\eta > 0$, and $\bar\varepsilon$ as any positive number when $\bar\eta = 0$.

The main cause of the restrictive nature of this theorem outside linear programming is that, in order for (8) to have a Karush-Kuhn-Tucker point, its constraints must in general satisfy a constraint qualification. This is difficult when $g$, $h$ and $\theta$ are nonlinear because of the constraint $\theta(x) \le \bar\theta$. However, when $h$ is linear and $\theta$ and $g$ are pseudoconcave or concave at $\bar x$, then a constraint qualification is automatically satisfied [2, Thm. 11.3.6]. This is a somewhat restrictive extension, which does however include the case when (6) is a linear program.

The rest of the paper is devoted exclusively to the perturbation (2) of the linear program (1). We will first show that, under appropriate assumptions, some element $\bar x$ of the solution set $\bar S$ of (1) will be a local (global) solution of $P(\varepsilon)$ for all sufficiently small $\varepsilon \ge 0$.
We will then show that, under slightly stronger assumptions, each global solution of $P(\varepsilon)$ for sufficiently small $\varepsilon \ge 0$ is also a solution of (1). We begin by assuming that $\min_{x \in \bar S} f(x)$ has a local (global) solution $\bar x$, so that there exists an open ball $B$ with center $\bar x$ such that $\bar x \in \bar S \cap B$ is optimal for the problem

(9)   Minimize $f(x)$ subject to $x \in \bar S \cap B$.

The proof of the subsequent results depends crucially on establishing a minimum rate of increase of $px$ in certain directions that lead "away" from $\bar S$. These directions are related to projections of points in $S$ on $\bar S$. The projection of a point $x$ on $\bar S$ is denoted by $\mu(x)$, with $\mu(x) \in \bar S$ and $\|\mu(x) - x\| = \min_{\bar x \in \bar S} \|\bar x - x\|$, where $\|\cdot\|$ denotes the $\infty$-norm throughout this paper unless otherwise subscripted. We state now the key result, which gives the desired lower bound on $p(x - \mu(x))$; its proof is given in the Appendix.

LEMMA 1. There exists an $\alpha > 0$ such that $p(x - \mu(x)) \ge \alpha\|x - \mu(x)\|$ for all $x \in S$.
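Since $\|\cdot\|$ is the $\infty$-norm, the projection $\mu(x)$ is itself computable by a linear program (the same construction is used in the proof of Lemma A.1 in the Appendix). The sketch below, for a hypothetical instance not taken from the paper, computes $\mu(x)$ this way and spot-checks the inequality of Lemma 1 at random feasible points; for this particular instance the constant $\alpha = 1$ works.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance (not from the paper): p = (1, 1) and
# S = {x | x1 + x2 >= 1, x >= 0}, so theta-bar = 1 and
# S-bar = {x | x1 + x2 = 1, x >= 0}.
p = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 0.0, 0.0])
theta = 1.0
n, m = 2, 3

def project(x):
    """mu(x): infinity-norm projection of x onto S-bar, computed as the LP
    min delta  s.t.  -delta e <= x - z <= delta e,  A z >= b,  p z <= theta."""
    c = np.zeros(n + 1); c[-1] = 1.0                      # variables (z, delta)
    I, col = np.eye(n), np.ones((n, 1))
    A_ub = np.vstack([np.hstack([-I, -col]),              # x - z <= delta e
                      np.hstack([ I, -col]),              # z - x <= delta e
                      np.hstack([-A, np.zeros((m, 1))]),  # A z >= b
                      np.hstack([p[None, :], [[0.0]]])])  # p z <= theta
    b_ub = np.concatenate([-x, x, -b, [theta]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)])
    assert res.status == 0
    return res.x[:n], res.x[-1]

mu, dist = project(np.array([2.0, 2.0]))  # projection (1/2, 1/2), distance 3/2

# Spot-check of Lemma 1: p(x - mu(x)) >= alpha ||x - mu(x)||_inf on S,
# with alpha = 1 for this instance (note p mu(x) = theta for every x in S).
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.uniform(0.0, 3.0, 2)
    if x.sum() < 1.0:          # discard samples outside S
        continue
    _, d = project(x)
    assert p @ x - theta >= d - 1e-6
```

The LP has a unique optimal $z$ here; in general $\mu(x)$ in the $\infty$-norm need not be unique, and the solver returns one minimizer.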

We shall also need the following Lipschitz property on the perturbation function $f$: there exist positive numbers $\delta$ and $K$ such that

(10)   $f(\mu(x)) - f(x) \le K\|x - \mu(x)\|$ for $x \in S$ and $\|x - \mu(x)\| \le \delta$.

Note that it follows from the definition of $\mu(x)$ that $\|x - \mu(x)\| \le \delta$ whenever $\|x - \bar x\| \le \delta$ for some $\bar x \in \bar S$. With the above concepts we establish our next principal result.

THEOREM 3. Let $\bar x$ be a local solution of $\min_{x \in \bar S} f(x)$. Then, for sufficiently small $\varepsilon \ge 0$, $\bar x$ is both a global solution of the linear program (1) and a local solution of the perturbed problem (2), provided that either of the two following conditions holds:
(a) The Lipschitz property (10) holds.
(b) $f$ is convex on some open set containing $\bar x$.

Proof. (a) Let (10) hold and let $B = B(\bar x, \bar\delta) = \{x \mid \|x - \bar x\| < \bar\delta\}$, where $\bar\delta$ is chosen such that $0 < \bar\delta \le \delta$ and $\bar x$ is an optimal solution of (9). Note that if $x \in B(\bar x, \bar\delta)$, then $\|x - \mu(x)\| \le \|x - \bar x\| < \bar\delta \le \delta$. Hence by part (b) of Lemma A.3 of the Appendix we have, upon noting the equality $p\bar x = p\mu(x)$, that

$\varepsilon f(\bar x) + p\bar x \le \varepsilon f(x) + px$ for $x \in S \cap B(\bar x, \bar\delta/2)$ and $\varepsilon \in [0, \alpha/K]$.

Hence $\bar x$ solves (2) for $\varepsilon \in [0, \alpha/K]$ with the added constraint that $x \in B(\bar x, \bar\delta/2)$.

(b) Let $f$ be convex on $B(\bar x, r)$ for some $r > 0$. By Theorem 10.4 of [5], $f$ is Lipschitzian on any open ball $B(\bar x, \delta)$ with $\delta < r$, and again we have that $\|x - \mu(x)\| \le \|x - \bar x\| < \delta$ for $x \in B(\bar x, \delta)$. Hence, because $f$ is Lipschitzian on $B(\bar x, \delta)$, the first inequality of (10) holds for $x \in B(\bar x, \delta)$, and because $f$ is convex on $B(\bar x, \delta)$, $\bar x$ is an optimal solution of (9) with $B = B(\bar x, \delta)$. Again by part (b) of Lemma A.3 of the Appendix we have that

$\varepsilon f(\bar x) + p\bar x \le \varepsilon f(x) + px$ for $x \in S \cap B(\bar x, \delta/2)$ and $\varepsilon \in [0, \alpha/K]$.

Hence $\bar x$ solves (2) for $\varepsilon \in [0, \alpha/K]$ with the added constraint that $x \in B(\bar x, \delta/2)$. ∎

Example 1. To illustrate the need for the Lipschitz property (10), let $x \in R$, $S = \{x \mid x \ge 0\}$, $p = 1$ and $f(x) = -x^{1/2}$. Note that $f$ is continuous on $S$, but does not have the Lipschitz property in a neighborhood of $\bar S = \{0\}$. (Note also that $f$ is convex on $S$, but cannot be extended to a finite convex function on $R$.) In this case it is easily verified that $\bar S(\varepsilon) = \{\varepsilon^2/4\}$ for $\varepsilon \ge 0$, and thus $\bar S(\varepsilon)$ never includes $\bar S = \{0\}$ for any positive $\varepsilon$.

Note that in Theorem 3 the Lipschitz property (10) is needed only for $x$ that lie in some open neighborhood of $\bar x$, since only such points are involved in the statement of the theorem and its proof. On the other hand, by using the full strength of (10), and under slightly stronger assumptions than those of Theorem 3, we can show that each global solution of the perturbed problem (2) for sufficiently small $\varepsilon \ge 0$ is also a solution of the linear program (1). In particular we have the following.

THEOREM 4. Let $\bar x \in \bar S$ be a solution of $\min_{x \in \bar S} f(x)$ and let $px + \varepsilon^* f(x)$ be bounded from below on $S$ for some $\varepsilon^* > 0$. Then $\bar S(\varepsilon) \subset \bar S$ for sufficiently small $\varepsilon \ge 0$, provided that any of the following conditions holds:
(a) The Lipschitz property (10) holds.
(b) $f$ is convex on some open convex set containing $\bar S$.
(c) $f$ has continuous first partial derivatives on some open set containing $\bar S$, and $\bar S$ is compact.

Proof. We will first establish that $\bar x \in \bar S(\varepsilon) \subset \bar S$ for sufficiently small $\varepsilon \ge 0$ under hypothesis (a) by showing that for sufficiently small $\varepsilon \ge 0$

(11)   $p\bar x + \varepsilon f(\bar x) < px + \varepsilon f(x)$ for $x \in S \setminus \bar S$,

and

(12)   $p\bar x + \varepsilon f(\bar x) \le px + \varepsilon f(x)$ for $x \in \bar S$.

Inequality (12) holds because $\bar x$ minimizes $f$ on $\bar S$. To establish (11), let $x \in S \setminus \bar S$; thus $x \ne \mu(x)$, and consider the two following cases.

Case 1: $0 < \|\mu(x) - x\| \le \delta$. The strict inequality (11) follows from part (a) of Lemma A.3 of the Appendix for $\varepsilon \in [0, \alpha/K)$, upon noting that $p\bar x = p\mu(x)$.

Case 2: $\|\mu(x) - x\| > \delta$. Let $\nu$ be such that $px + \varepsilon^* f(x) \ge \nu$ for $x \in S$, so that $f(x) \ge \nu/\varepsilon^* - px/\varepsilon^*$. By defining $q = -p/\varepsilon^*$ and $\rho = -\nu/\varepsilon^* + f(\bar x) + p\bar x/\varepsilon^*$, we have that

$f(\bar x) - f(x) \le q(\mu(x) - x) + \rho$ for $x \in S$.

Because $\|\mu(x) - x\| > \delta$, it follows upon using the Hölder inequality that

$\varepsilon(f(\bar x) - f(x))/\|x - \mu(x)\| \le \varepsilon\|q\|_1 + \varepsilon\rho/\delta$ for $x \in S$,

and consequently, for $\varepsilon$ small enough, that is for $\varepsilon \in [0, \alpha/(\|q\|_1 + \rho/\delta))$, the right-hand side of the last inequality is less than $\alpha$. Thus, for such $\varepsilon$,

$\varepsilon(f(\bar x) - f(x)) < \alpha\|x - \mu(x)\| \le p(x - \mu(x)) = px - p\bar x$ (by Lemma 1).

This establishes (11) for this second case also.

Now note that hypothesis (c) implies (a), and that hypothesis (b) also implies (a) in the case that $\bar S$ is compact [5, Thm. 10.4], so that the proof will be completed by showing that the result holds under hypothesis (b) even when $\bar S$ is not compact. Let $T = \{x \mid \|x - \bar x\| \le k\}$, where $k$ is some positive number, let $S' = S \cap T$, and let $\bar S' = \bar S \cap T$. Note that $S'$ is a compact polyhedral set and that $\bar S'$ is the set of optimal solutions of $\min_{x \in S'} px$, so that the preceding arguments imply that there exists an $\varepsilon' > 0$ such that $\bar x \in \bar S'(\varepsilon) \subset \bar S'$ for $\varepsilon \in [0, \varepsilon']$, where $\bar S'(\varepsilon)$ denotes the solution set of $\min_{x \in S'} px + \varepsilon f(x)$. Now suppose that for some $\varepsilon \in [0, \varepsilon']$, $\bar S(\varepsilon)$ contains a point $\hat x \notin \bar S$. By the convexity of $px + \varepsilon f(x)$, $\bar x \in \bar S'(\varepsilon)$ implies that $\bar x \in \bar S(\varepsilon)$ (since a local solution of $P(\varepsilon)$ must also be a global solution), and consequently by the convexity of $\bar S(\varepsilon)$ it follows that $x(\lambda) = (1 - \lambda)\bar x + \lambda\hat x \in \bar S(\varepsilon)$ for all $\lambda \in [0, 1]$. However, for $\lambda \in (0, 1]$ we have that $x(\lambda) \notin \bar S$ and hence $x(\lambda) \notin \bar S'$. But for sufficiently

small $\lambda > 0$ we have $x(\lambda) \in \bar S'(\varepsilon) \subset \bar S'$, which is a contradiction. Thus $\bar S(\varepsilon) \subset \bar S$ for $\varepsilon \in [0, \varepsilon']$, and since $\bar x \in \bar S(\varepsilon)$, the theorem is established under hypothesis (b). ∎

In the terminology of point-to-set mappings, the result $\bar S(\varepsilon) \subset \bar S$ of Theorem 4 for $\varepsilon \ge 0$ sufficiently small implies that the mapping $\bar S(\varepsilon)$ is upper semicontinuous at 0 in a strong sense. (Note that if $px + \varepsilon f(x)$ is not bounded from below for any $\varepsilon > 0$, then the inclusion $\bar S(\varepsilon) \subset \bar S$ holds trivially, since $\bar S(\varepsilon) = \emptyset$ for all $\varepsilon > 0$.)

To see that the compactness of $\bar S$ is necessary in hypothesis (c) of Theorem 4, we give below an example in which the conclusion of Theorem 4 fails when the compactness assumption of part (c) is dropped.

Example 2. Let $x \in R^2$, $p = (0, 1)$, $S = \{(x_1, x_2) \mid 1 \le x_1,\ 0 \le x_2 \le 1\}$ and $f(x) = -x_1x_2 + x_1^3x_2^2$. Note that on $S$, $px + \varepsilon f(x) \ge \varepsilon(-x_1x_2 + (x_1x_2)^2) \ge -\varepsilon/4$. Moreover, $\bar S = \{(x_1, x_2) \mid x_1 \ge 1,\ x_2 = 0\}$ and $f(x) = 0$ on $\bar S$, so that $px + \varepsilon f(x) = 0$ on $\bar S$ for all $\varepsilon \ge 0$. However, for $0 < \varepsilon \le 2$, $x_1 = 2/\varepsilon$ and $x_2 = \varepsilon^2/16$, we have that $(x_1, x_2) \in S$ and $px + \varepsilon f(x) = -\varepsilon^2/32 < 0$, and hence no solution of $\min_{x \in \bar S} f(x)$ can be in $\bar S(\varepsilon)$, the solution set of $\min_{x \in S} px + \varepsilon f(x)$. It can also be shown that $\bar S(\varepsilon)$ is nonempty for all $\varepsilon > 0$, so that $\bar S(\varepsilon)$ is not contained in $\bar S$.

Note that in the case that the linear program (1) has a unique solution, many of the results above may be simplified. In particular, Theorems 1 and 4 yield the following. (See also Remark 4 in [3].)

COROLLARY 1. Let $\bar S$ consist of a single point $\bar x$. If $f$ is differentiable at $\bar x$, then $\bar x$ is a Karush-Kuhn-Tucker point of (2) for all sufficiently small $\varepsilon \ge 0$. If, in addition, $\bar S(\varepsilon^*) \ne \emptyset$ for some $\varepsilon^* > 0$, then $\bar S(\varepsilon) = \{\bar x\}$ for all sufficiently small $\varepsilon \ge 0$.

Proof. The first conclusion follows directly from Theorem 1. The second follows from the fact that $\bar S = \{\bar x\}$ implies that $\mu(x) = \bar x$ for all $x \in S$, so that the Lipschitz property (10) holds as a consequence of the differentiability of $f$ at $\bar x$. This part of the corollary then follows from Theorem 4. ∎
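The arithmetic behind Example 2 can be checked in exact rational arithmetic; a minimal sketch (the helper `perturbed_objective` is, of course, not from the paper):

```python
from fractions import Fraction as F

# Example 2: p = (0, 1) and f(x) = -x1*x2 + x1**3 * x2**2 on
# S = {x | x1 >= 1, 0 <= x2 <= 1}.
def perturbed_objective(eps, x1, x2):
    """p x + eps f(x) of problem (2) for the data of Example 2."""
    return x2 + eps * (-x1 * x2 + x1 ** 3 * x2 ** 2)

for eps in (F(2), F(1), F(1, 2), F(1, 10)):
    x1, x2 = F(2) / eps, eps ** 2 / 16      # the point exhibited in Example 2
    assert x1 >= 1 and 0 <= x2 <= 1         # feasible for 0 < eps <= 2
    assert perturbed_objective(eps, x1, x2) == -eps ** 2 / 32   # < 0
    # while p x + eps f(x) = 0 everywhere on S-bar = {x1 >= 1, x2 = 0}:
    assert perturbed_objective(eps, F(5), F(0)) == 0
```

The check confirms that the exhibited points drive $px + \varepsilon f(x)$ down to $-\varepsilon^2/32 < 0$ while the objective vanishes on $\bar S$, so no point of $\bar S$ can be optimal for the perturbed problem.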
A similar result also holds without assuming uniqueness in (1), if a strict convexity property is assumed instead.

COROLLARY 2. If $f$ is strictly convex on some open set containing $\bar S$ and if $\bar x$ is the solution of $\min_{x \in \bar S} f(x)$, then $\bar S(\varepsilon) = \{\bar x\}$ for all sufficiently small $\varepsilon > 0$.

Proof. The proof follows from Theorem 3 and the fact that, for $\varepsilon > 0$, $px + \varepsilon f(x)$ is strictly convex and therefore assumes its minimum at not more than one point in $S$. ∎

Appendix.

LEMMA A.1. There exists an $\alpha > 0$ such that $p(x - \mu(x)) \ge \alpha\|x - \mu(x)\|$ for all $x \in S$.

Proof. The lemma holds trivially when $x \in \bar S$, or equivalently when $x = \mu(x)$. Suppose now that $x \in S \setminus \bar S$ and let $e$ be a vector of ones in $R^n$. Then

$0 < \|x - \mu(x)\| = \min \{\delta \mid -\delta e \le x - z \le \delta e,\ Az \ge b,\ pz \le \bar\theta\}$
$= \max \{x(y - v) - \bar\theta\zeta + bw \mid y - v - p\zeta + A^Tw = 0,\ ey + ev = 1,\ y, v, \zeta, w \ge 0\}$ (by linear programming duality)

(A.1)   $= \max \{\zeta(px - \bar\theta) + w(b - Ax) \mid y - v - p\zeta + A^Tw = 0,\ ey + ev = 1,\ y, v, \zeta, w \ge 0\}$
$= \zeta(x)(px - p\mu(x)) + w(x)(b - Ax)$ (since $\bar\theta = p\mu(x)$ and $(y(x), v(x), \zeta(x), w(x))$ is a solution of the maximum problem)
$\le \zeta(x)\,p(x - \mu(x))$ (since $w(x) \ge 0$ and $b - Ax \le 0$).

Thus $\zeta(x) > 0$ for $x \in S \setminus \bar S$, and in addition

(A.2)   $1/\zeta(x) \le p(x - \mu(x))/\|x - \mu(x)\|$ for $x \in S \setminus \bar S$.

But since $\zeta(x)$ may be chosen as a component of a solution vertex of the linear program (A.1), and since the feasible region of (A.1) is independent of $x$ and has a finite number of vertices, $\zeta(x)$ for $x \in S \setminus \bar S$ may be bounded as follows:

$\zeta(x) \le 1/\alpha := \max \{\zeta \mid (y, v, \zeta, w)$ is a vertex of $y - v - p\zeta + A^Tw = 0,\ ey + ev = 1,\ y, v, \zeta, w \ge 0\}$.

This bound on $\zeta(x)$, together with (A.2), establishes the lemma. ∎

LEMMA A.2. If $\bar x \in \bar S$ and $x \in R^n$, then $\|\mu(x) - \bar x\| \le 2\|x - \bar x\|$.

Proof. $\|\mu(x) - \bar x\| \le \|\mu(x) - x\| + \|x - \bar x\| \le \|\bar x - x\| + \|x - \bar x\| = 2\|x - \bar x\|$, where the second inequality holds since $\mu(x)$ is the projection of $x$ on $\bar S$ and $\bar x \in \bar S$. ∎

LEMMA A.3. Let the Lipschitz condition (10) hold and let $\bar x \in \bar S \cap B$ be a solution of $\min_{x \in \bar S \cap B} f(x)$ with the ball $B = B(\bar x, \bar\delta)$ for some $\bar\delta > 0$. Then for $\|x - \mu(x)\| \le \delta$ and $x \in S \cap B(\bar x, \bar\delta/2)$:
(a) $\varepsilon(f(\bar x) - f(x)) < p(x - \mu(x))$ for $x \ne \mu(x)$ and $\varepsilon \in [0, \alpha/K)$;
(b) $\varepsilon(f(\bar x) - f(x)) \le p(x - \mu(x))$ for $\varepsilon \in [0, \alpha/K]$.

Proof. Let $x \in S \cap B(\bar x, \bar\delta/2)$ and $\|x - \mu(x)\| \le \delta$; then

$\varepsilon(f(\bar x) - f(x)) \le \varepsilon(f(\mu(x)) - f(x))$ (since, by Lemma A.2, $\mu(x) \in \bar S \cap B(\bar x, \bar\delta)$)
$\le \varepsilon K\|\mu(x) - x\|$ (by (10))
$< \alpha\|\mu(x) - x\|$ (for $\varepsilon \in [0, \alpha/K)$ and $x \ne \mu(x)$)
$\le p(x - \mu(x))$ (by Lemma A.1).

This establishes part (a) of the lemma. Part (b) follows by changing the strict inequality in the above string of inequalities to a weak inequality for the case $\varepsilon \in [0, \alpha/K]$. ∎
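The constant $\alpha$ of Lemma A.1 is determined by the largest $\zeta$-component over the finitely many vertices of the feasible region of (A.1). For a tiny hypothetical instance (minimize $x$ subject to $x \ge 0$, so that $\bar S = \{0\}$), this vertex maximum can be found by brute-force enumeration of basic feasible solutions; the sketch below is only viable for very small $m$ and $n$:

```python
import itertools
import numpy as np

# Hypothetical tiny instance: minimize x subject to x >= 0, so n = m = 1,
# p = [1], A = [[1]], b = [0], with optimal value theta-bar = 0 and S-bar = {0}.
p = np.array([1.0]); A = np.array([[1.0]])
n, m = A.shape[1], A.shape[0]

# Equality system of the feasible region of (A.1), variables (y, v, zeta, w) >= 0:
#   y - v - p*zeta + A^T w = 0   (n equations)
#   sum(y) + sum(v) = 1          (1 equation)
E = np.zeros((n + 1, 2 * n + 1 + m))
E[:n, :n] = np.eye(n); E[:n, n:2 * n] = -np.eye(n)
E[:n, 2 * n] = -p;     E[:n, 2 * n + 1:] = A.T
E[n, :2 * n] = 1.0
rhs = np.zeros(n + 1); rhs[n] = 1.0

# Enumerate basic feasible solutions (vertices) and take the largest zeta;
# the proof of Lemma A.1 sets alpha = 1 / (max vertex zeta).
best_zeta = 0.0
for basis in itertools.combinations(range(E.shape[1]), n + 1):
    B = E[:, basis]
    if abs(np.linalg.det(B)) < 1e-12:
        continue
    sol = np.linalg.solve(B, rhs)
    if (sol >= -1e-12).all():
        full = np.zeros(E.shape[1]); full[list(basis)] = sol
        best_zeta = max(best_zeta, full[2 * n])
assert best_zeta > 0
alpha = 1.0 / best_zeta
print(alpha)
```

For this instance $\alpha = 1$, which is tight: $\mu(x) = 0$, so $p(x - \mu(x)) = x = \|x - \mu(x)\|$ for every $x \ge 0$.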

REFERENCES

[1] W. KARUSH, Minima of functions of several variables with inequalities as side conditions, Master of Science Dissertation, Dept. of Mathematics, Univ. of Chicago, December 1939.
[2] O. L. MANGASARIAN, Nonlinear Programming, McGraw-Hill, New York, 1969.
[3] ———, Uniqueness of solution in linear programming, Linear Algebra and Appl., 25 (1979), pp. 151-162.
[4] ———, Iterative solution of linear programs, Computer Sciences Tech. Rep. 327, Univ. of Wisconsin, Madison, in preparation.
[5] R. T. ROCKAFELLAR, Convex Analysis, Princeton University Press, Princeton, NJ, 1970.