SIAM J. CONTROL AND OPTIMIZATION
Vol. 27, No. 6, pp. 1260-1278, November 1989
(c) 1989 Society for Industrial and Applied Mathematics

A SEQUENTIAL LINEAR PROGRAMMING ALGORITHM FOR SOLVING MONOTONE VARIATIONAL INEQUALITIES*

PATRICE MARCOTTE† AND JEAN-PIERRE DUSSAULT‡

Abstract. Applied to strongly monotone variational inequalities, Newton's algorithm achieves local quadratic convergence. In this paper it is shown how the basic Newton method can be modified to yield an algorithm whose global convergence can be guaranteed by monitoring the monotone decrease of the "gap function" associated with the variational inequality. Each iteration consists in the solution of a linear program in the space of primal-dual variables and of a linesearch. Convergence does not depend on strong monotonicity. However, under strong monotonicity and geometric stability assumptions, the set of active constraints at the solution is implicitly identified, and quadratic convergence is achieved.

Key words. mathematical programming, variational inequalities, nonlinear complementarity, Newton's method

AMS(MOS) subject classifications. 49D05, 49D10, 49D15, 49D35

0. Introduction. In this paper we consider the variational inequality problem defined on a convex compact polyhedron in $R^n$. Since this problem can be formulated as a fixed-point problem involving an upper semicontinuous mapping, it can be solved by simplicial or homotopy methods, for which there already exists a vast literature (see Zangwill [16], Todd [14], Saigal [13]). For large-scale problems, however, these algorithms tend to become inefficient, both in terms of computer memory and of running time. This explains the renewed interest in algorithms closely related to procedures originally devised for iteratively solving systems of nonlinear equations (Ortega and Rheinboldt [10]), such as the Jacobi, Gauss-Seidel, and Newton schemes (see Pang and Chan [11], Josephy [5], Robinson [12]), or in projection algorithms (Bertsekas and Gafni [2], Dafermos [4]), where the cost function is approximated, at each iteration, by a simpler, e.g., linear, separable, or symmetric, function. Local or global convergence of the latter methods usually hinges on the a priori knowledge of lower bounds for the Lipschitz constant of the cost function, either in a neighborhood of a solution (for local convergence) or uniformly on the feasible domain (for global convergence). These conditions are difficult, though not impossible, to verify in practice.

Our approach is basically different. We choose as a merit function the complementarity term (or gap function) associated with the primal-dual formulation of the variational inequality, and we find its global minimum by application of a first-order minimization algorithm. For monotone cost functions, we show that the algorithm converges globally to an equilibrium solution and possesses the finite termination property if the function is affine. Furthermore, under geometric stability and strong monotonicity assumptions, the algorithm implicitly identifies the set of constraints that are binding at the equilibrium solution, and convergence toward the equilibrium solution is quadratic. Numerical results comparing this method to Newton's method with and without linesearch (Marcotte and Dussault [9]) are provided.

* Received by the editors February 25, 1985; accepted for publication (in revised form) February 24.
† Département de Mathématiques, Collège Militaire Royal de Saint-Jean, Richelain, Québec J0J 1R0, Canada. This research was supported by Natural Sciences and Engineering Research Council of Canada grants 5491 and 5789, and by Academic Research Program of the Department of National Defense grant FUHBP.
‡ Département de Mathématiques et d'Informatique, Université de Sherbrooke, Boul. Université, Sherbrooke, Québec J1K 2R1, Canada.

1. Problem formulation, notation, and basic definitions. Let $\Phi = \{x : Bx \le b\}$, where $B$ is an $m \times n$ matrix ($m > n$), represent a nonempty convex compact polyhedron in $R^n$, and let $F$ be a continuously differentiable function from $\Phi$ into $R^n$ with Jacobian $F'$. The variational inequality problem (VIP) associated with $F$ and $\Phi$ consists in finding some vector $x^*$ in $\Phi$, called an equilibrium solution, satisfying the variational inequality (VI)

(1)    $(x^* - x)^t F(x^*) \le 0$    for all $x$ in $\Phi$.

Since an equilibrium solution is a fixed point of the upper semicontinuous mapping defined by $x \mapsto T(x) = \arg\max_{y \in \Phi} (x - y)^t F(x)$, it follows from Kakutani's theorem [6] and the compactness of $\Phi$ that the set $S$ of equilibria is nonempty. If the Jacobian $F'(x)$ is symmetric for all $x$ in $\Phi$, then the function $F$ is the gradient of some function $f : R^n \to R$, and (1) is the mathematical expression of the first-order necessary conditions corresponding to the optimization problem

(2)    $\min_{x \in \Phi} f(x) = \int F(t)\,dt$,

where the line integral is independent of the path of integration and therefore unambiguously defined.

In order that a feasible point $x$ be an equilibrium, it is necessary and sufficient that $x$ be optimal for the linear program

(3)    $\min_{y \in \Phi}\; y^t F(x)$.

The optimality conditions for (3) are met by $x$ if and only if there exists a dual vector $\lambda$ such that

(4)    $\lambda \ge 0$, $F(x) + B^t\lambda = 0$    (dual feasibility),
       $\lambda^t(Bx - b) = 0$    (complementary slackness),
       $Bx \le b$    (primal feasibility).

In the following, (4) will be referred to as the complementarity formulation of VIP. If $F'$ is symmetric, (4) corresponds to the Kuhn-Tucker necessary optimality conditions for the optimization problem (2). If the constraint set is not polyhedral, a formulation similar to (4) can be obtained by imposing a suitable constraint qualification condition on the problem. The constraints $Bx \le b$ will be referred to as the structural constraints associated with the variational inequality problem, and the constraints $F(x) + B^t\lambda = 0$, $\lambda \ge 0$ as the nonstructural constraints.

DEFINITION 1. The function $F$ is
(i) monotone on $\Phi$ if $(x - y)^t(F(x) - F(y)) \ge 0$ for all $x, y$ in $\Phi$;
(ii) strictly monotone on $\Phi$ if $(x - y)^t(F(x) - F(y)) > 0$ for all $x, y$ in $\Phi$, $x \ne y$;
(iii) strongly monotone on $\Phi$ if there exists a positive number $\kappa$ such that $(x - y)^t(F(x) - F(y)) \ge \kappa \|x - y\|^2$ for all $x, y$ in $\Phi$.

When $F$ is the gradient of some differentiable function $f$, the concepts of monotonicity defined above correspond, respectively, to convexity, strict convexity, and strong convexity of $f$ on $\Phi$. For differentiable functions we also have the following characterizations (see Auslender [1]):
(i) monotonicity on $\Phi$: $(x - y)^t F'(x)(x - y) \ge 0$ for all $x, y$ in $\Phi$;
(ii) strong monotonicity on $\Phi$: $(x - y)^t F'(x)(x - y) \ge \kappa \|x - y\|^2$ for all $x, y$ in $\Phi$, for some positive number $\kappa$.
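The Jacobian characterization above is convenient computationally: strong monotonicity holds with constant $\kappa$ as soon as the smallest eigenvalue of the symmetric part of $F'(x)$ is at least $\kappa$ on $\Phi$. The following sketch (our illustration, not part of the paper; the sample mapping and the random sampling scheme are assumptions) estimates such a constant numerically.

```python
import numpy as np

def strong_monotonicity_bound(F_jac, points):
    """Estimate kappa as the smallest eigenvalue of the symmetric part of
    the Jacobian F'(x) over the sampled points; a positive value certifies
    (x - y)^t F'(x) (x - y) >= kappa ||x - y||^2 on the sample."""
    kappa = np.inf
    for x in points:
        sym = 0.5 * (F_jac(x) + F_jac(x).T)
        kappa = min(kappa, np.linalg.eigvalsh(sym)[0])  # eigenvalues sorted ascending
    return kappa

# Example mapping (hypothetical): F(x) = M x + arctan(x) componentwise.
M = np.array([[2.0, 1.0], [-1.0, 2.0]])                  # asymmetric matrix
F_jac = lambda x: M + np.diag(1.0 / (1.0 + x ** 2))      # Jacobian of F
samples = [np.random.rand(2) for _ in range(100)]
print(strong_monotonicity_bound(F_jac, samples))         # > 0 on this sample
```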

The solution set $S$ of (1) is nonempty, as noted earlier; it is convex if $F$ is monotone, and a singleton if $F$ is strictly monotone.

DEFINITION 2. The gap function associated with a VIP is defined, for $x$ in $\Phi$, as

$g(x) = \max_{y \in \Phi}\, (x - y)^t F(x)$.

It is clear that a feasible point $x$ is a solution of VIP if and only if it is a global minimizer of the gap function, i.e., $g(x) = 0$. Using this concept, VIP can be formulated as the linearly constrained optimization problem

(5)    $\min_{x \in \Phi}\; g(x)$.

Although $g$ is, in general, neither quasiconvex nor differentiable, it will be shown in Lemma 2 that any stationary point of (5) is an equilibrium solution. In particular, a globally convergent algorithm using the gap function as a merit function has been proposed by Marcotte [7].

DEFINITION 3. The dual gap function associated with VIP is defined as

$\tilde g(x) = \max_{y \in \Phi}\, (x - y)^t F(y)$.

The dual gap function is convex, but its evaluation requires the solution of a nonconvex (in contrast with linear, for the gap function) mathematical program. Under a monotonicity assumption, any global minimizer of the problem $\min_{x \in \Phi} \tilde g(x)$ is a solution to VIP. A solution algorithm based on direct minimization of the dual gap function can be found in Nguyen and Dupuis [17].

DEFINITION 4. We say that VIP is geometrically stable if, for any equilibrium solution $x^*$, $(y - x^*)^t F(x^*) \le 0$ implies that $y$ lies in the optimal face $T^*$, i.e., the minimal face of $\Phi$ containing the set $S$ of all solutions to VIP.

The above stability condition, especially useful when $S$ is a singleton, ensures that $T^*$ is stable under slight perturbations of the cost function $F$. It is implied by the generalization to VIP of the usual strict complementarity condition

(6)    $(Bx^*)_i = b_i \;\Rightarrow\; \lambda_i^* > 0$,

where $\lambda^*$ is an optimal dual vector corresponding to $x^*$ in the complementarity formulation (4). If $F$ is strongly monotone, then geometric stability implies the strong regularity condition of Robinson [12]. Also, under geometric stability there must exist at least one solution of VIP satisfying the strict complementarity condition (6); however, it need not be unique, and there might exist optimal primal-dual couples that are not strictly complementary. Figure 1 provides examples where geometric stability holds while strict complementarity is not satisfied. In the first case, the failure of strict complementarity is caused by a redundant constraint; in the second case, it is due to the linear dependence of the constraints' gradients at $x^*$.

2. Newton's algorithm. Since Newton's method is central to our local convergence analysis, we recall its definition and main properties. Applied to VIP, Newton's method generates a sequence of iterates $\{x^k\}$ where $x^0$ is any vector in $\Phi$ and $x^{k+1}$ ($k \ge 0$) is a solution to the VIP obtained by replacing $F$ by its first-order Taylor expansion around $x^k$, i.e.,

(7)    $(x^{k+1} - y)^t \big( F(x^k) + F'(x^k)(x^{k+1} - x^k) \big) \le 0$    for all $y \in \Phi$.

The linearized problem will be denoted LVIP($x^k$) and its (nonempty) set of solutions NEW($x^k$).
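Because $\Phi$ is polyhedral, evaluating the gap function of Definition 2 amounts to a single linear program: $g(x) = x^t F(x) - \min_{By \le b} y^t F(x)$. A minimal sketch with scipy (our illustration; the paper's own implementation is not shown):

```python
import numpy as np
from scipy.optimize import linprog

def gap(x, F, B, b):
    """Gap function of Definition 2: g(x) = max_{y: By <= b} (x - y)^t F(x).
    The polyhedron {y : By <= b} is assumed nonempty and bounded."""
    c = F(x)
    res = linprog(c, A_ub=B, b_ub=b, bounds=(None, None), method="highs")
    return float(x @ c - res.fun)        # x^t F(x) - min_y y^t F(x)
```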

FIG. 1. Geometric stability does not imply strict complementarity. (a) Redundant constraint. (b) Dependent constraints' gradients.

The gap function associated with LVIP($x^k$), the linearized gap function, will be denoted $Lg(x^k, x)$; its mathematical expression is

(8)    $Lg(x^k, x) = \max_{y \in \Phi}\, (x - y)^t \big( F(x^k) + F'(x^k)(x - x^k) \big)$.

In a similar fashion we define the linearized dual gap function $L\tilde g(x^k, x)$:

(9)    $L\tilde g(x^k, x) = \max_{y \in \Phi}\, (x - y)^t \big( F(x^k) + F'(x^k)(y - x^k) \big)$.

When $F$ is strongly monotone and its Jacobian $F'$ is Lipschitzian, it can be shown that Newton's method is locally quadratically convergent. We quote Pang and Chan's [11] version of this result, also obtained by Josephy [5].

THEOREM 1. If the matrix $F'(x^*)$ is positive definite and the Jacobian $F'$ is Lipschitz continuous at $x^*$, then there exists a neighborhood $N$ of $x^*$ such that if $x^k \in N$ then the sequence $\{x^k\}$ is well defined and converges quadratically to $x^*$, i.e., there exists a constant $c$ such that

(10)    $\|x^{k+1} - x^*\| \le c\,\|x^k - x^*\|^2$    for all $k$ such that $x^k \in N$,

where $\|\cdot\|$ denotes the Euclidean norm in $R^n$.

The next result shows that Newton's algorithm has the capability of identifying $T^*$. Actually we will prove this result for a broad class of approximation algorithms where, at each iteration, $x^{k+1}$ is defined as a solution to a VI in which $F(x)$ is replaced by a function $G(x, x^k)$, parameterized by $x^k$, such that

(11)    (i) $G(x, y)$ is strictly monotone in $x$;
(12)    (ii) $G(x, y)$ is continuous as a function of $(x, y)$;
(13)    (iii) $G(x, x) = F(x)$.

Property (i) above ensures that $x^{k+1}$ is unambiguously defined. Property (iii) ensures that if $x^{k+1} = x^k$ then $x^k$ is the solution to the original VIP. In many practical situations, $G$ is chosen as a strongly monotone function with symmetric Jacobian. Popular choices for $G$ are:

$G_i(x, y) = F_i(y_1, \dots, y_{i-1}, x_i, y_{i+1}, \dots, y_n)$, $i = 1, \dots, n$    (Jacobi iteration);
$G_i(x, y) = F_i(x_1, \dots, x_i, y_{i+1}, \dots, y_n)$, $i = 1, \dots, n$    (Gauss-Seidel iteration);
$G(x, y) = F(y) + F'(y)(x - y)$    (Newton's method);
$G(x, y) = Ax + \rho\,[F(y) - Ay]$, where $\rho > 0$ and $A$ is a symmetric positive definite matrix    (projection method).

Other choices for $G$ may be found in Pang and Chan [11] and Marcotte [8].
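For the projection choice with $A = I$, the subproblem solution is the Euclidean projection $x^{k+1} = \mathrm{Proj}_\Phi(x^k - \rho F(x^k))$. The sketch below (ours; the stepsize, tolerance, and the restriction to the unit simplex, the feasible set used in Section 5, are assumptions) implements this iteration.

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection onto the unit simplex {x : x >= 0, sum x = 1},
    by the standard sort-and-threshold rule."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u + (1.0 - css) / np.arange(1, v.size + 1) > 0)[0][-1]
    theta = (1.0 - css[k]) / (k + 1)
    return np.maximum(v + theta, 0.0)

def projection_method(F, x, rho=0.1, tol=1e-10, itmax=10000):
    """Iterate x <- Proj_Phi(x - rho F(x)); this is the choice
    G(x, y) = Ax + rho [F(y) - Ay] with A = I. It contracts for rho small
    enough when F is strongly monotone and Lipschitz (cf. Dafermos [4])."""
    for _ in range(itmax):
        x_new = proj_simplex(x - rho * F(x))
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x
```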

PROPOSITION 1. Assume that $F$ is monotone and that geometric stability holds for VIP. Let $x^{k+1}$ be a solution to the VI

$(x^{k+1} - y)^t G(x^{k+1}, x^k) \le 0$    for all $y \in \Phi$,

where $G$ satisfies (11), (12), (13). Then for each optimal solution $x^*$ of VIP there exists a neighborhood $V$ of $x^*$ such that $x^k \in V$ implies $x^{k+1} \in T^*$.

Proof. Assume that the result does not hold. Then there exist an extreme point $u$ of $\Phi$ not lying in $T^*$ and a subsequence $\{x^k\}_{k \in I}$ converging to some $x^*$ such that $(x^{k+1} - u)^t G(x^{k+1}, x^k) = 0$ for all $k \in I$. Taking the limit as $k \to \infty$ ($k \in I$) we obtain

$(u - x^*)^t F(x^*) = (u - x^*)^t G(x^*, x^*) \le 0$,

implying, by geometric stability, that $u \in T^*$, a contradiction.

3. A linear approximation algorithm. In this section we present a model algorithm for solving VIP, based on its complementarity formulation (4), that proceeds by successive linear approximations of both the objective and the nonstructural, usually nonlinear, constraints. Throughout this section the function $F$ will be assumed monotone with Lipschitz continuous Jacobian $F'$. Any solution to (4) is clearly a global minimizer of the following (usually) nonconvex, nonlinearly constrained mathematical program:

(14)    $\min_{x, \lambda}\; h(x, \lambda) \overset{\text{def}}{=} \lambda^t (b - Bx) = x^t F(x) + b^t \lambda$
        subject to $F(x) + B^t\lambda = 0$, $Bx \le b$, $\lambda \ge 0$

(the second expression for $h$ is valid on the constraint set, where $B^t\lambda = -F(x)$). The following lemma relates the objective in (14) to the gap function.

LEMMA 1. We have

(15)    $g(x) = \min_\lambda h(x, \lambda)$    subject to $F(x) + B^t\lambda = 0$, $\lambda \ge 0$.

Proof. We have

$g(x) = \max_{y \in \Phi}\, (x - y)^t F(x) = x^t F(x) - \min_{By \le b}\, y^t F(x) = x^t F(x) - \max_{B^t\mu = -F(x),\, \mu \ge 0} (-b^t\mu)$

by linear programming duality theory. Hence

$g(x) = x^t F(x) + \min\,\{\, b^t\lambda : F(x) + B^t\lambda = 0,\ \lambda \ge 0 \,\}$,

and the result follows upon replacing $F(x)$ by the equivalent term $-B^t\lambda$: indeed, $x^t F(x) + b^t\lambda = -x^t B^t\lambda + b^t\lambda = \lambda^t(b - Bx) = h(x, \lambda)$.
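Lemma 1 is easy to confirm numerically: the left-hand side is the linear program of Definition 2, the right-hand side the dual linear program in $\lambda$. A quick check (ours; the square feasible region and the affine map are arbitrary test data):

```python
import numpy as np
from scipy.optimize import linprog

def gap_primal(x, Fx, B, b):
    # g(x) = x^t F(x) - min { y^t F(x) : By <= b }
    res = linprog(Fx, A_ub=B, b_ub=b, bounds=(None, None), method="highs")
    return float(x @ Fx - res.fun)

def gap_dual(x, Fx, B, b):
    # Lemma 1: g(x) = min { lam^t (b - Bx) : F(x) + B^t lam = 0, lam >= 0 }
    res = linprog(b - B @ x, A_eq=B.T, b_eq=-Fx, bounds=(0, None), method="highs")
    return float(res.fun)

B = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])   # the unit square
b = np.array([1., 1., 0., 0.])
x = np.array([0.3, 0.7])
Fx = np.array([[2., 1.], [1., 3.]]) @ x + np.array([-1., -2.])
print(gap_primal(x, Fx, B, b), gap_dual(x, Fx, B, b))      # the two values agree
```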

The next lemma, basic to our global convergence analysis, states that any stationary point of the mathematical program (14) is actually an equilibrium solution to VIP; it justifies the use of an algorithm based on identifying points satisfying the first-order conditions of (14). The proof does not rely on any sort of constraint qualification for the nonlinearly constrained problem (14).

LEMMA 2. Let $(\bar x, \bar\lambda)$ be a vector satisfying the first-order necessary optimality conditions for (14). Then $\bar x$ is a solution to VIP.

Proof. It suffices to show that $h(\bar x, \bar\lambda) = 0$. Assume that $h(\bar x, \bar\lambda) > 0$. Without loss of generality we may also assume that $h(\bar x, \bar\lambda) = g(\bar x)$; otherwise $\bar\lambda$ would not be optimal for the linear program

(16)    $\min_\lambda\; h(\bar x, \lambda)$    subject to $F(\bar x) + B^t\lambda = 0$, $\lambda \ge 0$,

and an optimal $\lambda$-solution to (16) would constitute, together with $\bar x$, an obvious descent direction for $h$ at $(\bar x, \bar\lambda)$. Consider the linearized problem LVIP($\bar x$) with its gap function $Lg(\bar x, \cdot)$ and complementarity formulation

(17)    $\min_{x, \lambda}\; \bar h(x, \lambda) \overset{\text{def}}{=} x^t\big[F(\bar x) + F'(\bar x)(x - \bar x)\big] + b^t\lambda$
        subject to $F(\bar x) + F'(\bar x)(x - \bar x) + B^t\lambda = 0$, $Bx \le b$, $\lambda \ge 0$.

Problem (17) is a positive semidefinite quadratic program whose optimal solution's primal vector corresponds to a (not necessarily unique) Newton direction. Consider a Frank-Wolfe direction $d = (\tilde x - \bar x, \tilde\lambda - \bar\lambda)$ for (17) at the point $(\bar x, \bar\lambda)$. Direction $d$ is a feasible descent direction for the linearized gap function $Lg$ at $\bar x$. Since $\nabla \bar h(\bar x, \bar\lambda)$ is identical with $\nabla h(\bar x, \bar\lambda)$, and so are the directional derivatives of $Lg$ and $g$, it follows that $d$ is also a feasible descent direction for $g$ at $\bar x$, contradicting the stationarity of $(\bar x, \bar\lambda)$.

We are now in a position to give a precise statement of our algorithm.

ALGORITHM N.
Initialization. Let $x^1$ be any vector in $\Phi$, let $\lambda^1 \in \arg\min\,\{\, b^t\lambda : F(x^1) + B^t\lambda = 0,\ \lambda \ge 0 \,\}$, and set $k \leftarrow 1$.

while convergence criterion not met do

1) Find a descent direction $d$.

(18)    Let $(d_x(x^k), d_\lambda(x^k))$ be an extremal solution to the linear program
        $\min_{x, \lambda}\; x^t\big(F(x^k) + F'^t(x^k)\,x^k\big) + b^t\lambda$
        subject to $F(x^k) + F'(x^k)(x - x^k) + B^t\lambda = 0$, $Bx \le b$, $\lambda \ge 0$,
        and set $d \leftarrow (d_x(x^k) - x^k,\ d_\lambda(x^k) - \lambda^k)$.

2) Perform an arc search on the gap function.

(19)    if $g(d_x(x^k)) \le \frac{1}{2} g(x^k)$ then $x^{k+1} \leftarrow d_x(x^k)$
        else $x^{k+1} \leftarrow x^k + \bar\theta\,(d_x(x^k) - x^k)$, where $\bar\theta \in \arg\min_{\theta \in [0,1]} g\big[x^k + \theta(d_x(x^k) - x^k)\big]$.

3) Update. $\lambda^{k+1} \in \arg\min\,\{\, b^t\lambda : F(x^{k+1}) + B^t\lambda = 0,\ \lambda \ge 0 \,\}$; $k \leftarrow k + 1$.

endwhile.

Some comments are in order.

(1) At step 2) of Algorithm N, the minimization, with respect to the primal vector $x$, of the nondifferentiable objective $g$ can be seen as a search along an arc in the space of primal-dual variables $(x, \lambda)$. Since dual vectors $\lambda$ have to be computed repeatedly, this operation can be carried out efficiently using the reoptimization techniques of linear programming.

(2) It is not required, or even advisable, that the arc search be carried out exactly. For instance, the Armijo-Goldstein stepsize rule, or any rule guaranteeing a "sufficient" decrease of the objective along the search direction, could be implemented.

(3) For affine functions $F(x) = Ax + a$, Algorithm N reduces to the standard Frank-Wolfe procedure for solving quadratic programming problems, as the nonstructural constraints then become linear.
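The loop is compact enough to sketch end to end. The code below is our reconstruction, not the authors' implementation: variable names, the crude discretized arc search, and the use of a generic LP solver are assumptions, and a serious implementation would reoptimize the LPs as noted in comment (1). It solves (18) in the stacked variables $(x, \lambda)$ and applies the stepsize rule (19).

```python
import numpy as np
from scipy.optimize import linprog

def _gap(x, F, B, b):
    # g(x) = x^t F(x) - min_{By <= b} y^t F(x)   (Definition 2)
    res = linprog(F(x), A_ub=B, b_ub=b, bounds=(None, None), method="highs")
    return float(x @ F(x) - res.fun)

def algorithm_N(F, Fjac, B, b, x, iters=50, tol=1e-10):
    """Sketch of Algorithm N: direction finding via the LP (18) in (x, lam),
    then the halving test (19) followed by a discretized arc search on g."""
    m, n = B.shape
    for _ in range(iters):
        g = _gap(x, F, B, b)
        if g < tol:
            break
        Fx, J = F(x), Fjac(x)
        c = np.concatenate([Fx + J.T @ x, b])        # objective of (18)
        A_eq = np.hstack([J, B.T])                   # F(x)+F'(x)(y-x)+B^t lam = 0
        b_eq = J @ x - Fx
        A_ub = np.hstack([B, np.zeros((m, m))])      # structural constraints By <= b
        bounds = [(None, None)] * n + [(0, None)] * m
        res = linprog(c, A_ub=A_ub, b_ub=b, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        dx = res.x[:n]
        if _gap(dx, F, B, b) <= 0.5 * g:             # halving test (19)
            x = dx
        else:                                        # arc search on the gap
            x = min((x + t * (dx - x) for t in np.linspace(0, 1, 21)),
                    key=lambda z: _gap(z, F, B, b))
    return x
```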

4. Convergence analysis. We first state and prove a global convergence result for Algorithm N.

PROPOSITION 2. Any point of accumulation of a sequence generated by Algorithm N is an equilibrium solution.

Proof. If $g(d_x(x^k)) \le \frac{1}{2} g(x^k)$ infinitely often at (19), then $\lim_{k \to \infty} g(x^k) = 0$. Otherwise the linesearch in (19) is asymptotically always performed and, to prove global convergence, we check the conditions behind Zangwill's global convergence theorem [15], namely:
(i) all points generated by the algorithm lie in a compact set;
(ii) the algorithmic map is closed outside the set of solution points $S$;
(iii) at each iteration, strict decrease of the objective function occurs.

(i) Since $\Phi$ is compact by assumption, it is sufficient to show that the sequence $\{d_\lambda(x^k)\}$ is bounded. By definition of the sequence $\{d_\lambda(x^k)\}$ we have

(20)    $d_\lambda(x^k) \in \arg\min\; b^t\lambda$    subject to $B^t\lambda = -H(x^k)$, $\lambda \ge 0$,

where $H(x^k) \overset{\text{def}}{=} F(x^k) + F'(x^k)(d_x(x^k) - x^k) \in R^n$. First observe that the linear program

(21)    $\min\; b^t\lambda$    subject to $B^t\lambda = -H(x^k)$, $\lambda \ge 0$

is the dual of the linear program

(22)    $\max_{y \in \Phi}\; -y^t H(x^k)$,

which is feasible and bounded; hence, by linear programming duality, (21) is also feasible and bounded, i.e., (21) possesses at least one optimal basic solution. Let $\{N_e\}_{e=1,\dots,p}$ denote the set of full-rank square submatrices (bases) of $B^t$. Since $d_\lambda(x^k)$ is extremal, we have $d_\lambda(x^k) = -N_e^{-1} H(x^k)$ on its basic components, for some $e \in \{1, \dots, p\}$. From the continuity of $F$ and $F'$ we deduce that $H(x^k)$ must lie in some compact set $K$ independent of $x^k$. Therefore $d_\lambda(x^k) \in C \overset{\text{def}}{=} \bigcup_{e=1}^{p} -N_e^{-1} K$, which is bounded. The same continuity argument is then used to show boundedness of the sequence $\{\lambda^{k+1}\}$.

(ii) The closedness of the algorithmic map follows directly from the continuity of $F$ and the closedness of the linesearch strategy used.

(iii) We must prove that $h(x^{k+1}, \lambda^{k+1}) < h(x^k, \lambda^k)$ whenever the latter term is positive. This is a direct consequence of Lemma 2.

PROPOSITION 3. If $F$ is monotone on $\Phi$ and affine, then Algorithm N converges in a finite number of iterations.

Proof. Replacing $F(x)$ by $Ax + a$ in (14) yields a quadratic programming problem. Its solution set is a face $\bar T$ of the polyhedron $\{(x, \lambda) : Ax + a + B^t\lambda = 0,\ Bx \le b,\ \lambda \ge 0\}$. For some iterate $k$ we must have that $(d_x(x^k), d_\lambda(x^k))$ lies in $\bar T$ (otherwise the iterates would always be bounded away from $\bar T$, contradicting global convergence of the method). When $(d_x(x^k), d_\lambda(x^k)) \in \bar T$ we have $g(d_x(x^k)) = 0$, the test in (19) is passed, and $(x^{k+1}, \lambda^{k+1}) = (d_x(x^k), d_\lambda(x^k))$ solves VIP.

Remark. The preceding result is also valid under the assumption that $T^*$ is a singleton ($F$ monotone but not necessarily affine). The proof is similar.

To obtain a rate-of-convergence result for Algorithm N we assume, until explicitly stated otherwise, that the function $F$ is strongly monotone in a neighborhood of the solution $x^*$, with strong monotonicity coefficient $\kappa$, and that the geometric stability condition is satisfied at $x^*$. This implies that the entire sequence $\{x^k\}$ converges to the unique solution $x^*$. Under these assumptions we will show that Algorithm N is locally equivalent to Newton's method, thus implying quadratic convergence and implicit identification of the set of active constraints at $x^*$. We first show that the direction obtained from Algorithm N satisfies $d_x(x^k) \in \mathrm{NEW}(x^k)$ if $x^k$ is sufficiently close to $x^*$. The following lemmas will be used in the proof.

LEMMA 3. The optimal dual vector $y(x^k)$ associated with the nonstructural constraint $F(x^k) + F'(x^k)(x - x^k) + B^t\lambda = 0$ of (18) satisfies $\lim_{k \to \infty} y(x^k) = x^*$.

Proof. Write the Lagrangian dual of the linear program (18):

$\max_y\; \min_{x \in \Phi,\, \lambda \ge 0}\; x^t\big[F(x^k) + F'^t(x^k)x^k\big] + b^t\lambda - y^t\big[F(x^k) + F'(x^k)(x - x^k) + B^t\lambda\big]$.

Then observe that the inner minimum has value $-\infty$ unless $By \le b$, in which case the minimum over nonnegative $\lambda$ is achieved when $\lambda$ is zero, yielding

$\max_{y \in \Phi}\; \min_{x \in \Phi}\; x^t\big[F(x^k) + F'^t(x^k)x^k\big] - y^t\big[F(x^k) + F'(x^k)(x - x^k)\big]$.

This expression is equivalent, modulo a constant term, to

(23)    $\max_{y \in \Phi}\; \min_{x \in \Phi}\; (x - y)^t\big[F(x^k) + F'(x^k)(x - x^k)\big] - (x - x^k)^t F'(x^k)(x - x^k)$,

and constitutes a quadratic perturbation of the linearized dual gap function at $x^k$. Since $y(x^k)$ is dual-optimal for (18), it must correspond to the $y$-part of a solution to (23). If $y(x^k)$ does not converge to $x^*$, then there exists a subsequence $\{x^k\}_{k \in I}$ such that $\lim_{k \to \infty,\, k \in I} y(x^k) = \bar y \ne x^*$. Passing to the limit in (23) we obtain, after setting $x$ to $\bar y$,

(24)    $\lim_{k \to \infty,\, k \in I}\; \min_{x \in \Phi}\; (x - y(x^k))^t\big[F(x^k) + F'(x^k)(x - x^k)\big] - (x - x^k)^t F'(x^k)(x - x^k)$
        $\le (\bar y - \bar y)^t\big[F(x^*) + F'(x^*)(\bar y - x^*)\big] - (\bar y - x^*)^t F'(x^*)(\bar y - x^*) \le -\kappa\,\|\bar y - x^*\|^2 < 0$

by strong monotonicity. But this contradicts the optimality of the sequence $\{y(x^k)\}_{k \in I}$, since, taking $y = x^*$, we obtain

$\lim_{k \to \infty}\; \min_{x \in \Phi}\; (x - x^*)^t\big[F(x^k) + F'(x^k)(x - x^k)\big] - (x - x^k)^t F'(x^k)(x - x^k) = \min_{x \in \Phi}\; (x - x^*)^t F(x^*) = 0$

by definition of $x^*$.

LEMMA 4. There exists an index $K$ such that $k \ge K$ implies $d_x(x^k) \in T^*$.

Proof. From (23) we get

$d_x(x^k) \in D(x^k) \overset{\text{def}}{=} \arg\min_{x \in \Phi}\; (x - y(x^k))^t\big[F(x^k) + F'(x^k)(x - x^k)\big] - (x - x^k)^t F'(x^k)(x - x^k)$.

Since $y(x^k) \to x^*$ as $k \to \infty$ (Lemma 3), this problem represents, for $x^k$ close to $x^*$, a small quadratic perturbation of the linear program $\min_{x \in \Phi} x^t F(x^*)$. It follows from the geometric stability assumption that $d_x(x^k) \in T^*$.

COROLLARY. $d_x(x^*) \in T^*$.

Proof. Since $F$ is continuous, the point-to-set mapping $\bar x \mapsto \{d_x(\bar x)\}$ is upper semicontinuous. Hence $d_x(x^*) \in \{d_x(\lim_{k \to \infty} x^k)\} = \lim_{k \to \infty} \{d_x(x^k)\} \subseteq T^*$.

LEMMA 5. $\lim_{k \to \infty} d_x(x^k) = x^*$.

Proof. From the proof of Lemma 2 we have $g'(x^k;\, d_x(x^k) - x^k) < 0$. Passing to the limit and using upper semicontinuity, we get $g'(x^*;\, d_x(x^*) - x^*) \le 0$. But, by Danskin's rule of differentiation of max-functions (see [18]), we have

$g'(x^*;\, d_x(x^*) - x^*) = \max_{y \in T^*}\; [d_x(x^*) - x^*]^t \big[F(x^*) - F'^t(x^*)(y - x^*)\big]$.

Assume that $T^*$ is not the singleton $\{x^*\}$ (otherwise the result follows trivially from Lemma 4), and let $\varepsilon$ be a positive number such that $\bar y \overset{\text{def}}{=} x^* - \varepsilon(d_x(x^*) - x^*) \in T^*$ (see Fig. 2).

Then we have

$0 \ge g'(x^*;\, d_x(x^*) - x^*) \ge [d_x(x^*) - x^*]^t F(x^*) + \varepsilon\,[d_x(x^*) - x^*]^t F'(x^*)\,[d_x(x^*) - x^*] \ge \varepsilon\kappa\,\|d_x(x^*) - x^*\|^2$,

implying that $d_x(x^*) = x^*$.

FIG. 2.

LEMMA 6. There exists an index $K$ such that for $k \ge K$, $d_x(x^k) \in \mathrm{NEW}(x^k)$.

Proof. From (18), $d_\lambda(x^k)$ is an optimal dual vector for the linear program

(25)    $\min_{z \in \Phi}\; z^t\big[F(x^k) + F'(x^k)(d_x(x^k) - x^k)\big]$.

For $k$ large, $d_x(x^k)$ is close to $x^*$ (Lemma 5) and problem (25) is an arbitrarily small perturbation of the linear program $\min_{z \in \Phi} z^t F(x^*)$, whose set of optimal solutions is $T^*$, by definition. Therefore the optimal solutions to (25) lie in $T^*$, by geometric stability. From the complementary slackness theorem of linear programming we can write

$d_\lambda(x^k)^t\big(B\,d_x(x^k) - b\big) = 0$.

We conclude that the couple $(d_x(x^k), d_\lambda(x^k))$ is optimal for the quadratic program (17). Since the solution of (17) is unique in $x$ and equal by definition to the Newton iterate, we conclude that $d_x(x^k) \in \mathrm{NEW}(x^k)$.

PROPOSITION 4. There exist positive constants $\alpha$ and $\beta$ such that

$\alpha\,\|x - x^*\| \le g(x) \le \beta\,\|x - x^*\|$    for all $x \in \Phi$.

Proof. It suffices to prove the result in a neighborhood of $x^*$.

(i) Proof that $g(x) \le \beta\,\|x - x^*\|$. We have

(26)    $g(x) = \max_{y \in \Phi}\, (x - y)^t F(x)$
        $= (x - x^*)^t F(x) + \max_{y \in \Phi}\, (x^* - y)^t F(x)$
        $\le \|x - x^*\| \cdot \|F(x)\| + \max_{y \in \Phi}\, (x^* - y)^t F(x^*) + \max_{y \in \Phi}\, (x^* - y)^t \big(F(x) - F(x^*)\big)$
        $\le \|x - x^*\| \cdot \|F(x)\| + D\,M_2\,\|x - x^*\|$
        $\le (M_1 + M_2 D)\,\|x - x^*\|$,

where $M_1 \overset{\text{def}}{=} \sup_{x \in \Phi} \|F(x)\|$, $M_2 \overset{\text{def}}{=} \sup_{\xi, \eta \in \Phi,\, \xi \ne \eta} \|F(\xi) - F(\eta)\| / \|\xi - \eta\|$, and $D$ is the diameter of $\Phi$; note that $\max_{y \in \Phi} (x^* - y)^t F(x^*) = 0$ by (1). Then set $\beta = M_1 + M_2 D$.

(ii) Proof that

(27)    $g(x) \ge \alpha\,\|x - x^*\|$.

We consider three mutually exclusive cases.

Case 1. $T^* = \{x^*\}$ (Fig. 3). For $x$ sufficiently close to $x^*$ we have

$g(x) \ge (x - x^*)^t F(x) = (x - x^*)^t F(x^*) + (x - x^*)^t \big(F(x) - F(x^*)\big)$
$\ge \|x - x^*\| \cdot \|F(x^*)\|\, \cos(x - x^*, F(x^*)) + \kappa\,\|x - x^*\|^2$.

FIG. 3. Case 1.

Since $F(x^*)$ is orthogonal to no feasible direction issued from $x^*$ (by geometric stability), $\cos(x - x^*, F(x^*))$ is positive and bounded away from zero. Hence (27) holds with

$\alpha = c \overset{\text{def}}{=} \inf_{x \in \Phi,\, x \ne x^*} \{\cos(x - x^*, F(x^*))\} \cdot \|F(x^*)\| > 0$.

Case 2. $T^* \ne \{x^*\}$ and $x \in T^*$ (Fig. 4). Let $\rho$ be a positive number such that the mapping $x \mapsto \mathrm{Proj}_\Phi\big(x - (1/\rho)F(x)\big)$, defined for $x \in \Phi$, is contracting, where $\mathrm{Proj}_\Phi$ denotes the projection operator onto $\Phi$ in the usual Euclidean norm. The existence of such a number $\rho$ is a consequence of, say, Example 3.1 of Dafermos [4]. For $x$ sufficiently close to $x^*$, the point $p \overset{\text{def}}{=} \mathrm{Proj}_\Phi\big(x - (1/\rho)F(x)\big)$ lies in $T^*$ (see Proposition 1).

FIG. 4. Case 2.

Let $\theta \in [0, 1]$ be the contraction constant, dependent on $\rho$; we have $\|p - x^*\| \le \theta\,\|x - x^*\|$. Also, by the triangle inequality,

(28)    $\|p - x\| \ge \|x - x^*\| - \|x^* - p\| \ge (1 - \theta)\,\|x - x^*\|$,

and, by construction of $p$,

(29)    $(x - p)^t F(x) \ge \rho\,\|x - p\|^2$.

Define

(30)    $\bar\phi = \max\,\{\phi : x + \phi(p - x) \in T^*\}$

($\bar\phi$ must be positive since $x$ lies in the relative interior $\mathrm{ri}(T^*)$) and $\bar y = x + \bar\phi(p - x)$. We have

$(x - \bar y)^t F(x) = \bar\phi\,(x - p)^t F(x) \ge \bar\phi\rho\,\|x - p\|^2$    (by (29))
$\ge \rho\,\bar\phi\,\|x - p\|\,(1 - \theta)\,\|x - x^*\|$    (by (28)).

Now $\bar\phi\,\|x - p\| = \|x - \bar y\|$ must be bounded from below by some positive number $s$, since $x$ lies in $\mathrm{ri}(T^*)$ and $\bar y$ is on the boundary of $T^*$. It follows that

$g(x) \ge (x - \bar y)^t F(x) \ge \rho(1 - \theta)\,s\,\|x - x^*\|$,

and the result holds with $\alpha = \rho s(1 - \theta)$.

Case 3. $x \notin T^*$ (consequently $T^* \ne \Phi$) (Fig. 5). Define $p = \mathrm{Proj}_{T^*}(x)$. First we will show that $\cos(x - p, F(x^*))$ is bounded below by some positive number $\gamma$. Define, for $x \notin T^*$, the function $x \mapsto \eta(x)$, where $\eta(x)$ is the intersection of the line going through the segment $[p, x]$ with the boundary of $\Phi$, in the direction $x - p$. Let $B_\varepsilon(x^*)$ be a ball of radius $\varepsilon$ about $x^*$, let $H = B_\varepsilon(x^*) \cap \Phi \setminus T^*$, and let $E$ be the closure of $\eta(H)$ (see Fig. 6). We have $E \cap T^* = \emptyset$ and

(31)    $\cos(x - p, F(x^*)) = \cos(\eta(x) - p, F(x^*)) \ge \min_{v \in E}\, \cos(v - p, F(x^*))$.

FIG. 5. Case 3.

FIG. 6.

But $\cos(v - p, F(x^*)) > 0$ for each $v \in E$ (recall $p \in T^*$ and $E \cap T^* = \emptyset$) by geometric stability. Hence $\cos(x - p, F(x^*)) \ge \gamma > 0$. We then write

$(x - p)^t F(x^*) \ge \gamma\,\|x - p\| \cdot \|F(x^*)\|$.

Thus

(32)    $(x - p)^t F(x) \ge \frac{\gamma}{2}\,\|x - p\| \cdot \|F(x^*)\|$

for $x$ sufficiently close to $x^*$. Now consider the following two subcases.

Case 3.1. $\|x - p\| \le \bar C\,\|p - x^*\|$, with $\bar C = \rho s(1 - \theta)/2\theta D M_2$. Define $\bar y$ as in Case 2 (see Fig. 4). Then

$g(x) \ge (x - \bar y)^t F(x) = (x - p)^t F(x) + (p - \bar y)^t F(x)$
$\ge 0 + (p - \bar y)^t F(p) + (p - \bar y)^t \big(F(x) - F(p)\big)$
$\ge \rho s(1 - \theta)\,\|x - x^*\| - \bar C\,\|p - x^*\|\,D M_2$    (since $p \in T^*$; see Case 2)
$\ge \big(\rho s(1 - \theta) - \theta D M_2 \bar C\big)\,\|x - x^*\|$
$\ge \frac{\rho s(1 - \theta)}{2}\,\|x - x^*\|$.

Set $\alpha = \rho s(1 - \theta)/2$.

Case 3.2. $\|x - p\| \ge \bar C\,\|p - x^*\|$. We have

(33)    $\|x - x^*\| \le \|x - p\| + \|p - x^*\| \le \big(1 + 1/\bar C\big)\,\|x - p\|$.

We obtain

$g(x) \ge (x - p)^t F(x) \ge \frac{\gamma}{2}\,\|x - p\| \cdot \|F(x^*)\|$    (by (32))
$\ge \frac{\gamma}{2}\,\big(1 + 1/\bar C\big)^{-1}\,\|x - x^*\| \cdot \|F(x^*)\|$    (by (33)),

and the result holds with $\alpha = \gamma\,\|F(x^*)\| / 2(1 + 1/\bar C)$, noting that $F(x^*) \ne 0$ in this case.

Remark 1. The above general proof does not require differentiability of the cost mapping $F$. If $F$ is differentiable, the proof of Proposition 4 can be somewhat streamlined (see Dussault and Marcotte [21]).

Remark 2. Proposition 4 strengthens a result of Pang [19], who derives an estimate of the form $\|x - x^*\| \le \omega\sqrt{g(x)}$ for some positive constant $\omega$.

PROPOSITION 5. Let $\{x^k\}$ be a sequence generated by Algorithm N. Then there exists an index $K$ such that for $k \ge K$, $x^{k+1} = \mathrm{NEW}(x^k)$, the Newton iterate.

Proof. We must prove that $g(\mathrm{NEW}(x^k)) \le \frac{1}{2} g(x^k)$ for $k \ge K$, in which case Algorithm N will set $x^{k+1}$ to $d_x(x^k)$, which is equal to $\mathrm{NEW}(x^k)$ by Lemma 6:

$g(\mathrm{NEW}(x^k)) \le \beta\,\|\mathrm{NEW}(x^k) - x^*\|$    (by Proposition 4)
$\le \beta c\,\|x^k - x^*\|^2$    (from (10))
$\le \frac{\beta c}{\alpha}\,\|x^k - x^*\|\,g(x^k)$    (by Proposition 4)
$\le \frac{1}{2}\,g(x^k)$

as soon as $\|x^k - x^*\| \le \alpha/2\beta c$.

The preceding results can be summarized in a theorem.

THEOREM 2. Consider a VIP with monotone cost function $F$, and let $\{x^k\}$ be a sequence generated by Algorithm N. Then:
(i) $g(x^{k+1}) < g(x^k)$ if $g(x^k) \ne 0$;
(ii) $\lim_{k \to \infty} g(x^k) = 0$.
If, in addition, VIP is geometrically stable, then:
(iii) if $F$ is affine or $T^*$ is a singleton, there exists an index $K$ such that $g(x^k) = 0$ for $k \ge K$ (finite convergence);
(iv) if $F$ is strongly monotone, the sequence $\{x^k\}$ converges quadratically to the point $x^*$, and there exists an index $K$ such that $x^k \in T^*$ whenever $k \ge K$.
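Theorem 2(iv) is straightforward to probe numerically: if the iterates of the Algorithm N sketch above are recorded, the ratios $\|x^{k+1} - x^*\| / \|x^k - x^*\|^2$ should stabilize near the constant $c$ of estimate (10) once Newton steps are accepted. A small diagnostic helper (ours; `iterates` is assumed to be a list collected during the run):

```python
import numpy as np

def convergence_report(iterates, x_star):
    """Print the error and the quadratic-convergence ratio e_{k+1}/e_k^2,
    which should approach a constant under Theorem 2(iv)."""
    errs = [np.linalg.norm(x - x_star) for x in iterates]
    for k in range(len(errs) - 1):
        ratio = errs[k + 1] / errs[k] ** 2 if errs[k] > 0 else float("nan")
        print(f"k={k:2d}  ||x^k - x*||={errs[k]:.3e}  e_(k+1)/e_k^2={ratio:.3e}")
```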

5. Numerical results. A working version of Algorithm N has been developed, using a standard linear programming code, and contrasted against Newton's method, with and without linesearch. The asymmetric linear complementarity subproblems arising in Newton's method have been solved by Lemke's complementary pivoting algorithm.

In the test problems, $\Phi$ has been taken as the unit simplex $\{x : \sum_{i=1}^n x_i = 1,\ x \ge 0\}$, and the mapping $F$ assumed the general form

$F(x) = (A - A^t)x + B^t B x + \gamma\,C(x) + b$,

where the entries of the matrices $A$ and $B$ are randomly generated uniform variates, $C(x)$ is a nonlinear diagonal mapping with components $C_i(x) = \arctan(x_i)$, and the constant vector $b$ is chosen such that the exact optimum is known a priori. The parameter $\gamma$ is used to vary the asymmetry and nonlinearity of the cost function. Sixteen five-dimensional and sixteen 15-dimensional problems have been generated, over a range of $\gamma$-values.

Newton's search direction differs from Algorithm N's direction in 18 of the 32 problems. In some instances (Figs. 7-10) Algorithm N yields a direction as good as or better than Newton's direction. For some other problems (Figs. 11-14) Newton's direction is slightly superior. In all cases, the difference in the number of iterations required to achieve a very low gap value is small.

Figures 7-14 plot the gap value against the iteration count $k$ for ALG.N, Newton, and Newton with linesearch:

FIG. 7. Dimension 15.
FIG. 8. Dimension 15.
FIG. 9. Dimension 5.
FIG. 10. Dimension 5.
FIG. 11. Dimension 15.
FIG. 12. Dimension 5.
FIG. 13. Dimension 15.
FIG. 14. Dimension 5.
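The test family above is easy to reproduce. In the sketch below (our reconstruction: the uniform sampling ranges and the calibration of $b$ are assumptions, since the paper does not spell them out), $b$ is chosen so that $F(x^*)$ is a constant vector at a prescribed interior point $x^*$ of the simplex, which makes $x^*$ an equilibrium by the optimality conditions of the linear program (3).

```python
import numpy as np

def make_test_problem(n, gamma, rng):
    """Random VIP on the unit simplex with known solution x*, in the spirit
    of Section 5: F(x) = (A - A^t) x + B^t B x + gamma * arctan(x) + b."""
    A = rng.uniform(-1.0, 1.0, (n, n))
    Bm = rng.uniform(-1.0, 1.0, (n, n))
    M = (A - A.T) + Bm.T @ Bm           # skew part + positive semidefinite part
    x_star = np.full(n, 1.0 / n)        # prescribed interior solution
    b = 1.0 - M @ x_star - gamma * np.arctan(x_star)   # forces F(x*) = (1,...,1)^t
    F = lambda x: M @ x + gamma * np.arctan(x) + b
    F_jac = lambda x: M + gamma * np.diag(1.0 / (1.0 + x ** 2))
    return F, F_jac, x_star

rng = np.random.default_rng(0)
F, F_jac, x_star = make_test_problem(5, gamma=1.0, rng=rng)
print(np.allclose(F(x_star), F(x_star)[0]))   # equal components: x* is optimal for (3)
```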

This preliminary testing shows some promise for the linearization algorithm. Its direction-finding subproblem involves a linear program, versus an asymmetric linear complementarity problem for Newton's method. The linear subproblem bears a close resemblance to the linear program that must be solved to evaluate the gap function, and as such could benefit from some fine tuning of the computer code. Moreover, it may well prove unnecessary to solve the subproblem exactly, yielding another area for further improvement. In contrast, solving linear complementarity problems yields a feasible solution only at termination, therefore making the implementation of an inexact strategy more difficult. Finally, let us mention that Marcotte and Guélat [20] have successfully implemented Algorithm N to solve large-scale network equilibrium problems where the mapping $F$, i.e., its Jacobian matrix, is highly asymmetric.

6. Conclusion. The main result of this paper has been to prove global and quadratic convergence of an algorithm for solving monotone variational inequalities. The algorithm operates by solving linear programs in the space of primal-dual variables. Computational experiments show that the algorithm is efficient for solving both small-scale and large-scale problems.

Acknowledgments. The authors are indebted to anonymous referees for relevant comments on an earlier version of this paper that led to numerous improvements.

REFERENCES

[1] A. AUSLENDER, Optimisation: Méthodes numériques, Masson, Paris, 1976.
[2] D. BERTSEKAS AND E. M. GAFNI, Projection methods for variational inequalities with application to the traffic assignment problem, Math. Programming Stud., 17 (1982).
[3] R. W. COTTLE AND G. B. DANTZIG, Positive (semi)definite programming, in Nonlinear Programming, J. Abadie, ed., North-Holland, Amsterdam.
[4] S. C. DAFERMOS, An iterative scheme for variational inequalities, Math. Programming, 26 (1983).
[5] N. H. JOSEPHY, Newton's method for generalized equations, Technical Report 1966, Mathematics Research Center, University of Wisconsin, Madison, WI.
[6] S. KAKUTANI, A generalization of Brouwer's fixed point theorem, Duke Math. J., 8 (1941).
[7] P. MARCOTTE, A new algorithm for solving variational inequalities, with application to the traffic assignment problem, Math. Programming, 33 (1985).
[8] ——, Algorithms for the network oligopoly problem, J. Oper. Res. Soc., 38 (1987).
[9] P. MARCOTTE AND J.-P. DUSSAULT, A note on a globally convergent Newton method for solving monotone variational inequalities, Oper. Res. Lett., 6 (1987).
[10] J. M. ORTEGA AND W. C. RHEINBOLDT, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[11] J. S. PANG AND D. CHAN, Iterative methods for variational and complementarity problems, Math. Programming, 24 (1982).
[12] S. M. ROBINSON, Generalized equations, in Mathematical Programming: The State of the Art, A. Bachem, M. Grötschel, and B. Korte, eds., Springer-Verlag, Berlin, New York, 1983.
[13] R. SAIGAL, Fixed point computing methods, in Operations Research Support Methodology, A. G. Holzman, ed., Marcel Dekker, New York.
[14] M. J. TODD, The Computation of Fixed Points and Applications, Springer-Verlag, Berlin, New York, 1976.
[15] W. I. ZANGWILL, Nonlinear Programming: A Unified Approach, Prentice-Hall, Englewood Cliffs, NJ, 1969.
[16] W. I. ZANGWILL AND C. B. GARCIA, Equilibrium programming: the path-following approach and dynamics, Math. Programming, 21 (1981).
[17] S. NGUYEN AND C. DUPUIS, An efficient method for computing traffic equilibria in networks with asymmetric transportation costs, Transportation Sci., 18 (1984).
[18] J. M. DANSKIN, The theory of max-min, with applications, SIAM J. Appl. Math., 14 (1966).
[19] J. S. PANG, A posteriori error bounds for the linearly-constrained variational inequality problem, Math. Oper. Res., 12 (1987).
[20] P. MARCOTTE AND J. GUÉLAT, Adaptation of a modified Newton method for solving the asymmetric traffic equilibrium problem, Transportation Sci., 22 (1988).
[21] J.-P. DUSSAULT AND P. MARCOTTE, Conditions de régularité géométrique pour les inéquations variationnelles, RAIRO Rech. Opér., 23 (1988).


HW1 solutions. 1. α Ef(x) β, where Ef(x) is the expected value of f(x), i.e., Ef(x) = n. i=1 p if(a i ). (The function f : R R is given.

HW1 solutions. 1. α Ef(x) β, where Ef(x) is the expected value of f(x), i.e., Ef(x) = n. i=1 p if(a i ). (The function f : R R is given. HW1 solutions Exercise 1 (Some sets of probability distributions.) Let x be a real-valued random variable with Prob(x = a i ) = p i, i = 1,..., n, where a 1 < a 2 < < a n. Of course p R n lies in the standard

More information

A Review of Linear Programming

A Review of Linear Programming A Review of Linear Programming Instructor: Farid Alizadeh IEOR 4600y Spring 2001 February 14, 2001 1 Overview In this note we review the basic properties of linear programming including the primal simplex

More information

CONVERGENCE PROPERTIES OF COMBINED RELAXATION METHODS

CONVERGENCE PROPERTIES OF COMBINED RELAXATION METHODS CONVERGENCE PROPERTIES OF COMBINED RELAXATION METHODS Igor V. Konnov Department of Applied Mathematics, Kazan University Kazan 420008, Russia Preprint, March 2002 ISBN 951-42-6687-0 AMS classification:

More information

Constrained Optimization Theory

Constrained Optimization Theory Constrained Optimization Theory Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Constrained Optimization Theory IMA, August

More information

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department

More information

Miscellaneous Nonlinear Programming Exercises

Miscellaneous Nonlinear Programming Exercises Miscellaneous Nonlinear Programming Exercises Henry Wolkowicz 2 08 21 University of Waterloo Department of Combinatorics & Optimization Waterloo, Ontario N2L 3G1, Canada Contents 1 Numerical Analysis Background

More information

Convex Analysis and Optimization Chapter 2 Solutions

Convex Analysis and Optimization Chapter 2 Solutions Convex Analysis and Optimization Chapter 2 Solutions Dimitri P. Bertsekas with Angelia Nedić and Asuman E. Ozdaglar Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

CONSTRAINED NONLINEAR PROGRAMMING

CONSTRAINED NONLINEAR PROGRAMMING 149 CONSTRAINED NONLINEAR PROGRAMMING We now turn to methods for general constrained nonlinear programming. These may be broadly classified into two categories: 1. TRANSFORMATION METHODS: In this approach

More information

A derivative-free nonmonotone line search and its application to the spectral residual method

A derivative-free nonmonotone line search and its application to the spectral residual method IMA Journal of Numerical Analysis (2009) 29, 814 825 doi:10.1093/imanum/drn019 Advance Access publication on November 14, 2008 A derivative-free nonmonotone line search and its application to the spectral

More information

A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions

A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions Angelia Nedić and Asuman Ozdaglar April 16, 2006 Abstract In this paper, we study a unifying framework

More information

Using exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities

Using exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities Using exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities Thiago A. de André Paulo J. S. Silva March 24, 2007 Abstract In this paper, we present

More information

A Note on Nonconvex Minimax Theorem with Separable Homogeneous Polynomials

A Note on Nonconvex Minimax Theorem with Separable Homogeneous Polynomials A Note on Nonconvex Minimax Theorem with Separable Homogeneous Polynomials G. Y. Li Communicated by Harold P. Benson Abstract The minimax theorem for a convex-concave bifunction is a fundamental theorem

More information

Discussion Paper Series

Discussion Paper Series Discussion Paper Series 2004 27 Department of Economics Royal Holloway College University of London Egham TW20 0EX 2004 Andrés Carvajal. Short sections of text, not to exceed two paragraphs, may be quoted

More information

A Continuation Method for the Solution of Monotone Variational Inequality Problems

A Continuation Method for the Solution of Monotone Variational Inequality Problems A Continuation Method for the Solution of Monotone Variational Inequality Problems Christian Kanzow Institute of Applied Mathematics University of Hamburg Bundesstrasse 55 D 20146 Hamburg Germany e-mail:

More information

A SHIFTED PRIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OPTIMIZATION

A SHIFTED PRIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OPTIMIZATION A SHIFTED RIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OTIMIZATION hilip E. Gill Vyacheslav Kungurtsev Daniel. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-18-1 February 1, 2018

More information

Lecture 5. Theorems of Alternatives and Self-Dual Embedding

Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 1 Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 2 A system of linear equations may not have a solution. It is well known that either Ax = c has a solution, or A T y = 0, c

More information

On Penalty and Gap Function Methods for Bilevel Equilibrium Problems

On Penalty and Gap Function Methods for Bilevel Equilibrium Problems On Penalty and Gap Function Methods for Bilevel Equilibrium Problems Bui Van Dinh 1 and Le Dung Muu 2 1 Faculty of Information Technology, Le Quy Don Technical University, Hanoi, Vietnam 2 Institute of

More information

8 Numerical methods for unconstrained problems

8 Numerical methods for unconstrained problems 8 Numerical methods for unconstrained problems Optimization is one of the important fields in numerical computation, beside solving differential equations and linear systems. We can see that these fields

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

A Unified Approach to Proximal Algorithms using Bregman Distance

A Unified Approach to Proximal Algorithms using Bregman Distance A Unified Approach to Proximal Algorithms using Bregman Distance Yi Zhou a,, Yingbin Liang a, Lixin Shen b a Department of Electrical Engineering and Computer Science, Syracuse University b Department

More information

Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems

Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems Mathematical and Computational Applications Article Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems Wenling Zhao *, Ruyu Wang and Hongxiang Zhang School of Science,

More information

Solving Dual Problems

Solving Dual Problems Lecture 20 Solving Dual Problems We consider a constrained problem where, in addition to the constraint set X, there are also inequality and linear equality constraints. Specifically the minimization problem

More information

arxiv: v1 [math.oc] 1 Jul 2016

arxiv: v1 [math.oc] 1 Jul 2016 Convergence Rate of Frank-Wolfe for Non-Convex Objectives Simon Lacoste-Julien INRIA - SIERRA team ENS, Paris June 8, 016 Abstract arxiv:1607.00345v1 [math.oc] 1 Jul 016 We give a simple proof that the

More information

Algorithms for Constrained Optimization

Algorithms for Constrained Optimization 1 / 42 Algorithms for Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University April 19, 2015 2 / 42 Outline 1. Convergence 2. Sequential quadratic

More information

Absolute Value Programming

Absolute Value Programming O. L. Mangasarian Absolute Value Programming Abstract. We investigate equations, inequalities and mathematical programs involving absolute values of variables such as the equation Ax + B x = b, where A

More information

Lagrange Multipliers with Optimal Sensitivity Properties in Constrained Optimization

Lagrange Multipliers with Optimal Sensitivity Properties in Constrained Optimization Lagrange Multipliers with Optimal Sensitivity Properties in Constrained Optimization Dimitri P. Bertsekasl Dept. of Electrical Engineering and Computer Science, M.I.T., Cambridge, Mass., 02139, USA. (dimitribhit.

More information

Convex Optimization and Modeling

Convex Optimization and Modeling Convex Optimization and Modeling Duality Theory and Optimality Conditions 5th lecture, 12.05.2010 Jun.-Prof. Matthias Hein Program of today/next lecture Lagrangian and duality: the Lagrangian the dual

More information

In English, this means that if we travel on a straight line between any two points in C, then we never leave C.

In English, this means that if we travel on a straight line between any two points in C, then we never leave C. Convex sets In this section, we will be introduced to some of the mathematical fundamentals of convex sets. In order to motivate some of the definitions, we will look at the closest point problem from

More information

Unconstrained optimization

Unconstrained optimization Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout

More information