ENLARGEMENT OF MONOTONE OPERATORS WITH APPLICATIONS TO VARIATIONAL INEQUALITIES
Regina S. Burachik*
Departamento de Matemática, Pontifícia Universidade Católica do Rio de Janeiro, Rua Marquês de São Vicente, 225, Rio de Janeiro, RJ, CEP , Brazil

Alfredo N. Iusem**, B. F. Svaiter***
Instituto de Matemática Pura e Aplicada, Estrada Dona Castorina, 110, Rio de Janeiro, RJ, CEP , Brazil

Abstract. Given a point-to-set operator T, we introduce the operator T^ε defined as T^ε(x) = {u : ⟨u − v, x − y⟩ ≥ −ε for all y ∈ R^n, v ∈ T(y)}. When T is maximal monotone, T^ε inherits most properties of the ε-subdifferential, e.g. it is bounded on bounded sets, T^ε(x) contains the image through T of a sufficiently small ball around x, etc. We prove these and other relevant properties of T^ε, and apply it to generate an inexact proximal point method with generalized distances for variational inequalities, whose subproblems consist of solving problems of the form 0 ∈ H^ε(x), while the subproblems of the exact method are of the form 0 ∈ H(x). If ε_k is the coefficient used in the k-th iteration and the ε_k's are summable, then the sequence generated by the inexact algorithm still converges to a solution of the original problem. If the original operator is well enough behaved, then the solution set of each subproblem contains a ball around the exact solution, and so each subproblem can be solved in finitely many steps.

Keywords: Convex optimization, variational inequalities, proximal point methods, monotone operators.

AMS Classification numbers: 90C25, 90C30.

*Research of this author was supported by CNPq grant no /94-0
**Research of this author was partially supported by CNPq grant no /86
***Research of this author was supported by CNPq grant no /93-9(RN)
1. Introduction

Given a convex function f : R^n → R ∪ {∞}, the subdifferential of f at x, i.e. the set of subgradients of f at x, denoted by ∂f(x), is defined as

∂f(x) = {u ∈ R^n : f(y) − f(x) − ⟨u, y − x⟩ ≥ 0 for all y ∈ R^n}.

An extension of this concept is the notion of ε-subdifferential, denoted ∂_ε f(x) and defined, for ε ≥ 0, as

∂_ε f(x) = {u ∈ R^n : f(y) − f(x) − ⟨u, y − x⟩ ≥ −ε for all y ∈ R^n}.

This notion has proved quite useful in connection with optimization algorithms (see [8], [13]): in many cases in which an algorithm requires computation of an element of ∂f(x), or of a zero of ∂f (i.e. an x such that 0 ∈ ∂f(x), which is a minimizer of f), the convergence properties of the method are preserved when ∂f is replaced by ∂_ε f for some adequate ε > 0. Since in general ∂f(x) ⊊ ∂_ε f(x), such replacement gives at the same time more latitude and more robustness to the method, taking into account that exact computation is not achievable in computer implementations.

In this paper, we introduce a similar enlargement of a monotone operator T, which we denote by T^ε and define as

T^ε(x) = {u ∈ R^n : ⟨u − v, x − y⟩ ≥ −ε for all y ∈ R^n, v ∈ T(y)}.

When T = ∂f for some f, we do not get ∂_ε f = T^ε but rather ∂_ε f ⊆ T^ε. In Section 2 we establish several properties of T^ε, like the one just mentioned. Most properties valid for ∂_ε f extend to T^ε under suitable assumptions on T. For instance, when T is uniformly monotone, for all ε > 0 there exist radii δ, δ′ such that T^ε(x) lies between the images through T of the balls centered at x with those radii, i.e. T(B(x, δ)) ⊆ T^ε(x) ⊆ T(B(x, δ′)).

We want to point out previous work on enlargements of monotone operators for solving the variational inequality problem. For instance, a different regularization process with monotone perturbations of the original maximal monotone operator has been considered in [1]. In [18] and [19], Liskovets studied a nonmonotone perturbation based on the notion of Hausdorff distance.
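The inclusion ∂_ε f(x) ⊆ T^ε(x) for T = ∂f can be observed numerically. The sketch below is not part of the paper: it checks the two defining inequalities on a finite grid of points y (a spot check, not a proof), for f(x) = x^2 in one dimension, where elementary calculus gives ∂_ε f(x) = [2x − 2√ε, 2x + 2√ε] and T^ε(x) = [2x − 2√(2ε), 2x + 2√(2ε)]:

```python
import numpy as np

def in_eps_subdiff(u, x, eps, f, ys):
    # u ∈ ∂_ε f(x)  iff  f(y) - f(x) - u*(y - x) >= -ε for all y
    return bool(np.all(f(ys) - f(x) - u * (ys - x) >= -eps - 1e-9))

def in_T_eps(u, x, eps, T, ys):
    # u ∈ T^ε(x)  iff  (u - T(y))*(x - y) >= -ε for all y  (T point-to-point)
    return bool(np.all((u - T(ys)) * (x - ys) >= -eps - 1e-9))

f = lambda y: y ** 2        # f(x) = x^2
T = lambda y: 2.0 * y       # T = f' (the gradient of f)
ys = np.linspace(-50.0, 50.0, 200001)
x, eps = 1.0, 0.1

# Closed forms (elementary calculus):
#   ∂_ε f(x) = [2x - 2√ε, 2x + 2√ε],  T^ε(x) = [2x - 2√(2ε), 2x + 2√(2ε)]
for u in (2*x - 2*np.sqrt(eps) + 1e-3, 2*x + 2*np.sqrt(eps) - 1e-3):
    assert in_eps_subdiff(u, x, eps, f, ys) and in_T_eps(u, x, eps, T, ys)

u_mid = 2*x + 2.4*np.sqrt(eps)          # in T^ε(x) but outside ∂_ε f(x)
assert in_T_eps(u_mid, x, eps, T, ys)
assert not in_eps_subdiff(u_mid, x, eps, f, ys)

u_out = 2*x + 2*np.sqrt(2*eps) + 1e-2   # outside T^ε(x)
assert not in_T_eps(u_out, x, eps, T, ys)
```

Endpoints of ∂_ε f(x) pass both tests, while a vector between the two interval boundaries passes only the T^ε test, illustrating that the enlargement is strictly larger.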
Next, we proceed to develop an application of the new concept in connection with proximal point methods for variational inequalities. Given a convex set C and a point-to-set monotone operator T, the variational inequality problem (VIP(T, C) from now on) consists of finding z ∈ C such that there exists u ∈ T(z) satisfying ⟨u, x − z⟩ ≥ 0 for all x ∈ C. VIP(T, C) reduces to minimizing f(x) subject to x ∈ C when T(x) = ∂f(x) with f a convex function, and to solving the inclusion 0 ∈ T(x) when C = R^n. The methods of interest, introduced in [4], generate a sequence {x^k}, where x^{k+1} is the unique zero of an operator of the form T + H_k, with H_k(x) = λ_k(∇g(x) − ∇g(x^k)), for some λ_k > 0 and some strictly convex function g whose gradient diverges at the boundary of the constraint set C. The function g is said to be a Bregman function with zone C^0 (the interior of C). In Section 3 we introduce some material required for the analysis of
these methods. These algorithms extend the standard proximal point method for finding zeroes of monotone operators, corresponding to C = R^n and g(x) = ||x||^2 (see [22]), to the constrained case, i.e. to VIP(T, C), and transform the original constrained problem into a sequence of unconstrained (or perhaps more easily constrained) subproblems. In Section 4 we use the concept introduced in Section 2 to present a relaxation of the methods studied in [4]. The subproblems now consist not of finding exactly the unique zero of T + H_k, but of finding any zero of T^{ε_k} + H_k, with Σ_{k=0}^∞ ε_k < ∞. In this sense, the relaxed method can be seen as an inexact version of the method in [4]. We will prove that the convergence properties of the relaxed method are the same as those of the exact one, showing that it is rather robust with respect to numerical errors. Under adequate regularity assumptions on T (e.g. uniform monotonicity) we prove that the set of solutions of the relaxed subproblem contains a ball centered at the exact solution of the unrelaxed subproblem. This means that if the subproblem is solved with any method guaranteed to converge to a zero of a monotone operator (e.g. [9], [15]), a solution of a relaxed subproblem will be found after finitely many iterations. However, it is not clear at present how to exploit this fact in practice. Similar convergence results for an inexact version of the classical proximal point method are presented in [22]. In that case the radius of the ball is just ε_k and no hypotheses on T are required beyond maximal monotonicity and existence of zeroes. This is a consequence of some particular properties of g(x) = ||x||^2 (like the triangle inequality) which do not hold for other Bregman functions appropriate for the constrained case (C ≠ R^n). In the context of constrained problems, a related inexact proximal point method has been considered in [14] for the optimization case, i.e. with T = ∂f, so that VIP(T, C) becomes min f(x) s.t.
x ∈ C. Its iteration is of the form 0 ∈ (∂_{ε_k} f + H_k)(x^{k+1}). Since, as mentioned before, we have in general T^ε ⊋ ∂_ε f for T = ∂f, our method, when applied to optimization problems, does not reduce to Kiwiel's. In fact, the sets of solutions of our subproblems are in general larger than the corresponding ones in Kiwiel's method. This is the case, for instance, for T as in Examples 2 and 3 in Section 2.

Throughout the paper, superscript t denotes transpose, ⟨·,·⟩ and ||·|| are the Euclidean inner product and norm respectively, ∇g indicates the gradient of a differentiable function g, and R^n_+, R^n_++ are defined as R^n_+ = {x ∈ R^n : x_j ≥ 0 for all j}, R^n_++ = {x ∈ R^n : x_j > 0 for all j}.

2. Definition and properties of T^ε

Recall that an operator T : R^n → P(R^n) is monotone if and only if ⟨u − v, x − y⟩ ≥ 0 for all x, y ∈ R^n and all u ∈ T(x), v ∈ T(y). T is maximal monotone if and only if it is monotone and, additionally, T(x) ⊆ T′(x) for all x ∈ R^n with some monotone T′ implies T = T′. Given T : R^n → P(R^n) and ε ≥ 0, define T^ε : R^n → P(R^n) as

T^ε(x) = {u ∈ R^n : ⟨u − v, x − y⟩ ≥ −ε for all y ∈ R^n, v ∈ T(y)}.   (1)

We start with some elementary properties of T^ε which require no assumptions on T. The notation T ⊆ T′ means T(x) ⊆ T′(x) for all x ∈ R^n, and T + T′ is the operator defined as (T + T′)(x) = {u + v : u ∈ T(x), v ∈ T′(x)}.
Proposition 1.

i) If ε_1 ≥ ε_2 ≥ 0 then T^{ε_2} ⊆ T^{ε_1}.
ii) T_1^{ε_1} + T_2^{ε_2} ⊆ (T_1 + T_2)^{ε_1+ε_2}.
iii) T^ε(x) is closed and convex for all x ∈ R^n.
iv) If 0 ≤ ε = lim_{k→∞} ε^k, x = lim_{k→∞} x^k, u = lim_{k→∞} u^k and u^k ∈ T^{ε^k}(x^k) for all k, then u ∈ T^ε(x).
v) αT^ε = (αT)^{αε} for all ε ≥ 0 and all α ≥ 0.
vi) αT_1^ε + (1 − α)T_2^ε ⊆ (αT_1 + (1 − α)T_2)^ε for all ε ≥ 0, α ∈ [0, 1].
vii) If E ⊆ R_+ then ∩_{ε∈E} T^ε = T^{ε̄} with ε̄ = inf E.

Proof: Elementary.

The properties above hold for any T, but little else can be said about T^ε without monotonicity of T, so from now on we will work with maximal monotone operators. Recall that dom T = {x ∈ R^n : T(x) ≠ ∅} and Im T = ∪_{x∈R^n} T(x).

Proposition 2. Assume that T is maximal monotone. Then

i) T^0 = T.
ii) If dom T is closed then dom T^ε = dom T for all ε ≥ 0.
iii) If dom T is closed then Im T^ε ⊆ cl(Im T) for all ε ≥ 0.
iv) If dom T and Im T are both closed then Im T^ε = Im T for all ε ≥ 0.

Proof:

i) Follows easily from the maximality of T.

ii) dom T ⊆ dom T^ε for all ε ≥ 0 by (i) and Proposition 1(i). dom T is closed by hypothesis and consequently convex (see [23, p. 915] or [2, Theorem 2.2]). Take x ∈ dom T^ε and let y ∈ dom T be the projection of x onto dom T. By the projection property of y, ⟨x − y, z − y⟩ ≤ 0 for all z ∈ dom T. Take v ∈ T(y), z ∈ dom T, v′ ∈ T(z) and λ ≥ 0. Then

⟨v + λ(x − y) − v′, y − z⟩ = λ⟨x − y, y − z⟩ + ⟨v − v′, y − z⟩ ≥ 0

using a well known property of the projection and monotonicity of T. By maximality of T, v + λ(x − y) ∈ T(y) for all λ ≥ 0. Take u ∈ T^ε(x). Then

−ε ≤ ⟨u − v − λ(x − y), x − y⟩ = ⟨u − v, x − y⟩ − λ||x − y||^2,

implying λ||x − y||^2 ≤ ⟨u − v, x − y⟩ + ε for all λ ≥ 0, which is impossible unless ||x − y||^2 = 0, i.e. x = y. Since y ∈ dom T, we get x ∈ dom T, proving that dom T^ε ⊆ dom T.

iii) Let P = cl(Im T). P is convex (see [23, p. 915] or [2, Proposition 2.5]). Take u ∈ Im T^ε. Then u ∈ T^ε(x) for some x ∈ dom T^ε. Let ū be the projection of u onto P. Take any λ > 0 and define

z = x + λ(u − ū).   (2)

Let z̄ be the projection of z onto dom T, and v̄ the projection of u onto T(z̄), which is closed and convex.
By a well known property of the projection,

⟨u − ū, v − ū⟩ ≤ 0 for all v ∈ P.   (3)
Since u ∈ T^ε(x), v̄ ∈ T(z̄), and z̄ is the projection of z onto dom T,

⟨z − z̄, y − z̄⟩ ≤ 0 for all y ∈ dom T,   (4)

⟨u − v̄, v − v̄⟩ ≤ 0 for all v ∈ T(z̄).   (5)

Then

−ε ≤ ⟨u − v̄, x − z̄⟩ = ⟨u − v̄, z − z̄⟩ − λ⟨u − v̄, u − ū⟩ = ⟨u − v̄, z − z̄⟩ − λ||u − ū||^2 − λ⟨ū − v̄, u − ū⟩ ≤ ⟨u − v̄, z − z̄⟩ − λ||u − ū||^2   (6)

using (2) in the first equality and (3) in the inequality, and noting that v̄ ∈ T(z̄) ⊆ P. Now take any y ∈ dom T, v ∈ T(y), and get

⟨v̄ + z − z̄ − v, z̄ − y⟩ = ⟨v̄ − v, z̄ − y⟩ + ⟨z − z̄, z̄ − y⟩ ≥ 0

using monotonicity of T and (4). By maximality of T, v̄ + z − z̄ ∈ T(z̄), i.e. v̄ + z − z̄ = v̂ for some v̂ ∈ T(z̄). Then, using (5),

⟨u − v̄, z − z̄⟩ = ⟨u − v̄, v̂ − v̄⟩ ≤ 0.   (7)

From (6) and (7), −ε ≤ −λ||u − ū||^2, implying 0 ≤ ||u − ū||^2 ≤ ε/λ. Since λ is an arbitrary positive real number, we get u = ū. Since ū ∈ P, we get u ∈ P. It follows that Im T^ε ⊆ cl(Im T).

iv) By (i) and Proposition 1(i), Im T ⊆ Im T^ε for all ε ≥ 0, and the result follows from (iii).

We mention that in general neither T^ε nor ∂_ε f are monotone operators. Next we compare T^ε with ∂_ε f in the case of T = ∂f with f a convex function.

Proposition 3. If T = ∂f with f a convex function then ∂_ε f(x) ⊆ T^ε(x) for all x ∈ R^n.

Proof: Take u ∈ ∂_ε f(x). Consider any y ∈ R^n and v ∈ T(y) = ∂f(y). By definition of ∂_ε f and ∂f,

ε + ⟨u, x − y⟩ ≥ f(x) − f(y) ≥ ⟨v, x − y⟩.

Using (1), it follows that u ∈ T^ε(x).

One could expect to have in this case T^ε = ∂_ε f, or at least T^ε ⊆ ∂_{ε′} f for some ε′. This is not the case, as is shown in the following examples of explicit computation of T^ε and ∂_ε f, which require just elementary calculus and some care.

Example 1: n = 1, f(x) = (1/p)|x|^p with p ≥ 1, T = ∂f.

i) For p = 1 we have

∂_ε f(x) = T^ε(x) = [1 − ε/x, 1] if x > ε/2, [−1, 1] if |x| ≤ ε/2, [−1, −1 − ε/x] if x < −ε/2,
and ∂_ε f = T^ε.

ii) For p = 2, T^ε(x) = [x − 2√ε, x + 2√ε] and ∂_ε f(x) = [x − √(2ε), x + √(2ε)], so that T^ε(x) = ∂_{2ε} f(x).

iii) For any p > 1 and x = 0,

T^ε(0) = [−p(ε/(p − 1))^{1−1/p}, p(ε/(p − 1))^{1−1/p}],   ∂_ε f(0) = [−(pε/(p − 1))^{1−1/p}, (pε/(p − 1))^{1−1/p}],

so that T^ε(0) = ∂_{ε′} f(0) with ε′ = ε p^{1/(p−1)}.

Example 2: n = 1, f(x) = −log x, T = ∂f. For x > 0,

T^ε(x) = [−(1/x)(1 + √ε)^2, −(1/x)(max{1 − √ε, 0})^2],   ∂_ε f(x) = [−(1/x)s_1(ε), −(1/x)s_2(ε)],

where 0 < s_2(ε) ≤ 1 ≤ s_1(ε) and s_1(ε), s_2(ε) are the two roots of s − 1 − log s = ε. More generally, for f(x) = −α Σ_{j=1}^n log x_j and T = ∂f, with α > 0, it is easy to check that 0 ∈ T^ε(x) for all x > 0 and all ε ≥ αn, while 0 ∉ ∂_{ε′} f(x) for any x > 0 and any ε′ > 0. Therefore, if ε ≥ αn then T^ε(x) ≠ ∂_{ε′} f(x) for any x > 0 and any ε′ ≥ 0.

Example 3: f(x) = ½ x^t Q x with Q ∈ R^{n×n} symmetric and positive definite, T = ∇f. Then

∂_ε f(x) = {Qx + w : w^t Q^{−1} w ≤ 2ε},   T^ε(x) = {Qx + w : w^t Q^{−1} w ≤ 4ε},

so that T^ε(x) = ∂_{2ε} f(x).

Proposition 3 and the examples above exhibit a favorable feature of T^ε as compared to ∂_ε f: since T^ε(x) is larger than ∂_ε f(x) for T = ∂f, to find a vector in T^ε(x), or to solve u ∈ T^ε(x) for a fixed u, is in principle easier than with ∂_ε f instead of T^ε. But then we have to check that T^ε does not enlarge T too much. The next propositions show that T^ε is indeed not too large.

For Q ⊆ R^n we will denote by T(Q) the set ∪_{x∈Q} T(x), and by B(x, δ) the ball with center x and radius δ. An operator T will be said to be locally bounded if for all x ∈ (dom T)^0 (the interior of dom T) there exists δ_x > 0 such that T(B(x, δ_x)) is bounded, and bounded on bounded sets if for every bounded set Q ⊆ R^n whose closure Q̄ is contained in (dom T)^0 it holds that T(Q) is bounded. We consider only points and sets in (dom T)^0 because no maximal monotone operator can be bounded at a point in the boundary of its domain, where T(x) contains a nontrivial cone.
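Example 3 can be checked numerically. The sketch below is not from the paper; it assumes f(x) = ½x^tQx, so that T(y) = Qy and the infimum inf_y ⟨u − Qy, x − y⟩ = −¼(u − Qx)^t Q^{−1} (u − Qx) is available in closed form by elementary calculus:

```python
import numpy as np

Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric positive definite
Qinv = np.linalg.inv(Q)
x = np.array([1.0, -1.0])
eps = 0.1

def gap(u, x):
    # inf_y <u - Qy, x - y> for T(y) = Qy. Writing u = Qx + w and d = x - y,
    # the quantity is d^t Q d + w^t d, minimized at d = -Q^{-1} w / 2 with
    # value -(1/4) w^t Q^{-1} w (elementary calculus).
    w = u - Q @ x
    return -0.25 * w @ Qinv @ w

# u = Qx + w belongs to T^ε(x) iff gap(u, x) >= -ε, i.e. w^t Q^{-1} w <= 4ε
rng = np.random.default_rng(0)
for _ in range(100):
    d = rng.standard_normal(2)
    w = d / np.sqrt(d @ Qinv @ d)        # normalized so that w^t Q^{-1} w = 1
    u_in = Q @ x + np.sqrt(4 * eps) * 0.99 * w
    u_out = Q @ x + np.sqrt(4 * eps) * 1.01 * w
    assert gap(u_in, x) >= -eps          # just inside the boundary of T^ε(x)
    assert gap(u_out, x) < -eps          # just outside
```

Vectors Qx + w with w^tQ^{−1}w slightly below 4ε pass the membership test for T^ε(x), while those slightly above fail, matching the closed-form description of Example 3.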
An elementary compactness argument shows that any locally bounded operator is bounded on bounded sets: the balls B(x, δ_x) cover Q̄, and if M_x is an upper bound on the norms of the elements of T(B(x, δ_x)), then the maximum of the M_{x_i} corresponding to a finite subcovering is an upper bound of ||u|| for any u ∈ T(Q).

Proposition 4. If T is maximal monotone then T is locally bounded, and hence bounded on bounded sets.

Proof: See [21, Theorem 1].
For Q ⊆ R^n let M_T(Q) = sup{||u|| : u ∈ T(Q)}.

Proposition 5. If T is maximal monotone and dom T is closed then T^ε is bounded on bounded sets for all ε ≥ 0.

Proof: By Proposition 2(ii), (dom T^ε)^0 = (dom T)^0. Take Q bounded and such that Q̄ ⊆ (dom T)^0. Let d(x, Q) be the Euclidean distance from a point x to the set Q and, for ν ≥ 0, let Q_ν = {x ∈ R^n : d(x, Q) ≤ ν}. Clearly there exists ν > 0 such that Q̄_ν ⊆ (dom T)^0. We will prove that M_{T^ε}(Q) ≤ ε/ν + M_T(Q_ν) + 2M_T(Q) for any such ν. Note that M_T(Q), M_T(Q_ν) are finite by Proposition 4, so that the inequality above establishes the result. Take x ∈ Q, u ∈ T^ε(x), û ∈ T(x). Write u = û + (λ/ν)w with w = (ν/||u − û||)(u − û), λ = ||u − û|| (if u = û, take w = 0, λ = 0). Let y = x + w, so that ||x − y|| = ||w|| = ν. Then, since u ∈ T^ε(x), for all v ∈ T(y),

−ε ≤ ⟨u − v, x − y⟩ = (λ/ν)⟨w, x − y⟩ + ⟨û − v, x − y⟩ = −(λ/ν)||w||^2 + ⟨û − v, x − y⟩ ≤ −λν + ||û − v|| ||x − y|| ≤ −λν + (||û|| + ||v||)ν.   (8)

Since ||x − y|| = ν and x ∈ Q, we have y ∈ Q_ν. So ||û|| ≤ M_T(Q) and ||v|| ≤ M_T(Q_ν). It follows from (8) that λν ≤ ε + νM_T(Q) + νM_T(Q_ν), and therefore

||u|| ≤ ||u − û|| + ||û|| = λ + ||û|| ≤ ε/ν + M_T(Q_ν) + 2M_T(Q).   (9)

Since u is an arbitrary element of T^ε(Q), it follows from (9) that M_{T^ε}(Q) ≤ ε/ν + M_T(Q_ν) + 2M_T(Q) < ∞.

Next we proceed to look more closely at the relation between T^ε(x) and T(B(x, δ)), which is basic for some applications of the new concept. First we need a simple continuity property of maximal monotone operators.

Proposition 6. Let T be maximal monotone and take x̂ ∈ (dom T)^0. For all θ > 0 there exists δ > 0 (depending on x̂) such that for all x ∈ B(x̂, δ) and all u ∈ T(x) there exists û ∈ T(x̂) satisfying ||u − û|| ≤ θ.

Proof: If the result does not hold then there exist a sequence {x^k} converging to x̂ and u^k ∈ T(x^k) such that ||u^k − û|| > θ for all û ∈ T(x̂). {u^k} is bounded by Proposition 4. Refine the sequence {u^k} so that it converges to some u, and get ||u − û|| ≥ θ for all û ∈ T(x̂).
On the other hand, since the graph of T is closed (see [2]), we can use Proposition 1(iv) with ε_k = ε = 0 and Proposition 2(i) to conclude that u ∈ T(x̂), which is a contradiction.

Corollary 2. If T is maximal monotone and point-to-point in (dom T)^0 then T is continuous in (dom T)^0.

Proof: In the point-to-point case (i.e. u = T(x), û = T(x̂), etc.), Proposition 6 reduces to establishing continuity of T at x̂.
We will also impose stronger hypotheses on T. A maximal monotone point-to-set operator T is said to be uniformly monotone if and only if there exist γ > 0 and p > 1 such that ⟨u − v, x − y⟩ ≥ γ||x − y||^p for all x, y ∈ dom(T), all u ∈ T(x) and all v ∈ T(y). Recall that any point-to-set operator T can be inverted: T^{−1}(x) is just the set {y ∈ R^n : x ∈ T(y)}. Uniformly monotone operators have the following fundamental property.

Proposition 7. If T : R^n → P(R^n) is uniformly monotone then dom T^{−1} = R^n and T^{−1} is point-to-point and Hölder-continuous.

Proof: [23, Theorem 26.A].

Now we establish the main property of T^ε.

Theorem 1. Assume that T is maximal monotone and uniformly monotone. Take x ∈ (dom T)^0. Then

i) For all ε > 0 there exists δ > 0 such that T(B(x, δ)) ⊆ T^ε(x).
ii) For all δ > 0 there exists ε > 0 such that T^ε(x) ⊆ T(B(x, δ)).

Proof:

i) Let

β = pγ [ε/(γ(p − 1))]^{1−1/p}   (10)

where p, γ are the constants of uniform monotonicity. Apply Proposition 6 with θ = β and x̂ = x. We will verify that the δ whose existence is ensured by Proposition 6 suffices for the result. Take x′ ∈ B(x, δ), u′ ∈ T(x′). By Proposition 6 there exists u ∈ T(x) such that ||u′ − u|| ≤ β. Take any y ∈ dom T and v ∈ T(y). Then

⟨u′ − v, x − y⟩ = ⟨u′ − u, x − y⟩ + ⟨u − v, x − y⟩ ≥ −||u′ − u|| ||x − y|| + γ||x − y||^p ≥ −β||x − y|| + γ||x − y||^p   (11)

using uniform monotonicity in the first inequality. Let ϕ : R → R be defined as ϕ(t) = −βt + γt^p, with β as in (10). Use elementary calculus to find the minimum of ϕ and conclude that ϕ(t) ≥ −ε for all t ≥ 0. It follows from (11) that ⟨u′ − v, x − y⟩ ≥ −ε for all y ∈ dom T and all v ∈ T(y), so that u′ ∈ T^ε(x). We have proved that T(B(x, δ)) ⊆ T^ε(x).

ii) First we claim that there exists σ > 0 such that d(u, T(x)) ≤ σ implies u ∈ T(B(x, δ)), where d, as before, denotes the Euclidean distance from a point to a set. Otherwise there exist sequences {u′^k} ⊆ T(x) and {u^k} such that lim_{k→∞}(u^k − u′^k) = 0 and u^k ∉ T(B(x, δ)), or equivalently ||T^{−1}(u^k) − x|| > δ, for all k.
Since T(x) is bounded by Proposition 4, we may assume without loss of generality that u′^k converges to some u′, and therefore u′ = lim_{k→∞} u^k. By Propositions 1(iv) and 2(i), u′ belongs to T(x). By Proposition 7, T^{−1}(u^k) converges to T^{−1}(u′) = x, so that lim_{k→∞} ||T^{−1}(u^k) − x|| = 0. The resulting contradiction shows that the claim holds. By Proposition 6 there exists η > 0 such that for all y ∈ B(x, η) and all v ∈ T(y) there exists û ∈ T(x) satisfying ||v − û|| ≤ σ/2. Take ε ≤ ησ/2. We claim that T^ε(x) ⊆ T(B(x, δ)). Take u ∈ T^ε(x). Let ū be the orthogonal projection of u onto T(x), which is closed and
convex by Propositions 1(iii) and 2(i). If u = ū then u ∈ T(x) ⊆ T(B(x, δ)). Otherwise, write u = ū + (||u − ū||/η) w for some w ∈ R^n with ||w|| = η. Let y = x + w and take v ∈ T(y). Since ||w|| = η, we have y ∈ B(x, η). By definition of η, there exists û ∈ T(x) such that ||v − û|| ≤ σ/2. Since u ∈ T^ε(x),

−ε ≤ ⟨u − v, x − y⟩ = ⟨u − ū, x − y⟩ + ⟨ū − û, x − y⟩ + ⟨û − v, x − y⟩ = −⟨u − ū, w⟩ − ⟨ū − û, w⟩ + ⟨û − v, x − y⟩ = −η||u − ū|| − (η/||u − ū||)⟨ū − û, u − ū⟩ + ⟨û − v, x − y⟩.   (12)

Since ū is the orthogonal projection of u onto T(x) and û ∈ T(x), the second term in the rightmost expression of (12) is nonpositive. Also, the third one is bounded above by ||û − v|| ||x − y|| ≤ ση/2. We conclude that −ε ≤ −η||u − ū|| + ση/2, so that ||u − ū|| ≤ ε/η + σ/2 ≤ σ/2 + σ/2 = σ. We have shown that d(u, T(x)) ≤ σ and therefore u ∈ T(B(x, δ)). Since u is an arbitrary element of T^ε(x), we have proved that for ε ≤ ησ/2 it holds that T^ε(x) ⊆ T(B(x, δ)).

The uniform monotonicity hypothesis in Theorem 1 cannot simply be dropped, as the following examples show: if T(x) = Ax with A = [[0, 1], [−1, 0]], then T^ε = T for all ε ≥ 0, so that T^ε(0) = {0}, while T(B(0, δ)) = B(0, δ) for all δ ≥ 0, showing that item (i) fails for this T; if T is as in Example 1(i), then T(B(1, δ)) = {1} for all δ < 1, while max{0, 1 − ε} ∈ T^ε(1) for all ε > 0, showing that item (ii) fails for this T. In both cases T is maximal monotone.

In the sequel we will need a stronger version of Theorem 1(i), in which the same δ does the job for all x in some neighborhood of x̄. We remark that Proposition 6 does not hold in this locally uniform way: if T(x) = ∂f(x) with f(x) = |x|, we must take δ_x̂ ≤ |x̂|, because otherwise we can always find x ∈ B(x̂, δ_x̂) with sg(x) ≠ sg(x̂), in which case ||u − û|| = 2 for all u ∈ T(x), û ∈ T(x̂). We will use a different line of proof, but then we need to impose some further condition besides uniform monotonicity: either dom T = R^n or T point-to-point. On the other hand, we will establish this stronger version for some operators which need not be uniformly monotone: those of the form T = ∂f.

Theorem 2. Assume that T is maximal monotone. Fix x̄ ∈ (dom T)^0.
If either

i) T = ∂f for some convex function f, or
ii) T is uniformly monotone and dom T = R^n, or
iii) T is uniformly monotone and point-to-point in (dom T)^0,

then for all ε > 0 there exists δ̄ > 0 such that T(B(x̄, δ̄)) ⊆ T^ε(x) for all x ∈ B(x̄, δ̄).

Proof: Let M_t = sup{||u|| : u ∈ T(B(x̄, t))}. We will treat separately the three hypotheses on T.

i) Take ν such that B(x̄, ν) ⊆ (dom T)^0. Define δ̄ = ½ min{ν, ε/(2M_ν)}. Note that M_ν is finite by Proposition 4. Take x′ ∈ B(x̄, δ̄), x ∈ B(x̄, δ̄), u′ ∈ T(x′), u ∈ T(x). Observe that δ̄ ≤ ν/2 implies x, x′ ∈ B(x̄, ν) ⊆ (dom T)^0. Then, for all y ∈ R^n,

f(y) − f(x) − ⟨u′, y − x⟩ = f(y) − f(x′) − ⟨u′, y − x′⟩ + f(x′) − f(x) − ⟨u′, x′ − x⟩ ≥ ⟨u, x′ − x⟩ − ⟨u′, x′ − x⟩ ≥ −(||u|| + ||u′||) ||x′ − x|| ≥ −2M_ν ||x′ − x|| ≥ −4M_ν δ̄ ≥ −ε
using the subgradient property in the first inequality. We have proved that u′ ∈ ∂_ε f(x), i.e. that T(B(x̄, δ̄)) ⊆ ∂_ε f(x). The result then follows from Proposition 3.

ii) Now dom T = R^n and we can start with an arbitrary ball around x̄, say of radius 1. Let ρ = 1 + (2M_1/γ)^{1/(p−1)}, where γ, p are the constants of uniform monotonicity, and take δ̄ = ½ min{1, ε/(2M_ρ)}. Note that M_ρ is finite by Proposition 4. Take x′ ∈ B(x̄, δ̄), x ∈ B(x̄, δ̄), u′ ∈ T(x′), u ∈ T(x). We need to prove that

⟨u′ − v, x − y⟩ ≥ −ε   (13)

for all y ∈ R^n, v ∈ T(y). We consider two cases.

a) ||y − x̄|| ≥ ρ. In this case ||y − x|| ≥ ||y − x̄|| − ||x̄ − x|| ≥ ρ − δ̄ ≥ ρ − 1 = (2M_1/γ)^{1/(p−1)}, so that

γ||y − x||^{p−1} ≥ 2M_1.   (14)

Then

⟨u′ − v, x − y⟩ = ⟨u′ − u, x − y⟩ + ⟨u − v, x − y⟩ ≥ −||u′ − u|| ||x − y|| + γ||x − y||^p ≥ −(||u′|| + ||u||)||x − y|| + γ||x − y||^p   (15)

using uniform monotonicity of T in the first inequality. Since δ̄ ≤ ½, we have x, x′ ∈ B(x̄, 1), so that ||u|| ≤ M_1 and ||u′|| ≤ M_1. From (14) and (15),

⟨u′ − v, x − y⟩ ≥ −2M_1||x − y|| + γ||x − y||^p = ||x − y||(−2M_1 + γ||x − y||^{p−1}) ≥ 0 ≥ −ε.

We have proved that (13) holds.

b) ||y − x̄|| ≤ ρ. In this case v ∈ T(B(x̄, ρ)), so that ||v|| ≤ M_ρ; M_ρ is finite by Proposition 4, because dom T = R^n. In this case

⟨u′ − v, x − y⟩ = ⟨u′ − v, x − x′⟩ + ⟨u′ − v, x′ − y⟩ ≥ ⟨u′ − v, x − x′⟩ ≥ −(||u′|| + ||v||)||x − x′|| ≥ −2(||u′|| + ||v||)δ̄   (16)

using monotonicity of T in the first inequality. Note that x′ ∈ B(x̄, 1) ⊆ B(x̄, ρ) because ρ > 1. It follows that ||u′|| ≤ M_ρ. We conclude from (16) that ⟨u′ − v, x − y⟩ ≥ −4M_ρ δ̄ ≥ −ε, and (13) holds also in this case.

We remark that we need dom T = R^n because we must have B(x̄, ρ) ⊆ (dom T)^0 in order to ensure finiteness of M_ρ, and we cannot make ρ arbitrarily small, even if we start with a ball of radius t instead of 1 (i.e. with M_t instead of M_1 in the formula for ρ), because even M_0 can be strictly positive.

iii) Now (13) becomes

⟨T(x′) − T(y), x − y⟩ ≥ −ε.   (17)

Take ν such that B(x̄, ν) ⊆ (dom T)^0. By Corollary 2, T is continuous in B(x̄, ν) and therefore uniformly continuous there. Take η such that for all x, x′ ∈ B(x̄, ν) satisfying ||x − x′|| ≤ η it holds that ||T(x) − T(x′)|| ≤ β, with β as in (10). Take δ̄ = ½ min{η, ν}. Since
x ∈ B(x̄, δ̄), x′ ∈ B(x̄, δ̄), it follows that ||x − x′|| ≤ η, so that ||T(x) − T(x′)|| ≤ β. Then, as in the proof of Theorem 1(i),

⟨T(x′) − T(y), x − y⟩ = ⟨T(x′) − T(x), x − y⟩ + ⟨T(x) − T(y), x − y⟩ ≥ −||T(x′) − T(x)|| ||x − y|| + γ||x − y||^p ≥ −β||x − y|| + γ||x − y||^p = ϕ(||x − y||) ≥ −ε

using uniform monotonicity of T in the first inequality, and therefore (17) holds.

Corollary 3. If both (ii) and (iii) of Theorem 2 hold and furthermore T is Lipschitz continuous with constant L, then T(B(x̄, δ)) ⊆ T^ε(x), with the explicit value of δ given by

δ = (pγ/L) [ε/(γ(p − 1))]^{1−1/p}.

Proof: If ||x′ − x|| ≤ δ then ||T(x′) − T(x)|| ≤ L||x′ − x|| ≤ Lδ = β, with β as in (10), and then we continue as in the proof of item (iii) of Theorem 2.

Next we look at our extension of T applied to approximate solutions of variational inequality problems, defined in Section 1. For a closed and convex set Q, define I_Q as I_Q(x) = 0 if x ∈ Q, I_Q(x) = +∞ otherwise, and let N_C = ∂I_C. N_C is called the normal cone operator of C. It is easy to check that the set of solutions of VIP(T, C) is the set of zeroes of T + N_C and, more generally, that the solution set of VIP(T, C ∩ V) coincides with the solution set of VIP(T + N_V, C). In Section 4 we will consider an algorithm for solving VIP(T, D) with the following structure: write D = C ∩ V in such a way that C has nonempty interior, and proceed to solve VIP(T + N_V, C) by solving a sequence of subproblems of the type VIP(T + N_V + H_k, R^n), equivalent to VIP(T + H_k, V), where H_k is a regularization operator which guarantees that the solutions of VIP(T + H_k, V) belong to the interior of C, so that they in fact solve VIP(T + H_k, V ∩ C) = VIP(T + H_k, D). The idea is to put in V those constraints which make the interior of D empty, if any (typically linear equalities), and to leave in C the remaining ones, in which case VIP(T + H_k, V) is an equality constrained problem, easier to solve than the original problem VIP(T, D).
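As a toy illustration (not from the paper) of the identification of the solution set of VIP(T, C) with the zeroes of T + N_C, consider the hypothetical one-dimensional instance T(x) = x − 2 on C = [0, 1]; the solution is z = 1, where T(1) = −1 and −T(1) = 1 lies in the normal cone N_C(1) = [0, +∞):

```python
import numpy as np

# Hypothetical instance: T(x) = x - 2 (monotone), C = [0, 1].
T = lambda x: x - 2.0
C = np.linspace(0.0, 1.0, 1001)

def solves_vip(z, tol=1e-12):
    # z solves VIP(T, C): z in C and <T(z), x - z> >= 0 for all x in C
    return bool(0.0 <= z <= 1.0 and np.all(T(z) * (C - z) >= -tol))

assert solves_vip(1.0)        # T(1) = -1 and x - 1 <= 0 on C
assert not solves_vip(0.5)    # interior point with T(z) != 0 cannot solve

# Equivalently, z = 1 is a zero of T + N_C: the normal cone N_C(1) is [0, +inf)
# and -T(1) = 1 belongs to it.
assert -T(1.0) >= 0.0
```

The grid over C stands in for the quantifier "for all x ∈ C"; the unconstrained zero of T (at x = 2) lies outside C, which is exactly the situation where the normal cone term becomes active.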
We look now at problems of the type VIP(T + H, V), equivalent to finding a zero of T + H + N_V, and, for the reasons just mentioned, we assume that V is an affine manifold, i.e. V = {x ∈ R^n : Ax = b} with A ∈ R^{m×n}, b ∈ R^m. In this case N_V(x) = Im(A^t) for all x ∈ V, and N_V(x) = ∅ otherwise. H is assumed to be point-to-point and continuous. Finding a zero of T + H + N_V is equivalent to finding x ∈ V such that

−H(x) ∈ T(x) + W   (18)

where W = Im(A^t). The proposal is to replace (18) by

−H(x) ∈ T^ε(x) + W   (19)

for some ε > 0. We will prove that if T is uniformly monotone and x̄ solves (18), then any point x close enough to x̄ solves (19).
Lemma 2. Assume that T is maximal monotone and satisfies any of the three hypotheses of Theorem 2. Let U ⊆ dom T be open and take H : U → R^n continuous. Let V = {x ∈ R^n : Ax = b} with A ∈ R^{m×n}, b ∈ R^m, and W = Im(A^t). Assume that U ∩ V ≠ ∅. If x̄ ∈ U is a solution of (18), then for all ε > 0 there exists η > 0 such that any x ∈ B(x̄, η) ∩ V solves (19).

Proof: By Theorem 2 there exists δ such that T(B(x̄, δ)) ⊆ T^ε(x) for all x ∈ B(x̄, δ) (note that x̄ ∈ (dom T)^0 because x̄ ∈ U). By Proposition 7, T^{−1} is continuous, so that there exists θ such that ||x̄ − T^{−1}(u)|| ≤ δ/2 for all u ∈ B(T(x̄), θ). By continuity of H, there exists ρ such that ||H(x) − H(x̄)|| ≤ θ for all x ∈ B(x̄, ρ). Let η = min{ρ, δ/2}. Take x ∈ B(x̄, η) ∩ V. Since η ≤ ρ we have ||H(x) − H(x̄)|| ≤ θ. Let w = T^{−1}(T(x̄) + H(x̄) − H(x)). Then T(w) − T(x̄) = H(x̄) − H(x). It follows that ||T(w) − T(x̄)|| ≤ θ and therefore ||x̄ − w|| ≤ δ/2. Then ||x − w|| ≤ ||x − x̄|| + ||x̄ − w|| ≤ η + δ/2 ≤ δ. We have proved that w ∈ B(x̄, δ) and therefore T(w) ∈ T^ε(x). So T(x̄) + H(x̄) − H(x) ∈ T^ε(x). Since x̄ solves (18), T(x̄) + H(x̄) ∈ W. It follows that −H(x) ∈ T^ε(x) + W, i.e. x solves (19).

Lemma 2 can be extended to the case in which V, instead of being an affine manifold, is of the form V = {x ∈ R^n : f_i(x) ≤ 0 (1 ≤ i ≤ m)} with f_i convex and differentiable, but then the result must be modified in the following way: let I(x) = {i ∈ {1, ..., m} : f_i(x) = 0} and V(x̄) = {x ∈ V : I(x) = I(x̄)}. Then any point in B(x̄, δ) ∩ V(x̄) solves (19). The reason is that when x ∈ V(x̄) the cones N_V(x) and N_V(x̄) (though not equal, as in the case of the affine manifold) are close enough to each other. We have preferred to omit the details because in most relevant cases V will indeed be an affine manifold. A consequence of Lemma 2 for a particular algorithm will be presented in Section 4.

3. Bregman functions, quasi-Fejér convergence and paramonotone operators

This section contains the required material for the formulation and convergence analysis of the algorithm to be introduced in Section 4. Let C be a closed and convex subset of R^n and C^0 its interior.
Consider a convex real function g whose effective domain contains C and let D_g : C × C^0 → R be defined as

D_g(x, y) = g(x) − g(y) − ∇g(y)^t (x − y).   (20)

g is said to be a Bregman function (and D_g the Bregman distance induced by g) if the following conditions hold:

B1. g is continuously differentiable on C^0.
B2. g is strictly convex and continuous on C.
B3. For all δ ∈ R the partial level sets Γ(x, δ) = {y ∈ C^0 : D_g(x, y) ≤ δ} are bounded for all x ∈ C.
B4. If {y^k} ⊆ C^0 converges to y, then D_g(y, y^k) converges to 0.
B5. If {x^k} ⊆ C and {y^k} ⊆ C^0 are sequences such that {x^k} is bounded, lim_{k→∞} y^k = y and lim_{k→∞} D_g(x^k, y^k) = 0, then lim_{k→∞} x^k = y.

C^0 is called the zone of g. These definitions originate in the results of [3]. It is easy to check that D_g(x, y) ≥ 0 for all x ∈ C, y ∈ C^0, with D_g(x, y) = 0 if and only if x = y. We remark that B4 and B5 hold automatically when x^k and y are in C^0, as a consequence of B1, B2 and B3, and so they need to be checked only at points in the boundary ∂C of C. It has been proved in [6] that when C = R^n a sufficient condition for a convex and differentiable function g to be a Bregman function is lim_{||x||→∞} g(x)/||x|| = ∞. It is easy to check that D_g(·, y) is strictly convex and continuous in C for all y ∈ C^0. By convention, we take D_g(·, ·) = ∞ outside C × C^0.

Before presenting examples of Bregman functions, we introduce two subclasses to be used in the sequel. A Bregman function g is said to be zone coercive if:

B6. For all y ∈ R^n there exists x ∈ C^0 such that ∇g(x) = y.

A Bregman function g is said to be boundary coercive if:

B7. If {x^k} is a sequence contained in C^0 which converges to a point x in the boundary ∂C of C, and y is any point in C^0, then lim_{k→∞} ⟨∇g(x^k), y − x^k⟩ = −∞.

Observe that condition B6, together with the strict convexity of g, implies that (∇g)^{−1} is well defined on all of R^n, with values in C^0. It has been proved in [5] that zone coerciveness implies boundary coerciveness.

Example 4: C = R^n, g(x) = ||x||^2. In this case D_g(x, y) = ||x − y||^2.

Example 5: C = R^n_+, g(x) = Σ_{j=1}^n x_j log x_j, extended with continuity to R^n_+ with the convention that 0 log 0 = 0. In this case

D_g(x, y) = Σ_{j=1}^n (x_j log(x_j/y_j) + y_j − x_j).

Example 6: C = R^n_+, g(x) = Σ_{j=1}^n (x_j^α − x_j^β) with α ≥ 1, 0 < β < 1. For α = 2, β = ½ we get

D_g(x, y) = ||x − y||^2 + Σ_{j=1}^n (1/(2√y_j))(√x_j − √y_j)^2,

and for α = 1, β = ½ we get

D_g(x, y) = Σ_{j=1}^n (1/(2√y_j))(√x_j − √y_j)^2.

The Bregman functions of Examples 4–6 are all zone coercive, with the exception of Example 6 with α = 1, which is just boundary coercive. We will need in Section 4 a few properties of Bregman functions.

Proposition 8. Let g be a Bregman function with zone C^0. Then

i) D_g(z, x) − D_g(z, y) − D_g(y, x) = ⟨∇g(x) − ∇g(y), y − z⟩ for all z ∈ C, x, y ∈ C^0.
ii) If g is either zone or boundary coercive then dom ∂D_g(·, y) = C^0 for all y ∈ C^0.
Proof: (i) follows immediately from (20). (ii) has been proved in [4, Lemma 1].
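Formula (20) and the identity of Proposition 8(i) are easy to exercise numerically. A small sketch (not from the paper) with the Bregman functions of Examples 4 and 5:

```python
import numpy as np

def D(g, grad, x, y):
    # Bregman distance D_g(x, y) = g(x) - g(y) - <grad g(y), x - y>, cf. (20)
    return g(x) - g(y) - grad(y) @ (x - y)

# Example 4: g(x) = ||x||^2 on R^n, with D_g(x, y) = ||x - y||^2
g4, grad4 = (lambda x: x @ x), (lambda x: 2.0 * x)
# Example 5: g(x) = sum_j x_j log x_j on the positive orthant
g5, grad5 = (lambda x: np.sum(x * np.log(x))), (lambda x: np.log(x) + 1.0)

rng = np.random.default_rng(1)
for _ in range(100):
    x, y, z = rng.uniform(0.1, 5.0, size=(3, 4))
    assert abs(D(g4, grad4, x, y) - np.sum((x - y) ** 2)) < 1e-9
    assert D(g5, grad5, x, y) >= -1e-12          # D_g >= 0 ...
    assert abs(D(g5, grad5, x, x)) < 1e-12       # ... with equality at x = y
    # three-point identity of Proposition 8(i)
    lhs = D(g4, grad4, z, x) - D(g4, grad4, z, y) - D(g4, grad4, y, x)
    assert abs(lhs - (grad4(x) - grad4(y)) @ (y - z)) < 1e-9
```

The checks stay in the interior of the zone (coordinates bounded away from 0), which is where D_g is defined by (20).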
Next we introduce the notion of quasi-Fejér convergence. Let g be a Bregman function with zone C^0 and take Q ⊆ C. A sequence {x^k} ⊆ C^0 is said to be quasi-Fejér convergent to Q with respect to D_g if and only if for all z ∈ Q there exists {ε_k} ⊆ R_+ such that Σ_{k=0}^∞ ε_k < ∞ and

D_g(z, x^{k+1}) ≤ D_g(z, x^k) + ε_k.   (21)

The main properties of quasi-Fejér convergence are summarized in the next proposition.

Proposition 9. Assume that {x^k} is quasi-Fejér convergent to Q with respect to D_g. Then

i) {x^k} is bounded.
ii) For all z ∈ Q there exists a subsequence {x^{l_k}} of {x^k} such that lim_{k→∞}(D_g(z, x^{l_k}) − D_g(z, x^{l_k+1})) = 0.
iii) If a cluster point x̄ of {x^k} belongs to Q then x̄ = lim_{k→∞} x^k.

Proof:

i) By recurrence from (21), D_g(z, x^k) ≤ D_g(z, x^0) + Σ_{i=0}^{k−1} ε_i ≤ D_g(z, x^0) + Σ_{k=0}^∞ ε_k. The result follows from B3 with δ = D_g(z, x^0) + Σ_{k=0}^∞ ε_k.

ii) If there exists a subsequence {x^{l_k}} such that D_g(z, x^{l_k+1}) ≥ D_g(z, x^{l_k}), then, by (21), 0 ≤ D_g(z, x^{l_k+1}) − D_g(z, x^{l_k}) ≤ ε_{l_k}, and the result follows because lim_{k→∞} ε_k = 0. Otherwise there exists k̄ such that 0 ≥ D_g(z, x^{k+1}) − D_g(z, x^k) for all k ≥ k̄. Then the sequence {D_g(z, x^k)}_{k≥k̄} is nonincreasing and nonnegative, hence convergent, and therefore lim_{k→∞}(D_g(z, x^{k+1}) − D_g(z, x^k)) = 0, i.e. the whole sequence satisfies the requirement.

iii) Given any δ > 0, take k̂ such that Σ_{k=k̂}^∞ ε_k ≤ δ/2. Let {x^{l_k}} be a subsequence of {x^k} which converges to x̄. By B4 there exists k̄ such that l_k̄ ≥ k̂ and D_g(x̄, x^{l_k̄}) ≤ δ/2. Take any k ≥ l_k̄. Then, by recurrence from (21) with z = x̄,

D_g(x̄, x^k) ≤ D_g(x̄, x^{l_k̄}) + Σ_{i=l_k̄}^{k−1} ε_i ≤ D_g(x̄, x^{l_k̄}) + Σ_{i=k̂}^∞ ε_i ≤ δ/2 + δ/2 = δ.

It follows that lim_{k→∞} D_g(x̄, x^k) = 0. The result follows from B5 applied with the constant sequence x^k ≡ x̄ and y^k = x^k, y = x̄.

Quasi-Fejér convergence has been introduced in [7] and further studied in [10], where Propositions 9(i) and 9(iii) are proved for other types of generalized distances.
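Definition (21) is easy to instantiate. The toy sequence below (an illustration, not from the paper) is quasi-Fejér convergent to Q = {0} with respect to D_g for g(x) = ||x||^2, with ε_k = 2^{−k}; in line with Proposition 9(i) it remains bounded, even though D_g(0, x^k) never decreases:

```python
import numpy as np

# A sequence quasi-Fejér convergent to Q = {0} w.r.t. D_g(z, x) = ||z - x||^2
# (the Bregman distance of Example 4): force D(0, x^{k+1}) = D(0, x^k) + eps_k
# with the summable choice eps_k = 2^{-k}.
eps = [2.0 ** (-k) for k in range(60)]
x = [1.0]
for e in eps:
    x.append(np.sqrt(x[-1] ** 2 + e))    # ||x^{k+1}||^2 = ||x^k||^2 + eps_k

D = [xi ** 2 for xi in x]
# (21) holds (here with equality), so the sequence is quasi-Fejér convergent
assert all(D[k + 1] <= D[k] + eps[k] + 1e-12 for k in range(60))
# Proposition 9(i): bounded, by D(0, x^0) + sum_k eps_k = 1 + 2 = 3
assert max(D) <= 3.0
```

Summability of the ε_k is what keeps the sequence bounded; with the non-summable choice ε_k = 1/(k+1), D(0, x^k) would grow without bound.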
For the case of g(x) = ‖x‖^2, a stronger result can be found in [20], where Proposition 9(ii) is proved for
the whole sequence {x^k}. Other applications of the notion of quasi-Fejér convergence can be found in [16] and [17].

Next we deal with paramonotone operators, introduced in [5]. An operator T: R^n → P(R^n) is said to be paramonotone if it is monotone and additionally ⟨u − v, x − y⟩ = 0 with x, y ∈ R^n, u ∈ T(x), v ∈ T(y) implies u ∈ T(y), v ∈ T(x). The main properties of paramonotone operators are summarized in the next proposition.

Proposition 10.

i) If T = ∂f with f a convex function then T is paramonotone.

ii) Assume that T is paramonotone in C and let z be a solution of VIP(T, C). Then y ∈ C is a solution of VIP(T, C) if and only if there exists v ∈ T(y) such that ⟨v, z − y⟩ ≥ 0.

iii) Let T: R^n → R^n be a differentiable operator and J(x) its Jacobian matrix at x. If, for all x, J(x) + J(x)^t is positive semidefinite and its rank is equal to the rank of J(x), then T is paramonotone.

iv) If T₁ and T₂ are paramonotone then T₁ + T₂ is paramonotone.

Proof:

(i) Monotonicity of T is well known. Assume that ⟨u − v, x − y⟩ = 0 with x ∈ C, y ∈ C, u ∈ T(x), v ∈ T(y), and define f̄: C → R as f̄(z) = f(z) + ⟨u, x − z⟩. Then f̄ is convex and ∂f̄(z) = ∂f(z) − u = {w − u : w ∈ ∂f(z)}. Taking w = u, we get that 0 ∈ ∂f̄(x) and so x is an unrestricted minimizer of f̄. By hypothesis and the definition of subgradients, f(x) − f(y) ≤ ⟨u, x − y⟩ = ⟨v, x − y⟩ ≤ f(x) − f(y), implying f(x) = f(y) + ⟨u, x − y⟩ = f̄(y). Since f̄(x) = f(x) by definition of f̄, we conclude that f̄(x) = f̄(y), so that y is also an unrestricted minimizer of f̄, i.e. 0 ∈ ∂f̄(y). Therefore 0 = w − u for some w ∈ ∂f(y), which is equivalent to u ∈ ∂f(y) = T(y). Reversing the roles of (x, u), (y, v), the same argument proves that v ∈ T(x) and the result is established.

(ii) The "only if" part is immediate. We prove the "if" part. Assume that ⟨v, z − y⟩ ≥ 0 for some v ∈ T(y) and some solution z of VIP(T, C). Pick u ∈ T(z) such that ⟨u, x − z⟩ ≥ 0 for all x ∈ C. By monotonicity of T, 0 ≤ ⟨v, z − y⟩ ≤ ⟨u, z − y⟩ ≤ 0, implying that ⟨v, z − y⟩ = ⟨u, z − y⟩ = 0. Hence ⟨v − u, z − y⟩ = 0 and by paramonotonicity of T we get u ∈ T(y).
These two facts together imply, for all x ∈ C,

⟨u, x − y⟩ = ⟨u, x − z⟩ + ⟨u, z − y⟩ = ⟨u, x − z⟩ ≥ 0,

where we used the definition of solution in the rightmost inequality. It follows that y is a solution of VIP(T, C).

(iii) See [11, Proposition 6].

(iv) Follows easily from the definition of paramonotonicity.
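For intuition, a hypothetical numerical sketch (not from the paper): by Proposition 10(i), the gradient of a convex quadratic is paramonotone, while a rotation T(x) = Ax with A skew-symmetric is monotone but not paramonotone, since ⟨Ax − Ay, x − y⟩ = 0 for all x, y even though Ax ≠ Ay in general; note that for such A the rank condition of Proposition 10(iii) fails.

```python
import numpy as np

rng = np.random.default_rng(2)

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # skew-symmetric: A + A^t = 0 has rank 0, but rank A = 2,
                              # so the rank condition of Proposition 10(iii) fails
x, y = rng.standard_normal((2, 2))

# Monotone: <Ax - Ay, x - y> = <A(x-y), x-y> = 0 for skew-symmetric A.
assert np.isclose((A @ x - A @ y) @ (x - y), 0.0)
# Not paramonotone: the zero inner product above would force Ax = Ay, which fails.
assert not np.allclose(A @ x, A @ y)

M = np.array([[2.0, 0.0],
              [0.0, 1.0]])   # symmetric positive definite: T = grad of a convex
                             # quadratic, hence paramonotone by Proposition 10(i)
# Here <Mx - My, x - y> = 0 can only happen when Mx = My.
assert (M @ x - M @ y) @ (x - y) > 0 or np.allclose(M @ x, M @ y)
```

The skew example is the standard one showing that monotonicity alone does not suffice for the cutting-plane-type argument in Proposition 10(ii).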
Finally we introduce a gap function for VIP(T, C), as defined in Section 1. Let

q_{T,C}(x) = sup{⟨v, x − y⟩ : y ∈ C ∩ dom T, v ∈ T(y)}.    (22)

Several sufficient conditions for finiteness of q_{T,C} in C are given in [4].

4. An inexact proximal point method with Bregman distances for variational inequalities

Take T: R^n → P(R^n) and C ⊆ R^n closed and convex. The variational inequality problem VIP(T, C) consists of finding z ∈ C such that there exists u ∈ T(z) satisfying ⟨u, x − z⟩ ≥ 0 for all x ∈ C. From now on S(T, C) will denote the set of solutions of VIP(T, C). We make the following assumptions:

A1. T = T̂ + N_V with dom T̂ closed, C ⊆ (dom T̂)^0 and V closed and convex (N_V is the normal cone operator defined after Theorem 2).

A2. T̂ is maximal monotone.

A3. C^0 ∩ V ≠ ∅.

A4. S(T, C) ≠ ∅.

Consider a Bregman function g with zone C^0 (i.e. conditions B1-B5 hold for g) satisfying

A5. Either a) g is zone coercive, or b) g is boundary coercive and q_{T,C}(x) < ∞ for all x ∈ C (with q_{T,C} as in (22)).

Take a sequence {λ_k} ⊂ R such that λ̂ ≤ λ_k ≤ λ̄ for some 0 < λ̂ ≤ λ̄, and a sequence {ε_k} ⊂ R_{++} such that Σ_{k=0}^∞ ε_k < ∞. Let ∂₁D_g be the subdifferential of D_g with respect to its first argument. The algorithm is defined as follows.

Initialization:

x^0 ∈ C^0 ∩ V.    (23)

Iterative Step: Given x^k, define T_k: R^n → P(R^n) as

T_k = T̂^{ε_k} + N_V + λ_k ∂₁D_g(·, x^k),    (24)

and take x^{k+1} such that

0 ∈ T_k(x^{k+1}).    (25)

The exact algorithm considered in [4] is just (23)-(25) with ε_k = 0 for all k. A convergence result similar to our next theorem can be found in [4], where an assumption A1' weaker than A1 is used: instead of the decomposition T = T̂ + N_V with C ⊆ (dom T)^0, it is
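As a hypothetical illustration of (23)-(25) (not part of the paper), take V = R^n (so that N_V ≡ {0}), C = R^n_+, g the entropy of Example 5 and T̂ = ∇f with f(x) = ½‖x − a‖^2. Since ∇g(x) − ∇g(x^k) = log x − log x^k, the subproblem (25) separates into the scalar equations x_j − a_j + λ_k(log x_j − log x_j^k) = 0, solvable by bisection; the enlargement is modeled loosely here by a summable inner tolerance. All names and constants below are illustrative choices.

```python
import numpy as np

def solve_coord(a_j, xk_j, lam, tol):
    """Bisection for h(t) = t - a_j + lam*(log t - log xk_j) = 0; h is increasing on (0, inf)."""
    lo, hi = 1e-16, max(a_j, xk_j) + 40.0 * lam   # h(hi) > 0; root is in (lo, hi] here
    for _ in range(200):                          # hard cap guarantees termination
        if hi - lo <= tol:
            break
        mid = 0.5 * (lo + hi)
        if mid - a_j + lam * (np.log(mid) - np.log(xk_j)) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a = np.array([1.0, -1.0])     # minimize f(x) = 0.5*||x - a||^2 over C = R^2_+
x = np.array([0.5, 0.5])      # x^0 in C^0, cf. (23)
lam = 1.0                     # lambda_k held fixed for simplicity
for k in range(80):
    tol_k = 1e-3 * 0.5 ** k   # summable inexactness, standing in for eps_k
    x = np.array([solve_coord(a_j, x_j, lam, tol_k) for a_j, x_j in zip(a, x)])
    assert np.all(x > 0)      # the Bregman term keeps the iterates interior

assert np.allclose(x, np.maximum(a, 0.0), atol=1e-3)   # solution of VIP(grad f, R^n_+)
```

The run shows the two effects discussed below: the constraint x ≥ 0 never appears explicitly in a subproblem, and moderate inner tolerances do not destroy convergence.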
assumed that dom T ⊇ C^0 and that T is pseudomonotone (see [4]). Our slightly stronger hypothesis A1 gives rise to a considerable simplification in the proofs. We present next the convergence properties of the method.

Theorem 3. Assume that A1-A5 hold. Let {x^k} be the sequence generated by (23)-(25). Then

i) The sequence {x^k} is well defined and is contained in C^0 ∩ V.

ii) The sequence {x^k} is bounded.

iii) If a cluster point x̄ of {x^k} solves VIP(T, C) then x̄ = lim_{k→∞} x^k.

iv) If z ∈ S(T, C) then there exist a cluster point x* of {x^k} and u* ∈ T(x*) such that ⟨u*, z − x*⟩ ≥ 0.

v) If T̂ is paramonotone then the sequence {x^k} converges to a solution x* of VIP(T, C).

Proof:

i) In [4, Theorems 1 and 2] it was proved, using essentially B2, A2 and A5, that the operator T_{k,y} = T̂ + N_V + λ_k ∂₁D_g(·, y) has a unique zero for any y ∈ C^0. We proceed by induction. The result holds for k = 0 by A3 and (23). Assume that x^k exists and belongs to C^0 ∩ V. By Propositions 1(i), 2(i) and A2, T̂ = T̂^0 ⊆ T̂^{ε_k}, so that, using (24), T_{k,x^k} ⊆ T_k, and then the unique zero of T_{k,x^k} is a zero of T_k, i.e. x^{k+1} exists. Since 0 ∈ T_k(x^{k+1}), x^{k+1} ∈ dom T_k. From (24), dom T_k = dom T̂^{ε_k} ∩ dom N_V ∩ dom ∂₁D_g(·, x^k) ⊆ V ∩ C^0, using A1, dom N_V = V, A5 and Proposition 8(ii). It follows that x^{k+1} ∈ V ∩ C^0 and the induction step is complete.

ii)-(iii) It follows easily from (20) that ∂₁D_g(x, x^k) = ∇g(x) − ∇g(x^k) (note that ∇g(x^k) exists by (i) and B1). Then, using (24), we can rewrite (25) as

λ_k(∇g(x^k) − ∇g(x^{k+1})) ∈ (T̂^{ε_k} + N_V)(x^{k+1}).    (26)

Since N_V is maximal monotone, using Proposition 1(ii) with ε₁ = ε_k, ε₂ = 0, Proposition 2(i) and A1, we get

T̂^{ε_k} + N_V = T̂^{ε_k} + N_V^0 ⊆ (T̂ + N_V)^{ε_k} = T^{ε_k}.    (27)

Let u^k = λ_k(∇g(x^k) − ∇g(x^{k+1})). From (26), (27),

u^k ∈ T^{ε_k}(x^{k+1}).    (28)

Take z ∈ S(T, C) (nonempty by A4). Then there exists v ∈ T(z) such that, for all x ∈ C,

⟨v, x − z⟩ ≥ 0.    (29)
By (1) and (28),

−ε_k ≤ ⟨u^k − v, x^{k+1} − z⟩ = ⟨u^k, x^{k+1} − z⟩ − ⟨v, x^{k+1} − z⟩ ≤ ⟨u^k, x^{k+1} − z⟩ = λ_k⟨∇g(x^k) − ∇g(x^{k+1}), x^{k+1} − z⟩ = λ_k[D_g(z, x^k) − D_g(z, x^{k+1}) − D_g(x^{k+1}, x^k)] ≤ λ_k[D_g(z, x^k) − D_g(z, x^{k+1})],    (30)
using (29) in the first inequality, Proposition 8(i) in the last equality, and nonnegativity of D_g in the last inequality. It follows from (30) that

D_g(z, x^{k+1}) ≤ D_g(z, x^k) + ε_k/λ_k ≤ D_g(z, x^k) + ε_k/λ̂.    (31)

Since Σ_{k=0}^∞ ε_k < ∞ and S(T, C) ⊆ C, we have proved that {x^k} is quasi-Fejér convergent to S(T, C), so that (ii) and (iii) follow from Propositions 9(i), 9(iii) respectively.

iv) Fix z ∈ S(T, C). By Proposition 9(ii) there exists a subsequence {x^{l_k}} of {x^k} such that lim_{k→∞} (D_g(z, x^{l_k}) − D_g(z, x^{l_k+1})) = 0. By (30),

−ε_{l_k} ≤ ⟨u^{l_k}, x^{l_k+1} − z⟩ ≤ λ_{l_k}(D_g(z, x^{l_k}) − D_g(z, x^{l_k+1})).    (32)

Since λ_k ≤ λ̄ for all k and Σ_{k=0}^∞ ε_k < ∞, we conclude from (32) that

lim_{k→∞} ⟨u^{l_k}, x^{l_k+1} − z⟩ = 0.    (33)

In order to find cluster points of {u^{l_k}}, {x^{l_k+1}} we must take care of some technical details. Since u^k ∈ (T̂^{ε_k} + N_V)(x^{k+1}) by (26), we can write u^k = ū^k + w^k with ū^k ∈ T̂^{ε_k}(x^{k+1}), w^k ∈ N_V(x^{k+1}). It follows easily from the definition of N_V that ⟨w, y − x⟩ ≤ 0 for all x, y ∈ V, w ∈ N_V(x). Since x^{k+1} ∈ V by (i) and z ∈ S(T, C) ⊆ dom T = dom T̂ ∩ dom N_V ⊆ dom N_V = V, we conclude that ⟨w^k, z − x^{k+1}⟩ ≤ 0, so that

⟨u^k, z − x^{k+1}⟩ = ⟨ū^k, z − x^{k+1}⟩ + ⟨w^k, z − x^{k+1}⟩ ≤ ⟨ū^k, z − x^{k+1}⟩.    (34)

Let Q = {x^k : k = 1, 2, ...}. Since Q ⊂ C^0 by (i), we get from A1 that

Q ⊆ C ⊆ (dom T̂)^0.    (35)

Take ε̄ such that ε_k ≤ ε̄ for all k. By Proposition 1(i),

ū^k ∈ T̂^{ε_k}(x^{k+1}) ⊆ T̂^{ε̄}(x^{k+1}) ⊆ T̂^{ε̄}(Q).    (36)

Since T̂ is maximal monotone by A2 and dom T̂ is closed by A1, we are within the hypotheses of Proposition 5, and so T̂^{ε̄}(Q) is bounded, i.e. {ū^k} is bounded. Since Q is bounded by (ii), we may assume without loss of generality (i.e. refining the subsequence {x^{l_k}} if necessary) that {ū^{l_k}}, {x^{l_k+1}} converge to some u*, x* respectively, so that we get from (33) and (34)

⟨u*, z − x*⟩ ≥ 0.    (37)
Since lim_{k→∞} ε_k = 0, by (36), A2 and Propositions 1(iv) and 2(i) we get that u* ∈ T̂(x*). It is easy to check that 0 ∈ N_V(x) for all x ∈ V, so that

u* = u* + 0 ∈ T̂(x*) + N_V(x*) = (T̂ + N_V)(x*) = T(x*).    (38)

(iv) follows immediately from (37), (38).

v) Since N_V = ∂I_V, with I_V the indicator function of V, N_V is paramonotone by Proposition 10(i). Since T̂ is paramonotone, T = T̂ + N_V is paramonotone by Proposition 10(iv). By (iv) and Proposition 10(ii), {x^k} has a cluster point x* belonging to S(T, C). By (iii), x* = lim_{k→∞} x^k.

The main advantage of (23)-(25) over the method in [4] (i.e. (23)-(25) with ε_k = 0 for all k) is that now, instead of finding the unique zero of T̂ + N_V + λ_k ∂₁D_g(·, x^k), the k-th subproblem reduces to finding any zero of T̂^{ε_k} + N_V + λ_k ∂₁D_g(·, x^k). In both cases the subproblems are generally easier than the original problem, because the term λ_k ∂₁D_g(·, x^k) has a penalization effect, forcing the sequence {x^k} to remain in C^0, so that the constraints in C are absent in the subproblems, which are either unconstrained (when V = R^n), in which case they consist of finding a zero of the well behaved operator T̂ + λ_k[∇g(·) − ∇g(x^k)], or constrained only by V, i.e. they reduce to VIP(T̂ + H_k, V), with H_k(x) = λ_k(∇g(x) − ∇g(x^k)). Since typically V will contain those constraints in D = C ∩ V which make D^0 empty, V will in general be an affine manifold, and then the equality constrained VIP(T̂ + H_k, V) will be considerably easier than VIP(T, C) = VIP(T̂, C ∩ V). Nevertheless, in the algorithm of [4] such easier subproblems must be solved exactly (and they have unique solutions). Now, with the help of Lemma 2, we will show that when V is an affine manifold the solution sets of the subproblems of (23)-(25) contain a whole ball in V around the exact solution corresponding to ε_k = 0, under adequate regularity assumptions on T̂.
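The robustness asserted by Theorem 3 can be seen in a toy run (a hypothetical sketch, not from the paper): with g(x) = ‖x‖^2 (Example 4), C = V = R^n and T = ∇f for a strongly convex quadratic f, the exact step (25) solves a linear system, and perturbing it by summable errors, loosely playing the role of the enlargement T^{ε_k}, still yields convergence to the solution z = 0. The matrix and step sizes are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

Q = np.diag([1.0, 2.0])        # T(x) = Qx = grad f(x), f strongly convex, solution z = 0
lam = 1.0                      # lambda_k held fixed
I = np.eye(2)

x = np.array([3.0, -4.0])      # x^0
for k in range(60):
    # Exact subproblem (25): 0 = Qx + lam*(grad g(x) - grad g(x^k)), grad g(x) = 2x,
    # i.e. (Q + 2*lam*I) x = 2*lam*x^k.
    x_exact = np.linalg.solve(Q + 2.0 * lam * I, 2.0 * lam * x)
    e = 0.01 * 0.5 ** k * rng.uniform(-1.0, 1.0, 2)   # summable inexactness
    x = x_exact + e

assert np.linalg.norm(x) < 1e-6   # the perturbed method still converges to z = 0
```

Each exact step contracts ‖x‖ by a factor 2λ/(2λ + μ) per eigendirection (μ an eigenvalue of Q), and since the errors are summable the accumulated perturbation vanishes, matching the quasi-Fejér estimate (31).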
This means that the convergence properties of the algorithm are not affected by sufficiently small errors in the solution of the subproblems, and also that any explicit algorithm with feasible iterates guaranteed to converge to a solution of a linear equality constrained variational inequality problem will find a solution of the subproblem after finitely many iterations.

Corollary 4. Assume that A1-A5 hold, that V = {x ∈ R^n : Ax = b} with A ∈ R^{m×n}, b ∈ R^m, and that, additionally, either

i) T̂ = ∂f for some convex function f, or

ii) T̂ is uniformly monotone and point-to-point in (dom T̂)^0, or

iii) T̂ is uniformly monotone and dom T̂ = R^n.

Let {x^k} be the sequence generated by (23)-(25) and let x̂^k be the (unique) solution of

0 ∈ [T̂ + N_V + λ_k ∂₁D_g(·, x^k)](x).    (39)

Then there exists η_k > 0 such that B(x̂^k, η_k) ∩ V is contained in the solution set of

0 ∈ T_k(x)    (40)
with T_k as in (24), i.e. any element of B(x̂^k, η_k) ∩ V can be taken as x^{k+1} in algorithm (23)-(25).

Proof: Let H_k(x) = λ_k(∇g(x) − ∇g(x^k)) = λ_k ∂₁D_g(x, x^k). We check that we are within the hypotheses of Lemma 2 with U = C^0, H = H_k, T = T̂, x̄ = x̂^k, ε = ε_k and W = Im(A^t). H_k is continuous in C^0 by B1, C^0 ⊂ dom T̂ by A1, (39) is equivalent to (18) because N_V(x) = Im(A^t) for all x ∈ V, and (40) is equivalent to (19) for the same reason. Since (i)-(iii) are the hypotheses of Theorem 2, Lemma 2 holds and we can take η_k = η, where η is as given by Lemma 2.

References

[1] Alber, Ya.I. On the regularization method for variational inequalities with nonsmooth unbounded operators in a Banach space. Applied Mathematics Letters 6,4 (1993).

[2] Brézis, H. Opérateurs Monotones Maximaux et Semi-groupes de Contractions dans les Espaces de Hilbert. Mathematics Studies 5, North Holland, New York (1973).

[3] Bregman, L.M. The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics 7,3 (1967).

[4] Burachik, R.S., Iusem, A.N. A generalized proximal point algorithm for the variational inequality problem in a Hilbert space (submitted for publication).

[5] Censor, Y., Iusem, A.N., Zenios, S.A. An interior point method with Bregman functions for the variational inequality problem with paramonotone operators (submitted for publication).

[6] De Pierro, A.R., Iusem, A.N. A relaxed version of Bregman's method for convex programming. Journal of Optimization Theory and Applications 51 (1986).

[7] Ermol'ev, Yu.M. On the method of generalized stochastic gradients and quasi-Fejér sequences. Cybernetics 5 (1969).

[8] Hiriart-Urruty, J.-B., Lemaréchal, C. Convex Analysis and Minimization Algorithms. Springer, Berlin (1993).

[9] Iusem, A.N. An iterative algorithm for the variational inequality problem.
Computational and Applied Mathematics 13 (1994).

[10] Iusem, A.N., Svaiter, B.F., Teboulle, M. Entropy-like proximal methods in convex programming. Mathematics of Operations Research 19 (1994).

[11] Iusem, A.N. On some properties of paramonotone operators (submitted for publication).

[12] Kabbadj, S. Méthodes Proximales Entropiques. Thesis in Mathematics, Université de Montpellier, France (1994).

[13] Kiwiel, K.C. Methods of Descent for Nondifferentiable Optimization. Lecture Notes in Mathematics 1133, Springer, Berlin (1985).
[14] Kiwiel, K.C. Proximal minimization methods with generalized Bregman functions. SIAM Journal on Control and Optimization 35 (1997).

[15] Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekonomika i Matematicheskie Metody 12 (1976).

[16] Lemaire, B. Bounded diagonally stationary sequences in convex optimization. Journal of Convex Analysis 1 (1994).

[17] Lemaire, B. On the convergence of some iterative methods in convex analysis. In Lecture Notes in Economics and Mathematical Systems 129, Springer, Berlin (1995).

[18] Liskovets, O.A. Regularization of problems with discontinuous monotone, arbitrarily perturbed operators. Soviet Mathematics, Doklady 28 (1983).

[19] Liskovets, O.A. Discrete regularization of problems with arbitrarily perturbed monotone operators. Soviet Mathematics, Doklady 34 (1987).

[20] Martinet, B. Algorithmes pour la Résolution de Problèmes d'Optimisation et de Minimax. Thèse d'État, Université de Grenoble, France (1972).

[21] Rockafellar, R.T. Local boundedness of nonlinear monotone operators. Michigan Mathematical Journal 16 (1969).

[22] Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization 14 (1976).

[23] Zeidler, E. Functional Analysis and Its Applications, Part II/B (Nonlinear Monotone Operators). Springer, Berlin (1985).
More informationSome Properties of the Augmented Lagrangian in Cone Constrained Optimization
MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented
More informationError bounds for proximal point subproblems and associated inexact proximal point algorithms
Error bounds for proximal point subproblems and associated inexact proximal point algorithms M. V. Solodov B. F. Svaiter Instituto de Matemática Pura e Aplicada, Estrada Dona Castorina 110, Jardim Botânico,
More information1 Introduction The study of the existence of solutions of Variational Inequalities on unbounded domains usually involves the same sufficient assumptio
Coercivity Conditions and Variational Inequalities Aris Daniilidis Λ and Nicolas Hadjisavvas y Abstract Various coercivity conditions appear in the literature in order to guarantee solutions for the Variational
More informationConvex Analysis and Optimization Chapter 2 Solutions
Convex Analysis and Optimization Chapter 2 Solutions Dimitri P. Bertsekas with Angelia Nedić and Asuman E. Ozdaglar Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com
More informationc 2013 Society for Industrial and Applied Mathematics
SIAM J. OPTIM. Vol. 3, No., pp. 109 115 c 013 Society for Industrial and Applied Mathematics AN ACCELERATED HYBRID PROXIMAL EXTRAGRADIENT METHOD FOR CONVEX OPTIMIZATION AND ITS IMPLICATIONS TO SECOND-ORDER
More informationA Unified Approach to Proximal Algorithms using Bregman Distance
A Unified Approach to Proximal Algorithms using Bregman Distance Yi Zhou a,, Yingbin Liang a, Lixin Shen b a Department of Electrical Engineering and Computer Science, Syracuse University b Department
More informationSubgradient. Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes. definition. subgradient calculus
1/41 Subgradient Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes definition subgradient calculus duality and optimality conditions directional derivative Basic inequality
More informationExtreme Abridgment of Boyd and Vandenberghe s Convex Optimization
Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The
More informationEC 521 MATHEMATICAL METHODS FOR ECONOMICS. Lecture 1: Preliminaries
EC 521 MATHEMATICAL METHODS FOR ECONOMICS Lecture 1: Preliminaries Murat YILMAZ Boğaziçi University In this lecture we provide some basic facts from both Linear Algebra and Real Analysis, which are going
More informationON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES
U.P.B. Sci. Bull., Series A, Vol. 80, Iss. 3, 2018 ISSN 1223-7027 ON A HYBRID PROXIMAL POINT ALGORITHM IN BANACH SPACES Vahid Dadashi 1 In this paper, we introduce a hybrid projection algorithm for a countable
More information1 Directional Derivatives and Differentiability
Wednesday, January 18, 2012 1 Directional Derivatives and Differentiability Let E R N, let f : E R and let x 0 E. Given a direction v R N, let L be the line through x 0 in the direction v, that is, L :=
More information1 Lyapunov theory of stability
M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability
More informationMaximal monotone operators are selfdual vector fields and vice-versa
Maximal monotone operators are selfdual vector fields and vice-versa Nassif Ghoussoub Department of Mathematics, University of British Columbia, Vancouver BC Canada V6T 1Z2 nassif@math.ubc.ca February
More informationLECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE
LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE CONVEX ANALYSIS AND DUALITY Basic concepts of convex analysis Basic concepts of convex optimization Geometric duality framework - MC/MC Constrained optimization
More informationReal Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi
Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.
More informationA CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE
Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received
More informationMATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018
MATH 57: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 18 1 Global and Local Optima Let a function f : S R be defined on a set S R n Definition 1 (minimizers and maximizers) (i) x S
More informationResearch Article Algorithms for a System of General Variational Inequalities in Banach Spaces
Journal of Applied Mathematics Volume 2012, Article ID 580158, 18 pages doi:10.1155/2012/580158 Research Article Algorithms for a System of General Variational Inequalities in Banach Spaces Jin-Hua Zhu,
More informationConvex Optimization Notes
Convex Optimization Notes Jonathan Siegel January 2017 1 Convex Analysis This section is devoted to the study of convex functions f : B R {+ } and convex sets U B, for B a Banach space. The case of B =
More informationOn the Weak Convergence of the Extragradient Method for Solving Pseudo-Monotone Variational Inequalities
J Optim Theory Appl 208) 76:399 409 https://doi.org/0.007/s0957-07-24-0 On the Weak Convergence of the Extragradient Method for Solving Pseudo-Monotone Variational Inequalities Phan Tu Vuong Received:
More informationVariational Inequalities. Anna Nagurney Isenberg School of Management University of Massachusetts Amherst, MA 01003
Variational Inequalities Anna Nagurney Isenberg School of Management University of Massachusetts Amherst, MA 01003 c 2002 Background Equilibrium is a central concept in numerous disciplines including economics,
More informationConvex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version
Convex Optimization Theory Chapter 5 Exercises and Solutions: Extended Version Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com
More informationMetric Spaces and Topology
Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies
More informationOptimality Conditions for Constrained Optimization
72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)
More informationOn Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q)
On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q) Andreas Löhne May 2, 2005 (last update: November 22, 2005) Abstract We investigate two types of semicontinuity for set-valued
More informationUNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems
UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction
More informationA Dykstra-like algorithm for two monotone operators
A Dykstra-like algorithm for two monotone operators Heinz H. Bauschke and Patrick L. Combettes Abstract Dykstra s algorithm employs the projectors onto two closed convex sets in a Hilbert space to construct
More informationKaisa Joki Adil M. Bagirov Napsu Karmitsa Marko M. Mäkelä. New Proximal Bundle Method for Nonsmooth DC Optimization
Kaisa Joki Adil M. Bagirov Napsu Karmitsa Marko M. Mäkelä New Proximal Bundle Method for Nonsmooth DC Optimization TUCS Technical Report No 1130, February 2015 New Proximal Bundle Method for Nonsmooth
More informationAn inexact strategy for the projected gradient algorithm in vector optimization problems on variable ordered spaces
An inexact strategy for the projected gradient algorithm in vector optimization problems on variable ordered spaces J.Y. Bello-Cruz G. Bouza Allende November, 018 Abstract The variable order structures
More informationCHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.
1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function
More informationMathematics for Economists
Mathematics for Economists Victor Filipe Sao Paulo School of Economics FGV Metric Spaces: Basic Definitions Victor Filipe (EESP/FGV) Mathematics for Economists Jan.-Feb. 2017 1 / 34 Definitions and Examples
More informationMOSCO STABILITY OF PROXIMAL MAPPINGS IN REFLEXIVE BANACH SPACES
MOSCO STABILITY OF PROXIMAL MAPPINGS IN REFLEXIVE BANACH SPACES Dan Butnariu and Elena Resmerita Abstract. In this paper we establish criteria for the stability of the proximal mapping Prox f ϕ =( ϕ+ f)
More informationRefined optimality conditions for differences of convex functions
Noname manuscript No. (will be inserted by the editor) Refined optimality conditions for differences of convex functions Tuomo Valkonen the date of receipt and acceptance should be inserted later Abstract
More informationSubgradients. subgradients and quasigradients. subgradient calculus. optimality conditions via subgradients. directional derivatives
Subgradients subgradients and quasigradients subgradient calculus optimality conditions via subgradients directional derivatives Prof. S. Boyd, EE392o, Stanford University Basic inequality recall basic
More information