On proximal-like methods for equilibrium programming


Nils Langenberg
Department of Mathematics, University of Trier, Trier, Germany

Abstract. In [?] Flam and Antipin discussed a proximal-like method for equilibrium programming problems. In fact, they make use of a Bregman function with Lipschitz continuous gradient to generate well-posed subproblems. However, they require the feasible set (i.e., the set of all possible strategies) to be compact. Further, as a sort of inexactness, they only allow the use of $\varepsilon$-subgradients, but no inexact solution of the subproblems in its proper sense. The present paper extends their method to the case of unbounded strategy sets and thus permits the treatment of a much broader problem class. Further, besides the use of $\varepsilon$-subgradients, we admit the inexact solution of the auxiliary problems by means of a summable-error criterion in order to increase the numerical attractiveness of our method. Moreover, we develop another proof technique which permits us to omit the frequently assumed Lipschitz continuity of the gradient of the Bregman function in almost all of our results. Thus, under rather restrictive but not unusual additional assumptions, zone-coercive Bregman-like functions can also be used. These functions do not have Lipschitz continuous gradients; their advantage consists in the obtained interior-point effect, i.e., the generated subproblems may be treated as unconstrained.

Key words: Saddle point problems, Nash equilibria, Bregman distances, unbounded feasible sets, inexact solution

AMS Subject Classifications: 91A06, 91A10, 90C25, 90C30, 90D10

1 Introduction

In the 1950s Nash [?,?] introduced and studied a concept of equilibria in non-cooperative games. It mainly consists in the idea that a given strategy tuple should be considered an equilibrium if no player can improve his situation (i.e., reduce his costs, increase his profits, etc.) by a unilateral deviation from this strategy tuple.

For the moment, denote by $K_i \subseteq \mathbb{R}^{n_i}$ the set of all strategies of each player $i \in \{1, \dots, N\}$ and
$$K := \prod_{i=1}^{N} K_i \subseteq \mathbb{R}^n, \qquad (1)$$
where $n = \sum_{i=1}^{N} n_i$ denotes the dimension of the game. For the convergence analysis below, $K$ does not have to be of product structure as in (1). Further, each player $i$ has an objective function $f_i : K \to \mathbb{R}$; without loss of generality we agree that we deal with minimization (e.g., of costs). Finally, denote
$$(y_{-i}, x_i) := (x_i, y_{-i}) := (y_1, \dots, y_{i-1}, x_i, y_{i+1}, \dots, y_N),$$
meaning the situation that player $i$ chooses strategy $x_i$ and all the other players $j$ choose $y_j$, where $x, y \in K$ are arbitrary strategy tuples.

Now we are able to reformulate the Nash equilibrium problem in mathematical terms. Given a strategy tuple $x \in K$, some player $i$ would not deviate from this tuple unilaterally, i.e., from his strategy $x_i$, as far as his costs would increase otherwise, i.e., as far as for each strategy $y_i \in K_i$ we have
$$f_i(x) \le f_i(x_{-i}, y_i). \qquad (2)$$
Thus, some strategy tuple $x^*$ is a Nash equilibrium if and only if (2) holds for each player; in other words, if and only if
$$f_i(x^*) \le f_i(x^*_{-i}, y_i) \quad \forall\, y_i \in K_i, \; i = 1, \dots, N. \qquad (3)$$

Definition 1 (Nikaido-Isoda function / Ky Fan function). The Nikaido-Isoda function (sometimes also called Ky Fan function) related to the objective functionals $f_i$ is defined by
$$F(a, b) := \sum_{i=1}^{N} \left[ f_i(a_{-i}, b_i) - f_i(a) \right], \quad a, b \in K. \qquad (4)$$

One easily observes the following facts:

Remark 1.
1. $F(x, x) = 0$ for each $x \in K$. Flam and Antipin [?] used a function without this property, namely $\tilde F(a, b) = \sum_{i=1}^{N} f_i(a_{-i}, b_i)$.
2. In standard Nash games, i.e., games with property (1), some $x^* \in K$ is a Nash equilibrium iff one has $F(x^*, x) \ge 0$ for each $x \in K$, respectively $\tilde F(x^*, x) \ge \tilde F(x^*, x^*)$ for each $x \in K$. In generalized Nash games, i.e., games without property (1), an $x^* \in K$ fulfilling $F(x^*, x) \ge 0$ for each $x \in K$ is a so-called normalized or variational Nash equilibrium [?].
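The defining identity $F(x, x) = 0$ of Remark 1 can be illustrated numerically. The following is a minimal sketch, assuming a hypothetical two-player game with quadratic costs; the cost functions and the strategy tuple are illustrative choices, not taken from the paper.

```python
def nikaido_isoda(f, a, b):
    """Nikaido-Isoda function (4): F(a,b) = sum_i [f_i(a_{-i}, b_i) - f_i(a)],
    where f[i](x) is player i's cost and each strategy is a scalar here."""
    total = 0.0
    for i in range(len(f)):
        a_swap = list(a)
        a_swap[i] = b[i]          # player i deviates to b_i, others keep a_j
        total += f[i](a_swap) - f[i](a)
    return total

# Hypothetical two-player game with quadratic, coupled costs:
f = [lambda x: (x[0] - 1.0) ** 2 + 0.1 * x[0] * x[1],
     lambda x: (x[1] + 0.5) ** 2 + 0.1 * x[0] * x[1]]

x = [0.3, -0.2]
print(nikaido_isoda(f, x, x))   # F(x, x) = 0, as stated in Remark 1
```

A negative value $F(x, b) < 0$ for some $b$ then signals that at least one player profits from a unilateral deviation, so $x$ is not an equilibrium.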

3. Combining both of the latter properties, we have that $x^* \in K$ is a Nash equilibrium iff $x^* \in \operatorname{Arg\,min}\{F(x^*, x) : x \in K\}$.¹

The last remark leads to the following general problem:

Definition 2 (Statement of the problem). Given a set $K \subseteq \mathbb{R}^n$ and a bivariate function $F : K \times K \to \mathbb{R}$, find some $x^* \in K$ such that
$$x^* \in \operatorname{Arg\,min}\{F(x^*, x) : x \in K\}. \qquad (5)$$

As the following examples show, besides Nash equilibrium problems several other problem classes are covered by problem (5).

Example 1 (see [?] for further examples and some details).
1. Nonlinear optimization problems: Given some $f : K \to \mathbb{R}$, let $F(x, y) := f(y) - f(x)$. Then $x^*$ minimizes $f$ on $K$ iff it solves (5).
2. Variational inequality problems: Given some $T : K \to \mathbb{R}^n$, let $F(x, y) := \langle T(x), y - x \rangle$. Then some $x^*$ solves (5) iff it solves the related variational inequality problem in the common sense, i.e., $\langle T(x^*), y - x^* \rangle \ge 0$ for all $y \in K$, see e.g. [?]. This case is intensively investigated in [?] using assumptions similar to those made below.
3. Convex-concave saddle point problems: Let $K_i \subseteq \mathbb{R}^{n_i}$ for $i = 1, 2$ and let $K = K_1 \times K_2$ be the product of two nonempty convex compact subsets. Suppose that the Lagrangian $L(x, y)$ of some convex saddle-point problem is convex-concave. Let $F(x, y) := L(y_1, x_2) - L(x_1, y_2)$, where $x = (x_1, x_2), y = (y_1, y_2) \in K$. Then $x^*$ is a min-max saddle point of $L$ iff it solves (5).

Remark 2. It is also possible to formulate Nash equilibrium problems in terms of variational inequalities. Define $T(x) := (\nabla_{x_i} f_i(x))_{i=1}^{N}$; then $x^*$ is a Nash equilibrium iff it solves VI$(T, K)$ (see [?]). However, this approach has two central disadvantages:

¹ Argmin is written with a capital A to indicate that this set is possibly not a singleton (in fact, in many problems it is even unbounded). When uniqueness is ensured, we will use the notation argmin with a small a.

Only a subset of the set of Nash equilibria, namely the so-called variational equilibria, can be computed (see for example [?], Example 1). Maybe even worse, efficient solution methods for variational inequality problems all require certain monotonicity conditions in the sense of VI theory, which is a rather restrictive assumption here, as we will sketch briefly. Just consider the case that each $f_i$ is twice differentiable; then $T$ is differentiable and thus monotone iff its Jacobian $J_T$ is positive semidefinite. But this, loosely speaking, corresponds to the situation that each player has more influence on his own costs than all the other players together, and thus it seems to be rather stringent. Sufficient criteria for some (maybe weakened) monotonicity properties of the operator $T$ seem to be unknown in the literature.

For these reasons, we discard the above variational inequality approach and turn back to the introduced fixed-point approach, which, admittedly, also covers only normalized Nash equilibria in games without property (1).

This paper is organized as follows: In the following section we shortly discuss the notion of Bregman-like functions, which will be used in the algorithms below. In Sections 3-5 the method of Flam and Antipin [?] is investigated in various extensions. Besides some improvements concerning the admittance of inexact solutions of the generated subproblems, the central purposes of these sections are to allow unbounded strategy sets $K$ as well and to obtain new proofs which permit the omission of the hypothesis of Lipschitz continuity of the gradient of the regularizing functional nearly everywhere. Section 6 is dedicated to possible conditions which allow the omission of the above-mentioned Lipschitz hypothesis truly everywhere. Some concluding remarks can be found in Section 7.

2 Bregman-like functions

Both as a regularization and a penalization term, we will make use of a Bregman-like function.
This concept is frequently used in proximal-like methods, see e.g. [?,?,?,?,?]. For a better understanding, we give a short definition and some remarks on Bregman-like functions.

Definition 3 (Bregman-like functions). Let $\emptyset \ne S \subseteq \mathbb{R}^n$ be an open and convex set. A function $h : \operatorname{cl} S \to \mathbb{R}$ is said to be a Bregman-like function with zone $S$ when the following holds:

B.1 $h$ is continuous and strictly convex on $\operatorname{cl} S$ and differentiable on $S$.

B.2 The set $M(x, \alpha) := \{y \in S : D_h(x, y) \le \alpha\}$ is bounded for all fixed $\alpha \in \mathbb{R}$ and $x \in \operatorname{cl} S$, where the Bregman distance is defined by
$$D_h(x, y) := h(x) - h(y) - \langle \nabla h(y), x - y \rangle$$
when $x \in \operatorname{cl} S$, $y \in S$.

B.3 If $\{z^k\}_{k \in \mathbb{N}}$ is a sequence in $S$ converging to $z \in \operatorname{cl} S$, at least one of the following statements holds:
(a) $D_h(z, z^k) \to 0$ for $k \to \infty$.
(b) If $\bar z \ne z$ is another point in $\operatorname{cl} S$, then $D_h(\bar z, z^k) \to \infty$.

B.4 Let $\{z^k\} \subset \operatorname{cl} S$ and $\{y^k\} \subset S$ be two sequences and assume that one of these sequences is convergent. If further $D_h(z^k, y^k) \to 0$ holds, then the other sequence converges to the same limit as well.

One recognizes that the difference between such Bregman-like functions and standard Bregman functions can be found in condition B.3, where property (a) corresponds to the standard Bregman case which is used in [?]. Property (b) was introduced in [?], see also [?]. Solodov and Svaiter [?] showed that B.4 is a consequence of B.1. It is known that $D_h$ is non-negative and $D_h(x, y) = 0$ iff $x = y$, but $D_h$ is not a true distance. The known three-point formula [?] reads as follows:
$$D_h(x, x^k) - D_h(x^{k+1}, x^k) - D_h(x, x^{k+1}) = \langle \nabla h(x^{k+1}) - \nabla h(x^k), x - x^{k+1} \rangle. \qquad (6)$$

Definition 4 (Additional assumptions on Bregman-like functions).

B.5 Zone coerciveness: $\nabla h(S) = \mathbb{R}^n$.

B.6 Cone property: For each $z \in K$ there are $\alpha(z) > 0$, $c(z) \in \mathbb{R}$ such that $D_h(z, w) + c(z) \ge \alpha(z) \|z - w\|$ for all $w \in S$. Assume that $\alpha(z) \ge \alpha > 0$ and $+\infty > c \ge c(z)$ for all $z \in \operatorname{cl} S$.

Property B.5 can be used to guarantee an interior-point effect. This can be explained as follows: $\operatorname{dom} \nabla h = S$, and $\nabla h$ cannot be extended to the boundary of $S$. Clearly, the next iterate of corresponding algorithms has to belong to $\operatorname{dom} \nabla h = S$, and thus the generated subproblems might be treated as unconstrained ones (with certain precaution in numerical experiments).
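The Bregman distance and the three-point formula (6) are easy to verify numerically. The following minimal sketch uses the negative entropy, a standard zone-coercive choice on $S = (0, \infty)^n$ (its gradient blows up at the boundary, which produces the interior-point effect discussed above); the concrete test points are illustrative assumptions.

```python
import numpy as np

def bregman(h, grad_h, x, y):
    """Bregman distance D_h(x,y) = h(x) - h(y) - <grad h(y), x - y>."""
    return h(x) - h(y) - np.dot(grad_h(y), x - y)

# Negative entropy h(x) = sum_j x_j log x_j on S = (0, inf)^n:
h = lambda x: np.sum(x * np.log(x))
grad_h = lambda x: np.log(x) + 1.0   # unbounded as x_j -> 0 (interior-point effect)

x   = np.array([0.4, 1.3])
xk  = np.array([0.9, 0.2])
xk1 = np.array([0.6, 0.7])

# Three-point formula (6):
lhs = bregman(h, grad_h, x, xk) - bregman(h, grad_h, xk1, xk) - bregman(h, grad_h, x, xk1)
rhs = np.dot(grad_h(xk1) - grad_h(xk), x - xk1)
print(abs(lhs - rhs) < 1e-12)        # (6) holds
```

By convexity of $h$, $D_h(x, y) \ge 0$ with equality only for $x = y$, which the same routine confirms on any sample points.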

However, B.5 contradicts the Lipschitz continuity of $\nabla h$ when $S \ne \mathbb{R}^n$. Property B.6 does not appear frequently in the literature. It was introduced in [?] and strengthens B.2. In the cited article it is shown that B.6 is fulfilled, for example, for strongly convex and other functions. Using this property, we can also admit unbounded feasible sets $K$ as well as inexact solutions of the generated subproblems.

In the sequel we assume that $h$ is a Bregman-like function with zone $\operatorname{int} K$, where $K$ denotes the feasible set of the problem under consideration. When it is assumed to be strongly convex (which is useful to check a smoothness hypothesis below), $\kappa$ denotes its modulus of strong convexity. Zone-coercive and strongly convex Bregman-like functions are known for a broad class² of convex sets $K$ [?]. However, apart from B.5 when $K \ne \mathbb{R}^n$, the function $h(x) := \frac{1}{2}\|x\|^2$ obviously has all the above properties, and it is the only function explicitly mentioned in [?]. It corresponds to the classical proximal-point regularization and has a Lipschitz continuous gradient.

3 A Proximal-like Method

Algorithm 1 illustrates the method to be discussed.

Algorithm 1: A proximal-like method
1. Let a start iterate $x^0 \in K$ be given. Choose some $\chi_0 > 0$, $\varepsilon_0 \ge 0$, $\delta_0 \ge 0$. Set $k := 0$.
2. If $x^k$ is an equilibrium point, STOP.
3. Calculate the next iterate $x^{k+1} \in K$:
$$x^{k+1} = \arg\min\{F(x^{k+}, x) + \chi_k D_h(x, x^k) : x \in K\}, \qquad (7)$$
where the argmin operation can be executed inexactly:
$$\langle f^{k+,k+1} + \chi_k (\nabla h(x^{k+1}) - \nabla h(x^k)), x - x^{k+1} \rangle \ge -\delta_k \|x - x^{k+1}\| \qquad (8)$$
for some $f^{k+,k+1} \in \partial_2^{\varepsilon_k} F(x^{k+}, x^{k+1})$ and all $x \in K$.
4. Choose $\chi_{k+1} > 0$, $\varepsilon_{k+1} \ge 0$, $\delta_{k+1} \ge 0$.
5. Set $k := k + 1$ and go to 2.

² By the way, [?] contains an example of a set $K$ which is described by convex inequalities but which does not have the required additional structure. In such a case, the customary prox-regularization $h(x) = \frac{1}{2}\|x\|^2$ still might be chosen as in [?].

This method mainly coincides with the one discussed in [?]. Here $x^{k+}$ denotes the so-called foresight point, see Definition 5 below, and $\partial_2^{\varepsilon_k} F(\cdot, \cdot)$ stands for the partial $\varepsilon_k$-subdifferential of $F$ with respect to the second argument. Condition (8) is fulfilled if there is some normal vector $\xi^{k+1} \in N_K(x^{k+1})$ such that
$$f^{k+,k+1} + \chi_k (\nabla h(x^{k+1}) - \nabla h(x^k)) + \xi^{k+1} = e^{k+1} \qquad (9)$$
for some error vector $e^{k+1}$ fulfilling $\|e^{k+1}\| \le \delta_k$, i.e., equation (9) can be solved inexactly, whereas Flam and Antipin [?] required $e^{k+1} = 0$ for all $k \in \mathbb{N}$ (no errors are admitted).

Definition 5. The point $x^{k+} \in \operatorname{int} K$ in (7) is called the foresight point. As done by Flam and Antipin [?], we will consider the following cases:
1. Perfect foresight: $x^{k+} := x^{k+1}$ for all $k \in \mathbb{N}$.
2. Imperfect foresight: The foresight point is calculated by
$$x^{k+} := \arg\min\{F(x^k, x) + \chi_k D_h(x, x^k) : x \in K\}. \qquad (10)$$

The concept of foresight was introduced by Antipin, see e.g. [?,?]. Besides a non-mathematical interpretation, this will also provide convenient mathematical properties. As will be seen below, imperfect foresight (which should, from a point of view beyond mathematics, be the standard case) leads to subproblems of a different type than perfect foresight does. Imperfect foresight might be considered a generalization of extragradient methods; especially in the case of nonlinear optimization problems (see Example 1) it is reminiscent of the extragradient method of Korpelevich [?].

Before making some assumptions for the convergence analysis, let us first define two essential properties of the considered bivariate function $F$.

Definition 6 (Monotonicity properties of $F$).
1. $F$ is said to be monotone on $K$ if
$$F(a, b) - F(a, a) - F(b, b) + F(b, a) \le 0 \qquad (11)$$
holds true for all $a, b \in K$.
2. $F$ is said to be pseudomonotone with respect to the solution set on $K$ if
$$F(a, x^*) \le F(a, a) \qquad (12)$$
holds true for all $a \in K$, where $x^*$ is an arbitrary solution of (5).
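For the covered VI case of Example 1 with $h(x) = \frac{1}{2}\|x\|^2$ and $K = \mathbb{R}^n$, the imperfect-foresight iteration (10) followed by (7) reduces exactly to Korpelevich's extragradient scheme: $x^{k+} = x^k - T(x^k)/\chi_k$, then $x^{k+1} = x^k - T(x^{k+})/\chi_k$, since both subproblems are unconstrained quadratic minimizations. A minimal sketch, where the skew-symmetric operator and the parameter $\chi$ are illustrative assumptions:

```python
import numpy as np

# Imperfect foresight for F(x,y) = <T(x), y - x>, K = R^2, h = 0.5*||.||^2:
#   x_plus = x_k - T(x_k)/chi        (foresight point, subproblem (10))
#   x_next = x_k - T(x_plus)/chi     (next iterate, subproblem (7))
A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric: T is monotone,
T = lambda x: A @ x                       # but not strongly monotone
chi = 2.0

x = np.array([4.0, 3.0])
for k in range(300):
    x_plus = x - T(x) / chi               # foresight point
    x = x - T(x_plus) / chi               # extragradient update
print(np.linalg.norm(x) < 1e-6)           # converges to the solution x* = 0
```

For this skew operator, a plain forward step $x^{k+1} = x^k - T(x^k)/\chi$ would spiral outward; the extra foresight evaluation is precisely what restores convergence, which illustrates why imperfect foresight generalizes the extragradient idea.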

These notions of monotonicity stem from the theory of variational inequalities. Indeed, given some $T : K \to \mathbb{R}^n$, let $F(x, y) := \langle T(x), y - x \rangle$. Then it is easily seen that the monotonicity properties of $F$ as defined above correspond to the analogous properties in the theory of variational inequalities, see e.g. [?]. This motivates defining the notions as above. Flam and Antipin used the notion of monotonicity for property (12), whereas in several other works of Antipin property (11) is called skew-symmetry (see [?] as well as Definition 1 and Properties 1, 2 in [?]). Property (12) is weaker than (11), at least in the case of $F(z, z) = 0$ for all $z \in K$, which can be assumed without loss of generality, see Section 1.

We will make the following general assumptions for the discussion below.

Assumption A
A.1 Existence of solutions: $x^* \in K$ is an arbitrary but fixed solution.
A.2 The function $F$ is pseudomonotone w.r.t. the solution set on $K$.
A.3 For fixed first argument, $F$ is convex in its second argument.
A.4 $\sum_{k=0}^{\infty} \max\{\delta_k, \varepsilon_k\} < \infty$ with $\delta_k \ge 0$, $\varepsilon_k \ge 0$ for each $k \in \mathbb{N}$.
A.5 There are $\underline\chi, \overline\chi > 0$ such that $\underline\chi \le \chi_k \le \overline\chi < \infty$ for all $k \in \mathbb{N}$.

Let us briefly discuss the above, rather mild assumptions. A.1 surely is a rather natural assumption and is guaranteed when $K$ is nonempty, convex and compact and $F$ is lower semicontinuous and continuous in its first argument [?]. Other sufficient conditions for solvability are discussed e.g. in [?]. In the present paper, we do not distinguish between solution and equilibrium.

Obviously, in the covered case of variational inequalities, A.2 is weaker than the hypothesis of a monotone operator which is often used in the discussion of interior-point proximal-like methods (see [?,?,?]). In [?] the convergence of the Bregman PPA is analyzed for pseudomonotone problems. Quite mild sufficient conditions for A.2 are proven in [?]; for example, A.2 holds true if $F$ is continuous, concave in its first argument and convex in its second argument.
Also, results concerning quadratic objective functions $f_i$ in the case of Nash games can be found in [?]. We observe that, when discussing saddle point problems as mentioned above, $F$ always fulfills A.2.
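The correspondence between Definition 6 and operator monotonicity can be checked numerically. A minimal sketch, assuming the illustrative linear operator $T(x) = Ax$ with $A$ positive semidefinite (then $\langle T(a) - T(b), a - b \rangle \ge 0$, and (11) follows since $F(a, a) = F(b, b) = 0$):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
A = B @ B.T                                 # positive semidefinite => T monotone
F = lambda x, y: np.dot(A @ x, y - x)       # F(x,y) = <T(x), y - x>

# Monotonicity (11): F(a,b) - F(a,a) - F(b,b) + F(b,a) <= 0 for all a, b.
for _ in range(100):
    a, b = rng.standard_normal(3), rng.standard_normal(3)
    assert F(a, b) - F(a, a) - F(b, b) + F(b, a) <= 1e-10
print("monotonicity (11) verified on random samples")
```

Here $F(a, b) + F(b, a) = -\langle A(b - a), b - a \rangle \le 0$, which is exactly (11) in the VI case; pseudomonotonicity (12) then follows for any solution $x^*$.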

In Nash games, A.3 holds true especially for player-convex games, i.e., when each $f_i$ is convex in the variables $x_i$ of player $i$ (see also Lemma 6 in Section 6). Concerning the summability of $\varepsilon_k$ in A.4, we note that this is a weaker hypothesis than summability of $\sqrt{\varepsilon_k}$ as required in [?]. Clearly, in contrast to [?], our condition A.4 also permits the use of $\varepsilon_k = k^{-2}$. Summing up the above discussion, we state that our assumptions here are rather mild and convenient.

The next results will be required below.

Lemma 1 (see [?]). Assume that $\{a_k\}, \{b_k\}, \{c_k\}, \{d_k\}$ are sequences of non-negative numbers. If the relation
$$a_{k+1} \le (1 + b_k) a_k + c_k - d_k \qquad (13)$$
holds true and $\sum_{k=0}^{\infty} \max\{b_k, c_k\} < \infty$, then the sequence $\{a_k\}$ is convergent, and further $\sum_{k=0}^{\infty} d_k < \infty$ also holds true.

We also need the following inequality: if $0 \le r \le \frac{1}{2}$, then
$$\frac{1}{1 - r} \le 1 + 2r. \qquad (14)$$

4 Case of perfect foresight

Let us turn to the case of perfect foresight first. Then, due to $x^{k+} = x^{k+1}$, Algorithm 1 turns into the following method:

Algorithm 2: The method in case of perfect foresight
1. Let a start iterate $x^0 \in K$ be given. Choose some $\chi_0 > 0$, $\varepsilon_0 \ge 0$, $\delta_0 \ge 0$. Set $k := 0$.
2. If $x^k$ is an equilibrium point, STOP.
3. Calculate the next iterate $x^{k+1} \in K$ by
$$x^{k+1} = \arg\min\{F(x^{k+1}, x) + \chi_k D_h(x, x^k) : x \in K\}, \qquad (15)$$
again inexactly in the sense that
$$\langle f^{k+1,k+1} + \chi_k (\nabla h(x^{k+1}) - \nabla h(x^k)), x - x^{k+1} \rangle \ge -\delta_k \|x - x^{k+1}\| \qquad (16)$$
for all $x \in K$ and some appropriate $f^{k+1,k+1} \in \partial_2^{\varepsilon_k} F(x^{k+1}, x^{k+1})$.
4. Choose $\chi_{k+1} > 0$, $\varepsilon_{k+1} \ge 0$, $\delta_{k+1} \ge 0$.
5. Set $k := k + 1$ and go to 2.
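The implicit nature of step (15) can be made concrete for the VI case with $h(x) = \frac{1}{2}\|x\|^2$ and $K = \mathbb{R}^n$: the subproblem reduces to the implicit equation $x^{k+1} = x^k - T(x^{k+1})/\chi_k$. A minimal sketch, where the monotone operator, the parameter $\chi$, and the inner fixed-point loop used to solve the implicit equation are all illustrative assumptions (the inner loop plays the role of an inexact solve in the sense of (16)):

```python
import numpy as np

# Perfect foresight for F(x,y) = <T(x), y - x>, K = R^2, h = 0.5*||.||^2:
# subproblem (15) becomes the implicit equation x_next = x_k - T(x_next)/chi.
T = lambda x: np.array([x[0] + x[1], x[1] - x[0]])   # monotone (identity + skew)
chi = 2.0

x = np.array([5.0, -3.0])
for k in range(200):
    x_next = x.copy()
    for _ in range(50):                   # inner fixed-point loop (contraction,
        x_next = x - T(x_next) / chi      # since ||T||/chi < 1 here)
    x = x_next
print(np.linalg.norm(x) < 1e-6)           # iterates approach the solution x* = 0
```

The residual left by truncating the inner loop corresponds to the error vector $e^{k+1}$ in (9); the analysis above only requires these residuals to be summable, not zero.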

In view of (15), it is easy to see that the generated subproblem is an implicit one, since the searched vector $x^{k+1}$ appears on both sides of the equation. As mentioned above, deviating from the strategy of Flam and Antipin [?], here we also admit solving (15) inexactly by means of a summable-error criterion, i.e., according to A.4, $\sum_{k=0}^{\infty} \delta_k < +\infty$, whereas Flam and Antipin required $\delta_k = 0$ for every $k \in \mathbb{N}$.

Now let us turn to the convergence analysis of Algorithm 2. Since the entire convergence analysis in [?] is based on Lemma 4.1 therein and the latter result essentially requires Lipschitz continuity of $\nabla h$, we take a different approach here. Insert the three-point formula (6) into (16) and divide by $\chi_k$ to obtain
$$-\frac{\delta_k}{\chi_k}\|x - x^{k+1}\| \le \frac{1}{\chi_k}\langle f^{k+1,k+1}, x - x^{k+1} \rangle + D_h(x, x^k) - D_h(x, x^{k+1}) - D_h(x^{k+1}, x^k). \qquad (17)$$
Since $F(x^{k+1}, \cdot)$ is convex, we have, due to the $\varepsilon$-subgradient inequality,
$$F(x^{k+1}, x) - F(x^{k+1}, x^{k+1}) \ge \langle f^{k+1,k+1}, x - x^{k+1} \rangle - \varepsilon_k, \qquad (18)$$
or, after a rearrangement,
$$\langle f^{k+1,k+1}, x - x^{k+1} \rangle \le F(x^{k+1}, x) - F(x^{k+1}, x^{k+1}) + \varepsilon_k. \qquad (19)$$
Combining the above discussion for $x = x^*$ being any solution of the considered problem, via A.2 we obtain the following:
$$D_h(x^*, x^{k+1}) \le D_h(x^*, x^k) - D_h(x^{k+1}, x^k) + \frac{\delta_k}{\chi_k}\|x^* - x^{k+1}\| + \frac{1}{\chi_k}\left(F(x^{k+1}, x^*) - F(x^{k+1}, x^{k+1})\right) + \frac{\varepsilon_k}{\chi_k}. \qquad (20)$$
To apply Lemma 1, it remains to investigate the norm term in (20). Theoretically this can be done similarly to Flam and Antipin [?]: suppose boundedness of $K$ and use A.4, A.5 to obtain
$$\sum_{k=1}^{\infty} \frac{\delta_k}{\chi_k}\|x^* - x^{k+1}\| \le \operatorname{diam}(K) \sum_{k=1}^{\infty} \frac{\delta_k}{\chi_k} < \infty, \qquad (21)$$
which would be sufficient to apply Lemma 1.

Here we take another approach which also covers unbounded sets $K$. In view of the Bregman axiom B.6, the existence of $c(x^*), \alpha(x^*)$ with
$$\|x^* - x^{k+1}\| \le \frac{1}{\alpha(x^*)}\left(D_h(x^*, x^{k+1}) + c(x^*)\right) \qquad (22)$$
is guaranteed.

Therefore, (20) turns into
$$D_h(x^*, x^{k+1}) \le D_h(x^*, x^k) - D_h(x^{k+1}, x^k) + \frac{\varepsilon_k}{\chi_k} + \frac{\delta_k}{\alpha(x^*)\chi_k} D_h(x^*, x^{k+1}) + \frac{\delta_k c(x^*)}{\alpha(x^*)\chi_k} + \frac{1}{\chi_k}\left(F(x^{k+1}, x^*) - F(x^{k+1}, x^{k+1})\right), \qquad (23)$$
which in turn implies
$$\left(1 - \frac{\delta_k}{\alpha(x^*)\chi_k}\right) D_h(x^*, x^{k+1}) \le D_h(x^*, x^k) - D_h(x^{k+1}, x^k) + \frac{\varepsilon_k}{\chi_k} + \frac{\delta_k c(x^*)}{\alpha(x^*)\chi_k} + \frac{1}{\chi_k}\left(F(x^{k+1}, x^*) - F(x^{k+1}, x^{k+1})\right). \qquad (24)$$
Due to A.4, we know that there is some $k_0 \in \mathbb{N}$ such that for all $k \ge k_0$
$$r_k := \frac{\delta_k}{\alpha(x^*)\chi_k} \le \frac{1}{2}. \qquad (25)$$
Multiply (24) by $(1 - r_k)^{-1} > 0$ and obtain
$$D_h(x^*, x^{k+1}) \le (1 - r_k)^{-1} D_h(x^*, x^k) - (1 - r_k)^{-1} D_h(x^{k+1}, x^k) + (1 - r_k)^{-1}\left(\frac{\varepsilon_k}{\chi_k} + \frac{\delta_k c(x^*)}{\alpha(x^*)\chi_k}\right) + \frac{(1 - r_k)^{-1}}{\chi_k}\left(F(x^{k+1}, x^*) - F(x^{k+1}, x^{k+1})\right). \qquad (26)$$
Remembering (14), and noting that the two last terms are non-positive by A.2 so that the factor $(1 - r_k)^{-1} \ge 1$ may be dropped there, (26) can be rewritten as follows:
$$D_h(x^*, x^{k+1}) \le \left(1 + \frac{2\delta_k}{\alpha(x^*)\chi_k}\right) D_h(x^*, x^k) - D_h(x^{k+1}, x^k) + \left(1 + \frac{2\delta_k}{\alpha(x^*)\chi_k}\right)\left(\frac{\varepsilon_k}{\chi_k} + \frac{\delta_k c(x^*)}{\alpha(x^*)\chi_k}\right) + \frac{1}{\chi_k}\left(F(x^{k+1}, x^*) - F(x^{k+1}, x^{k+1})\right). \qquad (28)$$
At this place, an application of Lemma 1 with

$$a_k := D_h(x^*, x^k), \quad b_k := \frac{2\delta_k}{\alpha(x^*)\chi_k}, \quad c_k := \left(1 + \frac{2\delta_k}{\alpha(x^*)\chi_k}\right)\left(\frac{\varepsilon_k}{\chi_k} + \frac{\delta_k c(x^*)}{\alpha(x^*)\chi_k}\right),$$
$$d_k := D_h(x^{k+1}, x^k) - \frac{1}{\chi_k}\left(F(x^{k+1}, x^*) - F(x^{k+1}, x^{k+1})\right)$$
(remember A.2) directly proves the following theorem.

Theorem 1. Under the assumptions A.1-A.5 the following is valid:
1. The sequence $\{D_h(x^*, x^k)\}$ converges for any solution $x^*$.
2. The series $\sum_{k=0}^{\infty} D_h(x^{k+1}, x^k)$ is convergent.
3. The series $\sum_{k=0}^{\infty} \frac{1}{\chi_k}\left(F(x^{k+1}, x^{k+1}) - F(x^{k+1}, x^*)\right)$ is convergent.

Let us state some important consequences.

Corollary 1. Directly from Theorem 1 we obtain:
1. In view of the Bregman axiom B.2, the sequence $\{x^k\}$ generated by Algorithm 2 is bounded.
2. In view of the three-point formula (6), it is true that $\langle \nabla h(x^{k+1}) - \nabla h(x^k), x - x^{k+1} \rangle \to 0$, $D_h(x^{k+1}, x^k) \to 0$ and $F(x^{k+1}, x^{k+1}) - F(x^{k+1}, x^*) \to 0$ for $k \to \infty$.

Since $\{x^k\}$ is bounded, it has at least one cluster point $\bar x$ which belongs to $K$, since the latter set is closed. From now on, $x^{k_j}$ and $\bar x$ denote a convergent subsequence and a cluster point, respectively. In view of the Bregman axiom B.4 and Theorem 1, it is clear that also $x^{k_j+1} \to \bar x$ has to hold. Further, we may without loss of generality assume that the corresponding subsequence $\{\chi_{k_j}\}$ converges to, say, $\bar\chi > 0$.

Next, let us conclude that every cluster point of $\{x^k\}$ solves the considered problem under the additional hypothesis of Lipschitz continuity of $\nabla h$. In this case the following result is useful.

Lemma 2 (see [?], Lemma 4.2). Suppose some function $\eta : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is lower semicontinuous and proper convex, and let $M \subseteq \mathbb{R}^n$ be nonempty, closed and convex, satisfying the domain qualification $\operatorname{ri}(\operatorname{dom}\eta) \cap \operatorname{ri}(M) \ne \emptyset$. If the used Bregman-like function $h$ is real-valued and convex on an open set containing $M$ and has a Lipschitz continuous gradient, the following statements are equivalent:
1. $\bar x \in \arg\min\{\eta(x) + \rho D_h(x, \bar x) : x \in M\}$ for some $\rho > 0$.
2. $\bar x \in \arg\min\{\eta(x) : x \in M\}$.

Remark 3. Since we do not make use of the $\varepsilon$-subdifferential with respect to $h$, no application of the Brøndsted-Rockafellar property is necessary with respect to $h$. Hence the condition that $h$ is real-valued convex on an open set containing $K$, used in [?], can be left out. It is sufficient to require these properties on $K$.

Lemma 3. Let A.1-A.5 as well as the assumptions of Lemma 2 hold true. Then every cluster point of the generated sequence $\{x^k\}$ is a solution.

Proof. Consider some subsequence $\{x^{k_j}\}$ and the corresponding subsequence $\{\chi_{k_j}\}$, which are both assumed to converge to $\bar x$ and $\bar\chi$, respectively. Passing to the limit in the iteration scheme (15), we obtain that
$$\bar x = \arg\min\{F(\bar x, x) + \bar\chi D_h(x, \bar x) : x \in K\}. \qquad (29)$$
Now Lemma 2 implies $\bar x \in \operatorname{Arg\,min}\{F(\bar x, x) : x \in K\}$.

Whereas Lipschitz continuity of $\nabla h$ is required throughout all results in [?], we only require it here to prove Lemma 2 and its consequence, Lemma 3. In Section 6 some other properties are discussed which allow us to omit the Lipschitz assumption.

The next result is well known in the theory of Bregman-based proximal methods, and it contains the conclusion of convergence of the sequence $\{x^k\}$ generated by Algorithm 2.

Theorem 2 (see e.g. [?,?]). Under the assumptions of Lemma 3, the sequence $\{x^k\}$ generated by Algorithm 2 converges to an equilibrium point.

5 Case of imperfect foresight

Here, Algorithm 1 has the following form:

Algorithm 3: The method in case of imperfect foresight
1. Let a start iterate $x^0 \in K$ be given. Choose some $\chi_0 > 0$, $\varepsilon_0 \ge 0$, $\delta_0 \ge 0$. Set $k := 0$.
2. If $x^k$ is an equilibrium point, STOP.
3. First, calculate the foresight point $x^{k+}$ by
$$x^{k+} = \arg\min\{F(x^k, x) + \chi_k^i D_h(x, x^k) : x \in K\}, \qquad (30)$$
inexactly in the sense that
$$\langle f^{k,k+} + \chi_k^i (\nabla h(x^{k+}) - \nabla h(x^k)), x - x^{k+} \rangle \ge -\delta_k^i \|x - x^{k+}\| \qquad (31)$$
for all $x \in K$ and some appropriate $f^{k,k+} \in \partial_2^{\varepsilon_k^i} F(x^k, x^{k+})$, which is again the partial $\varepsilon$-subdifferential of $F$ w.r.t. the second argument.
4. Use the calculated foresight point $x^{k+}$ to determine $x^{k+1} \in K$ by
$$x^{k+1} = \arg\min\{F(x^{k+}, x) + \chi_k^o D_h(x, x^k) : x \in K\}, \qquad (32)$$
again inexactly in the sense that
$$\langle f^{k+,k+1} + \chi_k^o (\nabla h(x^{k+1}) - \nabla h(x^k)), x - x^{k+1} \rangle \ge -\delta_k^o \|x - x^{k+1}\| \qquad (33)$$
for some $f^{k+,k+1} \in \partial_2^{\varepsilon_k^o} F(x^{k+}, x^{k+1})$ and all $x \in K$.
5. Choose $\chi_{k+1} > 0$, $\varepsilon_{k+1} \ge 0$, $\delta_{k+1} \ge 0$. Set $k := k + 1$ and go to 2.

Concerning the parameters $\chi_k$, $\delta_k$ and $\varepsilon_k$, a superscript $i$ indicates that the concerned parameters are those for the inner problem (i.e., the determination of the foresight point $x^{k+}$), whereas a superscript $o$ indicates the relation to the outer problem, i.e., calculating the next iterate $x^{k+1}$. The parameters to be chosen in Steps 1 and 5, respectively, are understood as vectors in the sense that $\chi_{k+1} = (\chi_{k+1}^i, \chi_{k+1}^o)$, for example, and nonnegativity clearly is meant componentwise.

It can easily be seen that both essential steps in Algorithm 3 consist in the solution of an optimization problem with strictly convex objective function

which is even strongly convex if $h$ is strongly convex. Further, and this is a significant advantage over the case of perfect foresight, the subproblems are explicit ones. However, to calculate $x^{k+1}$, here two subproblems have to be solved instead of one implicit subproblem in Algorithm 2.

Now let us turn to the analysis of Algorithm 3, which needs two additional assumptions. Both are taken from Flam and Antipin [?].

Assumption A (continuation)
A.6 Smoothness hypothesis: There is some $\Lambda > 0$ such that
$$F(a, b) - F(c, b) - F(a, d) + F(c, d) \le 2\Lambda \left(D_h(c, a)\right)^{1/2} \left(D_h(b, d)\right)^{1/2} \qquad (34)$$
holds true for all $a, b, c, d \in K$.
A.7 The regularization parameters are chosen such that $\chi_k^i, \chi_k^o \ge \underline\chi > \Lambda$ holds true, where $\Lambda$ fulfills the smoothness property A.6.

While special conditions on the regularization parameters like A.7 are not unusual in the literature concerning proximal-like methods, condition A.6 should be discussed in some detail. This rather strange-appearing condition simplifies in the case $h(x) = \frac{1}{2}\|x\|^2$ to
$$F(a, b) - F(c, b) - F(a, d) + F(c, d) \le \Lambda_0 \|a - c\| \|b - d\|, \qquad (35)$$
which holds true whenever $F$ is twice continuously differentiable [?]. The reason for writing $\Lambda_0$ instead of $\Lambda$ will emerge from (37) below.

Remark 4. For any strongly convex regularizing functional $h$ we obviously have $D_h(v, w) \ge \frac{\kappa}{2}\|v - w\|^2$ for every $v, w$ such that $D_h(v, w)$ is well-defined. In other words,
$$\|v - w\| \le \left(\frac{2}{\kappa} D_h(v, w)\right)^{1/2}. \qquad (36)$$
Assume that (35) holds with modulus $\Lambda_0$. Combining this with (36), we obtain for strongly convex Bregman-like functions:
$$F(a, b) - F(c, b) - F(a, d) + F(c, d) \le \Lambda_0 \|a - c\| \|b - d\| \le \Lambda_0 \frac{2}{\kappa} \left(D_h(c, a)\right)^{1/2} \left(D_h(b, d)\right)^{1/2} = 2\Lambda_0 \kappa^{-1} \left(D_h(c, a)\right)^{1/2} \left(D_h(b, d)\right)^{1/2}. \qquad (37)$$
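The strong-convexity bound (36) underlying Remark 4 is easy to verify numerically. A minimal sketch, assuming the illustrative quadratic $h(x) = \frac{1}{2} x^\top A x$ with $A$ positive definite, for which $\kappa = \lambda_{\min}(A)$ and $D_h(v, w) = \frac{1}{2}(v - w)^\top A (v - w)$:

```python
import numpy as np

# Check of the bound D_h(v,w) >= (kappa/2)||v - w||^2 from Remark 4 for a
# strongly convex quadratic h; kappa is the modulus of strong convexity.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
kappa = np.linalg.eigvalsh(A).min()          # smallest eigenvalue of A

h = lambda x: 0.5 * x @ A @ x
grad_h = lambda x: A @ x
D = lambda v, w: h(v) - h(w) - grad_h(w) @ (v - w)

rng = np.random.default_rng(1)
for _ in range(100):
    v, w = rng.standard_normal(2), rng.standard_normal(2)
    assert D(v, w) >= 0.5 * kappa * np.dot(v - w, v - w) - 1e-10
print("bound (36) verified on random samples")
```

The same bound applied twice is exactly what turns the Euclidean estimate (35) into the Bregman estimate (37), with $\Lambda = \Lambda_0 \kappa^{-1}$.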

Consequently, when (35) holds true with $\Lambda_0$ and $h$ is strongly convex with modulus $\kappa$, then A.6 holds true with $\Lambda := \Lambda_0 \kappa^{-1}$. The previous result extends the discussion of A.6 made by Flam and Antipin [?] significantly, since it gives a sufficient and convenient criterion to check the validity of A.6, which is summarized in the following.

Lemma 4. Assume that $F$ is twice continuously differentiable and that $h$ is strongly convex. Then A.6 is always fulfilled.

Of course, we should note that when $h$ is zone-coercive we only require A.6 on the interior of $K$. This is necessary since $D_h(v, w)$ is not defined if $w \notin \operatorname{int} K$, but also sufficient, as will be seen in the sequel, since the second arguments of Bregman distances will be iterates which, due to the interior-point effect, belong to $\operatorname{int} K$, and thus the corresponding Bregman distances will be well-defined.

We return to the convergence analysis of Algorithm 3 and start with the discussion of the inner subproblem, i.e., the determination of the foresight point. Jointly with the three-point formula (6) and A.3, (31) implies³
$$-\frac{\delta_k^i}{\chi_k^i}\|x - x^{k+}\| \le \frac{1}{\chi_k^i}\langle f^{k,k+}, x - x^{k+} \rangle + D_h(x, x^k) - D_h(x, x^{k+}) - D_h(x^{k+}, x^k) \le \frac{1}{\chi_k^i}\left(F(x^k, x) - F(x^k, x^{k+}) + \varepsilon_k^i\right) + D_h(x, x^k) - D_h(x, x^{k+}) - D_h(x^{k+}, x^k), \qquad (38)$$
which implies for $x = x^{k+1} \in K$ the following relation:
$$-\frac{\delta_k^i}{\chi_k^i}\|x^{k+1} - x^{k+}\| \le \frac{1}{\chi_k^i}\left(F(x^k, x^{k+1}) - F(x^k, x^{k+}) + \varepsilon_k^i\right) + D_h(x^{k+1}, x^k) - D_h(x^{k+1}, x^{k+}) - D_h(x^{k+}, x^k). \qquad (40)$$
An analogous result holds true for the outer problem, i.e., the determination of the next iterate (remember that both problems have the same structure):

³ It is also possible to omit the following division by the regularization parameters $\chi_k^i, \chi_k^o$. In this case, the convergence can be analyzed analogously, and when assuming $\chi_k^i < \chi_k^o$ one might prove $D_h(x^{k+1}, x^k) \to 0$ as well. However, since the latter property is not necessary for the further analysis, we refrain from such a restriction on the parameters.

$$-\frac{\delta_k^o}{\chi_k^o}\|x - x^{k+1}\| \le \frac{1}{\chi_k^o}\langle f^{k+,k+1}, x - x^{k+1} \rangle + D_h(x, x^k) - D_h(x, x^{k+1}) - D_h(x^{k+1}, x^k) \le \frac{1}{\chi_k^o}\left(F(x^{k+}, x) - F(x^{k+}, x^{k+1}) + \varepsilon_k^o\right) + D_h(x, x^k) - D_h(x, x^{k+1}) - D_h(x^{k+1}, x^k), \qquad (41)$$
which in turn implies for $x = x^* \in K$ the following relation:
$$-\frac{\delta_k^o}{\chi_k^o}\|x^* - x^{k+1}\| \le \frac{1}{\chi_k^o}\left(F(x^{k+}, x^*) - F(x^{k+}, x^{k+1}) + \varepsilon_k^o\right) + D_h(x^*, x^k) - D_h(x^*, x^{k+1}) - D_h(x^{k+1}, x^k). \qquad (43)$$
Now sum up inequalities (40) and (43) and obtain
$$-\frac{\delta_k^o}{\chi_k^o}\|x^* - x^{k+1}\| - \frac{\delta_k^i}{\chi_k^i}\|x^{k+1} - x^{k+}\| \le \frac{1}{\chi_k^o}\left(F(x^{k+}, x^*) - F(x^{k+}, x^{k+1})\right) + \frac{1}{\chi_k^i}\left(F(x^k, x^{k+1}) - F(x^k, x^{k+})\right) + \frac{\varepsilon_k^o}{\chi_k^o} + \frac{\varepsilon_k^i}{\chi_k^i} + D_h(x^*, x^k) - D_h(x^*, x^{k+1}) - D_h(x^{k+1}, x^{k+}) - D_h(x^{k+}, x^k). \qquad (44)$$
Due to pseudomonotonicity with respect to the equilibrium set and the smoothness hypothesis A.6, we may deduce:
$$\frac{1}{\chi_k^o}\left(F(x^{k+}, x^*) - F(x^{k+}, x^{k+1})\right) + \frac{1}{\chi_k^i}\left(F(x^k, x^{k+1}) - F(x^k, x^{k+})\right) \le \frac{1}{\min\{\chi_k^i, \chi_k^o\}}\left(F(x^{k+}, x^{k+}) - F(x^{k+}, x^{k+1}) + F(x^k, x^{k+1}) - F(x^k, x^{k+})\right) \le \frac{2\Lambda}{\min\{\chi_k^i, \chi_k^o\}}\left(D_h(x^{k+}, x^k)\right)^{1/2}\left(D_h(x^{k+1}, x^{k+})\right)^{1/2}, \qquad (45)$$
where the last step is (34) with $a = x^k$, $b = x^{k+1}$, $c = x^{k+}$, $d = x^{k+}$. Now (44) turns into

$$D_h(x^*, x^{k+1}) \le D_h(x^*, x^k) - D_h(x^{k+1}, x^{k+}) - D_h(x^{k+}, x^k) + \frac{2\Lambda}{\min\{\chi_k^i, \chi_k^o\}}\left(D_h(x^{k+}, x^k)\right)^{1/2}\left(D_h(x^{k+1}, x^{k+})\right)^{1/2} + \frac{\delta_k^o}{\chi_k^o}\|x^* - x^{k+1}\| + \frac{\delta_k^i}{\chi_k^i}\|x^{k+1} - x^{k+}\| + \frac{\varepsilon_k^o}{\chi_k^o} + \frac{\varepsilon_k^i}{\chi_k^i}. \qquad (46)$$
At this place, we make use of the elementary binomial formula $a^2 - 2ab + b^2 = (a - b)^2$. Letting
$$a := \left(D_h(x^{k+}, x^k)\right)^{1/2} \quad \text{and} \quad b := \frac{\Lambda}{\min\{\chi_k^i, \chi_k^o\}}\left(D_h(x^{k+1}, x^{k+})\right)^{1/2},$$
we obtain
$$-D_h(x^{k+}, x^k) + \frac{2\Lambda}{\min\{\chi_k^i, \chi_k^o\}}\left(D_h(x^{k+}, x^k)\right)^{1/2}\left(D_h(x^{k+1}, x^{k+})\right)^{1/2} = -\left[\left(D_h(x^{k+}, x^k)\right)^{1/2} - \frac{\Lambda}{\min\{\chi_k^i, \chi_k^o\}}\left(D_h(x^{k+1}, x^{k+})\right)^{1/2}\right]^2 + \frac{\Lambda^2}{\min\{\chi_k^i, \chi_k^o\}^2} D_h(x^{k+1}, x^{k+}). \qquad (47)$$

Inserting (47) into (46), the following holds:
$$D_h(x^*, x^{k+1}) \le D_h(x^*, x^k) - \left[\left(D_h(x^{k+}, x^k)\right)^{1/2} - \frac{\Lambda}{\min\{\chi_k^i, \chi_k^o\}}\left(D_h(x^{k+1}, x^{k+})\right)^{1/2}\right]^2 - \left(1 - \frac{\Lambda^2}{\min\{\chi_k^i, \chi_k^o\}^2}\right) D_h(x^{k+1}, x^{k+}) + \frac{\delta_k^o}{\chi_k^o}\|x^* - x^{k+1}\| + \frac{\delta_k^i}{\chi_k^i}\|x^{k+1} - x^{k+}\| + \frac{\varepsilon_k^o}{\chi_k^o} + \frac{\varepsilon_k^i}{\chi_k^i}. \qquad (52)$$
Remembering the Bregman axiom B.6, we further know the existence of reals $\alpha$ and $c$ such that for every $k \in \mathbb{N}$
$$\|x^* - x^{k+1}\| \le \frac{1}{\alpha}\left(D_h(x^*, x^{k+1}) + c\right) \quad \text{and} \quad \|x^{k+1} - x^{k+}\| \le \frac{1}{\alpha}\left(D_h(x^{k+1}, x^{k+}) + c\right). \qquad (53)$$
For this reason,
$$D_h(x^*, x^{k+1}) \le D_h(x^*, x^k) - \left[\left(D_h(x^{k+}, x^k)\right)^{1/2} - \frac{\Lambda}{\min\{\chi_k^i, \chi_k^o\}}\left(D_h(x^{k+1}, x^{k+})\right)^{1/2}\right]^2 + \frac{\delta_k^o}{\chi_k^o \alpha}\left(D_h(x^*, x^{k+1}) + c\right) + \frac{\delta_k^i}{\chi_k^i \alpha}\left(D_h(x^{k+1}, x^{k+}) + c\right) + \frac{\varepsilon_k^o}{\chi_k^o} + \frac{\varepsilon_k^i}{\chi_k^i} - \left(1 - \frac{\Lambda^2}{\min\{\chi_k^i, \chi_k^o\}^2}\right) D_h(x^{k+1}, x^{k+}). \qquad (54)$$
Hence,
$$\left(1 - \frac{\delta_k^o}{\chi_k^o \alpha}\right) D_h(x^*, x^{k+1}) \le D_h(x^*, x^k) + \left(\frac{\delta_k^o}{\chi_k^o} + \frac{\delta_k^i}{\chi_k^i}\right)\frac{c}{\alpha} + \frac{\varepsilon_k^o}{\chi_k^o} + \frac{\varepsilon_k^i}{\chi_k^i} - \left[\left(D_h(x^{k+}, x^k)\right)^{1/2} - \frac{\Lambda}{\min\{\chi_k^i, \chi_k^o\}}\left(D_h(x^{k+1}, x^{k+})\right)^{1/2}\right]^2 + \underbrace{\left(\frac{\Lambda^2}{\min\{\chi_k^i, \chi_k^o\}^2} + \frac{\delta_k^i}{\chi_k^i \alpha} - 1\right)}_{=:\theta_k} D_h(x^{k+1}, x^{k+}). \qquad (55)$$

We shall next investigate the term $\theta_k$. Due to Assumptions A.4, A.5 and A.7, there is some $k_0 \in \mathbb{N}$ such that
$$\theta_k \le \underbrace{\frac{\Lambda^2}{\underline\chi^2} - 1}_{< 0} + \underbrace{\frac{\delta_k^i}{\chi_k^i \alpha}}_{\to 0} < 0 \qquad (56)$$
for all $k \ge k_0$. Further, if $k_0$ is large enough, from A.4 we also have that
$$r_k := \frac{\delta_k^o}{\chi_k^o \alpha} \le \frac{1}{2}. \qquad (57)$$
As in the case of perfect foresight, this allows us to apply inequality (14):
$$\frac{1}{1 - \frac{\delta_k^o}{\chi_k^o \alpha}} \le 1 + \frac{2\delta_k^o}{\chi_k^o \alpha}. \qquad (58)$$
Thus, we multiply (55) by $(1 - r_k)^{-1} > 0$, drop the factor $(1 - r_k)^{-1} \ge 1$ in front of the non-positive terms, and obtain
$$D_h(x^*, x^{k+1}) \le \left(1 + \frac{2\delta_k^o}{\chi_k^o \alpha}\right)\left[D_h(x^*, x^k) + \left(\frac{\delta_k^o}{\chi_k^o} + \frac{\delta_k^i}{\chi_k^i}\right)\frac{c}{\alpha} + \frac{\varepsilon_k^o}{\chi_k^o} + \frac{\varepsilon_k^i}{\chi_k^i}\right] - \left[\left(D_h(x^{k+}, x^k)\right)^{1/2} - \frac{\Lambda}{\min\{\chi_k^i, \chi_k^o\}}\left(D_h(x^{k+1}, x^{k+})\right)^{1/2}\right]^2 + \theta_k D_h(x^{k+1}, x^{k+}). \qquad (59)$$
Now we are ready for an application of Lemma 1. Therefore, let
$$a_k := D_h(x^*, x^k), \quad b_k := \frac{2\delta_k^o}{\chi_k^o \alpha}, \quad c_k := \left(1 + \frac{2\delta_k^o}{\chi_k^o \alpha}\right)\left[\left(\frac{\delta_k^o}{\chi_k^o} + \frac{\delta_k^i}{\chi_k^i}\right)\frac{c}{\alpha} + \frac{\varepsilon_k^o}{\chi_k^o} + \frac{\varepsilon_k^i}{\chi_k^i}\right]$$
and
$$d_k := \left[\left(D_h(x^{k+}, x^k)\right)^{1/2} - \frac{\Lambda}{\min\{\chi_k^i, \chi_k^o\}}\left(D_h(x^{k+1}, x^{k+})\right)^{1/2}\right]^2 - \theta_k D_h(x^{k+1}, x^{k+}).$$

Theorem 3. Under Assumptions A.1-A.7 the following holds true:
1. The sequence $\{D_h(x^*, x^k)\}$ is convergent for any equilibrium $x^*$.
2. The series $\sum_{k=0}^{\infty} D_h(x^{k+1}, x^{k+})$ is convergent.
3. The series $\sum_{k=0}^{\infty} \left[\left(D_h(x^{k+}, x^k)\right)^{1/2} - \frac{\Lambda}{\min\{\chi_k^i, \chi_k^o\}}\left(D_h(x^{k+1}, x^{k+})\right)^{1/2}\right]^2$ is convergent.

We gather some consequences of the latter theorem.

Corollary 2. Under the assumptions of Theorem 3 the following is valid:
1. Due to the convergence of $\{D_h(x^*, x^k)\}$, the generated sequence $\{x^k\}$ is bounded in view of the Bregman axiom B.2.
2. $D_h(x^{k+1}, x^{k+}) \to 0$ for $k \to \infty$.
3. $\left(D_h(x^{k+}, x^k)\right)^{1/2} - \frac{\Lambda}{\min\{\chi_k^i, \chi_k^o\}}\left(D_h(x^{k+1}, x^{k+})\right)^{1/2} \to 0$ for $k \to \infty$; especially, due to 2., $D_h(x^{k+}, x^k) \to 0$.
4. Several other results hold analogously to the previous section, like for example $\langle \nabla h(x^{k+}) - \nabla h(x^k), x - x^{k+} \rangle \to 0$ and, for the outer problem, $\langle \nabla h(x^{k+1}) - \nabla h(x^k), x - x^{k+1} \rangle \to 0$.

If $\{x^{k_j}\} \to \bar x$ is a convergent subsequence, we directly obtain from the preceding corollary that also $\{x^{k_j+}\} \to \bar x$ and $\{x^{k_j+1}\} \to \bar x$ have to hold. Now if $\nabla h$ is Lipschitz continuous, the conclusion of the convergence of $\{x^k\}$ to a solution is the same as in Section 4, using Lemma 3 and Theorem 2.

6 Case of zone-coercive regularizing functionals

In the previous sections we proved various important auxiliary results, like boundedness of $\{x^k\}$ and convergence of $\{D_h(x^*, x^k)\}$, without using Lipschitz continuity of $\nabla h$. The latter assumption was only required to show that each cluster point of $\{x^k\}$ and $\{x^{k+}\}$, respectively, is an equilibrium of the considered problem. The present section is devoted to some alternative conditions on $F$ which allow the application of zone-coercive Bregman-like functions, i.e., functions without Lipschitz continuous gradient.

The above argumentation cannot be repeated for a zone-coercive Bregman-like function $h$ since $\operatorname{dom} \nabla h = \operatorname{int} K$, i.e., $\nabla h(\bar x)$ and thus also $D_h(x, \bar x)$ is not defined whenever $\bar x$ belongs to the boundary of the feasible set $K$.

In the considerations below we focus on the situation of Nash games, where $F(x, x) = 0$ can be assumed without loss of generality; see Section 1. Concerning the covered problem class of nonlinear optimization, no analogous property is necessary, and for the related discussion of variational inequalities we refer the reader to [?]. Now let us impose two further conditions which permit to conclude that every cluster point of $\{x^k\}$ is a solution. Then Theorem 2 can be applied to obtain the convergence of the entire sequence. In the following discussion we only consider Algorithm 2; however, it will be easy to derive analogous assumptions and results concerning Algorithm 3.

Assumption A (continuation)

A.8 $F$ is continuous and pseudomonotone⁴ (not just with respect to the solution set), meaning that the implication

$$F(a, b) \ge 0 \;\Longrightarrow\; F(b, a) \le 0 \qquad (60)$$

holds true for arbitrary $a, b \in K$. Further, $F(z, \cdot)$ is strictly convex.

A.9 $F$ is continuous and $F(\cdot, z)$ is strictly concave for any $z \in K$.

Let us postpone a discussion of these additional assumptions and first conclude the desired result.

Lemma 5. Assume that in addition to the assumptions of Lemma 3 also A.8 holds true. Then each cluster point of $\{x^k\}$ is an equilibrium.

Proof. Continuity of $F$ and Corollary 1 imply $F(\bar x, x^*) = 0$, from which $F(x^*, \bar x) \le 0$ and thus also $F(x^*, \bar x) = 0$ follow, since $x^*$ is assumed to be an equilibrium. Now since $F(x^*, \cdot)$ is strictly convex we obtain for $\bar x \ne x^*$

$$0 \;\le\; F\Big(x^*, \tfrac{1}{2}(\bar x + x^*)\Big) \;<\; \tfrac{1}{2}\Big(F(x^*, \bar x) + F(x^*, x^*)\Big) = 0, \qquad (61)$$

which is an obvious contradiction. ∎

Before turning to the discussion of A.9 we give a rather mild sufficient condition for $F(z, \cdot)$ to be strictly convex in the situation of Nash problems.

⁴ This holds true in the case of saddle point problems of the Lagrangian of a convex program.
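The implication (60) can be checked numerically for concrete bifunctions. As a hedged illustration (not taken from the paper), take the variational-inequality bifunction $F(x, y) = \langle Mx, y - x \rangle$ with $M$ positive semidefinite; then $F(x,y) + F(y,x) = -\langle M(x-y), x-y \rangle \le 0$, so $F(a,b) \ge 0$ forces $F(b,a) \le 0$:

```python
import numpy as np

# Sketch (illustrative, not the paper's setting): F(x, y) = <M x, y - x> with
# positive semidefinite M satisfies F(x, y) + F(y, x) = -<M(x-y), x-y> <= 0,
# hence the pseudomonotonicity implication (60): F(a, b) >= 0 => F(b, a) <= 0.

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
M = A.T @ A                       # positive semidefinite by construction

def F(x, y):
    return float((M @ x) @ (y - x))

violations = 0
for _ in range(10_000):
    a = rng.standard_normal(4)
    b = rng.standard_normal(4)
    if F(a, b) >= 0.0 and F(b, a) > 1e-9:   # small tolerance for rounding
        violations += 1
print(violations)   # 0: monotonicity rules out violations of (60)
```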

Lemma 6. $F(x, \cdot)$ is strictly convex already if there is at least one objective function $f_l$ that is strictly convex in $x_l$ and all the other objective functions $f_i$ are assumed just to be convex in $x_i$.

Proof. Take some $v \ne w \in K$ and $\lambda \in (0, 1)$ arbitrary. Using the notation $M_l := \{1, \ldots, N\} \setminus \{l\}$ we obtain:

$$
\begin{aligned}
F(x, \lambda v + (1-\lambda) w)
&= \sum_{i=1}^{N} \big[\, f_i(x_{-i}, \lambda v_i + (1-\lambda) w_i) - f_i(x) \,\big]\\
&= \sum_{i \in M_l} \big[\, f_i(x_{-i}, \lambda v_i + (1-\lambda) w_i) - f_i(x) \,\big]
 + f_l(x_{-l}, \lambda v_l + (1-\lambda) w_l) - f_l(x)\\
&\le \lambda \sum_{i \in M_l} \big[\, f_i(x_{-i}, v_i) - f_i(x) \,\big]
 + (1-\lambda) \sum_{i \in M_l} \big[\, f_i(x_{-i}, w_i) - f_i(x) \,\big]
 + f_l(x_{-l}, \lambda v_l + (1-\lambda) w_l) - f_l(x)\\
&< \lambda \sum_{i \in M_l} \big[\, f_i(x_{-i}, v_i) - f_i(x) \,\big]
 + (1-\lambda) \sum_{i \in M_l} \big[\, f_i(x_{-i}, w_i) - f_i(x) \,\big]\\
&\qquad + \lambda \big[\, f_l(x_{-l}, v_l) - f_l(x) \,\big]
 + (1-\lambda) \big[\, f_l(x_{-l}, w_l) - f_l(x) \,\big]\\
&= \lambda F(x, v) + (1-\lambda) F(x, w).
\end{aligned}
$$

Hence, the assertion follows. ∎

Clearly, the proof is rather simple; nevertheless, the result can easily be used to check strict convexity of $F$ when all the functions $f_i$ are known.

Lemma 7. Assume that in addition to the above assumptions also A.9 holds true. Then each cluster point of $\{x^k\}$ is an equilibrium.

Proof. Pseudomonotonicity with respect to the solution set and strict concavity of $F(\cdot, x^*)$ yield for $\bar x \ne x^*$ the contradiction

$$0 \;\ge\; F\Big(\tfrac{1}{2}(\bar x + x^*), x^*\Big) \;>\; \tfrac{1}{2}\Big(F(\bar x, x^*) + F(x^*, x^*)\Big) = 0, \qquad (62)$$

where the last equation follows from Corollary 1 and continuity of $F$. ∎

Now let us shortly discuss assumptions A.8 and A.9. First, note that pseudomonotonicity in the sense of A.8 lies between pseudomonotonicity w.r.t. the solution set and monotonicity, i.e. skew-symmetry in the works of

Antipin [?,?], at least when $F(z, z) = 0$ for any $z \in K$. Clearly, strict convexity (concavity) is rather restrictive, since it implies uniqueness of solutions. Nevertheless, it is not unusual in the literature on Nash equilibrium problems; see e.g. [?]. We should mention that the latter authors require this type of strict convexity assumption for a constrained reformulation of the Nash game, whereas in the present paper we only require it to make use of zone-coercive Bregman-like functions, which yield unconstrained subproblems. Further, they make use of a regularization term with Lipschitz continuous gradient; we did not require such a hypothesis for this purpose.

Another assumption is called strict complementarity; see e.g. [?,?]. However, this assumption seems to be harder to check a priori. An optimal, problem-tailored condition would look like:

A.* If $F$ is pseudomonotone with respect to the solution set, $x^* \in K$ is an equilibrium and $\bar x \in K$ is such that $F(\bar x, x^*) = 0$, then $\bar x$ is an equilibrium as well.

This condition is closely related to the so-called cut property (see [?]). In [?] the convergence of the BPPA for pseudomonotone variational problems is studied under an additional condition which seems to be rather mild [?,?]. However, sufficient conditions on the objective functions $f_i$ to fulfill this property seem to be unknown in the literature. Discussing nonlinear optimization problems, letting $F(x, y) = f(y) - f(x)$, we already obtain $f(\bar x) = f(x^*)$, so that $\bar x$ is optimal and A.* holds automatically. For the covered situation of variational inequality problems the concept of pseudomonotonicity turns out to be appropriate [?,?].

With respect to Algorithm 3 the discussion is quite the same. By continuity of $F$ and passing to the limit in (44), again $F(\bar x, x^*) = 0$ is obtained, such that the argumentation concerning Algorithm 2 can be applied again. Finally, we prove another result.
The underlying hypothesis is the following:

A.10 There is an equilibrium $x^* \in \operatorname{int} K$, and $\partial_2 F$ is locally bounded on an open set containing $K$.

Of course, A.10 is quite hard to check a priori; nevertheless, there is a sufficient condition in [?], where A.10 is investigated for the case of a rather general scheme of variational inequalities. However, we can prove:

Lemma 8. Assume that in addition to the above assumptions also A.10 holds true. Then each cluster point of $\{x^k\}$, generated by Algorithm 2, is an equilibrium.

Proof. As a consequence of the iteration scheme and Corollary 1 we have

$$\lim_{j \to \infty} \big\langle f^{j+1, j+1},\, x - x^{j+1} \big\rangle \ge 0 \qquad \text{for all } x \in K. \qquad (63)$$

Owing to the Brøndsted-Rockafellar property (cf. [?]) of the $\varepsilon$-subdifferential and convexity of $F(x^{j+1}, \cdot)$, we know that for each $j$ there are $\tilde x^{j+1}$ and some $\tilde f^{j+1, j+1} \in \partial_2 F(\tilde x^{j+1}, \tilde x^{j+1})$ such that

$$\|\tilde x^{j+1} - x^{j+1}\| \le \varepsilon_{j+1} \quad \text{and} \quad \|\tilde f^{j+1, j+1} - f^{j+1, j+1}\| \le \varepsilon_{j+1}. \qquad (64)$$

Since $\partial_2 F$ is locally bounded on an open set containing $K$, we can without loss of generality assume the existence of some $\bar f \in \partial_2 F(\bar x, \bar x)$ such that

$$\lim_{j \to \infty} \tilde f^{j+1, j+1} = \lim_{j \to \infty} f^{j+1, j+1} = \bar f. \qquad (65)$$

Recall that as a consequence of zone-coerciveness also the so-called boundary coerciveness holds true, i.e.

$$\big\langle \nabla h(x^j),\, x^* - x^j \big\rangle \to -\infty \qquad (66)$$

whenever $\{x^j\} \to \bar x$ and $\bar x$ belongs to the boundary of $K$. In this situation (66) obviously contradicts the known convergence of $\{D_h(x^*, x^j)\}$. Therefore, any cluster point of $\{x^k\}$ has to belong to $\operatorname{int} K$. This in turn implies the existence of $\nabla h(\bar x)$ also in the case of zone-coercive regularizing functionals. In consequence,

$$\nabla h(x^{j+1}) - \nabla h(x^j) \to 0 \qquad (67)$$

for $j \to \infty$. Now, due to the boundedness of the generated sequence $\{x^k\}$, we obtain by passing to the limit $j \to \infty$ in the iteration scheme:

$$\big\langle \bar f,\, x - \bar x \big\rangle = \lim_{j \to \infty} \big\langle f^{j+1, j+1},\, x - x^{j+1} \big\rangle \ge 0 \qquad \text{for all } x \in K. \qquad (68)$$

Therefore $\bar x \in \operatorname{Arg\,min}_{x \in K} F(\bar x, x)$ follows, i.e. each cluster point of $\{x^k\}$ is a solution. ∎

Again, a similar result for Algorithm 3 can be derived quite analogously.
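The interior-point effect of zone-coercive regularization can be made explicit for the entropy $h(x) = \sum_i x_i \ln x_i$ on $K = \mathbb{R}^n_+$ with a linear cost. This is a hedged sketch under these assumptions, not the paper's algorithm: the proximal subproblem $\min_x \langle c, x \rangle + \chi D_h(x, x^k)$ has the closed-form solution $x^{k+1}_i = x^k_i e^{-c_i/\chi}$, which is automatically strictly positive, so the nonnegativity constraints never become active.

```python
import numpy as np

# Sketch (assumed setup, not the paper's algorithm): one entropy-prox step
#   min_x <c, x> + chi * D_h(x, x_k),  h(x) = sum_i x_i*ln(x_i), K = R^n_+.
# Stationarity c + chi*(log x - log x_k) = 0 gives the multiplicative update
#   x_{k+1, i} = x_{k, i} * exp(-c_i / chi),
# so every iterate stays in int K: the subproblem is effectively unconstrained.

def entropy_prox_step(x_k, c, chi):
    return x_k * np.exp(-c / chi)

x = np.array([1.0, 1.0, 1.0])
c = np.array([2.0, -1.0, 0.5])
for _ in range(50):
    x = entropy_prox_step(x, c, chi=1.0)
print(x.min() > 0.0)   # True: the iterates never leave the interior of K
```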

7 Concluding Remarks

We investigated a proximal-like scheme to solve a class of fixed-point problems, which in turn covers several other important classes such as Nash equilibrium problems, variational inequalities and nonlinear optimization problems. In comparison to the original work [?], the proof techniques presented in this paper permit several essential advances: theoretical advances, for example the applicability of the method for unbounded sets $K$, and numerical advances, for example the possibility to solve the generated subproblems inexactly, as well as a relaxation concerning the sequence of subgradient parameters $\varepsilon_k$. From now on, for example, $\varepsilon_k = k^{-2}$ might be used, which probably contributes to a better numerical performance of the discussed algorithms. Also, in the case of imperfect foresight the regularization parameters $\chi_i^k, \chi_o^k$ do not have to be the same in our method.

Further, we proved some auxiliary results, like the convergence of $\{D_h(x^*, x^k)\}$, without the hypothesis of a Bregman function with Lipschitz continuous gradient, which enables us to prove convergence also for zone-coercive Bregman-like functions, although, admittedly, a rather restrictive additional assumption on the problem data is required. Possibly one might construct singular examples which can do without such a property of strict convexity type, but in general such an additional assumption which does not imply uniqueness of the solution seems to be unknown in the literature.


CONVERGENCE AND STABILITY OF A REGULARIZATION METHOD FOR MAXIMAL MONOTONE INCLUSIONS AND ITS APPLICATIONS TO CONVEX OPTIMIZATION Variational Analysis and Appls. F. Giannessi and A. Maugeri, Eds. Kluwer Acad. Publ., Dordrecht, 2004 CONVERGENCE AND STABILITY OF A REGULARIZATION METHOD FOR MAXIMAL MONOTONE INCLUSIONS AND ITS APPLICATIONS

More information

1 Directional Derivatives and Differentiability

1 Directional Derivatives and Differentiability Wednesday, January 18, 2012 1 Directional Derivatives and Differentiability Let E R N, let f : E R and let x 0 E. Given a direction v R N, let L be the line through x 0 in the direction v, that is, L :=

More information

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS MATHEMATICS OF OPERATIONS RESEARCH Vol. 28, No. 4, November 2003, pp. 677 692 Printed in U.S.A. ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS ALEXANDER SHAPIRO We discuss in this paper a class of nonsmooth

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

Convex Analysis and Optimization Chapter 4 Solutions

Convex Analysis and Optimization Chapter 4 Solutions Convex Analysis and Optimization Chapter 4 Solutions Dimitri P. Bertsekas with Angelia Nedić and Asuman E. Ozdaglar Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM H. E. Krogstad, IMF, Spring 2012 Karush-Kuhn-Tucker (KKT) Theorem is the most central theorem in constrained optimization, and since the proof is scattered

More information

The Proximal Gradient Method

The Proximal Gradient Method Chapter 10 The Proximal Gradient Method Underlying Space: In this chapter, with the exception of Section 10.9, E is a Euclidean space, meaning a finite dimensional space endowed with an inner product,

More information

U e = E (U\E) e E e + U\E e. (1.6)

U e = E (U\E) e E e + U\E e. (1.6) 12 1 Lebesgue Measure 1.2 Lebesgue Measure In Section 1.1 we defined the exterior Lebesgue measure of every subset of R d. Unfortunately, a major disadvantage of exterior measure is that it does not satisfy

More information

BREGMAN DISTANCES, TOTALLY

BREGMAN DISTANCES, TOTALLY BREGMAN DISTANCES, TOTALLY CONVEX FUNCTIONS AND A METHOD FOR SOLVING OPERATOR EQUATIONS IN BANACH SPACES DAN BUTNARIU AND ELENA RESMERITA January 18, 2005 Abstract The aim of this paper is twofold. First,

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo January 29, 2012 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

GEOMETRIC APPROACH TO CONVEX SUBDIFFERENTIAL CALCULUS October 10, Dedicated to Franco Giannessi and Diethard Pallaschke with great respect

GEOMETRIC APPROACH TO CONVEX SUBDIFFERENTIAL CALCULUS October 10, Dedicated to Franco Giannessi and Diethard Pallaschke with great respect GEOMETRIC APPROACH TO CONVEX SUBDIFFERENTIAL CALCULUS October 10, 2018 BORIS S. MORDUKHOVICH 1 and NGUYEN MAU NAM 2 Dedicated to Franco Giannessi and Diethard Pallaschke with great respect Abstract. In

More information

Optimality, identifiability, and sensitivity

Optimality, identifiability, and sensitivity Noname manuscript No. (will be inserted by the editor) Optimality, identifiability, and sensitivity D. Drusvyatskiy A. S. Lewis Received: date / Accepted: date Abstract Around a solution of an optimization

More information

Lecture 2: Convex Sets and Functions

Lecture 2: Convex Sets and Functions Lecture 2: Convex Sets and Functions Hyang-Won Lee Dept. of Internet & Multimedia Eng. Konkuk University Lecture 2 Network Optimization, Fall 2015 1 / 22 Optimization Problems Optimization problems are

More information

On Total Convexity, Bregman Projections and Stability in Banach Spaces

On Total Convexity, Bregman Projections and Stability in Banach Spaces Journal of Convex Analysis Volume 11 (2004), No. 1, 1 16 On Total Convexity, Bregman Projections and Stability in Banach Spaces Elena Resmerita Department of Mathematics, University of Haifa, 31905 Haifa,

More information

AN INEXACT HYBRID GENERALIZED PROXIMAL POINT ALGORITHM AND SOME NEW RESULTS ON THE THEORY OF BREGMAN FUNCTIONS. M. V. Solodov and B. F.

AN INEXACT HYBRID GENERALIZED PROXIMAL POINT ALGORITHM AND SOME NEW RESULTS ON THE THEORY OF BREGMAN FUNCTIONS. M. V. Solodov and B. F. AN INEXACT HYBRID GENERALIZED PROXIMAL POINT ALGORITHM AND SOME NEW RESULTS ON THE THEORY OF BREGMAN FUNCTIONS M. V. Solodov and B. F. Svaiter May 14, 1998 (Revised July 8, 1999) ABSTRACT We present a

More information

3.10 Lagrangian relaxation

3.10 Lagrangian relaxation 3.10 Lagrangian relaxation Consider a generic ILP problem min {c t x : Ax b, Dx d, x Z n } with integer coefficients. Suppose Dx d are the complicating constraints. Often the linear relaxation and the

More information

Optimality, identifiability, and sensitivity

Optimality, identifiability, and sensitivity Noname manuscript No. (will be inserted by the editor) Optimality, identifiability, and sensitivity D. Drusvyatskiy A. S. Lewis Received: date / Accepted: date Abstract Around a solution of an optimization

More information

Franco Giannessi, Giandomenico Mastroeni. Institute of Mathematics University of Verona, Verona, Italy

Franco Giannessi, Giandomenico Mastroeni. Institute of Mathematics University of Verona, Verona, Italy ON THE THEORY OF VECTOR OPTIMIZATION AND VARIATIONAL INEQUALITIES. IMAGE SPACE ANALYSIS AND SEPARATION 1 Franco Giannessi, Giandomenico Mastroeni Department of Mathematics University of Pisa, Pisa, Italy

More information

FROM WEIERSTRASS TO KY FAN THEOREMS AND EXISTENCE RESULTS ON OPTIMIZATION AND EQUILIBRIUM PROBLEMS. Wilfredo Sosa

FROM WEIERSTRASS TO KY FAN THEOREMS AND EXISTENCE RESULTS ON OPTIMIZATION AND EQUILIBRIUM PROBLEMS. Wilfredo Sosa Pesquisa Operacional (2013) 33(2): 199-215 2013 Brazilian Operations Research Society Printed version ISSN 0101-7438 / Online version ISSN 1678-5142 www.scielo.br/pope FROM WEIERSTRASS TO KY FAN THEOREMS

More information

c 2013 Society for Industrial and Applied Mathematics

c 2013 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 3, No., pp. 109 115 c 013 Society for Industrial and Applied Mathematics AN ACCELERATED HYBRID PROXIMAL EXTRAGRADIENT METHOD FOR CONVEX OPTIMIZATION AND ITS IMPLICATIONS TO SECOND-ORDER

More information

WE consider an undirected, connected network of n

WE consider an undirected, connected network of n On Nonconvex Decentralized Gradient Descent Jinshan Zeng and Wotao Yin Abstract Consensus optimization has received considerable attention in recent years. A number of decentralized algorithms have been

More information

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Carlos Humes Jr. a, Benar F. Svaiter b, Paulo J. S. Silva a, a Dept. of Computer Science, University of São Paulo, Brazil Email: {humes,rsilva}@ime.usp.br

More information

Constrained Optimization Theory

Constrained Optimization Theory Constrained Optimization Theory Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Constrained Optimization Theory IMA, August

More information

Search Directions for Unconstrained Optimization

Search Directions for Unconstrained Optimization 8 CHAPTER 8 Search Directions for Unconstrained Optimization In this chapter we study the choice of search directions used in our basic updating scheme x +1 = x + t d. for solving P min f(x). x R n All

More information

Relaxed Quasimonotone Operators and Relaxed Quasiconvex Functions

Relaxed Quasimonotone Operators and Relaxed Quasiconvex Functions J Optim Theory Appl (2008) 138: 329 339 DOI 10.1007/s10957-008-9382-6 Relaxed Quasimonotone Operators and Relaxed Quasiconvex Functions M.R. Bai N. Hadjisavvas Published online: 12 April 2008 Springer

More information

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version Convex Optimization Theory Chapter 5 Exercises and Solutions: Extended Version Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

(convex combination!). Use convexity of f and multiply by the common denominator to get. Interchanging the role of x and y, we obtain that f is ( 2M ε

(convex combination!). Use convexity of f and multiply by the common denominator to get. Interchanging the role of x and y, we obtain that f is ( 2M ε 1. Continuity of convex functions in normed spaces In this chapter, we consider continuity properties of real-valued convex functions defined on open convex sets in normed spaces. Recall that every infinitedimensional

More information

A FRITZ JOHN APPROACH TO FIRST ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS

A FRITZ JOHN APPROACH TO FIRST ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS A FRITZ JOHN APPROACH TO FIRST ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS Michael L. Flegel and Christian Kanzow University of Würzburg Institute of Applied Mathematics

More information

M. Marques Alves Marina Geremia. November 30, 2017

M. Marques Alves Marina Geremia. November 30, 2017 Iteration complexity of an inexact Douglas-Rachford method and of a Douglas-Rachford-Tseng s F-B four-operator splitting method for solving monotone inclusions M. Marques Alves Marina Geremia November

More information

Chapter 1. Optimality Conditions: Unconstrained Optimization. 1.1 Differentiable Problems

Chapter 1. Optimality Conditions: Unconstrained Optimization. 1.1 Differentiable Problems Chapter 1 Optimality Conditions: Unconstrained Optimization 1.1 Differentiable Problems Consider the problem of minimizing the function f : R n R where f is twice continuously differentiable on R n : P

More information

Robust error estimates for regularization and discretization of bang-bang control problems

Robust error estimates for regularization and discretization of bang-bang control problems Robust error estimates for regularization and discretization of bang-bang control problems Daniel Wachsmuth September 2, 205 Abstract We investigate the simultaneous regularization and discretization of

More information

On the acceleration of augmented Lagrangian method for linearly constrained optimization

On the acceleration of augmented Lagrangian method for linearly constrained optimization On the acceleration of augmented Lagrangian method for linearly constrained optimization Bingsheng He and Xiaoming Yuan October, 2 Abstract. The classical augmented Lagrangian method (ALM plays a fundamental

More information