Second order forward-backward dynamical systems for monotone inclusion problems


Radu Ioan Boţ    Ernö Robert Csetnek

March 2, 2016

Abstract. We begin by considering second order dynamical systems of the form

ẍ(t) + γ(t)ẋ(t) + λ(t)B(x(t)) = 0,

where B : H → H is a cocoercive operator defined on a real Hilbert space H, λ : [0,+∞) → (0,+∞) is a relaxation function and γ : [0,+∞) → (0,+∞) a damping function, both depending on time. We show existence and uniqueness of the generated trajectories as well as their weak asymptotic convergence to a zero of the operator B. The framework allows us to address from a similar perspective second order dynamical systems associated with the problem of finding zeros of the sum of a maximally monotone operator and a cocoercive one. This captures as a particular case the minimization of the sum of a nonsmooth convex function and a smooth convex one. Furthermore, we prove that when B is the gradient of a smooth convex function, the value of the latter converges along the ergodic trajectory to its minimal value with a rate of O(1/t).

Key Words. dynamical systems, monotone inclusions, convex optimization problems, continuous forward-backward method

AMS subject classification. 34G25, 47J25, 47H05, 90C25

1 Introduction and preliminaries

This paper is motivated by the heavy ball with friction dynamical system

ẍ + γẋ + ∇f(x) = 0, (1)

which is a nonlinear oscillator with damping γ > 0 and potential f : H → R, supposed to be a convex and differentiable function defined on the real Hilbert space H. The system (1) is a simplified version of the differential system describing the motion of a heavy ball that keeps rolling over the graph of the function f under its own inertia until friction stops it at a critical point of f (see [5]). Motivated by different models of friction, in [3, 25] a generalized version of (1) has been investigated in finite-dimensional spaces, by replacing the damping γẋ with ∂Φ(ẋ), the convex subdifferential of a convex function Φ : Rⁿ → R at ẋ.
The second order dynamical system (1) has been considered by several authors in the context of minimizing the function f, these investigations being concerned either with the asymptotic convergence of the generated trajectories to a critical point of f or with the convergence of the function value along the trajectories to its global minimum value (see [4, 8, 9, 5]). It is also worth mentioning that the time discretization of the heavy ball with friction dynamical system leads

University of Vienna, Faculty of Mathematics, Oskar-Morgenstern-Platz 1, A-1090 Vienna, Austria, radu.bot@univie.ac.at.
University of Vienna, Faculty of Mathematics, Oskar-Morgenstern-Platz 1, A-1090 Vienna, Austria, ernoe.robert.csetnek@univie.ac.at. Research supported by FWF (Austrian Science Fund), Lise Meitner Programme, project M 1682-N25.

to so-called inertial type algorithms, which are numerical schemes sharing the feature that the current iterate of the generated sequence is defined by making use of the previous two iterates (see, for instance, [4-6, 23, 3, 33]). In order to approach the minimization of f over a nonempty, convex and closed set C ⊆ H, the gradient-projection second order dynamical system

ẍ + γẋ + x − P_C(x − η∇f(x)) = 0 (2)

has been considered, where P_C : H → C denotes the projection onto the set C and η > 0. Convergence statements for the trajectories of (2) to a global minimizer of f over C have been provided in [8, 9]. Furthermore, in [9], these investigations have been expanded to more general second order dynamical systems of the form

ẍ + γẋ + x − T(x) = 0, (3)

where T : H → H is a nonexpansive operator. It has been shown that when γ² > 2 the trajectory of (3) converges weakly to an element in the set of fixed points of T, provided the latter is nonempty.

In this manuscript, we first treat the second order dynamical system

ẍ(t) + γ(t)ẋ(t) + λ(t)B(x(t)) = 0, (4)

where B : H → H is a cocoercive operator, λ : [0,+∞) → (0,+∞) is a relaxation function depending on time and γ : [0,+∞) → (0,+∞) is a continuous damping function. We refer the reader to [1, 2, 3, 6, 24, 26, 27, 37] for other works where second order differential equations with time dependent damping have been considered and investigated in connection with optimization problems. On the other hand, second order dynamical systems governed by cocoercive operators have been recently considered also in [3], however, with constant relaxation and damping functions. The existence and uniqueness of strong global solutions for (4) is obtained by applying the classical Cauchy-Lipschitz-Picard Theorem (see [29]). We also show that under mild assumptions on the relaxation and damping functions the trajectory x(t) converges weakly, as t → +∞, to a zero of the operator B, provided the latter has a nonempty set of zeros.
To this end we use the continuous version of the Opial Lemma (see also [4, 8, 9], where similar techniques have been used). Further, we approach the problem of finding a zero of the sum of a maximally monotone operator and a cocoercive operator via a second order dynamical system formulated by making use of the resolvent of the set-valued operator, see (24). Dynamical systems of implicit type have already been considered in the literature in [1, 2, 4, 7, 9, 21, 22]. We specialize these investigations to the minimization of the sum of a nonsmooth convex function and a smooth convex function, which is approached by means of a second order dynamical system of forward-backward type. This allows us to recover and improve results given in [8, 9] in the context of studying (2). We also emphasize the fact that the explicit discretization of the second order forward-backward dynamical system gives rise to a relaxed forward-backward algorithm with inertial effects. By approaching minimization problems from a continuous perspective we expect to gain more insight into how to properly choose the relaxation and damping parameters in the corresponding iterative schemes in order to improve their convergence behavior. Finally, whenever B is the gradient of a smooth convex function, we show that the value of the latter converges along the ergodic trajectories generated by (4) to its minimum value with a rate of convergence of O(1/t).

Throughout this paper N = {0, 1, 2, ...} denotes the set of nonnegative integers and H a real Hilbert space with inner product ⟨·,·⟩ and corresponding norm ‖·‖ = √⟨·,·⟩.
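The inertial-algorithm connection mentioned above can be made concrete: an explicit finite-difference discretization of (1) yields a gradient step plus a momentum term. A minimal sketch in Python (the quadratic potential, the step size h and the damping value are our illustrative choices, not taken from the paper):

```python
def heavy_ball(grad, x0, h=0.1, gamma=3.0, n_iter=500):
    """Explicit discretization of the heavy ball system x'' + gamma*x' + grad f(x) = 0:
    (x_{n+1} - 2x_n + x_{n-1})/h**2 + gamma*(x_n - x_{n-1})/h + grad(x_n) = 0,
    i.e. x_{n+1} = x_n + (1 - gamma*h)*(x_n - x_{n-1}) - h**2 * grad(x_n)."""
    x_prev = x = x0
    for _ in range(n_iter):
        x, x_prev = x + (1.0 - gamma * h) * (x - x_prev) - h * h * grad(x), x
    return x

# illustrative smooth convex potential f(x) = (x - 1)^2 / 2, so grad f(x) = x - 1
x_min = heavy_ball(lambda x: x - 1.0, x0=5.0)
```

The factor (1 − γh) multiplying the difference of the two previous iterates plays the role of the inertial parameter in the discrete schemes cited above.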

2 Existence and uniqueness of strong global solutions

This section is devoted to the study of existence and uniqueness of strong global solutions of a second order dynamical system governed by Lipschitz continuous operators. Let B : H → H be an L-Lipschitz continuous operator (that is, L ≥ 0 and ‖Bx − By‖ ≤ L‖x − y‖ for all x, y ∈ H), let λ, γ : [0,+∞) → (0,+∞) be Lebesgue measurable functions, u₀, v₀ ∈ H, and consider the dynamical system

ẍ(t) + γ(t)ẋ(t) + λ(t)B(x(t)) = 0, x(0) = u₀, ẋ(0) = v₀. (5)

As in [2, 7], we consider the following definition of an absolutely continuous function.

Definition 1 (see, for instance, [2, 7]) A function x : [0, b] → H (where b > 0) is said to be absolutely continuous if one of the following equivalent properties holds:

(i) there exists an integrable function y : [0, b] → H such that x(t) = x(0) + ∫₀ᵗ y(s) ds for all t ∈ [0, b];

(ii) x is continuous and its distributional derivative ẋ is Lebesgue integrable on [0, b];

(iii) for every ε > 0 there exists η > 0 such that for any finite family of pairwise disjoint intervals I_k = (a_k, b_k) ⊆ [0, b] we have the implication: Σ_k (b_k − a_k) < η implies Σ_k ‖x(b_k) − x(a_k)‖ < ε.

Remark 1 (a) It follows from the definition that an absolutely continuous function is differentiable almost everywhere, its derivative coincides with its distributional derivative almost everywhere, and one can recover the function from its derivative ẋ = y by the integration formula (i).

(b) If x : [0, b] → H (where b > 0) is absolutely continuous and B : H → H is L-Lipschitz continuous, then the function z = B ∘ x is absolutely continuous, too. This can easily be seen by using the characterization of absolute continuity in Definition 1(iii). Moreover, z is differentiable almost everywhere and the inequality ‖ż(t)‖ ≤ L‖ẋ(t)‖ holds for almost every t ∈ [0, b].

Definition 2 We say that x : [0,+∞) → H is a strong global solution of (5) if the following properties are satisfied:

(i) x, ẋ : [0,+∞) → H are locally absolutely continuous, in other words, absolutely continuous on each interval [0, b] for 0 < b < +∞;

(ii) ẍ(t) + γ(t)ẋ(t) + λ(t)B(x(t)) = 0 for almost every t ∈ [0,+∞);

(iii) x(0) = u₀ and ẋ(0) = v₀.
For proving the existence and uniqueness of strong global solutions of (5) we use the Cauchy-Lipschitz-Picard Theorem for absolutely continuous trajectories. The key observation here is that one can rewrite (5) as a particular first order dynamical system in a suitably chosen product space (see also [7]).

Theorem 2 Let B : H → H be an L-Lipschitz continuous operator and λ, γ : [0,+∞) → (0,+∞) be Lebesgue measurable functions such that λ, γ ∈ L¹_loc([0,+∞)) (that is, λ, γ ∈ L¹([0, b]) for every 0 < b < +∞). Then for each u₀, v₀ ∈ H there exists a unique strong global solution of the dynamical system (5).
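The rewriting of (5) as a first order system in the product space mentioned above also gives a direct way to integrate the trajectory numerically. A minimal sketch (explicit Euler on Y = (x, ẋ); the operator B(x) = x − 1, whose only zero is 1, and the constant choices λ(t) ≡ 1, γ(t) ≡ 3 are our illustrative assumptions):

```python
# Phase-space form of (5): Y = (x, v) with v = x', and
# Y'(t) = F(t, Y(t)) = (v, -gam(t)*v - lam(t)*B(x)).

def B(x): return x - 1.0        # illustrative cocoercive operator, Zer B = {1}
def lam(t): return 1.0          # relaxation function (constant here)
def gam(t): return 3.0          # damping function (constant here)

def euler(x0, v0, h=1e-3, t_end=20.0):
    x, v, t = x0, v0, 0.0
    for _ in range(int(t_end / h)):
        # simultaneous update of both components of Y
        x, v = x + h * v, v + h * (-gam(t) * v - lam(t) * B(x))
        t += h
    return x, v

x_T, v_T = euler(x0=5.0, v0=0.0)   # trajectory approaches the zero of B
```

With these choices both characteristic roots of the linear equation have negative real part, so x(t) settles at the zero of B and ẋ(t) vanishes, in line with the convergence results of the next section.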

Proof. The system (5) can be equivalently written as a first order dynamical system in the phase space H × H:

Ẏ(t) = F(t, Y(t)), Y(0) = (u₀, v₀), (6)

with Y : [0,+∞) → H × H, Y(t) = (x(t), ẋ(t)), and

F : [0,+∞) × H × H → H × H, F(t, u, v) = (v, −γ(t)v − λ(t)B(u)).

We endow H × H with the scalar product ⟨(u, v), (ū, v̄)⟩_{H×H} = ⟨u, ū⟩ + ⟨v, v̄⟩ and corresponding norm ‖(u, v)‖_{H×H} = √(‖u‖² + ‖v‖²).

(a) For arbitrary u, ū, v, v̄ ∈ H, by using the Lipschitz continuity of B, we obtain for every t ≥ 0

‖F(t, u, v) − F(t, ū, v̄)‖_{H×H} = √(‖v − v̄‖² + ‖γ(t)(v − v̄) + λ(t)(Bu − Bū)‖²)
≤ √((1 + 2γ²(t))‖v − v̄‖² + 2L²λ²(t)‖u − ū‖²)
≤ √(1 + 2γ²(t) + 2L²λ²(t)) ‖(u, v) − (ū, v̄)‖_{H×H}.

Since √(1 + 2γ²(t) + 2L²λ²(t)) ≤ 1 + √2 γ(t) + √2 L λ(t) and λ, γ ∈ L¹_loc([0,+∞)), the Lipschitz constant of F(t, ·, ·) is locally integrable.

(b) Next we show that

for all u, v ∈ H and all b > 0, F(·, u, v) ∈ L¹([0, b], H × H). (7)

For arbitrary u, v ∈ H and b > 0 it holds

∫₀ᵇ ‖F(t, u, v)‖_{H×H} dt = ∫₀ᵇ √(‖v‖² + ‖γ(t)v + λ(t)Bu‖²) dt ≤ ∫₀ᵇ (‖v‖ + γ(t)‖v‖ + λ(t)‖Bu‖) dt,

and from here, by using the assumptions made on λ and γ, (7) follows.

In the light of the statements (a) and (b), the existence and uniqueness of a strong global solution of (6) follow from the Cauchy-Lipschitz-Picard Theorem for first order dynamical systems (see, for example, [29, Proposition 6.2.1]). The conclusion is a consequence of the equivalence of (5) and (6). □

3 Convergence of the trajectories

In this section we address the convergence properties of the trajectories generated by the dynamical system (5) by assuming that B : H → H is β-cocoercive for β > 0, that is,

β‖Bx − By‖² ≤ ⟨x − y, Bx − By⟩ for all x, y ∈ H.

This implies that B is (1/β)-Lipschitz continuous. If B = ∇g, where g : H → R is a convex and differentiable function such that ∇g is (1/β)-Lipschitz continuous, then the reverse implication holds, too. Indeed, according to the Baillon-Haddad Theorem, ∇g is a β-cocoercive operator (see [8, Corollary 18.16]). The following results, which can be interpreted as continuous versions of the quasi-Fejér monotonicity of sequences, play an important role in the forthcoming investigations. For their proofs we refer the reader to [2, Lemma 5.1] and [2, Lemma 5.2], respectively.
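The cocoercivity inequality above is easy to test numerically. A small illustrative sketch (the quadratic operator, the matrix and the value of β are our assumptions; for B = ∇g with g quadratic, the Baillon-Haddad theorem gives β = 1/L, where L is the largest eigenvalue):

```python
import random

# beta-cocoercivity check for B(p) = A p with A symmetric positive semidefinite:
# beta * ||Bx - By||^2 <= <x - y, Bx - By> should hold with beta = 1/lambda_max(A).
A = [[2.0, 0.0], [0.0, 0.5]]      # eigenvalues 2 and 0.5, so beta = 1/2
beta = 0.5

def B(p):
    return [sum(A[i][j] * p[j] for j in range(2)) for i in range(2)]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

random.seed(0)
cocoercive = True
for _ in range(100):
    x = [random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)]
    y = [random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)]
    d = [x[i] - y[i] for i in range(2)]
    Bd = [B(x)[i] - B(y)[i] for i in range(2)]
    cocoercive = cocoercive and beta * dot(Bd, Bd) <= dot(d, Bd) + 1e-9
```

For a diagonal matrix the inequality reduces coordinatewise to β·aᵢ ≤ 1, which holds here with equality in the first coordinate, so β = 1/2 is the best possible constant for this operator.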

Lemma 3 Suppose that F : [0,+∞) → R is locally absolutely continuous and bounded below and that there exists G ∈ L¹([0,+∞)) such that for almost every t ∈ [0,+∞)

(d/dt)F(t) ≤ G(t).

Then there exists lim_{t→+∞} F(t) ∈ R.

Lemma 4 If 1 ≤ p < ∞, 1 ≤ r ≤ ∞, F : [0,+∞) → [0,+∞) is locally absolutely continuous, F ∈ L^p([0,+∞)), G : [0,+∞) → R, G ∈ L^r([0,+∞)) and for almost every t ∈ [0,+∞)

(d/dt)F(t) ≤ G(t),

then lim_{t→+∞} F(t) = 0.

The next result, which we recall here, is the continuous version of the Opial Lemma (see, for example, [2, Lemma 5.3] and [1]).

Lemma 5 Let S ⊆ H be a nonempty set and x : [0,+∞) → H a given map. Assume that

(i) for every x* ∈ S, lim_{t→+∞} ‖x(t) − x*‖ exists;

(ii) every weak sequential cluster point of the map x belongs to S.

Then there exists x∞ ∈ S such that x(t) converges weakly to x∞ as t → +∞.

In order to prove the convergence of the trajectories of (5), we make the following assumptions on the relaxation function λ and the damping function γ:

(A1) λ, γ : [0,+∞) → (0,+∞) are locally absolutely continuous and there exists θ > 0 such that for almost every t ∈ [0,+∞) we have

γ̇(t) ≤ 0 ≤ λ̇(t) and γ²(t)/λ(t) ≥ θ + 1/β. (8)

Due to Definition 1 and Remark 1(a), λ̇(t) and γ̇(t) exist for almost every t ≥ 0 and λ̇, γ̇ are Lebesgue integrable on each interval [0, b] for 0 < b < +∞. This, combined with γ̇(t) ≤ 0 ≤ λ̇(t) and the fact that λ and γ take only positive values, yields the existence of a positive lower bound λ_min for λ and of a positive upper bound γ_max for γ. Furthermore, the second inequality in (8) provides also a positive upper bound λ_max for λ and a positive lower bound γ_min for γ. Notice that the couple of functions λ(t) = −a e^{−ρt} + b and γ(t) = a′ e^{−ρ′t} + b′, where a, a′, ρ, ρ′ ≥ 0 and b, b′ > 0 are such that a < b and (b′)²/b > 1/β, verifies the conditions in assumption (A1). We would also like to point out that under the conditions considered in (A1) the global version of the Picard-Lindelöf Theorem allows us to conclude that, for u₀, v₀ ∈ H, there exists a unique trajectory x : [0,+∞) → H which is a C²-function and which satisfies the relation (ii) in Definition 2 for every t ∈ [0,+∞). The considerations we make in the following take this fact into account.
Let us also mention that in case γ(t) ≡ γ and λ(t) ≡ λ for every t ∈ [0,+∞), where γ, λ > 0, the assumption (A1) becomes γ²β > λ, a condition which has been used in [3] in relation with the study of the asymptotic convergence of the dynamical system (5). We state now the convergence result.

Theorem 6 Let B : H → H be a β-cocoercive operator for β > 0 such that Zer B := {u ∈ H : Bu = 0} ≠ ∅, let λ, γ : [0,+∞) → (0,+∞) be functions fulfilling (A1) and u₀, v₀ ∈ H. Let x : [0,+∞) → H be the unique strong global solution of (5). Then the following statements are true:

(i) the trajectory x is bounded and ẋ, ẍ, B(x(·)) ∈ L²([0,+∞); H);

(ii) lim_{t→+∞} ẋ(t) = lim_{t→+∞} ẍ(t) = lim_{t→+∞} B(x(t)) = 0;

(iii) x(t) converges weakly to an element of Zer B as t → +∞.

Proof. (i) Take an arbitrary x* ∈ Zer B and consider for every t ∈ [0,+∞) the function h(t) = (1/2)‖x(t) − x*‖². We have ḣ(t) = ⟨x(t) − x*, ẋ(t)⟩ and ḧ(t) = ‖ẋ(t)‖² + ⟨x(t) − x*, ẍ(t)⟩ for every t ∈ [0,+∞). Taking into account (5), we get for every t ∈ [0,+∞)

ḧ(t) + γ(t)ḣ(t) + λ(t)⟨x(t) − x*, B(x(t))⟩ = ‖ẋ(t)‖². (9)

The cocoercivity of B and the fact that B(x*) = 0 yield for every t ∈ [0,+∞)

ḧ(t) + γ(t)ḣ(t) + βλ(t)‖B(x(t))‖² ≤ ‖ẋ(t)‖². (10)

Taking again (5) into account, one obtains for every t ∈ [0,+∞)

ḧ(t) + γ(t)ḣ(t) + (β/λ(t))‖ẍ(t) + γ(t)ẋ(t)‖² ≤ ‖ẋ(t)‖²

or, equivalently,

ḧ(t) + γ(t)ḣ(t) + (β/λ(t))‖ẍ(t)‖² + (βγ²(t)/λ(t) − 1)‖ẋ(t)‖² + (βγ(t)/λ(t))(d/dt)‖ẋ(t)‖² ≤ 0. (11)

Combining this inequality with

(βγ(t)/λ(t))(d/dt)‖ẋ(t)‖² = β(d/dt)[(γ(t)/λ(t))‖ẋ(t)‖²] − β((γ̇(t)λ(t) − γ(t)λ̇(t))/λ²(t))‖ẋ(t)‖²

and

γ(t)ḣ(t) = (d/dt)(γ(t)h(t)) − γ̇(t)h(t) ≥ (d/dt)(γ(t)h(t)),

it yields for almost every t ∈ [0,+∞)

ḧ(t) + (d/dt)(γ(t)h(t)) + β(d/dt)[(γ(t)/λ(t))‖ẋ(t)‖²] + (βγ²(t)/λ(t) − 1 − β(γ̇(t)λ(t) − γ(t)λ̇(t))/λ²(t))‖ẋ(t)‖² + (β/λ(t))‖ẍ(t)‖² ≤ 0.

Now assumption (A1) delivers for almost every t ∈ [0,+∞) the inequality

ḧ(t) + (d/dt)(γ(t)h(t)) + β(d/dt)[(γ(t)/λ(t))‖ẋ(t)‖²] + βθ‖ẋ(t)‖² + (β/λ_max)‖ẍ(t)‖² ≤ 0. (12)

This implies that the function t ↦ ḣ(t) + γ(t)h(t) + β(γ(t)/λ(t))‖ẋ(t)‖², which is locally absolutely continuous, is monotonically decreasing. Hence there exists a real number M such that for every t ∈ [0,+∞)

ḣ(t) + γ(t)h(t) + β(γ(t)/λ(t))‖ẋ(t)‖² ≤ M, (13)

which yields for every t ∈ [0,+∞)

ḣ(t) + γ_min h(t) ≤ M.

By multiplying this inequality with exp(γ_min t) and then integrating from 0 to T, where T > 0, one easily obtains

h(T) ≤ h(0) exp(−γ_min T) + (M/γ_min)(1 − exp(−γ_min T)),

thus

h is bounded (14)

and, consequently,

the trajectory x is bounded. (15)

On the other hand, from (13) it follows that for every t ∈ [0,+∞)

ḣ(t) + β(γ_min/λ_max)‖ẋ(t)‖² ≤ M,

that is,

⟨x(t) − x*, ẋ(t)⟩ + β(γ_min/λ_max)‖ẋ(t)‖² ≤ M.

This inequality in combination with (15) yields that

ẋ is bounded, (16)

which further implies that

ḣ is bounded. (17)

Integrating the inequality (12), we obtain that there exists a real number N such that for every t ∈ [0,+∞)

ḣ(t) + γ(t)h(t) + β(γ(t)/λ(t))‖ẋ(t)‖² + βθ ∫₀ᵗ ‖ẋ(s)‖² ds + (β/λ_max) ∫₀ᵗ ‖ẍ(s)‖² ds ≤ N.

From here, via (14)-(17), we conclude that ẋ, ẍ ∈ L²([0,+∞); H). Finally, from (5) and (A1) we deduce B(x(·)) ∈ L²([0,+∞); H) and the proof of (i) is complete.

(ii) For every t ∈ [0,+∞) it holds

(d/dt)((1/2)‖ẋ(t)‖²) = ⟨ẋ(t), ẍ(t)⟩ ≤ (1/2)‖ẋ(t)‖² + (1/2)‖ẍ(t)‖²,

and Lemma 4 together with (i) lead to lim_{t→+∞} ẋ(t) = 0. Further, by taking into consideration Remark 1(b), for almost every t ∈ [0,+∞) we have

(d/dt)((1/2)‖B(x(t))‖²) = ⟨B(x(t)), (d/dt)B(x(t))⟩ ≤ (1/2)‖B(x(t))‖² + (1/(2β²))‖ẋ(t)‖².

By using again Lemma 4 and (i) we get lim_{t→+∞} B(x(t)) = 0, while the fact that lim_{t→+∞} ẍ(t) = 0 follows from (5) and (A1).

(iii) We are going to prove that both assumptions of the Opial Lemma (Lemma 5, applied to S = Zer B) are fulfilled. The first one concerns the existence of lim_{t→+∞} ‖x(t) − x*‖. As seen in the proof of part (i), the function t ↦ ḣ(t) + γ(t)h(t) + β(γ(t)/λ(t))‖ẋ(t)‖² is monotonically decreasing, thus from (i), (ii) and (A1) we deduce that lim_{t→+∞} (ḣ(t) + γ(t)h(t)) exists and is a real number. By taking also into account that lim_{t→+∞} ḣ(t) = 0 and that lim_{t→+∞} γ(t) exists and is positive, we obtain the existence of lim_{t→+∞} ‖x(t) − x*‖.

We come now to the second assumption of the Opial Lemma. Let x̄ be a weak sequential cluster point of x, that is, there exists a sequence tₙ → +∞ as n → +∞ such that (x(tₙ))ₙ∈ℕ converges weakly to x̄. Since B is a maximally monotone operator (see, for instance, [8, Example 20.28]), its graph is sequentially closed with respect to the weak-strong topology of the product space H × H. By using also that lim_{n→+∞} B(x(tₙ)) = 0, we conclude that B(x̄) = 0, hence x̄ ∈ Zer B and the proof is complete. □

A standard choice of a cocoercive operator defined on a real Hilbert space is B = Id − T, where T : H → H is a nonexpansive operator, that is, a 1-Lipschitz continuous operator. As easily follows from the nonexpansiveness of T, B is in this case 1/2-cocoercive. For this particular operator B the dynamical system (5) becomes

ẍ(t) + γ(t)ẋ(t) + λ(t)(x(t) − T(x(t))) = 0, x(0) = u₀, ẋ(0) = v₀, (18)

while assumption (A1) reads:

(A2) λ, γ : [0,+∞) → (0,+∞) are locally absolutely continuous and there exists θ > 0 such that for almost every t ∈ [0,+∞) we have

γ̇(t) ≤ 0 ≤ λ̇(t) and γ²(t)/λ(t) ≥ 2 + θ. (19)

Theorem 6 gives rise to the following result.

Corollary 7 Let T : H → H be a nonexpansive operator such that Fix T := {u ∈ H : Tu = u} ≠ ∅, let λ, γ : [0,+∞) → (0,+∞) be functions fulfilling (A2) and u₀, v₀ ∈ H. Let x : [0,+∞) → H be the unique strong global solution of (18). Then the following statements are true:

(i) the trajectory x is bounded and ẋ, ẍ, (Id − T)(x(·)) ∈ L²([0,+∞); H);

(ii) lim_{t→+∞} ẋ(t) = lim_{t→+∞} ẍ(t) = lim_{t→+∞} (Id − T)(x(t)) = 0;

(iii) x(t) converges weakly to a point in Fix T as t → +∞.

Remark 8 In the particular case when γ(t) ≡ γ > 0 and λ(t) ≡ 1 for every t ∈ [0,+∞), the dynamical system (18) becomes

ẍ(t) + γẋ(t) + x(t) − T(x(t)) = 0, x(0) = u₀, ẋ(0) = v₀. (20)

The convergence of the trajectories generated by (20) has been studied in [9, Theorem 3.2] under the condition γ² > 2. In this case (A2) is obviously fulfilled for an arbitrary θ with 0 < θ ≤ γ² − 2. However, different to [9], we allow in Corollary 7 nonconstant damping and relaxation functions depending on time.
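Corollary 7 can be illustrated numerically. A sketch with the genuinely nonexpansive (not contractive) map T = rotation by π/2 on R², whose only fixed point is the origin; the choices γ = 2 (so γ² > 2), λ(t) ≡ 1, the step size and the horizon are all our illustrative assumptions:

```python
# Explicit Euler discretization of (20): x'' + gamma*x' + (Id - T)(x) = 0,
# with T a rotation by pi/2 (a nonexpansive isometry, Fix T = {(0, 0)}).

def T(p):
    x, y = p
    return (-y, x)

def simulate(p0, gamma=2.0, h=1e-3, t_end=30.0):
    p, v = list(p0), [0.0, 0.0]
    for _ in range(int(t_end / h)):
        Tp = T(p)
        a = [-gamma * v[i] - (p[i] - Tp[i]) for i in range(2)]  # acceleration from (20)
        p = [p[i] + h * v[i] for i in range(2)]
        v = [v[i] + h * a[i] for i in range(2)]
    return p

p_end = simulate((3.0, -1.0))   # approaches the unique fixed point (0, 0)
```

Note that Banach's fixed point theorem does not apply to this T, while the damped second order dynamics still drive the trajectory to Fix T, as the corollary predicts.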
We would also like to mention that in [4] an anisotropic damping has been considered in the context of approaching the minimization of a smooth convex function via second order dynamical systems.

We close the section by addressing an immediate consequence of the above corollary applied to second order dynamical systems governed by averaged operators. The operator R : H → H is said to be α-averaged for α ∈ (0, 1) if there exists a nonexpansive operator T : H → H such that R = (1 − α)Id + αT. For α = 1/2 we obtain as an important representative of this class the firmly nonexpansive operators. For properties and insights concerning these families of operators we refer to the monograph [8]. We consider the dynamical system

ẍ(t) + γ(t)ẋ(t) + λ(t)(x(t) − R(x(t))) = 0, x(0) = u₀, ẋ(0) = v₀ (21)

and formulate the assumption:

(A3) λ, γ : [0,+∞) → (0,+∞) are locally absolutely continuous and there exists θ > 0 such that for almost every t ∈ [0,+∞) we have

γ̇(t) ≤ 0 ≤ λ̇(t) and γ²(t)/λ(t) ≥ 2α + θ. (22)

Corollary 9 Let R : H → H be an α-averaged operator for α ∈ (0, 1) such that Fix R ≠ ∅, let λ, γ : [0,+∞) → (0,+∞) be functions fulfilling (A3) and u₀, v₀ ∈ H. Let x : [0,+∞) → H be the unique strong global solution of (21). Then the following statements are true:

(i) the trajectory x is bounded and ẋ, ẍ, (Id − R)(x(·)) ∈ L²([0,+∞); H);

(ii) lim_{t→+∞} ẋ(t) = lim_{t→+∞} ẍ(t) = lim_{t→+∞} (Id − R)(x(t)) = 0;

(iii) x(t) converges weakly to a point in Fix R as t → +∞.

Proof. Since R is α-averaged, there exists a nonexpansive operator T : H → H such that R = (1 − α)Id + αT. The conclusion is a direct consequence of Corollary 7, by taking into account that (21) is equivalent to

ẍ(t) + γ(t)ẋ(t) + αλ(t)(x(t) − T(x(t))) = 0, x(0) = u₀, ẋ(0) = v₀,

and that Fix R = Fix T. □

4 Forward-backward second order dynamical systems

In this section we approach the monotone inclusion problem

find x ∈ H such that 0 ∈ Ax + Bx,

where A : H ⇒ H is a maximally monotone operator and B : H → H is a β-cocoercive operator for β > 0, via a second order forward-backward dynamical system with relaxation and damping functions depending on time. For the reader's convenience we recall at the beginning some standard notions and results in monotone operator theory (see also [8, 2, 36]). For an arbitrary set-valued operator A : H ⇒ H we denote by Gr A = {(x, u) ∈ H × H : u ∈ Ax} its graph. We use also the notation Zer A = {x ∈ H : 0 ∈ Ax} for the set of zeros of A. We say that A is monotone if ⟨x − y, u − v⟩ ≥ 0 for all (x, u), (y, v) ∈ Gr A. A monotone operator A is said to be maximally monotone if there exists no proper monotone extension of the graph of A on H × H. The resolvent of A, J_A : H ⇒ H, is defined by J_A = (Id + A)⁻¹. If A is maximally monotone, then J_A : H → H is single-valued and maximally monotone (see [8, Proposition 23.7 and Corollary 23.10]). For an arbitrary γ > 0 we have (see [8, Proposition 23.2])

p = J_{γA}(x) if and only if (p, (x − p)/γ) ∈ Gr A. (23)
The operator A is said to be uniformly monotone if there exists an increasing function φ_A : [0,+∞) → [0,+∞] that vanishes only at 0 and fulfills ⟨x − y, u − v⟩ ≥ φ_A(‖x − y‖) for all (x, u), (y, v) ∈ Gr A. A popular class of operators having this property is that of strongly monotone operators. We say that A is γ-strongly monotone for γ > 0 if ⟨x − y, u − v⟩ ≥ γ‖x − y‖² for all (x, u), (y, v) ∈ Gr A. For η > 0 we consider the dynamical system

ẍ(t) + γ(t)ẋ(t) + λ(t)[x(t) − J_{ηA}(x(t) − ηB(x(t)))] = 0, x(0) = u₀, ẋ(0) = v₀. (24)

Further, we consider the following assumption, where δ := (4β − η)/(2β):

(A4) λ, γ : [0,+∞) → (0,+∞) are locally absolutely continuous and there exists θ > 0 such that for almost every t ∈ [0,+∞) we have

γ̇(t) ≤ 0 ≤ λ̇(t) and γ²(t)/λ(t) ≥ 2/δ + θ. (25)

Theorem 10 Let A : H ⇒ H be a maximally monotone operator and B : H → H be a β-cocoercive operator for β > 0 such that Zer(A + B) ≠ ∅. Let η ∈ (0, 2β) and set δ := (4β − η)/(2β). Let λ, γ : [0,+∞) → (0,+∞) be functions fulfilling (A4), u₀, v₀ ∈ H and x : [0,+∞) → H be the unique strong global solution of (24). Then the following statements are true:

(i) the trajectory x is bounded and ẋ, ẍ, (Id − J_{ηA} ∘ (Id − ηB))(x(·)) ∈ L²([0,+∞); H);

(ii) lim_{t→+∞} ẋ(t) = lim_{t→+∞} ẍ(t) = lim_{t→+∞} (Id − J_{ηA} ∘ (Id − ηB))(x(t)) = 0;

(iii) x(t) converges weakly to a point in Zer(A + B) as t → +∞;

(iv) if x* ∈ Zer(A + B), then B(x(·)) − B(x*) ∈ L²([0,+∞); H), lim_{t→+∞} B(x(t)) = B(x*) and B is constant on Zer(A + B);

(v) if A or B is uniformly monotone, then x(t) converges strongly to the unique point in Zer(A + B) as t → +∞.

Proof. (i)-(iii) It is immediate that the dynamical system (24) can be written in the form

ẍ(t) + γ(t)ẋ(t) + λ(t)(x(t) − R(x(t))) = 0, x(0) = u₀, ẋ(0) = v₀, (26)

where R = J_{ηA} ∘ (Id − ηB). According to [8, Corollary 23.8 and Remark 4.24(iii)], J_{ηA} is 1/2-averaged. Moreover, by [8, Proposition 4.33], Id − ηB is η/(2β)-averaged. Combining this with [32, Theorem 3(b)], we derive that R is 1/δ-averaged. The statements (i)-(iii) follow now from Corollary 9 by noticing that Fix R = Zer(A + B) (see [8, Proposition 25.1(iv)]).

(iv) The fact that B is constant on Zer(A + B) follows from the cocoercivity of B and the monotonicity of A. A proof of this statement when A is the subdifferential of a proper, convex and lower semicontinuous function is given, for instance, in [1, Lemma 1.7]. Let x* ∈ Zer(A + B) be arbitrary. From the definition of the resolvent, (24) yields for every t ∈ [0,+∞)

−B(x(t)) − (1/(ηλ(t)))(ẍ(t) + γ(t)ẋ(t)) ∈ A( x(t) + (1/λ(t))(ẍ(t) + γ(t)ẋ(t)) ), (27)

which combined with −B(x*) ∈ A(x*) and the monotonicity of A leads to

⟨ x(t) − x* + (1/λ(t))(ẍ(t) + γ(t)ẋ(t)), −B(x(t)) + B(x*) − (1/(ηλ(t)))(ẍ(t) + γ(t)ẋ(t)) ⟩ ≥ 0. (28)

The cocoercivity of B yields, for every t ∈ [0,+∞),

β‖B(x(t)) − B(x*)‖² ≤ ⟨x(t) − x*, B(x(t)) − B(x*)⟩
≤ −(1/λ(t))⟨ẍ(t) + γ(t)ẋ(t), B(x(t)) − B(x*)⟩ − (1/(ηλ²(t)))‖ẍ(t) + γ(t)ẋ(t)‖² − (1/(ηλ(t)))⟨x(t) − x*, ẍ(t) + γ(t)ẋ(t)⟩,

where the second inequality follows from (28). By the inequality ab ≤ a²/(2β) + (β/2)b² we have

−(1/λ(t))⟨ẍ(t) + γ(t)ẋ(t), B(x(t)) − B(x*)⟩ ≤ (1/(2βλ²(t)))‖ẍ(t) + γ(t)ẋ(t)‖² + (β/2)‖B(x(t)) − B(x*)‖²,

hence

(β/2)‖B(x(t)) − B(x*)‖² ≤ ((η − 2β)/(2βηλ²(t)))‖ẍ(t) + γ(t)ẋ(t)‖² − (1/(ηλ(t)))⟨x(t) − x*, ẍ(t) + γ(t)ẋ(t)⟩
≤ −(1/(ηλ(t)))⟨x(t) − x*, ẍ(t) + γ(t)ẋ(t)⟩,

the last inequality being a consequence of η < 2β. For evaluating this term we use the function h : [0,+∞) → R, h(t) = (1/2)‖x(t) − x*‖², already employed in the proof of Theorem 6. From

⟨x(t) − x*, ẍ(t) + γ(t)ẋ(t)⟩ = ḧ(t) + γ(t)ḣ(t) − ‖ẋ(t)‖² (29)

we obtain for every t ∈ [0,+∞)

(βλ(t)/2)‖B(x(t)) − B(x*)‖² ≤ −(1/η)(ḧ(t) + γ(t)ḣ(t)) + (1/η)‖ẋ(t)‖².

Taking into account also the relation γ(t)ḣ(t) ≥ (d/dt)(γ(t)h(t)) and the bounds for λ, we get for almost every t ∈ [0,+∞)

(βλ_min/2)‖B(x(t)) − B(x*)‖² + (1/η)ḧ(t) + (1/η)(d/dt)(γ(t)h(t)) ≤ (1/η)‖ẋ(t)‖².

After integration we obtain for every T ∈ [0,+∞)

(βλ_min/2) ∫₀ᵀ ‖B(x(t)) − B(x*)‖² dt + (1/η)(ḣ(T) − ḣ(0)) + (1/η)(γ(T)h(T) − γ(0)h(0)) ≤ (1/η) ∫₀ᵀ ‖ẋ(t)‖² dt.

Since ẋ ∈ L²([0,+∞); H), h(T) ≥ 0 and γ(T) > 0 for every T ∈ [0,+∞) and lim_{T→+∞} ḣ(T) = 0, it follows that B(x(·)) − B(x*) ∈ L²([0,+∞); H). Further, by taking into consideration Remark 1(b), for almost every t ∈ [0,+∞) we have

(d/dt)((1/2)‖B(x(t)) − B(x*)‖²) = ⟨B(x(t)) − B(x*), (d/dt)B(x(t))⟩ ≤ (1/2)‖B(x(t)) − B(x*)‖² + (1/(2β²))‖ẋ(t)‖²,

and from here, in the light of Lemma 4, it follows that lim_{t→+∞} B(x(t)) = B(x*).

(v) Let x* be the unique element of Zer(A + B). For the beginning we suppose that A is uniformly monotone with corresponding function φ_A : [0,+∞) → [0,+∞], which is increasing and vanishes only at 0. By similar arguments as in the proof of statement (iv), for every t ∈ [0,+∞) we have

φ_A(‖x(t) − x* + (1/λ(t))(ẍ(t) + γ(t)ẋ(t))‖) ≤ ⟨ x(t) − x* + (1/λ(t))(ẍ(t) + γ(t)ẋ(t)), −B(x(t)) + B(x*) − (1/(ηλ(t)))(ẍ(t) + γ(t)ẋ(t)) ⟩,

which combined with the monotonicity of B (namely ⟨x(t) − x*, −B(x(t)) + B(x*)⟩ ≤ 0) yields

φ_A(‖x(t) − x* + (1/λ(t))(ẍ(t) + γ(t)ẋ(t))‖)
≤ (1/λ(t))⟨ẍ(t) + γ(t)ẋ(t), −B(x(t)) + B(x*)⟩ − (1/(ηλ²(t)))‖ẍ(t) + γ(t)ẋ(t)‖² − (1/(ηλ(t)))⟨x(t) − x*, ẍ(t) + γ(t)ẋ(t)⟩.

As λ and γ are bounded by positive constants, by using (i)-(iv) it follows that the right-hand side of the last inequality converges to 0 as t → +∞. Hence

lim_{t→+∞} φ_A(‖x(t) − x* + (1/λ(t))(ẍ(t) + γ(t)ẋ(t))‖) = 0,

and the properties of the function φ_A allow us to conclude that x(t) − x* + (1/λ(t))(ẍ(t) + γ(t)ẋ(t)) converges strongly to 0 as t → +∞. By using again the boundedness of λ and γ and statement (ii), we obtain that x(t) converges strongly to x* as t → +∞.

Finally, suppose that B is uniformly monotone with corresponding function φ_B : [0,+∞) → [0,+∞], which is increasing and vanishes only at 0. The conclusion follows by letting t → +∞ in the inequality

⟨x(t) − x*, B(x(t)) − B(x*)⟩ ≥ φ_B(‖x(t) − x*‖), t ∈ [0,+∞),

and by using that x is bounded and lim_{t→+∞} ‖B(x(t)) − B(x*)‖ = 0. □

Remark 11 We would like to emphasize that the statements in Theorem 10 remain valid also for η := 2β. Indeed, in this case the cocoercivity of B implies that Id − ηB is nonexpansive, hence the operator R = J_{ηA} ∘ (Id − ηB) used in the proof is nonexpansive, too, and the statements (i)-(iii) follow from Corollary 7. Furthermore, the proofs of the statements (iv) and (v) can be repeated for η = 2β as well.

In the remainder of this section we turn our attention to optimization problems of the form

min_{x∈H} f(x) + g(x),

where f : H → R ∪ {+∞} is a proper, convex and lower semicontinuous function and g : H → R is a convex and Fréchet differentiable function with (1/β)-Lipschitz continuous gradient for β > 0. We recall some standard notations and facts from convex analysis. For a proper, convex and lower semicontinuous function f : H → R ∪ {+∞}, its convex subdifferential at x ∈ H is defined as

∂f(x) = {u ∈ H : f(y) ≥ f(x) + ⟨u, y − x⟩ for all y ∈ H}.
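For a concrete instance of the resolvent machinery above, the subdifferential of the illustrative function f = μ|·| on H = R has an explicit resolvent, the soft-thresholding map; a small sketch verifying the characterization (23) (the function f and the parameter values are our worked example, not from the paper):

```python
# Resolvent J_{eta A} = (Id + eta A)^{-1} for A = subdifferential of f(x) = mu*|x|
# on R: J_{eta A}(x) = argmin_p mu*|p| + (1/(2*eta))*(p - x)**2 (soft-thresholding).

def resolvent_abs(x, eta, mu=1.0):
    t = eta * mu
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

# Characterization (23): p = J_{eta A}(x) iff (x - p)/eta belongs to A(p);
# here A(p) = {mu*sign(p)} for p != 0 and [-mu, mu] for p = 0.
p = resolvent_abs(3.0, 1.0)   # (x - p)/eta = 1.0 = mu*sign(p), as required
q = resolvent_abs(0.4, 1.0)   # q = 0.0 and (x - q)/eta = 0.4 lies in [-1, 1]
```

This is exactly the single-valued, monotone behavior of resolvents of maximally monotone operators used throughout the section.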
When seen as a set-valued mapping, the convex subdifferential is a maximally monotone operator (see [34]) and its resolvent is given by J_{η∂f} = prox_{ηf} (see [8]), where prox_{ηf} : H → H,

prox_{ηf}(x) = argmin_{y∈H} { f(y) + (1/(2η))‖y − x‖² }, (30)

denotes the proximal point operator of f and η > 0. According to [8, Definition 10.5], f is said to be uniformly convex with modulus function φ : [0,+∞) → [0,+∞] if φ is increasing, vanishes only at 0 and fulfills

f(αx + (1 − α)y) + α(1 − α)φ(‖x − y‖) ≤ αf(x) + (1 − α)f(y) for

all α ∈ (0, 1) and x, y ∈ dom f := {x ∈ H : f(x) < +∞}. Notice that if this inequality holds for φ = (ν/2)(·)² with ν > 0, then f is said to be ν-strongly convex.

In the following statement we approach the minimizers of f + g via the second order dynamical system

ẍ(t) + γ(t)ẋ(t) + λ(t)[x(t) − prox_{ηf}(x(t) − η∇g(x(t)))] = 0, x(0) = u₀, ẋ(0) = v₀. (31)

Corollary 12 Let f : H → R ∪ {+∞} be a proper, convex and lower semicontinuous function and g : H → R be a convex and Fréchet differentiable function with (1/β)-Lipschitz continuous gradient for β > 0 such that argmin_{x∈H}{f(x) + g(x)} ≠ ∅. Let η ∈ (0, 2β] and set δ := (4β − η)/(2β). Let λ, γ : [0,+∞) → (0,+∞) be functions fulfilling (A4), u₀, v₀ ∈ H and x : [0,+∞) → H be the unique strong global solution of (31). Then the following statements are true:

(i) the trajectory x is bounded and ẋ, ẍ, (Id − prox_{ηf} ∘ (Id − η∇g))(x(·)) ∈ L²([0,+∞); H);

(ii) lim_{t→+∞} ẋ(t) = lim_{t→+∞} ẍ(t) = lim_{t→+∞} (Id − prox_{ηf} ∘ (Id − η∇g))(x(t)) = 0;

(iii) x(t) converges weakly to a minimizer of f + g as t → +∞;

(iv) if x* is a minimizer of f + g, then ∇g(x(·)) − ∇g(x*) ∈ L²([0,+∞); H), lim_{t→+∞} ∇g(x(t)) = ∇g(x*) and ∇g is constant on argmin_{x∈H}{f(x) + g(x)};

(v) if f or g is uniformly convex, then x(t) converges strongly to the unique minimizer of f + g as t → +∞.

Proof. The statements are direct consequences of the corresponding ones of Theorem 10 (see also Remark 11), by choosing A := ∂f and B := ∇g and by taking into account that Zer(∂f + ∇g) = argmin_{x∈H}{f(x) + g(x)}. For statement (v) we also use the fact that if f is uniformly convex with modulus φ, then ∂f is uniformly monotone with modulus 2φ (see [8, Example 22.3(iii)]). □

Remark 13 Consider again the setting in Remark 8, namely γ(t) ≡ γ > 0 and λ(t) ≡ 1 for every t ∈ [0,+∞). Furthermore, for C a nonempty, convex, closed subset of H, let f = δ_C be the indicator function of C, which is equal to 0 for x ∈ C and to +∞ otherwise. The dynamical system (31) attached in this setting to the minimization of g over C becomes

ẍ(t) + γẋ(t) + x(t) − P_C(x(t) − η∇g(x(t))) = 0, x(0) = u₀, ẋ(0) = v₀, (32)

where P_C denotes the projection onto the set C.
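A quick numerical illustration of (32): with the smooth function g(x) = (1/2)‖x − c‖² for c = (2, −0.5), the box C = [0, 1]², and the parameter choices η = 1 = β, γ = 2 (all our assumptions), the trajectory should approach the minimizer of g over C, which is P_C(c) = (1, 0).

```python
# Explicit Euler discretization of (32): x'' + gamma*x' + x - P_C(x - eta*grad g(x)) = 0.

def proj_box(p):                      # P_C for the box C = [0, 1]^2
    return [min(1.0, max(0.0, pi)) for pi in p]

def grad_g(p, c=(2.0, -0.5)):         # g(x) = 0.5*||x - c||^2, so grad g(x) = x - c
    return [p[i] - c[i] for i in range(2)]

def simulate(p0, gamma=2.0, eta=1.0, h=1e-3, t_end=30.0):
    p, v = list(p0), [0.0, 0.0]
    for _ in range(int(t_end / h)):
        gr = grad_g(p)
        target = proj_box([p[i] - eta * gr[i] for i in range(2)])
        a = [-gamma * v[i] - (p[i] - target[i]) for i in range(2)]
        p = [p[i] + h * v[i] for i in range(2)]
        v = [v[i] + h * a[i] for i in range(2)]
    return p

p_end = simulate([0.5, 0.5])          # approaches P_C(c) = (1, 0)
```

Here γ² = 4 > 2 and η = 2β would also be admissible, consistent with the parameter discussion around Remark 13.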
The asymptotic convergence of the trajectories of (32) has been studied in [9, Theorem 3.1] under the conditions γ² > 2 and 0 < η ≤ 2β. In this case assumption (A4) trivially holds by choosing θ such that 0 < θ ≤ γ² − 2 ≤ γ² − 2/δ. Thus, in order to verify (A4) in the case λ(t) ≡ 1 for every t ∈ [0,+∞), one needs to assume only that γ² > 2/δ. Since δ ≥ 1, this provides a slight improvement over [9, Theorem 3.1] in what concerns the choice of γ. We refer the reader also to [8] for an analysis of the convergence rates of the trajectories of the dynamical system (32) when g is endowed with supplementary properties.

For the two main convergence statements provided in this section it was essential to choose the step size η in the interval (0, 2β] (see Theorem 10, Remark 11 and Corollary 12). This is because in this way we were able to guarantee for the generated trajectories the existence of the limit lim_{t→+∞} ‖x(t) − x*‖², where x* denotes a solution of the problem under investigation. It is interesting to observe that, when dealing with convex optimization problems, one can also go beyond this classical restriction on the choice of the step size (a similar phenomenon has been reported also in [1, Section 5.2]). This is pointed out in the following result, which is valid under the assumption:

(A5) λ, γ : [0,+∞) → (0,+∞) are locally absolutely continuous and there exists θ > 0 such that for almost every t ∈ [0,+∞) we have

γ̇(t) ≤ 0 ≤ λ̇(t) and γ²(t)/λ(t) ≥ 1 + ηθ + η/β, (33)

and for the proof of which we use, instead of ‖x(·) − x*‖², a modified energy functional.

Corollary 14 Let f : H → R ∪ {+∞} be a proper, convex and lower semicontinuous function and g : H → R be a convex and Fréchet differentiable function with (1/β)-Lipschitz continuous gradient for β > 0 such that argmin_{x∈H}{f(x) + g(x)} ≠ ∅. Let η > 0, let λ, γ : [0,+∞) → (0,+∞) be functions fulfilling (A5), u₀, v₀ ∈ H and x : [0,+∞) → H be the unique strong global solution of (31). Then the following statements are true:

(i) the trajectory x is bounded and ẋ, ẍ, (Id − prox_{ηf} ∘ (Id − η∇g))(x(·)) ∈ L²([0,+∞); H);

(ii) lim_{t→+∞} ẋ(t) = lim_{t→+∞} ẍ(t) = lim_{t→+∞} (Id − prox_{ηf} ∘ (Id − η∇g))(x(t)) = 0;

(iii) x(t) converges weakly to a minimizer of f + g as t → +∞;

(iv) if x* is a minimizer of f + g, then ∇g(x(·)) − ∇g(x*) ∈ L²([0,+∞); H), lim_{t→+∞} ∇g(x(t)) = ∇g(x*) and ∇g is constant on argmin_{x∈H}{f(x) + g(x)};

(v) if f or g is uniformly convex, then x(t) converges strongly to the unique minimizer of f + g as t → +∞.

Proof. Consider an arbitrary element x* ∈ argmin_{x∈H}{f(x) + g(x)} = Zer(∂f + ∇g). Similarly to the proof of Theorem 10(iv) (see the first inequality after (28)), we derive for every t ∈ [0,+∞)

β‖∇g(x(t)) − ∇g(x*)‖² ≤ −(1/λ(t))⟨ẍ(t), ∇g(x(t)) − ∇g(x*)⟩ − (γ(t)/λ(t))⟨ẋ(t), ∇g(x(t)) − ∇g(x*)⟩ − (1/(ηλ²(t)))‖ẍ(t) + γ(t)ẋ(t)‖² − (1/(ηλ(t)))⟨x(t) − x*, ẍ(t) + γ(t)ẋ(t)⟩. (34)

In what follows we evaluate the right-hand side of the above inequality and introduce to this end the function q : [0,+∞) → R,

q(t) = g(x(t)) − g(x*) − ⟨∇g(x*), x(t) − x*⟩.

Due to the convexity of g one has q(t) ≥ 0 for every t ∈ [0,+∞). Further, for every t ∈ [0,+∞),

q̇(t) = ⟨ẋ(t), ∇g(x(t)) − ∇g(x*)⟩,

thus

γ(t)⟨ẋ(t), ∇g(x(t)) − ∇g(x*)⟩ = γ(t)q̇(t) = (d/dt)(γ(t)q(t)) − γ̇(t)q(t) ≥ (d/dt)(γ(t)q(t)). (35)

On the other hand, for almost every t ∈ [0,+∞),

q̈(t) = ⟨ẍ(t), ∇g(x(t)) − ∇g(x*)⟩ + ⟨ẋ(t), (d/dt)∇g(x(t))⟩,

hence

⟨ẍ(t), ∇g(x(t)) − ∇g(x*)⟩ ≥ q̈(t) − (1/β)‖ẋ(t)‖². (36)

We also have, for almost every t ∈ [0,+∞),

(1/λ(t))‖ẍ(t) + γ(t)ẋ(t)‖² = (1/λ(t))‖ẍ(t)‖² + (γ²(t)/λ(t))‖ẋ(t)‖² + (γ(t)/λ(t))(d/dt)‖ẋ(t)‖²

and

(γ(t)/λ(t))(d/dt)‖ẋ(t)‖² = (d/dt)[(γ(t)/λ(t))‖ẋ(t)‖²] − ((γ̇(t)λ(t) − γ(t)λ̇(t))/λ²(t))‖ẋ(t)‖². (37)

Finally, by multiplying (34) with λ(t) and by using (35), (36), (37) and (29), we obtain, after rearranging the terms, for almost every t ∈ [0,+∞)

βλ(t)‖∇g(x(t)) − ∇g(x*)‖² + (d²/dt²)[((1/η)h + q)(t)] + (d/dt)[γ(t)((1/η)h + q)(t)] + (1/η)(d/dt)[(γ(t)/λ(t))‖ẋ(t)‖²]
+ (1/η)(γ²(t)/λ(t) − 1 − η/β − (γ̇(t)λ(t) − γ(t)λ̇(t))/λ²(t))‖ẋ(t)‖² + (1/(ηλ(t)))‖ẍ(t)‖² ≤ 0.

This relation gives rise, via (A5), to

βλ(t)‖∇g(x(t)) − ∇g(x*)‖² + (d²/dt²)[((1/η)h + q)(t)] + (d/dt)[γ(t)((1/η)h + q)(t)] + (1/η)(d/dt)[(γ(t)/λ(t))‖ẋ(t)‖²] + θ‖ẋ(t)‖² + (1/(ηλ_max))‖ẍ(t)‖² ≤ 0 (38)

for almost every t ∈ [0,+∞). This implies that the function

t ↦ (d/dt)[((1/η)h + q)(t)] + γ(t)((1/η)h + q)(t) + (1/η)(γ(t)/λ(t))‖ẋ(t)‖² (39)

is monotonically decreasing. Arguing as in the proof of Theorem 6, by taking into account that λ and γ have positive upper and lower bounds, it follows that (1/η)h + q, h, q, x, ẋ, ḣ, q̇ are bounded and ẋ, ẍ, (Id − prox_{ηf} ∘ (Id − η∇g))(x(·)) ∈ L²([0,+∞); H). Furthermore, lim_{t→+∞} ẋ(t) = 0. Since (d/dt)[(Id − prox_{ηf} ∘ (Id − η∇g))(x(·))] ∈ L²([0,+∞); H) as well (see Remark 1(b)), we derive from Lemma 4 that lim_{t→+∞} (Id − prox_{ηf} ∘ (Id − η∇g))(x(t)) = 0. As

ẍ(t) = −γ(t)ẋ(t) − λ(t)(Id − prox_{ηf} ∘ (Id − η∇g))(x(t)) for every t ∈ [0,+∞),

we obtain that lim_{t→+∞} ẍ(t) = 0. From (38) it also follows that ∇g(x(·)) − ∇g(x*) ∈ L²([0,+∞); H) and, by applying again Lemma 4, lim_{t→+∞} ∇g(x(t)) = ∇g(x*). In this way the statements (i), (ii) and (iv) are shown.

(iii) Since the function in (39) is monotonically decreasing, from (i), (ii) and (iv) it follows that the limit lim_{t→+∞} γ(t)((1/η)h + q)(t) exists and is a real number. From lim_{t→+∞} γ(t) ∈ (0,+∞) we get that lim_{t→+∞} ((1/η)h + q)(t) ∈ R. Furthermore, since x* has been chosen as an arbitrary minimizer of f + g, we conclude that for all x* ∈ argmin_{x∈H}{f(x) + g(x)} the limit lim_{t→+∞} E(t, x*) ∈ R exists, where

E(t, x*) = (1/(2η))‖x(t) − x*‖² + g(x(t)) − g(x*) − ⟨∇g(x*), x(t) − x*⟩.

In what follows we use a similar technique as in [9] (see also [1, Section 5.2]). Since x is bounded, it has at least one weak sequential cluster point.

We prove first that each weak sequential cluster point of x is a minimizer of f + g. Let x̄ be such a cluster point, let x* ∈ argmin_{x∈H} {f(x) + g(x)} and let t_n → +∞ as n → +∞ be such that (x(t_n))_{n∈N} converges weakly to x̄. Since (x(t_n), ∇g(x(t_n))) ∈ Gr ∇g, lim_{n→+∞} ∇g(x(t_n)) = ∇g(x*) and Gr ∇g is sequentially closed in the weak-strong topology, we obtain ∇g(x̄) = ∇g(x*). From (27), written for t = t_n, A = ∂f and B = ∇g, by letting n converge to +∞ and by using that Gr ∂f is sequentially closed in the weak-strong topology, we obtain −∇g(x*) ∈ ∂f(x̄). This, combined with ∇g(x̄) = ∇g(x*), delivers −∇g(x̄) ∈ ∂f(x̄), hence x̄ ∈ Zer(∂f + ∇g) = argmin_{x∈H} {f(x) + g(x)}.

Next we show that x has at most one weak sequential cluster point, a fact which guarantees that it has exactly one weak sequential cluster point. This implies the weak convergence of the trajectory to a minimizer of f + g. Let x̄₁, x̄₂ be two weak sequential cluster points of x. This means that there exist t_n → +∞ as n → +∞ and t′_n → +∞ as n → +∞ such that (x(t_n))_{n∈N} converges weakly to x̄₁ and (x(t′_n))_{n∈N} converges weakly to x̄₂. Since x̄₁, x̄₂ ∈ argmin_{x∈H} {f(x) + g(x)}, we have lim_{t→+∞} E(t, x̄₁) ∈ R and lim_{t→+∞} E(t, x̄₂) ∈ R, hence lim_{t→+∞} [E(t, x̄₁) − E(t, x̄₂)] ∈ R. We obtain

lim_{t→+∞} [(1/η)⟨x(t), x̄₂ − x̄₁⟩ + ⟨∇g(x̄₂) − ∇g(x̄₁), x(t)⟩] ∈ R,

which, when expressed by means of the sequences (t_n)_{n∈N} and (t′_n)_{n∈N}, leads to

(1/η)⟨x̄₁, x̄₂ − x̄₁⟩ + ⟨∇g(x̄₂) − ∇g(x̄₁), x̄₁⟩ = (1/η)⟨x̄₂, x̄₂ − x̄₁⟩ + ⟨∇g(x̄₂) − ∇g(x̄₁), x̄₂⟩.

This is the same as

(1/η)‖x̄₁ − x̄₂‖² + ⟨∇g(x̄₂) − ∇g(x̄₁), x̄₂ − x̄₁⟩ = 0,

and by the monotonicity of ∇g we conclude that x̄₁ = x̄₂.

(v) The proof of this statement follows in analogy to the one of the corresponding statement of Theorem 6(v), written for A = ∂f and B = ∇g. □

Remark 5. When γ(t) = γ > 0 for every t ∈ [0, +∞) and λ(t) = 1 for every t ∈ [0, +∞), the second inequality in (33) is verified if and only if γ² > η/β + 1. In other words, (A5) allows in this particular setting a more relaxed choice of the parameters γ, η and β, beyond the standard assumptions 0 < η ≤ 2β and γ² > 2 considered in [9].
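The forward-backward dynamics treated in Corollary 4 is easy to experiment with numerically. The following is a minimal sketch (not taken from the paper): it assumes constant damping and relaxation, γ(t) ≡ 3 and λ(t) ≡ 1, chooses f = ‖·‖₁ (whose proximal operator is componentwise soft-thresholding) and g(x) = ½‖x − b‖², and integrates the second order system with an explicit Euler scheme; all function names and parameter values below are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def soft_threshold(z, tau):
    # proximal operator of tau*||.||_1 (componentwise soft-thresholding)
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def integrate(u0, v0, b, eta=0.5, gam=3.0, lam=1.0, h=1e-2, T=60.0):
    # Explicit Euler on the first-order reformulation (x, v) of
    #   x'' + gam*x' + lam*(x - prox_{eta f}(x - eta*grad g(x))) = 0,
    # with f = ||.||_1 and g(x) = 0.5*||x - b||^2, so grad g(x) = x - b.
    x, v = u0.astype(float), v0.astype(float)
    for _ in range(int(T / h)):
        # forward-backward residual (Id - prox_{eta f}(Id - eta*grad g))(x)
        fb = x - soft_threshold(x - eta * (x - b), eta)
        x, v = x + h * v, v + h * (-gam * v - lam * fb)
    return x

b = np.array([2.0, -0.3])
x_inf = integrate(np.zeros(2), np.zeros(2), b)
# For this g the unique minimizer of f + g is prox_f(b) = soft_threshold(b, 1),
# since x minimizes ||x||_1 + 0.5*||x - b||^2 exactly when x = prox_f(b).
print(x_inf, soft_threshold(b, 1.0))
```

With these (heavily damped) parameter choices the trajectory settles at the forward-backward fixed point, so the output can be checked against soft_threshold(b, 1.0); smaller step sizes h trade runtime for integration accuracy.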
Remark 6. The explicit discretization of (3) with respect to the time variable t, with step size h_n > 0, relaxation variable λ_n > 0, damping variable γ_n > 0 and initial points x₀ := u₀ and x₁ := v₀, yields the following iterative scheme:

(x_{n+1} − 2x_n + x_{n−1})/h_n² + γ_n (x_{n+1} − x_n)/h_n + λ_n [x_n − prox_{ηf}(x_n − η∇g(x_n))] = 0 for all n ≥ 1.

For h_n = 1 this becomes

x_{n+1} = (λ_n/(1 + γ_n)) prox_{ηf}(x_n − η∇g(x_n)) + ((1 + γ_n − λ_n)/(1 + γ_n)) x_n + (1/(1 + γ_n))(x_n − x_{n−1}), (40)

which is a relaxed forward-backward algorithm for minimizing f + g with inertial effects. In order to investigate the convergence properties of the above iterative scheme, it is natural to assume, according to (A4) or (A5), that (γ_n)_{n∈N} is nonincreasing, (λ_n)_{n∈N} is nondecreasing, and

there exists a lower bound for (γ_n²/λ_n)_{n∈N}. The control of the inertial term by means of the variable parameters λ_n and γ_n could increase the speed of convergence of the algorithm (40). Making use of the sequence (λ_n)_{n∈N} in (40), one obtains a relaxed forward-backward scheme; relaxation is usually considered in the literature in order to achieve more freedom in the choice of the parameters involved in the numerical scheme. We leave the investigation of the convergence properties of (40) as an open question. For more on inertial-type forward-backward algorithms we refer the reader to [30].

In the following we provide a rate for the convergence of a convex and Fréchet differentiable function g : H → R with Lipschitz continuous gradient to its minimum value along the ergodic trajectory generated by

ẍ(t) + γ(t)ẋ(t) + λ(t)∇g(x(t)) = 0, x(0) = u₀, ẋ(0) = v₀. (41)

To this end we make the following assumption:

(A6) λ : [0, +∞) → (0, +∞) is locally absolutely continuous, γ : [0, +∞) → (0, +∞) is twice differentiable and there exists ζ > 0 such that for almost every t ∈ [0, +∞) we have

0 < ζ ≤ γ(t)λ(t) − λ̇(t), γ̇(t) ≤ 0 and 2γ(t)γ̇(t) ≤ γ̈(t).

Let us mention that the following result is in the spirit of a convergence rate statement recently given in [28, Theorem ] for the objective function values on a sequence iteratively generated by an inertial gradient-type algorithm.

Theorem 7. Let g : H → R be a convex and Fréchet differentiable function with (1/β)-Lipschitz continuous gradient for β > 0 such that argmin_{x∈H} g(x) ≠ ∅. Let λ, γ : [0, +∞) → (0, +∞) be functions fulfilling (A6), u₀, v₀ ∈ H and x : [0, +∞) → H be the unique strong global solution of (41). Then for every minimizer x* of g and every T > 0 it holds

g((1/T)∫₀ᵀ x(t) dt) − g(x*) ≤ (1/(2ζT)) [‖v₀ + γ(0)(u₀ − x*)‖² + (λ(0)/β − γ̇(0))‖u₀ − x*‖²].

Proof. By using (41), the convexity of g and (A6), we get for almost every t ∈ [0, +∞)

(d/dt)[(1/2)‖ẋ(t) + γ(t)(x(t) − x*)‖² + λ(t)g(x(t)) − (γ̇(t)/2)‖x(t) − x*‖²]
= ⟨ẍ(t) + γ̇(t)(x(t) − x*) + γ(t)ẋ(t), ẋ(t) + γ(t)(x(t) − x*)⟩ + λ̇(t)g(x(t)) + λ(t)⟨ẋ(t), ∇g(x(t))⟩ − (γ̈(t)/2)‖x(t) − x*‖² − γ̇(t)⟨ẋ(t), x(t) − x*⟩
= −λ(t)γ(t)⟨∇g(x(t)), x(t) − x*⟩ + λ̇(t)g(x(t)) + (γ(t)γ̇(t) − γ̈(t)/2)‖x(t) − x*‖²
≤ (λ̇(t) − λ(t)γ(t))(g(x(t)) − g(x*)) + λ̇(t)g(x*)
≤ −ζ(g(x(t)) − g(x*)) + λ̇(t)g(x*).

We obtain after integration

(1/2)‖ẋ(T) + γ(T)(x(T) − x*)‖² + λ(T)g(x(T)) − (γ̇(T)/2)‖x(T) − x*‖² − (1/2)‖v₀ + γ(0)(u₀ − x*)‖² − λ(0)g(u₀) + (γ̇(0)/2)‖u₀ − x*‖² + ζ ∫₀ᵀ (g(x(t)) − g(x*)) dt ≤ (λ(T) − λ(0)) g(x*).

By neglecting the nonnegative terms in the left-hand side of the inequality above and by using that g(x(T)) ≥ g(x*), it yields

ζ ∫₀ᵀ (g(x(t)) − g(x*)) dt ≤ (1/2)‖v₀ + γ(0)(u₀ − x*)‖² − (γ̇(0)/2)‖u₀ − x*‖² + λ(0)(g(u₀) − g(x*)).

The conclusion follows by using g(u₀) − g(x*) ≤ (1/(2β))‖u₀ − x*‖², which is a consequence of the descent lemma (see [31, Lemma 1.2.3] and notice that ∇g(x*) = 0), and the inequality

g((1/T)∫₀ᵀ x(t) dt) − g(x*) ≤ (1/T) ∫₀ᵀ (g(x(t)) − g(x*)) dt,

which holds since g is convex. □

Remark 8. Under assumption (A6) we obtain in the above theorem only the convergence of the function g along the ergodic trajectory to its global minimum value. If one is interested also in the weak convergence of the trajectory to a minimizer of g, this follows via Theorem 6 when λ, γ are assumed to fulfill the assumptions required there (notice that if x converges weakly to a minimizer of g, then from the Cesàro–Stolz theorem one also obtains the weak convergence of the ergodic trajectory (1/T)∫₀ᵀ x(t) dt to the same minimizer). For a, a₁, ρ, ρ₁ ≥ 0 and b, b₁ > 0 fulfilling the inequalities b₁² > (a + b)β and ρ ≤ b, one can prove that the functions λ(t) = a e^{−ρt} + b and γ(t) = a₁ e^{−ρ₁t} + b₁ verify the assumptions of Theorem 6 for 0 < θ ≤ b₁²/(a + b) − β and assumption (A6) in Theorem 7 for 0 < ζ ≤ b b₁. Hence, for this choice of the relaxation and damping functions, both convergence of the objective function g along the ergodic trajectory to its global minimum value and weak convergence of the trajectory to a minimizer of g are guaranteed.

Acknowledgements. The authors are thankful to the handling editor and three anonymous reviewers for comments and remarks which substantially improved the quality of the paper.

References

[1] B. Abbas, H. Attouch, Dynamical systems and forward-backward algorithms associated with the sum of a convex subdifferential and a monotone cocoercive operator, Optimization 64, 2015

[2] B. Abbas, H. Attouch, B.F. Svaiter, Newton-like dynamics and forward-backward methods for structured monotone inclusions in Hilbert spaces, Journal of Optimization Theory and Applications 161(2), 331–360, 2014
[3] S. Adly, H. Attouch, A. Cabot, Finite time stabilization of nonlinear oscillators subject to dry friction, in: P. Alart, O. Maisonneuve, R.T. Rockafellar (eds.), Nonsmooth Mechanics and Analysis, Advances in Mechanics and Mathematics 12, 2006
[4] F. Alvarez, On the minimizing property of a second order dissipative system in Hilbert spaces, SIAM Journal on Control and Optimization 38(4), 1102–1119, 2000
[5] F. Alvarez, Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space, SIAM Journal on Optimization 14(3), 2004
[6] F. Alvarez, H. Attouch, An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set-Valued Analysis 9(1-2), 3–11, 2001
[7] F. Alvarez, H. Attouch, J. Bolte, P. Redont, A second-order gradient-like dissipative dynamical system with Hessian-driven damping. Application to optimization and mechanics, Journal de Mathématiques Pures et Appliquées (9) 81(8), 2002
[8] A.S. Antipin, Minimization of convex functions on convex sets by means of differential equations (Russian), Differentsial'nye Uravneniya 30(9), 1994; translation in Differential Equations 30(9), 1994
[9] H. Attouch, F. Alvarez, The heavy ball with friction dynamical system for convex constrained minimization problems, in: Optimization (Namur, 1998), Lecture Notes in Economics and Mathematical Systems 481, Springer, Berlin, 25–35, 2000
[10] H. Attouch, Z. Chbani, Fast inertial dynamics and FISTA algorithms in convex optimization. Perturbation aspects, arXiv:1507.01367, 2015
[11] H. Attouch, M.-O. Czarnecki, Asymptotic behavior of coupled dynamical systems with multiscale aspects, Journal of Differential Equations 248(6), 2010
[12] H. Attouch, M.-O. Czarnecki, Asymptotic behavior of gradient-like dynamical systems involving inertia and multiscale aspects, arXiv:1602.00232, 2016
[13] H. Attouch, P.-E. Maingé, Asymptotic behavior of second-order dissipative evolution equations combining potential with non-potential effects, ESAIM: Control, Optimisation and Calculus of Variations 17(3), 2011
[14] H. Attouch, M. Marques Alves, B.F. Svaiter, A dynamic approach to a proximal-Newton method for monotone inclusions in Hilbert spaces, with complexity O(1/n²), to appear in Journal of Convex Analysis
[15] H. Attouch, X. Goudou, P. Redont, The heavy ball with friction method. I. The continuous dynamical system: global exploration of the local minima of a real-valued function by asymptotic analysis of a dissipative dynamical system, Communications in Contemporary Mathematics 2(1), 1–34, 2000
[16] H. Attouch, J. Peypouquet, P. Redont, Fast convergence of an inertial gradient-like system with vanishing viscosity, arXiv preprint, 2015

[17] H. Attouch, B.F. Svaiter, A continuous dynamical Newton-like approach to solving monotone inclusions, SIAM Journal on Control and Optimization 49(2), 574–598, 2011
[18] H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, CMS Books in Mathematics, Springer, New York, 2011
[19] J. Bolte, Continuous gradient projection method in Hilbert spaces, Journal of Optimization Theory and Applications 119(2), 235–259, 2003
[20] J.M. Borwein, J.D. Vanderwerff, Convex Functions: Constructions, Characterizations and Counterexamples, Cambridge University Press, Cambridge, 2010
[21] R.I. Boţ, E.R. Csetnek, A dynamical system associated with the fixed points set of a nonexpansive operator, Journal of Dynamics and Differential Equations, DOI: 10.1007/s10884-015-9438-x, 2015
[22] R.I. Boţ, E.R. Csetnek, Approaching the solving of constrained variational inequalities via penalty term-based dynamical systems, Journal of Mathematical Analysis and Applications 435(2), 1688–1700, 2016
[23] R.I. Boţ, E.R. Csetnek, C. Hendrich, Inertial Douglas-Rachford splitting for monotone inclusion problems, Applied Mathematics and Computation 256, 472–487, 2015
[24] R.I. Boţ, E.R. Csetnek, Second order dynamical systems associated to variational inequalities, to appear in Applicable Analysis, arXiv:1512.04702v3, 2016
[25] A. Cabot, Stabilization of oscillators subject to dry friction: finite time convergence versus exponential decay results, Transactions of the American Mathematical Society 360, 103–121, 2008
[26] A. Cabot, H. Engler, S. Gadat, On the long time behavior of second order differential equations with asymptotically small dissipation, Transactions of the American Mathematical Society 361, 2009
[27] A. Cabot, H. Engler, S. Gadat, Second-order differential equations with asymptotically small dissipation and piecewise flat potentials, Proceedings of the Seventh Mississippi State-UAB Conference on Differential Equations and Computational Simulations, 33–38, Electronic Journal of Differential Equations Conference 17, 2009
[28] E. Ghadimi, H.R. Feyzmahdavian, M. Johansson, Global convergence of the heavy-ball method for convex optimization, arXiv:1412.7457, 2014
[29] A. Haraux, Systèmes Dynamiques Dissipatifs et Applications, Recherches en Mathématiques Appliquées 17, Masson, Paris, 1991
[30] A. Moudafi, M. Oliny, Convergence of a splitting inertial proximal method for monotone operators, Journal of Computational and Applied Mathematics 155, 447–454, 2003
[31] Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, Kluwer Academic Publishers, Dordrecht, 2004
[32] N. Ogura, I. Yamada, Non-strictly convex minimization over the fixed point set of an asymptotically shrinking nonexpansive mapping, Numerical Functional Analysis and Optimization 23(1-2), 113–137, 2002

[33] J.-C. Pesquet, N. Pustelnik, A parallel inertial proximal optimization method, Pacific Journal of Optimization 8(2), 2012
[34] R.T. Rockafellar, On the maximal monotonicity of subdifferential mappings, Pacific Journal of Mathematics 33, 209–216, 1970
[35] R.T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM Journal on Control and Optimization 14(5), 877–898, 1976
[36] S. Simons, From Hahn-Banach to Monotonicity, Springer, Berlin, 2008
[37] W. Su, S. Boyd, E.J. Candès, A differential equation for modeling Nesterov's accelerated gradient method: theory and insights, in: Advances in Neural Information Processing Systems (NIPS) 27, 2014


More information

Levenberg-Marquardt dynamics associated to variational inequalities

Levenberg-Marquardt dynamics associated to variational inequalities Levenberg-Marquardt dynamics associated to variational inequalities Radu Ioan Boţ Ernö Robert Csetnek Aril 1, 217 Abstract. In connection with the otimization roblem inf {Φx + Θx}, x argmin Ψ where Φ is

More information

Viscosity approximation methods for the implicit midpoint rule of asymptotically nonexpansive mappings in Hilbert spaces

Viscosity approximation methods for the implicit midpoint rule of asymptotically nonexpansive mappings in Hilbert spaces Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 016, 4478 4488 Research Article Viscosity approximation methods for the implicit midpoint rule of asymptotically nonexpansive mappings in Hilbert

More information

Research Article Strong Convergence of a Projected Gradient Method

Research Article Strong Convergence of a Projected Gradient Method Applied Mathematics Volume 2012, Article ID 410137, 10 pages doi:10.1155/2012/410137 Research Article Strong Convergence of a Projected Gradient Method Shunhou Fan and Yonghong Yao Department of Mathematics,

More information

Regularization Inertial Proximal Point Algorithm for Convex Feasibility Problems in Banach Spaces

Regularization Inertial Proximal Point Algorithm for Convex Feasibility Problems in Banach Spaces Int. Journal of Math. Analysis, Vol. 3, 2009, no. 12, 549-561 Regularization Inertial Proximal Point Algorithm for Convex Feasibility Problems in Banach Spaces Nguyen Buong Vietnamse Academy of Science

More information

THROUGHOUT this paper, we let C be a nonempty

THROUGHOUT this paper, we let C be a nonempty Strong Convergence Theorems of Multivalued Nonexpansive Mappings and Maximal Monotone Operators in Banach Spaces Kriengsak Wattanawitoon, Uamporn Witthayarat and Poom Kumam Abstract In this paper, we prove

More information

arxiv: v1 [math.oc] 1 Jul 2016

arxiv: v1 [math.oc] 1 Jul 2016 Convergence Rate of Frank-Wolfe for Non-Convex Objectives Simon Lacoste-Julien INRIA - SIERRA team ENS, Paris June 8, 016 Abstract arxiv:1607.00345v1 [math.oc] 1 Jul 016 We give a simple proof that the

More information

PROXIMAL POINT ALGORITHMS INVOLVING FIXED POINT OF NONSPREADING-TYPE MULTIVALUED MAPPINGS IN HILBERT SPACES

PROXIMAL POINT ALGORITHMS INVOLVING FIXED POINT OF NONSPREADING-TYPE MULTIVALUED MAPPINGS IN HILBERT SPACES PROXIMAL POINT ALGORITHMS INVOLVING FIXED POINT OF NONSPREADING-TYPE MULTIVALUED MAPPINGS IN HILBERT SPACES Shih-sen Chang 1, Ding Ping Wu 2, Lin Wang 3,, Gang Wang 3 1 Center for General Educatin, China

More information

Math 273a: Optimization Subgradient Methods

Math 273a: Optimization Subgradient Methods Math 273a: Optimization Subgradient Methods Instructor: Wotao Yin Department of Mathematics, UCLA Fall 2015 online discussions on piazza.com Nonsmooth convex function Recall: For ˉx R n, f(ˉx) := {g R

More information

Dual and primal-dual methods

Dual and primal-dual methods ELE 538B: Large-Scale Optimization for Data Science Dual and primal-dual methods Yuxin Chen Princeton University, Spring 2018 Outline Dual proximal gradient method Primal-dual proximal gradient method

More information

Graph Convergence for H(, )-co-accretive Mapping with over-relaxed Proximal Point Method for Solving a Generalized Variational Inclusion Problem

Graph Convergence for H(, )-co-accretive Mapping with over-relaxed Proximal Point Method for Solving a Generalized Variational Inclusion Problem Iranian Journal of Mathematical Sciences and Informatics Vol. 12, No. 1 (2017), pp 35-46 DOI: 10.7508/ijmsi.2017.01.004 Graph Convergence for H(, )-co-accretive Mapping with over-relaxed Proximal Point

More information

A convergence result for an Outer Approximation Scheme

A convergence result for an Outer Approximation Scheme A convergence result for an Outer Approximation Scheme R. S. Burachik Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP 21941-972, Brazil regi@cos.ufrj.br J. O. Lopes Departamento

More information

On a result of Pazy concerning the asymptotic behaviour of nonexpansive mappings

On a result of Pazy concerning the asymptotic behaviour of nonexpansive mappings On a result of Pazy concerning the asymptotic behaviour of nonexpansive mappings arxiv:1505.04129v1 [math.oc] 15 May 2015 Heinz H. Bauschke, Graeme R. Douglas, and Walaa M. Moursi May 15, 2015 Abstract

More information

A New Modified Gradient-Projection Algorithm for Solution of Constrained Convex Minimization Problem in Hilbert Spaces

A New Modified Gradient-Projection Algorithm for Solution of Constrained Convex Minimization Problem in Hilbert Spaces A New Modified Gradient-Projection Algorithm for Solution of Constrained Convex Minimization Problem in Hilbert Spaces Cyril Dennis Enyi and Mukiawa Edwin Soh Abstract In this paper, we present a new iterative

More information

Extremal Solutions of Differential Inclusions via Baire Category: a Dual Approach

Extremal Solutions of Differential Inclusions via Baire Category: a Dual Approach Extremal Solutions of Differential Inclusions via Baire Category: a Dual Approach Alberto Bressan Department of Mathematics, Penn State University University Park, Pa 1682, USA e-mail: bressan@mathpsuedu

More information

The Split Hierarchical Monotone Variational Inclusions Problems and Fixed Point Problems for Nonexpansive Semigroup

The Split Hierarchical Monotone Variational Inclusions Problems and Fixed Point Problems for Nonexpansive Semigroup International Mathematical Forum, Vol. 11, 2016, no. 8, 395-408 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/imf.2016.6220 The Split Hierarchical Monotone Variational Inclusions Problems and

More information

Solving monotone inclusions involving parallel sums of linearly composed maximally monotone operators

Solving monotone inclusions involving parallel sums of linearly composed maximally monotone operators Solving monotone inclusions involving parallel sums of linearly composed maximally monotone operators Radu Ioan Boţ Christopher Hendrich 2 April 28, 206 Abstract. The aim of this article is to present

More information

A user s guide to Lojasiewicz/KL inequalities

A user s guide to Lojasiewicz/KL inequalities Other A user s guide to Lojasiewicz/KL inequalities Toulouse School of Economics, Université Toulouse I SLRA, Grenoble, 2015 Motivations behind KL f : R n R smooth ẋ(t) = f (x(t)) or x k+1 = x k λ k f

More information

A GENERALIZATION OF THE REGULARIZATION PROXIMAL POINT METHOD

A GENERALIZATION OF THE REGULARIZATION PROXIMAL POINT METHOD A GENERALIZATION OF THE REGULARIZATION PROXIMAL POINT METHOD OGANEDITSE A. BOIKANYO AND GHEORGHE MOROŞANU Abstract. This paper deals with the generalized regularization proximal point method which was

More information

PROX-PENALIZATION AND SPLITTING METHODS FOR CONSTRAINED VARIATIONAL PROBLEMS

PROX-PENALIZATION AND SPLITTING METHODS FOR CONSTRAINED VARIATIONAL PROBLEMS PROX-PENALIZATION AND SPLITTING METHODS FOR CONSTRAINED VARIATIONAL PROBLEMS HÉDY ATTOUCH, MARC-OLIVIER CZARNECKI & JUAN PEYPOUQUET Abstract. This paper is concerned with the study of a class of prox-penalization

More information

On the Brézis - Haraux - type approximation in nonreflexive Banach spaces

On the Brézis - Haraux - type approximation in nonreflexive Banach spaces On the Brézis - Haraux - type approximation in nonreflexive Banach spaces Radu Ioan Boţ Sorin - Mihai Grad Gert Wanka Abstract. We give Brézis - Haraux - type approximation results for the range of the

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

Zeqing Liu, Jeong Sheok Ume and Shin Min Kang

Zeqing Liu, Jeong Sheok Ume and Shin Min Kang Bull. Korean Math. Soc. 41 (2004), No. 2, pp. 241 256 GENERAL VARIATIONAL INCLUSIONS AND GENERAL RESOLVENT EQUATIONS Zeqing Liu, Jeong Sheok Ume and Shin Min Kang Abstract. In this paper, we introduce

More information

Convex Analysis and Economic Theory AY Elementary properties of convex functions

Convex Analysis and Economic Theory AY Elementary properties of convex functions Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory AY 2018 2019 Topic 6: Convex functions I 6.1 Elementary properties of convex functions We may occasionally

More information

An accelerated non-euclidean hybrid proximal extragradient-type algorithm for convex-concave saddle-point problems

An accelerated non-euclidean hybrid proximal extragradient-type algorithm for convex-concave saddle-point problems An accelerated non-euclidean hybrid proximal extragradient-type algorithm for convex-concave saddle-point problems O. Kolossoski R. D. C. Monteiro September 18, 2015 (Revised: September 28, 2016) Abstract

More information

Iterative algorithms based on the hybrid steepest descent method for the split feasibility problem

Iterative algorithms based on the hybrid steepest descent method for the split feasibility problem Available online at www.tjnsa.com J. Nonlinear Sci. Appl. 9 (206), 424 4225 Research Article Iterative algorithms based on the hybrid steepest descent method for the split feasibility problem Jong Soo

More information

arxiv: v1 [math.ca] 7 Jul 2013

arxiv: v1 [math.ca] 7 Jul 2013 Existence of Solutions for Nonconvex Differential Inclusions of Monotone Type Elza Farkhi Tzanko Donchev Robert Baier arxiv:1307.1871v1 [math.ca] 7 Jul 2013 September 21, 2018 Abstract Differential inclusions

More information

SHRINKING PROJECTION METHOD FOR A SEQUENCE OF RELATIVELY QUASI-NONEXPANSIVE MULTIVALUED MAPPINGS AND EQUILIBRIUM PROBLEM IN BANACH SPACES

SHRINKING PROJECTION METHOD FOR A SEQUENCE OF RELATIVELY QUASI-NONEXPANSIVE MULTIVALUED MAPPINGS AND EQUILIBRIUM PROBLEM IN BANACH SPACES U.P.B. Sci. Bull., Series A, Vol. 76, Iss. 2, 2014 ISSN 1223-7027 SHRINKING PROJECTION METHOD FOR A SEQUENCE OF RELATIVELY QUASI-NONEXPANSIVE MULTIVALUED MAPPINGS AND EQUILIBRIUM PROBLEM IN BANACH SPACES

More information

MOSCO STABILITY OF PROXIMAL MAPPINGS IN REFLEXIVE BANACH SPACES

MOSCO STABILITY OF PROXIMAL MAPPINGS IN REFLEXIVE BANACH SPACES MOSCO STABILITY OF PROXIMAL MAPPINGS IN REFLEXIVE BANACH SPACES Dan Butnariu and Elena Resmerita Abstract. In this paper we establish criteria for the stability of the proximal mapping Prox f ϕ =( ϕ+ f)

More information

A generalized forward-backward method for solving split equality quasi inclusion problems in Banach spaces

A generalized forward-backward method for solving split equality quasi inclusion problems in Banach spaces Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 4890 4900 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa A generalized forward-backward

More information

Nonlinear Systems and Control Lecture # 12 Converse Lyapunov Functions & Time Varying Systems. p. 1/1

Nonlinear Systems and Control Lecture # 12 Converse Lyapunov Functions & Time Varying Systems. p. 1/1 Nonlinear Systems and Control Lecture # 12 Converse Lyapunov Functions & Time Varying Systems p. 1/1 p. 2/1 Converse Lyapunov Theorem Exponential Stability Let x = 0 be an exponentially stable equilibrium

More information

Convergence Theorems for Bregman Strongly Nonexpansive Mappings in Reflexive Banach Spaces

Convergence Theorems for Bregman Strongly Nonexpansive Mappings in Reflexive Banach Spaces Filomat 28:7 (2014), 1525 1536 DOI 10.2298/FIL1407525Z Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Convergence Theorems for

More information

Monotone variational inequalities, generalized equilibrium problems and fixed point methods

Monotone variational inequalities, generalized equilibrium problems and fixed point methods Wang Fixed Point Theory and Applications 2014, 2014:236 R E S E A R C H Open Access Monotone variational inequalities, generalized equilibrium problems and fixed point methods Shenghua Wang * * Correspondence:

More information

Some unified algorithms for finding minimum norm fixed point of nonexpansive semigroups in Hilbert spaces

Some unified algorithms for finding minimum norm fixed point of nonexpansive semigroups in Hilbert spaces An. Şt. Univ. Ovidius Constanţa Vol. 19(1), 211, 331 346 Some unified algorithms for finding minimum norm fixed point of nonexpansive semigroups in Hilbert spaces Yonghong Yao, Yeong-Cheng Liou Abstract

More information

Characterizations of Lojasiewicz inequalities: subgradient flows, talweg, convexity.

Characterizations of Lojasiewicz inequalities: subgradient flows, talweg, convexity. Characterizations of Lojasiewicz inequalities: subgradient flows, talweg, convexity. Jérôme BOLTE, Aris DANIILIDIS, Olivier LEY, Laurent MAZET. Abstract The classical Lojasiewicz inequality and its extensions

More information

THE CYCLIC DOUGLAS RACHFORD METHOD FOR INCONSISTENT FEASIBILITY PROBLEMS

THE CYCLIC DOUGLAS RACHFORD METHOD FOR INCONSISTENT FEASIBILITY PROBLEMS THE CYCLIC DOUGLAS RACHFORD METHOD FOR INCONSISTENT FEASIBILITY PROBLEMS JONATHAN M. BORWEIN AND MATTHEW K. TAM Abstract. We analyse the behaviour of the newly introduced cyclic Douglas Rachford algorithm

More information

Complexity of the relaxed Peaceman-Rachford splitting method for the sum of two maximal strongly monotone operators

Complexity of the relaxed Peaceman-Rachford splitting method for the sum of two maximal strongly monotone operators Complexity of the relaxed Peaceman-Rachford splitting method for the sum of two maximal strongly monotone operators Renato D.C. Monteiro, Chee-Khian Sim November 3, 206 Abstract This paper considers the

More information

Convex Feasibility Problems

Convex Feasibility Problems Laureate Prof. Jonathan Borwein with Matthew Tam http://carma.newcastle.edu.au/drmethods/paseky.html Spring School on Variational Analysis VI Paseky nad Jizerou, April 19 25, 2015 Last Revised: May 6,

More information

Journal of Convex Analysis (accepted for publication) A HYBRID PROJECTION PROXIMAL POINT ALGORITHM. M. V. Solodov and B. F.

Journal of Convex Analysis (accepted for publication) A HYBRID PROJECTION PROXIMAL POINT ALGORITHM. M. V. Solodov and B. F. Journal of Convex Analysis (accepted for publication) A HYBRID PROJECTION PROXIMAL POINT ALGORITHM M. V. Solodov and B. F. Svaiter January 27, 1997 (Revised August 24, 1998) ABSTRACT We propose a modification

More information

A Viscosity Method for Solving a General System of Finite Variational Inequalities for Finite Accretive Operators

A Viscosity Method for Solving a General System of Finite Variational Inequalities for Finite Accretive Operators A Viscosity Method for Solving a General System of Finite Variational Inequalities for Finite Accretive Operators Phayap Katchang, Somyot Plubtieng and Poom Kumam Member, IAENG Abstract In this paper,

More information

Variational inequalities for fixed point problems of quasi-nonexpansive operators 1. Rafał Zalas 2

Variational inequalities for fixed point problems of quasi-nonexpansive operators 1. Rafał Zalas 2 University of Zielona Góra Faculty of Mathematics, Computer Science and Econometrics Summary of the Ph.D. thesis Variational inequalities for fixed point problems of quasi-nonexpansive operators 1 by Rafał

More information

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 XVI - 1 Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 A slightly changed ADMM for convex optimization with three separable operators Bingsheng He Department of

More information

On Penalty and Gap Function Methods for Bilevel Equilibrium Problems

On Penalty and Gap Function Methods for Bilevel Equilibrium Problems On Penalty and Gap Function Methods for Bilevel Equilibrium Problems Bui Van Dinh 1 and Le Dung Muu 2 1 Faculty of Information Technology, Le Quy Don Technical University, Hanoi, Vietnam 2 Institute of

More information

Viscosity Iterative Approximating the Common Fixed Points of Non-expansive Semigroups in Banach Spaces

Viscosity Iterative Approximating the Common Fixed Points of Non-expansive Semigroups in Banach Spaces Viscosity Iterative Approximating the Common Fixed Points of Non-expansive Semigroups in Banach Spaces YUAN-HENG WANG Zhejiang Normal University Department of Mathematics Yingbing Road 688, 321004 Jinhua

More information