Second order forward-backward dynamical systems for monotone inclusion problems

Size: px
Start display at page:

Download "Second order forward-backward dynamical systems for monotone inclusion problems"

Transcription

1 Second order forward-backward dynamical systems for monotone inclusion problems Radu Ioan Boţ Ernö Robert Csetnek March 6, 25 Abstract. We begin by considering second order dynamical systems of the from ẍt + Γẋt + λtbxt =, where Γ : H H is an elliptic bounded self-adjoint linear operator defined on a real Hilbert space H, B : H H is a cocoercive operator and λ : [, + [, + is a relaxation function depending on time. We show the existence and uniqueness of strong global solutions in the framework of the Cauchy-Lipschitz-Picard Theorem and prove weak convergence for the generated trajectories to a zero of the operator B, by using Lyapunov analysis combined with the celebrated Opial Lemma in its continuous version. The framework allows to address from similar perspectives second order dynamical systems associated with the problem of finding zeros of the sum of a maximally monotone operator and a cocoercive one. This captures as particular case the minimization of the sum of a nonsmooth convex function with a smooth convex one and allows us to recover and improve several results from the literature concerning the minimization of a convex smooth function subject to a convex closed set by means of second order dynamical systems. When considering the unconstrained minimization of a smooth convex function we prove a rate of O/t for the convergence of the function value along the ergodic trajectory to its minimum value. A similar analysis is carried out also for second order dynamical systems having as first order term γtẋt, where γ : [, + [, + is a damping function depending on time. Key Words. dynamical systems, Lyapunov analysis, monotone inclusions, convex optimization problems, continuous forward-backward method AMS subject classification. 34G25, 47J25, 47H5, 9C25 Introduction and preliminaries This paper is motivated by the heavy ball with friction dynamical system ẍ + γẋ + fx =, which is a nonlinear oscillator with damping γ > and potential f : H R, supposed to be a convex and differentiable function defined on the real Hilbert space H. When University of Vienna, Faculty of Mathematics, Oskar-Morgenstern-Platz, A-9 Vienna, Austria, radu.bot@univie.ac.at. University of Vienna, Faculty of Mathematics, Oskar-Morgenstern-Platz, A-9 Vienna, Austria, ernoe.robert.csetnek@univie.ac.at. Research supported by FWF Austrian Science Fund, Lise Meitner Programme, project M 682-N25.

2 H = R 2, the system is a simplified version of the differential system describing the motion of a heavy ball that keeps rolling over the graph of the function f under its own inertia until friction stops it at a critical point of f see []. The second order dynamical system has been considered by several authors in the context of minimizing the function f, these investigations being either concerned with the convergence of the generated trajectories to a critical point of f or with the convergence of the function along the trajectories to its global minimum value see [3,7,8,]. It is also worth to mention that the time discretization of the heavy ball with friction dynamical system leads to the so-called inertial-type algorithms, which are numerical schemes sharing the feature that the current iterate of the generated sequence is defined by making use of the previous two iterates see, for instance, [3 5, 8, 2, 23]. In order to approach the minimization of f over a nonempty, convex and closed set C H, the gradient-projection second order dynamical system ẍ + γẋ + x P C x η fx =, 2 has been considered, where P C : H C denotes the projection onto the set C and η >. Convergence statements for the trajectories to a global minimizer of f over C have been provided in [7, 8]. Furthermore, in [8], these investigations have been expanded to more general second order dynamical systems of the form ẍ + γẋ + x T x =, 3 where T : H H is a nonexpansive operator. It has been shown that when γ 2 > 2 the trajectory of 8 converges weakly to an element in the fixed points set of T, provided it is nonempty. In the first part of the present manuscript we treat the second order dynamical system ẍt + Γẋt + λtbxt =, 4 where Γ : H H is an elliptic bounded self-adjoint linear operator, B : H H is a cocoercive operator and λ : [, + [, + is a relaxation function in time. We notice that the presence of the elliptic operator induces an anisotropic damping and refer to [3], where a similar construction has been used fin the context of minimizing a convex and smooth function. The existence and uniqueness of strong global solutions for 4 is obtained by applying the classical Cauchy-Lipschitz-Picard Theorem see [2, 27]. We show that under mild assumptions on the relaxation function the trajectory xt converges weakly as t + to a zero of the operator B, provided it has a nonempty set of zeros. To this end we will use Lyapunov analysis combined with the continuous version of the Opial Lemma see also [3, 7, 8], where similar techniques have been used. Further, we approach the problem of finding a zero of the sum of a maximally monotone operator and a cocoercive one via a second order dynamical system formulated by making use of the resolvent of the set-valued operator, see 3. Dynamical systems of implicit type have been already considered in the literature in [, 2, 9, 2, 4, 6, 7]. We specialize these investigations to the minimization of the sum of a nonsmooth convex function with a smooth convex function one, fact which allows us to recover and improve results given in [7,8] in the context of studying the dynamical system 2. Whenever B is the gradient of a smooth convex function we show that the latter converges along the ergodic trajectories generated by 4 to its minimum value with a rate of convergence of O/t. 2

3 We close the paper by showing that a similar analysis can be carried out when using as starting point dynamical systems of the form ẍt + γtẋt + λtbxt =, 5 where the damping coefficient γ : [, + [, + is a function depending on time. Throughout this paper N = {,, 2,...} denotes the set of nonnegative integers and H a real Hilbert space with inner product, and corresponding norm =,. 2 A dynamical system: existence and uniqueness of strong global solutions This section is devoted to the study of existence and uniqueness of strong global solutions of a second order dynamical system governed by Lipschitz continuous operators. Let Γ : H H be an L Γ -Lipschitz continuous operator that is L Γ and Γx Γy L Γ x y for all x, y H, B : H H L B -Lipschitz continuous operator, λ : [, + [, + a Lebesgue measurable function, u, v H and consider the dynamical system { ẍt + Γẋt + λtbxt = 6 x = u, ẋ = v. As in [2, 2], we consider the following definition of an absolutely continuous function. Definition see, for instance, [2, 2] A function x : [, b] H where b > is said to be absolutely continuous if one of the following equivalent properties holds: i there exists an integrable function y : [, b] H such that xt = x + t ysds t [, b]; ii x is continuous and its distributional derivative is Lebesgue integrable on [, b]; iii for every ε >, there exists η > such that for any finite family of intervals I k = a k, b k we have the implication I k I j = and k b k a k < η = k xb k xa k < ε. Remark a It follows from the definition that an absolutely continuous function is differentiable almost everywhere, its derivative coincides with its distributional derivative almost everywhere and one can recover the function from its derivative ẋ = y by the integration formula i. b If x : [, b] H where b > is absolutely continuous and B : H H is L-Lipschitz continuous where L, then the function z = B x is absolutely continuous, too. This can be easily seen by using the characterization of absolute continuity in Definition iii. Moreover, z is almost everywhere differentiable and the inequality ż L ẋ holds almost everywhere. 3

4 Definition 2 We say that x : [, + H is a strong global solution of 6 if the following properties are satisfied: i x, ẋ : [, + H are locally absolutely continuous, in other words, absolutely continuous on each interval [, b] for < b < + ; ii ẍt + Γẋt + λtbxt = for almost every t [, + ; iii x = u and ẋ = v. For proving the existence and uniqueness of strong global solutions of 6 we use the Cauchy-Lipschitz-Picard Theorem for absolutely continues trajectories see for example [2, Proposition 6.2.], [27, Theorem 54]. The key observation here is that one ca rewrite 6 as a certain first order dynamical system in a product space see also [6]. Theorem 2 Let Γ : H H be an L Γ -Lipschitz continuous operator, B : H H a L B - Lipschitz continuous operator and λ : [, + [, + a Lebesgue measurable function such that λ L loc [, + that is λ L [, b] for every < b < +. Then for each u, v H there exists a unique strong global solution of the dynamical system 6. Proof. The system 6 can be equivalently written as a first order dynamical system in the phase space H H { Ẏ t = F t, Y t 7 Y = u, v, with and Y : [, + H H, Y t = xt, ẋt F : [, + H H H H, F t, u, v = v, Γv λtbu. We endow H H with scalar product u, v, u, v H H = u, u + v, v and corresponding norm u, v H H = u 2 + v 2. a For arbitrary u, u, v, v H, by using the Lipschitz continuity of the involved operators, we obtain F t, u, v F t, u, v H H = v v 2 + Γv Γv + λtbu Bu 2 + 2L 2 Γ v v 2 + 2L 2 B λ2 t u u 2 + 2L 2 Γ + 2L2 B λ2 t u, u v, v H H + 2L Γ + 2L B λt u, u v, v H H t. As λ L loc [, +, the Lipschitz constant of F t,, is local integrable. b Next we show that u, v H, b >, F, u, v L [, b], H H. 8 For arbitrary u, v H and b > it holds b F t, u, v H H dt = b b b v 2 + Γv + λtbu 2 dt v Γv 2 + 2λ 2 t Bu 2 dt v Γv 2 + 2λt Bu dt 4

5 and from here 8 follows, by using the assumptions made on λ. In the light of the statements a and b, the existence and uniqueness of a strong global solution for 7 are consequences of the Cauchy-Lipschitz-Picard Theorem for first order dynamical systems see, for example, [2, Proposition 6.2.], [27, Theorem 54]. From here, due to the equivalence of 6 and 7, the conclusion follows. 3 Convergence of the trajectories In this section we address the convergence properties of the trajectories generated by the dynamical system 6 by assuming that B : H H is a -cocoercive operator for >, that is Bx By 2 x y, Bx By for all x, y H. To this end we will make use of the following well-known results, which can be interpreted as continuous versions of the quasi-fejér monotonicity for sequences. For their proofs we refer the reader to [2, Lemma 5.] and [2, Lemma 5.2], respectively. Lemma 3 Suppose that F : [, + R is locally absolutely continuous and bounded below and that there exists G L [, + such that for almost every t [, + Then there exists lim t F t R. d F t Gt. dt Lemma 4 If p <, r, F : [, + [, + is locally absolutely continuous, F L p [, +, G : [, + R, G L r [, + and for almost every t [, + d F t Gt, dt then lim t + F t =. The next result which we recall here is the continuous version of the Opial Lemma see, for example, [2, Lemma 5.3], [, Lemma.]. Lemma 5 Let S H be a nonempty set and x : [, + H a given map. Assume that i for every x S, lim t + xt x exists; ii every weak sequential cluster point of the map x belongs to S. Then there exists x S such that xt converges weakly to x as t +. In order to prove the convergence of the trajectories of 6, we make the following assumptions on the operator Γ and the relaxation function λ, respectively: A Γ : H H is a bounded self-adjoint linear operator, assumed to be elliptic, that is, there exists γ > such that Γu, u γ u 2 for all u H; A2 λ : [, +, + is locally absolutely continuous and there exists θ > such that for almost every t [, + we have λt and λt γ2 + θ. 9 5

6 Due to Definition and Remark a λt exists for almost every t and λ is Lebesgue integrable on each interval [, b] for < b < +. If λt for almost every t, then λ is monotonically increasing, thus, as λ is assumed to take only positive values, A2 yields the existence of a lower bound λ such that for almost every t [, + one has < λ λt γ2 + θ. Theorem 6 Let B : H H be a -cocoercive operator for > such that zer B := {u H : Bu = }, Γ : H H be an operator fulfilling A, λ : [, +, + be a function fulfilling A2 and u, v H. Let x : [, + H be the unique strong global solution of 6. Then the following statements are true: i the trajectory x is bounded and ẋ, ẍ, Bx L 2 [, + ; H; ii lim t + ẋt = lim t + ẍt = lim t + Bxt = ; iii xt converges weakly to an element in zer B as t +. Proof. Notice that the existence and uniqueness of the trajectory x follows from Theorem 2, since B is /-Lipschitz continuous, Γ is Γ -Lipschitz continuous and A2 ensures λ L loc [, +. i Take an arbitrary x zer B and consider for every t [, + the function ht = 2 xt x 2. We have ḣt = xt x, ẋt and ḧt = ẋt 2 + xt x, ẍt for almost every t [, +. Taking into account 6, we get for almost every t [, + ḧt + γḣt + λt xt x, Bxt + xt x, Γẋt γẋt = ẋt 2. Now we introduce the function p : [, + R, pt = 2 Γ γ Id xt x, xt x, 2 where Id denotes the identity on H. Due to A, as Γ γ Id u, u for all u H, it holds pt for all t. 3 Moreover, ṗt = Γ γ Id ẋt, xt x, which combined with, the cocoercivity of B and the fact that Bx = yields for almost every t [, + ḧt + γḣt + λt Bxt 2 + ṗt ẋt 2. Taking into account 6 one obtains for almost every t [, + hence ḧt + γḣt + ḧt + γḣt + According to A we have λt ẍt + Γẋt 2 + ṗt ẋt 2, λt ẍt ẍt, Γẋt + λt λt Γẋt 2 + ṗt ẋt 2. 4 γ u Γu for all u H, 5 6

7 which combined with 4 yields for almost every t [, + ḧt + γḣt + ṗt + d γ 2 ẋt, Γẋt + λt dt λt ẋt 2 + By taking into account that for almost every t [, + d λt dt d ẋt, Γẋt = dt d dt we obtain for almost every t [, + ḧt + γḣt + ṗt+ d ẋt, Γẋt + dt λt γ 2 λt ẋt, Γẋt + λt λ 2 t λt ẋt, Γẋt λt λt ẍt 2. ẋt, Γẋt + γ λt λ 2 t ẋt 2, 6 λt + γ λ 2 t ẋt 2 + λt ẍt 2. 7 By using now assumption A2 we obtain that the following inequality holds for almost every t [, + ḧt + γḣt + ṗt + d ẋt, Γẋt + θ ẋt θ dt λt γ 2 ẍt 2. 8 This implies that the function t ḣt + γht + pt + λt ẋt, Γẋt, which is locally absolutely continuous, is monotonically decreasing. Hence there exists a real number M such that for almost every t [, + ḣt + γht + pt + ẋt, Γẋt M, 9 λt which yields, together with 3 and A2, that for almost every t [, + ḣt + γht M. By multiplying this inequality with e γt and then integrating from to T, where T >, one easily obtains thus and, consequently, ht he γt + M γ e γt, h is bounded 2 the trajectory x is bounded. 2 On the other hand, from 9, by taking into account 3, A and A2, it follows that for almost every t [, + ḣt + + θ γ ẋt 2 M, 7

8 hence xt x, ẋt + + θ γ ẋt 2 M. This inequality, in combination with 2, yields which further implies that ẋ is bounded, 22 ḣ is bounded. 23 Integrating the inequality 8 we obtain that there exists a real number N R such that for almost every t [, + ḣt + γht + pt + ẋt, Γẋt + θ λt t ẋs 2 ds + + θ γ 2 t ẍs 2 ds N. From here, via 23, 3 and A, we conclude that ẋ, ẍ L 2 [, + ; H. Finally, from 6, A and A2 we deduce Bx L 2 [, + ; H and the proof of i is complete. ii For almost every t [, + it holds d dt 2 ẋt 2 = ẋt, ẍt 2 ẋt ẍt 2 and Lemma 4 together with i lead to lim t + ẋt =. Further, by taking into consideration Remark b, for almost every t [, + we have d dt 2 Bxt 2 = Bxt, ddt Bxt 2 Bxt ẋt 2. By using again Lemma 4 and i we get lim t + Bxt =, while the fact that lim t + ẍt = follows from 6, A and A2. iii We are going to prove that both assumptions in Opial Lemma are fulfilled. The first one concerns the existence of lim t + xt x. As seen in the proof of part i, the function t ḣt + γht + pt + λt ẋt, Γẋt is monotonically decreasing, thus from i, ii, 3, A and A2 we deduce that lim t + γht + pt exists and it is a real number. It remains to prove that lim t + pt exists and it is a real number and this will prove the first part of the Opial Lemma. Indeed, from 8 we get that for almost every t [, + ṗt ḧt γḣt d ẋt, Γẋt. 24 dt λt On the other hand, by A, for every T we have T [ ḧt γḣt d ] ẋt, Γẋt dt = dt λt ḣt γht ẋt, ΓẋT + ḣ + γh + λt ḣt + ḣ + γh + ẋ, Γẋ. λ ẋ, Γẋ λ 8

9 Since lim T + ḣt = see i and ii, we deduce that the function t ḧt γḣt d dt λt ẋt, Γẋt is in L [, +. From Lemma 3 it follows that there exists lim t + pt R. We come now to the second assumption of the Opial Lemma. Let x be a weak sequential cluster point of x, that is, there exists a sequence t n + as n + such that xt n n N converges weakly to x. Since B is a maximally monotone operator see for instance [3, Example 2.28], its graph is sequentially closed with respect to the weakstrong topology of the product space H H. By using also that lim n + Bxt n =, we conclude that Bx =, hence x zer B and the proof is complete. A standard instance of a cocoercive operator defined on a real Hilbert spaces is the one that can be represented as B = Id T, where T : H H is a nonexpansive operator, that is, a -Lipschitz continuous operator. As it easily follows from the nonexpansiveness of T, B is in this case /2-cocoercive. For this particular choice of the operator B, the dynamical system 6 becomes { ẍt + Γẋt + λt xt T xt = 25 x = u, ẋ = v, while assumption A2 reads A3 λ : [, +, + is locally absolutely continuous and there exists θ > such that for almost every t [, + we have λt and λt Theorem 6 gives rise to the following result. γ θ. 26 Corollary 7 Let T : H H be a nonexpansive operator such that Fix T = {u H : T u = u} =, Γ : H H be an operator fulfilling A, λ : [, +, + be a function fulfilling A3 and u, v H. Let x : [, + H be the unique strong global solution of 25. Then the following statements are true: i the trajectory x is bounded and ẋ, ẍ, Id T x L 2 [, + ; H; ii lim t + ẋt = lim t + ẍt = lim t + Id T xt = ; iii xt converges weakly to a point in Fix T as t +. Remark 8 In the particular case when Γ = γ Id for γ > and λt = for all t [, + the dynamical system 25 becomes { ẍt + γẋt + xt T xt = 27 x = u, ẋ = v. The convergence of the trajectories generated by 27 has been studied in [8, Theorem 3.2] under the condition γ 2 > 2. In this case A3 is obviously fulfilled for an arbitrary < θ γ 2 2/2. However, different to [8], we allow in Corollary 7 an anisotropic damping through the use of the elliptic operator Γ and also a variable relaxation function λ depending on time in [3] the anisotropic damping has been considered as well in the context of minimizing of a smooth convex function via second order dynamical systems. 9

10 We close the section by addressing an immediate consequence of the above corollary applied to second order dynamical systems governed by averaged operators. The operator R : H H is said to be α-averaged for α,, if there exists a nonexpansive operator T : H H such that R = α Id +αt. For α = 2 we obtain as an important representative of this class the firmly nonexpansive operators. For properties and insights concerning these families of operators we refer to the monograph [3]. We consider the dynamical system { ẍt + Γẋt + λt xt Rxt = 28 x = u, ẋ = v and formulate the assumption A4 λ : [, +, + is locally absolutely continuous and there exists θ > such that for almost every t [, + we have λt and λt γ 2 2α + θ. 29 Corollary 9 Let R : H H be an α-averaged operator for α, such that Fix R, Γ : H H be an operator fulfilling A, λ : [, +, + be a function fulfilling A4 and u, v H. Let x : [, + H be the unique strong global solution of 28. Then the following statements are true: i the trajectory x is bounded and ẋ, ẍ, Id Rx L 2 [, + ; H; ii lim t + ẋt = lim t + ẍt = lim t + Id Rxt = ; iii xt converges weakly to a point in Fix R as t +. Proof. Since R is α-averaged, there exists a nonexpansive operator T : H H such that R = α Id +αt. The conclusion is a direct consequence of Corollary 7, by taking into account that 28 is equivalent to { ẍt + Γẋt + αλt xt T xt = x = u, ẋ = v, and Fix R = Fix T. 4 Forward-backward second order dynamical systems In this section we address the monotone inclusion problem find Ax + Bx, where A : H H is a maximally monotone operator and B : H H is a -cocoercive operator for > via a second-order forward-backward dynamical system with anisotropic damping and variable relaxation parameter. For readers convenience we recall at the beginning some standard notions and results in monotone operator theory which will be used in the following see also [3,5,26]. For an arbitrary set-valued operator A : H H we denote by Gr A = {x, u H H : u Ax} its graph. We use also the notation zer A = {x H : Ax} for the set of zeros

11 of A. We say that A is monotone, if x y, u v for all x, u, y, v Gr A. A monotone operator A is said to be maximally monotone, if there exists no proper monotone extension of the graph of A on H H. The resolvent of A, J A : H H, is defined by J A = Id +A. If A is maximally monotone, then J A : H H is single-valued and maximally monotone see [3, Proposition 23.7 and Corollary 23.]. For an arbitrary γ > we have see [3, Proposition 23.2] p J γa x if and only if p, γ x p Gr A. 3 The operator A is said to be uniformly monotone if there exists an increasing function φ A : [, + [, + ] that vanishes only at and fulfills x y, u v φ A x y for every x, u Gr A and y, v Gr A. A popular class of operators having this property is the one of the strongly monotone operators. We say that A is γ-strongly monotone for γ >, if x y, u v γ x y 2 for all x, u, y, v Gr A. For η > we consider the dynamical system { ] ẍt + Γẋt + λt [xt J ηa xt ηbxt = 3 x = u, ẋ = v. We formulate the following assumption, where δ := min{, /η} + /2: A5 λ : [, +, + is locally absolutely continuous and there exists θ > such that for almost every t [, + we have λt and λt δγ2 2 + θ. 32 Theorem Let A : H H be a maximally monotone operator and B : H H be -cocoercive operator for > such that zera + B. Let η, 2 and set δ := min{, /η} + /2. Let Γ : H H be an operator fulfilling A, λ : [, +, + be a function fulfilling A5, u, v H and x : [, + H be the unique strong global solution of 3. Then the following statements are true: i the trajectory x is bounded and ẋ, ẍ, Id J ηa Id ηb x L 2 [, + ; H; ii lim t + ẋt = lim t + ẍt = lim t + Id JηA Id ηb xt = ; iii xt converges weakly to a point in zera + B as t + ; iv if x zera+b, then Bx Bx L 2 [, + ; H, lim t + Bxt = Bx and B is constant on zera + B; v if A or B is uniformly monotone, then xt converges strongly to the unique point in zera + B as t +. Proof. i-iii It is immediate that the dynamical system 3 can be written in the form { ẍt + Γẋt + λt xt Rxt = x = u, ẋ = v, 33 where R = J ηa Id ηb. According to [3, Corollary 23.8 and Remark 4.24iii], J ηa is /2-cocoercive. Moreover, by [3, Proposition 4.33], Id ηb is η/2-averaged. Combining this with [3, Proposition 4.32], we derive that R is /δ-averaged. The statements i-iii follow now from Corollary 9 by noticing that Fix R = zera + B see [3, Proposition 25.iv].

12 iv The fact that B is constant on zera + B follows from the cocoercivity of B and the monotonicity of A. A proof of this statement when A is the subdifferential of a proper, convex and lower semicontinuous function is given for instance in [, Lemma.7]. Take an arbitrary x zera + B. From the definition of the resolvent we have for almost every t [, + Bxt ηλtẍt ηλt Γẋt A λtẍt + Γẋt + xt, 34 λt which combined with Bx Ax and the monotonicity of A leads to λtẍt + λt Γẋt + xt x, Bxt + Bx ηλtẍt ηλt Γẋt. 35 After using the cocoercivity of B we obtain for almost every t [, + Bxt Bx 2 λtẍt+ Γẋt, Bxt + Bx λt ηλ 2 ẍt + Γẋt 2 t + xt x, ηλtẍt ηλt Γẋt 2 λtẍt + λt Γẋt Bxt Bx 2 + xt x, ηλtẍt ηλt Γẋt. For evaluating the last term of the above inequality we use the functions h : [, + R, ht = 2 xt x 2 and p : [, + R, pt = 2 Γ γ Id xt x, xt x, already used in the proof of Theorem 6. For almost every t [, + we have and xt x, ẍt = ḧt ẋt 2 ṗt = xt x, Γẋt γ xt x, ẋt = xt x, Γẋt γḣt, hence xt x, ηλtẍt ηλt Γẋt = ḧt + γ ḣt + ṗt ẋt 2. ηλt 36 Consequently, for almost every t [, + it holds 2 Bxt Bx 2 2 λtẍt + λt Γẋt 2 ḧt + γ ḣt + ṗt ẋt ηλt By taking into account A5 we obtain a lower bound λ such that for almost every t [, + one has < λ λt δγ2 2 + θ. 2

13 By multiplying 37 with λt we obtain for almost every t [, + that λ 2 Bxt Bx 2 + ḧt + γ ḣt + ṗt η 2λ ẍt + Γẋt 2 + η ẋt 2. After integration we obtain that for every T [, + λ 2 T T Bxt Bx 2 dt + η 2λ ẍt + Γẋt 2 + η ẋt 2 dt. ḣt ḣ + γht γh + pt p As ẋ, ẍ L 2 [, + ; H, ht, pt for every T [, + and lim T + ḣt =, it follows that Bx Bx L 2 [, + ; H. Further, by taking into consideration Remark b, we have d dt 2 Bxt Bx 2 = Bxt Bx, ddt Bxt 2 Bxt Bx ẋt 2 and from here, in light of Lemma 4, it follows that lim t + Bxt = Bx. v Let x be the unique element of zera + B. For the beginning we suppose that A is uniformly monotone with corresponding function φ A : [, + [, + ], which is increasing and vanishes only at. By similar arguments as in the proof of statement iv, for almost every t [, + we have φ A λtẍt + Γẋt + xt x λt λtẍt + λt Γẋt + xt x, Bxt + Bx ηλtẍt ηλt Γẋt, which combined with the inequality xt x, Bxt Bx yields φ A λtẍt + Γẋt + xt x λt λtẍt + Γẋt, Bxt + Bx λt ηλ 2 t ẍt + Γẋt 2 + xt x, ηλtẍt ηλt Γẋt λtẍt + Γẋt, Bxt + Bx + xt x, λt ηλtẍt ηλt Γẋt. 3

14 As λ is bounded by positive constants, by using i-iv it follows that the right-hand side of the last inequality converges to as t +. Hence lim φ A t + λtẍt + Γẋt + xt x λt = and the properties of the function φ A allow to conclude that λtẍt+ λt Γẋt+xt x converges strongly to as t +. By using again the boundedness of λ and ii we obtain that xt converges strongly to x as t +. Finally, suppose that B is uniformly monotone with corresponding function φ B : [, + [, + ], which is increasing and vanishes only at. The conclusion follows by letting t in the inequality xt x, Bxt Bx φ B xt x t [, + converge to + and by using that x is bounded and lim t + Bxt Bx =. Remark We would like to emphasize the fact that the statements in Theorem remain valid also for η := 2. Indeed, in this case the cocoercivity of B implies that Id ηb is nonexpansive, hence the operator R = J ηa Id ηb used in the proof is nonexpansive, too, and so the statements in i-iii follow from Corollary 7. Furthermore, the proof of the statements iv and v can be repeated also for η = 2. In the remaining of this section we turn our attention to optimization problems of the form min fx + gx, x H where f : H R {+ } is a proper, convex and lower semicontinuous function and g : H R is a convex and Fréchet differentiable function with /-Lipschitz continuous gradient for >. We recall some standard notations and facts in convex analysis. For a proper, convex and lower semicontinuous function f : H R {+ }, its convex subdifferential at x H is defined as fx = {u H : fy fx + u, y x y H}. When seen as a set-valued mapping, it is a maximally monotone operator see [24] and its resolvent is given by J η f = prox ηf see [3], where prox ηf : H H, { prox ηf x = argmin fy + } y x 2, 38 y H 2η denotes the proximal point operator of f and η >. According to [3, Definition.5], f is said to be uniformly convex with modulus function φ : [, + [, + ], if φ is increasing, vanishes only at and fulfills fαx + αy + α αφ x y αfx + αfy for all α, and x, y dom f := {x H : fx < + }. Notice that if this inequality holds for φ = ν/2 2 for ν >, then f is said to be ν-strongly convex. 4

15 In the following statement we approach the minimizers of f + g via the second order dynamical system { ] ẍt + Γẋt + λt [xt prox ηf xt η gxt = 39 x = u, ẋ = v. Corollary 2 Let f : H R {+ } by a proper, convex and lower semicontinuous function and g : H R be a convex and Fréchet differentiable function with /- Lipschitz continuous gradient for > such that argmin x H {fx + gx}. Let η, 2] and set δ := min{, /η} + /2. Let Γ : H H be an operator fulfilling A, λ : [, +, + be a function fulfilling A5, u, v H and x : [, + H be the unique strong global solution of 39. Then the following statements are true: i the trajectory x is bounded and ẋ, ẍ, Id prox ηf Id η g x L 2 [, + ; H; ii lim t + ẋt = lim t + ẍt = lim t + Id proxηf Id η g xt = ; iii xt converges weakly to a minimizer of f + g as t + ; iv if x is a minimizer of f + g, then gx gx L 2 [, + ; H, lim t + gxt = gx and g is constant on argmin x H {fx + gx}; v if f or g is uniformly convex, then xt converges strongly to the unique minimizer of f + g as t +. Proof. The statements are direct consequences of the corresponding ones from Theorem see also Remark, by choosing A := f and B := g, by taking into account that zer f + g = argmin{fx + gx} x H and by making use of the Baillon-Haddad Theorem, which says that g is /-Lipschitz if and only if g is -cocoercive see [3, Corollary 8.6]. For statement v we also use the fact that if f is uniformly convex with modulus φ, then f is uniformly monotone with modulus 2φ see [3, Example 22.3iii]. Remark 3 Consider again the setting in Remark 8, namely, when Γ = γ Id for γ > and λt = for every t [, +. Furthermore, for C a nonempty, convex, closed subset of H, let f = δ C be the indicator function of C, which is defined as being equal to for x C and to +, else. The dynamical system 39 attached in this setting to the minimization of g over C becomes { ẍt + γẋt + xt PC xt η gxt = 4 x = u, ẋ = v, where P C denotes the projection onto the set C. The convergence of the trajectories of 4 has been studied in [8, Theorem 3.] under the conditions γ 2 > 2 and < η 2. In this case assumption A5 trivially holds by choosing θ such that < θ γ 2 2/2 δγ 2 2/2. Thus, in order to verify A5 in case λt = for every t [, + one needs to equivalently assume that γ 2 > 2/δ. Since δ, this provides a slight improvement over [8, Theorem 3.] in what concerns the choice of γ. We refer the reader also to [7] for an analysis of the convergence rates of trajectories of the dynamical system 4 when g is endowed with supplementary properties. 5

16 For the two main convergence statements provided in this section it was essential to choose the step size η in the interval, 2] see Theorem, Remark and Corollary 2. This, because of the fact that in this way we were able to guarantee for the generated trajectories the existence of the limit lim t + xt x 2, where x denotes a solution of the problem under investigation. It is interesting to observe that, when dealing with convex optimization problems, one can go also beyond this classical restriction concerning the choice of the step size a similar phenomenon has been reported also in [, Section 4.2]. This is pointed out in the following result, which is valid under the assumption A6 λ : [, +, + is locally absolutely continuous and there exist a, θ, θ > such that for almost every t [, + we have λt and θ + a 2 Γ γ Id γ 2 λt ηθ + η + η 4 2a Γ γ Id +, and for the proof of which we use instead of x x 2 a modified energy functional. Corollary 4 Let f : H R {+ } by a proper, convex and lower semicontinuous function and g : H R be a convex and Fréchet differentiable function with /- Lipschitz continuous gradient for > such that argmin x H {fx + gx} =. Let be η >, Γ : H H be an operator fulfilling A, λ : [, +, + be a function fulfilling A6, u, v H and x : [, + H be the unique strong global solution of 39. Then the following statements are true: i the trajectory x is bounded and ẋ, ẍ, Id prox ηf Id η g x L 2 [, + ; H; ii lim t + ẋt = lim t + ẍt = lim t + Id proxηf Id η g xt = ; iii xt converges weakly to a minimizer of f + g as t + ; iv if x is a minimizer of f + g, then gx gx L 2 [, + ; H, lim t + gxt = gx and g is constant on argmin x H {fx + gx}; v if f or g is uniformly convex, then xt converges strongly to the unique minimizer of f + g as t +. Proof. Consider an arbitrary element x argmin x H {fx + gx} = zer f + g. Similarly to the proof of Theorem iv, we derive for almot every t [, + see the first inequality after 35 gxt gx 2 ẍt, gxt + gx + Γẋt, gxt + gx λt ηλ 2 t ẍt + Γẋt 2 + xt x, ηλtẍt ηλt Γẋt. 42 In what follows we evaluate the right-hand side of the above inequality and introduce to this end the function q : [, + R, qt = gxt gx gx, xt x. Due to the convexity of g one has qt t. 6

17 Further, for almost every t [, + thus Γẋt, gxt + gx = qt = ẋt, gxt gx, γ qt + Γ γ Id ẋt, gxt + gx γ qt + 2a Γ γ Id ẋt 2 + a 2 Γ γ Id gxt gx On the other hand, for almost every t [, + qt = ẍt, gxt gx + ẋt, ddt gxt, hence ẍt, gxt + gx qt + ẋt Further, we have for almost every t [, + see also 6 and 5 λt ẍt + Γẋt 2 = λt ẍt 2 + λt ẍt 2 + d dt d ẋt, Γẋt + λt dt λt Γẋt 2 ẋt, Γẋt λt + γ λt λ 2 t ẋt 2 + γ2 λt ẋt Finally, by multiplying 42 with λt and by using 43, 44, 45 and 36 we obtain after rearranging the terms for almost every t [, + that λt a 2 Γ γ Id gxt gx 2 + d dt 2 η h + q + γ d dt η h + q + η ṗt + d ẋt, Γẋt + η dt λt γ 2 ηλt + γ λt ηλ 2 t η Γ γ Id ẋt 2 + 2a ηλt ẍt 2. and, further, via A6 θ gxt gx 2 + d dt 2 + d ẋt, Γẋt η dt λt η h + q + γ d dt η h + q + η ṗt + θ ẋt 2 + ηλt ẍt This implies that the function t d dt η h + q t + γ η h + q t + η pt + ẋt, Γẋt η λt 47 7

18 is monotonically decreasing. Arguing as in the proof of Theorem 6, by taking into account that λ has positive upper and lower bounds, it follows that η h + q, h, q, x, ẋ, ḣ, q are bounded, ẋ, ẍ and Id prox ηf Id η g x L 2 [, + ; H and lim t + ẋt =. Since dt d Id proxηf Id η g x L 2 [, + ; H see Remark b, we derive from Lemma 4 that lim t + Id proxηf Id η g xt =. As ẍt = Γẋt λt Id prox ηf Id η g xt for every t [, +, we obtain that lim t + ẍt =. From 46 it also follows that gx gx L 2 [, + ; H and, by applying again Lemma 4, it yield lim t + gxt = gx. In this way the statements i, ii and iv are shown. iii Since the function in 47 is monotonically decreasing, from i, ii and iv it follows that the limit lim t + γ η h + u t + η pt exists and it is a real number. By using similar arguments as at the beginning of the proof of statement iii of Theorem 6, by exploiting again 46 one gets that lim t + pt R, hence lim t + η h + u t R. Since x has been chosen as an arbitrary minimizer of f + g, we conclude that for all x argmin x H {fx + gx} the limit exists, where lim Et, t + x R, Et, x = 2η xt x 2 + gxt gx gx, xt x. In what follows we use a similar technique as in [4] see, also, [, Section 4.2]. Since x is bounded, it has at least one weak sequential cluster point. We prove first that each weak sequential cluster point of x is a minimizer of f + g. Let x argmin x H {fx + gx} and t n + as n + be such that xt n n N converges weakly to x. Since xt n, gxt n Gr g, lim n + gxt n = gx and Gr g is sequentially closed in the weak-strong topology, we obtain gx = gx. From 34 written for t = t n, A = f and B = g, by letting n converge to + and by using that Gr f is sequentially closed in the weak-strong topology, we obtain gx fx. This, combined with gx = gx delivers gx fx, hence x zer f + g = argmin x H {fx + gx}. Next we show that x has at most one weak sequential cluster point, which will actually guarantee that it has exactly one weak sequential cluster point. This will imply the weak convergence of the trajectory to a minimizer of f + g. Let x, x 2 be two weak sequential cluster points of x. This means that there exist t n + as n + and t n + as n + such that xt n n N converges weakly to x as n + and xt n n N converges weakly to x 2 as n +. Since x, x 2 argmin x H{fx + gx}, we have lim t + Et, x R and lim t + Et, x 2 R, hence lim t + Et, x Et, x 2 R. We obtain lim t + η xt, x 2 x + gx 2 gx, xt R, which, when expressed by means of the sequences t n n N and t n n N, leads to η x, x 2 x + gx 2 gx, x = η x 2, x 2 x + gx 2 gx, x 2. 8

19 This is the same with η x x gx 2 gx, x 2 x = and by the monotonicity of g we conclude that x = x 2. v The proof of this statement follows in analogy to the one of the corresponding statement of Theorem v written for A = f and B = g. Remark 5 When Γ = γ Id for γ >, in order to verify the left-hand side of the second statement in assumption A6 one can take θ := inf t λt. Thus, 4 amounts in this case to the existence of θ > such that λt γ 2 ηθ + η +. When one takes λt = for every t [, +, this is verified if and only if γ 2 > η +. In other words, A6 allows in this particular setting a more relaxed choice for the parameters γ, η and, beyond the standard assumptions < η 2 and γ 2 > 2 considered in [8]. In the following we provide a rate for the convergence of a convex and Fréchet differentiable function with Lipschitz continuous gradient g : H R along the ergodic trajectory generated by { ẍt + Γẋt + λt gxt = x = u, ẋ = v 48 to the minimum value of g. To this end we make the following assumption A7 λ : [, +, + is locally absolutely continuous and there exists ζ > such that for almost every t [, + we have < ζ γλt λt. 49 Let us mention that the following result is in the spirit of a convergence rate given for the objective function values on a sequence iteratively generated by an inertial-type algorithm recently obtained in [9, Theorem ]. Theorem 6 Let g : H R be a convex and Fréchet differentiable function with /- Lipschitz continuous gradient for > such that argmin x H gx. Let Γ : H H be an operator fulfilling A, λ : [, +, + a function fulfilling A7, u, v H and x : [, + H be the unique strong global solution of 48. Then for every minimizer x of g and every T > it holds g T 2ζT T xtdt gx [ v + γu x 2 + γ Γ γ Id + λ u x 2 ]. 9

20 Proof. The existence and uniqueness of the trajectory of 48 follow from Theorem 2. Let be x argmin x H gx, T > and consider again the function p : [, + R, pt = 2 Γ γ Id xt x, xt x which we defined in 2. By using 48, the formula for the derivative of p, the positive semidefinitness of Γ γ Id, the convexity of g and A7 we get for almost every t [, + d dt 2 ẋt + γxt x 2 + γpt + λtgxt = ẍt + γẋt, ẋt + γxt x + γ Γ γ Idẋt, xt x + λtgxt + λt ẋt, gxt = Γ γ Idẋt λt gxt, ẋt + γxt x + Γ γ Idẋt, γxt x + λtgxt + λt ẋt, gxt γλt gxt, xt x + λtgxt λt γλtgxt gx + λtgx ζgxt gx + λtgx. We obtain after integration 2 ẋt + γxt x 2 + γpt + λt gxt 2 ẋ + γx x 2 + γp + λgx +ζ T gxt gx dt λt λgx. Be neglecting the nonnegative terms on the left-hand side of this inequality and by using that gxt gx, it yields ζ T gxt gx dt 2 v + γu x 2 + γp + λgu gx. The conclusion follows by using p = 2 Γ γ Idu x, u x 2 Γ γ Id u x 2, gu gx 2 u x 2, which is a consequence of the descent lemma see [22, Lemma.2.3] and notice that gx =, and the inequality which holds since g is convex. T g xtdt gx T T T gxt gx dt, Remark 7 Under assumption A7 on the relaxation function λ, we obtain in the above theorem only the convergence of the function g along the ergodic trajectory to a global 2

21 minimum value. If one is interested also in the weak convergence of the trajectory to a minimizer of g, this follows via Theorem 6 when λ is assumed to fulfill A2 notice that if x converges weakly to a minimizer of g, then from the Cesaro-Stolz Theorem one also obtains the weak convergence of the ergodic trajectory T T minimizer. Take a, b > /γ 2 and ρ γ. Then λt = ae ρt + b T xtdt to the same is an example of a relaxation function which verifies assumption A2 with < θ bγ 2 and assumption A7 with < ζ γb/a + b 2. 5 Variable damping parameters In this section we carry out a similar analysis as in the previous section, however, for second order dynamical systems having as damping coefficient a function depending on time. As starting point for our investigations we consider the dynamical system { ẍt + γtẋt + λtbxt = 5 x = u, ẋ = v, where B : H H is a cocoercive operator, λ, γ : [, + [, + are Lebesgue measurable functions and u, v H. The existence and uniqueness of strong global solutions of 5 can be shown by using the same techniques as in the proof of Theorem 2, provided that λ, γ L loc [, +. For the convergence of the trajectories we need the following assumption A2 λ, γ : [, +, + are locally absolutely continuous and there exists θ > such that for almost every t [, + we have γt λt and γ2 t λt + θ. 5 According to Definition and Remark a, λt, γt exist for almost almost every t [, + and λ, γ are Lebesgue integrable on each interval [, b] for < b < +. This combined with γt λt, yields the existence of a positive lower bound for λ and for a positive upper bound for γ. Using further the second assumption in 5 provides also a positive upper bound for λ and a positive lower bound for γ. The couple of functions λt = ae ρt + b and γt = a e ρ t + b, where a, a, ρ, ρ and b, b > fulfill the inequality b 2 b > /, verify the conditions in assumption A2. We state now the convergence result. 2

22 Theorem 8 Let B : H H be a -cocoercive operator for > such that zer B := {u H : Bu = } =, λ, γ : [, +, + be functions fulfilling A2 and u, v H. Let x : [, + H be the unique strong global solution of 5. Then the following statements are true: i the trajectory x is bounded and ẋ, ẍ, Bx L 2 [, + ; H; ii lim t + ẋt = lim t + ẍt = lim t + Bxt = ; iii xt converges weakly to an element in zer B as t +. Proof. With the notations in the proof of Theorem 6 and by appealing to similar arguments one obtains for almost every t [, + or, equivalently, ḧt + ḧt + γtḣt + γtḣt + γt λt d dt Combining this inequality with γt d ẋt 2 λt dt = d dt λt ẍt + γtẋt 2 ẋt 2 ẋt 2 + γ 2 t λt γt λt ẋt 2 ẋt 2 + λt ẍt 2. γtλt γt λt λ 2 ẋt 2 t and γtḣt = d dt γht γtht d γht, 52 dt it yields for almost every t [, + ḧt + d dt γht+ d γt dt λt ẋt 2 + γ 2 t γtλt + γt λt + λt λ 2 ẋt 2 + t Now, assumption A2 delivers for almost every t [, + the inequality ḧt + d dt γht + d γt dt λt ẋt 2 + θ ẋt 2 + λt ẍt 2. λt ẍt 2. γt This implies that the function t ḣt+γtht+ λt ẋt 2 is monotonically decreasing and from here one obtains the conclusion following the lines of the proof of Theorem 6, by taking also into account that lim t + γt,. When T : H H is a nonexpansive operator one obtains for the dynamical system { ẍt + γtẋt + λt xt T xt = x = u, ẋ = v 53 and by making the assumption 22

23 A3 λ, γ : [, +, + are locally absolutely continuous and there exists θ > such that for almost every t [, + we have γt λt and γ2 t λt 2 + θ 54 the following result which can been seen as a counterpart to Corollary 7. Corollary 9 Let T : H H be a nonexpansive operator such that Fix T = {u H : T u = u}, λ, γ : [, +, + be functions fulfilling A3 and u, v H. Let x : [, + H be the unique strong global solution of 53. Then the following statements are true: i the trajectory x is bounded and ẋ, ẍ, Id T x L 2 [, + ; H; ii lim t + ẋt = lim t + ẍt = lim t + Id T xt = ; iii xt converges weakly to a point in Fix T as t +. When R : H H is an α-averaged operator for α, one obtains for the dynamical system { ẍt + γtẋt + λt xt Rxt = 55 x = u, ẋ = v, and by making the assumption A4 λ, γ : [, +, + are locally absolutely continuous and there exists θ > such that for almost every t [, + we have γt λt and γ2 t λt 2α + θ 56 the following result which can been seen as a counterpart to Corollary 9. Corollary 2 Let R : H H be an α-averaged operator for α, such that Fix R, λ, γ : [, +, + be functions fulfilling A4 and u, v H. Let x : [, + H be the unique strong global solution of 55. Then the following statements are true: i the trajectory x is bounded and ẋ, ẍ, Id Rx L 2 [, + ; H; ii lim t + ẋt = lim t + ẍt = lim t + Id Rxt = ; iii xt converges weakly to a point in Fix R as t +. We come now to the monotone inclusion problem find Ax + Bx, where A : H H is a maximally monotone operator and B : H H is a -cocoercive operator for > and assign to it the second order dynamical system { ] ẍt + γtẋt + λt [xt J ηa xt ηbxt = 57 x = u, ẋ = v. and make the assumption 23

24 A5 λ, γ : [, +, + are locally absolutely continuous and there exists θ > such that for almost every t [, + we have γt λt and γ2 t λt 2 + θ. 58 δ Theorem 2 Let A : H H be a maximally monotone operator and B : H H be -cocoercive operator for > such that zera + B. Let η, 2 and set δ := min{, /η} + /2. Let λ, γ : [, +, + be functions fulfilling A5, u, v H and x : [, + H be the unique strong global solution of 57. Then the following statements are true: i the trajectory x is bounded and ẋ, ẍ, Id J ηa Id ηb x L 2 [, + ; H; ii lim t + ẋt = lim t + ẍt = lim t + Id JηA Id ηb xt = ; iii xt converges weakly to a point in zera + B as t + ; iv if x zera+b, then Bx Bx L 2 [, + ; H, lim t + Bxt = Bx and B is constant on zera + B; v if A or B is uniformly monotone, then xt converges strongly to the unique point in zera + B as t +. Proof. The statements i-iii follow by using the same arguments as in the proof of Theorem. iv We use again the notations in the proof of Theorem 6. Let be an arbitrary x zera+b. From the definition of the resolvent we have for almost every t [, + Bxt γt ηλtẍt ηλtẋt A γt + + xt, 59 λtẍt λtẋt which combined with Bx Ax and the monotonicity of A leads to γt + λtẍt λtẋt + xt x, Bxt + Bx γt ηλtẍt ηλtẋt. 6 The cocoercivity of B yields for almost every t [, + Bxt Bx 2 γt + Bxt + Bx λtẍt λtẋt, ηλ 2 ẍt + γtẋt 2 t + xt x, γt ηλtẍt ηλtẋt γt λtẍt λtẋt + 2 Bxt Bx 2 + xt x, γt ηλtẍt ηλtẋt. From xt z, γt ηλtẍt ηλtẋt we obtain for almost every t [, + λt Bxt Bz 2 + ḧt + γt ḣt 2 η = ḧt + γt ḣt ẋt 2 6 ηλt 24 2λt ẍt + γtẋt 2 + η ẋt 2.

25 The conclusion follows in analogy to the proof of iv in Theorem by using also 52. v Let x be the unique element of zera + B. When A is uniformly monotone with corresponding function φ A : [, + [, + ], which is increasing and vanishes only at, similarly to the proof of statement v in Theorem the following inequality can be derived for almost every t [, + γt φ A + + xt z λtẍt λtẋt γt + Bxt + Bz + xt z, γt λtẍt λtẋt, ηλtẍt ηλtẋt. γt This yields lim t + φ A λtẍt + λtẋt + xt z = and from here the conclusion is immediate. The case when B is uniformly monotone is to be addressed in analogy to corresponding part of the proof of Theorem v. Remark 22 In the light of the arguments provided in Remark, one can see that the statements in Theorem 2 remain valid also for η = 2. When particularizing this setting to the solving of the optimization problem min fx + gx, x H where f : H R {+ } is a proper, convex and lower semicontinuous function and g : H R is a convex and Fréchet differentiable function with /-Lipschitz continuous gradient for >, via the second order dynamical system { ] ẍt + γtẋt + λt [xt prox ηf xt η gxt = 62 x = u, ẋ = v, Corollary 2 gives rise to the following result. Corollary 23 Let f : H R {+ } by a proper, convex and lower semicontinuous function and g : H R be a convex and Fréchet differentiable function with /- Lipschitz continuous gradient for > such that argmin x H {fx + gx}. Let η, 2] and set δ := min{, /η} + /2. Let λ, γ : [, +, + be functions fulfilling A5, u, v H and x : [, + H be the unique strong global solution of 62. Then the following statements are true: i the trajectory x is bounded and ẋ, ẍ, Id prox ηf Id η g x L 2 [, + ; H; ii lim t + ẋt = lim t + ẍt = lim t + Id proxηf Id η g xt = ; iii xt converges weakly to a minimizer of f + g as t + ; iv if x is a minimizer of f + g, then gx gx L 2 [, + ; H, lim t + gxt = gx and g is constant on argmin x H {fx + gx}; v if f or g is uniformly convex, then xt converges strongly to the unique minimizer of f + g as t +. As it was also the case in the previous section, we can weaken the choice of the step size in Corollary 23 through the following assumption 25

26 A6 λ, γ : [, +, + are locally absolutely continuous and there exists θ > such that for almost every t [, + we have γt λt and γ2 t λt ηθ + η Corollary 24 Let f : H R {+ } by a proper, convex and lower semicontinuous function and g : H R be a convex and Fréchet differentiable function with /- Lipschitz continuous gradient for > such that argmin x H {fx + gx} =. Let be η >, λ, γ : [, +, + be functions fulfilling A6, u, v H and x : [, + H be the unique strong global solution of 62. Then the following statements are true: i the trajectory x is bounded and ẋ, ẍ, Id prox ηf Id η g x L 2 [, + ; H; ii lim t + ẋt = lim t + ẍt = lim t + Id proxηf Id η g xt = ; iii xt converges weakly to a minimizer of f + g as t + ; iv if x is a minimizer of f + g, then gx gx L 2 [, + ; H, lim t + gxt = gx and g is constant on argmin x H {fx + gx}; v if f or g is uniformly convex, then xt converges strongly to the unique minimizer of f + g as t +. Proof. The proof follows in the lines of the one given for Corollary 4 and relies on the following key inequality, which holds for almost every t [, +, λt gxt gz 2 + d dt 2 η h + q + d γt dt η h + q + d γt η dt λt ẋt 2 γ 2 t γtλt + γt λt + + ηλt ηλ 2 t ẋt 2 + η ηλt ẍt 2, where x denotes a minimizer of f + g. This relation gives rise via A6 to λt gxt gz 2 + d dt 2 η h + q + d γt dt η h + q + d γt η dt λt ẋt 2 + θ ẋt 2 + ηλt ẍt 2, which can be seen as the counterpart to relation 46. Finally, we address the convergence rate of a convex and Fréchet differentiable function with Lipschitz continuous gradient g : H R along the ergodic trajectory generated by { ẍt + γtẋt + λt gxt = x = u, ẋ = v 64 to its global minimum value, when making the following assumption A7 λ : [, +, + is locally absolutely continuous, γ : [, +, + is twice differentiable and there exists ζ > such that for almost every t [, + we have < ζ γtλt λt, γt and 2 γtγt γt

27 Theorem 25 Let g : H R be a convex and Fréchet differentiable function with /-Lipschitz continuous gradient for > such that argmin x H gx. Let λ, γ : [, +, + be functions fulfilling A7 u, v H and x : [, + H be the unique strong global solution of 64. Then for every minimizer x of g and every T > it holds T g xtdt gx T 2ζT [ v + γu x 2 + λ ] γ u x 2. Proof. Let x argmin x H gx and T >. By using 64, the convexity of g and A7 we get for almost every t [, + d dt 2 ẋt + γtxt x 2 + λtgxt γt 2 xt x 2 = ẍt + γtxt x + γtẋt, ẋt + γtxt x γt 2 xt x 2 γt ẋt, xt x + λtgxt + λt ẋt, gxt = γtλt gxt, xt x + λtgxt + γtγt γt xt x 2 2 γtλt gxt, xt x + λtgxt λt γtλtgxt gx + λtgx ζgxt gx + λtgx. We obtain after integration 2 ẋt + γxt x 2 + λt gxt γt 2 xt x 2 2 ẋ + γx x 2 + λgx γ 2 x x 2 +ζ T gxt gx dt λt λgx. The conclusion follows from here as in the proof of Theorem 6. Remark 26 A similar comment as in Remark 7 can be made also in this context. For a, a, ρ, ρ and b, b > fulfilling the inequalities b 2 b > / and ρ b one can prove that the functions λt = ae ρt + b and γt = a e ρ t + b, verify assumption A2 in Theorem 8 with < θ b 2 b and assumption A7 in Theorem 25 with < ζ bb /a + b 2. Hence, for this choice of the relaxation and damping function, one has convergence of the objective function g along the ergodic trajectory to its global minimum value as well as weak convergence of the trajectory to a minimizer of g. 27

Approaching monotone inclusion problems via second order dynamical systems with linear and anisotropic damping

Approaching monotone inclusion problems via second order dynamical systems with linear and anisotropic damping March 0, 206 3:4 WSPC Proceedings - 9in x 6in secondorderanisotropicdamping206030 page Approaching monotone inclusion problems via second order dynamical systems with linear and anisotropic damping Radu

More information

Second order forward-backward dynamical systems for monotone inclusion problems

Second order forward-backward dynamical systems for monotone inclusion problems Second order forward-backward dynamical systems for monotone inclusion problems Radu Ioan Boţ Ernö Robert Csetnek March 2, 26 Abstract. We begin by considering second order dynamical systems of the from

More information

1 Introduction and preliminaries

1 Introduction and preliminaries Proximal Methods for a Class of Relaxed Nonlinear Variational Inclusions Abdellatif Moudafi Université des Antilles et de la Guyane, Grimaag B.P. 7209, 97275 Schoelcher, Martinique abdellatif.moudafi@martinique.univ-ag.fr

More information

ADMM for monotone operators: convergence analysis and rates

ADMM for monotone operators: convergence analysis and rates ADMM for monotone operators: convergence analysis and rates Radu Ioan Boţ Ernö Robert Csetne May 4, 07 Abstract. We propose in this paper a unifying scheme for several algorithms from the literature dedicated

More information

On the convergence rate of a forward-backward type primal-dual splitting algorithm for convex optimization problems

On the convergence rate of a forward-backward type primal-dual splitting algorithm for convex optimization problems On the convergence rate of a forward-backward type primal-dual splitting algorithm for convex optimization problems Radu Ioan Boţ Ernö Robert Csetnek August 5, 014 Abstract. In this paper we analyze the

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

INERTIAL ACCELERATED ALGORITHMS FOR SOLVING SPLIT FEASIBILITY PROBLEMS. Yazheng Dang. Jie Sun. Honglei Xu

INERTIAL ACCELERATED ALGORITHMS FOR SOLVING SPLIT FEASIBILITY PROBLEMS. Yazheng Dang. Jie Sun. Honglei Xu Manuscript submitted to AIMS Journals Volume X, Number 0X, XX 200X doi:10.3934/xx.xx.xx.xx pp. X XX INERTIAL ACCELERATED ALGORITHMS FOR SOLVING SPLIT FEASIBILITY PROBLEMS Yazheng Dang School of Management
