A proximal method for monotone inclusions in Hilbert spaces, with complexity $O(1/k^2)$.


Hedy ATTOUCH, Université Montpellier 2, ACSIOM, I3M UMR CNRS 5149. Joint work with M. Marques Alves and B. F. Svaiter. Effort sponsored by the Air Force Office of Scientific Research, USAF, grant number FA. GDR MOA, Limoges, December 3-5, 2014.

1A. General presentation: dynamical approach

$H$ real Hilbert space; $\|x\|^2 = \langle x, x\rangle$; $A : H \rightrightarrows H$ maximal monotone operator.

Fast methods for solving: find $x \in H$ such that
$$0 \in Ax. \qquad (1)$$

$(I + \lambda A)^{-1} : H \to H$ is the resolvent of index $\lambda > 0$ of $A$.

Fixed point formulation of (1): $x - (I + \lambda A)^{-1}x = 0$.

Dynamical system: control variable $t \mapsto \lambda(t)$,
$$\dot{x}(t) + x(t) - (I + \lambda(t)A)^{-1}x(t) = 0. \qquad (2)$$
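The link with classical algorithms can be made explicit by discretization (an added remark, not from the slides): an explicit Euler step of size $h_k$ applied to (2) gives
$$x_{k+1} = (1 - h_k)\,x_k + h_k\,(I + \lambda_k A)^{-1}x_k,$$
and the unit step $h_k = 1$ recovers the classical proximal point iteration $x_{k+1} = (I + \lambda_k A)^{-1}x_k$ of Martinet and Rockafellar.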

Maximal monotone operators

$A : H \rightrightarrows H$ monotone operator: for every $y_1 \in Ax_1$, $y_2 \in Ax_2$,
$$\langle y_2 - y_1, x_2 - x_1\rangle \ge 0.$$
$A : H \rightrightarrows H$ is a maximal monotone operator if it is monotone, and it is maximal among monotone operators for the graph inclusion.

Basic examples:
$A = \partial f$, with $f : H \to \mathbb{R} \cup \{+\infty\}$ convex lsc.
$A = (\partial_x L, -\partial_y L)$, with $L : H \times H \to \mathbb{R}$ convex-concave.
$A = I - T$, with $T : H \to H$ a nonexpansive operator.

Resolvents: for any $\lambda > 0$, $R(I + \lambda A) = H$, and $J^A_\lambda = (I + \lambda A)^{-1} : H \to H$ is nonexpansive and everywhere defined.
For $A = \partial f$: $J^A_\lambda x = \operatorname{prox}_{\lambda f}(x) = \operatorname{argmin}_y \{ f(y) + \frac{1}{2\lambda}\|x - y\|^2 \}$.

Generators of semigroups of contractions: $\dot{x}(t) + Ax(t) \ni 0$, $x(0) = x_0$; $S(t)x_0 = x(t)$; ergodic convergence!
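To make the resolvent concrete, here is a minimal numerical sketch (an added illustration, not from the slides), assuming $H = \mathbb{R}$ and $A = \partial f$ with $f = |\cdot|$, whose resolvent is the soft-thresholding map:

```python
# Minimal check (illustrative names): for A = ∂f with f(x) = |x| on H = R,
# J_λ = (I + λ∂f)^{-1} = prox_{λf} is soft-thresholding. We verify the argmin
# characterization of prox on a grid, and nonexpansiveness of J_λ on samples.
import numpy as np

def soft_threshold(x, lam):
    """Closed-form resolvent of A = ∂|.| : J_λ(x) = sign(x) * max(|x| - λ, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

lam, x = 0.7, 1.3
# brute-force argmin of y -> |y| + (1/(2λ)) (x - y)^2
ys = np.linspace(-3.0, 3.0, 200001)
y_star = ys[np.argmin(np.abs(ys) + (x - ys) ** 2 / (2.0 * lam))]
assert abs(y_star - soft_threshold(x, lam)) < 1e-4

# nonexpansiveness: |J(x1) - J(x2)| <= |x1 - x2| on random pairs
rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=100), rng.normal(size=100)
assert np.all(np.abs(soft_threshold(x1, lam) - soft_threshold(x2, lam))
              <= np.abs(x1 - x2) + 1e-12)
print("prox_{λf} = soft-thresholding verified; J_λ nonexpansive on samples")
```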

1B. General presentation: dynamical approach

Differential-algebraic system:
$$\dot{x}(t) + x(t) - (I + \lambda(t)A)^{-1}x(t) = 0, \qquad \text{(LSP)}$$
$$\lambda(t)\,\|(I + \lambda(t)A)^{-1}x(t) - x(t)\| = \theta. \qquad (3)$$
Here $(x(\cdot), \lambda(\cdot))$ are the unknowns and $\theta$ is a positive parameter.

Closed-loop control: $\lambda(t) = \theta / \|\dot{x}(t)\|$. The proximal parameter is inversely proportional to the speed.

Asymptotic stabilization ($t \to +\infty$): $\dot{x}(t) \to 0 \implies \|(I + \lambda(t)A)^{-1}x(t) - x(t)\| \to 0 \implies \lambda(t) \to +\infty$.

(LSP): Large Step Proximal method.

1C. General presentation: dynamical approach

Cauchy problem, main results:
$$\dot{x}(t) + x(t) - (I + \lambda(t)A)^{-1}x(t) = 0, \qquad \text{(LSP)}$$
$$\lambda(t)\,\|(I + \lambda(t)A)^{-1}x(t) - x(t)\| = \theta, \qquad x(0) = x_0 \in H.$$

1. Existence and uniqueness of a global solution $(x, \lambda)$ of (LSP).
2. If $A^{-1}(0) \neq \emptyset$: $\lambda(t) \to +\infty$ and $w\text{-}\lim_{t\to+\infty} x(t) = x_\infty \in A^{-1}(0)$.
3. If $A = \partial f$, with $f : H \to \mathbb{R} \cup \{+\infty\}$ convex, lsc, proper, and $x_0 \in \operatorname{dom} f$:
$$f(x(t)) - \inf_H f = O\!\left(\frac{1}{t^2}\right).$$

1D. General presentation: Large step proximal method

A large-step proximal method for convex optimization:

(0) $x_0 \in \operatorname{dom} f$, $\sigma \in [0,1[$, $\theta > 0$ given; set $k = 1$;
(1) choose $\lambda_k > 0$, and find $x_k, v_k \in H$, $\varepsilon_k \ge 0$ such that
$$v_k \in \partial_{\varepsilon_k} f(x_k), \qquad (4)$$
$$\|\lambda_k v_k + x_k - x_{k-1}\|^2 + 2\lambda_k\varepsilon_k \le \sigma^2\,\|x_k - x_{k-1}\|^2, \qquad (5)$$
$$\lambda_k\,\|x_k - x_{k-1}\| \ge \theta \quad \text{or} \quad v_k = 0; \qquad (6)$$
(2) if $v_k = 0$ STOP, output $x_k$; otherwise $k \leftarrow k+1$ and go to step 1.

Main result: $f(x_k) - \inf_H f \le \dfrac{C}{k^2}$.

Contents

1. General presentation.
2. Algebraic relationship linking $\lambda$ and $x$.
3. Existence and uniqueness for the Cauchy problem.
4. Asymptotic behavior.
5. Link with the regularized Newton system.
6. The convex subdifferential case.
7. A large-step proximal method for convex optimization.
8. An $O(1/\sqrt{\varepsilon})$ proximal-Newton method for convex optimization.
9. Perspectives, open questions.
10. Appendix: some examples.

2A. Algebraic relationship $\lambda\,\|(I + \lambda A)^{-1}x - x\| = \theta$

Set $\varphi : [0,\infty[\ \times H \to \mathbb{R}_+$,
$$\varphi(\lambda, x) := \lambda\,\|x - (I + \lambda A)^{-1}x\|. \qquad (7)$$

Some classical results on resolvents ($\lambda > 0$, $\mu > 0$, $x \in H$):
(1) $J^A_\lambda = (I + \lambda A)^{-1} : H \to H$ is nonexpansive;
(2) $J^A_\lambda x = J^A_\mu\!\left(\frac{\mu}{\lambda}\,x + \left(1 - \frac{\mu}{\lambda}\right)J^A_\lambda x\right)$, the resolvent equation;
(3) $\|J^A_\lambda x - J^A_\mu x\| \le |\lambda - \mu|\,\|A_\lambda x\|$, where $A_\lambda$ is the Yosida approximation of $A$;
(4) $\lim_{\lambda \to 0} J^A_\lambda x = \operatorname{proj}_{\overline{\operatorname{dom}(A)}}\,x$;
(5) $\lim_{\lambda \to +\infty} J^A_\lambda x = \operatorname{proj}_{A^{-1}(0)}\,x$, if $A^{-1}(0) \neq \emptyset$.
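As a quick sanity check of the resolvent equation (2) above, here is a numeric spot-check (an added illustration, not from the slides), again with the closed-form soft-thresholding resolvent of $A = \partial|\cdot|$ on $\mathbb{R}$:

```python
# Verify J_λ x = J_μ( (μ/λ) x + (1 - μ/λ) J_λ x ) on random samples,
# for the illustrative choice A = ∂|.| on R (resolvent = soft-thresholding).
import numpy as np

def J(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

lam, mu = 1.3, 0.4
rng = np.random.default_rng(1)
for x in rng.normal(scale=3.0, size=5):
    lhs = J(x, lam)
    rhs = J((mu / lam) * x + (1.0 - mu / lam) * J(x, lam), mu)
    assert np.isclose(lhs, rhs), (lhs, rhs)
print("resolvent equation verified on samples")
```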

2B. Algebraic relationship linking $\lambda$ and $x$

Properties of $\varphi(\lambda, x) := \lambda\,\|x - (I + \lambda A)^{-1}x\|$:

$\varphi(\lambda, x) = 0$ if and only if $0 \in A(x)$.

For any $x_1, x_2 \in H$ and $\lambda > 0$,
$$|\varphi(\lambda, x_1) - \varphi(\lambda, x_2)| \le \lambda\,\|x_2 - x_1\|.$$

For any $x \in H$ and $0 < \lambda_1 \le \lambda_2$,
$$\frac{\lambda_2}{\lambda_1}\,\varphi(\lambda_1, x) \le \varphi(\lambda_2, x) \le \left(\frac{\lambda_2}{\lambda_1}\right)^{2} \varphi(\lambda_1, x). \qquad (8)$$

For any $x \notin A^{-1}(0)$, $\lambda \in [0,\infty[\ \mapsto \varphi(\lambda, x) \in \mathbb{R}_+$ is continuous and strictly increasing, with $\varphi(0, x) = 0$ and $\lim_{\lambda \to +\infty} \varphi(\lambda, x) = +\infty$.

2C. Algebraic relationship linking $\lambda$ and $x$

For $x \notin A^{-1}(0)$, $\lambda \in [0,\infty[\ \mapsto \varphi(\lambda, x) \in \mathbb{R}_+$ is continuous and strictly increasing, from $0$ to $+\infty$.

Definition: $\Lambda_\theta : H \setminus A^{-1}(0) \to\ ]0,\infty[$,
$$\Lambda_\theta(x) := \varphi(\cdot, x)^{-1}(\theta). \qquad (9)$$

[Figure: graph of $\lambda \mapsto \varphi(\lambda, x)$; $\Lambda_\theta(x)$ is the abscissa at which the curve crosses the level $\theta$.]

Example: $A = \operatorname{rot}(0; \frac{\pi}{2})$, the rotation of angle $\pi/2$ around the origin.
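Since $\varphi(\cdot, x)$ is continuous and strictly increasing from $0$ to $+\infty$, $\Lambda_\theta(x)$ can be computed by bracketing and bisection. A minimal sketch (an added illustration, not from the slides), assuming $A = \partial|\cdot|$ on $\mathbb{R}$ so that the resolvent is available in closed form:

```python
# Λ_θ(x) by doubling + bisection, exploiting the monotonicity of φ(., x).
# Illustrative operator: A = ∂|.| on R, with soft-thresholding resolvent.
import numpy as np

def J(x, lam):                       # resolvent of A = ∂|.|
    return np.sign(x) * max(abs(x) - lam, 0.0)

def phi(lam, x):                     # φ(λ, x) = λ |x - J_λ x|
    return lam * abs(x - J(x, lam))

def Lambda_theta(x, theta, tol=1e-12):
    lo, hi = 0.0, 1.0
    while phi(hi, x) < theta:        # bracket the root
        hi *= 2.0
    while hi - lo > tol * max(hi, 1.0):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid, x) < theta else (lo, mid)
    return 0.5 * (lo + hi)

x, theta = 2.0, 0.5
lam = Lambda_theta(x, theta)
print(lam, phi(lam, x))              # φ(Λ_θ(x), x) ≈ θ
```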

3A. Existence result for the Cauchy problem

$(x, \lambda)$ system:
$$\dot{x}(t) + x(t) - (I + \lambda(t)A)^{-1}x(t) = 0,$$
$$\lambda(t)\,\|(I + \lambda(t)A)^{-1}x(t) - x(t)\| = \theta, \qquad (10)$$
$$x(0) = x_0 \in H \setminus A^{-1}(0).$$

$x$ system:
$$\dot{x}(t) + x(t) - (I + \Lambda_\theta(x(t))A)^{-1}x(t) = 0; \qquad x(0) = x_0.$$
Equivalently, $\dot{x}(t) = F(x(t))$; $x(0) = x_0$.

3B. Existence result for the Cauchy problem

Continuity properties of $F$:
$$F : \Omega = H \setminus A^{-1}(0) \to H, \qquad F(x) := J^A_{\Lambda_\theta(x)}x - x,$$
is locally Lipschitz continuous.

The map $x \in H \setminus A^{-1}(0) \mapsto \dfrac{1}{\Lambda_\theta(x)}$ is Lipschitz continuous with constant $\dfrac{1}{\theta}$.

Cauchy problem: $\dot{x}(t) = F(x(t))$; $x(0) = x_0$.
Cauchy-Lipschitz theorem: local existence and uniqueness.
Global existence: estimate $0 \le \dot{\lambda}(\cdot) \le \lambda(\cdot)$.

3C. Existence result for the Cauchy problem

(LSP):
$$\dot{x}(t) + x(t) - (I + \lambda(t)A)^{-1}x(t) = 0,$$
$$\lambda(t)\,\|(I + \lambda(t)A)^{-1}x(t) - x(t)\| = \theta, \qquad x(0) = x_0 \in H \setminus A^{-1}(0).$$

Theorem 1 (existence and uniqueness). There exists a unique solution $(x, \lambda) : [0,+\infty[\ \to H \times \mathbb{R}_{++}$ of (LSP); $x(\cdot)$ is $C^1$, and $\lambda(\cdot)$ is locally Lipschitz continuous. Moreover,
(i) $\lambda(\cdot)$ is non-decreasing, with $0 \le \dot{\lambda}(\cdot) \le \lambda(\cdot)$;
(ii) $t \mapsto \|J^A_{\lambda(t)}x(t) - x(t)\|$ is non-increasing.

4. Asymptotic behavior

Theorem 2 (convergence). Suppose that $A^{-1}(0) \neq \emptyset$. Let $(x, \lambda) : [0,+\infty[\ \to H \times \mathbb{R}_{++}$ be the unique solution of the Cauchy problem (LSP) with $x_0 \in H \setminus A^{-1}(0)$. Then,
(i) $\|\dot{x}(t)\| = \|x(t) - J^A_{\lambda(t)}x(t)\| \le d_0/\sqrt{2t}$; hence $\lim_{t\to+\infty} \dot{x}(t) = 0$;
(ii) $\lambda(t) \ge \frac{\theta}{d_0}\sqrt{2t}$; hence $\lim_{t\to+\infty} \lambda(t) = +\infty$;
(iii) $w\text{-}\lim_{t\to+\infty} x(t) = x_\infty$ exists, for some $x_\infty \in A^{-1}(0)$,
where $d_0$ is the distance from $x_0$ to $A^{-1}(0)$.

Weak convergence: Opial's lemma, Fejér monotonicity property.
Strong convergence: $A$ strongly monotone; $A = \partial f$ with $f$ inf-compact.

5A. Link with the regularized Newton system

$x_0 \notin A^{-1}(0)$, $(x, \lambda) : [0,+\infty[\ \to H \times \mathbb{R}_{++}$ solution of (LSP). Set
$$y(t) := (I + \lambda(t)A)^{-1}x(t), \qquad v(t) := \frac{1}{\lambda(t)}\,\bigl(x(t) - y(t)\bigr). \qquad (11)$$

Claim: $y(\cdot)$ is a solution of a regularized Newton system.

Time differentiation of $\lambda(t)v(t) + y(t) - x(t) = 0$, combined with (LSP), gives
$$v(t) \in Ay(t); \qquad \dot{y}(t) + \lambda(t)\dot{v}(t) + \bigl(\dot{\lambda}(t) + \lambda(t)\bigr)v(t) = 0. \qquad (12)$$

5B. Link with the regularized Newton system

Suppose $A : H \to H$ is a smooth operator.

Classical Newton method:
$$A'(y_k)(y_{k+1} - y_k) + A(y_k) = 0. \qquad (13)$$

Continuous Newton method:
$$A'(y(t))\dot{y}(t) + A(y(t)) = 0, \quad \text{i.e.,} \quad \frac{d}{dt}A(y(t)) + A(y(t)) = 0.$$

Levenberg-Marquardt regularization:
$$\mu(t)\dot{y}(t) + \frac{d}{dt}A(y(t)) + A(y(t)) = 0.$$

5C. Link with the regularized Newton system

Time rescaling:
$$\tau(t) = \int_0^t \frac{\lambda(u) + \dot{\lambda}(u)}{\lambda(u)}\,du = t + \ln\bigl(\lambda(t)/\lambda(0)\bigr). \qquad (14)$$
Since $1 \le \frac{\lambda(u) + \dot{\lambda}(u)}{\lambda(u)} \le 2$, we have $t \le \tau(t) \le 2t$.

Set $y(t) = \tilde{y}(\tau(t))$, $v(t) = \tilde{v}(\tau(t))$. Then
$$\tilde{v} \in A\tilde{y}; \qquad \frac{1}{\lambda \circ \tau^{-1}}\,\frac{d\tilde{y}}{d\tau} + \frac{d\tilde{v}}{d\tau} + \tilde{v} = 0. \qquad (15)$$

Regularized Newton system [AS, SICON 2011]. Levenberg-Marquardt regularization parameter $\frac{1}{\lambda \circ \tau^{-1}} \to 0$ as $\tau \to +\infty$.

6A. The subdifferential case

Suppose $\operatorname{argmin} f \neq \emptyset$; $d_0 := \inf\{\|x_0 - z\| : z \in \operatorname{argmin} f\} = \|x_0 - \bar{z}\|$.

Theorem 3 (rate of convergence). Suppose that $f(x(0)) < +\infty$. Then,
(i) $t \mapsto f(x(t))$ and $t \mapsto f(y(t))$ are non-increasing;
(ii) set $\kappa = \sqrt{\theta/d_0^3}$. For any $t \ge 0$,
$$f(x(t)) - f(\bar{z}) \le \frac{f(x_0) - f(\bar{z})}{\left(1 + \dfrac{t\,\kappa\,\sqrt{f(x_0) - f(\bar{z})}}{2 + 3\kappa\sqrt{f(x_0) - f(\bar{z})}}\right)^{2}} = O\!\left(\frac{1}{t^2}\right). \qquad (16)$$

6B. The subdifferential case

Differential inequality:
$$\frac{d}{dt}\beta(t) \le -c\,\beta(t)^{3/2}, \qquad \beta(t) := f(x(t)) - f(\bar{z}).$$

Step 1: $\dfrac{d}{dt}f(x(t)) \le f(y(t)) - f(x(t))$.

Integrate $\dot{x} + x = y$ on $[t, t+h]$:
$$x(t+h) = e^{-h}x(t) + \int_t^{t+h} e^{u-(t+h)}\,y(u)\,du,$$
and apply Jensen's inequality (convexity of $f$; the weights $e^{-h}$ and $e^{u-(t+h)}$ sum to one), together with the fact that $f(y(\cdot))$ is non-increasing:
$$f(x(t+h)) \le e^{-h}f(x(t)) + \int_t^{t+h} e^{u-(t+h)}\,f(y(u))\,du \le e^{-h}f(x(t)) + (1 - e^{-h})\,f(y(t)).$$
Hence
$$\frac{f(x(t+h)) - f(x(t))}{h} \le \frac{1 - e^{-h}}{h}\,\bigl(f(y(t)) - f(x(t))\bigr),$$
and Step 1 follows by letting $h \to 0$.
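For completeness, the differential inequality integrates as follows (a standard step the slides leave implicit, added here): as long as $\beta(t) > 0$,
$$\frac{d}{dt}\,\beta(t)^{-1/2} = -\frac{1}{2}\,\beta(t)^{-3/2}\,\dot{\beta}(t) \ge \frac{c}{2}, \quad\text{so}\quad \beta(t)^{-1/2} \ge \beta(0)^{-1/2} + \frac{c}{2}\,t,$$
hence
$$\beta(t) \le \frac{\beta(0)}{\bigl(1 + \frac{c}{2}\,t\,\sqrt{\beta(0)}\bigr)^{2}} = O\!\left(\frac{1}{t^{2}}\right),$$
which, with $c = \kappa\big/\bigl(1 + \frac{3\kappa}{2}(f(x_0) - f(\bar{z}))^{1/2}\bigr)$ from Step 2 below, is exactly (16).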

6C. The subdifferential case

Step 2:
$$f(y(t)) - f(x(t)) \le -\,\frac{\kappa\,\bigl(f(x(t)) - f(\bar{z})\bigr)^{3/2}}{1 + (3\kappa/2)\,\bigl(f(x_0) - f(\bar{z})\bigr)^{1/2}}.$$

Since $v(t) \in \partial f(y(t))$, $\lambda(t)v(t) = x(t) - y(t)$, and $\lambda(t)^2\|v(t)\| = \theta$,
$$f(x(t)) \ge f(y(t)) + \langle x(t) - y(t), v(t)\rangle = f(y(t)) + \lambda(t)\,\|v(t)\|^2 = f(y(t)) + \sqrt{\theta}\,\|v(t)\|^{3/2}.$$

Since $v(t) \in \partial f(y(t))$, for any $t \ge 0$,
$$f(y(t)) - f(\bar{z}) \le \langle y(t) - \bar{z}, v(t)\rangle \le \|y(t) - \bar{z}\|\,\|v(t)\| \le \|x(t) - \bar{z}\|\,\|v(t)\| \le d_0\,\|v(t)\|,$$
where we have used $y(t) = J^A_{\lambda(t)}(x(t))$, $\bar{z} = J^A_{\lambda(t)}(\bar{z})$, $J^A_{\lambda(t)}$ nonexpansive, and $t \mapsto \|x(t) - \bar{z}\|$ non-increasing.

Combining the above inequalities:
$$f(x(t)) \ge f(y(t)) + \bigl(f(y(t)) - f(\bar{z})\bigr)^{3/2}\sqrt{\theta/d_0^3}.$$

6D. The subdifferential case

$$f(x(t)) - f(\bar{z}) \ge f(y(t)) - f(\bar{z}) + \bigl(f(y(t)) - f(\bar{z})\bigr)^{3/2}\sqrt{\theta/d_0^3}. \qquad (17)$$

Convexity of $r \mapsto r^{3/2}$: if $a, b, c \ge 0$ and $a \ge b + c\,b^{3/2}$, then
$$b \le a - \frac{c\,a^{3/2}}{1 + (3c/2)\,a^{1/2}}.$$
Hence
$$f(y) - f(x) \le -\,\frac{\kappa\,\bigl(f(x) - f(\bar{z})\bigr)^{3/2}}{1 + (3\kappa/2)\,\bigl(f(x) - f(\bar{z})\bigr)^{1/2}}. \qquad (18)$$
Since $f(x(\cdot))$ is non-increasing,
$$f(y) - f(x) \le -\,\frac{\kappa\,\bigl(f(x) - f(\bar{z})\bigr)^{3/2}}{1 + (3\kappa/2)\,\bigl(f(x_0) - f(\bar{z})\bigr)^{1/2}}. \qquad (19)$$

6E. The subdifferential case: $y(\cdot)$ versus $x(\cdot)$

$$y(t) = (I + \lambda(t)\partial f)^{-1}x(t) = J^{\partial f}_{\lambda(t)}x(t) = \operatorname{prox}_{\lambda(t)f}\,x(t).$$

(1) $y(\cdot)$: solution of a regularized Newton system.
(2) $\|x(t) - J^{\partial f}_{\lambda(t)}x(t)\| = \|\dot{x}(t)\| \le d_0/\sqrt{2t} \to 0$. Hence $w\text{-}\lim_{t\to+\infty} y(t) = w\text{-}\lim_{t\to+\infty} x(t) = x_\infty \in \operatorname{argmin} f$.
(3) $f(x(t)) \ge f_{\lambda(t)}(x(t)) \ge f(J^{\partial f}_{\lambda(t)}x(t))$, where $f_\lambda$ is the Moreau envelope of $f$. Hence
$$0 \le f(y(t)) - \inf_H f \le f(x(t)) - \inf_H f = O\!\left(\frac{1}{t^2}\right).$$
(4) $y(\cdot) \in \operatorname{dom}\partial f$: more (space) regularity than $x(t)$.
(5) $f(y(t)) \to \inf_H f$ even if $\operatorname{argmin} f = \emptyset$.

7A. A large-step proximal method for convex optimization

Algorithm 1:
(0) $x_0 \in \operatorname{dom}(f)$, $\sigma \in [0,1[$, $\theta > 0$ given; set $k = 1$;
(1) choose $\lambda_k > 0$, and find $x_k, v_k \in H$, $\varepsilon_k \ge 0$ such that
$$v_k \in \partial_{\varepsilon_k} f(x_k), \qquad (20)$$
$$\|\lambda_k v_k + x_k - x_{k-1}\|^2 + 2\lambda_k\varepsilon_k \le \sigma^2\,\|x_k - x_{k-1}\|^2, \qquad (21)$$
$$\lambda_k\,\|x_k - x_{k-1}\| \ge \theta \quad \text{or} \quad v_k = 0; \qquad (22)$$
(2) if $v_k = 0$ STOP, output $x_k$; otherwise $k \leftarrow k+1$ and go to step 1.

Relative error criterion for HPE: Solodov and Svaiter, SVVA, JCA (1999).
Large-step condition: Monteiro and Svaiter, SIOPT (2010, 2012).
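Here is a runnable sketch of Algorithm 1 in the exact case $\sigma = 0$, $\varepsilon_k = 0$ (an added illustration, not from the slides): with the illustrative choice $f(x) = \frac{\alpha}{2}\|x\|^2$ the proximal subproblem has the closed form $\operatorname{prox}_{\lambda f}(x) = x/(1+\lambda\alpha)$, so (21) holds with equality, and $\lambda_k$ can be chosen so that the large-step condition (22) holds with equality by solving a scalar quadratic.

```python
# Exact large-step proximal method on f(x) = (α/2)||x||² (illustrative).
# ||x_k - x_{k-1}|| = r λα/(1+λα) with r = ||x_{k-1}||, so λ||x_k - x_{k-1}|| = θ
# reads rαλ² - θαλ - θ = 0, solved in closed form for λ > 0.
import numpy as np

alpha, theta = 1.0, 0.1
x = np.array([3.0, -4.0])            # x_0, ||x_0|| = 5
fvals = [0.5 * alpha * (x @ x)]
for k in range(1, 51):
    r = np.linalg.norm(x)
    if r == 0.0:                     # v_k = 0: exact solution found
        break
    lam = (theta * alpha + np.sqrt((theta * alpha) ** 2
           + 4.0 * alpha * r * theta)) / (2.0 * alpha * r)
    x = x / (1.0 + lam * alpha)      # x_k = prox_{λ_k f}(x_{k-1})
    fvals.append(0.5 * alpha * (x @ x))

# f(x_k) - min f should decay at least like C/k²: k² f(x_k) stays bounded
print([f * (k + 1) ** 2 for k, f in enumerate(fvals)][:10])
```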

7B. A large-step proximal method for convex optimization

Theorem 4 (complexity, $\varepsilon_k = 0$). Let
$$D_0 = \sup\{\|x - y\| : \max\{f(x), f(y)\} \le f(x_0)\} < +\infty, \qquad \kappa_0 := \sqrt{\frac{\theta(1-\sigma)}{D_0^3}}.$$
(i)
$$f(x_k) - f(\bar{x}) \le \frac{f(x_0) - f(\bar{x})}{\left[1 + \dfrac{k\,\kappa_0\sqrt{f(x_0) - f(\bar{x})}}{2 + 3\kappa_0\sqrt{f(x_0) - f(\bar{x})}}\right]^{2}} = O(1/k^2).$$
(ii) for each even $k \ge 2$, there exists $j \in \{k/2 + 1, \ldots, k\}$ such that
$$\|v_j\| \le \left[\frac{4\sqrt{3}\,\bigl(f(x_0) - f(\bar{x})\bigr)}{\sqrt{\theta(1-\sigma)}\;k^2(2+k)}\left(\frac{2 + 3\kappa_0\sqrt{f(x_0) - f(\bar{x})}}{\kappa_0\sqrt{f(x_0) - f(\bar{x})}}\right)^{2}\right]^{2/3} = O(1/k^2).$$

8A. An $O(1/\sqrt{\varepsilon})$ proximal-Newton method for convex optimization

(P) minimize $f(x)$ s.t. $x \in H$.

AS1) $f : H \to \mathbb{R}$ convex, twice continuously differentiable;
AS2) $\|\nabla^2 f(x) - \nabla^2 f(y)\| \le L\,\|x - y\|$ for all $x, y \in H$;
AS3) $D_0 = \sup\{\|y - x\| : \max\{f(x), f(y)\} \le f(x_0)\} < \infty$.

Algorithm 2:
(0) $x_0 \in H$, $0 < \sigma_l < \sigma_u < 1$ given; set $k = 1$;
(1) if $\nabla f(x_{k-1}) = 0$ then stop. Otherwise, compute $\lambda_k > 0$ s.t.
$$\frac{2\sigma_l}{L} \le \lambda_k\,\bigl\|(I + \lambda_k\nabla^2 f(x_{k-1}))^{-1}\lambda_k\nabla f(x_{k-1})\bigr\| \le \frac{2\sigma_u}{L}; \qquad (23)$$
(2) set $x_k = x_{k-1} - (I + \lambda_k\nabla^2 f(x_{k-1}))^{-1}\lambda_k\nabla f(x_{k-1})$;
(3) set $k \leftarrow k+1$ and go to step 1.
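The following is a runnable sketch of Algorithm 2 (an added illustration, not from the slides). Assumptions: a concrete smooth convex test function $f(x) = \frac{1}{4}\|x\|^4$, an assumed Hessian Lipschitz bound $L$ taken as valid on the initial sublevel set, and a doubling-plus-bisection search in $\ln\lambda$ for $\lambda_k$, as suggested on slide 8D below; all names are illustrative.

```python
import numpy as np

def grad(x):
    return (x @ x) * x                       # ∇f for f(x) = (1/4)||x||^4

def hess(x):
    return (x @ x) * np.eye(x.size) + 2.0 * np.outer(x, x)

def step(x, lam):
    # proximal-Newton displacement (I + λ∇²f(x))^{-1} λ∇f(x)
    return np.linalg.solve(np.eye(x.size) + lam * hess(x), lam * grad(x))

def Phi(x, lam):
    # Φ(λ) = λ ||(I + λ∇²f(x))^{-1} λ∇f(x)||: continuous, increasing in λ
    return lam * np.linalg.norm(step(x, lam))

L, sig_l, sig_u = 20.0, 0.3, 0.7             # L: assumed (hypothetical) bound

def find_lambda(x):
    tlo, thi = 2.0 * sig_l / L, 2.0 * sig_u / L
    lo, hi = 1e-12, 1.0
    while Phi(x, hi) < tlo:                  # doubling until Φ(hi) >= 2σ_l/L
        lo, hi = hi, 2.0 * hi
    if Phi(x, hi) <= thi:                    # already inside [2σ_l/L, 2σ_u/L]
        return hi
    while True:                              # invariant: Φ(lo) < tlo, Φ(hi) > thi
        mid = np.sqrt(lo * hi)               # bisection in ln λ
        val = Phi(x, mid)
        if val < tlo:
            lo = mid
        elif val > thi:
            hi = mid
        else:
            return mid

x = np.array([1.0, -1.0])
for k in range(1, 16):
    if np.linalg.norm(grad(x)) < 1e-10:
        break
    lam = find_lambda(x)
    x = x - step(x, lam)                     # step (2) of Algorithm 2
    print(k, 0.25 * (x @ x) ** 2)            # f(x_k), decreasing toward 0
```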

8B. An $O(1/\sqrt{\varepsilon})$ proximal-Newton method for convex optimization

Proximal method: $y = (I + \lambda\nabla f)^{-1}x$, i.e., $\lambda\nabla f(y) + y - x = 0$.

Perform a single Newton iteration from the current iterate $x$:
$$\lambda\nabla f(x) + \bigl(I + \lambda\nabla^2 f(x)\bigr)(y - x) = 0,$$
i.e., $y = x - (I + \lambda\nabla^2 f(x))^{-1}\lambda\nabla f(x)$.

Hence the proximal-Newton method:
$$x_k = x_{k-1} - (I + \lambda_k\nabla^2 f(x_{k-1}))^{-1}\lambda_k\nabla f(x_{k-1}).$$

8C. An $O(1/\sqrt{\varepsilon})$ proximal-Newton method for convex optimization

Theorem 5 (complexity). Suppose AS1), AS2), AS3) hold, and let $\{x_k\}$ be generated by Algorithm 2. Let $\bar{x}$ be a solution of (P). For any given tolerance $\varepsilon > 0$ define
$$\kappa_0 = \sqrt{\frac{2\sigma_l(1-\sigma_u)}{L\,D_0^3}}, \qquad K = \frac{2 + 3\kappa_0\sqrt{f(x_0) - f(\bar{x})}}{\kappa_0\sqrt{\varepsilon}}, \qquad J = \frac{(2L)^{1/6}\,\bigl(2 + 3\kappa_0\sqrt{f(x_0) - f(\bar{x})}\bigr)^{2/3}}{[2\sigma_l(1-\sigma_u)]^{1/6}\,\kappa_0^{1/3}\,\sqrt{\varepsilon}}.$$
Then, the following statements hold:
(a) for any $k \ge K$, $f(x_k) - f(\bar{x}) \le \varepsilon$;
(b) there exists $j \le 2J$ such that $\|\nabla f(x_j)\| \le \varepsilon$.

8D. An $O(1/\sqrt{\varepsilon})$ proximal-Newton method for convex optimization

Solving with respect to $\lambda > 0$:
$$\frac{2\sigma_l}{L} \le \lambda\,\bigl\|(I + \lambda\nabla^2 f(x_{k-1}))^{-1}\lambda\nabla f(x_{k-1})\bigr\| \le \frac{2\sigma_u}{L}.$$
The solution set is a closed interval $[\lambda_l, \lambda_u]$ with $\dfrac{\lambda_u}{\lambda_l} \ge \sqrt{\dfrac{\sigma_u}{\sigma_l}}$. Binary search in $\ln\lambda$ may be used to find $\lambda_k$.

Set
$$\Phi(\lambda) := \lambda\,\bigl\|(I + \lambda\nabla^2 f(x))^{-1}\lambda\nabla f(x)\bigr\|, \qquad \Psi(\mu) := \Phi\!\left(\frac{1}{\mu}\right),$$
so the condition reads $\frac{2\sigma_l}{L} \le \Psi(\mu) \le \frac{2\sigma_u}{L}$; the maps $\mu \mapsto \Psi^2(\mu)$ and $\mu \mapsto \ln(\Psi(\mu))$ are convex.
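The width estimate follows from the quadratic growth bound (8), under the assumption (added here as a one-line justification) that $\Phi$ inherits it from $\varphi$: if $\Phi(\lambda_l) = \frac{2\sigma_l}{L}$, then for every $\lambda \in [\lambda_l,\ \lambda_l\sqrt{\sigma_u/\sigma_l}]$,
$$\Phi(\lambda) \le \left(\frac{\lambda}{\lambda_l}\right)^{2}\Phi(\lambda_l) \le \frac{\sigma_u}{\sigma_l}\cdot\frac{2\sigma_l}{L} = \frac{2\sigma_u}{L},$$
so the feasible interval satisfies $\lambda_u/\lambda_l \ge \sqrt{\sigma_u/\sigma_l}$; in particular, a log-scale bisection must land in a window of fixed relative width.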

9A. Perspectives: large-step proximal methods

Main result: (LSP) is a large-step proximal method with $O(1/k^2)$ convergence.

Related results:
Compare with Nesterov, Nesterov-Polyak, Güler.
Rockafellar, SICON 1976: if $A$ is $\alpha$-strongly monotone and $A^{-1}(0) = \{\bar{x}\}$, then
$$\|x_{k+1} - \bar{x}\| \le \frac{1}{1 + \alpha\lambda_k}\,\|x_k - \bar{x}\|.$$
Superlinear convergence when $\lambda_k \to +\infty$.
Att.-Redont-Svaiter, JOTA 2013: regularized Newton method with Levenberg-Marquardt parameter $\mu_k = \frac{1}{\lambda_k} = \|\nabla f(x_k)\|^{1/3}$ yields convergence of $(x_k)$ of order $\frac{4}{3}$.

Other algebraic relation: $\lambda\,\|(I + \lambda A)^{-1}x - x\|^{\gamma} = \theta$. (LSP): $\gamma = 1$.

9B. Perspectives, open questions

Possible extensions:
Extension of $f(x(t)) - \inf f = O(1/t^2)$ and $f(x_k) - \inf f = O(1/k^2)$ to the non-potential case: convex-concave saddle value problems? general maximal monotone operators (via the Fitzpatrick function)?
Combine (LSP) with splitting methods: forward-backward method, alternating proximal minimization?
Implementation of the method and applications.
Combine second-order analysis in time (Nesterov, FISTA, ...) and in space (Newton-like methods: LSP), Att.-Alvarez.
Convex tame optimization (KL): finite length of the trajectories?

Appendix: Isotropic linear monotone operator

Let $\alpha > 0$ be a positive constant and $A = \alpha I$, i.e., $Ax = \alpha x$ for every $x \in H$. Then
$$(\lambda A + I)^{-1}x = \frac{1}{1 + \lambda\alpha}\,x, \qquad (24)$$
$$x - (\lambda A + I)^{-1}x = \frac{\lambda\alpha}{1 + \lambda\alpha}\,x. \qquad (25)$$

Given $x_0 \neq 0$, (LSP) can be written
$$\dot{x}(t) + \frac{\alpha\lambda(t)}{1 + \alpha\lambda(t)}\,x(t) = 0, \qquad \lambda(t) > 0,$$
$$\frac{\alpha\lambda(t)^2}{1 + \alpha\lambda(t)}\,\|x(t)\| = \theta, \qquad x(0) = x_0 \in H \setminus A^{-1}(0). \qquad (26)$$

Let us first integrate the above linear differential equation.
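Before the closed-form analysis, here is a quick numerical check (an added illustration, not from the slides) of system (26): the ODE for $x$ is integrated by explicit Euler, with $\lambda(t)$ recovered at each step from the algebraic constraint, a scalar quadratic in $\lambda$; the exponential growth of $\lambda(t)$ derived below is visible in the output.

```python
# Euler integration of (26); the constraint αλ²||x|| = θ(1+αλ) gives λ in
# closed form at each step. Parameters are illustrative.
import numpy as np

alpha, theta, dt, T = 1.0, 0.1, 1e-3, 20.0
x = 5.0                                   # ||x_0|| (1-D suffices: dynamics is radial)
lams = []
for _ in range(int(T / dt)):
    lam = (theta * alpha + np.sqrt((theta * alpha) ** 2
          + 4.0 * alpha * theta * abs(x))) / (2.0 * alpha * abs(x))
    lams.append(lam)
    x -= dt * (alpha * lam / (1.0 + alpha * lam)) * x   # explicit Euler
print(lams[-1] / lams[int(16.0 / dt)])    # ≈ e^4 ≈ 54.6: λ(t) grows like e^t
```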

Appendix: Isotropic linear monotone operator

We have $x(t) = e^{-\Gamma(t)}x_0$, with $\Gamma(t) := \int_0^t \frac{\alpha\lambda(\tau)}{1 + \alpha\lambda(\tau)}\,d\tau$. Hence
$$\frac{\alpha\lambda(t)^2}{1 + \alpha\lambda(t)}\,e^{-\Gamma(t)}\,\|x_0\| = \theta. \qquad (27)$$

First, check this equation at time $t = 0$; equivalently,
$$\frac{\alpha\lambda(0)^2}{1 + \alpha\lambda(0)} = \frac{\theta}{\|x_0\|}. \qquad (28)$$
This equation defines $\lambda(0) > 0$ uniquely, because $\xi \mapsto \frac{\alpha\xi^2}{1 + \alpha\xi}$ is strictly increasing from $[0,+\infty[$ onto $[0,+\infty[$.

Thus, the only thing we have to prove is the existence of a positive function $t \mapsto \lambda(t)$ such that
$$h(t) := \frac{\alpha\lambda(t)^2}{1 + \alpha\lambda(t)}\,e^{-\Gamma(t)} \quad \text{is constant on } [0,+\infty[. \qquad (29)$$
Writing that the derivative $h'$ is identically zero on $[0,+\infty[$, we obtain

Appendix: Isotropic linear monotone operator

$$\dot{\lambda}(t)\,\bigl(\alpha\lambda(t) + 2\bigr) - \alpha\lambda(t)^2 = 0. \qquad (30)$$
After integration, we obtain
$$\alpha\ln\lambda(t) - \frac{2}{\lambda(t)} = \alpha t + \alpha\ln\lambda(0) - \frac{2}{\lambda(0)}. \qquad (31)$$
Let us introduce the function $g :\ ]0,+\infty[\ \to \mathbb{R}$,
$$g(\xi) = \alpha\ln\xi - \frac{2}{\xi}. \qquad (32)$$
As $\xi$ increases from $0$ to $+\infty$, $g(\xi)$ is strictly increasing from $-\infty$ to $+\infty$. Thus, for each $t > 0$, (31) has a unique solution $\lambda(t) > 0$. Moreover, $t \mapsto \lambda(t)$ is increasing, continuously differentiable, and $\lim_{t\to+\infty}\lambda(t) = +\infty$. Returning to (31), we obtain that $\lambda(t)$ grows like $e^{t}$ as $t \to +\infty$.

Appendix: Antisymmetric linear monotone operator

$H = \mathbb{R}^2$, $A = \operatorname{rot}(0, \frac{\pi}{2})$, $A^* = -A$, $A(\xi, \eta) = (-\eta, \xi)$. Then
$$(\lambda A + I)^{-1}x = \frac{1}{1 + \lambda^2}\,\bigl(\xi + \lambda\eta,\ \eta - \lambda\xi\bigr), \qquad (33)$$
$$x - (\lambda A + I)^{-1}x = \frac{\lambda}{1 + \lambda^2}\,\bigl(\lambda\xi - \eta,\ \lambda\eta + \xi\bigr). \qquad (34)$$

(LSP) becomes
$$\dot{\xi}(t) + \frac{\lambda(t)}{1 + \lambda(t)^2}\,\bigl(\lambda(t)\xi(t) - \eta(t)\bigr) = 0, \qquad \lambda(t) > 0, \qquad (35)$$
$$\dot{\eta}(t) + \frac{\lambda(t)}{1 + \lambda(t)^2}\,\bigl(\lambda(t)\eta(t) + \xi(t)\bigr) = 0, \qquad \lambda(t) > 0, \qquad (36)$$
$$\frac{\lambda(t)^2}{\sqrt{1 + \lambda(t)^2}}\,\sqrt{\xi(t)^2 + \eta(t)^2} = \theta. \qquad (37)$$
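A small numeric check of the resolvent formula (33) (an added illustration, not from the slides):

```python
# (I + λA)^{-1} for the rotation A(ξ, η) = (-η, ξ) on R², against (33).
import numpy as np

lam = 0.8
A = np.array([[0.0, -1.0], [1.0, 0.0]])          # matrix of A(ξ, η) = (-η, ξ)
x = np.array([2.0, 3.0])                          # (ξ, η)
J = np.linalg.solve(np.eye(2) + lam * A, x)       # (I + λA)^{-1} x
xi, eta = x
closed = np.array([xi + lam * eta, eta - lam * xi]) / (1.0 + lam ** 2)  # (33)
assert np.allclose(J, closed)
# Note: <Ax, x> = 0, so A is monotone with A^{-1}(0) = {0}; the flow both
# rotates and shrinks the trajectory toward the unique zero.
print(J, closed)
```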

Appendix: Antisymmetric linear monotone operator

Set $u(t) = \xi(t)^2 + \eta(t)^2$. Multiplying (35) by $\xi(t)$ and (36) by $\eta(t)$, then adding the results, we obtain
$$\dot{u}(t) + \frac{2\lambda(t)^2}{1 + \lambda(t)^2}\,u(t) = 0.$$
Set
$$\Gamma(t) := \int_0^t \frac{2\lambda(\tau)^2}{1 + \lambda(\tau)^2}\,d\tau. \qquad (38)$$
We have
$$u(t) = e^{-\Gamma(t)}\,u(0). \qquad (39)$$
Equation (37) becomes
$$\frac{\lambda(t)^2}{\sqrt{1 + \lambda(t)^2}}\,e^{-\Gamma(t)/2} = \frac{\theta}{\|x_0\|}. \qquad (40)$$

Appendix: Antisymmetric linear monotone operator

First, check this equation at time $t = 0$; equivalently,
$$\frac{\lambda(0)^2}{\sqrt{1 + \lambda(0)^2}} = \frac{\theta}{\|x_0\|}. \qquad (41)$$
This equation defines $\lambda(0) > 0$ uniquely, because the function $\rho \mapsto \frac{\rho^2}{\sqrt{1 + \rho^2}}$ is strictly increasing from $[0,+\infty[$ onto $[0,+\infty[$. Thus, we just need to prove the existence of a positive function $t \mapsto \lambda(t)$ such that
$$h(t) := \frac{\lambda(t)^2}{\sqrt{1 + \lambda(t)^2}}\,e^{-\Gamma(t)/2} \quad \text{is constant on } [0,+\infty[. \qquad (42)$$
Writing that the derivative $h'$ is identically zero on $[0,+\infty[$, we obtain that $\lambda(\cdot)$ must satisfy
$$\dot{\lambda}(t)\,\bigl(2\lambda(t) + \lambda(t)^3\bigr) - \lambda(t)^3 = 0. \qquad (43)$$

Appendix: Antisymmetric linear monotone operator

After integration of this first-order differential equation, with Cauchy data $\lambda(0)$, we obtain
$$\lambda(t) - \frac{2}{\lambda(t)} = t + \lambda(0) - \frac{2}{\lambda(0)}. \qquad (44)$$
Let us introduce the function $g :\ ]0,+\infty[\ \to \mathbb{R}$,
$$g(\rho) = \rho - \frac{2}{\rho}. \qquad (45)$$
As $\rho$ increases from $0$ to $+\infty$, $g(\rho)$ is strictly increasing from $-\infty$ to $+\infty$. Thus, for each $t > 0$, (44) has a unique solution $\lambda(t) > 0$. Moreover, the mapping $t \mapsto \lambda(t)$ is increasing, continuously differentiable, and $\lim_{t\to+\infty}\lambda(t) = +\infty$. Returning to (44), we obtain that $\lambda(t) \sim t$ as $t \to +\infty$.

References

B. Abbas, H. Attouch, and B. F. Svaiter, Newton-like dynamics and forward-backward methods for structured monotone inclusions in Hilbert spaces, JOTA, DOI /s, 2013.
H. Attouch, Viscosity solutions of minimization problems, SIAM J. Optim., 6 (1996), No. 3.
H. Attouch, J. Bolte, and B. F. Svaiter, Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods, Math. Program., 137 (2013), No. 1.
H. Attouch, P. Redont, and B. F. Svaiter, Global convergence of a closed-loop regularized Newton method for solving monotone inclusions in Hilbert spaces, JOTA, 157 (2013).

References

H. Attouch and B. F. Svaiter, A continuous dynamical Newton-like approach to solving monotone inclusions, SIAM J. Control Optim., 49 (2011).
H. Bauschke and P. Combettes, Convex analysis and monotone operator theory, CMS Books in Mathematics, Springer, 2011.
H. Brézis, Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert, North-Holland/Elsevier, New York, 1973.
R. E. Bruck, Asymptotic convergence of nonlinear contraction semigroups in Hilbert spaces, J. Funct. Anal., 18 (1975).

References

C. Gonzaga and E. W. Karas, Fine tuning Nesterov's steepest descent algorithm for differentiable convex programming, Math. Program., 138 (2013).
A. Griewank, Modification of Newton's method for unconstrained optimization by bounding cubic terms, TR NA/12, Dept. of Applied Mathematics and Theoretical Physics, Cambridge University, 1981.
O. Güler, New proximal point algorithms for convex minimization, SIAM J. Optim., 2 (1992), No. 4.
B. Martinet, Régularisation d'inéquations variationnelles par approximations successives, Rev. Française Informat. Recherche Opérationnelle, 4 (1970), Ser. R-3.
R. D. C. Monteiro and B. F. Svaiter, On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean, SIAM J. Optim., 20 (2010), No. 6.

References

R. D. C. Monteiro and B. F. Svaiter, Iteration-complexity of a Newton proximal extragradient method for monotone variational inequalities and inclusion problems, SIAM J. Optim., 22 (2012), No. 3.
Y. Nesterov, Introductory lectures on convex optimization: a basic course, Kluwer, Boston, 2004.
Y. Nesterov and B. T. Polyak, Cubic regularization of Newton method and its global performance, Math. Program., 108 (2006), No. 1, Ser. A.
Z. Opial, Weak convergence of the sequence of successive approximations for nonexpansive mappings, Bull. Amer. Math. Soc., 73 (1967).

References

R. T. Rockafellar, Augmented Lagrangians and applications of the proximal point algorithm in convex programming, Math. Oper. Res., 1 (1976), No. 2.
R. T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim., 14 (1976), No. 5.
M. V. Solodov and B. F. Svaiter, A hybrid approximate extragradient-proximal point algorithm using the enlargement of a maximal monotone operator, Set-Valued Anal., 7 (1999), No. 4.
M. V. Solodov and B. F. Svaiter, A hybrid projection-proximal point algorithm, J. Convex Anal., 6 (1999), No. 1.
M. Weiser, P. Deuflhard, and B. Erdmann, Affine conjugate adaptive Newton methods for nonlinear elastomechanics, Optim. Methods Softw., 22 (2007), No. 3.
