Proximal splitting methods on convex problems with a quadratic term: Relax!

1 Proximal splitting methods on convex problems with a quadratic term: Relax! The slides I presented, with added comments. Laurent Condat, GIPSA-lab, Univ. Grenoble Alpes, France. Workshop BASP Frontiers, Jan. 2017

2 Proximal splitting algorithms. A zoo of methods in the literature: ADMM, ISTA, Douglas-Rachford, PDFB, Chambolle-Pock, ... PDFB is the primal-dual forward-backward algorithm I proposed, sometimes called the Condat-Vu algorithm. 1 / 39

3 Goal. We consider the problem: Find $\hat{x} \in \operatorname{Arg\,min}_{x \in X} \Psi(x) = \sum_{m=1}^{M} g_m(L_m x)$, where the $U_m$ and $X$ are real Hilbert spaces, the functions $g_m$ are convex, from $U_m$ to $\mathbb{R} \cup \{+\infty\}$, and the $L_m$ are linear operators from $X$ to $U_m$. 2 / 39

4 Goal. We consider the problem: Find $\hat{x} \in \operatorname{Arg\,min}_{x} \Psi(x) = \sum_{m=1}^{M} g_m(L_m x)$. We want full splitting, with individual activation of $L_m$, $L_m^*$, and the gradient or proximity operator of $g_m$. 3 / 39

5 Goal. We consider the problem: Find $\hat{x} \in \operatorname{Arg\,min}_{x} \Psi(x) = \sum_{m=1}^{M} g_m(L_m x)$. We want full splitting, with individual activation of $L_m$, $L_m^*$, and the gradient or proximity operator of $g_m$: no implicit operation (inner loop or linear system to solve). 3 / 39

6 The subdifferential $\partial f : X \to 2^X$. $\partial f(x)$ is the set of gradients of the affine minorants of $f$ at $x$. If $f$ is smooth at $x$, then $\partial f(x) = \{\nabla f(x)\}$. 4 / 39

7 Fermat's rule. $\hat{x} \in \operatorname{Arg\,min}_x \Psi(x) = \sum_{m=1}^{M} g_m(L_m x)$ $\Leftrightarrow$ $0 \in \partial\Psi(\hat{x})$. Pierre de Fermat. 5 / 39

8 Fermat's rule. $\hat{x} \in \operatorname{Arg\,min}_x \Psi(x) = \sum_{m=1}^{M} g_m(L_m x)$ $\Leftrightarrow$ $0 \in \partial\Psi(\hat{x})$ $\Leftrightarrow$ $0 \in \sum_{m=1}^{M} L_m^* \partial g_m(L_m \hat{x})$. Pierre de Fermat. Here some qualification constraints are hidden. 5 / 39

9 The proximity operator. $\operatorname{prox}_f : X \to X,\; x \mapsto \arg\min_{z \in X} f(z) + \tfrac{1}{2}\|z - x\|^2$. 6 / 39

10 The proximity operator. $\operatorname{prox}_f : X \to X,\; x \mapsto \arg\min_{z \in X} f(z) + \tfrac{1}{2}\|z - x\|^2$. Explicit forms of $\operatorname{prox}_f$ for a large class of functions, e.g. $f(x) = |x|$ $\Rightarrow$ $\operatorname{prox}_f(x) = \operatorname{sgn}(x)\max(|x| - 1, 0)$. 6 / 39
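As a small illustration (mine, not from the slides), here is this soft-thresholding formula in NumPy, generalized to $\operatorname{prox}_{\lambda f}$ for $\lambda > 0$, and checked numerically against the defining minimization; the function name is of course arbitrary.

```python
import numpy as np

def prox_abs(x, lam=1.0):
    """Proximity operator of lam*|.|, applied componentwise: soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Sanity check against the definition prox_f(x) = argmin_z f(z) + 0.5*(z - x)^2
x = 0.7
z = np.linspace(-3.0, 3.0, 100001)
z_star = z[np.argmin(np.abs(z) + 0.5 * (z - x) ** 2)]
print(prox_abs(x), z_star)   # both are 0, since |x| <= 1
```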

11 The proximity operator. $\operatorname{prox}_f : X \to X,\; x \mapsto \arg\min_{z \in X} f(z) + \tfrac{1}{2}\|z - x\|^2$. $\forall \lambda > 0$, $\operatorname{prox}_{\lambda f} = (\mathrm{Id} + \lambda \partial f)^{-1}$. 7 / 39

12 Iterative algorithms. Principle: design an algorithm which iterates $x^{(i+1)} = T(x^{(i)})$ for some operator $T : X \to X$. 8 / 39

13 Iterative algorithms. Principle: design an algorithm which iterates $x^{(i+1)} = T(x^{(i)})$ for some operator $T : X \to X$. Convergence: $\|x^{(i)} - \hat{x}\| \to 0$, for some solution $\hat{x}$. 8 / 39

14 Iterative algorithms. Principle: design an algorithm which iterates $x^{(i+1)} = T(x^{(i)})$ for some nonexpansive operator $T : X \to X$, i.e. $\forall x, y$, $\|T(x) - T(y)\| \le \|x - y\|$, and such that the set $S$ of solutions is the set of fixed points of $T$, i.e. $T(\hat{x}) = \hat{x}$. Here, replace $y$ by a fixed point and you get Fejér monotonicity: at every iteration you get closer to the solution set. 9 / 39

15 Iterative algorithms. But nonexpansiveness is not sufficient: for instance, $X = \mathbb{R}^2$ and $T$ is a rotation. [Figure: iterates $x^{(1)}, x^{(2)}, x^{(3)}$ circling around the fixed point $\hat{x}$.] 10 / 39

16 Iterative algorithms. But nonexpansiveness is not sufficient: for instance, $X = \mathbb{R}^2$ and $T$ is a rotation. So we need a bit more than nonexpansiveness: "firmly nonexpansive" = 1/2-averaged. 10 / 39

17 Iterative algorithms. But nonexpansiveness is not sufficient: for instance, $X = \mathbb{R}^2$ and $T$ is a rotation. Definition: $T$ is $\alpha$-averaged if $T = \alpha T_0 + (1 - \alpha)\mathrm{Id}$, for some $0 < \alpha < 1$ and some nonexpansive operator $T_0$. 11 / 39

18 Iterative algorithms. Theorem (Krasnoselskii-Mann): if $T$ is $\alpha$-averaged, for some $0 < \alpha < 1$, and a fixed point of $T$ exists, then the iteration $x^{(i+1)} = T(x^{(i)})$ converges to some fixed point $\hat{x}$ of $T$. 12 / 39

19 Over-relaxation. An algorithm: $w^{(i+1)} = T(w^{(i)})$, with $T$ $\alpha$-averaged. [Figure: after one application of $T$, the solution is around here!] Its over-relaxed variant: $w^{(i+\frac{1}{2})} = T(w^{(i)})$, $w^{(i+1)} = w^{(i)} + \rho\,(w^{(i+\frac{1}{2})} - w^{(i)})$, with $1 < \rho < 1/\alpha$. 13 / 39

20 Over-relaxation. An algorithm: $w^{(i+1)} = T(w^{(i)})$, with $T$ $\alpha$-averaged. A firmly nonexpansive operator can be over-relaxed with $1 < \rho < 2$. 14 / 39
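To make this concrete, here is a small numerical sketch (my own toy example, not from the talk): the operator $T = \operatorname{prox}_{\tau f}$ of a convex quadratic is firmly nonexpansive, its fixed point is the minimizer, and the Krasnoselskii-Mann iteration can be over-relaxed with $\rho < 2$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
G = rng.standard_normal((n, n))
Q = G.T @ G + 0.1 * np.eye(n)           # positive definite
a = rng.standard_normal(n)

def T(w, tau=1.0):
    """T = prox_{tau*f} with f(x) = 0.5*(x-a)'Q(x-a): firmly nonexpansive, fixed point = a."""
    return np.linalg.solve(np.eye(n) + tau * Q, w + tau * (Q @ a))

for rho in (1.0, 1.9):                  # plain vs. over-relaxed Krasnoselskii-Mann iteration
    w = np.zeros(n)
    for i in range(50):
        w = w + rho * (T(w) - w)
    print(f"rho = {rho}: distance to the fixed point = {np.linalg.norm(w - a):.1e}")
```

On this strongly convex toy problem, the over-relaxed run should end closer to the fixed point after the same number of iterations.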

21 Goal. Find $\hat{x} \in \operatorname{Arg\,min}_x \Psi(x) = \sum_{m=1}^{M} g_m(L_m x)$, by iterating $x^{(i+1)} = T(x^{(i)})$ for an $\alpha$-averaged operator $T$. 15 / 39

22 Goal. Find $\hat{x} \in \operatorname{Arg\,min}_x \Psi(x) = \sum_{m=1}^{M} g_m(L_m x)$, by iterating $x^{(i+1)} = T(x^{(i)})$ for an $\alpha$-averaged operator $T$. How to build $T$ from the $L_m$, $L_m^*$, and the gradient or proximity operators of the $g_m$? 15 / 39

23 Forward-backward splitting. Find $\hat{x} \in \operatorname{Arg\,min}_x \{h(x) + f(x)\}$, where $h$ is smooth with $\beta$-Lipschitz continuous gradient. The forward-backward iteration $x^{(i+1)} = \operatorname{prox}_{\tau f}\big(x^{(i)} - \tau \nabla h(x^{(i)})\big)$ converges to a solution $\hat{x}$ if $0 < \tau < 2/\beta$. 16 / 39

24 Forward-backward splitting. Find $\hat{x} \in \operatorname{Arg\,min}_x \{h(x) + f(x)\}$, where $h$ is smooth with $\beta$-Lipschitz continuous gradient. The over-relaxed forward-backward iteration
$x^{(i+\frac{1}{2})} = \operatorname{prox}_{\tau f}\big(x^{(i)} - \tau \nabla h(x^{(i)})\big)$,
$x^{(i+1)} = x^{(i)} + \rho\,(x^{(i+\frac{1}{2})} - x^{(i)})$,
converges to a solution $\hat{x}$ if $0 < \tau < 2/\beta$ and $1 \le \rho < 2 - \tau\beta/2$. 17 / 39

25 Forward-backward splitting. Find $\hat{x} \in \operatorname{Arg\,min}_x \{h(x) + f(x)\}$, where $h$ is smooth with $\beta$-Lipschitz continuous gradient. The over-relaxed forward-backward iteration
$x^{(i+\frac{1}{2})} = \operatorname{prox}_{\tau f}\big(x^{(i)} - \tau \nabla h(x^{(i)})\big)$,
$x^{(i+1)} = x^{(i)} + \rho\,(x^{(i+\frac{1}{2})} - x^{(i)})$,
converges to a solution $\hat{x}$ if $0 < \tau < 2/\beta$ and $1 \le \rho < 2 - \tau\beta/2$. For instance, $\tau = 1/\beta$ and $\rho$ just below $1.5$. 17 / 39
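A minimal NumPy sketch of this over-relaxed forward-backward iteration (my own illustration), applied to a lasso-type problem with $h(x) = \tfrac12\|Ax - y\|^2$ and $f = \lambda\|\cdot\|_1$; the data and parameter choices are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 100))
y = rng.standard_normal(50)
lam = 0.5

def prox_l1(v, t):
    """prox of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

beta = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of grad h, h = 0.5*||Ax - y||^2
tau = 1.0 / beta                        # step size, 0 < tau < 2/beta
rho = 1.4                               # relaxation, 1 <= rho < 2 - tau*beta/2 = 1.5 here

x = np.zeros(100)
for i in range(500):
    grad = A.T @ (A @ x - y)
    x_half = prox_l1(x - tau * grad, tau * lam)    # forward-backward step
    x = x + rho * (x_half - x)                     # over-relaxation
print("objective:", 0.5 * np.linalg.norm(A @ x - y) ** 2 + lam * np.abs(x).sum())
```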

26 Minimization of 2 functions. Find $\hat{x} \in \operatorname{Arg\,min}_x \{f(x) + g(x)\}$, where $f$ and $g$ have a simple prox $\to$ calls to $\operatorname{prox}_f$ and $\operatorname{prox}_g$. 18 / 39

27 Minimization of 2 functions. Find $\hat{x} \in \operatorname{Arg\,min}_x \{f(x) + g(x)\}$ $\Leftrightarrow$ find $\hat{x} \in X$ such that $0 \in \partial f(\hat{x}) + \partial g(\hat{x})$. 19 / 39

28 Minimization of 2 functions. Find $\hat{x} \in \operatorname{Arg\,min}_x \{f(x) + g(x)\}$ $\Leftrightarrow$ find $\hat{x} \in X$ such that $0 \in \partial f(\hat{x}) + \partial g(\hat{x})$ $\Leftrightarrow$ find $(\hat{x}, \hat{u}) \in X \times X$ such that $-\hat{u} \in \partial f(\hat{x})$ and $\hat{u} \in \partial g(\hat{x})$. The dual variable $u$ can just be viewed as an auxiliary variable. 19 / 39

29 Minimization of 2 functions. Find $\hat{x} \in \operatorname{Arg\,min}_x \{f(x) + g(x)\}$ $\Leftrightarrow$ find $\hat{x} \in X$ such that $0 \in \partial f(\hat{x}) + \partial g(\hat{x})$ $\Leftrightarrow$ find $(\hat{x}, \hat{u}) \in X \times X$ such that $-\hat{u} \in \partial f(\hat{x})$ and $\hat{x} \in (\partial g)^{-1}(\hat{u})$. 20 / 39

30 Minimization of 2 functions. Find $\hat{x} \in \operatorname{Arg\,min}_x \{f(x) + g(x)\}$ $\Leftrightarrow$ find $\hat{x} \in X$ such that $0 \in \partial f(\hat{x}) + \partial g(\hat{x})$ $\Leftrightarrow$ find $(\hat{x}, \hat{u}) \in X \times X$ such that $0 \in \partial f(\hat{x}) + \hat{u}$ and $0 \in (\partial g)^{-1}(\hat{u}) - \hat{x}$. 21 / 39

31 Minimization of 2 functions. Find $\hat{x} \in \operatorname{Arg\,min}_x \{f(x) + g(x)\}$ $\Leftrightarrow$ find $\hat{x} \in X$ such that $0 \in \partial f(\hat{x}) + \partial g(\hat{x})$ $\Leftrightarrow$ find $(\hat{x}, \hat{u}) \in X \times X$ such that $0 \in \begin{pmatrix} \partial f(\hat{x}) + \hat{u} \\ -\hat{x} + (\partial g)^{-1}(\hat{u}) \end{pmatrix}$. 22 / 39

32 Primal-dual monotone inclusion. Find $\hat{x} \in \operatorname{Arg\,min}_x \{f(x) + g(x)\}$ $\Leftrightarrow$ find $\hat{x} \in X$ such that $0 \in \partial f(\hat{x}) + \partial g(\hat{x})$ $\Leftrightarrow$ find $(\hat{x}, \hat{u}) \in X \times X$ such that $0 \in \begin{pmatrix} \partial f(\hat{x}) + \hat{u} \\ -\hat{x} + (\partial g)^{-1}(\hat{u}) \end{pmatrix}$, where $(x, u) \mapsto \big(\partial f(x) + u,\ -x + (\partial g)^{-1}(u)\big)$ is a maximally monotone operator. 22 / 39

33 The proximal point algorithm. To solve the monotone inclusion $0 \in M(\hat{w})$, the proximal point algorithm is: $w^{(i+1)} = (P^{-1}M + \mathrm{Id})^{-1}(w^{(i)})$, i.e. $0 \in M(w^{(i+1)}) + P(w^{(i+1)} - w^{(i)})$. Convergence, for any linear, self-adjoint, strongly positive $P$. With $w = (x, u)$, be careful that convergence is on the pair $(x, u)$ and with respect to the distorted norm induced by $\langle \cdot, P\, \cdot \rangle$. So, the variable $x$ alone does not get closer to the solution at every iteration; it can oscillate. 23 / 39

34 Douglas-Rachford splitting. Design an algorithm such that, $\forall i \in \mathbb{N}$,
$0 \in \begin{pmatrix} \partial f(x^{(i+1)}) + u^{(i+1)} \\ -x^{(i+1)} + (\partial g)^{-1}(u^{(i+1)}) \end{pmatrix} + \begin{pmatrix} ? & ? \\ ? & ? \end{pmatrix} \begin{pmatrix} x^{(i+1)} - x^{(i)} \\ u^{(i+1)} - u^{(i)} \end{pmatrix}$.
We need to find this linear operator $P$ to have a primal-dual PPA like on the previous slide. 24 / 39

35 Douglas-Rachford algorithm. Design an algorithm such that, $\forall i \in \mathbb{N}$,
$0 \in \begin{pmatrix} \partial f(x^{(i+1)}) + u^{(i+1)} \\ -x^{(i+1)} + (\partial g)^{-1}(u^{(i+1)}) \end{pmatrix} + \begin{pmatrix} \frac{1}{\tau}\mathrm{Id} & -\mathrm{Id} \\ -\mathrm{Id} & \tau\,\mathrm{Id} \end{pmatrix} \begin{pmatrix} x^{(i+1)} - x^{(i)} \\ u^{(i+1)} - u^{(i)} \end{pmatrix}$.
Douglas-Rachford iteration:
$x^{(i+1)} = \operatorname{prox}_{\tau f}\big(x^{(i)} - \tau u^{(i)}\big)$,
$u^{(i+1)} = \operatorname{prox}_{g^*/\tau}\big(u^{(i)} + \tfrac{1}{\tau}(2x^{(i+1)} - x^{(i)})\big)$.
The linear operator in orange is positive, not strongly positive, but the convergence proofs can be extended to this case. 25 / 39

36 Douglas-Rachford algorithm. Design an algorithm such that, $\forall i \in \mathbb{N}$,
$0 \in \begin{pmatrix} \partial f(x^{(i+1)}) + u^{(i+1)} \\ -x^{(i+1)} + (\partial g)^{-1}(u^{(i+1)}) \end{pmatrix} + \begin{pmatrix} \frac{1}{\tau}\mathrm{Id} & -\mathrm{Id} \\ -\mathrm{Id} & \tau\,\mathrm{Id} \end{pmatrix} \begin{pmatrix} x^{(i+1)} - x^{(i)} \\ u^{(i+1)} - u^{(i)} \end{pmatrix}$.
Douglas-Rachford iteration:
$x^{(i+1)} = \operatorname{prox}_{\tau f}(s^{(i)})$,
$z^{(i+1)} = \operatorname{prox}_{\tau g}(2x^{(i+1)} - s^{(i)})$,
$s^{(i+1)} = s^{(i)} + z^{(i+1)} - x^{(i+1)}$. 26 / 39

37 Douglas-Rachford algorithm. Over-relaxed Douglas-Rachford iteration:
$x^{(i+1)} = \operatorname{prox}_{\tau f}(s^{(i)})$,
$z^{(i+1)} = \operatorname{prox}_{\tau g}(2x^{(i+1)} - s^{(i)})$,
$s^{(i+1)} = s^{(i)} + \rho\,(z^{(i+1)} - x^{(i+1)})$.
Convergence with $1 \le \rho < 2$: take $\rho = 1.9$! Over-relaxation does not cost anything here! 27 / 39
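Here is a minimal NumPy sketch (my own toy example) of this over-relaxed Douglas-Rachford iteration, with $f = \lambda\|\cdot\|_1$ and $g = \tfrac12\|\cdot - b\|^2$, so that the exact solution is known in closed form and can be checked.

```python
import numpy as np

rng = np.random.default_rng(2)
b = rng.standard_normal(30)
lam, tau, rho = 0.3, 1.0, 1.9          # rho = 1.9: over-relaxation costs nothing here

prox_f = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)   # f = lam*||.||_1
prox_g = lambda v, t: (v + t * b) / (1.0 + t)                             # g = 0.5*||.- b||^2

s = np.zeros(30)
for i in range(200):
    x = prox_f(s, tau)
    z = prox_g(2 * x - s, tau)
    s = s + rho * (z - x)              # relaxed update of the auxiliary variable
# The minimizer of lam*||x||_1 + 0.5*||x - b||^2 is the soft-thresholding of b:
print(np.allclose(x, np.sign(b) * np.maximum(np.abs(b) - lam, 0.0), atol=1e-6))
```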

38 Douglas-Rachford algorithm, version 2. Switching the update order $\to$ different iteration:
$0 \in \begin{pmatrix} \partial f(x^{(i+1)}) + u^{(i+1)} \\ -x^{(i+1)} + (\partial g)^{-1}(u^{(i+1)}) \end{pmatrix} + \begin{pmatrix} \frac{1}{\tau}\mathrm{Id} & +\mathrm{Id} \\ +\mathrm{Id} & \tau\,\mathrm{Id} \end{pmatrix} \begin{pmatrix} x^{(i+1)} - x^{(i)} \\ u^{(i+1)} - u^{(i)} \end{pmatrix}$. 28 / 39

39 Douglas-Rachford algorithm, version 2. Switching the update order $\to$ different iteration:
$0 \in \begin{pmatrix} \partial f(x^{(i+1)}) + u^{(i+1)} \\ -x^{(i+1)} + (\partial g)^{-1}(u^{(i+1)}) \end{pmatrix} + \begin{pmatrix} \frac{1}{\tau}\mathrm{Id} & +\mathrm{Id} \\ +\mathrm{Id} & \tau\,\mathrm{Id} \end{pmatrix} \begin{pmatrix} x^{(i+1)} - x^{(i)} \\ u^{(i+1)} - u^{(i)} \end{pmatrix}$:
switching the roles of the primal and dual variables / problems. 28 / 39

40 Douglas-Rachford algorithm, version 2. Switching the update order $\to$ different iteration:
$0 \in \begin{pmatrix} \partial f(x^{(i+1)}) + u^{(i+1)} \\ -x^{(i+1)} + (\partial g)^{-1}(u^{(i+1)}) \end{pmatrix} + \begin{pmatrix} \frac{1}{\tau}\mathrm{Id} & +\mathrm{Id} \\ +\mathrm{Id} & \tau\,\mathrm{Id} \end{pmatrix} \begin{pmatrix} x^{(i+1)} - x^{(i)} \\ u^{(i+1)} - u^{(i)} \end{pmatrix}$
$\Leftrightarrow$
$u^{(i+1)} = \operatorname{prox}_{g^*/\tau}\big(u^{(i)} + \tfrac{1}{\tau} x^{(i)}\big)$,
$x^{(i+1)} = \operatorname{prox}_{\tau f}\big(x^{(i)} - \tau(2u^{(i+1)} - u^{(i)})\big)$. 29 / 39

41 ADMM. Switching the update order $\to$ different iteration:
$u^{(i+1)} = \operatorname{prox}_{g^*/\tau}\big(u^{(i)} + \tfrac{1}{\tau} x^{(i)}\big)$,
$x^{(i+1)} = \operatorname{prox}_{\tau f}\big(x^{(i)} - \tau(2u^{(i+1)} - u^{(i)})\big)$
$\Leftrightarrow$
$x^{(i)} = \operatorname{prox}_{\tau f}\big(z^{(i)} - \tau u^{(i)}\big)$,
$z^{(i+1)} = \operatorname{prox}_{\tau g}\big(x^{(i)} + \tau u^{(i)}\big)$,
$u^{(i+1)} = u^{(i)} + \tfrac{1}{\tau}\big(x^{(i)} - z^{(i+1)}\big)$.
This is ADMM. 29 / 39
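A minimal NumPy sketch of this ADMM form (my own toy example, the same problem as in the Douglas-Rachford sketch above), with the dual update scaled by $1/\tau$ as on the slide.

```python
import numpy as np

rng = np.random.default_rng(3)
b = rng.standard_normal(30)
lam, tau = 0.3, 1.0

prox_f = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)   # f = lam*||.||_1
prox_g = lambda v, t: (v + t * b) / (1.0 + t)                             # g = 0.5*||.- b||^2

z = np.zeros(30)
u = np.zeros(30)                        # dual variable
for i in range(200):
    x = prox_f(z - tau * u, tau)
    z = prox_g(x + tau * u, tau)
    u = u + (x - z) / tau
# At convergence x = z = soft-thresholding of b, the minimizer of f + g:
print(np.allclose(x, np.sign(b) * np.maximum(np.abs(b) - lam, 0.0), atol=1e-6))
```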

42 ADMM. Switching the update order $\to$ different iteration:
$z^{(i+1)} = \operatorname{prox}_{\tau g}(s^{(i)})$,
$x^{(i+1)} = \operatorname{prox}_{\tau f}(2z^{(i+1)} - s^{(i)})$,
$s^{(i+1)} = s^{(i)} + x^{(i+1)} - z^{(i+1)}$.
This is Douglas-Rachford like in slide 26, just with $f$ and $g$ exchanged: switching the roles of $f$ and $g$ in Douglas-Rachford. 30 / 39

43 Generalized Douglas-Rachford. Find $(\hat{x}, \hat{z}) \in \operatorname{Arg\,min}_{(x,z)} \{f(x) + g(z)\}$ s.t. $Lx + Kz = 0$.
Douglas-Rachford iteration:
$x^{(i+1)} \in \arg\min_x f(x) + \tfrac{1}{2\tau}\|Lx - s^{(i)}\|^2$,
$z^{(i+1)} \in \arg\min_z g(z) + \tfrac{1}{2\tau}\|Kz + (2Lx^{(i+1)} - s^{(i)})\|^2$,
$s^{(i+1)} = s^{(i)} - \rho\,(Lx^{(i+1)} + Kz^{(i+1)})$;
take $\rho = 1.9$! The ADMM is usually expressed with 1 or 2 linear operators, $L$ and $K$ here. We can write it in this completely equivalent Douglas-Rachford form. The advantage is that we can easily over-relax! 31 / 39

44 Chambolle-Pock algorithm. Find $\hat{x} \in \operatorname{Arg\,min}_x \{f(x) + g(Lx)\}$. Design an algorithm such that, $\forall i \in \mathbb{N}$,
$0 \in \begin{pmatrix} \partial f(x^{(i+1)}) + L^* u^{(i+1)} \\ -L x^{(i+1)} + (\partial g)^{-1}(u^{(i+1)}) \end{pmatrix} + \begin{pmatrix} \frac{1}{\tau}\mathrm{Id} & -L^* \\ -L & \frac{1}{\sigma}\mathrm{Id} \end{pmatrix} \begin{pmatrix} x^{(i+1)} - x^{(i)} \\ u^{(i+1)} - u^{(i)} \end{pmatrix}$.
Once we understand that Douglas-Rachford is just a primal-dual PPA, there is no price to pay for introducing a linear operator $L$ in the problem. It is the same as the $L$ in the previous slide (with $K = -\mathrm{Id}$), but this time we split. In other words, the Chambolle-Pock algorithm is a preconditioned ADMM/Douglas-Rachford. 32 / 39

45 Chambolle-Pock algorithm. Find $\hat{x} \in \operatorname{Arg\,min}_x \{f(x) + g(Lx)\}$. Chambolle-Pock algorithm:
$x^{(i+1)} = \operatorname{prox}_{\tau f}\big(x^{(i)} - \tau L^* u^{(i)}\big)$,
$u^{(i+1)} = \operatorname{prox}_{\sigma g^*}\big(u^{(i)} + \sigma L(2x^{(i+1)} - x^{(i)})\big)$. 33 / 39

46 Chambolle-Pock algorithm. Find $\hat{x} \in \operatorname{Arg\,min}_x \{f(x) + g(Lx)\}$. Chambolle-Pock algorithm:
$x^{(i+1)} = \operatorname{prox}_{\tau f}\big(x^{(i)} - \tau L^* u^{(i)}\big)$,
$u^{(i+1)} = \operatorname{prox}_{\sigma g^*}\big(u^{(i)} + \sigma L(2x^{(i+1)} - x^{(i)})\big)$.
Convergence if $\sigma\tau\|L\|^2 \le 1$: set $\sigma = 1/(\tau\|L\|^2)$. This condition is proved in my paper (JOTA 2013). This way, there is only one parameter, $\tau$, to tune, and we recover Douglas-Rachford exactly when $L = \mathrm{Id}$ (slide 25). 33 / 39

47 Chambolle-Pock algorithm. Find $\hat{x} \in \operatorname{Arg\,min}_x \{f(x) + g(Lx)\}$. Over-relaxed Chambolle-Pock algorithm:
$x^{(i+\frac{1}{2})} = \operatorname{prox}_{\tau f}\big(x^{(i)} - \tau L^* u^{(i)}\big)$,
$u^{(i+\frac{1}{2})} = \operatorname{prox}_{\sigma g^*}\big(u^{(i)} + \sigma L(2x^{(i+\frac{1}{2})} - x^{(i)})\big)$,
$x^{(i+1)} = x^{(i)} + \rho\,(x^{(i+\frac{1}{2})} - x^{(i)})$,
$u^{(i+1)} = u^{(i)} + \rho\,(u^{(i+\frac{1}{2})} - u^{(i)})$. 34 / 39

48 Chambolle-Pock algorithm. Find $\hat{x} \in \operatorname{Arg\,min}_x \{f(x) + g(Lx)\}$. Over-relaxed Chambolle-Pock algorithm:
$x^{(i+\frac{1}{2})} = \operatorname{prox}_{\tau f}\big(x^{(i)} - \tau L^* u^{(i)}\big)$,
$u^{(i+\frac{1}{2})} = \operatorname{prox}_{\sigma g^*}\big(u^{(i)} + \sigma L(2x^{(i+\frac{1}{2})} - x^{(i)})\big)$,
$x^{(i+1)} = x^{(i)} + \rho\,(x^{(i+\frac{1}{2})} - x^{(i)})$,
$u^{(i+1)} = u^{(i)} + \rho\,(u^{(i+\frac{1}{2})} - u^{(i)})$.
Convergence with $1 \le \rho < 2$: take $\rho = 1.9$! 34 / 39
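A minimal NumPy sketch (my own example, not from the talk) of this over-relaxed Chambolle-Pock iteration on a small 1-D total-variation denoising problem, with $f = \tfrac12\|\cdot - y\|^2$, $g = \mu\|\cdot\|_1$ and $L$ the finite-difference operator; the prox of $\sigma g^*$ is the projection onto the $\ell_\infty$ ball of radius $\mu$.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
y = (np.arange(n) > 50) + 0.1 * rng.standard_normal(n)   # noisy piecewise-constant signal
mu = 1.0

L = np.eye(n - 1, n, 1) - np.eye(n - 1, n)                # finite differences, ||L||^2 <= 4
prox_f = lambda v, t: (v + t * y) / (1.0 + t)             # f = 0.5*||x - y||^2
prox_gs = lambda v: np.clip(v, -mu, mu)                   # prox of sigma*g*, g = mu*||.||_1

tau = 0.5
sigma = 1.0 / (tau * 4.0)              # sigma*tau*||L||^2 <= 1
rho = 1.9                              # over-relaxation

x, u = y.copy(), np.zeros(n - 1)
for i in range(500):
    x_half = prox_f(x - tau * (L.T @ u), tau)
    u_half = prox_gs(u + sigma * (L @ (2 * x_half - x)))
    x, u = x + rho * (x_half - x), u + rho * (u_half - u)
print("TV objective:", 0.5 * np.linalg.norm(x - y) ** 2 + mu * np.abs(L @ x).sum())
```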

49 Preconditioned Chambolle-Pock algorithm. Find $\hat{x} \in \operatorname{Arg\,min}_x \{\tfrac12\|Ax - y\|^2 + f(x) + g(Lx)\}$. Primal-dual PPA:
$0 \in \begin{pmatrix} A^*Ax^{(i+1)} - A^*y + \partial f(x^{(i+1)}) + L^* u^{(i+1)} \\ -L x^{(i+1)} + (\partial g)^{-1}(u^{(i+1)}) \end{pmatrix} + \begin{pmatrix} \frac{1}{\tau}\mathrm{Id} - A^*A & -L^* \\ -L & \frac{1}{\sigma}\mathrm{Id} \end{pmatrix} \begin{pmatrix} x^{(i+1)} - x^{(i)} \\ u^{(i+1)} - u^{(i)} \end{pmatrix}$. 35 / 39

50 Preconditioned Chambolle-Pock algorithm. Find $\hat{x} \in \operatorname{Arg\,min}_x \{\tfrac12\|Ax - y\|^2 + f(x) + g(Lx)\}$. Primal-dual PPA:
$0 \in \begin{pmatrix} A^*Ax^{(i+1)} - A^*y + \partial f(x^{(i+1)}) + L^* u^{(i+1)} \\ -L x^{(i+1)} + (\partial g)^{-1}(u^{(i+1)}) \end{pmatrix} + \begin{pmatrix} \frac{1}{\tau}\mathrm{Id} - A^*A & -L^* \\ -L & \frac{1}{\sigma}\mathrm{Id} \end{pmatrix} \begin{pmatrix} x^{(i+1)} - x^{(i)} \\ u^{(i+1)} - u^{(i)} \end{pmatrix}$.
Convergence if $\tau < \frac{1}{\|A\|^2}$ and $\sigma = \frac{1/\tau - \|A\|^2}{\|L\|^2}$. 35 / 39

51 Preconditioned Chambolle-Pock algorithm. Find $\hat{x} \in \operatorname{Arg\,min}_x \{\tfrac12\|Ax - y\|^2 + f(x) + g(Lx)\}$. Preconditioned Chambolle-Pock algorithm:
$x^{(i+\frac{1}{2})} = \operatorname{prox}_{\tau f}\big(x^{(i)} - \tau L^* u^{(i)} - \tau(A^*Ax^{(i)} - A^*y)\big)$,
$u^{(i+\frac{1}{2})} = \operatorname{prox}_{\sigma g^*}\big(u^{(i)} + \sigma L(2x^{(i+\frac{1}{2})} - x^{(i)})\big)$,
$x^{(i+1)} = x^{(i)} + \rho\,(x^{(i+\frac{1}{2})} - x^{(i)})$,
$u^{(i+1)} = u^{(i)} + \rho\,(u^{(i+\frac{1}{2})} - u^{(i)})$.
This is exactly the algorithm I proposed (JOTA 2013), and exactly to solve this kind of regularized inverse problem. But if you view it as a preconditioned Chambolle-Pock algorithm instead of a primal-dual forward-backward algorithm, you can over-relax more! 36 / 39

52 Preconditioned Chambolle-Pock algorithm. Find $\hat{x} \in \operatorname{Arg\,min}_x \{\tfrac12\|Ax - y\|^2 + f(x) + g(Lx)\}$. Preconditioned Chambolle-Pock algorithm:
$x^{(i+\frac{1}{2})} = \operatorname{prox}_{\tau f}\big(x^{(i)} - \tau L^* u^{(i)} - \tau(A^*Ax^{(i)} - A^*y)\big)$,
$u^{(i+\frac{1}{2})} = \operatorname{prox}_{\sigma g^*}\big(u^{(i)} + \sigma L(2x^{(i+\frac{1}{2})} - x^{(i)})\big)$,
$x^{(i+1)} = x^{(i)} + \rho\,(x^{(i+\frac{1}{2})} - x^{(i)})$,
$u^{(i+1)} = u^{(i)} + \rho\,(u^{(i+\frac{1}{2})} - u^{(i)})$.
Convergence with $1 \le \rho < 2$: take $\rho = 1.9$! With the conditions in my paper you cannot choose 1.9; this is specific to the smooth term (in blue on the slide, the quadratic data term) being quadratic. 36 / 39
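A minimal NumPy sketch (mine; the problem and parameters are illustrative only) of this preconditioned iteration with $\rho = 1.9$, for a toy problem $\tfrac12\|Ax - y\|^2 + \lambda\|x\|_1 + \mu\|Lx\|_1$ with $L$ the finite-difference operator, using the step-size rule of the previous slide.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 40, 80
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = rng.standard_normal(m)
lam, mu = 0.1, 0.1

L = np.eye(n - 1, n, 1) - np.eye(n - 1, n)         # finite differences, ||L||^2 <= 4
prox_f = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)   # f = lam*||.||_1
prox_gs = lambda v: np.clip(v, -mu, mu)            # prox of sigma*g*, g = mu*||.||_1

nA2 = np.linalg.norm(A, 2) ** 2
tau = 0.9 / nA2                                    # tau < 1/||A||^2
sigma = (1.0 / tau - nA2) / 4.0                    # sigma = (1/tau - ||A||^2)/||L||^2
rho = 1.9                                          # allowed because the smooth term is quadratic

x, u = np.zeros(n), np.zeros(n - 1)
for i in range(2000):
    x_half = prox_f(x - tau * (L.T @ u) - tau * (A.T @ (A @ x - y)), tau)
    u_half = prox_gs(u + sigma * (L @ (2 * x_half - x)))
    x, u = x + rho * (x_half - x), u + rho * (u_half - u)
print("objective:", 0.5 * np.linalg.norm(A @ x - y) ** 2
      + lam * np.abs(x).sum() + mu * np.abs(L @ x).sum())
```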

53 Preconditioned PPA. Find $\hat{x} \in \operatorname{Arg\,min}_x \{\tfrac12\|Ax - y\|^2 + f(x)\}$.
$x^{(i+\frac{1}{2})} = \operatorname{prox}_{\tau f}\big(x^{(i)} - \tau(A^*Ax^{(i)} - A^*y)\big)$,
$x^{(i+1)} = x^{(i)} + \rho\,(x^{(i+\frac{1}{2})} - x^{(i)})$.
Convergence if $\tau < \frac{1}{\|A\|^2}$ and $1 \le \rho < 2$. Instead of $\tau = \frac{1.99}{\|A\|^2}$, $\rho = 1$, try $\tau = \frac{0.99}{\|A\|^2}$, $\rho = 1.9$! Here also, the forward-backward algorithm can be viewed as just preconditioning the proximity operator. 37 / 39

54 Preconditioned PPA. Find $\hat{x} \in \operatorname{Arg\,min}_x \{\tfrac12\|Ax - y\|^2 + f(x)\}$.
$x^{(i+\frac{1}{2})} = \operatorname{prox}_{\tau f}\big(x^{(i)} - \tau(A^*Ax^{(i)} - A^*y)\big)$,
$x^{(i+1)} = x^{(i)} + \rho\,(x^{(i+\frac{1}{2})} - x^{(i)})$.
Convergence if $\tau < \frac{1}{\|A\|^2}$ and $1 \le \rho < 2$. Instead of $\tau = \frac{1.99}{\|A\|^2}$, $\rho = 1$, try $\tau = \frac{0.99}{\|A\|^2}$, $\rho = 1.9$! There is a continuum between these two regimes (proof unfinished). 37 / 39
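A minimal NumPy sketch (mine) comparing the two parameter regimes suggested on the slide, on a lasso problem; after the same number of iterations one can compare the objective values reached.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((50, 100))
y = rng.standard_normal(50)
lam = 0.5
nA2 = np.linalg.norm(A, 2) ** 2

prox_l1 = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def run(tau, rho, iters=300):
    """Over-relaxed forward-backward / preconditioned PPA for 0.5*||Ax-y||^2 + lam*||x||_1."""
    x = np.zeros(100)
    for i in range(iters):
        x_half = prox_l1(x - tau * (A.T @ (A @ x - y)), tau * lam)
        x = x + rho * (x_half - x)
    return 0.5 * np.linalg.norm(A @ x - y) ** 2 + lam * np.abs(x).sum()

print("tau = 1.99/||A||^2, rho = 1.0:", run(1.99 / nA2, 1.0))
print("tau = 0.99/||A||^2, rho = 1.9:", run(0.99 / nA2, 1.9))
```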

55 Conclusion. You can and should over-relax the algorithms, even more when the smooth function is quadratic. RELAX! 38 / 39

56 Conclusion. You can and should over-relax the algorithms, even more when the smooth function is quadratic. No time to explain: the product-space trick to generalize $g \circ L$ to $\sum_{m=1}^{M} g_m \circ L_m$; applications to other splitting methods; designing an algorithm = playing LEGO... See the two bonus slides for the Chambolle-Pock algorithm (one of the two forms; the other one is in my paper JOTA 2013, Algorithm 5.2). 39 / 39

57 The product-space trick. Minimize $\sum_{m=1}^{M} g_m(L_m x)$ $\Leftrightarrow$ minimize $g(Lx)$, with $g : U_1 \times \cdots \times U_M \to \mathbb{R} \cup \{+\infty\}$, $(z_1, \ldots, z_M) \mapsto \sum_{m=1}^{M} g_m(z_m)$, and $L : X \to U_1 \times \cdots \times U_M$, $x \mapsto (L_1 x, \ldots, L_M x)$. 41 / 39
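A minimal NumPy sketch (mine) of this construction: the stacked operator $L$ and the separable prox of $g$, applied blockwise; the helper names are arbitrary.

```python
import numpy as np

def make_stacked(Ls):
    """L : x -> (L_1 x, ..., L_M x), assembled here as one tall matrix."""
    return np.vstack(Ls)

def prox_g(z_blocks, proxes, t):
    """prox of g(z_1,...,z_M) = sum_m g_m(z_m): apply each prox_{t g_m} to its own block."""
    return [p(z, t) for p, z in zip(proxes, z_blocks)]

# Example: g_1 = ||.||_1 composed with L_1 = Id, g_2 = ||.||_1 composed with L_2 = differences
n = 10
L1 = np.eye(n)
L2 = np.eye(n - 1, n, 1) - np.eye(n - 1, n)
L = make_stacked([L1, L2])
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.arange(n, dtype=float)
z = [L1 @ x, L2 @ x]
print(prox_g(z, [soft, soft], 0.5))
```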

58 Chambolle-Pock algorithm. Minimize $f(x) + \sum_{m=1}^{M} g_m(L_m x)$. Over-relaxed Chambolle-Pock algorithm: iterate, for $i \in \mathbb{N}$,
$x^{(i+\frac{1}{2})} := \operatorname{prox}_{\tau f}\big(x^{(i)} - \tau \sum_{m=1}^{M} L_m^* u_m^{(i)}\big)$,
$x^{(i+1)} := x^{(i)} + \rho\,(x^{(i+\frac{1}{2})} - x^{(i)})$,
for $m = 1, \ldots, M$:
$u_m^{(i+\frac{1}{2})} := \operatorname{prox}_{\sigma g_m^*}\big(u_m^{(i)} + \sigma L_m(2x^{(i+\frac{1}{2})} - x^{(i)})\big)$,
$u_m^{(i+1)} := u_m^{(i)} + \rho\,(u_m^{(i+\frac{1}{2})} - u_m^{(i)})$. 42 / 39
