
1 A random block-coordinate primal-dual proximal algorithm with application to 3D mesh denoising
Emilie Chouzenoux
Laboratoire d'Informatique Gaspard Monge - CNRS, Univ. Paris-Est, France
Horizon Maths, 16 December 2014

2 In collaboration with J.-C. Pesquet and A. Repetti

3 Introduction

4-10 Valued graphs

$V = \{ v^{(i)} \mid i \in \{1,\ldots,M\} \}$: set of vertices = objects, with $v^{(i)} \in V$ for every $i \in \{1,\ldots,M\}$.
$E = \{ e^{(i,j)} \mid (i,j) \in \mathbb{E} \}$: set of edges = object relationships, with $e^{(i,j)} \in E$ for every $(i,j) \in \mathbb{E}$.

- directed reflexive graph: $|\mathbb{E}| \le M^2$
- directed nonreflexive graph: $|\mathbb{E}| \le M(M-1)$
- undirected nonreflexive graph: $|\mathbb{E}| \le M(M-1)/2$

$(x^{(i)})_{1 \le i \le M}$: weights on the vertices (scalars or vectors).
$(x^{(i,j)})_{(i,j) \in \mathbb{E}}$: weights on the edges (scalars or vectors).
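To make the notation concrete, here is a toy valued graph in Python (an illustrative sketch of mine; the container choices are not from the slides):

```python
# Toy valued graph: M = 3 objects, undirected nonreflexive relationships,
# so |E| <= M(M-1)/2 = 3. Vertex weights are scalars, edge weights are vectors.
V = ["v1", "v2", "v3"]                           # vertices = objects
E = [("v1", "v2"), ("v2", "v3"), ("v1", "v3")]   # edges = object relationships
x_vertex = {"v1": 0.7, "v2": -1.2, "v3": 0.1}    # (x^(i))_{1<=i<=M}
x_edge = {e: [0.0, 1.0] for e in E}              # (x^(i,j))_{(i,j) in E}
```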

11-12 Variational formulation

Objective function: the cost of a given choice of the weights is evaluated by $\Phi\big( (x^{(i)})_{1\le i\le M}, (x^{(i,j)})_{(i,j)\in\mathbb{E}} \big) = \Phi(x)$, where
\[ x = \begin{bmatrix} (x^{(i)})_{1\le i\le M} \\ (x^{(i,j)})_{(i,j)\in\mathbb{E}} \end{bmatrix} \in H, \]
$H$ is a separable real Hilbert space, and $\Phi \in \Gamma_0(H)$, the class of convex lower-semicontinuous functions from $H$ to $]-\infty,+\infty]$ with a nonempty domain.

Example: for scalar weights, $H = \mathbb{R}^N$ with $N = M + |\mathbb{E}|$.

Problem: how can this very large-scale minimization problem be solved efficiently?
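In the scalar-weight example above, the optimization variable is simply the stacked vector of vertex and edge weights; a minimal illustration:

```python
import numpy as np

# Stack scalar vertex weights and edge weights into x in R^N, N = M + |E|
x_v = np.array([0.7, -1.2, 0.1])        # (x^(i))_{1<=i<=M}, M = 3
x_e = np.array([0.3, 0.0, -0.5])        # (x^(i,j))_{(i,j) in E}, |E| = 3
x = np.concatenate([x_v, x_e])          # x in H = R^6
```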

13-16 First trick: parallel splitting

Split $\Phi$ into simpler building blocks, that is,
\[ (\forall x \in H) \qquad \Phi(x) = f(x) + h(x) + \sum_{k=1}^{q} (g_k \,\square\, l_k)(L_k x) \]
where
- $f \in \Gamma_0(H)$,
- $h$ is a convex, $\mu$-Lipschitz differentiable function with $\mu \in ]0,+\infty[$,
- for every $k \in \{1,\ldots,q\}$, $g_k \in \Gamma_0(G_k)$, with $G_k$ a separable real Hilbert space,
- $l_k \in \Gamma_0(G_k)$ is $\nu_k$-strongly convex with $\nu_k \in ]0,+\infty[$,
- $L_k \colon H \to G_k$ is linear and bounded,
- $g_k \,\square\, l_k$ denotes the inf-convolution of $g_k$ and $l_k$:
\[ (\forall v_k \in G_k) \qquad (g_k \,\square\, l_k)(v_k) = \inf_{v'_k \in G_k} g_k(v'_k) + l_k(v_k - v'_k), \]
  so that, in particular, $g_k \,\square\, \iota_{\{0\}} = g_k$.

Difficulties:
- large-size optimization problem,
- the functions $f$, $(g_k)_{1\le k\le q}$, or $(l_k)_{1\le k\le q}$ are often nonsmooth (indicator functions of constraint sets, sparsity measures, ...),
- the linear operator inversions required by standard optimization methods (e.g. ADMM) are difficult to perform due to the form of the operators $(L_k)_{1\le k\le q}$ (e.g. weighted incidence matrices).
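As an illustration (a standard special case, not spelled out on the slides), a constrained TV-like denoising problem fits this template with
\[ f = \iota_C, \qquad h = \tfrac{1}{2}\|\cdot - z\|^2, \qquad g_k = \eta_k \|\cdot\|, \qquad l_k = \iota_{\{0\}}, \qquad L_k \text{ a discrete difference operator,} \]
and replacing $l_k = \iota_{\{0\}}$ by $l_k = \tfrac{\nu_k}{2}\|\cdot\|^2$ turns $g_k \,\square\, l_k$ into the Moreau envelope of $g_k$ with parameter $1/\nu_k$, i.e. a smoothed version of $g_k$.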

17-18 Second trick: primal-dual strategy

Dualize the problem. Let $H$ be a Hilbert space and $f \colon H \to ]-\infty,+\infty]$. The conjugate of $f$ is $f^* \colon H \to [-\infty,+\infty]$ such that
\[ (\forall u \in H) \qquad f^*(u) = \sup_{x \in H} \big( \langle x \mid u \rangle - f(x) \big). \]
[Portraits: Adrien-Marie Legendre and Werner Fenchel]
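Some classical conjugate pairs, recalled here for reference (not on the original slide):
\[ \Big(\tfrac{1}{2}\|\cdot\|^2\Big)^* = \tfrac{1}{2}\|\cdot\|^2, \qquad \iota_C^* = \sup_{x \in C} \langle x \mid \cdot \rangle \ \text{(the support function of } C\text{)}, \qquad \big(\|\cdot\|_1\big)^* = \iota_{\{\|\cdot\|_\infty \le 1\}}. \]
In algorithms, the conjugate is typically accessed through Moreau's decomposition $x = \operatorname{prox}_f(x) + \operatorname{prox}_{f^*}(x)$, valid for every $f \in \Gamma_0(H)$.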

19 Conjugate versus Fourier transform

| Property | $h(x)$ | $h^*(u)$ | $h(x)$ | $\hat{h}(\nu)$ |
| invariant function | $\frac{1}{2}\|x\|^2$ | $\frac{1}{2}\|u\|^2$ | $e^{-\pi \|x\|^2}$ | $e^{-\pi \|\nu\|^2}$ |
| translation, $c \in H$ | $f(x-c)$ | $f^*(u) + \langle u \mid c \rangle$ | $f(x-c)$ | $e^{-\jmath 2\pi \langle \nu \mid c \rangle} \hat{f}(\nu)$ |
| dual translation, $c \in H$ | $f(x) + \langle x \mid c \rangle$ | $f^*(u-c)$ | $e^{\jmath 2\pi \langle x \mid c \rangle} f(x)$ | $\hat{f}(\nu-c)$ |
| scalar multiplication, $\alpha \in ]0,+\infty[$ | $\alpha f(x)$ | $\alpha f^*(u/\alpha)$ | $\alpha f(x)$ | $\alpha \hat{f}(\nu)$ |
| scaling, $\alpha \in \mathbb{R}^*$ | $f(x/\alpha)$ | $f^*(\alpha u)$ | $f(x/\alpha)$ | $|\alpha| \hat{f}(\alpha \nu)$ |
| isomorphism, $L \in B(G,H)$ | $f(Lx)$ | $f^*(L^{-*} u)$ | $f(Lx)$ | $\frac{1}{|\det(L)|} \hat{f}(L^{-*} \nu)$ |
| reflection | $f(-x)$ | $f^*(-u)$ | $f(-x)$ | $\hat{f}(-\nu)$ |
| separability, $x = (x^{(n)})_{1\le n\le N}$ | $\sum_{n=1}^{N} \varphi_n(x^{(n)})$ | $\sum_{n=1}^{N} \varphi_n^*(u^{(n)})$ | $\prod_{n=1}^{N} \varphi_n(x^{(n)})$ | $\prod_{n=1}^{N} \hat{\varphi}_n(\nu^{(n)})$ |
| isotropy | $\psi(\|x\|)$ | $\psi^*(\|u\|)$ | $\psi(\|x\|)$ | $\tilde{\psi}(\|\nu\|)$ |
| inf-convolution / convolution | $(f \,\square\, g)(x)$ | $f^*(u) + g^*(u)$ | $(f * g)(x) = \int f(y)\, g(x-y)\, dy$ | $\hat{f}(\nu)\, \hat{g}(\nu)$ |
| sum / product, $(f,g) \in (\Gamma_0(H))^2$, $\operatorname{dom} f \cap \operatorname{dom} g \neq \varnothing$ | $f(x) + g(x)$ | $(f^* \,\square\, g^*)(u)$ | $f(x)\, g(x)$ | $(\hat{f} * \hat{g})(\nu)$ |
| identity element of (inf-)convolution | $\iota_{\{0\}}(x)$ | $0$ | $\delta(x)$ | $1$ |
| identity element of addition / product | $0$ | $\iota_{\{0\}}(u)$ | $1$ | $\delta(\nu)$ |

(in the separability row, $u = (u^{(n)})_{1\le n\le N}$ and $\nu = (\nu^{(n)})_{1\le n\le N}$)

20 Primal-dual formulation

Find an element of the set $F$ of solutions to the primal problem
\[ \underset{x \in H}{\text{minimize}}\;\; f(x) + h(x) + \sum_{k=1}^{q} (g_k \,\square\, l_k)(L_k x) \]
and an element of the set $F^*$ of solutions to the dual problem
\[ \underset{v_1 \in G_1,\ldots,v_q \in G_q}{\text{minimize}}\;\; (f^* \,\square\, h^*)\Big( -\sum_{k=1}^{q} L_k^* v_k \Big) + \sum_{k=1}^{q} \big( g_k^*(v_k) + l_k^*(v_k) \big). \]
We assume that there exists $x \in H$ such that
\[ 0 \in \partial f(x) + \nabla h(x) + \sum_{k=1}^{q} L_k^* \, (\partial g_k \,\square\, \partial l_k)(L_k x). \]
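As a worked example (standard, not from the slides): with $f = 0$, $h = \tfrac{1}{2}\|\cdot - z\|^2$, $q = 1$, $g = \eta\|\cdot\|_1$, $l = \iota_{\{0\}}$, and $L = D$ a difference operator, one has $f^* = \iota_{\{0\}}$, hence $f^* \,\square\, h^* = h^*$ with $h^*(u) = \tfrac{1}{2}\|u\|^2 + \langle u \mid z \rangle$, and $g^* = \iota_{\{\|\cdot\|_\infty \le \eta\}}$, so the dual problem reduces, up to an additive constant, to
\[ \underset{\|v\|_\infty \le \eta}{\text{minimize}}\;\; \tfrac{1}{2} \big\| D^* v - z \big\|^2. \]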

21-23 Parallel proximal primal-dual algorithm

Algorithm 1
for $n = 0, 1, \ldots$
  $y_n = \operatorname{prox}_f^{W^{-1}}\Big( x_n - W \Big( \sum_{k=1}^{q} L_k^* v_{k,n} + \nabla h(x_n) \Big) \Big)$
  $x_{n+1} = x_n + \lambda_n (y_n - x_n)$
  for $k = 1, \ldots, q$
    $u_{k,n} = \operatorname{prox}_{g_k^*}^{U_k^{-1}}\Big( v_{k,n} + U_k \big( L_k (2 y_n - x_n) - \nabla l_k^*(v_{k,n}) \big) \Big)$
    $v_{k,n+1} = v_{k,n} + \lambda_n (u_{k,n} - v_{k,n})$

where
- $W \colon H \to H$ and, for every $k \in \{1,\ldots,q\}$, $U_k \colon G_k \to G_k$ are strongly positive self-adjoint bounded linear operators such that
\[ \Big( 1 - \Big\| \sum_{k=1}^{q} U_k^{1/2} L_k W^{1/2} \Big\|^2 \Big)^{1/2} \min\Big\{ \|W\|^{-1} \mu^{-1},\ \big( \|U_k\|^{-1} \nu_k \big)_{1 \le k \le q} \Big\} > \frac{1}{2}, \]
- $\operatorname{prox}_f^{W^{-1}}$ is the proximity operator of $f$ in $(H, \|\cdot\|_{W^{-1}})$:
\[ (\forall x \in H) \qquad \operatorname{prox}_f^{W^{-1}}(x) = \underset{y \in H}{\operatorname{argmin}}\ f(y) + \frac{1}{2} \|y - x\|_{W^{-1}}^2, \]
  and $\operatorname{prox}_{g_k^*}^{U_k^{-1}}$ is the proximity operator of $g_k^*$ in $(G_k, \|\cdot\|_{U_k^{-1}})$,
- $(\forall n \in \mathbb{N})\ \lambda_n \in ]0,1]$, with $\inf_{n \in \mathbb{N}} \lambda_n > 0$.
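A minimal runnable sketch of Algorithm 1, assuming the scalar preconditioners $W = \tau \operatorname{Id}$ and $U_1 = \sigma \operatorname{Id}$ and the toy problem $\min_x \tfrac{1}{2}\|x - z\|^2 + \eta\|Dx\|_1$ (so $f = 0$, $h = \tfrac{1}{2}\|\cdot - z\|^2$ with $\mu = 1$, $q = 1$, $g_1 = \eta\|\cdot\|_1$, $l_1 = \iota_{\{0\}}$); everything except the update structure is an illustrative choice of mine:

```python
import numpy as np

def primal_dual_tv(z, eta, n_iter=500):
    """Sketch of Algorithm 1 on min_x 0.5*||x - z||^2 + eta*||D x||_1 (1D signal)."""
    M = z.size
    D = np.diff(np.eye(M), axis=0)       # L_1 = D, first-order differences, ||D||^2 <= 4
    tau = 0.2                            # W = tau*Id
    sigma = 1.0                          # U_1 = sigma*Id; standard scalar rule
                                         # 1/tau - sigma*||D||^2 >= mu/2 holds: 5 - 4 >= 0.5
    x, v = z.copy(), np.zeros(M - 1)
    for _ in range(n_iter):
        # primal step: prox of f = 0 is the identity
        y = x - tau * (D.T @ v + (x - z))
        # dual step: prox of g_1^* = projection onto [-eta, eta] (Moreau decomposition)
        u = np.clip(v + sigma * (D @ (2 * y - x)), -eta, eta)
        x, v = y, u                      # relaxation lambda_n = 1
    return x

x_hat = primal_dual_tv(np.cumsum(np.random.default_rng(0).normal(size=100)), eta=1.0)
```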

24-25 Parallel proximal primal-dual algorithms

Advantages:
- No linear operator inversion.
- Use of proximable and/or differentiable functions.
- Use of preconditioning linear operators.

Bibliographical remarks:
- methods based on the forward-backward iteration:
  - type I: [Vũ, 2013], [Condat, 2013] (extensions of [Esser et al., 2010], [Chambolle, Pock, 2011])
  - type II: [Combettes et al., 2014] (extensions of [Loris, Verhoeven, 2011], [Chen et al., 2014])
- methods based on the forward-backward-forward iteration [Combettes, Pesquet, 2012]
- projection-based methods [Alotaibi et al., 2013]
- ...

26-28 Third trick: block-coordinate strategy

Split the variable: $x = (x_1,\ldots,x_p) \in H_1 \oplus \cdots \oplus H_p = H$, where $H_1,\ldots,H_p$ are separable real Hilbert spaces. At each iteration $n$, update only a subset of the components (a Gauss-Seidel-like strategy).

Advantage: reduced complexity and memory requirements per iteration, which is useful for large-scale optimization.

Assumptions: $f(x) = \sum_{j=1}^{p} f_j(x_j)$ and $h(x) = \sum_{j=1}^{p} h_j(x_j)$, where, for every $j \in \{1,\ldots,p\}$, $f_j \in \Gamma_0(H_j)$ and $h_j$ is convex and $\mu_j$-Lipschitz differentiable with $\mu_j \in ]0,+\infty[$. In addition, for every $k \in \{1,\ldots,q\}$, $L_k x = \sum_{j=1}^{p} L_{k,j} x_j$, where $L_{k,j} \colon H_j \to G_k$ is linear and bounded.

29 Primal-dual variational formulation

Find an element of the set $F$ of solutions to the primal problem
\[ \underset{x_1 \in H_1,\ldots,x_p \in H_p}{\text{minimize}}\;\; \sum_{j=1}^{p} \big( f_j(x_j) + h_j(x_j) \big) + \sum_{k=1}^{q} (g_k \,\square\, l_k)\Big( \sum_{j=1}^{p} L_{k,j} x_j \Big) \]
and an element of the set $F^*$ of solutions to the dual problem
\[ \underset{v_1 \in G_1,\ldots,v_q \in G_q}{\text{minimize}}\;\; \sum_{j=1}^{p} (f_j^* \,\square\, h_j^*)\Big( -\sum_{k=1}^{q} L_{k,j}^* v_k \Big) + \sum_{k=1}^{q} \big( g_k^*(v_k) + l_k^*(v_k) \big). \]
We assume that there exists $(x_1,\ldots,x_p) \in H_1 \oplus \cdots \oplus H_p$ such that
\[ (\forall j \in \{1,\ldots,p\}) \qquad 0 \in \partial f_j(x_j) + \nabla h_j(x_j) + \sum_{k=1}^{q} L_{k,j}^* \, (\partial g_k \,\square\, \partial l_k)\Big( \sum_{j'=1}^{p} L_{k,j'} x_{j'} \Big). \]

30-33 Random block-coordinate primal-dual algorithm

Algorithm 2
for $n = 0, 1, \ldots$
  for $j = 1, \ldots, p$
    $y_{j,n} = \varepsilon_{j,n} \Big( \operatorname{prox}_{f_j}^{W_j^{-1}}\Big( x_{j,n} - W_j \Big( \sum_{k \in \mathbb{L}_j^*} L_{k,j}^* v_{k,n} + \nabla h_j(x_{j,n}) + c_{j,n} \Big) \Big) + a_{j,n} \Big)$
    $x_{j,n+1} = x_{j,n} + \lambda_n \varepsilon_{j,n} (y_{j,n} - x_{j,n})$
  for $k = 1, \ldots, q$
    $u_{k,n} = \varepsilon_{p+k,n} \Big( \operatorname{prox}_{g_k^*}^{U_k^{-1}}\Big( v_{k,n} + U_k \Big( \sum_{j \in \mathbb{L}_k} L_{k,j} (2 y_{j,n} - x_{j,n}) - \nabla l_k^*(v_{k,n}) + d_{k,n} \Big) \Big) + b_{k,n} \Big)$
    $v_{k,n+1} = v_{k,n} + \lambda_n \varepsilon_{p+k,n} (u_{k,n} - v_{k,n})$

where
- $(\varepsilon_n)_{n \in \mathbb{N}}$: identically distributed $\mathsf{D}$-valued random variables with $\mathsf{D} = \{0,1\}^{p+q} \setminus \{\mathbf{0}\}$; these binary variables signal the blocks to be activated,
- $x_0$, $(a_n)_{n \in \mathbb{N}}$, and $(c_n)_{n \in \mathbb{N}}$: $H$-valued random variables; $v_0$, $(b_n)_{n \in \mathbb{N}}$, and $(d_n)_{n \in \mathbb{N}}$: $G$-valued random variables with $G = G_1 \oplus \cdots \oplus G_q$; the sequences $(a_n)_{n \in \mathbb{N}}$, $(b_n)_{n \in \mathbb{N}}$, $(c_n)_{n \in \mathbb{N}}$, and $(d_n)_{n \in \mathbb{N}}$ model error terms,
- $(\forall k \in \{1,\ldots,q\})\ \mathbb{L}_k = \{ j \in \{1,\ldots,p\} \mid L_{k,j} \neq 0 \}$ and $(\forall j \in \{1,\ldots,p\})\ \mathbb{L}_j^* = \{ k \in \{1,\ldots,q\} \mid L_{k,j} \neq 0 \}$,
- for every $j \in \{1,\ldots,p\}$, $W_j \colon H_j \to H_j$, and, for every $k \in \{1,\ldots,q\}$, $U_k \colon G_k \to G_k$, are strongly positive self-adjoint bounded linear operators such that
\[ \Big( 1 - \sum_{j=1}^{p} \Big\| \sum_{k=1}^{q} U_k^{1/2} L_{k,j} W_j^{1/2} \Big\|^2 \Big)^{1/2} \min\Big\{ \big( \|W_j\|^{-1} \mu_j^{-1} \big)_{1 \le j \le p},\ \big( \|U_k\|^{-1} \nu_k \big)_{1 \le k \le q} \Big\} > \frac{1}{2}, \]
- $\operatorname{prox}_{f_j}^{W_j^{-1}}$: proximity operator of $f_j$ in $(H_j, \|\cdot\|_{W_j^{-1}})$; $\operatorname{prox}_{g_k^*}^{U_k^{-1}}$: proximity operator of $g_k^*$ in $(G_k, \|\cdot\|_{U_k^{-1}})$.
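Below is a small sketch (my own illustration, not code from the authors) of how the activation variables $\varepsilon_n \in \mathsf{D}$ can be drawn so that every primal block coupled with an activated dual block is activated too, which is condition 3 of the convergence theorem on the next slide:

```python
import numpy as np

def draw_activation(p, q, L_blocks, prob=0.5, rng=np.random.default_rng(0)):
    """Draw eps = (eps_primal, eps_dual) in D = {0,1}^(p+q) minus the zero vector.
    L_blocks[k] lists the primal indices j with L_{k,j} != 0 (the set L_k)."""
    eps_dual = rng.random(q) < prob                  # activate each dual block w.p. prob
    if not eps_dual.any():                           # force one block to avoid eps = 0
        eps_dual[rng.integers(q)] = True
    eps_primal = np.zeros(p, dtype=bool)
    for k in np.flatnonzero(eps_dual):
        eps_primal[L_blocks[k]] = True               # coupled primal blocks must follow
    return eps_primal, eps_dual

# e.g. p = 4 primal blocks, q = 2 dual blocks, L_1 = {0, 1}, L_2 = {2, 3}
eps_p, eps_d = draw_activation(4, 2, [[0, 1], [2, 3]])
```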

34 Random block-coordinate primal-dual algorithm

Theorem. Set $(\forall n \in \mathbb{N})\ \mathcal{X}_n = \sigma\big( (x_{n'}, v_{n'})_{0 \le n' \le n} \big)$ and $\mathcal{E}_n = \sigma(\varepsilon_n)$. Assume that:
1. $\sum_{n \in \mathbb{N}} \sqrt{\mathsf{E}(\|a_n\|^2 \mid \mathcal{X}_n)} < +\infty$, $\sum_{n \in \mathbb{N}} \sqrt{\mathsf{E}(\|b_n\|^2 \mid \mathcal{X}_n)} < +\infty$, $\sum_{n \in \mathbb{N}} \sqrt{\mathsf{E}(\|c_n\|^2 \mid \mathcal{X}_n)} < +\infty$, and $\sum_{n \in \mathbb{N}} \sqrt{\mathsf{E}(\|d_n\|^2 \mid \mathcal{X}_n)} < +\infty$ $\mathsf{P}$-a.s.
2. For every $n \in \mathbb{N}$, $\mathcal{E}_n$ and $\mathcal{X}_n$ are independent, and $(\forall k \in \{1,\ldots,q\})\ \mathsf{P}[\varepsilon_{p+k,0} = 1] > 0$.
3. For every $j \in \{1,\ldots,p\}$ and $n \in \mathbb{N}$, $\bigcup_{k \in \mathbb{L}_j^*} \{ \omega \in \Omega \mid \varepsilon_{p+k,n}(\omega) = 1 \} \subset \{ \omega \in \Omega \mid \varepsilon_{j,n}(\omega) = 1 \}$ (every primal block coupled with an activated dual block is itself activated).

Then $(x_n)_{n \in \mathbb{N}}$ converges weakly $\mathsf{P}$-a.s. to an $F$-valued random variable, and $(v_n)_{n \in \mathbb{N}}$ converges weakly $\mathsf{P}$-a.s. to an $F^*$-valued random variable.

35 Application to 3D mesh denoising

36 Mesh denoising problem

Undirected nonreflexive graph:
- $V$: set of vertices of the mesh,
- $E$: set of edges of the mesh,
- $x = (x^{(i)})_{1 \le i \le M}$ where, for every $i \in \{1,\ldots,M\}$, $x^{(i)} \in \mathbb{R}^3$ gathers the 3D coordinates of the $i$-th vertex of the object, so that $H = \mathbb{R}^{3M}$.

Cost function:
\[ \Phi(x) = \sum_{j=1}^{M} \Big( \iota_{C_j}(x^{(j)}) + \psi_j(x^{(j)} - z^{(j)}) + \eta_j \big\| (x^{(j)} - x^{(i)})_{i \in \mathcal{N}_j} \big\|_{1,2} \Big) \]
where, for every $j \in \{1,\ldots,M\}$:
- $C_j$: nonempty convex subset of $\mathbb{R}^3$,
- $\psi_j \colon \mathbb{R}^3 \to \mathbb{R}$: convex, Lipschitz-differentiable function,
- $z^{(j)}$: measured 3D coordinates of the $j$-th vertex,
- $\mathcal{N}_j$: neighborhood of the $j$-th vertex,
- $(\eta_j)_{1 \le j \le M}$: nonnegative regularization constants.
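A direct transcription of this cost into code for a toy mesh, as a sketch under assumptions of mine (box constraints $C_j$, a common Huber data-fidelity $\psi_j$ with parameter $\delta$, and a uniform $\eta_j \equiv \eta$):

```python
import numpy as np

def huber(r, delta):
    """l2-l1 Huber penalty applied to each 3D residual row of r, then summed."""
    nrm = np.linalg.norm(r, axis=1)
    return np.where(nrm <= delta, 0.5 * nrm**2, delta * nrm - 0.5 * delta**2).sum()

def mesh_cost(x, z, neighbors, eta, lo, hi, delta=1.0):
    """Phi(x) for x, z of shape (M, 3); neighbors[j] lists the indices in N_j."""
    if np.any(x < lo) or np.any(x > hi):
        return np.inf                                      # iota_{C_j}(x^(j)) terms
    cost = huber(x - z, delta)                             # psi_j(x^(j) - z^(j)) terms
    for j, Nj in enumerate(neighbors):
        diffs = x[j] - x[Nj]                               # (|N_j|, 3) differences
        cost += eta * np.linalg.norm(diffs, axis=1).sum()  # ||.||_{1,2} term
    return cost
```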

37 Mesh denoising problem

Implementation details:
- a block corresponds to a vertex, so $p = M$,
- $(\forall j \in \{1,\ldots,M\})\ f_j = \iota_{C_j}$, where $C_j$ is a box constraint,
- $(\forall j \in \{1,\ldots,M\})\ h_j = \psi_j(\cdot - z^{(j)})$, an $\ell_2$-$\ell_1$ Huber function, yielding a robust data-fidelity measure,
- $q = M$ and $(\forall k \in \{1,\ldots,M\})(\forall x \in H)\ g_k(L_k x) = \eta_k \big\| (x^{(k)} - x^{(i)})_{i \in \mathcal{N}_k} \big\|_{1,2}$,
- $(\forall k \in \{1,\ldots,M\})\ l_k = \iota_{\{0\}}$.

Simulation scenario:
- $E = E_1 \cup E_2$ with $E_1 \cap E_2 = \varnothing$,
- additive independent noise with distribution $\mathcal{N}(0, \sigma_1^2)$ on $E_1$ and $\pi \mathcal{N}(0, \sigma_2^2) + (1-\pi) \mathcal{N}(0, \tilde{\sigma}_2^2)$ on $E_2$, with $\pi \in (0,1)$,
- probability of variable activation:
\[ (\forall j \in \{1,\ldots,M\})(\forall n \in \mathbb{N}) \qquad \mathsf{P}(\varepsilon_{j,n} = 1) = \begin{cases} \mathsf{p} & \text{if } j \in E_1 \\ 1 & \text{otherwise.} \end{cases} \]
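In the dual updates, $\operatorname{prox}_{g_k^*}$ follows from $\operatorname{prox}_{g_k}$ via Moreau's decomposition recalled earlier, and the prox of the $\ell_{1,2}$ norm is groupwise soft-thresholding; a sketch (parameter names are mine):

```python
import numpy as np

def prox_l12(v, t):
    """Prox of t * ||.||_{1,2} on a stack of 3D vectors v of shape (n, 3):
    each row is shrunk toward 0 by t in Euclidean norm (group soft-thresholding)."""
    nrm = np.linalg.norm(v, axis=1, keepdims=True)
    return np.maximum(1.0 - t / np.maximum(nrm, 1e-16), 0.0) * v
```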

38 Simulation results

[Figure: original mesh and noisy mesh; $\sigma_1 = 10^{-3}$, $\pi = 0.98$, $|E_1| = 6844$, $|E_2| = \ldots$, $(\sigma_2, \tilde{\sigma}_2) = (5 \times 10^{-3}, \ldots)$, MSE $= \ldots$.]

39 Simulation results

[Figure: proposed reconstruction (MSE $= \ldots$) and Laplacian smoothing (MSE $= \ldots$).]

40 Complexity

[Figure: plot of the complexity $C(\mathsf{p})$ versus $\mathsf{p}$.]

41 Simulation results

[Figure: original mesh and noisy mesh; $\sigma_1 = \ldots$, $\pi = 0.95$, $|E_1| = 18492$, $|E_2| = \ldots$, $(\sigma_2, \tilde{\sigma}_2) = (2 \times 10^{-3}, \ldots)$, MSE $= \ldots$.]

42 Simulation results

[Figure: proposed reconstruction (MSE $= \ldots$) and Laplacian smoothing (MSE $= \ldots$).]

43 Complexity

[Figure: plot of the complexity $C(\mathsf{p})$ versus $\mathsf{p}$.]

44-45 Conclusion

- No linear operator inversion.
- Flexibility in the random activation of the primal/dual components.
- Possibility of addressing problems other than denoising [Couprie et al., 2013].

Fourth trick: ... employ asynchronous distributed strategies.

46 Some references

- P. L. Combettes and J.-C. Pesquet, "Proximal splitting methods in signal processing," in Fixed-Point Algorithms for Inverse Problems in Science and Engineering (H. H. Bauschke, R. Burachik, P. L. Combettes, V. Elser, D. R. Luke, and H. Wolkowicz, eds.), Springer-Verlag, New York, 2011.
- C. Couprie, L. Grady, L. Najman, J.-C. Pesquet, and H. Talbot, "Dual constrained TV-based regularization on graphs," SIAM Journal on Imaging Sciences, vol. 6, no. 3, 2013.
- P. L. Combettes, L. Condat, J.-C. Pesquet, and B. C. Vũ, "A forward-backward view of some primal-dual optimization methods in image recovery," IEEE International Conference on Image Processing (ICIP 2014), 5 p., Paris, France, Oct. 2014.
- P. L. Combettes and J.-C. Pesquet, "Stochastic quasi-Fejér block-coordinate fixed point iterations with random sweeping," 2014.
- N. Komodakis and J.-C. Pesquet, "Playing with duality: An overview of recent primal-dual approaches for solving large-scale optimization problems," to appear in IEEE Signal Processing Magazine.
- J.-C. Pesquet and A. Repetti, "A class of randomized primal-dual algorithms for distributed optimization," to appear in Journal of Nonlinear and Convex Analysis.
- A. Repetti, E. Chouzenoux, and J.-C. Pesquet, "A random block-coordinate primal-dual proximal algorithm with application to 3D mesh denoising," submitted to IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2015).
