Inexact Alternating Direction Method of Multipliers for Separable Convex Optimization

1 Inexact Alternating Direction Method of Multipliers for Separable Convex Optimization
Hongchao Zhang, Department of Mathematics and Center for Computation and Technology, Louisiana State University
U.S.-Mexico Workshop on Optimization and Applications, January 8th, 2016

2 Problem: Constrained Separable Convex Optimization

$$\min \; \sum_{i=1}^m f_i(x_i) + h_i(x_i) \quad \text{s.t.} \quad \sum_{i=1}^m A_i x_i = b,$$

where $m \ge 2$ and $f_i : \mathbb{R}^{n_i} \to \mathbb{R}$ is convex and Lipschitz continuously differentiable. $h_i$ is a simple proper closed convex function on $\mathbb{R}^{n_i}$, but not necessarily smooth. To impose a constraint $x_i \in X_i$, let $h_i$ be the indicator function of the closed convex set $X_i \subseteq \mathbb{R}^{n_i}$.
Many applications in image processing, statistical learning, compressive sensing, etc.

3 Motivation (m = 2)

Total variation image reconstruction:
$$\min_u \; H(u) + \phi(Bu),$$
which is equivalent to
$$\min_{u,w} \; H(u) + \phi(w) \quad \text{s.t.} \quad Bu = w,$$
where
$H(u) = \tfrac{1}{2}\|Au - f\|^2$ with $A$ large, dense and ill-conditioned, and
$\phi(Bu) = \alpha\|u\|_{TV} = \alpha \sum_{i=1}^N \|(\nabla u)_i\|$ with $\alpha > 0$.

Augmented Lagrangian:
$$L_\rho(u,w,b) = H(u) + \phi(w) + \langle b, Bu - w\rangle + \tfrac{\rho}{2}\|Bu - w\|^2.$$

4 Motivation (m = 2)

Augmented Lagrangian Method (Method of Multipliers):
$$(u^{k+1}, w^{k+1}) = \arg\min_{u,w} L_\rho(u,w,b^k), \qquad b^{k+1} = b^k + \rho(Bu^{k+1} - w^{k+1}).$$

Alternating direction method of multipliers (ADMM):
$$u^{k+1} = \arg\min_u \; \tfrac{1}{2}\|Au - f\|^2 + \tfrac{\rho}{2}\|Bu - w^k + \rho^{-1}b^k\|^2,$$
$$w^{k+1} = \arg\min_w \; \phi(w) + \tfrac{\rho}{2}\|Bu^{k+1} - w + \rho^{-1}b^k\|^2,$$
$$b^{k+1} = b^k + \rho(Bu^{k+1} - w^{k+1}).$$
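For illustration only, here is a minimal Python sketch of this two-block ADMM loop; `solve_u_subproblem` and `prox_phi` are hypothetical user-supplied solvers for the two subproblems, not routines from the talk.

```python
import numpy as np

def admm_two_block(solve_u_subproblem, prox_phi, B, w0, b0, rho, max_iter=100):
    """Two-block ADMM sketch for min H(u) + phi(w) s.t. Bu = w.

    solve_u_subproblem(w, b, rho): returns argmin_u H(u) + rho/2 * ||B u - w + b/rho||^2
    prox_phi(v, t): returns argmin_w phi(w) + 1/(2t) * ||w - v||^2
    """
    w, b = w0.copy(), b0.copy()
    for _ in range(max_iter):
        u = solve_u_subproblem(w, b, rho)          # u-update (least-squares-type subproblem)
        w = prox_phi(B @ u + b / rho, 1.0 / rho)   # w-update via the proximal mapping of phi
        b = b + rho * (B @ u - w)                  # multiplier (dual) update
    return u, w, b
```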

5 Motivation (m = 2)

The u-subproblem:
$$u^{k+1} = \arg\min_u \; \tfrac{1}{2}\|Au - f\|^2 + \tfrac{\rho}{2}\|Bu - w^k + \rho^{-1}b^k\|^2.$$

Bregman operator splitting (BOS) scheme: choose $\delta_k > \|A^T A\|$ and set
$$u^{k+1} = \arg\min_u \; \delta_k\|u - u^k + \delta_k^{-1}A^T(Au^k - f)\|^2 + \rho\|Bu - w^k + \rho^{-1}b^k\|^2.$$

Bregman operator splitting with variable stepsize (BOSVS): apply a nonmonotone line search with initial stepsize
$$\hat\delta_k = \max\left\{ \frac{\|A(u^k - u^{k-1})\|^2}{\|u^k - u^{k-1}\|^2}, \; \Delta_k \right\},$$
where $\Delta_k > 0$ is a lower bound that is adaptively increased.
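As a small illustration, a sketch of the BOSVS initial stepsize rule above; the function name and the dense NumPy representation of A are assumptions, not from the slides.

```python
import numpy as np

def bosvs_initial_stepsize(A, u_k, u_prev, delta_lower):
    """Barzilai-Borwein-type initial stepsize for BOSVS:
    hat_delta_k = max(||A(u_k - u_prev)||^2 / ||u_k - u_prev||^2, delta_lower)."""
    d = u_k - u_prev
    denom = np.dot(d, d)
    if denom == 0.0:                 # no movement between iterates: fall back to the lower bound
        return delta_lower
    Ad = A @ d
    return max(np.dot(Ad, Ad) / denom, delta_lower)
```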

6 BOSVS Algorithm

Parameters: $\tau, \eta > 1$; $\beta, \rho, \delta_{\min} > 0$; $C \ge 0$; $\xi, \sigma \in (0,1)$; $\hat\delta_1 = 1$.
Initialization: $w^1$, $u^1$ and $b^1$. Set $Q_1 = 0$ and $\Delta_1 = \delta_{\min}$.
For k = 1, 2, ...
Step 1. Set $\delta_k = \eta^j \hat\delta_k$, where $j \ge 0$ is the smallest integer such that $Q_{k+1} \ge -C/k^2$, where $Q_{k+1} := \xi_k Q_k + \Omega_k$ with
$$\Omega_k := \sigma\left(\delta_k\|u^{k+1} - u^k\|^2 + \rho\|Bu^{k+1} - w^k\|^2\right) - \|A(u^{k+1} - u^k)\|^2,$$
$$u^{k+1} = \arg\min_u \left\{ \delta_k\|u - u^k + \delta_k^{-1}A^T(Au^k - f)\|^2 + \rho\|Bu - w^k + b^k/\rho\|^2 \right\},$$
and $0 \le \xi_k \le \min\{\xi, (1 - k^{-1})^2\}$.
Step 2. If $\delta_k > \max\{\delta_{k-1}, \Delta_k\}$ when $k > 1$, set $\Delta_{k+1} := \tau\Delta_k$.
Step 3. $w^{k+1} = \arg\min_w \left\{ \phi(w) + \tfrac{\rho}{2}\|Bu^{k+1} - w + b^k/\rho\|^2 + \tfrac{\beta}{2}\|w - w^k\|^2 \right\}$.
Step 4. $b^{k+1} = b^k + \rho(Bu^{k+1} - w^{k+1})$.
Step 5. If a stopping criterion is satisfied, stop.
End For
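A rough sketch of the Step 1 backtracking, under the reconstruction above of the acceptance test $Q_{k+1} \ge -C/k^2$; `solve_linearized_u` is a hypothetical helper returning the minimizer of the linearized u-subproblem for a given $\delta$.

```python
import numpy as np

def bosvs_step1(solve_linearized_u, A, B, u_k, w_k, b_k, Q_k, xi_k, hat_delta_k,
                rho, sigma, C, eta, k):
    """Backtracking on delta = eta^j * hat_delta_k (j = 0, 1, 2, ...) until the
    nonmonotone test Q_{k+1} >= -C/k^2 holds (test as reconstructed above)."""
    delta = hat_delta_k
    while True:
        u_next = solve_linearized_u(u_k, w_k, b_k, delta)   # linearized u-subproblem
        du = u_next - u_k
        r = B @ u_next - w_k
        omega = (sigma * (delta * np.dot(du, du) + rho * np.dot(r, r))
                 - np.dot(A @ du, A @ du))
        Q_next = xi_k * Q_k + omega
        if Q_next >= -C / k**2:          # stepsize accepted
            return u_next, delta, Q_next
        delta *= eta                     # otherwise increase delta and retry
```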

7 Convergence of BOSVS

Theorem. If a minimizer exists, the sequence $x^k = (u^k, w^k, b^k)$ generated by BOSVS converges to a solution $x^* = (u^*, w^*, b^*)$ satisfying the first-order optimality conditions
$$\nabla H(u^*) + B^T b^* = 0, \qquad b^* \in \partial\phi(w^*), \qquad w^* = Bu^*.$$
Moreover, the ergodic mean $\bar u_K := \frac{1}{K}\sum_{k=1}^K u^k$ satisfies
$$\phi(B\bar u_K) + H(\bar u_K) - \min_u\{\phi(Bu) + H(u)\} = O\!\left(\tfrac{1}{K}\right).$$

8 Numerical Experiments (BOSVS)

Partially Parallel Imaging (PPI) with L = 8 coils. Optimize
$$\min_u \; \frac{1}{2}\sum_{l=1}^L \|F_p(S_l u) - f_l\|^2 + \alpha\|u\|_{TV},$$
where $F_p$ is the undersampled Fourier transform, $S_l$ is the sensitivity map of the l-th channel, and $\alpha > 0$ is the TV regularization weight.
Three data sets:
* data 1 and data 2 use a random Poisson mask (25% of the Fourier coefficients);
* data 3 uses a radial mask (34% of the Fourier coefficients).
Plot the error in the objective function versus the iteration number ($\rho = 10^{-4}$).

9 Numerical Experiments (BOSVS)

Figure: Plots of the objective error ($\log_{10}$ scale) versus $\log_{10} K$ for data 1, data 2, and data 3, comparing BOS, ERG, and BOSVS. A least squares fit of $y = cK^{-p}$ gives $p$ as 1.6, 1.2, and 0.9 for data 1, data 2, and data 3, respectively.

10 Separable Convex Optimization (m ≥ 3)

Optimize
$$\min \; \sum_{i=1}^m f_i(x_i) + h_i(x_i) \quad \text{s.t.} \quad \sum_{i=1}^m A_i x_i = b,$$
where $f_i : \mathbb{R}^{n_i} \to \mathbb{R}$ is convex and Lipschitz continuously differentiable, and $h_i$ is a simple proper closed convex function on $\mathbb{R}^{n_i}$, but not necessarily smooth.
We assume $A_i^T A_i$ is nonsingular and the solution set of the problem is nonempty.

11 Separable Convex Optimization (m ≥ 3)

Direct extension of ADMM:
For i = 1, ..., m
$$x_i^{k+1} = \arg\min_{x_i} L_\rho(x_1^{k+1},\ldots,x_{i-1}^{k+1}, x_i, x_{i+1}^k,\ldots,x_m^k; \lambda^k);$$
End
$$\lambda^{k+1} = \lambda^k + \rho(Ax^{k+1} - b),$$
where $L_\rho(x_1,\ldots,x_m;\lambda) = f(x) + \lambda^T(Ax - b) + \tfrac{\rho}{2}\|Ax - b\|^2$, $f = \sum_{i=1}^m (f_i + h_i)$, $Ax := \sum_{i=1}^m A_i x_i$, and $\rho > 0$ is a penalty parameter.

Practical efficiency has been observed in many recent applications. However, the iteration does not necessarily converge! (Chen, He, Ye, Yuan 2013)
Moreover, each subproblem is solved exactly, which can be very expensive, impractical, or even impossible when no closed-form solution exists.
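A minimal sketch of this direct multi-block extension, for illustration only; `solve_block` is a hypothetical helper for the exact block subproblem.

```python
import numpy as np

def multiblock_admm_sweep(solve_block, A_list, b, x_list, lam, rho, max_iter=100):
    """Direct (unmodified) extension of ADMM to m blocks: a Gauss-Seidel sweep over
    the blocks followed by a multiplier update. Note: this variant is not guaranteed
    to converge for m >= 3 (Chen, He, Ye, Yuan 2013).

    solve_block(i, x_list, lam, rho): returns argmin over x_i of L_rho with the
    other blocks fixed at their current values in x_list (hypothetical helper)."""
    m = len(A_list)
    for _ in range(max_iter):
        for i in range(m):                       # update each block sequentially
            x_list[i] = solve_block(i, x_list, lam, rho)
        residual = sum(A_list[i] @ x_list[i] for i in range(m)) - b
        lam = lam + rho * residual               # multiplier update
    return x_list, lam
```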

12 Separable Convex Optimization (m ≥ 3): Literature

Han and Yuan (2012):
* Assume all $f_i$ are strongly convex and $\rho$ lies in a specific range.
* Each subproblem needs to be solved exactly.

He, Tao, Xu and Yuan (2013):
* Block Gaussian backward substitution is used to ensure convergence.
* Each subproblem needs to be solved exactly.

Hong and Luo (2013): $\lambda^{k+1} = \lambda^k + \alpha\rho(Ax^{k+1} - b)$
* The stepsize $\alpha$ needs to be sufficiently small.
* $f_i$ and $h_i$ need to satisfy certain local error bound conditions.
* A linear convergence rate is achieved.

Many more recent works: randomization, strong convexity assumptions on subsets of the blocks, ...

13 Separable Convex Optimization (m ≥ 3): Motivation

* Extend BOSVS to the multi-block case to solve the subproblems inexactly.
* Analogous to the standard Augmented Lagrangian Method (ALM), i.e. m = 1, solve the subproblems to an adaptive accuracy relative to the current KKT error. (Summable errors are not required, in contrast to, e.g., Eckstein-Bertsekas 1992: $\|x_i^k - x_i^*\| \le \eta^k$ with $\sum_k \eta^k < \infty$.)
* Apply accelerated optimal gradient methods to solve the subproblems.
* Global convergence is guaranteed.

14 Separable Convex Optimization (m ≥ 3): Notation

Let us define $H = \mathrm{diag}(A_2^T A_2, A_3^T A_3, \ldots, A_m^T A_m)$ and
$$M = \begin{pmatrix} A_2^T A_2 & & & \\ A_3^T A_2 & A_3^T A_3 & & \\ \vdots & \vdots & \ddots & \\ A_m^T A_2 & A_m^T A_3 & \cdots & A_m^T A_m \end{pmatrix}.$$
Note that $H$ is positive definite and $M$ is nonsingular.

Let us define
$$\Phi_i^k(u, \bar u, \delta) = f_i(\bar u) + \nabla f_i(\bar u)^T(u - \bar u) + \frac{\delta}{2}\|u - \bar u\|^2 + h_i(u) + \frac{\rho}{2}\|A_i u - b_i^k + \lambda^k/\rho\|^2,$$
where $b_i^k = b - \sum_{j<i} A_j \bar x_j^k - \sum_{j>i} A_j x_j^k$.
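An illustrative sketch that assembles H and M from dense NumPy blocks; the dense representation is an assumption made purely for illustration, since in applications these products are usually applied implicitly.

```python
import numpy as np
from scipy.linalg import block_diag

def build_H_and_M(A_list):
    """Build H = diag(A_2^T A_2, ..., A_m^T A_m) and the block lower-triangular M
    whose (i, j) block is A_i^T A_j for j <= i.  Here A_list is [A_2, ..., A_m]."""
    grams = [A.T @ A for A in A_list]
    H = block_diag(*grams)
    sizes = [A.shape[1] for A in A_list]
    offsets = np.concatenate(([0], np.cumsum(sizes)))
    n = offsets[-1]
    M = np.zeros((n, n))
    for i, Ai in enumerate(A_list):
        for j, Aj in enumerate(A_list[: i + 1]):      # only the lower block triangle is nonzero
            M[offsets[i]:offsets[i+1], offsets[j]:offsets[j+1]] = Ai.T @ Aj
    return H, M
```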

15 Linearized ADMM with Gaussian backward substitution

Parameters: $\rho, \theta_1, \theta_2, \theta_3 \in \mathbb{R}_{++}$; $0 < \delta_{\min} \ll \delta_{\max}$, $\alpha \in (0,1)$, $0 < \sigma < 1 < \eta$, $\{\epsilon^k\} \subset \mathbb{R}_+$ with $\sum_{k=1}^\infty \epsilon^k < \infty$.

Step 0: Initialize $\lambda^1$. For i = 1, ..., m, set $\delta_{\min,i} = \delta_{\min}$, initialize $x_i^1$ and set $\bar x_i^1 = \tilde x_i^1 = x_i^1$. Let k = 1.

Step 1: For i = 1, ..., m
Choose $\delta_i^k = \eta^j \delta_{i,0}^k$, where $j \ge 0$ is the smallest integer such that
$$f_i(x_i^k) + \langle \nabla f_i(x_i^k), \bar x_i^k - x_i^k\rangle + \frac{(1-\sigma)\delta_i^k}{2}\|\bar x_i^k - x_i^k\|^2 + \epsilon^k \ge f_i(\bar x_i^k),$$
where $\delta_{\min} \le \delta_{i,0}^k \le \delta_{\max}$ and $\bar x_i^k = \mathrm{Argmin}_{u\in\mathbb{R}^{n_i}} \Phi_i^k(u, x_i^k, \delta_i^k)$.
Let $x_i^{k+1} = \bar x_i^k$ and $r_i^k = (1/\delta_i^k)\|\bar x_i^k - x_i^k\|^2$.
If $\delta_i^k > \max\{\delta_i^{k-1}, \delta_{\min,i}\}$ when k > 1, set $\delta_{\min,i} := \eta\,\delta_{\min,i}$.
End

Step 2: Let $e^k = \theta_1\|\bar v^k - \tilde v^k\| + \theta_2\|\sum_{i=1}^m A_i \bar x_i^k - b\| + \theta_3\sum_{i=1}^m r_i^k$. If $e^k$ is sufficiently small, stop the algorithm.

Step 3: Let $x_1^{k+1} = \bar x_1^k$ and $\lambda^{k+1} = \lambda^k + \alpha\rho(\sum_{i=1}^m A_i \bar x_i^k - b)$. Compute $\tilde v^{k+1} = \tilde v^k + \alpha M^{-T}H(\bar v^k - \tilde v^k)$. (Backward substitution) Let k := k + 1 and go to Step 1.

Figure: L-ADMM-G Algorithm
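A small sketch of the backward-substitution correction in Step 3, assuming the update uses $M^{-T}H$ as reconstructed above and that the matrices are dense NumPy arrays.

```python
import numpy as np

def backward_substitution_update(M, H, v_bar, v_tilde, alpha):
    """Correction step: v_tilde_next = v_tilde + alpha * M^{-T} H (v_bar - v_tilde).
    M is block lower triangular, so the solve with M^T can be carried out blockwise
    by backward substitution; for brevity this sketch calls a dense solver."""
    rhs = H @ (v_bar - v_tilde)
    correction = np.linalg.solve(M.T, rhs)   # in practice: block backward substitution
    return v_tilde + alpha * correction
```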

16 Global Convergence of L-ADMM-G

Theorem. Let $\bar w^k = (\bar x_1^k, \bar x_2^k, \ldots, \bar x_m^k, \bar\lambda^k)$ and $\tilde w^k = (\tilde x_1^k, \tilde x_2^k, \ldots, \tilde x_m^k, \tilde\lambda^k)$ be the iterates generated by the L-ADMM-G algorithm. Then
$$\lim_{k\to\infty}\bar w^k = \lim_{k\to\infty}\tilde w^k = w^*,$$
where $w^* = (x_1^*, x_2^*, \ldots, x_m^*, \lambda^*)$ is an optimal primal-dual solution pair.

Proof sketch. Denote $E^k = \rho\|\tilde v_e^k\|_G^2 + \frac{1}{\rho}\|\lambda_e^k\|^2 + \alpha\sum_{i=1}^m \delta_i^k\|x_{i,e}^k\|^2$. We can show, for large k,
$$E^k \ge E^{k+1} + \tau_k,$$
where
$$\tau_k = c\sum_{i=1}^m \|\bar x_i^k - x_i^k\|^2 + c\left(\rho\|\tilde v^k - \bar v^k\|_H^2 + \frac{1}{\rho}\|\lambda^k - \bar\lambda^k\|^2\right) - \hat c\,\epsilon^k.$$

17 Inexact ADMM with Gaussian backward substitution

Parameters: $\rho, \theta_1, \theta_2, \theta_3 \in \mathbb{R}_{++}$, $\alpha \in (0,1)$, $\{\epsilon^k\} \subset \mathbb{R}_+$ with $\sum_{k=1}^\infty \epsilon^k < \infty$.

Step 0: Initialize $\lambda^1$. For i = 1, ..., m, initialize $x_i^1$ and set $\bar x_i^1 = \tilde x_i^1 = x_i^1$. Let $e^0 = \infty$, $\Gamma_i^0 = 0$ and k = 1.

Step 1: For i = 1, ..., m
Use the Accelerated Optimal Gradient (AOG) method to solve $\min_{u\in\mathbb{R}^{n_i}} L_i^k(u)$ inexactly.
End

Step 2: Let $e^k = \theta_1\|\bar v^k - \tilde v^k\| + \theta_2\|\sum_{i=1}^m A_i \bar x_i^k - b\| + \theta_3\sum_{i=1}^m r_i^k$. If $e^k$ is sufficiently small, stop the algorithm.

Step 3: Let $x_1^{k+1} = \bar x_1^k$ and $\lambda^{k+1} = \lambda^k + \alpha\rho(\sum_{i=1}^m A_i \bar x_i^k - b)$. Compute $\tilde v^{k+1} = \tilde v^k + \alpha M^{-T}H(\bar v^k - \tilde v^k)$. (Backward substitution) Let k := k + 1 and go to Step 1.

Figure: I-ADMM-G Algorithm
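A minimal sketch of the Step 2 stopping residual $e^k$; the variable names are assumptions made for illustration.

```python
import numpy as np

def stopping_residual(v_bar, v_tilde, A_list, x_bar_list, b, r_list,
                      theta1, theta2, theta3):
    """Step 2 residual: e_k = theta1*||v_bar - v_tilde||
    + theta2*||sum_i A_i x_bar_i - b|| + theta3*sum_i r_i."""
    feas = sum(A @ x for A, x in zip(A_list, x_bar_list)) - b
    return (theta1 * np.linalg.norm(v_bar - v_tilde)
            + theta2 * np.linalg.norm(feas)
            + theta3 * sum(r_list))
```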

18 Accelerated Optimal Gradient Method: Notation

For any $h_i$ and $z_i \in \mathbb{R}^{n_i}$, we define the proximal mapping
$$\mathrm{prox}_{h_i}(z_i) = \mathop{\mathrm{Arg\,min}}_{u\in\mathbb{R}^{n_i}} \; h_i(u) + \frac{1}{2}\|z_i - u\|^2,$$
and define $\bar L_i^k(u) := L_i^k(u) - h_i(u)$, where
$$L_i^k(u) := L_\rho(\bar x_1^k, \ldots, \bar x_{i-1}^k, u, x_{i+1}^k, \ldots, x_m^k; \lambda^k).$$
We have $x_{i,k}^* = \mathop{\mathrm{Arg\,min}}_{u\in\mathbb{R}^{n_i}} L_i^k(u)$ if and only if
$$\mathrm{prox}_{h_i}\big(x_{i,k}^* - \nabla \bar L_i^k(x_{i,k}^*)\big) - x_{i,k}^* = 0.$$
Define $\psi : \mathbb{R} \to \mathbb{R}_+$ to be any function having the property that $\lim_{t\to 0}\psi(t) = 0$ and $\psi(t) = 0$ if and only if $t = 0$.
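For illustration, a sketch of the prox-based stationarity residual; the soft-thresholding prox corresponds to the hypothetical choice $h(u) = t\|u\|_1$ and is not tied to the talk's applications.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal mapping of t*||.||_1 (used here only as an example of prox_h)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def stationarity_residual(y, grad_smooth, prox_h):
    """Residual ||prox_h(y - grad_smooth(y)) - y||: it is zero exactly at a minimizer
    of smooth(u) + h(u), which is the check used to stop the inner solver."""
    return np.linalg.norm(prox_h(y - grad_smooth(y)) - y)

# Example usage with h(u) = 0.1*||u||_1 and a quadratic smooth part:
# res = stationarity_residual(y, lambda u: Q @ u - c, lambda z: soft_threshold(z, 0.1))
```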

19 Accelerated Optimal Gradient Method

Figure: Solve $\min_{u\in\mathbb{R}^{n_i}} L_i^k(u)$ inexactly

Parameters: $0 \le \sigma < 1$, $\epsilon^k > 0$, $\{\omega^l\} \subset \mathbb{R}_+$ with $\sum_{l=1}^\infty \omega^l < \infty$.
Let $r_i^0 = 0$, $y_i^0 = u_i^0 = x_i^k$, $\alpha^1 = 1$, l = 1 and flag = true.
while (flag = true)
Choose $\delta^l > 0$, and $\alpha^l \in (0,1)$ for $l \ge 2$, such that
$$f_i(\bar y_i^l) + \langle \nabla f_i(\bar y_i^l), y_i^l - \bar y_i^l\rangle + \frac{(1-\sigma)\delta^l}{2\alpha^l}\|y_i^l - \bar y_i^l\|^2 + \pi^l \ge f_i(y_i^l), \qquad \text{(II)}$$
where $\bar y_i^l = (1-\alpha^l)y_i^{l-1} + \alpha^l u_i^{l-1}$, $y_i^l = (1-\alpha^l)y_i^{l-1} + \alpha^l u_i^l$,
$$u_i^l = \arg\min_{u\in\mathbb{R}^{n_i}} \; \langle \nabla f_i(\bar y_i^l), u\rangle + \frac{\delta^l}{2}\|u - u_i^{l-1}\|^2 + \frac{\rho}{2}\|A_i u - b_i^k + \lambda^k/\rho\|^2 + h_i(u),$$
and $\pi^l = \epsilon^k\omega^l/(2\gamma^l)$ with $\gamma^1 = 1/\delta^1$ and $\gamma^l = \gamma^{l-1}/(1-\alpha^l)$ for $l \ge 2$.
Let $r_i^l = r_i^{l-1} + \delta^l\alpha^l\gamma^l\|u_i^l - u_i^{l-1}\|^2$.
If $\gamma^l \ge \Gamma_i^{k-1}$ and $\|\mathrm{prox}_{h_i}(y_i^l - \nabla\bar L_i^k(y_i^l)) - y_i^l\| \le \psi(e^{k-1})$, set $x_i^{k+1} = u_i^l$, $\Gamma_i^k = \gamma^l$, $\bar x_i^k = y_i^l$, $r_i^k = r_i^l/\Gamma_i^k$ and flag = false;
else l := l + 1.
end

20 Accelerated Optimal Gradient Method

If the Lipschitz constant $\zeta_i$ of $\nabla f_i$ is known, we could simply set
$$\delta^l = \frac{2\zeta_i}{(1-\sigma)\,l} \quad \text{and} \quad \alpha^l = \frac{2}{l+1} \in (0,1].$$

Comments:
* We have $\frac{(1-\sigma)\delta^l}{\alpha^l} = \frac{(l+1)\zeta_i}{l} > \zeta_i$. Hence, condition (II) in the AOG algorithm will be satisfied.
* In addition, we have
$$\gamma^l = \frac{1}{\delta^1\prod_{j=2}^l(1-\alpha^j)} = \frac{l(l+1)}{2\delta^1} \quad \text{and} \quad \xi^l := \delta^l\alpha^l\gamma^l = 1.$$
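A small sketch that computes this schedule and checks numerically that $\xi^l = \delta^l\alpha^l\gamma^l$ stays equal to 1; the function name is an assumption.

```python
def aog_schedule(zeta, sigma, num_steps):
    """Known-Lipschitz AOG schedule: delta_l = 2*zeta/((1-sigma)*l), alpha_l = 2/(l+1),
    gamma_1 = 1/delta_1, gamma_l = gamma_{l-1}/(1 - alpha_l).  Returns the sequences
    together with xi_l = delta_l*alpha_l*gamma_l, which should equal 1 for every l."""
    deltas, alphas, gammas, xis = [], [], [], []
    for l in range(1, num_steps + 1):
        delta = 2.0 * zeta / ((1.0 - sigma) * l)
        alpha = 2.0 / (l + 1)
        gamma = 1.0 / delta if l == 1 else gammas[-1] / (1.0 - alpha)
        deltas.append(delta); alphas.append(alpha); gammas.append(gamma)
        xis.append(delta * alpha * gamma)     # telescoping identity: equals 1
    return deltas, alphas, gammas, xis

# Example: for any zeta > 0 and sigma in [0, 1), all entries of xis equal 1 (up to rounding),
# and gammas[l-1] matches l*(l+1)/(2*delta_1).
```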

21 Accelerated Optimal Gradient Method

When the Lipschitz constant of $\nabla f_i$ is unknown, let $\tau^l = 1/(\delta_0^l\eta^j)$, where $\delta_0^l \in [\delta_{\min}, \delta_{\max}]$ and $j \ge 0$ is the smallest integer such that, with
$$\delta^l = \frac{2}{\tau^l + \sqrt{(\tau^l)^2 + 4\tau^l\Lambda^{l-1}}} \quad \text{and} \quad \alpha^l = \frac{1}{1 + \delta^l\Lambda^{l-1}} \in (0,1],$$
condition (II) in the AOG algorithm is satisfied, where $\Lambda^{l-1} = \sum_{i=1}^{l-1}\frac{1}{\delta^i}$.

Comments:
* We have $\frac{(1-\sigma)\delta^l}{\alpha^l} = \frac{1-\sigma}{\tau^l} = (1-\sigma)\delta_0^l\eta^j$. Hence, condition (II) in the AOG algorithm will be satisfied for $j$ sufficiently large.
* In addition, we have
$$\gamma^l = \Lambda^l \ge \frac{l^2}{4(\eta\zeta_i/(1-\sigma) + \delta_{\max})} \quad \text{and} \quad \xi^l := \delta^l\alpha^l\gamma^l = 1.$$

22 Accelerated Optimal Gradient Method

Lemma. Suppose the accelerated optimal gradient method with the previous line search rule is applied to solve the subproblem. Then we have
$$\|y_i^l - x_{i,k}^*\|^2 \le \frac{1}{\nu_i\,\rho}\cdot\frac{\|u_i^0 - x_{i,k}^*\|^2 + \epsilon^k\sum_{j=1}^l \omega^j}{\gamma^l},$$
where $x_{i,k}^* = \mathrm{Argmin}\, L_i^k(u)$ and $\nu_i$ is the minimum eigenvalue of $A_i^T A_i$.

Comments: We obtain the optimal convergence rate $\|y_i^l - x_{i,k}^*\|^2 \le O(1/l^2)$. The subproblem stopping condition, $\gamma^l \ge \Gamma_i^{k-1}$ and $\|\mathrm{prox}_{h_i}(y_i^l - \nabla\bar L_i^k(y_i^l)) - y_i^l\| \le \psi(e^{k-1})$, will be satisfied in a finite number of steps.

23 Global Convergence of I-ADMM-G

Theorem. Let $\bar w^k = (\bar x_1^k, \bar x_2^k, \ldots, \bar x_m^k, \bar\lambda^k)$ and $\tilde w^k = (\tilde x_1^k, \tilde x_2^k, \ldots, \tilde x_m^k, \tilde\lambda^k)$ be the iterates generated by the I-ADMM-G algorithm. Suppose
$$\lim_{l\to\infty}\gamma^l = \infty \quad \text{and} \quad \xi^l := \delta^l\alpha^l\gamma^l \text{ is constant}.$$
Then
$$\lim_{k\to\infty}\bar w^k = \lim_{k\to\infty}\tilde w^k = w^*,$$
where $w^* = (x_1^*, x_2^*, \ldots, x_m^*, \lambda^*)$ is an optimal primal-dual solution pair.

Proof sketch. Denote $E^k = \rho\|\tilde v_e^k\|_G^2 + \frac{1}{\rho}\|\lambda_e^k\|^2 + \alpha\sum_{i=1}^m \frac{\|x_{i,e}^k\|^2}{\Gamma_i^k}$. We can show, for large k, $E^k \ge E^{k+1} + \tau_k$, where
$$\tau_k = c\sum_{i=1}^m \frac{1}{\Gamma_i^k}\sum_{l=1}^{I_i^k}\|u_{i,k}^l - u_{i,k}^{l-1}\|^2 + c\left(\rho\|\tilde v^k - \bar v^k\|_H^2 + \frac{1}{\rho}\|\lambda^k - \bar\lambda^k\|^2\right) - \alpha\sum_{i=1}^m \frac{\epsilon^k}{\Gamma_i^k}\sum_{l=1}^{I_i^k}\omega^l.$$

24 Numerical Experiments

An image deblurring problem for the Cameraman image. Optimize
$$\min_u \; \frac{1}{2}\|Au - b\|^2 + \alpha\|u\|_{TV} + \beta\|\Phi^T u\|_1,$$
which is equivalent to
$$\min_{u,w,z} \; \frac{1}{2}\|Au - b\|^2 + \alpha\|w\|_{1,2} + \beta\|z\|_1 \quad \text{s.t.} \quad Bu = w, \; \Phi^T u = z,$$
where $\Phi$ is the wavelet transform and $\alpha, \beta > 0$ are regularization parameters.
9x9 uniform blur and Gaussian noise with SNR = 40.
Relative accuracy for the subproblem: $\psi(e) = \min\{0.1e, e^{1.1}\}$.
Plot the relative objective function error versus the CPU time (fixed $\rho$).
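A one-line sketch of this accuracy function and how it would feed the inner-solver tolerance.

```python
def psi(e):
    """Relative subproblem accuracy used in the experiments: psi(e) = min(0.1*e, e**1.1).
    It vanishes only at e = 0, so the inner-solver tolerance tightens as the outer
    KKT residual e^{k-1} goes to zero."""
    return min(0.1 * e, e ** 1.1)

# Example: the AOG stopping test on slide 19 would use tol = psi(e_prev)
# together with the condition gamma_l >= Gamma_prev.
```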

25 Numerical Experiments

Figure: Plots of the relative objective error versus CPU time (s) for the Cameraman image (9x9 uniform blur and Gaussian noise with SNR = 40), comparing L-ADMM-G, I-ADMM-G, and E-ADMM-G.

26 Numerical Experiments

Three Partially Parallel Imaging (PPI) problems. Optimize
$$\min_u \; \frac{1}{2}\|Au - b\|^2 + \alpha\|u\|_{TV} + \beta\|\Phi^T u\|_1,$$
which is equivalent to
$$\min_{u,w,z} \; \frac{1}{2}\|Au - b\|^2 + \alpha\|w\|_{1,2} + \beta\|z\|_1 \quad \text{s.t.} \quad Bu = w, \; \Phi^T u = z,$$
where $\Phi$ is the wavelet transform, $\alpha = 10^{-5}$, and $\beta > 0$.
Relative accuracy for the subproblem: $\psi(e) = \min\{0.1e, e^{1.1}\}$.
Plot the relative objective function error versus the CPU time ($\rho = 10^{-3}$).

27 Numerical Experiments

Figure: Plots of the relative objective error versus CPU time (s) for data 1, 2 and 3, comparing L-ADMM-G, I-ADMM-G, and E-ADMM-G.
