Relaxed linearized algorithms for faster X-ray CT image reconstruction


Relaxed linearized algorithms for faster X-ray CT image reconstruction. Hung Nien and Jeffrey A. Fessler, University of Michigan, Ann Arbor. The 13th Fully 3D Meeting, June 2, 2015. 1/20

Statistical image reconstruction for X-ray CT

Statistical image reconstruction (SIR):

    \hat{x} \in \arg\min_x \big\{ \Psi_{\text{PWLS}}(x) \triangleq \tfrac{1}{2} \|y - Ax\|_W^2 + R(x) + \iota_\Omega(x) \big\}

2/20
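To make the PWLS cost concrete, here is a minimal Python/NumPy sketch (not from the talk): the dense A, the per-ray weight vector W, and the callable reg are hypothetical stand-ins for the CT system model, the statistical weights, and the regularizer R, with the box constraint Omega handled by the indicator.

import numpy as np

def pwls_cost(x, A, y, W, reg, lo=0.0, hi=np.inf):
    """PWLS cost: 0.5 * ||y - A x||_W^2 + R(x) + iota_Omega(x) for a box constraint Omega."""
    if np.any(x < lo) or np.any(x > hi):
        return np.inf                      # iota_Omega(x): +infinity outside the feasible box
    r = y - A @ x                          # residual of the linear forward model
    return 0.5 * r @ (W * r) + reg(x)      # W stored as a 1-D vector of per-ray weights

# toy usage with random data
A = np.random.rand(8, 4)
y = A @ np.ones(4)
W = np.ones(8)
print(pwls_cost(np.ones(4), A, y, W, reg=lambda x: 0.0))   # prints 0.0 for a perfect fit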

First-order algorithms with ordered subsets

Figure: Chest: Existing first-order algorithms with ordered subsets (OS) — convergence curves (RMS difference [HU] vs. number of iterations, 10 subsets) for OS-SQS, OS-FGM2, OS-LALM, and OS-OGM2, with the FBP image and OS reconstructions after 10 and 30 iterations shown as insets.

3/20

Equality-constrained composite minimization

Consider an equality-constrained minimization problem:

    (\hat{x}, \hat{u}) \in \arg\min_{x,u} \big\{ \tfrac{1}{2} \|u - y\|_2^2 + h(x) \big\} \quad \text{s.t.} \quad u = Ax,   (1)

where h is a closed and proper convex function.

In particular, the quadratic loss function penalizes the difference between the linear model Ax and the noisy measurement y, and h is a regularization term that introduces prior knowledge about x into the reconstruction.

For example,

    (\hat{x}, \hat{u}) \in \arg\min_{x,u} \big\{ \tfrac{1}{2} \|u - W^{1/2} y\|_2^2 + (R + \iota_\Omega)(x) \big\} \quad \text{s.t.} \quad u = W^{1/2} A x

represents an X-ray CT image reconstruction problem.

4/20

Standard AL method

The standard AL method finds a saddle-point of the augmented Lagrangian (AL) of (1):

    L_A(x, u, d; \rho) \triangleq \tfrac{1}{2} \|u - y\|_2^2 + h(x) + \tfrac{\rho}{2} \|Ax - u - d\|_2^2

in an alternating-direction manner:

    x^{(k+1)} \in \arg\min_x \big\{ h(x) + \tfrac{\rho}{2} \|Ax - u^{(k)} - d^{(k)}\|_2^2 \big\}
    u^{(k+1)} \in \arg\min_u \big\{ \tfrac{1}{2} \|u - y\|_2^2 + \tfrac{\rho}{2} \|Ax^{(k+1)} - u - d^{(k)}\|_2^2 \big\}
    d^{(k+1)} = d^{(k)} - Ax^{(k+1)} + u^{(k+1)},

where d is the scaled Lagrange multiplier of u, and ρ > 0 is the AL penalty parameter. [Gabay and Mercier, Comput. Math. Appl., 1976]

5/20
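For concreteness, a minimal ADMM-style Python/NumPy sketch of these updates follows (not the authors' code). It assumes a dense A, the unweighted quadratic loss of (1), and a user-supplied x_solver(z, rho) that returns arg min_x h(x) + (rho/2)||A x - z||_2^2; for CT that inner solve couples A'A with h and is the expensive step that motivates linearization.

import numpy as np

def al_method(A, y, x_solver, rho=1.0, n_iter=50):
    """Standard AL (ADMM) iterations for the split u = A x in problem (1)."""
    m, n = A.shape
    x = np.zeros(n)
    u = A @ x
    d = np.zeros(m)                           # scaled Lagrange multiplier of u
    for _ in range(n_iter):
        x = x_solver(u + d, rho)              # x-update: couples A and h (expensive for CT)
        Ax = A @ x
        u = (y + rho * (Ax - d)) / (1 + rho)  # u-update: closed form for the quadratic loss
        d = d - Ax + u                        # multiplier update
    return x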

Linearized AL method

The linearized AL method adds an additional G-proximity term to the x-subproblem of the standard AL method:

    x^{(k+1)} \in \arg\min_x \big\{ h(x) + \tfrac{\rho}{2} \|Ax - u^{(k)} - d^{(k)}\|_2^2 + \tfrac{\rho}{2} \|x - x^{(k)}\|_G^2 \big\}
    u^{(k+1)} \in \arg\min_u \big\{ \tfrac{1}{2} \|u - y\|_2^2 + \tfrac{\rho}{2} \|Ax^{(k+1)} - u - d^{(k)}\|_2^2 \big\}
    d^{(k+1)} = d^{(k)} - Ax^{(k+1)} + u^{(k+1)},

where G \triangleq D_L - A'A, and D_L is a diagonal majorizing matrix of A'A. Here, we "linearize" the algorithm in the sense that the Hessian matrix of the augmented quadratic AL term becomes diagonal. [Zhang et al., J. Sci. Comput., 2011] [HN and Fessler, IEEE Trans. Med. Imag., 2015]

6/20

Linearized AL algorithm for X-ray CT

Algorithm: OS-LALM for CT reconstruction.
Input: K ≥ 1, M ≥ 1, and an initial (FBP) image x.
set ρ = 1, ζ = g = M ∇L_M(x)
for k = 1, 2, ..., K do
    for m = 1, 2, ..., M do
        s = ρ ζ + (1 − ρ) g
        x⁺ = [ x − (ρ D_L + D_R)⁻¹ (s + ∇R(x)) ]_Ω
        ζ⁺ = M ∇L_m(x⁺)
        g⁺ = ρ/(ρ+1) ζ⁺ + 1/(ρ+1) g
        decrease ρ gradually
    end
end
Output: the final image x.

7/20
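The inner image update above is a single diagonally preconditioned, projected gradient step; a small Python/NumPy sketch is below, with grad_R a hypothetical callable for ∇R and D_L, D_R stored as 1-D arrays holding the diagonals of the majorizers.

import numpy as np

def image_update(x, s, grad_R, D_L, D_R, rho, lo=0.0, hi=np.inf):
    """One image update: x+ = [ x - (rho*D_L + D_R)^{-1} (s + grad R(x)) ]_Omega."""
    step = (s + grad_R(x)) / (rho * D_L + D_R)   # diagonal inverse is element-wise division
    return np.clip(x - step, lo, hi)             # [.]_Omega: projection onto the box constraint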

Relaxed AL method

The relaxed AL method accelerates the standard AL method with under- or over-relaxation:

    x^{(k+1)} \in \arg\min_x \big\{ h(x) + \tfrac{\rho}{2} \|Ax - u^{(k)} - d^{(k)}\|_2^2 \big\}
    u^{(k+1)} \in \arg\min_u \big\{ \tfrac{1}{2} \|u - y\|_2^2 + \tfrac{\rho}{2} \|r_{u,\alpha}^{(k+1)} - u - d^{(k)}\|_2^2 \big\}
    d^{(k+1)} = d^{(k)} - r_{u,\alpha}^{(k+1)} + u^{(k+1)},

where r_{u,\alpha}^{(k+1)} \triangleq \alpha A x^{(k+1)} + (1 - \alpha) u^{(k)} is the relaxation variable of u, and 0 < α < 2 is the relaxation parameter. When α = 1, it reverts to the standard AL method. [Eckstein and Bertsekas, Math. Prog., 1992]

8/20
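In code, only the u- and d-updates change relative to the standard AL sketch above; a hedged snippet under the same assumptions (unweighted quadratic loss, NumPy arrays):

def relaxed_u_d_update(Ax_new, u, d, y, rho, alpha=1.5):
    """Relaxed u- and d-updates; alpha = 1 recovers the standard AL method."""
    r = alpha * Ax_new + (1.0 - alpha) * u       # relaxation variable r_{u,alpha}
    u_new = (y + rho * (r - d)) / (1.0 + rho)    # closed-form u-update with r in place of A x
    d_new = d - r + u_new                        # multiplier update with the relaxed variable
    return u_new, d_new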

Implicit linearization via redundant variable splitting

Consider an equivalent equality-constrained minimization problem with a redundant equality constraint:

    (\hat{x}, \hat{u}, \hat{v}) \in \arg\min_{x,u,v} \big\{ \tfrac{1}{2} \|u - y\|_2^2 + h(x) \big\} \quad \text{s.t.} \quad u = Ax, \; v = G^{1/2} x.

Suppose we use the same AL penalty parameter ρ for both equality constraints in AL methods. The quadratic AL term

    \tfrac{\rho}{2} \|Ax - u^{(k)} - d^{(k)}\|_2^2 + \tfrac{\rho}{2} \|G^{1/2} x - v^{(k)} - e^{(k)}\|_2^2

has a diagonal Hessian matrix H_\rho \triangleq \rho A'A + \rho G = \rho D_L, leading to a separable quadratic AL term. [HN and Fessler, IEEE Trans. Med. Imag., 2015]

9/20
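One standard choice for D_L is the SQS-type diagonal majorizer diag{|A|'|A| 1} ⪰ A'A (and, after the weighting substitution used later, diag{|A|' W |A| 1} ⪰ A'WA), so that G = D_L − A'A is positive semidefinite. A small sketch, assuming a dense A and a per-ray weight vector W for illustration:

import numpy as np

def diag_majorizer(A, W=None):
    """SQS-type diagonal majorizer of A' W A, returned as a 1-D array (the diagonal of D_L)."""
    absA = np.abs(A)
    w = np.ones(A.shape[0]) if W is None else W
    return absA.T @ (w * (absA @ np.ones(A.shape[1])))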

Proposed relaxed linearized AL method

The proposed relaxed linearized AL method solves the equivalent minimization problem with the redundant equality constraint using the relaxed AL method:

    x^{(k+1)} \in \arg\min_x \big\{ h(x) + \tfrac{\rho}{2} \|Ax - u^{(k)} - d^{(k)}\|_2^2 + \tfrac{\rho}{2} \|G^{1/2} x - v^{(k)} - e^{(k)}\|_2^2 \big\}
    u^{(k+1)} \in \arg\min_u \big\{ \tfrac{1}{2} \|u - y\|_2^2 + \tfrac{\rho}{2} \|r_{u,\alpha}^{(k+1)} - u - d^{(k)}\|_2^2 \big\}
    d^{(k+1)} = d^{(k)} - r_{u,\alpha}^{(k+1)} + u^{(k+1)}
    v^{(k+1)} = r_{v,\alpha}^{(k+1)} - e^{(k)}
    e^{(k+1)} = e^{(k)} - r_{v,\alpha}^{(k+1)} + v^{(k+1)}.

When α = 1, the proposed method reverts to the linearized AL method.

10/20

Proposed relaxed linearized AL method (cont'd)

The proposed relaxed linearized AL method further simplifies as follows:

    \gamma^{(k+1)} = (\rho - 1) g^{(k)} + \rho h^{(k)}
    x^{(k+1)} \in \arg\min_x \big\{ h(x) + \tfrac{1}{2} \|x - (\rho D_L)^{-1} \gamma^{(k+1)}\|_{\rho D_L}^2 \big\}
    \zeta^{(k+1)} = \nabla L(x^{(k+1)}) \triangleq A'(Ax^{(k+1)} - y)
    g^{(k+1)} = \tfrac{\rho}{\rho+1} \big( \alpha \zeta^{(k+1)} + (1 - \alpha) g^{(k)} \big) + \tfrac{1}{\rho+1} g^{(k)}
    h^{(k+1)} = \alpha \big( D_L x^{(k+1)} - \zeta^{(k+1)} \big) + (1 - \alpha) h^{(k)},

where L(x) \triangleq \tfrac{1}{2} \|Ax - y\|_2^2 denotes the quadratic data-fidelity term, and g^{(k)} \triangleq A'(u^{(k)} - y) denotes the split gradient of L.

11/20

Relaxed OS-LALM for faster CT reconstruction

To solve the X-ray CT image reconstruction problem

    \hat{x} \in \arg\min_x \big\{ \tfrac{1}{2} \|y - Ax\|_W^2 + R(x) + \iota_\Omega(x) \big\}

using the proposed relaxed linearized AL method, we apply the substitutions

    A \leftarrow W^{1/2} A, \quad y \leftarrow W^{1/2} y,

and set h \triangleq R + \iota_\Omega.

The image update is now a diagonally weighted denoising problem. We solve it using a projected gradient descent step from x^{(k)}.

For speed-up, ordered subsets (OS) or incremental gradients are used.

12/20
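After this substitution, the split gradient used by the algorithm becomes ∇L(x) = A'W(Ax − y), and its ordered-subsets surrogate uses only the rays of one subset, scaled up to approximate the full gradient. A hedged NumPy sketch follows; the scaling by len(y)/len(rows) assumes roughly equal-sized subsets and is an illustration, not the authors' exact implementation.

import numpy as np

def grad_L(x, A, y, W):
    """Weighted data-term gradient grad L(x) = A' W (A x - y); W is a per-ray weight vector."""
    return A.T @ (W * (A @ x - y))

def grad_L_subset(x, A, y, W, rows):
    """Ordered-subsets surrogate ~ M * grad L_m(x), using only the rays indexed by `rows`."""
    Am, ym, Wm = A[rows], y[rows], W[rows]
    return (len(y) / len(rows)) * (Am.T @ (Wm * (Am @ x - ym)))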

Speed-up with a decreasing continuation sequence

We also use a continuation technique to speed up convergence; that is, we decrease ρ gradually with iteration.

Based on a second-order recursive system analysis, we use

    \rho_i(\alpha) = \begin{cases} 1, & \text{if } i = 0 \\ \dfrac{\pi}{\alpha(i+1)} \sqrt{1 - \Big(\dfrac{\pi}{2\alpha(i+1)}\Big)^2}, & \text{otherwise.} \end{cases}   (2)

Therefore, we use a faster-decreasing continuation sequence in a more over-relaxed linearized AL method. [HN and Fessler, IEEE Trans. Med. Imag., submitted]

13/20
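A direct transcription of (2) into Python; the clamp of the square-root argument at zero is a small numerical safeguard added here, not part of the slide.

import numpy as np

def rho_continuation(i, alpha):
    """Continuation sequence (2): rho_0 = 1, then a decreasing sequence that falls faster for larger alpha."""
    if i == 0:
        return 1.0
    t = np.pi / (alpha * (i + 1))
    return t * np.sqrt(max(0.0, 1.0 - (t / 2.0) ** 2))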

Proposed relaxed linearized algorithm

Algorithm: Relaxed OS-LALM for CT reconstruction.
Input: K ≥ 1, M ≥ 1, 0 < α < 2, and an initial (FBP) image x.
set ρ = 1, ζ = g = M ∇L_M(x), h = D_L x − ζ
for k = 1, 2, ..., K do
    for m = 1, 2, ..., M do
        s = ρ (D_L x − h) + (1 − ρ) g
        x⁺ = [ x − (ρ D_L + D_R)⁻¹ (s + ∇R(x)) ]_Ω
        ζ⁺ = M ∇L_m(x⁺)
        g⁺ = ρ/(ρ+1) (α ζ⁺ + (1 − α) g) + 1/(ρ+1) g
        h⁺ = α (D_L x⁺ − ζ⁺) + (1 − α) h
        decrease ρ using (2)
    end
end
Output: the final image x.

14/20
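Putting the pieces together, a minimal Python sketch of the relaxed OS-LALM loop is below. It is an illustration rather than the authors' implementation: it assumes grad_L_subset(x, rows) and image_update(x, s, rho) are closures over the system model, weights, regularizer, and D_R in the spirit of the earlier snippets, and it updates ρ after every subset update.

def relaxed_os_lalm(x0, subsets, grad_L_subset, image_update, D_L, rho_fn, alpha=1.5, K=20):
    """Relaxed OS-LALM: per-subset s, x, zeta, g, h updates with the continuation rho_fn for rho."""
    x = x0.copy()
    M = len(subsets)
    zeta = g = grad_L_subset(x, subsets[-1])     # initialize zeta = g = M * grad L_M(x)
    h = D_L * x - zeta
    it = 0
    rho = rho_fn(it, alpha)
    for _ in range(K):                           # outer iterations over the whole data set
        for m in range(M):                       # one pass over the ordered subsets
            s = rho * (D_L * x - h) + (1.0 - rho) * g
            x = image_update(x, s, rho)          # x+ = [x - (rho*D_L + D_R)^{-1}(s + grad R(x))]_Omega
            zeta = grad_L_subset(x, subsets[m])  # M * grad L_m(x+)
            g = (rho / (rho + 1.0)) * (alpha * zeta + (1.0 - alpha) * g) + g / (rho + 1.0)
            h = alpha * (D_L * x - zeta) + (1.0 - alpha) * h
            it += 1
            rho = rho_fn(it, alpha)              # decrease rho using the continuation sequence (2)
    return x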

Chest region helical scan

We reconstruct an image from a helical (pitch 1.0) CT scan.

Figure: Chest: Cropped images of the initial FBP image x^(0) (left), the reference reconstruction x^⋆ (center), and the reconstructed image x^(20) using relaxed OS-LALM with 10 subsets after 20 iterations (right).

Chest region helical scan (cont'd)

Figure: Chest: Convergence rate curves (RMS difference [HU] vs. number of iterations) of different OS algorithms — OS-SQS, OS-FGM2, OS-LALM, OS-OGM2, and relaxed OS-LALM — with 10 (left) and 20 (right) subsets.

16/20

Chest region helical scan (cont'd)

Figure: Chest: Difference images of the initial FBP image (x^(0) − x^⋆) and the reconstructed images (x^(10) − x^⋆) using OS algorithms (OS-SQS, OS-FGM2, OS-LALM, OS-OGM2, relaxed OS-LALM) with 10 subsets after 10 iterations.

Chest region helical scan (cont'd)

Figure: Chest: Difference images of the initial FBP image (x^(0) − x^⋆) and the reconstructed images (x^(5) − x^⋆) using the same OS algorithms with 20 subsets after 5 iterations.

Chest region helical scan (cont'd)

Figure: Chest: Difference images of the initial FBP image (x^(0) − x^⋆) and the reconstructed images (x^(10) − x^⋆) using the same OS algorithms with 20 subsets after 10 iterations.

Chest region helical scan (cont'd)

Figure: Chest: Difference images of the initial FBP image (x^(0) − x^⋆) and the reconstructed images (x^(20) − x^⋆) using the same OS algorithms with 20 subsets after 20 iterations.

18/20

Conclusions and future work

In summary,
- We proposed a relaxed variant of linearized AL methods for faster X-ray CT image reconstruction.
- Experimental results showed that the proposed algorithm converges approximately α-fold faster than its unrelaxed counterpart.
- This speed-up means that fewer subsets are needed to reach a given RMS difference criterion in a given number of iterations.
- Empirically, the proposed algorithm is reasonably stable when we use moderate numbers of subsets.

For future work,
- We want to work on the convergence rate analysis of the proposed algorithm.

19/20

Acknowledgments

Supported in part by NIH grant U01 EB. Equipment support from Intel Corporation. Research support from GE Healthcare.

20/20

Shoulder region helical scan

We reconstruct an image from a helical (pitch 0.5) CT scan.

Figure: Shoulder: Cropped images of the initial FBP image x^(0) (left), the reference reconstruction x^⋆ (center), and the reconstructed image x^(20) using relaxed OS-LALM with 20 subsets after 20 iterations (right).

20/20

Shoulder region helical scan (cont'd)

Figure: Shoulder: Convergence rate curves (RMS difference [HU] vs. number of iterations) of different OS algorithms — OS-SQS, OS-FGM2, OS-LALM, OS-OGM2, and relaxed OS-LALM — with 20 (left) and 40 (right) subsets.

20/20

Shoulder region helical scan (cont'd)

Figure: Shoulder: Difference images of the initial FBP image (x^(0) − x^⋆) and the reconstructed images (x^(20) − x^⋆) using OS algorithms with 20 subsets after 20 iterations.

20/20

Abdomen region helical scan

We reconstruct an image from a helical (pitch 1.0) CT scan.

Figure: Abdomen: Cropped images of the initial FBP image x^(0) (left), the reference reconstruction x^⋆ (center), and the reconstructed image x^(20) using relaxed OS-LALM with 10 subsets after 20 iterations (right).

20/20

Abdomen region helical scan (cont'd)

Figure: Abdomen: Convergence rate curves (RMS difference [HU] vs. number of iterations) of different OS algorithms — OS-SQS, OS-FGM2, OS-LALM, OS-OGM2, and relaxed OS-LALM — with 10 (left) and 20 (right) subsets.

20/20

Abdomen region helical scan (cont'd)

Figure: Abdomen: Difference images of the initial FBP image (x^(0) − x^⋆) and the reconstructed images (x^(20) − x^⋆) using OS algorithms with 10 subsets after 20 iterations.

20/20

Simple vs. proposed relaxed OS-LALM

Figure: Chest: Convergence rate curves (RMS difference [HU] vs. number of iterations) of OS-SQS, OS-LALM, the simple relaxed OS-LALM, and the proposed relaxed OS-LALM with a fixed AL penalty parameter ρ = 0.05 (left) and the decreasing ρ (right).

20/20

Wide-cone axial scan

We reconstruct an image from an axial CT scan.

Figure: Wide-cone: Cropped images of the initial FBP image x^(0) (left), the reference reconstruction x^⋆ (center), and the reconstructed image x^(20) using relaxed OS-LALM with 24 subsets after 20 iterations (right).

20/20

Wide-cone axial scan (cont'd)

Figure: Wide-cone: Convergence rate curves (RMS difference [HU] vs. number of iterations) of OS-SQS, OS-OGM2, and relaxed OS-LALM (θ = 1 and θ = 0.1) with 12 (left) and 24 (right) subsets.

20/20
