Relaxed linearized algorithms for faster X-ray CT image reconstruction


Relaxed linearized algorithms for faster X-ray CT image reconstruction
Hung Nien and Jeffrey A. Fessler
University of Michigan, Ann Arbor
The 13th Fully 3D Meeting, June 2, 2015

Statistical image reconstruction for X-ray CT

Statistical image reconstruction (SIR):

$$\hat{x} \in \arg\min_x \Big\{ \Psi_{\mathrm{PWLS}}(x) \triangleq \tfrac{1}{2}\|y - Ax\|_W^2 + R(x) + \iota_\Omega(x) \Big\}$$
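To make the pieces of Ψ_PWLS concrete, here is a minimal NumPy sketch that evaluates the cost for a toy system, assuming W = diag(w) and, for illustration only, a simple quadratic first-difference roughness penalty in place of the edge-preserving R used in practice (the names psi_pwls, w, beta are ours, not from the slides). The constraint set Ω enters through ι_Ω, which the algorithms below handle by projection rather than by adding to the cost.

```python
import numpy as np

def psi_pwls(x, A, y, w, beta):
    """Evaluate 0.5*||y - A x||_W^2 + R(x) with W = diag(w) and an
    illustrative quadratic roughness penalty R(x) = 0.5*beta*sum(diff(x)^2)."""
    r = y - A @ x
    data_term = 0.5 * r @ (w * r)                    # 0.5 * r' W r
    reg_term = 0.5 * beta * np.sum(np.diff(x) ** 2)  # stand-in for R(x)
    return data_term + reg_term
```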

First-order algorithms with ordered-subsets

[Plot: RMS difference [HU] versus number of iterations (10 subsets) for OS-SQS, OS-FGM2, OS-LALM, and OS-OGM2; 5 HU and 1 HU reference lines. Insets: the FBP image and the OS-SQS and OS-OGM2 reconstructions at 10 and 30 iterations, displayed in an 800–1200 HU window.]

Figure: Chest: Existing first-order algorithms with ordered-subsets (OS).

Equality-constrained composite minimization

Consider an equality-constrained minimization problem:

$$(\hat{x}, \hat{u}) \in \arg\min_{x,u} \Big\{ \tfrac{1}{2}\|u - y\|_2^2 + h(x) \Big\} \quad \text{s.t.} \quad u = Ax, \qquad (1)$$

where h is a closed and proper convex function. In particular, the quadratic loss function penalizes the difference between the linear model Ax and the noisy measurement y, and h is a regularization term that introduces prior knowledge of x into the reconstruction.

For example,

$$(\hat{x}, \hat{u}) \in \arg\min_{x,u} \Big\{ \tfrac{1}{2}\|u - W^{1/2}y\|_2^2 + (R + \iota_\Omega)(x) \Big\} \quad \text{s.t.} \quad u = W^{1/2}Ax$$

represents an X-ray CT image reconstruction problem.

Standard AL method

The standard AL method finds a saddle-point of the augmented Lagrangian (AL) of (1):

$$\mathcal{L}_A(x, u, d; \rho) \triangleq \tfrac{1}{2}\|u - y\|_2^2 + h(x) + \tfrac{\rho}{2}\|Ax - u - d\|_2^2$$

in an alternating direction manner:

$$x^{(k+1)} \in \arg\min_x \Big\{ h(x) + \tfrac{\rho}{2}\|Ax - u^{(k)} - d^{(k)}\|_2^2 \Big\}$$
$$u^{(k+1)} \in \arg\min_u \Big\{ \tfrac{1}{2}\|u - y\|_2^2 + \tfrac{\rho}{2}\|Ax^{(k+1)} - u - d^{(k)}\|_2^2 \Big\}$$
$$d^{(k+1)} = d^{(k)} - Ax^{(k+1)} + u^{(k+1)},$$

where d is the scaled Lagrange multiplier of u, and ρ > 0 is the AL penalty parameter. [Gabay and Mercier, Comput. Math. Appl., 1976]
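A minimal NumPy sketch of these three updates, assuming a dense toy A and the illustrative choice h(x) = (β/2)‖x‖₂² so the x-subproblem has a closed-form solve (a real CT regularizer would not); the u-update is the exact closed form of its quadratic subproblem.

```python
import numpy as np

def standard_al(A, y, beta=0.1, rho=1.0, iters=100):
    """Sketch of the standard AL (ADMM) iteration for
    min_{x,u} 0.5*||u - y||^2 + h(x)  s.t.  u = A x,
    with the illustrative choice h(x) = 0.5*beta*||x||^2."""
    m, n = A.shape
    x, u, d = np.zeros(n), np.zeros(m), np.zeros(m)
    H = rho * (A.T @ A) + beta * np.eye(n)            # Hessian of x-subproblem
    for _ in range(iters):
        x = np.linalg.solve(H, rho * A.T @ (u + d))   # x-update
        u = (y + rho * (A @ x - d)) / (1.0 + rho)     # u-update (closed form)
        d = d - A @ x + u                             # multiplier update
    return x
```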

Linearized AL method

The linearized AL method adds an additional G-proximity term to the x-subproblem of the standard AL method:

$$x^{(k+1)} \in \arg\min_x \Big\{ h(x) + \tfrac{\rho}{2}\|Ax - u^{(k)} - d^{(k)}\|_2^2 + \tfrac{\rho}{2}\|x - x^{(k)}\|_G^2 \Big\}$$
$$u^{(k+1)} \in \arg\min_u \Big\{ \tfrac{1}{2}\|u - y\|_2^2 + \tfrac{\rho}{2}\|Ax^{(k+1)} - u - d^{(k)}\|_2^2 \Big\}$$
$$d^{(k+1)} = d^{(k)} - Ax^{(k+1)} + u^{(k+1)},$$

where G ≜ D_L − A′A, and D_L is a diagonal majorizing matrix of A′A. Here, we "linearize" the algorithm in the sense that the Hessian matrix of the augmented quadratic AL term becomes diagonal. [Zhang et al., J. Sci. Comput., 2011] [HN and Fessler, IEEE Trans. Med. Imag., 2015]
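One standard way to build such a diagonal majorizer is the SQS-style choice D_L = diag{|A|′|A|1}, which satisfies D_L ⪰ A′A; a sketch follows (the function name is ours). With this G, the quadratic terms in the x-subproblem combine into (ρ/2)‖x − ·‖²_{D_L}, which is separable across voxels.

```python
import numpy as np

def diag_majorizer(A):
    """Return the diagonal of D_L = diag{|A|' |A| 1}, a diagonal
    majorizer of A'A, so that G = D_L - A'A is positive semidefinite."""
    absA = np.abs(A)
    return absA.T @ (absA @ np.ones(A.shape[0]))
```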

Linearized AL algorithm for X-ray CT

Algorithm: OS-LALM for CT reconstruction.
Input: K ≥ 1, M ≥ 1, and an initial (FBP) image x.
set ρ = 1, ζ = g = M∇L_M(x)
for k = 1, 2, ..., K do
    for m = 1, 2, ..., M do
        s = ρζ + (1 − ρ)g
        x⁺ = [x − (ρD_L + D_R)⁻¹(s + ∇R(x))]_Ω
        ζ⁺ = M∇L_m(x⁺)
        g⁺ = ρ/(ρ+1) ζ⁺ + 1/(ρ+1) g
        decrease ρ gradually
    end
end
Output: the final image x.

Relaxed AL method

The relaxed AL method accelerates the standard AL method with under- or over-relaxation:

$$x^{(k+1)} \in \arg\min_x \Big\{ h(x) + \tfrac{\rho}{2}\|Ax - u^{(k)} - d^{(k)}\|_2^2 \Big\}$$
$$u^{(k+1)} \in \arg\min_u \Big\{ \tfrac{1}{2}\|u - y\|_2^2 + \tfrac{\rho}{2}\|r_{u,\alpha}^{(k+1)} - u - d^{(k)}\|_2^2 \Big\}$$
$$d^{(k+1)} = d^{(k)} - r_{u,\alpha}^{(k+1)} + u^{(k+1)},$$

where $r_{u,\alpha}^{(k+1)} \triangleq \alpha Ax^{(k+1)} + (1-\alpha)u^{(k)}$ is the relaxation variable of u, and 0 < α < 2 is the relaxation parameter. When α = 1, it reverts to the standard AL method. [Eckstein and Bertsekas, Math. Prog., 1992]
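The change relative to the standard AL sketch above is small; a hedged NumPy version with the same illustrative h(x) = (β/2)‖x‖₂²:

```python
import numpy as np

def relaxed_al(A, y, beta=0.1, rho=1.0, alpha=1.5, iters=100):
    """Sketch of the relaxed AL method: identical to the standard AL
    sketch except the u- and d-updates see the relaxation variable
    r = alpha*A@x + (1-alpha)*u instead of A@x (alpha = 1 recovers it)."""
    m, n = A.shape
    x, u, d = np.zeros(n), np.zeros(m), np.zeros(m)
    H = rho * (A.T @ A) + beta * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(H, rho * A.T @ (u + d))
        r = alpha * (A @ x) + (1.0 - alpha) * u   # relaxation variable
        u = (y + rho * (r - d)) / (1.0 + rho)
        d = d - r + u
    return x
```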

Implicit linearization via redundant variable-splitting

Consider an equivalent equality-constrained minimization problem with a redundant equality constraint:

$$(\hat{x}, \hat{u}, \hat{v}) \in \arg\min_{x,u,v} \Big\{ \tfrac{1}{2}\|u - y\|_2^2 + h(x) \Big\} \quad \text{s.t.} \quad u = Ax, \ v = G^{1/2}x.$$

Suppose we use the same AL penalty parameter ρ for both equality constraints in the AL method. The quadratic AL term

$$\tfrac{\rho}{2}\|Ax - u^{(k)} - d^{(k)}\|_2^2 + \tfrac{\rho}{2}\|G^{1/2}x - v^{(k)} - e^{(k)}\|_2^2$$

has a diagonal Hessian matrix

$$H_\rho \triangleq \rho A'A + \rho G = \rho D_L,$$

leading to a separable quadratic AL term. [HN and Fessler, IEEE Trans. Med. Imag., 2015]
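A quick numeric sanity check of this identity on a toy matrix (sizes and seed are arbitrary, and the SQS-style D_L is our choice of majorizer):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 12))
d_L = np.abs(A).T @ (np.abs(A) @ np.ones(30))  # SQS-style diagonal majorizer
G = np.diag(d_L) - A.T @ A                     # G = D_L - A'A
H = A.T @ A + G                                # Hessian of the AL term / rho
assert np.allclose(H, np.diag(d_L))            # diagonal, as claimed
assert np.linalg.eigvalsh(G).min() > -1e-9     # G is PSD, so G^{1/2} exists
```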

Proposed relaxed linearized AL method

The proposed relaxed linearized AL method solves the equivalent minimization problem with the redundant equality constraint using the relaxed AL method:

$$x^{(k+1)} \in \arg\min_x \Big\{ h(x) + \tfrac{\rho}{2}\|Ax - u^{(k)} - d^{(k)}\|_2^2 + \tfrac{\rho}{2}\|G^{1/2}x - v^{(k)} - e^{(k)}\|_2^2 \Big\}$$
$$u^{(k+1)} \in \arg\min_u \Big\{ \tfrac{1}{2}\|u - y\|_2^2 + \tfrac{\rho}{2}\|r_{u,\alpha}^{(k+1)} - u - d^{(k)}\|_2^2 \Big\}$$
$$d^{(k+1)} = d^{(k)} - r_{u,\alpha}^{(k+1)} + u^{(k+1)}$$
$$v^{(k+1)} = r_{v,\alpha}^{(k+1)} - e^{(k)}$$
$$e^{(k+1)} = e^{(k)} - r_{v,\alpha}^{(k+1)} + v^{(k+1)}.$$

When α = 1, the proposed method reverts to the linearized AL method.

Proposed relaxed linearized AL method (cont'd)

The proposed relaxed linearized AL method further simplifies as follows:

$$\gamma^{(k+1)} = (\rho - 1)\,g^{(k)} + \rho\, h^{(k)}$$
$$x^{(k+1)} \in \arg\min_x \Big\{ h(x) + \tfrac{1}{2}\big\|x - (\rho D_L)^{-1}\gamma^{(k+1)}\big\|_{\rho D_L}^2 \Big\}$$
$$\zeta^{(k+1)} = \nabla L\big(x^{(k+1)}\big) \triangleq A'\big(Ax^{(k+1)} - y\big)$$
$$g^{(k+1)} = \tfrac{\rho}{\rho+1}\big(\alpha\zeta^{(k+1)} + (1-\alpha)g^{(k)}\big) + \tfrac{1}{\rho+1}\,g^{(k)}$$
$$h^{(k+1)} = \alpha\big(D_L x^{(k+1)} - \zeta^{(k+1)}\big) + (1-\alpha)h^{(k)},$$

where $L(x) \triangleq \tfrac{1}{2}\|Ax - y\|_2^2$ denotes the quadratic data-fidelity term, and $g^{(k)} \triangleq A'(u^{(k)} - y)$ denotes the split gradient of L.
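A minimal sketch of this simplified loop, again with the illustrative h(x) = (β/2)‖x‖₂² so the prox in the ρD_L metric is a separable closed form (solving (β + ρd_i)x_i = γ_i per coordinate); in the CT setting h = R + ι_Ω and this step is replaced by the projected denoising step of the next slides.

```python
import numpy as np

def relaxed_lalm(A, y, beta=0.1, rho=1.0, alpha=1.5, iters=100):
    """Sketch of the simplified relaxed linearized AL updates, with the
    illustrative choice h(x) = 0.5*beta*||x||^2."""
    m, n = A.shape
    d_L = np.abs(A).T @ (np.abs(A) @ np.ones(m))  # diagonal of D_L >= A'A
    x = np.zeros(n)
    zeta = A.T @ (A @ x - y)                      # grad L(x)
    g = zeta.copy()                               # split gradient of L
    h = d_L * x - zeta
    for _ in range(iters):
        gamma = (rho - 1.0) * g + rho * h
        x = gamma / (beta + rho * d_L)            # prox of h in rho*D_L metric
        zeta = A.T @ (A @ x - y)
        g = rho / (rho + 1.0) * (alpha * zeta + (1.0 - alpha) * g) \
            + g / (rho + 1.0)
        h = alpha * (d_L * x - zeta) + (1.0 - alpha) * h
    return x
```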

Relaxed OS-LALM for faster CT reconstruction

To solve the X-ray CT image reconstruction problem

$$\hat{x} \in \arg\min_x \Big\{ \tfrac{1}{2}\|y - Ax\|_W^2 + R(x) + \iota_\Omega(x) \Big\}$$

using the proposed relaxed linearized AL method, we apply the substitutions

$$A \leftarrow W^{1/2}A, \qquad y \leftarrow W^{1/2}y$$

and set h ≜ R + ι_Ω.

The image update is now a diagonally weighted denoising problem; we solve it using a projected gradient descent step from x^(k). For speed-up, ordered subsets (OS) or incremental gradients are used.
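The substitution itself is a one-liner for a diagonal W; a hedged fragment (A, y, and w are toy placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 12))    # toy system matrix
y = rng.standard_normal(30)          # toy sinogram
w = rng.uniform(0.5, 2.0, size=30)   # statistical weights, W = diag(w)

Aw = np.sqrt(w)[:, None] * A         # A <- W^{1/2} A
yw = np.sqrt(w) * y                  # y <- W^{1/2} y
# Now 0.5*||yw - Aw @ x||_2^2 equals 0.5*||y - A @ x||_W^2 for any x.
```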

Speed-up with decreasing continuation sequence

We also use a continuation technique to speed up convergence; that is, we decrease ρ gradually with iteration. Based on a second-order recursive system analysis, we use

$$\rho_i(\alpha) = \begin{cases} 1, & \text{if } i = 0 \\[4pt] \dfrac{\pi}{\alpha(i+1)}\sqrt{1 - \Big(\dfrac{\pi}{2\alpha(i+1)}\Big)^2}, & \text{otherwise.} \end{cases} \qquad (2)$$

Therefore, we use a faster-decreasing continuation sequence in a more over-relaxed linearized AL method. [HN and Fessler, IEEE Trans. Med. Imag., submitted]
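A direct transcription of (2); the guard against a negative radicand is our addition for very small α(i+1), and the function name is ours. As the slide notes, a larger α yields a faster-decreasing sequence.

```python
import numpy as np

def rho_continuation(i, alpha):
    """Decreasing AL penalty rho_i(alpha) from Eq. (2)."""
    if i == 0:
        return 1.0
    t = np.pi / (alpha * (i + 1))
    return t * np.sqrt(max(1.0 - (t / 2.0) ** 2, 0.0))
```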

Proposed relaxed linearized algorithm

Algorithm: Relaxed OS-LALM for CT reconstruction.
Input: K ≥ 1, M ≥ 1, 0 < α < 2, and an initial (FBP) image x.
set ρ = 1, ζ = g = M∇L_M(x), h = D_L x − ζ
for k = 1, 2, ..., K do
    for m = 1, 2, ..., M do
        s = ρ(D_L x − h) + (1 − ρ)g
        x⁺ = [x − (ρD_L + D_R)⁻¹(s + ∇R(x))]_Ω
        ζ = M∇L_m(x⁺)
        g⁺ = ρ/(ρ+1)(αζ + (1 − α)g) + 1/(ρ+1) g
        h⁺ = α(D_L x⁺ − ζ) + (1 − α)h
        decrease ρ using (2)
    end
end
Output: the final image x.
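The subset gradients M∇L_m used above approximate the full gradient ∇L from a subset of projection views; a hedged sketch of that approximation for a dense toy A (view-interleaved row subsets, 0-indexed in Python; the function name is ours):

```python
import numpy as np

def os_gradient(A, y, x, m, M):
    """Sketch of M * grad L_m(x): use every M-th row (view) starting at
    view m, scaled by M, as a cheap surrogate for A'(A x - y)."""
    rows = np.arange(m, A.shape[0], M)   # the m-th of M interleaved subsets
    Am = A[rows]
    return M * (Am.T @ (Am @ x - y[rows]))
```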

Chest region helical scan

We reconstruct a 600 × 600 × 222 image from an 888 × 64 × 3611 helical (pitch 1.0) CT scan.

[Images: FBP | Converged | Proposed, displayed in an 800–1200 HU window.]

Figure: Chest: Cropped images of the initial FBP image x^(0) (left), the reference reconstruction x⋆ (center), and the reconstructed image x^(20) using relaxed OS-LALM with 10 subsets after 20 iterations (right).

Chest region helical scan (cont'd)

[Plots: RMS difference [HU] versus number of iterations for OS-SQS, OS-FGM2, OS-LALM, OS-OGM2, and Relaxed OS-LALM; 5 HU and 1 HU reference lines.]

Figure: Chest: Convergence rate curves of different OS algorithms with 10 (left) and 20 (right) subsets.

Chest region helical scan (cont'd)

[Difference images: FBP, OS-SQS, OS-FGM2, OS-LALM, OS-OGM2, Relaxed OS-LALM, displayed in a ±30 HU window.]

Figure: Chest: Difference images of the initial FBP image x^(0) − x⋆ and the reconstructed image x^(10) − x⋆ using OS algorithms with 10 subsets after 10 iterations.

Chest region helical scan (cont'd)

[Difference images: FBP, OS-SQS, OS-FGM2, OS-LALM, OS-OGM2, Relaxed OS-LALM, displayed in a ±30 HU window.]

Figure: Chest: Difference images of the initial FBP image x^(0) − x⋆ and the reconstructed image x^(5) − x⋆ using OS algorithms with 20 subsets after 5 iterations.

Chest region helical scan (cont'd)

[Difference images: FBP, OS-SQS, OS-FGM2, OS-LALM, OS-OGM2, Relaxed OS-LALM, displayed in a ±30 HU window.]

Figure: Chest: Difference images of the initial FBP image x^(0) − x⋆ and the reconstructed image x^(10) − x⋆ using OS algorithms with 20 subsets after 10 iterations.

Chest region helical scan (cont'd)

[Difference images: FBP, OS-SQS, OS-FGM2, OS-LALM, OS-OGM2, Relaxed OS-LALM, displayed in a ±30 HU window.]

Figure: Chest: Difference images of the initial FBP image x^(0) − x⋆ and the reconstructed image x^(20) − x⋆ using OS algorithms with 20 subsets after 20 iterations.

Conclusions and future work

In summary,
- We proposed a relaxed variant of linearized AL methods for faster X-ray CT image reconstruction.
- Experimental results showed that the proposed algorithm converges α-fold faster than its unrelaxed counterpart.
- The speed-up means that one needs fewer subsets to reach a given RMS difference criterion in a given number of iterations.
- Empirically, the proposed algorithm is reasonably stable when we use moderate numbers of subsets.

For future work,
- We want to work on the convergence rate analysis of the proposed algorithm.

Acknowledgments
- Supported in part by NIH grant U01 EB-018753
- Equipment support from Intel Corporation
- Research support from GE Healthcare

Shoulder region helical scan

We reconstruct a 512 × 512 × 109 image from an 888 × 32 × 7146 helical (pitch 0.5) CT scan.

[Images: FBP | Converged | Proposed, displayed in an 800–1200 HU window.]

Figure: Shoulder: Cropped images of the initial FBP image x^(0) (left), the reference reconstruction x⋆ (center), and the reconstructed image x^(20) using relaxed OS-LALM with 20 subsets after 20 iterations (right).

Shoulder region helical scan (cont'd)

[Plots: RMS difference [HU] versus number of iterations for OS-SQS, OS-FGM2, OS-LALM, OS-OGM2, and Relaxed OS-LALM; 5 HU and 1 HU reference lines.]

Figure: Shoulder: Convergence rate curves of different OS algorithms with 20 (left) and 40 (right) subsets.

Shoulder region helical scan (cont'd)

[Difference images: FBP, OS-SQS, OS-FGM2, OS-LALM, OS-OGM2, Relaxed OS-LALM, displayed in a ±30 HU window.]

Figure: Shoulder: Difference images of the initial FBP image x^(0) − x⋆ and the reconstructed image x^(20) − x⋆ using OS algorithms with 20 subsets after 20 iterations.

Abdomen region helical scan

We reconstruct a 600 × 600 × 239 image from an 888 × 64 × 3516 helical (pitch 1.0) CT scan.

[Images: FBP | Converged | Proposed, displayed in an 800–1200 HU window.]

Figure: Abdomen: Cropped images of the initial FBP image x^(0) (left), the reference reconstruction x⋆ (center), and the reconstructed image x^(20) using relaxed OS-LALM with 10 subsets after 20 iterations (right).

Abdomen region helical scan (cont'd)

[Plots: RMS difference [HU] versus number of iterations for OS-SQS, OS-FGM2, OS-LALM, OS-OGM2, and Relaxed OS-LALM; 5 HU and 1 HU reference lines.]

Figure: Abdomen: Convergence rate curves of different OS algorithms with 10 (left) and 20 (right) subsets.

Abdomen region helical scan (cont'd)

[Difference images: FBP, OS-SQS, OS-FGM2, OS-LALM, OS-OGM2, Relaxed OS-LALM, displayed in a ±30 HU window.]

Figure: Abdomen: Difference images of the initial FBP image x^(0) − x⋆ and the reconstructed image x^(20) − x⋆ using OS algorithms with 10 subsets after 20 iterations.

Simple vs. proposed relaxed OS-LALM

[Plots: RMS difference [HU] versus number of iterations for OS-SQS, OS-LALM, Relaxed OS-LALM (simple), and Relaxed OS-LALM (proposed); 5 HU and 1 HU reference lines.]

Figure: Chest: Convergence rate curves of different relaxed algorithms with a fixed AL penalty parameter ρ = 0.05 (left) and the decreasing ρ (right).

Wide-cone axial scan

We reconstruct a 718 × 718 × 440 image from an 888 × 256 × 984 axial CT scan.

[Images: FBP | Converged | Proposed, displayed in an 800–1200 HU window.]

Figure: Wide-cone: Cropped images of the initial FBP image x^(0) (left), the reference reconstruction x⋆ (center), and the reconstructed image x^(20) using relaxed OS-LALM with 24 subsets after 20 iterations (right).

Wide-cone axial scan (cont'd)

[Plots: RMS difference [HU] versus number of iterations for OS-SQS, OS-OGM2, Relaxed OS-LALM (θ = 1), and Relaxed OS-LALM (θ = 0.1); 5 HU and 1 HU reference lines.]

Figure: Wide-cone: Convergence rate curves of different OS algorithms with 12 (left) and 24 (right) subsets.