A Study of Numerical Algorithms for Regularized Poisson ML Image Reconstruction


Yao Xie
Project Report for EE 391, Stanford University, Summer 2007
September 1, 2007

Abstract

In this report we solve a regularized Poisson maximum likelihood (ML) image reconstruction problem using various numerical methods. Rather than the commonly assumed Gaussian ML formulation, we consider a Poisson ML formulation, which is more accurate in some applications such as low-dose computed tomography (CT), and which also avoids the problematic log conversion of the Gaussian formulation. The speed of these methods is compared on a 64 by 64 pixel example. We found that in our case the diagonally preconditioned conjugate gradient (PCG) method has the best performance.

1 Introduction

Deterministic filtered backprojection (FBP) and regularized maximum-likelihood (ML) estimation are two major approaches to transmission tomography image reconstruction, such as CT. Although the latter typically yields higher image quality, its practical application is still limited by its high computational cost (0.02 seconds per image for FBP versus several hours per image for ML-based methods). Recently, preconditioned conjugate gradient (PCG) methods have attracted a lot of interest due to their efficiency in solving ML problems. However, most of the work using PCG is devoted to emission tomography image reconstruction. This raises a question: can we apply PCG to transmission tomography? A comparison of emission and transmission tomography shows that their mathematical formulations are similar if we use a Gaussian model to describe the Poisson noise, which is the strategy adopted by most existing methods. However, the Gaussian model has two drawbacks that limit its use in real systems. On one hand, the reduction of radiation dose in modern CT systems due to fast scanning calls for a more accurate description based on the Poisson model; on the other hand, the measured data under the Gaussian model can be negative due to scatter, which causes difficulty in the log conversion. This study compares the efficiency of various methods (including PCG methods developed for emission tomography) in solving the Poisson-model-based ML problem for transmission tomography. There seems to be little work comparing different approaches, except for a recent study [DBT+05]. We found that the diagonally preconditioned conjugate gradient (CG) method may be the fastest for our problem. Also, no log conversion is needed in the algorithms based on the Poisson model.

2 Regularized ML reconstruction problem

The goal of statistical tomographic reconstruction is to estimate the image from noisy measurements, with some regularization to prevent overfitting the noisy data. For monoenergetic transmission tomography under the Poisson noise model, the problem can be formulated as

minimize f(x) = y^T Lx + b^T exp(−Lx) + βΦ(Cx),   (1)

where

x = [x_1, …, x_n]^T ∈ R^{n×1} is the unknown pixel vector;
y = [y_1, …, y_m]^T ∈ R^{m×1} is the measured data vector contaminated by Poisson noise;
b = [b_1, …, b_m]^T ∈ R^{m×1} represents the intensities of the X-rays before passing through the object;
L = [l_1, …, l_m]^T ∈ R^{m×n} is the forward projection matrix that models the physical data acquisition process;
C ∈ R^{P×n} is the differencing operator on neighboring pixels;
Φ(·) is the potential function that penalizes the roughness of the image;
β is the weight on the prior term.

We use the generalized Geman prior function for Φ [De 01], which is convex and twice differentiable. With this choice, the regularized ML problem (1) is convex. Note that in the parallel-beam CT scan geometry L is circulant, and it is approximately circulant in the fan-beam geometry [FB99]. As a comparison, if we use a Gaussian noise model, the cost function for regularized ML is given by

f_Gaussian(x) = ‖ log((diag b)^{-1} y) + Lx ‖²_W + βΦ(Cx),   (2)

where W is the (fixed) estimate of the noise covariance matrix and Φ(·) is the prior function. The first term is a quadratic function of x rather than an exponential function as in (1).
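The report's implementation is in MATLAB; as a language-neutral illustration only, the following Python/NumPy sketch evaluates the Poisson ML cost (1) on a small synthetic problem. A quadratic potential stands in for the generalized Geman prior, and all sizes, names, and random data below are illustrative assumptions, not the report's actual setup.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)

# Small synthetic stand-in for the CT geometry (illustrative only).
m, n, P = 200, 64, 120
L = sp.random(m, n, density=0.10, random_state=0, format="csr")  # forward projector
C = sp.random(P, n, density=0.05, random_state=1, format="csr")  # differencing operator
b = np.full(m, 1e5)                                              # incident X-ray intensities
x_true = rng.uniform(0.0, 0.02, size=n)
y = rng.poisson(b * np.exp(-(L @ x_true))).astype(float)         # Poisson measurements
beta = 100.0

def phi(t):
    """Quadratic potential used as a stand-in for the generalized Geman prior."""
    return 0.5 * t ** 2

def f(x):
    """Regularized Poisson ML cost, equation (1): y'Lx + b'exp(-Lx) + beta*sum(phi(Cx))."""
    Lx = L @ x
    return y @ Lx + b @ np.exp(-Lx) + beta * np.sum(phi(C @ x))

print(f(np.zeros(n)), f(x_true))
```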

3 Optimality condition

To solve the minimization problem, we find the gradient and Hessian matrix of (1):

g = L^T ( y − diag{b} exp(−Lx) ) + β C^T Φ'(Cx),   (3)

and

H = L^T D L + β C^T M C ≻ 0,   (4)

where the diagonal matrices are D = diag{ b_1 exp(−l_1^T x), …, b_m exp(−l_m^T x) } and M = diag{ Φ''([Cx]_1), …, Φ''([Cx]_P) }, and Φ'(·) and Φ''(·) are the first and second derivatives of the potential function, applied elementwise. Since f(x) is differentiable and convex, the necessary and sufficient condition for a point x* to be optimal is g(x*) = 0, which has no analytical solution. Thus the problem is solved by the iterative algorithm

x_{k+1} = x_k + s_k Δx_k,   (5)

where Δx_k is the search direction and s_k is the step size at iteration k. We use backtracking line search [BV04] to find s_k. The stopping criterion for these iterative algorithms is ‖g‖_2 ≤ ε with small ε > 0. It is still not clear which stopping criterion is best for image reconstruction problems.

4 Algorithms

4.1 Gradient descent method

The simplest method is to choose Δx = −g, which has approximately linear convergence [BV04].

4.2 Exact Newton method

The exact Newton method directly solves the Newton system H Δx = −g for the search direction. We used the backslash operator in MATLAB to solve this equation, which forms the (dense) Cholesky factorization of H. Newton's method has quadratic convergence when x is near x*. However, for realistic image sizes, the computational cost of exactly inverting the Hessian matrix can be prohibitive.

4.3 Truncated Newton method

The truncated Newton method solves the Newton system using the conjugate gradient (CG) or preconditioned CG (PCG) method, terminating the inner iteration before convergence [Boy07]. It is less reliable than Newton's method, but with good preconditioners it can handle very large problems.
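The inner CG iterations of the truncated Newton method need only products Hv, so H never has to be formed or factored. Below is a minimal Python/NumPy sketch of the gradient (3) and a matrix-free Hessian-vector product from (4), again with a quadratic potential standing in for the generalized Geman prior; the function names and signatures are illustrative assumptions, not the report's code.

```python
import numpy as np

def dphi(t):   # first derivative of the quadratic stand-in potential
    return t

def d2phi(t):  # second derivative of the quadratic stand-in potential
    return np.ones_like(t)

def gradient(x, L, C, y, b, beta):
    """Gradient (3): L'(y - diag(b) exp(-Lx)) + beta * C' * phi'(Cx)."""
    ybar = b * np.exp(-(L @ x))                 # predicted mean counts
    return L.T @ (y - ybar) + beta * (C.T @ dphi(C @ x))

def hessian_vector(x, v, L, C, b, beta):
    """Hessian-vector product from (4): (L'DL + beta*C'MC) v, with D, M diagonal."""
    d = b * np.exp(-(L @ x))                    # diagonal of D
    m_diag = d2phi(C @ x)                       # diagonal of M
    return L.T @ (d * (L @ v)) + beta * (C.T @ (m_diag * (C @ v)))
```

Such a product can be wrapped in a scipy.sparse.linalg.LinearOperator and handed to scipy's cg, which mirrors passing a function handle to MATLAB's pcg. The preconditioners used for the inner solves are discussed next.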

Two preconditioners commonly used in emission tomography (PET imaging) are:

the diagonal preconditioner: M_D = diag{H};

the combined circulant and diagonal preconditioner [CPC+93][FB99]: M_CDC = Γ^{-1} F^T Ω^{-1} F Γ^{-1}, where F is the Fourier transform matrix and the diagonal matrix Ω approximates the spectrum of K(β/c) = L^T L + (β/c) C^T C ≈ F^T Ω F, chosen so that c K(β/c) ≈ H, with c = tr(L^T D L)/tr(L^T L) and Γ = diag{ Σ_i L²_ij D_ii / Σ_i L²_ij }.

The idea behind M_CDC is to diagonalize the approximately circulant part of the system with the Fourier transform matrix; M_CDC is reported to be the best preconditioner for emission tomography problems [FB99].

4.4 Approximate Newton methods

The idea of approximate Newton methods is to approximate H by a surrogate Ĥ whose inverse is easy to compute. We considered:

Diagonal H: Ĥ = diag{H}.

Precomputed fixed H: we note from (4) the special structure of H and use a fixed Hessian matrix to calculate the Newton step, Ĥ = L^T D̂ L + β C^T M̂ C, where D̂ and M̂ are calculated from the measured data y and the image x_FBP reconstructed by the FBP method. By doing so we fix the Hessian matrix and only need to invert it once. From our later numerical examples, we found that this proposed method approximates the exact Newton method well. One heuristic explanation is that the approximation of the Hessian by the fixed matrix is good when x is close to x*, which is exactly where the quadratic convergence of Newton's method occurs. So, given a good initial guess, after a sufficient number of iterations this method has a convergence rate similar to that of the exact Newton method.
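Both the diagonal preconditioner M_D and the "diagonal H" approximate Newton step need only diag{H}, which from (4) is diag{H}_j = Σ_i L²_ij D_ii + β Σ_p C²_pj M_pp and can be accumulated without forming H. A sketch under the same assumptions as before (quadratic potential stand-in; L and C stored as scipy sparse matrices, as in Section 5; names are illustrative):

```python
import numpy as np

def hessian_diagonal(x, L, C, b, beta):
    """diag(H) from (4): sum_i L_ij^2 * D_ii + beta * sum_p C_pj^2 * M_pp."""
    d = b * np.exp(-(L @ x))               # diagonal of D
    m_diag = np.ones(C.shape[0])           # diagonal of M for the quadratic stand-in potential
    L2, C2 = L.multiply(L), C.multiply(C)  # elementwise squares, stay sparse
    return np.asarray(L2.T @ d).ravel() + beta * np.asarray(C2.T @ m_diag).ravel()

def diagonal_newton_direction(x, g, L, C, b, beta):
    """Approximate Newton direction with H replaced by its diagonal."""
    return -g / hessian_diagonal(x, L, C, b, beta)
```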

4.5 WLSTR

The WLSTR (weighted least-squares transmission reconstruction) method [DB05a] is a commonly used statistical image reconstruction method. It uses a Gaussian noise model. Each iteration of WLSTR is fast, but it takes many steps to converge to a suboptimal solution. The search direction of WLSTR is given by [DB05a]

Δx = ( L^T diag{y} (y − Lx) ) / ( L^T diag{y} L 1 ),   (6)

where the division is elementwise and 1 is the all-ones vector.

4.6 Quadratic approximation of the likelihood function

We can use a quadratic approximation of the likelihood function to find an approximate search direction. A similar idea was presented in [EF99a]. However, we have an interesting observation: if we denote the Radon transform of the search direction by ρ = L(x − x_k) = LΔx, the quadratic approximation of the likelihood function can be written as

f(x) ≈ f(x_k) + ( y − diag{b} exp(−Lx_k) )^T ρ + (1/2) ρ^T D ρ,

which is minimized by ρ* = −D^{-1}( y − diag{b} exp(−Lx_k) ). The inversion of D is simple since it is diagonal. The search direction can then be found by solving the equation LΔx = ρ*, as sketched at the end of this section. The proposed method is promising in that it may potentially move the optimization from the image space to the measured data (sinogram) space, so that the expensive forward and backward projection computations are avoided. Also, the Hessian matrix inversion is replaced by a diagonal matrix inversion. We are still not sure how to deal with the prior term, so in the implementation we use the search direction calculated by other methods for the second term.

4.7 Decomposition methods

The decomposition approach groups the variables or the measurements into blocks and each time solves only a sub-problem. Each iteration is thus less expensive, but more iterations may be needed to reach a suboptimal solution (or the method may not converge at all). Two existing approaches are:

ICD (iterative coordinate descent) [SB93], which divides the pixels into groups and in each iteration updates only one group;

OS (ordered subsets) [EF99b], which divides the measured data into blocks; in each iteration only one block of data is used, and the blocks are used sequentially. It can be combined with any of the methods presented above. It is proven that OS converges [De 01] to a suboptimal solution.
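The following sketch shows the sinogram-space step of Section 4.6: the quadratic model is minimized by ρ* = −D^{-1}(y − diag{b}exp(−Lx_k)), and the image-space direction is recovered from LΔx = ρ*. The report only says this system is solved; using LSQR for the overdetermined solve is an assumption of this sketch, and the prior term is ignored here, as in the report's description of the method.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def sinogram_space_direction(x, L, y, b):
    """Sec. 4.6 step: minimize the quadratic model over rho, then map back to image space."""
    d = b * np.exp(-(L @ x))      # diagonal of D = predicted mean counts
    rho = -(y - d) / d            # rho* = -D^{-1}(y - diag(b) exp(-Lx))
    dx = lsqr(L, rho)[0]          # recover dx from L dx = rho (least-squares sense)
    return dx
```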

5 Implementation

To make our code more efficient, we use the sparse matrix data type for L and C, and precompute Lx, LΔx, Cx, and CΔx outside the backtracking line search loop. For the CG and PCG algorithms, we do not have to form or invert the Hessian matrix; instead we pass a function handle to MATLAB. In practice, the L and C matrices can still be too large to store. Note that we only need the products Lx, L^T y, and Cx to compute the search direction, so we can use efficient on-the-fly implementations (such as Fourier-transform-based projectors). The largest problem we have solved so far has 200 rays and 360 measurements; solving this image with PCG, using a maximum of 100 iterations per Newton step, takes 5128 seconds (85 min). Forward and backward projectors implemented in more efficient code (such as C MEX files) are needed for larger images. From our numerical results, we found that the reconstructed pixel values are mostly positive and the negative values are quite small, so no nonnegativity constraints are used here. With a nonnegativity constraint, an interior-point method could be used. We also tried to solve this problem using CVX; however, the exponential cost function seems to take forever to solve.
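The payoff of precomputing Lx, LΔx, Cx, and CΔx is that each backtracking trial step then costs only vector operations, with no additional projections. A sketch of this, with the quadratic potential stand-in used above; the Armijo parameters (α = 0.01, shrink factor 0.5) are illustrative choices, not values from the report.

```python
import numpy as np

def backtracking_step(x, dx, g, L, C, y, b, beta, alpha=0.01, rho=0.5):
    """Armijo backtracking for (1); the projections are computed once, outside the loop."""
    Lx, Ldx = L @ x, L @ dx                  # precomputed once per outer iteration
    Cx, Cdx = C @ x, C @ dx
    def f_along(s):                          # cost at x + s*dx, vector operations only
        Lz, Cz = Lx + s * Ldx, Cx + s * Cdx
        return y @ Lz + b @ np.exp(-Lz) + beta * 0.5 * np.sum(Cz ** 2)
    f0, slope = f_along(0.0), g @ dx         # slope < 0 for a descent direction
    s = 1.0
    while f_along(s) > f0 + alpha * s * slope and s > 1e-10:
        s *= rho                             # shrink the step until the Armijo condition holds
    return x + s * dx, s
```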

6 Numerical examples

We implemented a parallel-beam CT geometry with 100 detectors and 180 uniformly spaced angles. The rays are spread out widely enough to cover the entire image. The image has 64 by 64 pixels (m = 18000 and n = 4096). We use I_j = I_0 = 10^5 incident photons, lower than in a common CT system (hence noisier data), where we expect the Poisson model to have some advantage over the Gaussian noise model. We use ‖g‖_2 < 10^{-8} as the stopping criterion. The problem is solved on an IBM laptop with an Intel dual-core CPU at 2.0 GHz and 1 GB RAM. The actual computation times are listed in Table 1.

Fig. 1 shows the difference between the reconstructed image and the true image, using the deterministic method and the regularized ML method with β = 10, β = 100, and β = 1000, respectively. We choose β = 100 for the following examples as a good tradeoff between noise level and resolution. Fig. 2 shows the likelihood function value versus the number of iterations for the various methods. Fig. 3(a) shows ‖g‖_2 versus the number of iterations for the exact and truncated Newton methods, using CG with a maximum number of CG steps N_MAX = 10, 30, and 100, respectively. From the actual computation times listed in Table 1, N_MAX = 30 wins. Fig. 3(b) shows ‖g‖_2 versus the number of iterations for the other methods. The exact Newton method converges within 10 iterations, the Newton method with the diagonal approximate Hessian uses fewer than 100 iterations, and the Newton method using ICD converges within 1000 iterations. The Newton method with the quadratic approximate likelihood function, the Newton method with OS, and WLSTR stagnate before reaching the stopping criterion.

Fig. 4 shows ‖g‖_2 versus the cumulative number of CG iterations. In our case M_D works best: the diagonally preconditioned CG speeds up the exact Newton method by a ratio of 185:1059 and unpreconditioned CG by a ratio of 185:206 (times in seconds). The other preconditioner, M_CDC, uses more CG and Newton iterations and takes longer to converge. We did find one case, at a different image size, where M_CDC uses fewer iterations than the diagonal preconditioner, but each iteration takes longer (due to the relatively complicated preconditioner), so overall it still takes more time. Finally, the peak error in the reconstructed image versus the number of iterations is shown in Fig. 5. All the methods stagnate at a certain peak error level due to the noise floor. Comparing the results in Fig. 5 with those in Fig. 3(b), we could argue that a stopping criterion of ‖g‖_2 < 10^{-2} would be good enough for this noise level.

7 Future work

Given more time, we may explore the following aspects:

Implement larger, realistic instances, such as 3D imaging with about 8 million voxel variables. This requires efficient forward and backward projection functions to compute Lx and L^T y.

Compressed sensing in CT imaging. The problem of reconstructing an image from undersampled data can be formulated as an L1-regularized ML problem: minimize −log(likelihood) + β‖x‖_1. Alternatively, L1 regularization can be applied in another transform domain with sparse structure: minimize −log(likelihood) + β‖Φx‖_1, where Φ is a sparsifying transform such as the DCT or any other overcomplete basis. Similar approaches have been used in magnetic resonance imaging (MRI) [LDPss], but have only recently attracted interest in the CT community.

Figure 1: Difference between the reconstructed images and the true image; (a): the deterministic method; regularized ML with (b): β = 10, (c): β = 100, and (d): β = 1000.

Figure 2: Likelihood function value vs. number of outer-loop iterations, for various methods.

Figure 3: Norm of the gradient ‖g‖_2 versus the number of iterations, using (a): exact and truncated Newton (CG) with N_MAX = 10, 30, and 100, respectively; (b): the other methods.

Figure 4: Norm of the gradient ‖g‖_2 versus cumulative CG iterations, using CG, diagonally preconditioned CG, and M_CDC-preconditioned CG.

Figure 5: Peak error ‖x_k − x_true‖_∞ in the reconstructed image versus number of iterations.

TABLE 1: Actual convergence times in seconds, for the two image sizes, listed only for the methods that converge within 1000 iterations (exact Newton; CG and PCG-diag and PCG-cdc with N_MAX = 10, 30, and 100; fixed H; OS; ICD).

References

[Boy07] S. Boyd. EE 364b notes, Stanford University, Winter 2007.

[BV04] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

[CPC+93] N. H. Clinthorne, T.-S. Pan, P.-C. Chiao, W. L. Rogers, and J. A. Stamos. Preconditioning methods for improved convergence rates in iterative reconstructions. IEEE Trans. on Med. Imaging, 12(1), March 1993.

[DB05a] B. De Man and S. Basu. Efficient maximum likelihood reconstruction for transmission tomography. 14th International Conference of Medical Physics, Nuremberg, Germany, Sept. 2005.

[DB05b] B. De Man and S. Basu. Generalized Geman prior for iterative reconstruction. 14th International Conference of Medical Physics, Sept. 2005.

[DBT+05] B. De Man, S. Basu, J.-B. Thibault, J. Hsieh, J. Fessler, C. Bouman, and K. Sauer. A study of four minimization approaches for iterative reconstruction in X-ray CT. IEEE Nuclear Science Symposium Conference Record, 2005.

[De 01] Bruno De Man. Iterative reconstruction for reduction of metal artifacts in computed tomography. PhD thesis, University of Leuven, Leuven-Heverlee, Belgium, May 2001.

[EF99a] H. Erdogan and J. A. Fessler. Monotonic algorithms for transmission tomography. IEEE Trans. on Med. Imaging, Nov. 1999.

[EF99b] H. Erdogan and J. A. Fessler. Ordered subsets algorithms for transmission tomography. Phys. Med. Biol., Nov. 1999.

[FB99] J. A. Fessler and S. D. Booth. Conjugate-gradient preconditioning methods for shift-variant PET image reconstruction. IEEE Trans. Image Processing, 8(5), 1999.

[LDPss] M. Lustig, D. L. Donoho, and J. M. Pauly. Sparse MRI: the application of compressed sensing for rapid MR imaging. Magn. Reson. Med., in press.

[SB93] K. Sauer and C. Bouman. A local update strategy for iterative reconstruction from projections. IEEE Trans. Signal Proc., 41(2), 1993.
