EE 364B Project Final Report: SCP Solver for a Nonconvex Quantitative Susceptibility Mapping Formulation


Grant Yang, Thanchanok Teeraratkul

June 4

1 Background

Changes in iron concentration in brain tissue are of interest in the study of neurodegenerative diseases such as Alzheimer's and multiple sclerosis [MPY+13]. Magnetic resonance imaging (MRI) is inherently sensitive to changes in tissue magnetic susceptibility, which can be used as a direct measure of iron concentration [Sch96]. Changes in tissue magnetic susceptibility result in phase shifts in the MR image. Calculating quantitative susceptibility maps (QSM) from the MR image therefore gives researchers a noninvasive method for studying iron concentrations in the brain.

2 Problem Formulation

The relationship between the phase of the image and the tissue magnetic susceptibility can be described as a 3D convolution with a unit dipole function [SdSM03]. This convolution relationship can be expressed as a linear system, $Ax = b$, where $A \in \mathbf{R}^{n \times n}$ is a known Toeplitz matrix representing the convolution kernel, $x \in \mathbf{R}^n$ is a vector of magnetic susceptibilities, and $b \in \mathbf{R}^n$ is a vector of phase angles. The matrix $A$ can be decomposed as $F^H D F$, where $F \in \mathbf{C}^{n \times n}$ is the discrete Fourier transform matrix and $D \in \mathbf{R}^{n \times n}$ is a diagonal matrix containing the Fourier transform of the unit dipole function, $\alpha k_z^2 / (k_x^2 + k_y^2 + k_z^2)$. Here $k$ denotes the spatial frequency coordinates, and $\alpha$ is a constant that accounts for the imaging parameters. Since $A$ contains low singular values, direct inversion of $A$ amplifies noise and artifacts in the reconstructed image. Therefore, a regularization scheme is needed to calculate the susceptibility maps. The simplest form of regularization is to threshold the singular values of $A$ before inversion [WSB10]; however, this approach does a poor job of suppressing artifacts.
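Because $A = F^H D F$ with $D$ diagonal, matrix–vector products with $A$ reduce to FFTs and an elementwise multiply rather than a dense $n \times n$ product. A minimal NumPy sketch of this structure (the function names, unit grid spacing, and $\alpha = 1$ default are our own illustrative choices, not from the report):

```python
import numpy as np

def dipole_kernel(shape, alpha=1.0):
    """Fourier-domain dipole kernel D(k) = alpha * k_z^2 / (k_x^2 + k_y^2 + k_z^2).

    Illustrative sketch: grid spacing and the imaging constant alpha are
    placeholders, and the undefined k = 0 sample is set to zero by convention.
    """
    kx, ky, kz = np.meshgrid(*(np.fft.fftfreq(n) for n in shape), indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(invalid="ignore", divide="ignore"):
        D = alpha * kz**2 / k2
    D[0, 0, 0] = 0.0  # 0/0 at the k = 0 sample; replace the NaN
    return D

def apply_A(x, D):
    """Apply A = F^H D F (3D convolution with the dipole) in O(n log n)."""
    return np.real(np.fft.ifftn(D * np.fft.fftn(x)))

# Example: forward model mapping a random susceptibility map to phase.
x = np.random.rand(8, 8, 8)
b = apply_A(x, dipole_kernel(x.shape))
```

Since $D$ is real, this operator is symmetric, which is what lets $A^T$ below be applied with the same two FFTs.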
More sophisticated formulations have been developed to take advantage of prior knowledge of the location of tissue boundaries. The most popular of these formulations

employs an $\ell_2$ norm penalty on the gradient of the image,

$$\text{minimize} \quad \|M_0 (Ax - b)\|_2^2 + \lambda_1 \left\| \begin{bmatrix} W_x G_x \\ W_y G_y \\ W_z G_z \end{bmatrix} x \right\|_2^2 + \lambda_2 \|M_1 x\|_2^2,$$

where $M_0, M_1 \in \mathbf{R}^{n \times n}$ are diagonal weighting matrices, $W_x, W_y, W_z \in \mathbf{R}^{n \times n}$ are diagonal weighting matrices specifying tissue boundaries, $G_x, G_y, G_z \in \mathbf{R}^{n \times n}$ are gradient matrices, and $\lambda_1, \lambda_2 \in \mathbf{R}_+$ adjust the weight of the regularization [LLdR+12]. However, this formulation has two deficiencies: the $\ell_2$ penalty encourages an overly smooth solution, which obscures small tissue structures, and the linear relationship between susceptibility and phase is sensitive to phase unwrapping errors. In order to improve the robustness and detail preservation of the susceptibility maps, we employ a total variation norm-based constraint on a complex-valued QSM formulation,

$$\begin{array}{ll} \text{minimize} & \|M(e^{iAx} - e^{ib})\|_2^2 \\ \text{subject to} & \displaystyle\sum_{i=1}^n \sqrt{(W_x G_x x)_i^2 + (W_y G_y x)_i^2 + (W_z G_z x)_i^2} \le \epsilon, \end{array}$$

where $M \in \mathbf{R}^{n \times n}$ is a diagonal matrix containing the magnitude of the complex MR image. This formulation should improve on current methods because the total variation norm improves retention of fine tissue structures, while the complex formulation increases robustness to phase unwrapping errors. Because the objective function is nonconvex and the problem size is very large, we wrote a Sequential Convex Program (SCP) solver that takes advantage of the Toeplitz matrix structure [Mur97].

3 SCP Algorithm

We implemented a Sequential Convex Program (SCP) in MATLAB to solve our problem. Our solver takes advantage of the Toeplitz structure of $A$ and an affine approximation of the nonconvex constraint in order to achieve fast convergence. Denoting the nonconvex objective function as $f(x)$ and the total variation norm-based constraint function as $c(x)$, the SCP algorithm can be written as follows.

given $x_0 := 0$, $\lambda_0 := 1$, $k := 0$.
repeat until convergence
  1. Solve: minimize $\nabla f(x_k)^T p + \tfrac{1}{2} p^T p$ subject to $c(x_k) + \nabla c(x_k)^T p \le 0$.
  2. Compute the step size $\alpha$ by backtracking line search.
  3. Update $x_{k+1} := x_k + \alpha p$, $\lambda_{k+1} := \lambda$, $k := k + 1$.
return $x$.

The gradient of the objective function and of the constraint in each iteration were evaluated analytically:

$$\nabla f(x) = 2 A M^2 \left( \mathbf{diag}(\cos b)\sin(Ax) - \mathbf{diag}(\sin b)\cos(Ax) \right),$$
$$\nabla c(x) = \sum_{i=1}^n \frac{C_i^T C_i x}{\|C_i x\|_2},$$

where $C_i \in \mathbf{R}^{3 \times n}$ contains the $i$th rows of $W_x G_x$, $W_y G_y$, and $W_z G_z$. The QP in step 1 was solved directly by maximizing the dual function, which results in the update formulae

$$\lambda = \max\left(0,\; \frac{c(x) - \nabla c(x)^T \nabla f(x)}{\nabla c(x)^T \nabla c(x)}\right), \qquad p = -\lambda \nabla c(x) - \nabla f(x).$$

By choosing a convex approximation that allows direct calculation of the search direction, we expect to reduce the computation time of the proposed SCP algorithm.

4 Results

Quantitative susceptibility maps were calculated from an MR data set of the brain of a healthy volunteer, acquired on a 7T system. The dimension of the image was voxels with mm resolution. The phase images were unwrapped using FSL PRELUDE, and the external phase not resulting from changes in tissue susceptibility was removed using projection onto dipole fields [LKdR+11]. The nonconvex formulation of QSM was solved using the proposed algorithm with zero as the initial guess and $\epsilon$ chosen heuristically. Computations were conducted on the Sherlock computing cluster, which uses an Intel(R) Xeon(R) CPU with 24 GB of RAM. The proposed SCP algorithm was compared to the current method for solving the nonconvex QSM formulation, which is based on a Taylor expansion of the argument of the norm in the objective function and a conjugate gradient solver at each iteration [LWL+13]. Although the conjugate gradient based solver converged in fewer steps, the direct calculation of each step direction in the proposed SCP was much faster than performing conjugate gradients, reducing the total calculation time from 46 minutes to 4.5 minutes. The proposed SCP algorithm also converged to a lower objective value. This improvement may occur because the smaller step size generated by the proposed SCP results in a more accurate local approximation of the objective function.
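The direct step-direction calculation described above (the closed-form dual of the step-1 QP, followed by a backtracking line search) can be sketched in NumPy. This is an illustrative reimplementation under our own names (`scp_step`, `backtrack`); the Armijo rule is a generic textbook stand-in for the report's backtracking search, and the toy objective and constraint are ours for illustration, not the QSM problem:

```python
import numpy as np

def scp_step(gf, c_val, gc):
    """Closed-form solution, via its dual, of the QP
        minimize    gf^T p + (1/2) p^T p
        subject to  c_val + gc^T p <= 0.
    Returns the search direction p and the dual variable lambda."""
    lam = max(0.0, (c_val - gc @ gf) / (gc @ gc))
    return -lam * gc - gf, lam

def backtrack(f, x, p, gf, alpha=1.0, beta=0.5, sigma=1e-4):
    """Generic Armijo backtracking line search (a textbook stand-in,
    not necessarily the exact rule used in the report)."""
    while f(x + alpha * p) > f(x) + sigma * alpha * (gf @ p):
        alpha *= beta
    return alpha

# Toy illustration (not the QSM problem): minimize ||x - 1||^2
# subject to sum(x) - 1 <= 0, whose solution is x = (1/3)*ones.
f = lambda x: np.sum((x - 1.0) ** 2)
x, lam = np.zeros(3), 1.0
for _ in range(50):
    gf = 2.0 * (x - 1.0)                     # gradient of the objective
    c_val, gc = np.sum(x) - 1.0, np.ones(3)  # constraint value and gradient
    p, lam = scp_step(gf, c_val, gc)
    if np.linalg.norm(p) < 1e-10:            # p = 0: KKT point reached
        break
    x = x + backtrack(f, x, p, gf) * p
```

On this toy problem the iteration reaches the KKT point $x = \tfrac{1}{3}\mathbf{1}$ with multiplier $\lambda = 4/3$. In the actual solver, $\nabla f$ and $\nabla c$ are the analytic expressions above and $A$ is applied via FFTs, so each step costs only FFTs and elementwise operations.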
The convergence of the proposed SCP and the current nonconvex solver can be seen in Figure 1.

Figure 1: Convergence of the SCP (objective value $f(x^{(k)})$ versus time in minutes, for the nonconvex CG-based algorithm and the proposed implementation). The SCP implementation requires significantly less computation than the current method based on conjugate gradients.

The SCP algorithm generated QSM with the expected contrast and range of susceptibilities based on known tissue properties. For example, the basal ganglia, which are known to have large deposits of paramagnetic ferritin, show high susceptibility, and tissue boundaries between white matter and grey matter are well delineated.

Figure 2: QSM generated by the SCP solver on the nonconvex formulation (scale in ppm).

The QSM also demonstrated an increase in detail and artifact reduction over conventional $\ell_2$ and thresholded inversion methods. As seen in Figure 3, the SCP algorithm result shows few reconstruction artifacts while retaining fine tissue structures. The streaking artifacts emanating from the blood vessels in the coronal view of the brain are significantly reduced compared to direct thresholding, and the image does not display the smoothing effect seen in the $\ell_2$ norm regularized image.

Figure 3: Comparison of TV (proposed), $\ell_2$, and TKD reconstructions. The nonconvex formulation with a total variation based constraint shows improved tissue detail retention and artifact suppression compared to the popular $\ell_2$ norm and direct inversion methods.

5 Conclusions

Our SCP implementation for solving the nonconvex QSM formulation represents a significant increase in speed over current nonconvex solvers. In addition, the total variation norm-based constraint limits image artifacts while retaining more tissue detail than the current $\ell_2$ norm or direct inversion methods. While more work is needed to validate performance on pathological tissue, our SCP shows significant promise as a robust method for studying changes in tissue magnetic susceptibility caused by neurodegenerative diseases.

References

[LKdR+11] Tian Liu, Ildar Khalidov, Ludovic de Rochefort, Pascal Spincemaille, Jing Liu, A. J. Tsiouris, and Yi Wang. A novel background field removal method for MRI using projection onto dipole fields (PDF). NMR Biomed.

[LLdR+12] Jing Liu, Tian Liu, Ludovic de Rochefort, James Ledoux, Ildar Khalidov, Weiwei Chen, A. J. Tsiouris, Cynthia Wisnieff, Pascal Spincemaille, Martin R. Prince, and Yi Wang. Morphology enabled dipole inversion for quantitative susceptibility mapping using structural consistency between the magnitude image and the susceptibility map.

[LWL+13] Tian Liu, Cynthia Wisnieff, Min Lou, Weiwei Chen, Pascal Spincemaille, and Yi Wang. Nonlinear formulation of the magnetic field to source relationship for robust quantitative susceptibility mapping. Magnetic Resonance in Medicine, 69.

[MPY+13] Veela Mehta, Wei Pei, Grant Yang, Suyang Li, Eashwar Swamy, Aaron Boster, Petra Schmalbrock, and David Pitt. Iron is a sensitive biomarker for inflammation in multiple sclerosis lesions. PLOS ONE, 8:1–10.

[Mur97] Walter Murray. Sequential quadratic programming methods for large-scale problems. Computational Optimization and Applications, 7.

[Sch96] J. F. Schenck. The role of magnetic susceptibility in magnetic resonance imaging: MRI magnetic compatibility of the first and second kinds. Med. Phys., 26.

[SdSM03] Rares Salomir, Baudouin Denis de Senneville, and Chrit T. W. Moonen. A fast calculation method for magnetic field inhomogeneity due to an arbitrary distribution of bulk susceptibility.

[WSB10] Samuel Wharton, Andreas Schafer, and Richard Bowtell. Susceptibility mapping in the human brain using threshold-based k-space division. Magnetic Resonance in Medicine, 63.


More information

Variational methods for restoration of phase or orientation data

Variational methods for restoration of phase or orientation data Variational methods for restoration of phase or orientation data Martin Storath joint works with Laurent Demaret, Michael Unser, Andreas Weinmann Image Analysis and Learning Group Universität Heidelberg

More information

Frank-Wolfe Method. Ryan Tibshirani Convex Optimization

Frank-Wolfe Method. Ryan Tibshirani Convex Optimization Frank-Wolfe Method Ryan Tibshirani Convex Optimization 10-725 Last time: ADMM For the problem min x,z f(x) + g(z) subject to Ax + Bz = c we form augmented Lagrangian (scaled form): L ρ (x, z, w) = f(x)

More information

On the interior of the simplex, we have the Hessian of d(x), Hd(x) is diagonal with ith. µd(w) + w T c. minimize. subject to w T 1 = 1,

On the interior of the simplex, we have the Hessian of d(x), Hd(x) is diagonal with ith. µd(w) + w T c. minimize. subject to w T 1 = 1, Math 30 Winter 05 Solution to Homework 3. Recognizing the convexity of g(x) := x log x, from Jensen s inequality we get d(x) n x + + x n n log x + + x n n where the equality is attained only at x = (/n,...,

More information

Optimization Tutorial 1. Basic Gradient Descent

Optimization Tutorial 1. Basic Gradient Descent E0 270 Machine Learning Jan 16, 2015 Optimization Tutorial 1 Basic Gradient Descent Lecture by Harikrishna Narasimhan Note: This tutorial shall assume background in elementary calculus and linear algebra.

More information

Magnetic nanoparticle imaging using multiple electron paramagnetic resonance activation sequences

Magnetic nanoparticle imaging using multiple electron paramagnetic resonance activation sequences Magnetic nanoparticle imaging using multiple electron paramagnetic resonance activation sequences A. Coene, G. Crevecoeur, and L. Dupré Citation: Journal of Applied Physics 117, 17D105 (2015); doi: 10.1063/1.4906948

More information

Constructing Approximation Kernels for Non-Harmonic Fourier Data

Constructing Approximation Kernels for Non-Harmonic Fourier Data Constructing Approximation Kernels for Non-Harmonic Fourier Data Aditya Viswanathan aditya.v@caltech.edu California Institute of Technology SIAM Annual Meeting 2013 July 10 2013 0 / 19 Joint work with

More information

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with James V. Burke, University of Washington Daniel

More information

Navigator Echoes. BioE 594 Advanced Topics in MRI Mauli. M. Modi. BioE /18/ What are Navigator Echoes?

Navigator Echoes. BioE 594 Advanced Topics in MRI Mauli. M. Modi. BioE /18/ What are Navigator Echoes? Navigator Echoes BioE 594 Advanced Topics in MRI Mauli. M. Modi. 1 What are Navigator Echoes? In order to correct the motional artifacts in Diffusion weighted MR images, a modified pulse sequence is proposed

More information

Lecture 17: October 27

Lecture 17: October 27 0-725/36-725: Convex Optimiation Fall 205 Lecturer: Ryan Tibshirani Lecture 7: October 27 Scribes: Brandon Amos, Gines Hidalgo Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer: These

More information

Lecture 7: September 17

Lecture 7: September 17 10-725: Optimization Fall 2013 Lecture 7: September 17 Lecturer: Ryan Tibshirani Scribes: Serim Park,Yiming Gu 7.1 Recap. The drawbacks of Gradient Methods are: (1) requires f is differentiable; (2) relatively

More information

Lift me up but not too high Fast algorithms to solve SDP s with block-diagonal constraints

Lift me up but not too high Fast algorithms to solve SDP s with block-diagonal constraints Lift me up but not too high Fast algorithms to solve SDP s with block-diagonal constraints Nicolas Boumal Université catholique de Louvain (Belgium) IDeAS seminar, May 13 th, 2014, Princeton The Riemannian

More information

Written Examination

Written Examination Division of Scientific Computing Department of Information Technology Uppsala University Optimization Written Examination 202-2-20 Time: 4:00-9:00 Allowed Tools: Pocket Calculator, one A4 paper with notes

More information

Bulletin of the. Iranian Mathematical Society

Bulletin of the. Iranian Mathematical Society ISSN: 1017-060X (Print) ISSN: 1735-8515 (Online) Bulletin of the Iranian Mathematical Society Vol. 41 (2015), No. 5, pp. 1259 1269. Title: A uniform approximation method to solve absolute value equation

More information

Research Article Residual Iterative Method for Solving Absolute Value Equations

Research Article Residual Iterative Method for Solving Absolute Value Equations Abstract and Applied Analysis Volume 2012, Article ID 406232, 9 pages doi:10.1155/2012/406232 Research Article Residual Iterative Method for Solving Absolute Value Equations Muhammad Aslam Noor, 1 Javed

More information

A Weighted Multivariate Gaussian Markov Model For Brain Lesion Segmentation

A Weighted Multivariate Gaussian Markov Model For Brain Lesion Segmentation A Weighted Multivariate Gaussian Markov Model For Brain Lesion Segmentation Senan Doyle, Florence Forbes, Michel Dojat July 5, 2010 Table of contents Introduction & Background A Weighted Multi-Sequence

More information

Linear algebra issues in Interior Point methods for bound-constrained least-squares problems

Linear algebra issues in Interior Point methods for bound-constrained least-squares problems Linear algebra issues in Interior Point methods for bound-constrained least-squares problems Stefania Bellavia Dipartimento di Energetica S. Stecco Università degli Studi di Firenze Joint work with Jacek

More information

Basic concepts in Linear Algebra and Optimization

Basic concepts in Linear Algebra and Optimization Basic concepts in Linear Algebra and Optimization Yinbin Ma GEOPHYS 211 Outline Basic Concepts on Linear Algbra vector space norm linear mapping, range, null space matrix multiplication terative Methods

More information

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods

AM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods AM 205: lecture 19 Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods Quasi-Newton Methods General form of quasi-newton methods: x k+1 = x k α

More information

Brain Lesion Segmentation: A Bayesian Weighted EM Approach

Brain Lesion Segmentation: A Bayesian Weighted EM Approach Brain Lesion Segmentation: A Bayesian Weighted EM Approach Senan Doyle, Florence Forbes, Michel Dojat November 19, 2009 Table of contents Introduction & Background A Weighted Multi-Sequence Markov Model

More information

Reconstruction of Block-Sparse Signals by Using an l 2/p -Regularized Least-Squares Algorithm

Reconstruction of Block-Sparse Signals by Using an l 2/p -Regularized Least-Squares Algorithm Reconstruction of Block-Sparse Signals by Using an l 2/p -Regularized Least-Squares Algorithm Jeevan K. Pant, Wu-Sheng Lu, and Andreas Antoniou University of Victoria May 21, 2012 Compressive Sensing 1/23

More information

Distributed Box-Constrained Quadratic Optimization for Dual Linear SVM

Distributed Box-Constrained Quadratic Optimization for Dual Linear SVM Distributed Box-Constrained Quadratic Optimization for Dual Linear SVM Lee, Ching-pei University of Illinois at Urbana-Champaign Joint work with Dan Roth ICML 2015 Outline Introduction Algorithm Experiments

More information

L5 Support Vector Classification

L5 Support Vector Classification L5 Support Vector Classification Support Vector Machine Problem definition Geometrical picture Optimization problem Optimization Problem Hard margin Convexity Dual problem Soft margin problem Alexander

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

On Signal to Noise Ratio Tradeoffs in fmri

On Signal to Noise Ratio Tradeoffs in fmri On Signal to Noise Ratio Tradeoffs in fmri G. H. Glover April 11, 1999 This monograph addresses the question of signal to noise ratio (SNR) in fmri scanning, when parameters are changed under conditions

More information

Filtering and Edge Detection

Filtering and Edge Detection Filtering and Edge Detection Local Neighborhoods Hard to tell anything from a single pixel Example: you see a reddish pixel. Is this the object s color? Illumination? Noise? The next step in order of complexity

More information

Hot-Starting NLP Solvers

Hot-Starting NLP Solvers Hot-Starting NLP Solvers Andreas Wächter Department of Industrial Engineering and Management Sciences Northwestern University waechter@iems.northwestern.edu 204 Mixed Integer Programming Workshop Ohio

More information

Iterative regularization of nonlinear ill-posed problems in Banach space

Iterative regularization of nonlinear ill-posed problems in Banach space Iterative regularization of nonlinear ill-posed problems in Banach space Barbara Kaltenbacher, University of Klagenfurt joint work with Bernd Hofmann, Technical University of Chemnitz, Frank Schöpfer and

More information

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 9. Alternating Direction Method of Multipliers

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 9. Alternating Direction Method of Multipliers Shiqian Ma, MAT-258A: Numerical Optimization 1 Chapter 9 Alternating Direction Method of Multipliers Shiqian Ma, MAT-258A: Numerical Optimization 2 Separable convex optimization a special case is min f(x)

More information

The effect of different number of diffusion gradients on SNR of diffusion tensor-derived measurement maps

The effect of different number of diffusion gradients on SNR of diffusion tensor-derived measurement maps J. Biomedical Science and Engineering, 009,, 96-101 The effect of different number of diffusion gradients on SNR of diffusion tensor-derived measurement maps Na Zhang 1, Zhen-Sheng Deng 1*, Fang Wang 1,

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 21: Sensitivity of Eigenvalues and Eigenvectors; Conjugate Gradient Method Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis

More information

Notes for CS542G (Iterative Solvers for Linear Systems)

Notes for CS542G (Iterative Solvers for Linear Systems) Notes for CS542G (Iterative Solvers for Linear Systems) Robert Bridson November 20, 2007 1 The Basics We re now looking at efficient ways to solve the linear system of equations Ax = b where in this course,

More information

GI07/COMPM012: Mathematical Programming and Research Methods (Part 2) 2. Least Squares and Principal Components Analysis. Massimiliano Pontil

GI07/COMPM012: Mathematical Programming and Research Methods (Part 2) 2. Least Squares and Principal Components Analysis. Massimiliano Pontil GI07/COMPM012: Mathematical Programming and Research Methods (Part 2) 2. Least Squares and Principal Components Analysis Massimiliano Pontil 1 Today s plan SVD and principal component analysis (PCA) Connection

More information

Inverse Singular Value Problems

Inverse Singular Value Problems Chapter 8 Inverse Singular Value Problems IEP versus ISVP Existence question A continuous approach An iterative method for the IEP An iterative method for the ISVP 139 140 Lecture 8 IEP versus ISVP Inverse

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information