Regularization of the Inverse Laplace Transform with Applications in Nuclear Magnetic Resonance Relaxometry (Candidacy Exam)

Applied Mathematics, Applied Statistics, & Scientific Computation, University of Maryland, College Park
cmsabett@math.umd.edu
Advisors: John J. Benedetto (Mathematics), Alfredo Nava-Tudela (IPST)
Mentor: Richard Spencer, Laboratory of Clinical Investigations, NIA

Outline: Nuclear Magnetic Resonance (NMR) Relaxometry (Background, Objective, Motivation from Celik, 1D Discrete Model, 2D Model Extension); Ordinary Least Squares; Regularization (Regularized Least Squares, Methods to Choose the Regularization Parameter); Hansen's L-Curve (L-Curve as an Analytical Tool, FINDCORNER Algorithm); Problem Extensions.

Nuclear Magnetic Resonance (NMR) Relaxometry

Figure: Clockwise from top left: (a) The local magnetization $M$ emerges from alignment with the magnetic field $B_0$. (b) With an RF pulse, $M$ aligns with the magnetic field $B_1$ in the transverse plane. (c) After the pulse, $M$ begins to realign with $B_0$. (d) The components $M_{\mathrm{lon}}(t)$ and $M_{\mathrm{tr}}(t)$, characterized by the relaxation times $T_1$ and $T_2$, respectively, describe $M(t)$ at time $t$. Images courtesy of Alfredo Nava-Tudela.

Objective

A 1-dimensional continuous NMR relaxometry signal takes the form
$$y(t) = \int_0^\infty f(T_2)\, e^{-t/T_2}\, dT_2 + n(t) \qquad (1)$$
where $T_2$ is the transverse relaxation time, $f(T_2)$ is the amplitude of the associated component, and $n(t)$ is additive noise.

Objective: Recover the distribution of amplitudes $f(T_2)$ present in the signal via an inverse Laplace transform (ILT).

Toy Example

Consider a signal
$$y(t) = 0.6\, e^{-t/T_{2,1}} + 0.4\, e^{-t/T_{2,2}} + n(t) \qquad (2)$$
whose exact distribution $f(T_2)$ is
$$f(T_2) = 0.6\, \delta_{T_{2,1}}(T_2) + 0.4\, \delta_{T_{2,2}}(T_2). \qquad (3)$$
The recovery of $f(T_2)$ is unstable because the inversion is highly sensitive to noise.
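As a quick illustration, the sketch below simulates this two-component toy signal; the specific $T_2$ values, sampling grid, and noise level are assumptions chosen only for illustration.

    # Minimal simulation of the toy signal in Eq. (2); T2 values, time grid,
    # and noise level are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.01, 3.0, 200)            # acquisition times (assumed)
    T21, T22 = 0.05, 0.5                       # hypothetical T2 values
    y = 0.6 * np.exp(-t / T21) + 0.4 * np.exp(-t / T22)
    y += 0.01 * rng.standard_normal(t.size)    # additive noise n(t)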

Motivation

Celik et al. [1] demonstrated stabilization of the ILT through the introduction of a second, indirect dimension.

Figure: The experimental process applied by Celik [1]. The 2D ILT path illustrated by the solid arrows produced better resolution of peaks in a sparse signal than the 1D ILT path (dashed arrow).

Motivation

Figure: The experimental process applied by Celik [1]. Inversions from 12 different noise realizations are shown, demonstrating the stability of the 2D ILT. Top: 1D ILT. Bottom: 2D ILT projection.

Discrete Model

A 1D NMR signal takes the discrete form
$$z(t_i) = \sum_{j=1}^{K} F(T_{2,j})\, e^{-t_i/T_{2,j}} \qquad (4)$$
In matrix form,
$$z = AF \qquad (5)$$
where
$$[A]_{ij} = e^{-t_i/T_{2,j}} \qquad (6)$$
$$z_i = z(t_i) \qquad (7)$$
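A minimal sketch of assembling this discrete forward model; the time grid, $T_2$ grid, and test distribution below are illustrative assumptions.

    # Sketch of the discrete forward model z = A F, Eqs. (4)-(7).
    import numpy as np

    t = np.linspace(0.01, 3.0, 200)               # M acquisition times (assumed)
    T2 = np.logspace(-2, 1, 100)                  # K candidate T2 values (assumed)
    A = np.exp(-t[:, None] / T2[None, :])         # [A]_ij = exp(-t_i / T_{2,j})

    F = np.zeros(T2.size)                         # a sparse test distribution
    F[np.argmin(np.abs(T2 - 0.05))] = 0.6
    F[np.argmin(np.abs(T2 - 0.5))] = 0.4
    z = A @ F                                     # noiseless discrete signal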

2D NMR Model

The 2D continuous NMR relaxometry signal takes the form
$$z(\tau, t) = \int_0^\infty \int_0^\infty F(T_1, T_2)\, e^{-\tau/T_1}\, e^{-t/T_2}\, dT_1\, dT_2 \qquad (8)$$
In discrete form,
$$z(\tau_i, t_j) = \sum_{k=1}^{K_1} \sum_{m=1}^{K_2} F(T_{1,k}, T_{2,m})\, e^{-\tau_i/T_{1,k}}\, e^{-t_j/T_{2,m}} \qquad (9)$$
where $\tau$ denotes the time variable of the indirect dimension.

2D NMR Model

Define $A_1$, $A_2$, $\tilde{F}$, and $Z$ such that
$$[A_1]_{ik} = e^{-\tau_i/T_{1,k}}, \quad [A_2]_{jm} = e^{-t_j/T_{2,m}}, \quad [\tilde{F}]_{km} = F(T_{1,k}, T_{2,m}), \quad [Z]_{ij} = z(\tau_i, t_j) \qquad (10)$$
Then
$$Z = A_1 \tilde{F} A_2^T \qquad (11)$$
with dimensions $(M_1 \times M_2) = (M_1 \times K_1)(K_1 \times K_2)(K_2 \times M_2)$, which becomes
$$\mathrm{vec}(Z) = (A_2 \otimes A_1)\, \mathrm{vec}(\tilde{F}) \qquad (12)$$
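The identity in (12) can be checked numerically with a column-major vec. A minimal sketch, assuming small illustrative grid sizes and a random test distribution:

    # Sketch of the 2D model, Eqs. (10)-(12): Z = A1 F A2^T and its
    # vectorized Kronecker form. Grid sizes are illustrative assumptions.
    import numpy as np

    tau = np.linspace(0.01, 2.0, 30)              # indirect-dimension times (assumed)
    t = np.linspace(0.01, 3.0, 40)                # direct-dimension times (assumed)
    T1 = np.logspace(-2, 1, 15)
    T2 = np.logspace(-2, 1, 20)

    A1 = np.exp(-tau[:, None] / T1[None, :])      # M1 x K1
    A2 = np.exp(-t[:, None] / T2[None, :])        # M2 x K2
    F = np.random.default_rng(0).random((T1.size, T2.size))   # K1 x K2 test distribution

    Z = A1 @ F @ A2.T                             # M1 x M2
    # vec(Z) = (A2 kron A1) vec(F), using column-major (Fortran-order) vec
    lhs = Z.flatten(order="F")
    rhs = np.kron(A2, A1) @ F.flatten(order="F")
    assert np.allclose(lhs, rhs)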

Ordinary Least Squares

With 1D ordinary least squares (OLS), we solve
$$\min_{F \in \mathbb{R}^K} \|AF - y\|_2^2 \qquad (13)$$

Ordinary Least Squares

Define the singular value decomposition of $A$ as
$$A = \sum_{i=1}^{L} \sigma_i u_i v_i^T \qquad (14)$$
where the $\sigma_i$ are the singular values and $u_i$, $v_i$ are the left and right singular vectors, respectively. Then the OLS solution is
$$F_{\mathrm{OLS}} = \sum_{i=1}^{N} \frac{u_i^T y}{\sigma_i}\, v_i. \qquad (15)$$
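A minimal sketch of the SVD-based OLS solution, assuming the same illustrative problem setup as above; the division by the very small singular values of the Laplace kernel is precisely what makes the unregularized inversion unstable.

    # Sketch of the OLS solution via the SVD, Eqs. (14)-(15).
    # Grids and noise level are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.01, 3.0, 200)
    T2 = np.logspace(-2, 1, 100)
    A = np.exp(-t[:, None] / T2[None, :])
    y = 0.6 * np.exp(-t / 0.05) + 0.4 * np.exp(-t / 0.5)
    y += 0.01 * rng.standard_normal(t.size)

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    F_ols = Vt.T @ ((U.T @ y) / s)        # sum_i (u_i^T y / sigma_i) v_i; noise-amplified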

Regularization

To increase the stability of the inversion, we add a penalty term tuned by the parameter $\alpha$. The most common form is Tikhonov regularization, also known as ridge regression:
$$\min_{F \in \mathbb{R}^K} \|AF - y\|_2^2 + \alpha^2 \|F\|_2^2 \qquad (16)$$

Regularization

Other common penalty forms include $L_p$ regularization, $p \geq 1$:
$$\min_{F \in \mathbb{R}^K} \|AF - y\|_2^2 + \alpha \|F\|_p \qquad (16)$$
$L_1$ regularization is known as LASSO. $L_p$ regularization for $1 < p < 2$ is called bridge regression.

Regularization

Other common penalty forms include a penalty through a differential operator $L$:
$$\min_{F \in \mathbb{R}^K} \|AF - y\|_2^2 + \alpha \|LF\|_2^2 \qquad (16)$$

Regularization

We will discuss Tikhonov regularization:
$$\min_{F \in \mathbb{R}^K} \|AF - y\|_2^2 + \alpha^2 \|F\|_2^2 \qquad (16)$$

Regularized Least Squares

The regularized problem can be expressed as an ordinary least squares problem
$$\min_{F \in \mathbb{R}^K} \|\tilde{A} F - \tilde{y}\|_2^2 \qquad (17)$$
where
$$\tilde{A} = \begin{bmatrix} A \\ \alpha I_K \end{bmatrix}, \qquad \tilde{y} = \begin{bmatrix} y \\ 0_K \end{bmatrix}$$
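A sketch of forming and solving this augmented system with a standard least-squares routine; the value of $\alpha$ and the problem setup are assumptions.

    # Sketch of the augmented least-squares form, Eq. (17):
    # stack A over alpha*I and y over zeros, then solve with ordinary LS.
    import numpy as np

    t = np.linspace(0.01, 3.0, 200)
    T2 = np.logspace(-2, 1, 100)
    A = np.exp(-t[:, None] / T2[None, :])
    y = 0.6 * np.exp(-t / 0.05) + 0.4 * np.exp(-t / 0.5)

    alpha = 1e-2                                     # illustrative choice
    K = A.shape[1]
    A_tilde = np.vstack([A, alpha * np.eye(K)])      # [A; alpha I_K]
    y_tilde = np.concatenate([y, np.zeros(K)])       # [y; 0_K]
    F_tikh, *_ = np.linalg.lstsq(A_tilde, y_tilde, rcond=None)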

Regularized Least Squares

For an appropriate choice of $\alpha$,
$$F_{\mathrm{Tikh}} = \sum_{i=1}^{N} f_i\, \frac{u_i^T y}{\sigma_i}\, v_i \qquad (18)$$
where, in the case of Tikhonov regularization, the filter factors $f_i$ take the form
$$f_i = \frac{\sigma_i^2}{\alpha^2 + \sigma_i^2}. \qquad (19)$$
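Equivalently, the filtered SVD solution can be computed directly; a minimal sketch, with the setup and the value of $\alpha$ as assumptions.

    # Sketch of the filtered SVD solution, Eqs. (18)-(19): the filter factors
    # f_i = sigma_i^2 / (alpha^2 + sigma_i^2) damp small singular values.
    import numpy as np

    t = np.linspace(0.01, 3.0, 200)
    T2 = np.logspace(-2, 1, 100)
    A = np.exp(-t[:, None] / T2[None, :])
    y = 0.6 * np.exp(-t / 0.05) + 0.4 * np.exp(-t / 0.5)

    alpha = 1e-2
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (alpha**2 + s**2)                  # Tikhonov filter factors
    F_tikh = Vt.T @ (f * (U.T @ y) / s)           # sum_i f_i (u_i^T y / sigma_i) v_i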

Regularization Parameter

The choice of regularization parameter $\alpha$ strongly influences the character of the solution.

Methods to Choose the Regularization Parameter

Numerous methods have been proposed to choose the ideal regularization parameter. We will consider the following:
- Discrepancy Principle
- Generalized Cross Validation
- L-Curve Method

Discrepancy Principle

The discrepancy principle matches the residual to a prespecified error bound $\epsilon$, choosing the optimal $\alpha$ such that
$$\|AF_\alpha - y\| = \epsilon \qquad (20)$$
Disadvantages:
- Requires a priori knowledge of the error
- Often oversmooths the solution
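One way to implement this is a bisection on the residual, which increases monotonically with $\alpha$; a minimal sketch, assuming the noise level (and hence $\epsilon$) is known.

    # Sketch of the discrepancy principle: choose alpha so that the residual
    # ||A F_alpha - y|| matches an assumed a priori noise norm eps.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.01, 3.0, 200)
    T2 = np.logspace(-2, 1, 100)
    A = np.exp(-t[:, None] / T2[None, :])
    sigma_n = 0.01                                    # assumed noise level
    y = 0.6*np.exp(-t/0.05) + 0.4*np.exp(-t/0.5) + sigma_n*rng.standard_normal(t.size)
    eps = sigma_n * np.sqrt(t.size)                   # expected noise norm

    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    def residual(alpha):
        f = s**2 / (alpha**2 + s**2)
        F = Vt.T @ (f * (U.T @ y) / s)
        return np.linalg.norm(A @ F - y)

    lo, hi = 1e-8, 1e2                                # bracket for alpha
    for _ in range(60):                               # bisection in log(alpha)
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if residual(mid) < eps else (lo, mid)
    alpha_dp = np.sqrt(lo * hi)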

Generalized Cross Validation

Generalized Cross Validation (GCV) extends the idea of leave-one-out cross-validation, minimizing the function
$$G(\alpha) = \frac{\|AF_\alpha - y\|^2}{(\tau(\alpha))^2} \qquad (21)$$
with
$$\tau(\alpha) = \mathrm{trace}\left(I - A(A^T A + \alpha^2 L^T L)^{-1} A^T\right) \qquad (22)$$
where $L$ is a differential operator or the identity matrix (Hansen).

Disadvantages:
- $G(\alpha)$ is difficult to minimize numerically due to its flatness
- Does not perform well with correlated errors
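With $L = I$, both the residual and the trace term can be evaluated cheaply from the SVD; a minimal sketch over a grid of $\alpha$ values, with the setup as an assumption.

    # Sketch of GCV with L = I, Eqs. (21)-(22), evaluated via the SVD.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.01, 3.0, 200)
    T2 = np.logspace(-2, 1, 100)
    A = np.exp(-t[:, None] / T2[None, :])
    y = 0.6*np.exp(-t/0.05) + 0.4*np.exp(-t/0.5) + 0.01*rng.standard_normal(t.size)

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    M = A.shape[0]

    def gcv(alpha):
        f = s**2 / (alpha**2 + s**2)                  # Tikhonov filter factors
        F = Vt.T @ (f * (U.T @ y) / s)
        resid2 = np.linalg.norm(A @ F - y)**2
        tau = M - np.sum(f)                           # trace(I - A (A^T A + a^2 I)^-1 A^T)
        return resid2 / tau**2

    alphas = np.logspace(-6, 1, 50)
    alpha_gcv = alphas[np.argmin([gcv(a) for a in alphas])]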

The L-Curve

Introduced by P. C. Hansen in 1993, the L-curve was originally used as an analytical tool. It plots the residual norm $\|AF_\alpha - y\|_2$ against the solution norm $\|F_\alpha\|_2$ as a function of $\alpha$.

L-Curve Method

The L-curve method was proposed by Hansen and O'Leary as a means of choosing the regularization parameter $\alpha$.

Idea: Find the corner of the L-curve.

Advantages over the other methods:
- Well-defined numerically
- Requires no prior knowledge of the errors
- Not heavily influenced by large correlated errors when considered on a log scale

L-Curve Method

When GCV performs well, the chosen $\alpha$ value is close to the value chosen by the L-curve method.

L-Curve Method (Hansen, O'Leary)

Let $(\rho, \eta)$ define a point on the L-curve in log scale.

FINDCORNER
1. Calculate several points $(\rho_i, \eta_i)$ on each side of the corner.
2. Fit a 3-dimensional cubic spline to the points $(\rho_i, \eta_i, \alpha_i)$ after first performing local smoothing.
3. Compute the point of maximum curvature and find the corresponding regularization parameter $\alpha_0$.
4. Solve the regularized problem and add the new point $(\rho(\alpha_0), \eta(\alpha_0))$ to the L-curve.
5. Repeat until convergence.
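A much simplified stand-in for FINDCORNER is sketched below: it samples the L-curve densely over an assumed grid of $\alpha$ values and takes the point of maximum discrete curvature in log-log scale, rather than the smoothed spline fit and iteration used by Hansen and O'Leary.

    # Simplified corner search: maximum discrete curvature of the L-curve
    # (log residual norm, log solution norm) over an assumed alpha grid.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.01, 3.0, 200)
    T2 = np.logspace(-2, 1, 100)
    A = np.exp(-t[:, None] / T2[None, :])
    y = 0.6*np.exp(-t/0.05) + 0.4*np.exp(-t/0.5) + 0.01*rng.standard_normal(t.size)

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    alphas = np.logspace(-6, 1, 200)
    rho, eta = [], []
    for alpha in alphas:
        f = s**2 / (alpha**2 + s**2)
        F = Vt.T @ (f * (U.T @ y) / s)
        rho.append(np.log(np.linalg.norm(A @ F - y)))   # log residual norm
        eta.append(np.log(np.linalg.norm(F)))           # log solution norm

    rho, eta = np.array(rho), np.array(eta)
    drho, deta = np.gradient(rho), np.gradient(eta)     # discrete derivatives
    d2rho, d2eta = np.gradient(drho), np.gradient(deta)
    kappa = (drho * d2eta - deta * d2rho) / (drho**2 + deta**2) ** 1.5
    interior = slice(2, -2)                              # ignore endpoint artifacts
    alpha_corner = alphas[interior][np.argmax(kappa[interior])]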

Model Extensions

Characterize the optimal choice of penalty term, using the L-curve method to optimize the regularization parameter.
- $L_2$ penalty: $\min_{F \in \mathbb{R}^K} \|AF - y\|_2^2 + \alpha \|F\|_2^2$
- $L_1$ penalty: $\min_{F \in \mathbb{R}^K} \|AF - y\|_2^2 + \alpha \|F\|_1$
- Elastic net: $\min_{F \in \mathbb{R}^K} \|AF - y\|_2^2 + \alpha_1 \|F\|_1 + \alpha_2 \|F\|_2^2$
- $L_p$ penalty: $\min_{F \in \mathbb{R}^K} \|AF - y\|_2^2 + \alpha \|F\|_p$
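The talk does not specify solvers for the non-quadratic penalties; as one possibility, the sketch below applies ISTA (iterative soft-thresholding) to the $L_1$-penalized problem, with the value of $\alpha$, the grids, and the iteration count as assumptions.

    # Minimal ISTA sketch for min ||A F - y||_2^2 + alpha ||F||_1.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.01, 3.0, 200)
    T2 = np.logspace(-2, 1, 100)
    A = np.exp(-t[:, None] / T2[None, :])
    y = 0.6*np.exp(-t/0.05) + 0.4*np.exp(-t/0.5) + 0.01*rng.standard_normal(t.size)

    alpha = 1e-3                                   # illustrative choice
    L = 2 * np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    F = np.zeros(T2.size)
    for _ in range(2000):
        grad = 2 * A.T @ (A @ F - y)               # gradient of the data-fit term
        z = F - grad / L
        F = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)   # soft threshold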

Conclusion

- An NMR relaxometry signal can be inverted via an inverse Laplace transform. The measurement of an additional, indirect dimension provides increased stability in the inversion.
- Regularization: Form a least-squares problem and add a Tikhonov regularization term for stability.
- L-Curve: The choice of regularization parameter $\alpha$ is critical to the quality of the inversion. Use a parametric plot of $\|AF_\alpha - y\|_2$ versus $\|F_\alpha\|_2$ to find the optimal $\alpha$.

References I

Primary Material:

Celik, H., Bouhrara, M., Reiter, D. A., Fishbein, K. W., & Spencer, R. G. (2013). Stabilization of the inverse Laplace transform of multiexponential decay through introduction of a second dimension. Journal of Magnetic Resonance, 236, 134-139.

Hansen, P. C., & O'Leary, D. P. (1993). The Use of the L-Curve in the Regularization of Discrete Ill-Posed Problems. SIAM Journal on Scientific Computing, 14(6), 1487-1503.

Secondary Material:

Fu, W. J. (1998). Penalized Regressions: The Bridge versus the Lasso. Journal of Computational and Graphical Statistics, 7(3), 397-416.

Varah, J. M. (1983). Pitfalls in the Numerical Solution of Linear Ill-Posed Problems. SIAM Journal on Scientific and Statistical Computing, 4(2), 164-176.

References II

Berman, P., Ofer, L., Parmet, Y., Saunders, M., & Wiesman, Z. (2013). Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods. Concepts in Magnetic Resonance, 42(3), 72-88.

Hansen, P. C. (1992). Analysis of Discrete Ill-Posed Problems by Means of the L-Curve. SIAM Review, 34(4), 561-580.

Tibshirani, R. (1996). Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society, 58(1), 267-288.

Venkataramanan, L., Song, Y., & Hürlimann, M. D. (2002). Solving Fredholm Integrals of the First Kind with Tensor Product Structure in 2 and 2.5 Dimensions. IEEE Transactions on Signal Processing, 50(5), 1017-1026.