Regularization of the Inverse Laplace Transform with Applications in Nuclear Magnetic Resonance Relaxometry Candidacy Exam


1 Regularization of the Inverse Laplace Transform with Applications in Nuclear Magnetic Resonance Relaxometry
Candidacy Exam, Applied Mathematics, Applied Statistics, & Scientific Computation, University of Maryland, College Park
Advisors: John J. Benedetto (Mathematics), Alfredo Nava-Tudela (IPST)
Mentor: Richard Spencer (Laboratory of Clinical Investigations, NIA)

2 Outline
Nuclear Magnetic Resonance (NMR) Relaxometry: Background; Objective; Motivation from Celik; 1D Discrete Model; 2D Model Extension
Ordinary Least Squares
Regularization: Regularized Least Squares; Methods to Choose the Regularization Parameter
Hansen's L-Curve: L-Curve as an Analytical Tool; FINDCORNER Algorithm
Problem Extensions

3 Nuclear Magnetic Resonance (NMR) Relaxometry
Figure: Clockwise from top left: a. Local magnetization $M$ emerges from alignment with magnetic field $B_0$. b. With an RF pulse, $M$ aligns with the magnetic field $B_1$ in the transverse plane. c. After the pulse, $M$ begins to realign with $B_0$. d. Components $M_{lon}(t)$ and $M_{tr}(t)$, characterized by relaxation times $T_1$ and $T_2$, respectively, describe $M(t)$ at time $t$. Images courtesy of Alfredo Nava-Tudela.

4 Objective
A 1-dimensional continuous NMR relaxometry signal takes the form
$y(t) = \int_0^{\infty} f(T_2)\, e^{-t/T_2}\, dT_2 + n(t)$ (1)
where $T_2$ is the transverse relaxation time, $f(T_2)$ is the amplitude of the associated component, and $n(t)$ is additive noise.
Objective: Recover the distribution of amplitudes $f(T_2)$ present in the signal via an inverse Laplace transform (ILT).

5 Toy Example
Consider a signal
$y(t) = 0.6\, e^{-t/T_{2,1}} + 0.4\, e^{-t/T_{2,2}} + n(t)$ (2)
where the exact distribution $f(T_2)$ is
$f(T_2) = 0.6\, \delta_{T_{2,1}}(T_2) + 0.4\, \delta_{T_{2,2}}(T_2)$ (3)
The recovery of $f(T_2)$ is unstable due to the sensitivity of the inversion to noise.
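Below, a minimal numerical sketch of this instability. All specific values (including the second amplitude 0.4 and the decay times) are illustrative assumptions rather than values taken from the slides; a naive pseudoinverse solve of the discretized kernel fails badly even under tiny noise:

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.01, 10, 200)             # acquisition times
    T2 = np.logspace(-1, 1, 50)                # candidate decay times
    A = np.exp(-t[:, None] / T2[None, :])      # discretized Laplace kernel

    F_true = np.zeros(50)
    F_true[np.argmin(np.abs(T2 - 1.0))] = 0.6  # peak near T_{2,1} = 1
    F_true[np.argmin(np.abs(T2 - 2.0))] = 0.4  # peak near T_{2,2} = 2

    y = A @ F_true + 1e-4 * rng.standard_normal(t.size)
    F_naive = np.linalg.pinv(A) @ y            # unregularized inversion
    print(np.linalg.norm(F_naive - F_true))    # large error despite tiny noise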

6 Motivation
Celik et al. [1] demonstrated stabilization of the ILT through the introduction of a second, indirect dimension.
Figure: The experimental process applied by Celik [1]. The 2D ILT path illustrated by the solid arrows produced better resolution of peaks in a sparse signal than the 1D ILT path (dashed arrow).

7 Motivation
Figure: Inversions from 12 different noise realizations of the experiment of Celik [1], demonstrating the stability of the 2D ILT. Top: 1D ILT. Bottom: 2D ILT projection.

8 Discrete Model
A 1D NMR signal takes the discrete form
$z(t_i) = \sum_{j=1}^{K} F(T_{2,j})\, e^{-t_i/T_{2,j}}$ (4)

9 Discrete Model
A 1D NMR signal takes the discrete form (4). In matrix form:
$z = AF$ (5)
where
$[A]_{ij} = e^{-t_i/T_{2,j}}$ (6)
$z_i = z(t_i)$ (7)
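As a quick sketch (grid choices illustrative), the kernel matrix $A$ of (6) can be assembled directly from the time and $T_2$ grids; its enormous condition number is the source of the instability:

    import numpy as np

    t = np.linspace(0.01, 10, 200)           # measurement times t_i
    T2 = np.logspace(-1, 1, 50)              # decay-time grid T_{2,j}
    A = np.exp(-t[:, None] / T2[None, :])    # [A]_ij = exp(-t_i / T_{2,j})
    print(np.linalg.cond(A))                 # enormous (~1e18 or larger): ill-posed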

10 2D NMR Model
The 2D continuous NMR relaxometry signal takes the form
$z(\bar{t}, t) = \int_0^{\infty}\!\int_0^{\infty} F(T_1, T_2)\, e^{-\bar{t}/T_1}\, e^{-t/T_2}\, dT_1\, dT_2$ (8)
In discrete form,
$z(\bar{t}_i, t_j) = \sum_{k=1}^{K_1} \sum_{m=1}^{K_2} F(T_{1,k}, T_{2,m})\, e^{-\bar{t}_i/T_{1,k}}\, e^{-t_j/T_{2,m}}$ (9)

11 2D NMR Model
Define $A_1$, $A_2$, $\bar{F}$, and $Z$ such that
$[A_1]_{ik} = e^{-\bar{t}_i/T_{1,k}}$, $[A_2]_{jm} = e^{-t_j/T_{2,m}}$, $[\bar{F}]_{km} = F(T_{1,k}, T_{2,m})$, $[Z]_{ij} = z(\bar{t}_i, t_j)$ (10)

12 2D NMR Model
With $A_1$, $A_2$, $\bar{F}$, and $Z$ as in (10),
$Z = A_1 \bar{F} A_2^T$ (11)
with dimensions $(M_1 \times M_2) = (M_1 \times K_1)(K_1 \times K_2)(K_2 \times M_2)$.

13 2D NMR Model
Applying the vec operator to (11) gives
$\mathrm{vec}(Z) = (A_2 \otimes A_1)\, \mathrm{vec}(\bar{F})$ (12)
where $\otimes$ denotes the Kronecker product.
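A numerical check of the vec-Kronecker identity (12), with small random matrices standing in for the NMR quantities (note that vec stacks columns, which is Fortran order in NumPy):

    import numpy as np

    rng = np.random.default_rng(1)
    M1, M2, K1, K2 = 6, 5, 4, 3
    A1 = rng.standard_normal((M1, K1))
    A2 = rng.standard_normal((M2, K2))
    Fbar = rng.standard_normal((K1, K2))

    Z = A1 @ Fbar @ A2.T                       # equation (11)
    lhs = Z.flatten(order='F')                 # vec(Z), column stacking
    rhs = np.kron(A2, A1) @ Fbar.flatten(order='F')
    print(np.allclose(lhs, rhs))               # True: vec(Z) = (A2 kron A1) vec(F)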

14 Ordinary Least Squares
With 1D ordinary least squares (OLS), we solve
$\min_{F \in \mathbb{R}^K} \|AF - y\|_2^2$ (13)

15 Ordinary Least Squares
Define the singular value decomposition of $A$ as
$A = \sum_{i=1}^{N} \sigma_i u_i v_i^T$ (14)
where the $\sigma_i$ are the singular values and $u_i$, $v_i$ are the left and right singular vectors, respectively. Then the OLS solution is
$F_{OLS} = \sum_{i=1}^{N} \frac{u_i^T y}{\sigma_i}\, v_i$. (15)
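A sketch of the OLS solution (15) computed through the SVD, on the same illustrative kernel as before; the division by tiny singular values is what blows the noise up:

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0.01, 10, 200)
    T2 = np.logspace(-1, 1, 50)
    A = np.exp(-t[:, None] / T2[None, :])

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    y = A @ np.ones(50) + 1e-6 * rng.standard_normal(200)

    coeffs = (U.T @ y) / s                     # u_i^T y / sigma_i
    F_ols = Vt.T @ coeffs                      # sum over singular triplets
    print(s[-1], np.abs(coeffs).max())         # tiny sigma -> huge coefficient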

16 Regularization
To increase stability in the inversion, we add a penalty term tuned by the parameter $\alpha$. The most common form is Tikhonov regularization, also known as ridge regression:
$\min_{F \in \mathbb{R}^K} \|AF - y\|_2^2 + \alpha^2 \|F\|_2^2$ (16)

17 Regularization
Other common penalty forms include $L_p$ regularization, $p \geq 1$:
$\min_{F \in \mathbb{R}^K} \|AF - y\|_2^2 + \alpha \|F\|_p^p$ (16)
$L_1$ regularization is known as the LASSO. $L_p$ regularization for $1 < p < 2$ is called bridge regression.

18 Regularization
Other common penalty forms include a differential operator $L$:
$\min_{F \in \mathbb{R}^K} \|AF - y\|_2^2 + \alpha \|LF\|_2^2$ (16)
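For illustration, a common choice of $L$ (not specified on the slides) is the discrete second-difference operator, which penalizes roughness of the recovered distribution. A minimal sketch of the corresponding normal-equations solve, with illustrative grid and $\alpha$ values:

    import numpy as np

    K = 50
    # (K-2) x K second-difference operator with stencil [1, -2, 1] per row
    L = np.eye(K - 2, K, 0) - 2 * np.eye(K - 2, K, 1) + np.eye(K - 2, K, 2)

    t = np.linspace(0.01, 10, 200)
    T2 = np.logspace(-1, 1, K)
    A = np.exp(-t[:, None] / T2[None, :])
    y = A @ np.ones(K)                          # noiseless demo signal
    alpha = 1e-3
    # normal equations of min ||A F - y||^2 + alpha^2 ||L F||^2 (still ill-conditioned)
    F = np.linalg.solve(A.T @ A + alpha**2 * (L.T @ L), A.T @ y)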

19 Regularization
We will discuss Tikhonov regularization:
$\min_{F \in \mathbb{R}^K} \|AF - y\|_2^2 + \alpha^2 \|F\|_2^2$ (16)

20 Regularized Least Squares
The regularized problem can be expressed as a standard least squares problem
$\min_{F \in \mathbb{R}^K} \|\tilde{A}F - \tilde{y}\|_2^2$ (17)
where
$\tilde{A} = \begin{bmatrix} A \\ \alpha I_K \end{bmatrix}, \qquad \tilde{y} = \begin{bmatrix} y \\ 0_K \end{bmatrix}$
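A minimal sketch of solving the augmented problem (17) with NumPy's least-squares routine (grid and $\alpha$ values illustrative):

    import numpy as np

    t = np.linspace(0.01, 10, 200)
    T2 = np.logspace(-1, 1, 50)
    A = np.exp(-t[:, None] / T2[None, :])
    y = A @ np.ones(50)
    alpha = 1e-2

    A_tilde = np.vstack([A, alpha * np.eye(50)])   # [A; alpha I]
    y_tilde = np.concatenate([y, np.zeros(50)])    # [y; 0]
    F_tikh, *_ = np.linalg.lstsq(A_tilde, y_tilde, rcond=None)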

21 Regularized Least Squares
For an appropriate choice of $\alpha$,
$F_{Tikh} = \sum_{i=1}^{N} f_i\, \frac{u_i^T y}{\sigma_i}\, v_i$ (18)
where, in the case of Tikhonov regularization, the filter factors $f_i$ take the form
$f_i = \frac{\sigma_i^2}{\alpha^2 + \sigma_i^2}$. (19)
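A sketch checking that the filter-factor formula (18)-(19) reproduces the augmented-system solution from the previous slide:

    import numpy as np

    t = np.linspace(0.01, 10, 200)
    T2 = np.logspace(-1, 1, 50)
    A = np.exp(-t[:, None] / T2[None, :])
    y = A @ np.ones(50)
    alpha = 1e-2

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (alpha**2 + s**2)                  # filter factors, equation (19)
    F_filt = Vt.T @ (f * (U.T @ y) / s)           # equation (18)

    A_tilde = np.vstack([A, alpha * np.eye(50)])  # augmented system of (17)
    y_tilde = np.concatenate([y, np.zeros(50)])
    F_aug, *_ = np.linalg.lstsq(A_tilde, y_tilde, rcond=None)
    print(np.allclose(F_filt, F_aug, atol=1e-8))  # the two solutions agree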

22 Regularization Parameter The choice of regularization parameter α strongly influences the character of the solution.

23 Methods to Choose the Regularization Parameter
Numerous methods have been proposed to choose the ideal regularization parameter. We will consider the following:
Discrepancy Principle
Generalized Cross Validation
L-Curve Method

24 Discrepancy Principle
The discrepancy principle chooses $\alpha$ from a prespecified error bound $\epsilon$, requiring that
$\|AF_\alpha - y\| = \epsilon$ (20)
for the optimal $\alpha$.
Disadvantages:
Requires a priori knowledge of the error
Often oversmooths the solution
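A sketch of enforcing (20) numerically: the residual norm increases monotonically in $\alpha$, so a simple bisection on $\log \alpha$ (bracket and tolerances illustrative; the noise level is assumed known, as the method requires) finds the matching parameter:

    import numpy as np

    t = np.linspace(0.01, 10, 200)
    T2 = np.logspace(-1, 1, 50)
    A = np.exp(-t[:, None] / T2[None, :])
    rng = np.random.default_rng(3)
    noise = 1e-3 * rng.standard_normal(200)
    y = A @ np.ones(50) + noise
    eps = np.linalg.norm(noise)                   # assumed-known error level

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ y
    y_perp2 = np.linalg.norm(y)**2 - np.linalg.norm(beta)**2  # out-of-range part

    def residual(alpha):                          # ||A F_alpha - y||, increasing in alpha
        r2 = np.sum(((alpha**2 / (s**2 + alpha**2)) * beta)**2) + y_perp2
        return np.sqrt(r2)

    lo, hi = 1e-10, 1e2                           # bracket for bisection on log(alpha)
    for _ in range(100):
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if residual(mid) < eps else (lo, mid)
    print(np.sqrt(lo * hi))                       # alpha with ||A F_alpha - y|| ~ eps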

25 Generalized Cross Validation
Generalized Cross Validation (GCV) extends the idea of leave-one-out cross-validation, minimizing the function
$G(\alpha) = \frac{\|AF_\alpha - y\|^2}{(\tau(\alpha))^2}$ (21)
with
$\tau(\alpha) = \mathrm{trace}\left(I - A(A^T A + \alpha^2 L^T L)^{-1} A^T\right)$ (22)
where $L$ is a differential operator or the identity matrix (Hansen).
Disadvantages:
$G(\alpha)$ is difficult to minimize numerically due to its flatness
Does not perform well with correlated errors
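A sketch of evaluating $G(\alpha)$ on a grid for the $L = I$ case, where the trace in (22) reduces to a sum of filter factors (setup illustrative):

    import numpy as np

    t = np.linspace(0.01, 10, 200)
    T2 = np.logspace(-1, 1, 50)
    A = np.exp(-t[:, None] / T2[None, :])
    rng = np.random.default_rng(4)
    y = A @ np.ones(50) + 1e-3 * rng.standard_normal(200)

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ y
    y_perp2 = np.linalg.norm(y)**2 - np.linalg.norm(beta)**2  # out-of-range part

    def gcv(alpha):
        f = s**2 / (s**2 + alpha**2)              # Tikhonov filter factors
        resid2 = np.sum(((1 - f) * beta)**2) + y_perp2
        tau = A.shape[0] - np.sum(f)              # trace(I - hat matrix), L = I
        return resid2 / tau**2

    alphas = np.logspace(-8, 1, 100)
    print(alphas[np.argmin([gcv(a) for a in alphas])])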

26 The L-Curve
Introduced by P. C. Hansen in 1992, the L-curve was originally used as an analytical tool. It plots the residual $\|AF_\alpha - y\|_2$ against the size of the solution $\|F_\alpha\|_2$ as a function of $\alpha$.

27 L-Curve Method
The L-curve method was proposed by Hansen and O'Leary as a means of choosing the regularization parameter $\alpha$.
Idea: Find the corner of the L-curve.
Advantages over the other methods:
Well-defined numerically
Requires no prior knowledge of the errors
Not heavily influenced by large correlated errors when considered on a log scale
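A sketch computing L-curve points (log residual norm, log solution norm) over a grid of $\alpha$ values via the SVD; these are the $(\rho_i, \eta_i)$ pairs that a corner search operates on:

    import numpy as np

    t = np.linspace(0.01, 10, 200)
    T2 = np.logspace(-1, 1, 50)
    A = np.exp(-t[:, None] / T2[None, :])
    rng = np.random.default_rng(5)
    y = A @ np.ones(50) + 1e-3 * rng.standard_normal(200)

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ y
    y_perp2 = np.linalg.norm(y)**2 - np.linalg.norm(beta)**2

    alphas = np.logspace(-8, 1, 60)
    rho, eta = [], []                      # log residual norm, log solution norm
    for alpha in alphas:
        c = (s / (s**2 + alpha**2)) * beta          # SVD coefficients of F_alpha
        eta.append(np.log(np.linalg.norm(c)))
        resid2 = np.sum(((alpha**2 / (s**2 + alpha**2)) * beta)**2) + y_perp2
        rho.append(0.5 * np.log(resid2))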

28 L-Curve Method When GCV performs well, the chosen α value is close to the value chosen by the L-curve method.

29 L-Curve Method (Hansen, O'Leary)
Let (ρ, η) define a point on the L-curve in log scale.
FINDCORNER
1. Calculate several points (ρ_i, η_i) on each side of the corner.

30 L-Curve Method (Hansen, O'Leary)
FINDCORNER
2. Fit a 3-dimensional cubic spline to the points (ρ_i, η_i, α_i) after first performing local smoothing.

31 L-Curve Method (Hansen, O'Leary)
FINDCORNER
3. Compute the point of maximum curvature and find the corresponding regularization parameter α_0.

32 L-Curve Method (Hansen, O'Leary)
FINDCORNER
4. Solve the regularized problem and add the new point (ρ(α_0), η(α_0)) to the L-curve.

33 L-Curve Method (Hansen, O'Leary)
FINDCORNER
5. Repeat until convergence.

34-37 Model Extensions
Characterize the optimal choice of penalty term, using the L-curve method to optimize the regularization parameter.
$L_2$ penalty: $\min_{F \in \mathbb{R}^K} \|AF - y\|_2^2 + \alpha \|F\|_2^2$
$L_1$ penalty: $\min_{F \in \mathbb{R}^K} \|AF - y\|_2^2 + \alpha \|F\|_1$
Elastic Net: $\min_{F \in \mathbb{R}^K} \|AF - y\|_2^2 + \alpha_1 \|F\|_1 + \alpha_2 \|F\|_2^2$
$L_p$ penalty: $\min_{F \in \mathbb{R}^K} \|AF - y\|_2^2 + \alpha \|F\|_p^p$
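As a sketch of how the $L_1$-penalized problem might be solved (iterative soft-thresholding, a standard approach but not one specified on the slides; the step size and iteration count are illustrative):

    import numpy as np

    def ista(A, y, alpha, n_iter=5000):
        """Minimize ||A F - y||_2^2 + alpha * ||F||_1 by soft-thresholding."""
        step = 0.5 / np.linalg.norm(A, 2)**2    # 1/L for the gradient 2 A^T (A F - y)
        F = np.zeros(A.shape[1])
        for _ in range(n_iter):
            G = F - 2 * step * (A.T @ (A @ F - y))                      # gradient step
            F = np.sign(G) * np.maximum(np.abs(G) - step * alpha, 0.0)  # prox step
        return F

For the severely ill-conditioned Laplace kernel this converges slowly; an accelerated variant (FISTA) or an off-the-shelf solver would typically be preferred.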

38 Conclusion
An NMR relaxometry signal can be inverted via an inverse Laplace transform. The measurement of an additional, indirect dimension provides increased stability in the inversion.
Regularization: Form a least-squares problem and add a Tikhonov regularization term for stability.
L-Curve: The choice of regularization parameter $\alpha$ is critical to the quality of the inversion. Use the parametric plot of $\|AF_\alpha - y\|_2$ versus $\|F_\alpha\|_2$ to find the optimal $\alpha$.

39 References I
Primary Material:
Celik, H., Bouhrara, M., Reiter, D. A., Fishbein, K. W., & Spencer, R. G. (2013). Stabilization of the inverse Laplace transform of multiexponential decay through introduction of a second dimension. Journal of Magnetic Resonance, 236.
Hansen, P. C., & O'Leary, D. P. (1993). The Use of the L-Curve in the Regularization of Discrete Ill-Posed Problems. SIAM Journal on Scientific Computing, 14(6).
Secondary Material:
Fu, W. J. (1998). Penalized Regressions: The Bridge versus the Lasso. Journal of Computational and Graphical Statistics, 7(3).
Varah, J. M. (1983). Pitfalls in the Numerical Solution of Linear Ill-Posed Problems. SIAM Journal on Scientific and Statistical Computing, 4(2).

40 References II
Berman, P., Ofer, L., Parmet, Y., Saunders, M., & Wiesman, Z. (2013). Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods. Concepts in Magnetic Resonance, 42(3).
Hansen, P. C. (1992). Analysis of Discrete Ill-Posed Problems by Means of the L-Curve. SIAM Review, 34(4).
Tibshirani, R. (1996). Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58(1).
Venkataramanan, L., Song, Y., & Hürlimann, M. D. (2002). Solving Fredholm Integrals of the First Kind with Tensor Product Structure in 2 and 2.5 Dimensions. IEEE Transactions on Signal Processing, 50(5).
