A Riemannian Framework for Denoising Diffusion Tensor Images


Manasi Datar

Abstract. Diffusion Tensor Imaging (DTI) is a relatively new imaging modality that has been used extensively to study diffusion processes in the brain, with applications ranging from diagnosis to surgical planning. However, DTI acquisition systems are highly sensitive to noise, yielding reconstructed images with low SNR. There is therefore a need for image denoising algorithms designed specifically to regularize tensor structures. Most commonly used denoising algorithms operate in the image space, and their results are prone to loss of tensor properties. This report presents an adaptation of scalar image denoising algorithms based on H1 regularization and Total Variation (TV) regularization to the tensor space via a Riemannian framework. The mathematical framework translating these algorithms to the Riemannian space is presented, followed by results on DTI images of the brain.

1 Introduction and Related Work

DTI is a magnetic resonance imaging (MRI) technique that records the diffusion of water molecules in brain tissue to create tractography images that provide structural information about the underlying tissue. Each voxel of a DTI image holds a tensor composed from six or more MRI acquisitions, each produced by orienting the imaging gradient to sensitize the acquisition to diffusion along a different direction. More precisely, the tensor at each voxel characterizes the underlying local microstructure by quantifying the diffusion of water molecules. While the information contained in a DTI image enables detection of pathology at various scales, the acquisition process is extremely sensitive to noise, leading to low SNR.

There have been various efforts to denoise DTI images by extending traditional scalar image denoising algorithms to operate on tensor data. One such method is given by Christiansen et al.
[1], which considers an orthogonal decomposition D = LL^T of each tensor and extends the total variation regularization scheme to matrix-valued data. A novel class of anisotropic regularization methods that preserve the positive semi-definiteness of a matrix field without any additional constraints is presented in [2]. In [3], the authors consider tensors belonging to matrix Lie groups and regularize matrix-valued data by minimizing the principal chiral model (PCM) action. While this method preserves the Lie group structure, it regularizes the tensors isotropically and does not preserve discontinuities. Another approach uses parameterization schemes such as the Iwasawa coordinates [4] to propose a GL(n)-invariant regularization algorithm.

Most of the methods described above preserve tensor structure to some extent, but may not preserve the positive definiteness of the individual tensors. This report motivates the application of regularization algorithms that constrain each denoised tensor to lie on the manifold of positive-definite (PD) matrices. We introduce the H1 and Total Variation (TV) regularization schemes and derive equivalent solutions in the Riemannian space of PD matrices. Further, we describe an implementation using gradient descent and comment on the practical difficulties encountered in the process. Finally, we present results on DTI images of the human brain.

2 Background

This section provides the mathematical premise of image denoising algorithms that minimize the total variation in an image. We also introduce the exponential and logarithmic maps from Riemannian geometry, which help extend traditional denoising algorithms to the tensor domain.

2.1 Image Denoising

Mathematically, the general denoising problem can be stated as the constrained minimization

min_u ∫_Ω ‖∇u‖   subject to   ∫_Ω ‖u − f‖² ≤ σ²    (1)

where u is the resulting denoised image, f is the noisy input image, Ω is the image domain, and σ² is an estimate of the variance of the noise in the image. Eq. 1 can be transformed into an unconstrained minimization problem using a Lagrange multiplier λ:

min_u ∫_Ω ‖∇u‖ + λ ∫_Ω ‖u − f‖²    (2)

In this report, we discuss the H1 and TV regularization schemes as applied to tensor data. The reader is referred to the class notes by Younes [5] for a thorough description of the algorithms discussed in subsequent sections.

2.2 Concepts from Riemannian Geometry

By design, diffusion tensors have the structure of elements of PD(n) (n = 3 in our case), the space of symmetric positive-definite matrices, which can be considered to form a Riemannian manifold [6].
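This PD constraint is easy to violate with naive Euclidean arithmetic, which is one motivation for working on the manifold. A small numeric illustration (not from the report; the tensors `a`, `b` and the step size are arbitrary choices):

```python
import numpy as np

# Two symmetric positive-definite (PD) 3x3 tensors, as stored at DTI voxels.
a = np.diag([2.0, 1.0, 1.0])
b = np.diag([0.1, 1.0, 1.0])

def is_pd(m):
    # A symmetric matrix is PD iff all of its eigenvalues are positive.
    return bool(np.all(np.linalg.eigvalsh(m) > 0))

# A plain Euclidean gradient step u + eps*(f - u) can overshoot and leave
# the PD cone: here one eigenvalue of the result is negative.
overshoot = a + 1.2 * (b - a)

print(is_pd(a), is_pd(b), is_pd(overshoot))  # True True False
```

Riemannian exponential-map updates, introduced next, avoid this failure mode by construction.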
The manifold structure implies that an inner product is defined in the tangent space at each point and varies smoothly across the manifold. We thus have a Riemannian symmetric space in which each point is an element of PD(n).

As described in [6], the locally shortest path between two points x and y on the Riemannian manifold is the geodesic. Its length equals that of the tangent vector at x pointing toward y, which can be computed with the logarithmic map log_x(y):

log_x(y) = x^{1/2} log(x^{−1/2} y x^{−1/2}) x^{1/2}    (3)

Conversely, the exponential map takes a starting point x and a tangent direction W and returns the target point y along that direction:

exp_x(W) = x^{1/2} exp(x^{−1/2} W x^{−1/2}) x^{1/2}    (4)

Thus, in the Riemannian framework, we can find the distance between two points using Eq. 3; conversely, given a point and a search direction, we can find the target point in that direction using Eq. 4.

3 Methods and Implementation

This section provides an overview of the theory and implementation of both denoising methods in tensor space. For further details, the reader is referred to the class notes [5] and to the paper by Pennec et al. [7].

3.1 H1 Regularization

This algorithm minimizes the L2 norm of the image gradient. The energy to be minimized is

E = ∫_Ω ‖∇u‖² + λ ‖u − f‖²    (5)

Discretizing Eq. 5 leads to a simple gradient descent (GD) scheme approximating the solution. Assuming zero gradient at the image boundaries, and given a step size ε, the k-th iteration of this scheme at each pixel location is

u^k ← u^{k−1} + ε (Δu^{k−1} − λ (u^{k−1} − f))    (6)

Since the elements of u belong to PD(n) and thus lie on a Riemannian manifold, the fidelity term (u^{k−1} − f), which describes the distance of the current estimate from the input image, translates to the tangent vector −log_{u^{k−1}}(f) at u^{k−1}.
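Eqs. 3 and 4 translate directly into code. A minimal sketch via eigendecompositions of symmetric matrices (the helper names are my own; this is not the report's implementation):

```python
import numpy as np

def _sym_apply(m, f):
    # Apply a scalar function f to a symmetric matrix through its eigenvalues.
    vals, vecs = np.linalg.eigh(m)
    return (vecs * f(vals)) @ vecs.T

def log_map(x, y):
    # Eq. 3: log_x(y) = x^{1/2} log(x^{-1/2} y x^{-1/2}) x^{1/2}
    s = _sym_apply(x, np.sqrt)      # x^{1/2}
    si = np.linalg.inv(s)           # x^{-1/2}
    return s @ _sym_apply(si @ y @ si, np.log) @ s

def exp_map(x, w):
    # Eq. 4: exp_x(W) = x^{1/2} exp(x^{-1/2} W x^{-1/2}) x^{1/2}
    s = _sym_apply(x, np.sqrt)
    si = np.linalg.inv(s)
    return s @ _sym_apply(si @ w @ si, np.exp) @ s

# Round trip: following the tangent vector log_x(y) from x recovers y.
x = np.diag([2.0, 1.0, 0.5])
y = np.array([[1.0, 0.2, 0.0], [0.2, 1.0, 0.0], [0.0, 0.0, 1.5]])
assert np.allclose(exp_map(x, log_map(x, y)), y)
```

Because exp_map always returns a conjugate of a matrix exponential, its output is guaranteed to stay in PD(n), which is exactly the property the updates below rely on.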

For each location (x, y), the Laplacian in Eq. 6 is the sum of differences to the four Manhattan neighbors,

Δu^{k−1} = (u^{k−1}_top − u^{k−1}) + (u^{k−1}_bottom − u^{k−1}) + (u^{k−1}_left − u^{k−1}) + (u^{k−1}_right − u^{k−1}),

which translates to

Δu^{k−1} → Σ_N log_{u^{k−1}}(u^{k−1}_N)

where the sum runs over the set N of Manhattan neighbors of (x, y). Note that all four tangent vectors computed above lie in the tangent space at location (x, y) and can therefore be added as above to obtain a valid tangent vector.

From Eq. 6, we can see that the Laplacian and the weighted fidelity term together give the direction for gradient descent, along which we move one step of size ε to obtain the update for the current iteration. In the Riemannian framework, this translates to

u^k ← exp_{u^{k−1}}(ε W)

where W is the direction for gradient descent. Thus, the H1 regularization update for tensor-valued data in the Riemannian framework is

u^k ← exp_{u^{k−1}}( ε ( Σ_N log_{u^{k−1}}(u^{k−1}_N) + λ log_{u^{k−1}}(f) ) )    (7)

This update was implemented successfully, and the results are reported in Sec. 4.

3.2 Total Variation (TV) Regularization

H1 regularization is an L2 minimization algorithm and results in significant smoothing of edges in the resulting image. TV regularization avoids this problem by minimizing the total variation of the image instead, making it an L1 minimization scheme. The energy for TV regularization is

E = ∫_Ω ‖∇u‖ + λ ‖u − f‖²    (8)

Gradient descent on Eq. 8 results in the following update at the k-th iteration:

u^k ← u^{k−1} + δt ( div( ∇u^{k−1} / ‖∇u^{k−1}‖ ) − 2λ (u^{k−1} − f) )    (9)

Applying an analysis similar to that for H1 regularization, the fidelity term in Eq. 9 again translates to −log_{u^{k−1}}(f) in the Riemannian framework.
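Before turning to the tensor case, the scalar update of Eq. 9 itself is straightforward. A minimal sketch (the test image, the parameters, and the small stabilizing constant under the square root are my own choices):

```python
import numpy as np

def tv_step(u, f, dt, lam, eps=1e-8):
    # One iteration of Eq. 9 for a scalar image: gradient descent on
    # E = |grad u| + lam * |u - f|^2, with eps keeping the norm nonzero.
    ux, uy = np.gradient(u)
    norm = np.sqrt(ux ** 2 + uy ** 2 + eps)
    div = np.gradient(ux / norm, axis=0) + np.gradient(uy / norm, axis=1)
    return u + dt * (div - 2.0 * lam * (u - f))

# A step edge with a small deterministic perturbation.
f = np.repeat([[0.0, 0.0, 1.0, 1.0]], 4, axis=0) \
    + 0.05 * np.cos(np.arange(16).reshape(4, 4))
u = tv_step(f, f, dt=0.1, lam=0.1)
print(u.shape)  # (4, 4)
```

For tensor-valued images the fidelity part carries over, but the divergence does not, as discussed next.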

The computation of the regularization term in the Riemannian space, however, presents a technical problem. We can compute the normalized gradient using logarithmic maps to represent central differences, which yields two spatial components (u_x, u_y). The divergence is then

div(∇u) = ∂u_x/∂x + ∂u_y/∂y

For any given voxel, the tangent spaces in which the two components of the derivative are defined are not the same. Hence, the addition above is invalid and the divergence cannot be computed this way. One possible solution is to use parallel transport [8] to carry the tangent vectors of the neighbors into the tangent space of the current voxel, and then perform the addition to get a valid divergence entry. Another way around the problem is to assume that the tangent spaces of neighboring voxels are not very different, and to approximate the divergence term as shown in [5]:

div( ∇I / ‖∇I‖ ) = ( I_xx I_y² − 2 I_x I_y I_xy + I_yy I_x² ) / ( I_x² + I_y² )^{3/2}

An implementation was attempted with this approximation, but none of the configurations tried gave valid results. Hence, results of the TV regularization algorithm in the Riemannian framework are not reported.

4 Results and Discussion

Fig. 1. Input DTI images: (a) tensorimg1 and (b) tensorimg2.

This section describes the results of the H1 regularization update of Eq. 7 on the DTI brain images provided by Gopal. Each of the two input images, shown in Fig. 1, is a 128 × 128 slice acquired with 9 imaging directions. We consider the measurement at each voxel to be an element of PD(3) for all of the results below. Fractional Anisotropy (FA) maps [9] were used to visualize the tensor data and to provide a qualitative analysis of the denoising results. Fig. 2 shows the FA maps for both input images after denoising with various settings.

Fig. 2. Results of H1 regularization on both input images over 10 and 20 iterations for two different settings of the step size ε and fidelity weight λ. Top row: (ε = 0.01, λ = 0.05); bottom row: (ε = 0.1, λ = 0.5).

As with the scalar implementation, the tensor H1 algorithm performs satisfactory denoising, but also blurs edges. The algorithm was also found to be sensitive to the step size ε and the fidelity weight λ, though small changes in these values did not alter the results drastically.

5 Discussion

This report motivates algorithms for denoising DTI images using H1 regularization and TV regularization in the Riemannian space. An advantage of such a scheme is that it preserves the positive-definite structure of the tensors at all voxels, a desirable characteristic that facilitates further processing of DTI images. While H1 regularization adapted to the Riemannian space was implemented successfully, the implementation of TV regularization is left as future work owing to the technical difficulties inherent in its computation. As with scalar images, H1 regularization provides good denoising, but also blurs edges, while TV regularization was not pursued further because of the difficulties described above.
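For concreteness, the per-voxel H1 update of Eq. 7 and the FA measure used for visualization can be sketched end-to-end on a tiny synthetic tensor field (a reimplementation with my own helper names, boundary handling, and parameters, not the report's code):

```python
import numpy as np

def _sym_apply(m, f):
    vals, vecs = np.linalg.eigh(m)
    return (vecs * f(vals)) @ vecs.T

def log_map(x, y):
    s = _sym_apply(x, np.sqrt); si = np.linalg.inv(s)
    return s @ _sym_apply(si @ y @ si, np.log) @ s

def exp_map(x, w):
    s = _sym_apply(x, np.sqrt); si = np.linalg.inv(s)
    return s @ _sym_apply(si @ w @ si, np.exp) @ s

def h1_step(u, f, eps, lam):
    # One H1 iteration (Eq. 7) on an (H, W, 3, 3) field of PD tensors.
    # Clamping neighbor indices to the image implements the zero-gradient
    # boundary condition, since log_x(x) = 0 contributes nothing there.
    H, W = u.shape[:2]
    out = np.empty_like(u)
    for i in range(H):
        for j in range(W):
            w = lam * log_map(u[i, j], f[i, j])          # fidelity term
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni = min(max(i + di, 0), H - 1)
                nj = min(max(j + dj, 0), W - 1)
                w = w + log_map(u[i, j], u[ni, nj])      # Laplacian term
            out[i, j] = exp_map(u[i, j], eps * w)        # stays in PD(3)
    return out

def fa(tensor):
    # Fractional anisotropy from the eigenvalues of a diffusion tensor.
    lam = np.linalg.eigvalsh(tensor)
    dev = lam - lam.mean()
    return np.sqrt(1.5) * np.linalg.norm(dev) / np.linalg.norm(lam)

# Tiny synthetic field: an anisotropic tensor with one perturbed voxel.
base = np.diag([2.0, 0.5, 0.5])
f = np.tile(base, (4, 4, 1, 1))
f[2, 2] = np.diag([0.8, 0.7, 0.6])
u = h1_step(f, f, eps=0.05, lam=0.5)
print(round(fa(base), 3))  # 0.707
```

Every output tensor remains symmetric positive-definite regardless of the step size, which is the property the Euclidean update of Eq. 6 cannot guarantee.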

References

1. Christiansen, O., Lee, T., Lie, J., Sinha, U., Chan, T.: Total variation regularization of matrix-valued images. International Journal of Biomedical Imaging (2007)
2. Weickert, J., Brox, T.: Diffusion and regularization of vector- and matrix-valued images (2002)
3. Gur, Y., Sochen, N.: Regularizing flows over Lie groups. Journal of Mathematical Imaging and Vision 33(2) (2009) 195-208
4. Gur, Y., Sochen, N.: Fast invariant Riemannian DT-MRI regularization. In: ICCV. (2007) 1-7
5. Younes, L.: Mathematical image analysis lecture notes. Johns Hopkins University
6. Fletcher, P.T., Joshi, S.C.: Riemannian geometry for the statistical analysis of diffusion tensor data. Signal Processing 87(2) (2007) 250-262
7. Pennec, X., Fillard, P., Ayache, N.: A Riemannian framework for tensor computing. International Journal of Computer Vision 66(1) (2006) 41-66
8. Wikipedia: Parallel transport. Wikipedia, The Free Encyclopedia (2011)
9. Wikipedia: Diffusion MRI. Wikipedia, The Free Encyclopedia (2011)