Joint Bayesian Compressed Sensing with Prior Estimate


Joint Bayesian Compressed Sensing with Prior Estimate
Berkin Bilgic¹, Tobias Kober²,³, Gunnar Krueger²,³, Elfar Adalsteinsson¹,⁴
¹ Electrical Engineering and Computer Science, MIT; ² Laboratory for Functional and Metabolic Imaging, EPFL; ³ Siemens Schweiz AG; ⁴ Harvard-MIT Division of Health Sciences and Technology

ISMRM 20th ANNUAL MEETING & EXHIBITION: Adapting MR in a Changing World. Declaration of Relevant Financial Interests or Relationships. Speaker Name: Berkin Bilgic. I have no relevant financial interest or relationship to disclose with regard to the subject matter of this presentation.

Multi-contrast Acquisition
Clinical MRI: acquiring multiple contrast preparations (proton density, T2-weighted, T1-weighted; SRI24 atlas [1]) increases the diagnostic power, but also the total scan time.
Joint reconstruction from undersampled acquisitions substantially improves reconstruction quality [2].
Suppose that one of the contrasts can be acquired much faster than the others (e.g. AutoAlign). If we fully sample the fast contrast, can we use it to help reconstruct the others?
[1] Rohlfing et al. Hum Brain Map, 2010
[2] Bilgic et al. MRM, 2011

Observation model
F x = y
F: partial Fourier transform, x: image to be estimated, y: undersampled k-space data

Sparse representation: applying the gradient operator V = 1 − e^(2πjω/n) in k-space,
V F x = V y, i.e. F δ = y
δ: image gradient to be estimated, y: modified k-space data (the original data multiplied by V)
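To make the gradient-domain model concrete, here is a minimal 1-D NumPy sketch (my own illustration, not part of the talk). It assumes V is the discrete-gradient ramp 1 − e^(2πjω/n), matching the expression above, and checks that multiplying fully-sampled k-space data by V gives the Fourier transform of a finite-difference image gradient.

```python
import numpy as np

n = 256
x = np.zeros(n)
x[96:160] = 1.0                               # piecewise-constant test image (sparse gradient)

y_full = np.fft.fft(x)                        # fully-sampled "k-space" of the image

# Gradient operator applied in k-space: V(omega) = 1 - exp(2*pi*j*omega/n)
omega = np.arange(n)
V = 1 - np.exp(2j * np.pi * omega / n)

y_mod = V * y_full                            # modified k-space data
delta = x - np.roll(x, -1)                    # discrete image gradient matching this sign convention

print(np.allclose(np.fft.fft(delta), y_mod))  # True: F(delta) = V F(x)
```

In the actual setting, y contains only the undersampled k-space locations, so F δ = y is an underdetermined system to be solved rather than an identity to be verified.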

Data likelihood
Assuming that the k-space data is corrupted by complex-valued Gaussian noise with variance σ²,
p(y | δ, σ²) = N(y; Fδ, σ²)   (Gaussian likelihood)

Prior distribution on gradient coefficients
Bayesian CS places a hyperparameter γᵢ on each pixel,
p(δᵢ | γᵢ) = N(0, γᵢ)   (Gaussian prior)
so that the i-th pixel is zero-mean Gaussian with variance γᵢ.
Multiplying over all pixels gives the full prior distribution,
p(δ | γ) = ∏ᵢ N(0, γᵢ)

Posterior distribution for gradient coefficients
Using the likelihood and the prior, we invoke Bayes' rule to arrive at the posterior,
p(δ | y, γ) ∝ p(δ | γ) · p(y | δ)
(Gaussian posterior = Gaussian prior × Gaussian likelihood)
p(δ | y, γ) = N(μ, Σ) with
μ = Γ F^H A^-1 y
Σ = Γ − Γ F^H A^-1 F Γ
where Γ = diag(γ) and A = σ² I + F Γ F^H; the inversion A^-1 is carried out with the Lanczos algorithm [1].
[1] Seeger et al. MRM, 2010
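For intuition, a small dense-matrix sketch of these formulas is given below (NumPy, 1-D, illustrative variable names). It forms A explicitly and solves the small system directly, whereas the actual reconstruction uses the Lanczos algorithm to avoid building A; the flat γ initialization here is only a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 32                                  # image size, number of k-space samples
rows = rng.choice(n, size=m, replace=False)    # random undersampling pattern

# Explicit partial DFT matrix F (m x n) and a sparse ground-truth gradient delta
F = np.exp(-2j * np.pi * np.outer(rows, np.arange(n)) / n) / np.sqrt(n)
delta = np.zeros(n)
delta[[5, 20, 41]] = [1.0, -0.5, 2.0]

sigma2 = 1e-3
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(m) + 1j * rng.standard_normal(m))
y = F @ delta + noise

# Posterior under p(delta_i | gamma_i) = N(0, gamma_i):
#   mu    = Gamma F^H A^{-1} y
#   Sigma = Gamma - Gamma F^H A^{-1} F Gamma,   A = sigma^2 I + F Gamma F^H
gamma = np.ones(n)                             # hyperparameters (flat placeholder initialization)
Gamma = np.diag(gamma)
A = sigma2 * np.eye(m) + F @ Gamma @ F.conj().T
mu = Gamma @ F.conj().T @ np.linalg.solve(A, y)
Sigma = Gamma - Gamma @ F.conj().T @ np.linalg.solve(A, F @ Gamma)
```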

EM algorithm for optimization
The expectation-maximization algorithm [1] is used to estimate the hyperparameters and the posterior iteratively:
Expectation step:
μ = Γ F^H A^-1 y
Σ = Γ − Γ F^H A^-1 F Γ
Maximization step:
γᵢ = |μᵢ|² / (1 − Σᵢᵢ / γᵢ)
[1] Wipf et al. IEEE Trans Signal Process, 2007
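Continuing the sketch above, a hedged rendering of the EM loop might look like this; the iteration count, stopping rule, and variance floor are my own choices rather than details from the talk.

```python
# EM iterations for the hyperparameters gamma (continues the previous sketch)
for it in range(50):
    Gamma = np.diag(gamma)
    A = sigma2 * np.eye(m) + F @ Gamma @ F.conj().T

    # E-step: posterior mean and covariance under the current gamma
    mu = Gamma @ F.conj().T @ np.linalg.solve(A, y)
    Sigma = Gamma - Gamma @ F.conj().T @ np.linalg.solve(A, F @ Gamma)

    # M-step: fixed-point update gamma_i = |mu_i|^2 / (1 - Sigma_ii / gamma_i)
    denom = np.maximum(1.0 - np.real(np.diag(Sigma)) / gamma, 1e-12)
    gamma_new = np.maximum(np.abs(mu) ** 2 / denom, 1e-12)

    if np.max(np.abs(gamma_new - gamma)) < 1e-8:   # simple convergence check (my choice)
        gamma = gamma_new
        break
    gamma = gamma_new
```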

Using the fully-sampled prior image
If we run the EM iterations on the fully-sampled image gradient δ_prior:
Expectation step:
μ_prior = δ_prior
Σ_prior = 0
Maximization step:
γ_prior = |μ_prior|²
The iterations do not alter the fully-sampled prior image, so we use γ_prior to initialize the EM iterations for the undersampled images.
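A minimal sketch of that initialization, using an illustrative stand-in for the reconstructed fast contrast (x_prior below is not from the talk); the resulting γ replaces the flat initialization in the EM sketch above.

```python
import numpy as np

# Stand-in for the reconstructed, fully-sampled fast contrast (illustrative 1-D image)
n = 64
x_prior = np.zeros(n)
x_prior[20:45] = 1.0

# Running EM on the fully-sampled gradient leaves it unchanged:
#   mu_prior = delta_prior, Sigma_prior = 0, so gamma_prior = |mu_prior|^2
delta_prior = x_prior - np.roll(x_prior, -1)    # same sign convention as V above
gamma_prior = np.abs(delta_prior) ** 2

# Initialize EM for the undersampled contrast with gamma_prior (floored for numerical stability)
gamma = np.maximum(gamma_prior, 1e-12)
```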

Extended Shepp-Logan Phantoms
Setup: fully-sampled prior image; undersampled target at R = 4 (sampling pattern shown on the slide).
SparseMRI [1] (Total Variation): 20.1% RMSE
Bayesian CS with prior: 3.0% RMSE
(Error maps scaled 10×.)
[1] Lustig et al. MRM, 2007
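The talk quotes percent RMSE without defining it; the helper below shows one common convention (error norm relative to the ground-truth norm, in percent), which is an assumption on my part.

```python
import numpy as np

def rmse_percent(x_rec, x_true):
    """Percent RMSE relative to the ground-truth norm (assumed convention; not stated in the talk)."""
    return 100.0 * np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```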

Turbo Spin Echo
Setup: late echo as the fully-sampled prior; early echo undersampled at R = 4 (sampling pattern shown on the slide).
SparseMRI [1] (Total Variation): 9.3% RMSE
Bayesian CS with prior: 5.8% RMSE
(Error maps scaled 10×.)
[1] Lustig et al. MRM, 2007

SRI24 atlas
Setup: proton density as the fully-sampled prior; T2-weighted and T1-weighted contrasts undersampled at R = 4 (sampling pattern shown on the slide).
SparseMRI [1] (Total Variation): 9.5% RMSE
Joint Bayesian CS [2]: 4.9% RMSE
Joint Bayesian CS with prior: 4.3% RMSE
(Error maps scaled 10×.)
[1] Lustig et al. MRM, 2007
[2] Bilgic et al. MRM, 2011

Conclusion
In a multi-contrast scan, one of the acquisitions may be much faster than the others (e.g. AutoAlign).
When the fast contrast is fully-sampled, we use it as prior information to help recover the undersampled contrasts.
Our method uses the prior image only to initialize the Bayesian CS iterations, hence imposes the prior in a soft manner.
Acknowledgements: Siemens Healthcare, Siemens-MIT Alliance, CIMIT-MIT Medical Engineering Fellowship, R01 EB 007942