Physics-Based Prior Modeling in Inverse Problems


Physics-Based Prior Modeling in Inverse Problems
MURI Meeting 2013
M. Usman Sadiq, Purdue University
Charles A. Bouman, Purdue University
In collaboration with: Jeff Simmons, AFRL; Venkat Venkatakrishnan, Purdue; Marc De Graef, CMU

Inverse Problems in Imaging

Recover information from an indirect measurement.

[Block diagram: an unknown quantity $x$ passes through a physical system (linear/nonlinear, deterministic/stochastic) to produce data $y$; an inversion method, using regularity conditions (prior knowledge) and accounting for other unknowns $\phi$ (nuisance parameters), produces the estimate $\hat{x}$.]

Image and system models are critical to accurate inversion.

Model-Based Iterative Reconstruction

General framework for solving inverse problems:

$$\hat{x} = \arg\max_x \, p(x \mid y) = \arg\min_x \left\{ -\log p(y \mid x) - \log p(x) \right\}$$

where $p(y \mid x)$ is the likelihood and $p(x)$ is the prior model.

[Block diagram: the physical system produces data $y$; the forward model $g(x)$ is compared with $y$, and their difference drives an optimization engine that combines it with the prior model $p(x)$ to produce the estimate $\hat{x}$.]
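For the Gaussian de-noising setting used later in this deck, the likelihood term takes an explicit quadratic form. With i.i.d. Gaussian noise of variance $\sigma^2$ (so $y = x + w$, $w \sim N(0, \sigma^2 I)$),

$$-\log p(y \mid x) = \frac{1}{2\sigma^2}\|y - x\|^2 + \text{const},$$

so the MAP estimate reduces to $\hat{x} = \arg\min_x \left\{ \frac{1}{2\sigma^2}\|y - x\|^2 - \log p(x) \right\}$, which is exactly the data-fit term that appears in the de-noising cost function below.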

Popular Models for the Prior

Neighborhood-based (local) priors penalize dissimilarity between voxels, with a penalty $\rho(x_i - x_j)$ on the difference between neighboring voxels $x_i$ and $x_j$:
- Markov Random Fields
- Bilateral filtering

Non-local priors exploit image information from non-local voxels:
- Non-Local Means
- K-SVD
- BM3D

[Figure: sheep lung image and its learned dictionary with 256 atoms, from Qiong Xu, "Low-dose CT reconstruction via Dictionary Learning".]
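As a concrete example of a local prior (standard background, not taken from these slides), a Markov random field prior can be written as a Gibbs distribution over neighboring pixel pairs,

$$p(x) = \frac{1}{Z} \exp\left\{ -\sum_{\{i,j\} \in \mathcal{C}} b_{ij}\, \rho(x_i - x_j) \right\},$$

where $\mathcal{C}$ is the set of neighboring pairs (cliques), $b_{ij}$ are weights, and $Z$ is a normalizing constant.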

Physics-Based Prior

For some inverse problems, physics can provide more information than local or non-local priors. For example, microstructure evolution in materials is described by a phase-field model. We explore the idea of using a physics-based prior in such inverse problems.

[Figure: microstructure evolution in a Cu-Al alloy.]

Cahn-Hilliard Equation as Prior

The Cahn-Hilliard equation governs the temporal and spatial evolution of binary fluids. We use the Cahn-Hilliard equation as the prior for inverse problems. As a first step, we apply the Cahn-Hilliard prior to the image de-noising problem.

Cahn-Hilliard Equation

The Cahn-Hilliard equation for a binary fluid is

$$H(x,\theta) = \frac{\partial x}{\partial t} + a \nabla^4 x - b \nabla^2 \frac{\partial f^*}{\partial x} = 0$$

where
- $x(r,t)$ is the concentration of the fluid in $[0,1]$, with 0 representing one phase and 1 representing the other,
- $f^*(x)$ is the dimensionless free energy of the fluid,
- $a$ and $b$ are parameters of the equation.
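As detailed in the supplementary slides, a double-well potential is assumed for the free energy, $f^*(x) = x^2(x-1)^2$, whose minima at $x = 0$ and $x = 1$ correspond to the two pure phases, giving

$$\frac{\partial f^*}{\partial x} = 4x^3 - 6x^2 + 2x.$$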

Image De-noising in the Presence of the Cahn-Hilliard Prior

Image de-noising problem statement:

$$\hat{x} = \arg\min_{x,\theta} \|y - x\|_D^2 \quad \text{subject to} \quad H(x,\theta) = 0$$

where
- $x$ is the unknown image,
- $y$ is the noisy input image,
- $D$ is a diagonal matrix with $d_i = \frac{1}{2\sigma^2}$,
- $H(x,\theta) = 0$ is the Cahn-Hilliard equation.

De-noising Cost Function

For the de-noising problem, we form the following cost function, which penalizes deviation from $H(x,\theta) = 0$, i.e. deviation from the physical behavior:

$$L_\lambda(x,\theta) = \frac{1}{2\sigma^2}\|y - x\|^2 + \lambda \|H(x,\theta)\|^2$$

MAP estimate:

$$\hat{x} = \arg\min_{x,\theta} L_\lambda(x,\theta)$$

Alternating Minimization using ICD

$$L_\lambda(x,\theta) = \frac{1}{2\sigma^2}\|y - x\|^2 + \lambda \|H(x,\theta)\|^2$$

- Initialize $x \leftarrow y$. Low-pass filter $y$ to get $y_{lp}$ and initialize $\theta \leftarrow \arg\min_\theta \|H(y_{lp}, \theta)\|^2$.
- Set the number of iterations. For each iteration:
  - Update $x$ to minimize $L_\lambda(x,\theta)$: for each pixel $s$, minimize $L_\lambda(x,\theta)$ over $x_s \in [0,1]$.
  - Update $\theta$ to minimize $\|H(x,\theta)\|^2$: find the least-squares estimate $\theta \leftarrow \arg\min_\theta \|H(x,\theta)\|^2$.

A sketch of this scheme follows.
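The following is a minimal NumPy sketch of this alternating scheme, not the authors' implementation. It assumes periodic boundary conditions and the discrete residual $H(u_{n+1}, u_n, \theta)$ with $\theta = (\bar{a}, \bar{b})$ from the supplementary slides; for brevity, the per-pixel ICD update of $x$ is replaced by a projected-gradient step, and all names and parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian(u):
    """5-point discrete Laplacian D(u), periodic boundaries; the grid step is
    absorbed into the unitless parameters (a_bar, b_bar), per eq. (2)."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def ch_residual(u_next, u, a_bar, b_bar):
    """Discrete Cahn-Hilliard residual H(u_{n+1}, u_n, theta)."""
    g = 4*u**3 - 6*u**2 + 2*u                      # d f*/du for f*(u) = u^2 (u-1)^2
    return u_next - u + a_bar * laplacian(laplacian(u)) - b_bar * laplacian(g)

def update_theta(x):
    """Least-squares estimate of theta = (a_bar, b_bar); H is linear in theta."""
    cols, rhs = [], []
    for n in range(len(x) - 1):
        g = 4*x[n]**3 - 6*x[n]**2 + 2*x[n]
        cols.append(np.stack([laplacian(laplacian(x[n])).ravel(),
                              -laplacian(g).ravel()], axis=1))
        rhs.append(-(x[n+1] - x[n]).ravel())
    (a_bar, b_bar), *_ = np.linalg.lstsq(np.vstack(cols),
                                         np.concatenate(rhs), rcond=None)
    return a_bar, b_bar

def grad_reg(x, a_bar, b_bar):
    """Gradient of sum_n ||H||^2 w.r.t. x (the Laplacian is self-adjoint
    under periodic boundaries)."""
    grad = np.zeros_like(x)
    for n in range(len(x) - 1):
        r = ch_residual(x[n+1], x[n], a_bar, b_bar)
        gp = 12*x[n]**2 - 12*x[n] + 2              # g'(u)
        grad[n+1] += 2*r
        grad[n]   += 2*(-r + a_bar*laplacian(laplacian(r)) - b_bar*gp*laplacian(r))
    return grad

def denoise(y, sigma, lam, n_iter=50, step=1e-3):
    """Alternating minimization of L_lambda(x, theta); y is (frames, H, W)."""
    x = y.copy()                                          # initialize x <- y
    theta = update_theta(gaussian_filter(y, sigma=1.0))   # init from low-passed data
    for _ in range(n_iter):
        g = (x - y) / sigma**2 + lam * grad_reg(x, *theta)
        x = np.clip(x - step * g, 0.0, 1.0)    # projected-gradient stand-in for ICD
        theta = update_theta(x)                # least-squares theta update
    return x, theta
```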

Experiments

- Generate phantom images $x$ that satisfy the Cahn-Hilliard equation $H(x,\theta) = 0$.
- Generate noisy images $y$ from $x$ by adding i.i.d. Gaussian noise with $\sigma = 0.05$ and $0.1$.
- Apply ICD to minimize $L_\lambda(x,\theta) = \frac{1}{2\sigma^2}\|y - x\|^2 + \lambda \|H(x,\theta)\|^2$ jointly over $x$ and $\theta$.

A sketch of the phantom-generation step follows this list.
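An illustrative sketch of phantom generation, reusing the `laplacian`, `denoise`, and related helpers from the previous sketch: phantoms are produced by explicit time-stepping of the discrete update (2) from the supplementary slides, so consecutive stored frames satisfy $H(x,\theta) = 0$ exactly. The grid size, parameter values, and $\lambda$ are assumptions, not values from the slides.

```python
import numpy as np

def simulate_ch(shape=(64, 64), n_frames=20, burn_in=500,
                a_bar=1e-3, b_bar=1e-2, seed=0):
    """Phantom sequence from explicit stepping of eq. (2):
    u_{n+1} = u_n - a_bar * D^2(u_n) + b_bar * D(4u^3 - 6u^2 + 2u)."""
    rng = np.random.default_rng(seed)
    u = 0.5 + 0.05 * rng.standard_normal(shape)   # perturbation around u = 0.5
    frames = []
    for k in range(burn_in + n_frames):
        g = 4*u**3 - 6*u**2 + 2*u
        u = u - a_bar * laplacian(laplacian(u)) + b_bar * laplacian(g)
        if k >= burn_in:            # keep frames after phase separation develops
            frames.append(u.copy())
    return np.stack(frames)

# Noisy observations at 5% and 10% noise, then joint minimization:
x_true = simulate_ch()
for sigma in (0.05, 0.10):
    y = x_true + sigma * np.random.default_rng(1).standard_normal(x_true.shape)
    x_hat, theta_hat = denoise(y, sigma=sigma, lam=10.0)  # lambda is illustrative
```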

De-noising results for σ = 0.05 (5% noise)

De-noising Comparison

De-noising results for σ = 0.1 (10% noise)

De-noising Comparison

Current and Future Work

Reconstruction in the presence of the Cahn-Hilliard prior:

$$\min_{x,\theta} \|y - Ax\|_D^2 \quad \text{subject to} \quad H(x,\theta) = 0$$

where $A$ is a matrix implementing the linear forward model.

Reconstruction with time-interleaving and limited projections.
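Presumably, by analogy with the de-noising formulation (the slides do not state this explicitly), the constraint would again be relaxed into a penalized cost,

$$L_\lambda(x,\theta) = \|y - Ax\|_D^2 + \lambda \|H(x,\theta)\|^2,$$

minimized jointly over $x$ and $\theta$.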

Questions?

Supplementary Slides

Cahn-Hilliard Equation

The Cahn-Hilliard equation for a binary fluid is

$$\frac{\partial u}{\partial t} = \gamma \nabla^2 \left( -\varepsilon^2 \nabla^2 u + \frac{\partial f}{\partial u} \right)$$

where
- $u(r,t)$ is the concentration of the alloy in $[0,1]$, with 0 representing one phase and 1 representing the other,
- $f(u)$ is the free energy of the alloy; assuming a double-well potential energy functional, $f(u) = u^2(u-1)^2$ and $\frac{\partial f}{\partial u} = 4u^3 - 6u^2 + 2u$,
- $\varepsilon$ controls the width of the transition region $[\mu\text{m}]$,
- $\gamma$ controls the rate of growth of the phase $[\mu\text{m}^2/\text{s}]$.

[Figure: the double-well free energy $f^*(u)$ plotted over $u \in [0,1]$.]

Cahn-Hilliard Equation

The Cahn-Hilliard equation for a binary fluid is

$$\frac{\partial u}{\partial t} = -a \nabla^4 u + b \nabla^2 \frac{\partial f^*}{\partial u}$$

where
- $u(r,t)$ is the concentration of the fluid in $[0,1]$, with 0 representing one phase and 1 representing the other,
- $f^*(u)$ is the dimensionless free energy of the fluid; assuming a double-well potential energy functional, $f^*(u) = u^2(u-1)^2$ and $\frac{\partial f^*}{\partial u} = 4u^3 - 6u^2 + 2u$,
- $a$ $[\mu\text{m}^4/\text{s}]$ and $b$ $[\mu\text{m}^2/\text{s}]$ are parameters of the equation.

[Figure: the double-well free energy $f^*(u)$ plotted over $u \in [0,1]$.]

Discrete Form of the Cahn-Hilliard Equation

Consider 2D spatial coordinates, and let $u_{n,i,j} = u(i\Delta x,\, j\Delta x,\, n\Delta t_s)$ be the discrete realization of $u$ at spatial coordinates $(i,j)$ and the $n$-th time frame, where $\Delta t_s$ [s] is the time step and $\Delta x$ $[\mu\text{m}]$ is the spatial step.

The finite-difference formulation of the Cahn-Hilliard equation is

$$\frac{u_{n+1,i,j} - u_{n,i,j}}{\Delta t_s} = -a\, D^2(u_n, \Delta x)_{i,j} + b\, D(4u_n^3 - 6u_n^2 + 2u_n, \Delta x)_{i,j} \quad (1)$$

where

$$D(u_n, \Delta x)_{i,j} = \frac{u_{n,i+1,j} + u_{n,i-1,j} + u_{n,i,j+1} + u_{n,i,j-1} - 4u_{n,i,j}}{(\Delta x)^2}$$

is the discrete-space Laplace operator and $D^2(u_n, \Delta x)_{i,j} = D(D(u_n, \Delta x), \Delta x)_{i,j}$.

Parameterization - Discrete Form of the Cahn-Hilliard Equation

Rewrite the Cahn-Hilliard equation (1) as

$$u_{n+1,i,j} - u_{n,i,j} = -\bar{a}\, D^2(u_n)_{i,j} + \bar{b}\, D(4u_n^3 - 6u_n^2 + 2u_n)_{i,j} \quad (2)$$

where $\bar{a} = \frac{a \,\Delta t_s}{(\Delta x)^4}$ and $\bar{b} = \frac{b \,\Delta t_s}{(\Delta x)^2}$ are unitless parameters.

So the Cahn-Hilliard regularization term $H(u_{n+1}, u_n, \theta)$ is

$$H(u_{n+1}, u_n, \theta) = u_{n+1} - u_n + \bar{a}\, D^2(u_n) - \bar{b}\, D(4u_n^3 - 6u_n^2 + 2u_n)$$
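A quick numeric check of this parameterization, reusing the helpers from the earlier sketches: on a noise-free trajectory generated with known $(\bar{a}, \bar{b})$, the residual $H$ vanishes up to round-off, and the least-squares fit recovers the parameters.

```python
x = simulate_ch(a_bar=1e-3, b_bar=1e-2)             # noise-free CH trajectory
r = ch_residual(x[1], x[0], a_bar=1e-3, b_bar=1e-2)
print(np.abs(r).max())                              # ~ machine precision
print(update_theta(x))                              # recovers (1e-3, 1e-2)
```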

Stability Constraints on Discretization

Some discretization schemes of the Cahn-Hilliard equation are known to be more stable [1].

Implicit Euler scheme:

$$\frac{u_{ij}^{n+1} - u_{ij}^{n}}{\Delta t} = \gamma\, D\!\left( -\varepsilon^2 D(u^{n+1})_{ij} + (u_{ij}^{n+1})^3 - u_{ij}^{n+1} \right)$$

Linearly stabilized splitting scheme [1]:

$$\frac{u_{ij}^{n+1} - u_{ij}^{n}}{\Delta t} = \gamma\, D\!\left( -\varepsilon^2 D(u^{n+1})_{ij} + 2u_{ij}^{n+1} \right) + \gamma\, D\!\left( (u_{ij}^{n})^3 - 3u_{ij}^{n} \right)$$

- Splits the free energy $E(u) = \frac{u^4}{4} - \frac{u^2}{2}$ into convex and concave parts, $E(u) = E_1(u) + E_2(u)$.
- Treats the convex part implicitly and the concave part explicitly.

[1] D. Eyre, "An unconditionally stable one-step scheme for gradient systems," 1997.
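A minimal sketch of the stabilized splitting step. The slides do not specify how the implicit solve is performed; assuming periodic boundary conditions, one standard choice is to invert the implicit linear operator in Fourier space, where it is diagonal ($\nabla^2 \to -|k|^2$).

```python
import numpy as np

def stabilized_step(u, dt, gamma, eps, dx=1.0):
    """One linearly stabilized splitting step (Eyre, 1997):
    (u^{n+1} - u^n)/dt = gamma*Lap(-eps^2*Lap(u^{n+1}) + 2*u^{n+1})
                       + gamma*Lap((u^n)^3 - 3*u^n),
    solved exactly in Fourier space under periodic boundary conditions."""
    kx = 2*np.pi * np.fft.fftfreq(u.shape[0], d=dx)
    ky = 2*np.pi * np.fft.fftfreq(u.shape[1], d=dx)
    k2 = kx[:, None]**2 + ky[None, :]**2             # |k|^2, so Lap -> -k2
    rhs = np.fft.fft2(u) - dt*gamma*k2*np.fft.fft2(u**3 - 3*u)
    denom = 1.0 + dt*gamma*(eps**2 * k2**2 + 2.0*k2)  # implicit convex part
    return np.fft.ifft2(rhs / denom).real

# usage: remains stable even for time steps where explicit stepping diverges
# u = stabilized_step(u, dt=0.1, gamma=1.0, eps=0.05)
```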

Cost per pixel vs. iterations

Regularization per pixel vs. iterations

Regularization per pixel after 50 iterations: $8.4469 \times 10^{-5}$

Comparison with Standard De-noising Methods

RMSE for de-noising with 5% noise:
- BM3D: 0.012011
- BM4D: 0.006212
- Cahn-Hilliard prior: 0.02614

BM4D De-noising Results

De-noising Comparison

De-noising Comparison