Source Reconstruction for 3D Bioluminescence Tomography with Sparse Regularization


1 Source Reconstruction for 3D Bioluminescence Tomography with Sparse Regularization
Xiaoqun Zhang
Department of Mathematics / Institute of Natural Sciences, Shanghai Jiao Tong University
International Conference on Image Processing: Theory, Methods and Applications

2 Outline
Problem Description
Reconstruction based on the Gaussian noise model [1]
Reconstruction based on the Poisson noise model [2]
Conclusions and future work
[1] Y. Lu, X. Zhang, A. Douraghy, D. Stout, J. Tian, T. F. Chan, and A. F. Chatziioannou, Source Reconstruction for Spectrally-resolved Bioluminescence Tomography with Sparse A Priori Information, Optics Express, 2009.
[2] X. Zhang, Y. Lu, and T. F. Chan, A novel sparsity reconstruction method from Poisson data for bioluminescence tomography, submitted.

3 Problem description: Bioluminescence imaging system
A powerful tumor detection/monitoring technique for in vivo imaging: extremely low background, high sensitivity, and relatively simple instrumentation.
Tissue is a medium exhibiting both scattering and absorption.
The amount of light is proportional to the number of cells producing it.
BLT reconstruction: reconstruct sources from the observed photon intensity on the surface.
Surface intensity depends on: source depth; source shape and brightness; surface shape (curvature); wavelength.
Figure: courtesy of XENOGEN.

4 Problem description: Photon diffusion model
For $x \in \Omega \subset \mathbb{R}^3$ and wavelength $\lambda$, let
$\Phi(x,\lambda)$: photon flux density; $S(x,\lambda)$: source energy density.
Steady-state diffusion equation:
$$\nabla\cdot\big(\gamma\,\nabla\Phi\big) - \mu_a\,\Phi + S = 0 \qquad (x \in \Omega),$$
where $\gamma(x,\lambda)$ is the diffusion coefficient and $\mu_a(x,\lambda)$ is the absorption coefficient.
Robin boundary condition:
$$\Phi + c\,\gamma\,\big(v(x)\cdot\nabla\Phi\big) = 0 \qquad (x \in \partial\Omega),$$
where $v$ is the unit outer normal on $\partial\Omega$ and $c$ is a parameter.
Measurements: the photon density on the body surface $\partial\Omega$ for discretized $\lambda_i$:
$$Q(x,\lambda_i) = \frac{\Phi(x,\lambda_i)}{c} = -\gamma\,\big(v\cdot\nabla\Phi(x,\lambda_i)\big) \qquad (x \in \partial\Omega).$$

5 Problem description: Inverse problem, establishing a linear relationship
Take the weak solution for $\Phi(x,\lambda)$ and use the finite element method to discretize the domain $\Omega$, $\Phi(x,\lambda)$, and $S(x,\lambda)$. This yields the linear equation
$$Au = f,$$
where $u$ is the vector of unknown node values of $S$ in $\Omega$ and $f$ is the vector of measurable node values of $\Phi$ on the boundary $\partial\Omega$. The system is underdetermined and highly ill-posed, and $A$ depends on the mesh size and the shape functions.
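
To make the structure of $A$ concrete, here is a minimal, hypothetical toy sketch in Python/NumPy (not the FEM code used in the talk): a 1-D finite-difference analogue of the diffusion problem, in which each column of $A$ is the boundary reading produced by a unit source at one interior node. With two measured boundary values against fifty unknown source nodes, the system is severely underdetermined.

```python
import numpy as np

# Toy 1-D analogue of building the BLT system matrix A (sizes are assumptions):
# discretize -(gamma u')' + mu_a u = s on [0,1], measure at the two endpoints.
n = 50                      # interior source nodes
h = 1.0 / (n + 1)
gamma, mu_a = 0.5, 0.1

# Finite-difference diffusion operator L (stand-in for the FEM stiffness matrix)
L = (np.diag(np.full(n, 2 * gamma / h**2 + mu_a))
     - np.diag(np.full(n - 1, gamma / h**2), 1)
     - np.diag(np.full(n - 1, gamma / h**2), -1))

# Column j of A = boundary measurement produced by a unit source at node j,
# i.e. A = R @ inv(L), with R restricting Phi to the measured boundary nodes.
Phi = np.linalg.solve(L, np.eye(n))              # solve L Phi = I column by column
R = np.zeros((2, n)); R[0, 0] = R[1, -1] = 1.0   # the two "boundary" nodes
A = R @ Phi                                      # 2 x 50: severely underdetermined
```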

6 Problem description: Noise models
Gaussian noise model:
$$f = Au + N,$$
where $N$ is noise modeled by a Gaussian distribution.
Poisson noise model:
$$f_i = \mathrm{Poisson}\big((Au)_i\big), \qquad P\big(f_i \,\big|\, (Au)_i\big) = \frac{(Au)_i^{\,f_i}\, e^{-(Au)_i}}{f_i!}.$$
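
As a quick illustration of the two noise models, the following sketch generates synthetic data under each of them; the sizes, the random matrix standing in for the FEM system matrix $A$, and the source intensities are all assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: m boundary measurements, n source nodes.
m, n = 64, 256
A = rng.random((m, n)) * 1e-2                        # stand-in for the system matrix
u_true = np.zeros(n)
u_true[rng.choice(n, size=3, replace=False)] = 1e4   # a few bright sources

# Gaussian noise model: f = Au + N
f_gauss = A @ u_true + rng.normal(scale=1.0, size=m)

# Poisson noise model: f_i ~ Poisson((Au)_i)
f_poisson = rng.poisson(A @ u_true)
```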

7 Gaussian Noise model: Classical methods for the Gaussian noise model
Classical maximum likelihood method (least squares):
$$\min_u \frac{1}{2}\,\|Au - f\|^2 \quad \text{s.t.}\quad u \in D = \{0 \le u \le C\}.$$
Tikhonov regularization:
$$\min_u \frac{1}{2}\,\|Au - f\|^2 + \frac{\delta}{2}\,\|u\|^2 \quad \text{s.t.}\quad u \in D = \{0 \le u \le C\}.$$
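
A minimal sketch of these two baselines, assuming $A$, $f$, the bound $C$, and the Tikhonov weight $\delta$ are given as NumPy objects. It uses SciPy's bound-constrained least-squares solver rather than whatever solver was used for the talk; the Tikhonov term is handled by the standard stacking trick.

```python
import numpy as np
from scipy.optimize import lsq_linear

def least_squares_bounded(A, f, C):
    """min 0.5||Au - f||^2  s.t.  0 <= u <= C."""
    return lsq_linear(A, f, bounds=(0.0, C)).x

def tikhonov_bounded(A, f, C, delta):
    """min 0.5||Au - f||^2 + (delta/2)||u||^2, same bounds,
    via the stacked system ||[A; sqrt(delta) I] u - [f; 0]||^2."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(delta) * np.eye(n)])
    f_aug = np.concatenate([f, np.zeros(n)])
    return lsq_linear(A_aug, f_aug, bounds=(0.0, C)).x
```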

8 Gaussian Noise model: Sparsity as a priori information
$\ell_1$ regularization (sparsity):
$$\min_u \frac{1}{2}\,\|Au - f\|^2 + \mu\,\|u\|_1 \quad \text{s.t.}\quad u \in D = \{0 \le u \le C\},$$
where $\|u\|_1 = \sum_i |u_i|$ denotes the $\ell_1$ norm of the vector $u$.
Algorithm: bound-constrained quasi-Newton method (BLMVM) [3] using a smoothed $\ell_1$ norm.
[3] S. J. Benson and J. Moré, A limited-memory variable-metric algorithm for bound-constrained minimization.
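
The slides' solver (BLMVM) is not in SciPy, but L-BFGS-B is an analogous bound-constrained limited-memory quasi-Newton method, so this sketch uses it together with the smoothing $|u_i| \approx \sqrt{u_i^2 + \varepsilon}$; treat it as an illustration of the approach, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def l1_reconstruct(A, f, mu, C, eps=1e-6):
    """Smoothed-l1 model solved by a bound-constrained quasi-Newton method
    (L-BFGS-B standing in for BLMVM)."""
    def obj(u):
        r = A @ u - f
        s = np.sqrt(u**2 + eps)          # smoothed |u_i|
        val = 0.5 * r @ r + mu * s.sum()
        grad = A.T @ r + mu * u / s      # gradient of the smoothed objective
        return val, grad
    n = A.shape[1]
    res = minimize(obj, np.zeros(n), jac=True, method="L-BFGS-B",
                   bounds=[(0.0, C)] * n)
    return res.x
```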

9 Gaussian Noise model: Simulation settings
Data simulated by Monte Carlo simulation of photon diffusion.
Cubic domain of size 15mm x 15mm x 15mm, discretized by hexahedron-based FEM.
Photon distribution observed only on the top surface of the cubic domain, at three wavelengths: $\lambda_1 = 600$nm, $\lambda_2 = 650$nm, $\lambda_3 = 700$nm.

10 Gaussian Noise model: Observed photon distribution on one surface
Figure: Observed photon distribution of a single source (located at (0, 0, 0)) on the top surface, at 600nm, 650nm, and 700nm.

11 Gaussian Noise model: Single source reconstruction
Figure: Reconstructions for a single source located at (0, 0, 0) with no regularization, $\ell_2$ regularization, and $\ell_1$ regularization. First row: $10^6$ photons; second row: $10^4$ photons.

12 Gaussian Noise model: Dual sources with homogeneous media
Figure: Dual-source BLT reconstructions (no regularization, $\ell_2$ regularization, $\ell_1$ regularization) when the true source centers are at (-3.0, 0.0, 3.0) and (3.0, 0.0, 3.0), with $10^4$ photons at 600nm.

13 Gaussian Noise model: Dual sources with heterogeneous media
Figure: Dual-source BLT reconstructions (no regularization, $\ell_2$ regularization, $\ell_1$ regularization) when the true source centers are at (-3.0, 0.0, 3.0) and (3.0, 0.0, 3.0), with $10^4$ photons at 600nm.

14 Gaussian Noise model: Experimental data
Figure: Experimental BLT reconstructions with a mouse-shaped phantom. Panels: GFP filter observation, DsRed filter observation, volumetric mesh, and reconstructions with no regularization, $\ell_2$ regularization, and $\ell_1$ regularization. The actual source position is (114.5, 131.0, 3.0) (CT scanning); no regularization: (111.7, 132.6, 2.7); $\ell_2$ regularization: (115.1, 131.7, 2.4); $\ell_1$ regularization: (114.7, 131.7, 2.9).

15 Poisson Noise model
Random observation: $f_i \sim \mathrm{Poisson}\big((Au)_i\big)$.
Maximum a posteriori (MAP) estimation:
$$\max_{u\ge0} p(u\,|\,f) \;\Longleftrightarrow\; \min_{u\ge0} \sum_{i\in\Omega}\big((Au)_i - f_i\log(Au)_i\big) \;\Longleftrightarrow\; \min_{u\ge0} D_{KL}(f,Au),$$
where $D_{KL}$ is the Kullback-Leibler distance.
Optimality (KKT) conditions:
$$\nabla F(u) - \lambda = 0, \qquad \lambda_i u_i = 0 \ \text{ for } i=1,\dots,m, \qquad \lambda_i \ge 0 \ \text{ for } i=1,\dots,m,$$
where
$$\nabla F(u) = A^T\mathbf{1} - A^T\!\left(\frac{f}{Au}\right). \qquad (1)$$
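
In code, the MAP objective and the gradient appearing in the KKT condition look as follows (a sketch assuming $A$ and $f$ are NumPy arrays; the small eps guards the logarithm and the division, a numerical choice not in the slides):

```python
import numpy as np

def kl_objective(u, A, f, eps=1e-12):
    """D_KL(f, Au) up to an additive constant: sum_i (Au)_i - f_i log (Au)_i."""
    Au = A @ u
    return np.sum(Au - f * np.log(Au + eps))

def kl_gradient(u, A, f, eps=1e-12):
    """grad F(u) = A^T 1 - A^T (f / Au), as in the KKT condition (1)."""
    Au = A @ u
    return A.T @ np.ones(len(f)) - A.T @ (f / (Au + eps))
```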

16 Poisson Noise model: Reconstruction models
EM algorithm (Richardson-Lucy algorithm): a fixed-point algorithm on
$$A^T\mathbf{1} - A^T\!\left(\frac{f}{Au}\right) = 0, \qquad (2)$$
$$u^{k+1} = \frac{u^k}{A^T\mathbf{1}}\;A^T\!\left(\frac{f}{Au^k}\right).$$
Sparsity a priori:
$\ell_1$-norm regularization:
$$\min_{u\ge0} \Phi(u) = D_{KL}(f,Au) + \mu\,\|u\|_1.$$
$\ell_0$-norm regularization:
$$\min_{u\ge0} D_{KL}(f,Au) \quad \text{s.t.}\quad 0 < \|u\|_0 \le K.$$
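
A direct transcription of the EM/Richardson-Lucy fixed point; the multiplicative update keeps every iterate nonnegative, which is why no projection is needed. The iteration count and starting point are arbitrary choices made for the sketch.

```python
import numpy as np

def em_richardson_lucy(A, f, n_iter=200, eps=1e-12):
    """Fixed-point EM iteration  u <- (u / A^T 1) * A^T(f / Au)."""
    m, n = A.shape
    u = np.ones(n)                      # positive starting point
    At1 = A.T @ np.ones(m)
    for _ in range(n_iter):
        u = (u / (At1 + eps)) * (A.T @ (f / (A @ u + eps)))
    return u
```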

17 Poisson Noise model: Algorithms
Poisson $\ell_1$ minimization algorithm: forward-backward splitting + EM-$\ell_1$ algorithm [4]:
$$u^{k+\frac{1}{2}} = (1-\omega_k)\,u^k + \omega_k\,\frac{u^k}{A^T\mathbf{1}_\Omega}\,A^T\!\left(\frac{f}{Au^k}\right),$$
$$u^{k+1} = \arg\min_{u\ge0}\ \sum_{i\in\Omega} \frac{(A^T\mathbf{1}_\Omega)_i}{2}\,\big(u - u^{k+\frac{1}{2}}\big)_i^2 + \omega_k\,\mu\,\|u\|_1.$$
[4] Brune, Sawatzky, Wübbeling, Kösters, Burger, 2009.
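
Because the quadratic in the second step is separable with weights $(A^T\mathbf{1}_\Omega)_i$, its minimizer is a weighted nonnegative soft-threshold with closed form $u_i = \max\big(0,\ u^{k+1/2}_i - \omega_k\mu/(A^T\mathbf{1}_\Omega)_i\big)$, which is what this sketch implements (a constant $\omega$ and a fixed iteration count are simplifying assumptions):

```python
import numpy as np

def em_l1(A, f, mu, n_iter=200, omega=1.0, eps=1e-12):
    """Forward-backward EM-l1 sketch: one damped EM step, then the
    weighted nonnegative soft-threshold solving the l1 subproblem."""
    m, n = A.shape
    u = np.ones(n)
    At1 = A.T @ np.ones(m)
    for _ in range(n_iter):
        em = (u / (At1 + eps)) * (A.T @ (f / (A @ u + eps)))
        u_half = (1 - omega) * u + omega * em
        u = np.maximum(0.0, u_half - omega * mu / (At1 + eps))
    return u
```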

18 Poisson Noise model: Algorithms
SPIRAL-TAP [5]. Let $F(u) = D_{KL}(f,Au)$ and
$$F^k(u) = F(u^k) + (u-u^k)^T\nabla F(u^k) + \frac{\alpha_k}{2}\,\|u-u^k\|^2.$$
SPIRAL-TAP: $u^{k+1} = \arg\min_{u\ge0} F^k(u) + \mu\,\|u\|_1$.
Initialize: choose $\eta > 1$, $\sigma \in (0,1)$, $M \in \mathbb{Z}_+$, $0 < \alpha_0 \in [\alpha_{\min}, \alpha_{\max}]$, and an initial solution $u^0$; start the iteration counter at $k = 0$.
Repeat:
  choose $\alpha_k \in [\alpha_{\min}, \alpha_{\max}]$ by the Barzilai-Borwein method:
  $$\alpha_k = \frac{(u^k - u^{k-1})^T\,\nabla^2 F(u^k)\,(u^k - u^{k-1})}{\|u^k - u^{k-1}\|^2};$$
  repeat:
  $$u^{k+1} = \arg\min_{u\ge0}\ \frac{1}{2}\,\|u - s^k\|^2 + \frac{\mu}{\alpha_k}\,\|u\|_1, \qquad s^k = u^k - \frac{1}{\alpha_k}\nabla F(u^k),$$
  $\alpha_k \leftarrow \eta\,\alpha_k$,
  until $u^{k+1}$ satisfies the acceptance criterion
  $$\Phi(u^{k+1}) \le \max_{[k-M]_+ \le l \le k} \Phi(u^l) - \frac{\sigma\,\alpha_k}{2}\,\|u^{k+1} - u^k\|^2;$$
  $k \leftarrow k+1$,
until the stopping criterion is satisfied.
[5] Harmany, Marcia, Willett, This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms: Theory and Practice.
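
A stripped-down sketch of the SPIRAL-TAP loop: it keeps the Barzilai-Borwein step (using the secant form $(\Delta u)^T\Delta g / \|\Delta u\|^2$ in place of the Hessian expression above) and the nonnegative soft-threshold subproblem, but it omits the backtracking/acceptance test for brevity, so it is not the full algorithm.

```python
import numpy as np

def spiral_tap_sketch(A, f, mu, n_iter=100, alpha=1.0,
                      alpha_min=1e-8, alpha_max=1e8, eps=1e-12):
    """SPIRAL-TAP outer loop without the acceptance test: BB step length
    on the KL gradient, then the separable nonnegative l1 subproblem."""
    m, n = A.shape
    grad = lambda v: A.T @ np.ones(m) - A.T @ (f / (A @ v + eps))
    u = np.ones(n)
    g = grad(u)
    for _ in range(n_iter):
        s = u - g / alpha
        u_new = np.maximum(0.0, s - mu / alpha)  # argmin 0.5||u-s||^2 + (mu/alpha)||u||_1, u >= 0
        g_new = grad(u_new)
        du, dg = u_new - u, g_new - g
        if du @ du > 0:                          # Barzilai-Borwein (secant) step, clipped
            alpha = np.clip((du @ dg) / (du @ du), alpha_min, alpha_max)
        u, g = u_new, g_new
    return u
```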

19 Poisson Noise model: Algorithms
$\ell_0$ minimization with Orthogonal Matching Pursuit (OMP).
Gaussian noise model:
$$\min_u \|Au - f\|^2 \quad \text{s.t.}\quad 0 < \|u\|_0 \le K.$$
OMP algorithm:
(1) Initialize the residual $r^0 = f$ and the index set $\Gamma_0 = [\ ]$.
(2) At step $k$, find the index $j_k$ that solves the easy optimization problem $j_k = \arg\max_{j=1,\dots,n} |\langle r^k, a_j\rangle|$.
(3) Add it to the index set: $\Gamma_{k+1} = \Gamma_k \cup \{j_k\}$.
(4) Solve a least-squares problem to obtain a new signal estimate: $u^{k+1} = \arg\min_u \|A_{\Gamma_{k+1}} u - f\|^2$.
(5) Repeat from (2) until $k = K$.
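
The standard OMP loop translates almost line for line into NumPy; the sketch below assumes the columns of $A$ are normalized, as the correlation test implicitly does.

```python
import numpy as np

def omp(A, f, K):
    """Classical OMP for  min ||Au - f||^2  s.t.  ||u||_0 <= K."""
    n = A.shape[1]
    r, support = f.copy(), []
    u = np.zeros(n)
    for _ in range(K):
        j = int(np.argmax(np.abs(A.T @ r)))     # column most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], f, rcond=None)
        u = np.zeros(n)
        u[support] = coef                        # least-squares fit on the support
        r = f - A @ u                            # update the residual
    return u
```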

20 Poisson Noise model: Algorithms
Iterative Poisson OMP algorithm:
$$\min_{u\ge0} D_{KL}(f, Au) \quad \text{s.t.}\quad 0 < \|u\|_0 \le K.$$
Initialize the index set $\Gamma_0 = [\ ]$ and the full index set $I = \{1,\dots,n\}$.
Outer iteration, $t = 0,\dots,T$:
  set $\hat\Gamma_0 = \Gamma_t$;
  inner iteration, $k = 0,\dots,K-1$:
    find $\hat u^{k+1}$ and $i_{k+1}$ by solving the subproblem
    $$[\,i_{k+1},\, \hat u^{k+1}\,] = \arg\min_{i\in\{1,\dots,n\}}\ \min_{u\ge0}\ D_{KL}\big(f,\, A_{\hat\Gamma_k\cup\{i\}}\, u\big); \qquad (3)$$
    merge the new index: $\hat\Gamma_{k+1} = \hat\Gamma_k \cup \{i_{k+1}\}$;
  find the $K$ largest elements of $\hat u^K$ and set the corresponding index set as $\Gamma_{t+1}$;
  if $\Gamma_{t+1} = \Gamma_t$, stop; otherwise continue.
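
A sketch of this procedure, with the inner nonnegative KL fit done by a few EM iterations on the restricted matrix (the slides do not specify the subproblem solver, so that choice, like the iteration counts, is an assumption). The exhaustive candidate scan makes this an expensive but faithful rendering of subproblem (3).

```python
import numpy as np

def _kl_fit(Asub, f, n_em=50, eps=1e-12):
    """Nonnegative KL fit on a fixed support via a few EM iterations."""
    u = np.ones(Asub.shape[1])
    At1 = Asub.T @ np.ones(Asub.shape[0])
    for _ in range(n_em):
        u = (u / (At1 + eps)) * (Asub.T @ (f / (Asub @ u + eps)))
    kl = np.sum(Asub @ u - f * np.log(Asub @ u + eps))
    return u, kl

def iter_pomp(A, f, K, n_outer=10):
    """Iterative Poisson OMP sketch: grow the support greedily by the KL
    criterion, keep the K largest coefficients, repeat until stable."""
    n = A.shape[1]
    support = []
    for _ in range(n_outer):
        S = list(support)
        for _ in range(K):                       # inner iteration: add K indices
            cands = [i for i in range(n) if i not in S]
            kls = [(_kl_fit(A[:, S + [i]], f)[1], i) for i in cands]
            S.append(min(kls)[1])                # index with the smallest KL value
        u_hat, _ = _kl_fit(A[:, S], f)
        order = np.argsort(u_hat)[::-1][:K]      # keep the K largest elements
        new_support = sorted(S[j] for j in order)
        if new_support == support:               # support stabilized: stop
            break
        support = new_support
    u = np.zeros(n)
    coef, _ = _kl_fit(A[:, support], f)
    u[support] = coef
    return u
```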

21 Poisson Noise model: Numerical simulations
Compressive sensing simulations.
Table: Compressive sensing reconstruction based on 5 trials, comparing EM-L1, SPIRAL, and IterPOMP by K-L distance, relative error, and time, at photon levels 1e6 and 1e4. For each test setting $(m, n, d, K) \in \{(256, 1024, 40, 4),\ (256, 1024, 400, 4),\ (256, 1024, 40, 10)\}$, we generate a Bernoulli matrix of size $m \times n$ with $d$ nonzeros in each row; the randomly generated signals have $K$ nonzeros of intensity 1e6 or 1e4. [Individual numeric entries are not recoverable from the transcription.]

22 Poisson Noise model: Numerical simulations
BLT reconstruction results with FEM-simulated data.
Figure panels: True, EM-$\ell_1$, SPIRAL, POMP, IterPOMP.

23 Poisson Noise model: Numerical simulations
Dual sources.
Figure panels: True, EM-$\ell_1$, SPIRAL, POMP, IterPOMP.

24 Poisson Noise model: Numerical simulations
More source settings.
Table: Synthesized test examples with the IterPOMP algorithm for the $\ell_0$ regularization model. Sources are given as (x, y, z, intensity); the RelErr exponents and the Time column are truncated in the transcription.

Test 1, RelErr 5.50e:
  (0.5, 0.5, 0.5, 5.0e+6)  ->  (0.5, 0.5, 0.5, 5.0e+6)
  (3.5, -3.5, 2.5, 3.0e+6) ->  (3.5, -3.5, 2.5, 3.0e+6)
  (-2.5, 2.5, 4.5, 1.0e+6) ->  (-2.5, 2.5, 4.5, 1.0e+6)
  (2.5, 1.5, 5.5, 2.0e+6)  ->  (2.5, 1.5, 5.5, 2.0e+6)

Test 2, RelErr 8.52e:
  (0.5, 0.5, 0.5, 5.0e+6)  ->  (0.5, 0.5, 0.5, 5.0e+6)
  (3.5, -3.5, 2.5, 3.0e+6) ->  (3.5, -3.5, 2.5, 3.0e+6)
  (0.5, 0.5, 2.5, 2.0e+6)  ->  (0.5, 0.5, 1.5, 4.4e+6)
  (-2.5, 2.5, 4.5, 1.0e+6) ->  (-2.5, 2.5, 4.5, 1.0e+6)

Test 3:
  (0.5, 0.5, 0.5, 5.0e+4)  ->  (0.5, 0.5, 0.5, 5.0e+4)
  (3.5, -3.5, 2.5, 3.0e+4) ->  (3.5, -3.5, 2.5, 3.0e+4)
  (-2.5, 2.5, 4.5, 1.0e+4) ->  (-2.5, 2.5, 4.5, 9.9e+3)
  (2.5, 1.5, 5.5, 2.0e+4)  ->  (2.5, 1.5, 5.5, 2.0e+4)

Test 4, RelErr 2.72e:
  (0.5, 0.5, 0.5, 5.0e+4)  ->  (0.5, 0.5, 1.5, 5.6e+4)
  (3.5, -3.5, 2.5, 3.0e+4) ->  (3.5, -3.5, 2.5, 3.0e+4)
  (0.5, 0.5, 2.5, 2.0e+4)  ->  (0.5, 0.5, -0.5, 1.4e+4)
  (-2.5, 2.5, 4.5, 1.0e+4) ->  (-2.5, 2.5, 4.5, 1.0e+4)

25 Poisson Noise model: Numerical simulations
Test with larger $K$.
Table: true vs. reconstructed sources, as printed (column alignment is partially garbled in the transcription; RelErr exponents and the Time column are truncated).

Test 1:
  (0.5, 0.5, 0.5, 5.0e+6)  ->  (0.5, 0.5, 0.5, 4.8e+6)
  (3.5, -3.5, 2.5, 3.0e+6) ->  (3.5, -3.5, 2.5, 3.0e+6)
  (-2.5, 2.5, 4.5, 1.0e+6) ->  (-2.5, 2.5, 4.5, 1.0e+6)
  (2.5, 1.5, 5.5, 2.0e+6)  ->  (2.5, 1.5, 5.5, 2.0e+6)
  (1.5, -0.5, 2.5, 1.2e+5) ->  (0.5, 0.5, -1.5, 5.6e+4)

Test 2, RelErr 3.32e:
  (0.5, 0.5, 0.5, 5.0e+6)  ->  (0.5, 0.5, 1.5, 4.9e+6)
  (3.5, -3.5, 2.5, 3.0e+6) ->  (3.5, -3.5, 2.5, 2.9e+6)
  (0.5, 0.5, 2.5, 2.0e+6)  ->  (0.5, 0.5, -0.5, 1.2e+6)
  (-2.5, 2.5, 4.5, 1.0e+6) ->  (-2.5, 2.5, 4.5, 1.0e+6)
  (-0.5, 1.5, 0.5, 2.9e+5) ->  (1.5, -0.5, 1.5, 7.1e+5)

Test 3, RelErr 1.21e:
  (0.5, 0.5, 0.5, 5.0e+4)  ->  (0.5, 0.5, 0.5, 5.0e+4)
  (3.5, -3.5, 2.5, 3.0e+4) ->  (3.5, -3.5, 2.5, 3.0e+4)
  (4.5, -2.5, 2.5, 5.5e+2) ->  (1.5, -0.5, 2.5, 2.2e+2)
  (-2.5, 2.5, 4.5, 1.0e+4) ->  (-2.5, 2.5, 4.5, 9.9e+3)
  (2.5, 1.5, 5.5, 2.0e+4)  ->  (2.5, 1.5, 5.5, 2.0e+4)

Test 4, RelErr 1.21e:
  (0.5, 0.5, 0.5, 5.0e+4)  ->  (0.5, 0.5, 1.5, 5.2e+4)
  (3.5, -3.5, 2.5, 3.0e+4) ->  (3.5, -3.5, 2.5, 2.9e+4)
  (-0.5, 1.5, 0.5, 4.1e+3)
  (0.5, 0.5, 2.5, 2.0e+4)  ->  (0.5, 0.5, -1.5, 6.2e+3)
  (-2.5, 2.5, 4.5, 1.0e+4) ->  (-2.5, 2.5, 4.5, 9.9e+3)
  (1.5, -0.5, 1.5, 7.8e+3)

26 Poisson Noise model: Numerical simulations
Monte-Carlo simulation recovery.
Figure: Triple sources with MC-generated data. Left: true sources. Right: IterPOMP recovery. True locations and intensities: (-2.5, 2.5, 4.5, 1e+4), (0.5, 0.5, 0.5, 5e+4), (3.5, -3.5, 2.5, 3e+4). The $\ell_0$ algorithm has faithfully reconstructed the locations.

27 Poisson Noise model: Numerical simulations
Monte-Carlo simulation recovery.
Figure: Triple sources with MC-generated data, true vs. reconstructed intensities (vertical axis scaled by $10^4$). True locations and intensities: (-2.5, 2.5, 4.5, 1e+4), (0.5, 0.5, 0.5, 5e+4), (3.5, -3.5, 2.5, 3e+4). The $\ell_0$ algorithm has faithfully reconstructed the locations.

28 Poisson Noise model: Numerical simulations
More tests.
Table: MC recovery examples with the IterPOMP algorithm (RelErr exponents and the Time column are truncated in the transcription).

Test 1, RelErr 6.64e:
  (0.5, 0.5, 0.5, 5.0e+6)  ->  (0.5, 0.5, 0.5, 1.7e+6)
  (3.5, -3.5, 2.5, 3.0e+6) ->  (3.5, -3.5, 2.5, 1.0e+6)
  (-2.5, 2.5, 4.5, 1.0e+6) ->  (-2.5, 2.5, 4.5, 3.4e+5)

Test 2, RelErr 6.63e:
  (0.5, 0.5, 0.5, 5.0e+4)  ->  (0.5, 0.5, 0.5, 1.7e+4)
  (3.5, -3.5, 2.5, 3.0e+4) ->  (3.5, -3.5, 2.5, 1.0e+4)
  (-2.5, 2.5, 4.5, 1.0e+4) ->  (-2.5, 2.5, 4.5, 3.3e+3)

Test 3, RelErr 6.78e:
  (0.5, 0.5, 0.5, 5.0e+6)  ->  (0.5, 0.5, 0.5, 1.9e+6)
  (3.5, -3.5, 2.5, 3.0e+6) ->  (3.5, -3.5, 2.5, 1.0e+6)
  (0.5, 0.5, 2.5, 2.0e+6)  ->  (0.5, 0.5, 3.5, 3.7e+5)
  (-2.5, 2.5, 4.5, 1.0e+6) ->  (-2.5, 2.5, 4.5, 3.4e+5)

Test 4, RelErr 6.68e:
  (0.5, 0.5, 0.5, 5.0e+4)  ->  (0.5, 0.5, 0.5, 1.6e+4)
  (3.5, -3.5, 2.5, 3.0e+4) ->  (3.5, -3.5, 2.5, 9.9e+3)
  (0.5, 0.5, 2.5, 2.0e+4)  ->  (0.5, 0.5, 2.5, 7.4e+3)
  (-2.5, 2.5, 4.5, 1.0e+4) ->  (-2.5, 2.5, 4.5, 3.3e+3)

29 Conclusions and future work
Sparsity improves localization accuracy.
$\ell_0$ regularization is more robust than $\ell_1$ regularization; $\ell_1$ fails due to the high correlation (ill-conditioning) of the forward system matrix. A higher-order minimization algorithm for the $\ell_1$ Poisson model?
Poisson OMP is efficient for reconstruction problems with a small number of sources; it is parallelizable and needs fewer parameters than $\ell_1$ minimization.
Poisson OMP is a greedy algorithm; its efficiency needs to be improved by a faster proximal solution.
Theoretical study of the $\ell_1$ and $\ell_0$ reconstruction models for the BLT problem.
Tests on real data and on incomplete data with more complicated 3D bodies.
Extension to total variation or framelet regularization, together with better FEM discretization and more complicated 3D shapes.
