Sparse recovery for spherical harmonic expansions

Rachel Ward, Courant Institute, New York University. Workshop on Sparsity and Cosmology, Nice, May 31, 2011.

Cosmic Microwave Background Radiation (CMB) map. Temperature is measured as T(θ, ϕ) = Σ_{l=0}^{∞} Σ_{k=−l}^{l} a_{(l,k)} Y_l^k(θ, ϕ), where the Y_l^k are spherical harmonics. Red band: measurements are corrupted by galactic signal.

CMB map is compressible in spherical harmonics. Consider the coefficient vector a = (a_{(l,k)}) in the truncated expansion T(θ, ϕ) ≈ Σ_{l=0}^{n} Σ_{k=−l}^{l} a_{(l,k)} Y_l^k(θ, ϕ). This vector is predicted and observed to be compressible.

Spherical harmonics: Fourier analysis on the sphere. The Y_l^k are products of complex exponentials and orthogonal Jacobi polynomials. The Y_l^k are orthonormal with respect to the spherical surface measure sin(ϕ) dϕ dθ.
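
For a quick numerical sanity check of this orthonormality one can integrate products of spherical harmonics against sin(ϕ) dϕ dθ. This is only an illustrative sketch; it assumes SciPy is available, and note that scipy.special.sph_harm takes the order before the degree and the azimuthal angle before the polar angle.

    import numpy as np
    from scipy.special import sph_harm
    from scipy.integrate import dblquad

    # Inner product <Y_l1^k1, Y_l2^k2> with respect to sin(phi) dphi dtheta,
    # computed by numerical quadrature over theta in [0, 2pi), phi in [0, pi].
    def inner(l1, k1, l2, k2):
        def integrand(phi, theta, part):
            val = (sph_harm(k1, l1, theta, phi)
                   * np.conj(sph_harm(k2, l2, theta, phi)) * np.sin(phi))
            return val.real if part == "re" else val.imag
        re, _ = dblquad(integrand, 0, 2 * np.pi, 0, np.pi, args=("re",))
        im, _ = dblquad(integrand, 0, 2 * np.pi, 0, np.pi, args=("im",))
        return re + 1j * im

    print(inner(2, 1, 2, 1))   # ~ 1 (unit norm)
    print(inner(2, 1, 3, 0))   # ~ 0 (orthogonality)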

CMB map inpainting via l1-minimization (Abrial, Moudden, Starck, Fadili, Delabrouille, Nguyen 08): propose full-sky CMB map inpainting from partial measurements T(θ_j, ϕ_j). Obtain coefficients â = (â_{(l,k)}) by solving the l1-minimization problem â = arg min ‖c‖_1 subject to Σ_{l,k} c_{(l,k)} Y_l^k(θ_j, ϕ_j) = T(θ_j, ϕ_j), j = 1, …, m, where the sum runs over degrees l up to a prescribed maximal degree D = N^{1/2}. Theoretical justification?

The spherical sampling matrix. In matrix form, the constraints in the l1-minimization problem read Φc = T, where Φ ∈ C^{m×N} is the spherical sampling matrix whose rows are indexed by the sampling points and whose columns are indexed by the basis functions,

Φ = [ 1  …  Y_l^k(θ_1, ϕ_1)  …
      1  …  Y_l^k(θ_2, ϕ_2)  …
      ⋮
      1  …  Y_l^k(θ_m, ϕ_m)  … ],

i.e. Φ_{j,(l,k)} = Y_l^k(θ_j, ϕ_j). We assume these measurements are underdetermined: m < N. Compressed sensing: if Φ acts as an approximate isometry on sparse vectors, then compressible vectors are stably recovered via l1-minimization.
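
A sketch of how such a sampling matrix could be assembled numerically (assuming NumPy and SciPy; the column ordering and the helper name spherical_sampling_matrix are choices made here, not taken from the slides):

    import numpy as np
    from scipy.special import sph_harm

    def spherical_sampling_matrix(points, D):
        """Rows: sampling points (theta_j, phi_j); columns: Y_l^k for 0 <= l <= D, |k| <= l."""
        cols = [(l, k) for l in range(D + 1) for k in range(-l, l + 1)]
        Phi = np.empty((len(points), len(cols)), dtype=complex)
        for j, (theta, phi) in enumerate(points):
            for i, (l, k) in enumerate(cols):
                # scipy convention: sph_harm(order, degree, azimuth, polar)
                Phi[j, i] = sph_harm(k, l, theta, phi)
        return Phi, cols

    # m sampling points drawn uniformly from the product measure dphi dtheta
    rng = np.random.default_rng(0)
    m, D = 200, 16                     # N = (D+1)^2 = 289 columns > 200 rows: underdetermined
    points = np.column_stack([rng.uniform(0, 2 * np.pi, m),   # theta (azimuth)
                              rng.uniform(0, np.pi, m)])      # phi (polar)
    Phi, cols = spherical_sampling_matrix(points, D)
    print(Phi.shape)                   # (200, 289)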

Restricted Isometry Property (RIP). Definition [Candès, Romberg, Tao 06]: the restricted isometry constant δ_s of a matrix Φ ∈ C^{m×N} is the smallest number such that for all s-sparse x ∈ C^N,
(1 − δ_s) ‖x‖_2^2 ≤ ‖Φx‖_2^2 ≤ (1 + δ_s) ‖x‖_2^2.
It remains open to construct deterministic matrices satisfying the RIP in the regime m ≈ s log^p(N). If Φ ∈ R^{m×N} has i.i.d. Gaussian or Bernoulli entries and m ≥ C δ^{−2} s log(N/s), then δ_s ≤ δ with high probability [CRT 06, RV 08, R 09]. If m = O(s log^4(N)), the RIP holds w.h.p. for Φ associated to bounded orthonormal systems.
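
For tiny instances the constant δ_s can be computed exactly by enumerating all supports, which gives some feel for the definition (a brute-force sketch, exponential in N, assuming NumPy; the helper name is ours):

    import numpy as np
    from itertools import combinations

    def restricted_isometry_constant(Phi, s):
        """Exact delta_s by enumerating all supports of size s (only feasible for tiny N)."""
        N = Phi.shape[1]
        delta = 0.0
        for S in combinations(range(N), s):
            G = Phi[:, S].conj().T @ Phi[:, S]            # Gram matrix on the support
            eigs = np.linalg.eigvalsh(G)
            delta = max(delta, abs(eigs[0] - 1), abs(eigs[-1] - 1))
        return delta

    rng = np.random.default_rng(1)
    m, N, s = 40, 12, 2
    Phi = rng.standard_normal((m, N)) / np.sqrt(m)        # i.i.d. Gaussian, normalized by 1/sqrt(m)
    print(restricted_isometry_constant(Phi, s))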

RIP matrices are good for sparse recovery [CRT 06, C 08, Foucart 10]. If Φ ∈ C^{m×N} has δ_{2s} ≤ δ_0 (δ_0 = .46 is valid), y = Φx is observed, and x̂ = arg min_z ‖z‖_1 subject to Φz = y, then ‖x − x̂‖_2 ≤ C ‖x − x_s‖_1 / √s, where x_s is the best s-term approximation to x. If x is s-sparse, then x̂ = x is recovered exactly. If x is well-approximated by an s-sparse vector, then x̂ ≈ x.
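
A minimal sketch of this recovery statement in action, with an i.i.d. Gaussian matrix standing in for a generic RIP matrix (it assumes the cvxpy package for the l1-minimization step):

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(2)
    m, N, s = 80, 256, 10
    Phi = rng.standard_normal((m, N)) / np.sqrt(m)

    x = np.zeros(N)
    x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)   # s-sparse ground truth
    y = Phi @ x

    z = cp.Variable(N)
    cp.Problem(cp.Minimize(cp.norm1(z)), [Phi @ z == y]).solve()
    print(np.linalg.norm(z.value - x))    # close to 0 when exact recovery occurs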

Sparse recovery for bounded orthonormal systems. Consider the sampling matrix

Ψ = [ ψ_1(x_1)  ψ_2(x_1)  …  ψ_N(x_1)
      ⋮
      ψ_1(x_m)  ψ_2(x_m)  …  ψ_N(x_m) ].

Suppose the (ψ_j)_{j=1}^N, defined on a compact domain D, are orthonormal with respect to a measure dν; suppose x_1, …, x_m ∈ D are chosen i.i.d. from dν; and suppose max_{j=1,…,N} ‖ψ_j‖_∞ ≤ K. Theorem (Rudelson, Vershynin 08; Rauhut 09). If m ≥ C K^2 δ^{−2} s log^3(s) log(N), then the matrix (1/√m) Ψ satisfies δ_s ≤ δ with probability at least 1 − N^{−γ log^3(s)}.

Examples of bounded orthonormal systems. Fourier: ψ_j(x) = e^{2πijx}, D = [0, 1], dν = dx, K = 1 (also a discrete analog). Chebyshev polynomials T_j(x): D = [−1, 1], dν = (1 − x^2)^{−1/2} dx, K = √2. The RIP for Ψ means that functions which admit s-sparse expansions in the ψ_j can be recovered from their values at m sample points provided m ≥ C K^2 s log^3(s) log(N), and functions with compressible expansions can be recovered approximately.
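
The Chebyshev example can be tried directly; the sketch below (assuming NumPy and cvxpy) samples from the Chebyshev measure, builds Ψ from the orthonormalized polynomials √2 T_j, and recovers a sparse coefficient vector by l1-minimization:

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(3)
    m, N, s = 100, 512, 8

    # Sample x_1,...,x_m i.i.d. from the Chebyshev measure pi^{-1}(1-x^2)^{-1/2} dx on [-1,1]
    x_pts = np.cos(np.pi * rng.uniform(size=m))

    # Orthonormal Chebyshev system: psi_0 = 1, psi_j = sqrt(2) * T_j for j >= 1
    Psi = np.empty((m, N))
    Psi[:, 0] = 1.0
    for j in range(1, N):
        Psi[:, j] = np.sqrt(2) * np.cos(j * np.arccos(x_pts))

    c = np.zeros(N)
    c[rng.choice(N, s, replace=False)] = rng.standard_normal(s)   # sparse Chebyshev coefficients
    samples = Psi @ c                                             # function values at the sample points

    z = cp.Variable(N)
    cp.Problem(cp.Minimize(cp.norm1(z)), [Psi @ z == samples]).solve()
    print(np.linalg.norm(z.value - c))    # close to 0 when sparse recovery succeeds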

Examples of bounded orthonormal systems. [Rauhut, W 10]: the preconditioned Legendre system Q(x) L_j(x), where the L_j are normalized Legendre polynomials, Q(x) = C (1 − x^2)^{1/4}, dν(x) = π^{−1} (1 − x^2)^{−1/2} dx, and K = 2. Q(x) is a preconditioner; this implies sparse recovery in the Legendre system.
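
A small numerical illustration of the preconditioning effect (assuming NumPy; the normalization √(j + 1/2) makes the Legendre polynomials orthonormal with respect to dx on [−1, 1], and the constant √(π/2) in Q is chosen here only for illustration):

    import numpy as np
    from numpy.polynomial import legendre

    xs = np.linspace(-1, 1, 20001)
    Q = (np.pi / 2) ** 0.5 * (1 - xs ** 2) ** 0.25        # preconditioner (illustrative constant)

    for j in [1, 5, 20, 100]:
        coef = np.zeros(j + 1)
        coef[j] = 1.0
        Lj = np.sqrt(j + 0.5) * legendre.legval(xs, coef)  # orthonormal Legendre polynomial
        print(j, np.max(np.abs(Lj)), np.max(np.abs(Q * Lj)))
        # |L_j| grows like sqrt(j) near the endpoints, while Q*L_j stays uniformly bounded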

Examples of bounded orthonormal systems. [Rauhut, W 10]: more generally, the preconditioned Jacobi system Q_α(x) p_j^α(x), where the p_j^α are polynomials orthonormal w.r.t. dν(x) = (1 − x^2)^α dx. [Krasikov 07]: ‖Q_α p_j^α‖_∞ ≤ C α^{1/4}, where Q_α(x) = (1 − x^2)^{α/2 + 1/4}. Sampling w.r.t. dν(x) = (1 − x^2)^{−1/2} dx then gives K = C α^{1/4}. That is, Chebyshev sampling is universal for recovering sparse polynomial expansions.
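
The same effect can be checked in the ultraspherical case; the sketch below (assuming SciPy) orthonormalizes the Jacobi polynomials P_j^{(α,α)} using the standard norm formula and applies the preconditioner Q_α:

    import numpy as np
    from scipy.special import eval_jacobi, gammaln

    def orthonormal_ultraspherical(j, alpha, x):
        """p_j^alpha: orthonormal w.r.t. (1-x^2)^alpha dx on [-1,1]."""
        lognorm2 = ((2 * alpha + 1) * np.log(2) - np.log(2 * j + 2 * alpha + 1)
                    + 2 * gammaln(j + alpha + 1) - gammaln(j + 2 * alpha + 1) - gammaln(j + 1))
        return eval_jacobi(j, alpha, alpha, x) / np.exp(0.5 * lognorm2)

    xs = np.linspace(-1, 1, 20001)
    alpha = 3.0
    Q = (1 - xs ** 2) ** (alpha / 2 + 0.25)               # Q_alpha from the slide
    for j in [1, 5, 20, 100]:
        p = orthonormal_ultraspherical(j, alpha, xs)
        print(j, np.max(np.abs(Q * p)))                    # stays bounded in j; grows only like alpha^{1/4}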

The spherical harmonics. The spherical harmonics can be written as Y_l^k(θ, ϕ) = e^{ikθ} p_{l−|k|}^{|k|}(cos ϕ) (sin ϕ)^{|k|}, with |k| ≤ l, l ≥ 0, (θ, ϕ) ∈ [0, 2π) × [0, π]. Growth rates for complex exponentials and Jacobi polynomials give sup_{0 ≤ l ≤ N^{1/2}, |k| ≤ l} ‖(sin ϕ)^{1/2} Y_l^k(θ, ϕ)‖_∞ ≤ C N^{1/8}. This suggests the strategy of sampling uniformly from the product measure dϕ dθ.
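
This growth rate can be checked on a grid (an illustrative sketch assuming SciPy; the maximum over θ is trivial since |Y_l^k| does not depend on θ):

    import numpy as np
    from scipy.special import sph_harm

    phi = np.linspace(1e-6, np.pi - 1e-6, 2000)           # polar angle grid
    theta = 0.0                                           # |Y_l^k| does not depend on theta

    for l in [4, 16, 64, 128]:
        sup = max(np.max(np.abs(np.sin(phi) ** 0.5 * sph_harm(k, l, theta, phi)))
                  for k in range(0, l + 1))
        print(l, sup, sup / l ** 0.25)                     # sup grows roughly like l^{1/4}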

Location of sampling points matters. Figure: phase transitions for sparse recovery on the sphere; panels (a) and (b) plot the sparsity ratio s/m against the undersampling ratio m/N. We form random s-sparse coefficient vectors c = (c_{l,k}) of degree D = N^{1/2} = 16 and choose m sampling points from (a) the product measure dϕ dθ and (b) the uniform spherical measure sin(ϕ) dϕ dθ. Black indicates recovery.
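
The experiment behind this figure can be sketched as follows (one trial per sampling measure, using the spherical_sampling_matrix helper from the earlier sketch together with cvxpy; for simplicity the coefficients are taken real and the preconditioning of the next slide is omitted):

    import numpy as np
    import cvxpy as cp
    # uses spherical_sampling_matrix from the sketch above

    rng = np.random.default_rng(4)
    D, m, s = 15, 120, 10                                  # degrees 0..15, so N = 256 columns

    def trial(product_measure):
        if product_measure:
            phi = rng.uniform(0, np.pi, m)                 # (a) uniform in the angle phi
        else:
            phi = np.arccos(rng.uniform(-1, 1, m))         # (b) uniform spherical measure sin(phi) dphi
        theta = rng.uniform(0, 2 * np.pi, m)
        Phi, cols = spherical_sampling_matrix(np.column_stack([theta, phi]), D)

        c = np.zeros(len(cols))
        c[rng.choice(len(cols), s, replace=False)] = rng.standard_normal(s)
        y = Phi @ c                                        # exact samples of the sparse expansion

        z = cp.Variable(len(cols))
        constraints = [Phi.real @ z == y.real, Phi.imag @ z == y.imag]
        cp.Problem(cp.Minimize(cp.norm1(z)), constraints).solve()
        return np.linalg.norm(z.value - c) < 1e-4          # did l1-minimization recover c?

    print("product measure:", trial(True), "| spherical measure:", trial(False))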

Sparse recovery in spherical harmonic systems. Theorem (Rauhut, W 11). Suppose that (θ_1, ϕ_1), …, (θ_m, ϕ_m), with m ≥ C s log^3(s) N^{1/4} log(N), are drawn independently from the uniform measure on B = [0, π] × [0, 2π). Let Φ be the m × N spherical sampling matrix and let QΦ be its preconditioned version. With high probability the following holds for all spherical harmonic polynomials g(θ, ϕ) = Σ_{l=0}^{N^{1/2}−1} Σ_{k=−l}^{l} c_{l,k} Y_l^k(θ, ϕ). Suppose that noisy sample values y_j = g(θ_j, ϕ_j) + η_j are observed, with ‖η‖_∞ ≤ ε. Let ĉ = arg min ‖z‖_1 subject to ‖QΦz − Qy‖_2 ≤ √m ε. Then ‖c − ĉ‖_2 ≤ C_1 σ_s(c)_1 / √s + C_2 ε.
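
A sketch of the recovery program in the theorem (assuming NumPy, cvxpy, and the spherical_sampling_matrix helper from above; the preconditioner Q is taken as the diagonal matrix with entries (sin ϕ_j)^{1/2}, the coefficients are real for simplicity, and the complex residual is handled by stacking real and imaginary parts):

    import numpy as np
    import cvxpy as cp
    # uses spherical_sampling_matrix from the earlier sketch

    rng = np.random.default_rng(5)
    D, m, s, eps = 15, 150, 8, 1e-3                        # degrees 0..15, so N = 256

    theta = rng.uniform(0, 2 * np.pi, m)
    phi = rng.uniform(0, np.pi, m)                         # uniform sampling of the angles
    Phi, cols = spherical_sampling_matrix(np.column_stack([theta, phi]), D)
    q = np.sin(phi) ** 0.5                                 # preconditioner Q = diag((sin phi_j)^{1/2})

    c = np.zeros(len(cols))
    c[rng.choice(len(cols), s, replace=False)] = rng.standard_normal(s)
    y = Phi @ c + eps * rng.uniform(-1, 1, m)              # noisy samples, |eta_j| <= eps

    # Stack real and imaginary parts so that ||A z - b||_2 = ||Q(Phi z - y)||_2 for real z
    QPhi = q[:, None] * Phi
    A = np.vstack([QPhi.real, QPhi.imag])
    b = np.concatenate([(q * y).real, (q * y).imag])

    z = cp.Variable(len(cols))
    cp.Problem(cp.Minimize(cp.norm1(z)),
               [cp.norm(A @ z - b, 2) <= np.sqrt(m) * eps]).solve()
    print(np.linalg.norm(z.value - c))                     # small when stable recovery holds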

Conclusions. Our results provide a measure of theoretical justification for the good numerical results obtained for CMB map inpainting via l1-minimization. They may also be of interest for other problems in geophysics, astronomy, and medical imaging.

Open problems. For practical implementation, one would rather sample from a discrete grid. In experiments, the sparse recovery results for discrete and continuous sampling are indistinguishable. Proof?

Open problems. In our proof, we require m ≳ s N^{1/4} log^4(N) sampling points (rows in Φ) for l1-minimization to recover s-sparse spherical polynomials of degree N^{1/2}. We should be able to improve this to m ≳ s log^p(N). In practice, different models of sparsity may be better suited to the sphere, such as rotationally invariant sparsity sets, or sparsity in certain linear combinations of spherical harmonic coefficients.