Strengthened Sobolev inequalities for a random subspace of functions

Rachel Ward, University of Texas at Austin, April 2013

Discrete Sobolev inequalities

Proposition (Sobolev inequality for discrete images). Let $x \in \mathbb{R}^{N \times N}$ be zero-mean. Then
$$\Big( \sum_{j=1}^{N} \sum_{k=1}^{N} |x_{j,k}|^2 \Big)^{1/2} \le \sum_{j=1}^{N} \sum_{k=1}^{N} \big[\, |x_{j+1,k} - x_{j,k}| + |x_{j,k+1} - x_{j,k}| \,\big],$$
or, in short, $\|x\|_2 \le \|x\|_{TV}$.

Theorem (New! Strengthened Sobolev inequality). With probability $\ge 1 - e^{-cm}$, the following holds for all images $x \in \mathbb{R}^{N \times N}$ in the null space of an $m \times N^2$ random Gaussian matrix:
$$\|x\|_2 \lesssim \frac{[\log N]^{3/2}}{\sqrt{m}}\, \|x\|_{TV}.$$

Joint work with Deanna Needell. Research supported in part by the ONR and the Alfred P. Sloan Foundation.

Motivation: Image processing

Images are compressible in discrete gradient

We define the discrete directional derivatives of an image $x \in \mathbb{R}^{N \times N}$ as
$$x_u : \mathbb{R}^{N \times N} \to \mathbb{R}^{(N-1) \times N}, \qquad (x_u)_{j,k} = x_{j,k} - x_{j-1,k},$$
$$x_v : \mathbb{R}^{N \times N} \to \mathbb{R}^{N \times (N-1)}, \qquad (x_v)_{j,k} = x_{j,k} - x_{j,k-1},$$
and the discrete gradient operator
$$\nabla x = (x_u, x_v) \in \mathbb{R}^{N \times N \times 2}.$$
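For concreteness, these operators translate directly into code. A minimal NumPy sketch (not part of the talk; the function names are illustrative):

```python
import numpy as np

def discrete_gradient(x):
    """Discrete directional derivatives of an N x N image x.

    Returns x_u with shape (N-1, N) and x_v with shape (N, N-1),
    matching the definitions above.
    """
    x_u = x[1:, :] - x[:-1, :]   # (x_u)_{j,k} = x_{j,k} - x_{j-1,k}
    x_v = x[:, 1:] - x[:, :-1]   # (x_v)_{j,k} = x_{j,k} - x_{j,k-1}
    return x_u, x_v

def tv_norm(x):
    """Anisotropic total variation seminorm ||x||_TV = ||grad x||_1."""
    x_u, x_v = discrete_gradient(x)
    return np.abs(x_u).sum() + np.abs(x_v).sum()
```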

Images are also compressible in wavelet bases. Reconstruction of Boats from 10% of its Haar wavelet coefficients. Here $H : \mathbb{R}^{N \times N} \to \mathbb{R}^{N \times N}$ is the bivariate Haar wavelet transform. Figure: first few Haar basis functions.

Notation

The image $\ell_p$-norm is $\|u\|_p := \big( \sum_{j=1}^{N} \sum_{k=1}^{N} |u_{j,k}|^p \big)^{1/p}$.
$u$ is $s$-sparse if $\|u\|_0 := \#\{(j,k) : u_{j,k} \neq 0\} \le s$.
$u_s = \arg\min_{z : \|z\|_0 = s} \|u - z\|$ is the best $s$-sparse approximation to $u$.
$\sigma_s(u)_p = \|u - u_s\|_p$ is the $s$-sparse approximation error.

Natural images are "compressible":
$\sigma_s(\nabla x)_2 = \|\nabla x - (\nabla x)_s\|_2$ decays quickly in $s$;
$\sigma_s(Hx)_2 = \|Hx - (Hx)_s\|_2$ decays quickly in $s$.
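The best $s$-sparse approximation and the error $\sigma_s(u)_p$ are simple to compute: keep the $s$ largest-magnitude entries and measure what remains. A minimal sketch, continuing the NumPy conventions above:

```python
import numpy as np

def best_s_sparse(u, s):
    """Best s-sparse approximation u_s: keep the s largest-magnitude entries."""
    flat = u.ravel()
    keep = np.argsort(np.abs(flat))[-s:]   # indices of the s largest entries
    u_s = np.zeros_like(flat)
    u_s[keep] = flat[keep]
    return u_s.reshape(u.shape)

def sigma_s(u, s, p=2):
    """s-sparse approximation error sigma_s(u)_p = ||u - u_s||_p."""
    return np.linalg.norm((u - best_s_sparse(u, s)).ravel(), ord=p)
```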

Imaging via compressed sensing

If images are so compressible (nearly low-dimensional), do we really need to know all of the pixel values for accurate reconstruction, or can we get away with taking a smaller number of linear measurements?

MRI: speeding up scans by exploiting sparsity? In Magnetic Resonance Imaging (MRI), rotating magnetic field measurements are 2D Fourier transform measurements:
$$y_{k_1,k_2} = \big( \mathcal{F}(x) \big)_{k_1,k_2} = \frac{1}{N} \sum_{j_1,j_2} x_{j_1,j_2}\, e^{-2\pi i (k_1 j_1 + k_2 j_2)/N}, \qquad -N/2 + 1 \le k_1, k_2 \le N/2.$$
Each measurement takes a certain amount of time, so a reduced number of samples $m \ll N^2$ means a reduced MRI scan time.
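A sketch of such subsampled Fourier measurements using NumPy's FFT (not from the talk; the 1/N normalization follows the slide, and negative frequencies are handled via the N-periodicity of the DFT):

```python
import numpy as np

def fourier_measurements(x, freq_pairs):
    """Subsampled 2D DFT measurements y_{k1,k2} = (F x)_{k1,k2}.

    x          : N x N image
    freq_pairs : iterable of (k1, k2) frequency pairs to sample
    """
    N = x.shape[0]
    F = np.fft.fft2(x) / N   # normalized 2D DFT, as on the slide
    # F is N-periodic in its indices, so k % N retrieves negative frequencies.
    return np.array([F[k1 % N, k2 % N] for (k1, k2) in freq_pairs])
```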

Compressed sensing set-up
1. Signal (or image) of interest $f \in \mathbb{R}^d$ which is sparse: $\|f\|_0 \le s$.
2. Measurement operator $A : \mathbb{R}^d \to \mathbb{R}^m$ with $m \ll d$: $y = Af$.
3. Problem: using the sparsity of $f$, can we reconstruct $f$ efficiently from $y$?

A sufficient condition: the Restricted Isometry Property [Candès-Romberg-Tao 2006]

$A : \mathbb{R}^d \to \mathbb{R}^m$ satisfies the Restricted Isometry Property (RIP) of order $s$ if
$$\tfrac{3}{4}\, \|f\|_2 \le \|Af\|_2 \le \tfrac{5}{4}\, \|f\|_2 \qquad \text{for all } s\text{-sparse } f.$$
Gaussian or Bernoulli measurement matrices satisfy the RIP with high probability when $m \gtrsim s \log d$. Random subsets of rows from Fourier and other matrices with a fast multiply have a similar property when $m \gtrsim s \log^4 d$.
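Certifying the RIP for a given matrix is NP-hard in general, but the near-isometry is easy to observe empirically. A small Monte Carlo sanity check on random $s$-sparse vectors (not from the talk; the constant in $m \sim s \log d$ is ad hoc):

```python
import numpy as np

rng = np.random.default_rng(0)
d, s = 1000, 10
m = int(4 * s * np.log(d))                     # m ~ s log d; the factor 4 is ad hoc
A = rng.standard_normal((m, d)) / np.sqrt(m)   # Gaussian entries of variance 1/m

# Spot-check ||Af||_2 / ||f||_2 on random s-sparse vectors.
ratios = []
for _ in range(1000):
    f = np.zeros(d)
    support = rng.choice(d, size=s, replace=False)
    f[support] = rng.standard_normal(s)
    ratios.append(np.linalg.norm(A @ f) / np.linalg.norm(f))

print(f"min ratio {min(ratios):.3f}, max ratio {max(ratios):.3f}")  # near [3/4, 5/4]
```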

$\ell_1$-minimization for sparse recovery [Candès-Romberg-Tao, Donoho]

Let $y = Af$, let $A$ satisfy the Restricted Isometry Property of order $s$, and set
$$\hat{f} = \arg\min_g \|g\|_1 \quad \text{such that } Ag = y.$$
Then:
1. Exact recovery: if $f$ is $s$-sparse, then $\hat{f} = f$.
2. Stable recovery: if $f$ is approximately $s$-sparse, then $\hat{f} \approx f$. Precisely,
$$\|f - \hat{f}\|_1 \lesssim \sigma_s(f)_1, \qquad \|f - \hat{f}\|_2 \lesssim \frac{1}{\sqrt{s}}\, \sigma_s(f)_1.$$
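The program $\min \|g\|_1$ subject to $Ag = y$ is a linear program in disguise. A self-contained sketch of the standard reformulation with auxiliary variables $t$ satisfying $-t \le g \le t$ (using SciPy; the helper name is mine):

```python
import numpy as np
from scipy.optimize import linprog

def l1_minimize(A, y):
    """Solve min ||g||_1 s.t. Ag = y via the LP: min sum(t), -t <= g <= t."""
    m, d = A.shape
    c = np.concatenate([np.zeros(d), np.ones(d)])   # objective: sum of t
    I = np.eye(d)
    A_ub = np.block([[I, -I], [-I, -I]])            # g - t <= 0 and -g - t <= 0
    b_ub = np.zeros(2 * d)
    A_eq = np.hstack([A, np.zeros((m, d))])         # Ag = y
    bounds = [(None, None)] * d + [(0, None)] * d   # g free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds)
    return res.x[:d]
```

With the Gaussian matrix from the previous sketch and an $s$-sparse $f$, `l1_minimize(A, A @ f)` returns $f$ up to solver tolerance, illustrating the exact-recovery guarantee.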

$\ell_1$-minimization in a sparsifying basis

Similarly, if $B$ is an orthonormal basis in which $f$ is assumed sparse, if $AB^{-1}$ satisfies the Restricted Isometry Property of order $s$, and if
$$\hat{f} = \arg\min_g \|Bg\|_1 \ \text{ such that } Ag = y, \qquad \text{equivalently} \qquad B\hat{f} = \arg\min_h \|h\|_1 \ \text{ such that } AB^{-1} h = y,$$
then
$$\|B(f - \hat{f})\|_1 \lesssim \sigma_s(Bf)_1, \qquad \|f - \hat{f}\|_2 \lesssim \frac{1}{\sqrt{s}}\, \sigma_s(Bf)_1.$$

Total variation minimization for sparse recovery

For $x \in \mathbb{R}^{N \times N}$, the total variation seminorm is $\|x\|_{TV} = \|\nabla x\|_1$. If $x$ has an $s$-sparse gradient, from observations $y = Ax$ one solves

Total variation minimization: $\hat{x} = \arg\min_z \|z\|_{TV}$ subject to $Az = Ax$.

Immediate guarantees, from applying the $\ell_1$ theory to the gradient:
1. $\|\nabla \hat{x} - \nabla x\|_2 \lesssim \frac{1}{\sqrt{s}}\, \sigma_s(\nabla x)_1$,
2. $\|\nabla \hat{x} - \nabla x\|_1 \lesssim \sigma_s(\nabla x)_1$.
Applying the discrete Sobolev inequality to $x - \hat{x}$ then gives $\|x - \hat{x}\|_2 \lesssim \sigma_s(\nabla x)_1$. Can we do better?
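TV minimization as stated is a convex program. A minimal sketch using cvxpy, assuming it is installed (anisotropic TV matching $\|\nabla z\|_1$ above; the helper name is mine, and `cp.vec` vectorizes column-major):

```python
import cvxpy as cp

def tv_min_recover(A, y, N):
    """Recover an N x N image from y = A x (x vectorized column-major)
    by anisotropic TV minimization: min ||grad z||_1 s.t. Az = y."""
    Z = cp.Variable((N, N))
    tv = cp.sum(cp.abs(cp.diff(Z, axis=0))) + cp.sum(cp.abs(cp.diff(Z, axis=1)))
    prob = cp.Problem(cp.Minimize(tv), [A @ cp.vec(Z) == y])
    prob.solve()
    return Z.value
```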

Discrete Sobolev inequalities

Proposition (Sobolev inequality). Let $x \in \mathbb{R}^{N \times N}$ be zero-mean. Then $\|x\|_2 \le \|x\|_{TV}$.

Theorem (Strengthened Sobolev inequality). For all images $x \in \mathbb{R}^{N^2}$ in the null space of a matrix $A : \mathbb{R}^{N^2} \to \mathbb{R}^m$ which, when composed with the bivariate Haar wavelet transform, $AH^{-1} : \mathbb{R}^{N^2} \to \mathbb{R}^m$, has the Restricted Isometry Property of order $2s$, the following holds:
$$\|x\|_2 \lesssim \frac{\log N}{\sqrt{s}}\, \|x\|_{TV}.$$

Consequence for total variation minimization

Suppose the matrix $A : \mathbb{R}^{N^2} \to \mathbb{R}^m$ is such that, composed with the bivariate Haar wavelet transform, $AH^{-1} : \mathbb{R}^{N^2} \to \mathbb{R}^m$ has the Restricted Isometry Property of order $2s$. From observations $y = Ax$, consider total variation minimization
$$\hat{x} = \arg\min_z \|z\|_{TV} \quad \text{subject to } Az = y.$$
Then:
1. $\|\nabla \hat{x} - \nabla x\|_2 \lesssim \frac{1}{\sqrt{s}}\, \sigma_s(\nabla x)_1$,
2. $\|\hat{x} - x\|_{TV} \lesssim \sigma_s(\nabla x)_1$,
3. $\|x - \hat{x}\|_2 \lesssim \frac{\log(N^2/s)}{\sqrt{s}}\, \sigma_s(\nabla x)_1$.
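A toy end-to-end experiment with the `tv_min_recover` sketch above (the sizes, test image, and Gaussian ensemble are all illustrative choices, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
N, m = 32, 400                                   # ad hoc sizes for a quick run
x = np.zeros((N, N)); x[8:20, 10:24] = 1.0       # piecewise-constant, gradient-sparse
A = rng.standard_normal((m, N * N)) / np.sqrt(m)

y = A @ x.flatten(order="F")                     # column-major, matching cp.vec
x_hat = tv_min_recover(A, y, N)
print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))  # small relative error
```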

Corollary (Sobolev inequality for random matrices). With probability $\ge 1 - e^{-cm}$, the following holds for all images $x \in \mathbb{R}^{N \times N}$ in the null space of an $m \times N^2$ random Gaussian matrix:
$$\|x\|_2 \lesssim \frac{[\log N]^{3/2}}{\sqrt{m}}\, \|x\|_{TV}.$$

Corollary (Sobolev inequality for Fourier constraints). Select $m$ frequency pairs $(k_1, k_2)$ with replacement from the distribution
$$p_{k_1,k_2} \propto \frac{1}{(|k_1| + 1)^2 + (|k_2| + 1)^2}.$$
With probability $\ge 1 - e^{-cm}$, the following holds for all images $x \in \mathbb{R}^{N \times N}$ whose Fourier transform vanishes at the selected frequencies, $\big( \mathcal{F}(x) \big)_{k_1,k_2} = 0$:
$$\|x\|_2 \lesssim \frac{[\log N]^{5}}{\sqrt{m}}\, \|x\|_{TV}.$$
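Drawing frequencies from this variable-density distribution is straightforward. A short NumPy sketch (the frequency grid convention follows the MRI slide; the helper name is mine):

```python
import numpy as np

def sample_frequencies(N, m, rng):
    """Draw m frequency pairs (k1, k2), with replacement, from
    p_{k1,k2} ~ 1 / ((|k1|+1)^2 + (|k2|+1)^2)."""
    ks = np.arange(-N // 2 + 1, N // 2 + 1)     # k = -N/2+1, ..., N/2
    K1, K2 = np.meshgrid(ks, ks, indexing="ij")
    p = 1.0 / ((np.abs(K1) + 1) ** 2 + (np.abs(K2) + 1) ** 2)
    p = (p / p.sum()).ravel()                   # normalize to a probability vector
    idx = rng.choice(p.size, size=m, replace=True, p=p)
    return np.stack([K1.ravel()[idx], K2.ravel()[idx]], axis=1)
```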

Sobolev inequalities in action! [Figure: reconstructions of an original MRI image from (i) low frequencies only, (ii) equispaced radial lines, (iii) frequencies sampled with density $\propto (k_1^2 + k_2^2)^{-1/2}$, and (iv) frequencies sampled with density $\propto (k_1^2 + k_2^2)^{-1}$.]

Sketch of the proof

Theorem (Strengthened Sobolev inequality). For all images $x \in \mathbb{R}^{N^2}$ in the null space of a matrix $A : \mathbb{R}^{N^2} \to \mathbb{R}^m$ which, when composed with the bivariate Haar wavelet transform, $AH^{-1} : \mathbb{R}^{N^2} \to \mathbb{R}^m$, has the Restricted Isometry Property of order $2s$:
$$\|x\|_2 \lesssim \frac{\log N}{\sqrt{s}}\, \|x\|_{TV}.$$

Lemmas we use:
1. Denote the ordered bivariate Haar wavelet coefficients of mean-zero $x \in \mathbb{R}^{N^2}$ by $|c_{(1)}| \ge |c_{(2)}| \ge \cdots \ge |c_{(N^2)}|$. Then [Cohen-DeVore-Petrushev-Xu 1999]
$$|c_{(k)}| \lesssim \frac{\|x\|_{TV}}{k};$$
that is, the coefficient sequence is in weak-$\ell_1$.
2. If $c = c_{S_0} + c_{S_1} + c_{S_2} + \cdots$, where $c_{S_0} = (c_{(1)}, c_{(2)}, \ldots, c_{(s)}, 0, \ldots, 0)$, $c_{S_1} = (0, \ldots, 0, c_{(s+1)}, \ldots, c_{(2s)}, 0, \ldots, 0)$, and so on are $s$-sparse, then
$$\|c_{S_{j+1}}\|_2 \le \frac{1}{\sqrt{s}}\, \|c_{S_j}\|_1.$$
3. Recall that $AH^{-1} : \mathbb{R}^{N^2} \to \mathbb{R}^m$ having the RIP of order $s$ means that
$$\tfrac{3}{4}\, \|x\|_2 \le \|AH^{-1} x\|_2 \le \tfrac{5}{4}\, \|x\|_2 \qquad \text{for all } s\text{-sparse } x.$$
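Lemma 1 is easy to observe numerically. A quick illustration (not a proof), assuming the PyWavelets package (`pywt`) for the bivariate Haar transform:

```python
import numpy as np
import pywt

N = 64
x = np.zeros((N, N)); x[16:40, 20:50] = 1.0      # cartoon-like image, small TV
x -= x.mean()                                     # mean-zero, as in Lemma 1

# Orthonormal bivariate Haar coefficients (periodized boundary handling).
coeffs = pywt.wavedec2(x, "haar", mode="periodization")
arr, _ = pywt.coeffs_to_array(coeffs)
c = np.sort(np.abs(arr).ravel())[::-1]            # |c_(1)| >= |c_(2)| >= ...

tv = np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()
k = np.arange(1, c.size + 1)
print((c * k / tv).max())   # sup_k k |c_(k)| / ||x||_TV stays O(1): weak-l1 decay
```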

Suppose that $\Psi = AH^{-1} : \mathbb{R}^{N^2} \to \mathbb{R}^m$ has the RIP of order $2s$, and suppose $Ax = 0$. Decompose $c = Hx$ into ordered $s$-sparse blocks, $c = c_{S_0} + c_{S_1} + c_{S_2} + \cdots$. Then $\Psi c = AH^{-1} H x = Ax = 0$, and
$$0 = \|\Psi c\|_2 \ge \|\Psi(c_{S_0} + c_{S_1})\|_2 - \sum_{j \ge 2} \|\Psi c_{S_j}\|_2$$
$$\ge \tfrac{3}{4}\, \|c_{S_0} + c_{S_1}\|_2 - \tfrac{5}{4} \sum_{j \ge 2} \|c_{S_j}\|_2 \qquad \text{(RIP of } \Psi\text{)}$$
$$\ge \tfrac{3}{4}\, \|c_{S_0}\|_2 - \frac{5/4}{\sqrt{s}} \sum_{j \ge 1} \|c_{S_j}\|_1 \qquad \text{(block trick)}$$
$$\ge \tfrac{3}{4}\, \|c_{S_0}\|_2 - \frac{5/4}{\sqrt{s}}\, \|x\|_{TV} \log(N^2/s) \qquad \text{(}c\text{ in weak } \ell_1\text{)}$$

So:
1. $\|c_{S_0}\|_2 \lesssim \frac{1}{\sqrt{s}}\, \|x\|_{TV} \log(N^2/s)$, by rearranging the chain above;
2. $\|c - c_{S_0}\|_2 \lesssim \frac{1}{\sqrt{s}}\, \|x\|_{TV}$, since $c$ is in weak $\ell_1$.

Then, since $H$ is orthonormal,
$$\|x\|_2 = \|c\|_2 \le \|c_{S_0}\|_2 + \|c - c_{S_0}\|_2 \lesssim \frac{\log(N^2/s)}{\sqrt{s}}\, \|x\|_{TV}.$$

Extensions
1. Strengthened Sobolev inequalities and total variation minimization guarantees can be extended to multidimensional signals.
2. Strengthened Sobolev inequalities and total variation minimization guarantees are robust to noisy measurements: $y = Ax + \xi$, with $\|\xi\|_2 \le \varepsilon$.
3. Sobolev inequalities for random subspaces in dimension $d = 1$? Error guarantees for total variation minimization in dimension $d = 1$?
4. Can we remove the extra factor of $\log(N)$ in the derived Sobolev inequalities?
5. Deterministic constructions of subspaces satisfying strengthened Sobolev inequalities (warning: hard!)
6. ...

References

Needell, Ward. Stable image reconstruction using total variation minimization. SIAM Journal on Imaging Sciences 6.2 (2013): 1035-1058.

Needell, Ward. Near-optimal compressed sensing guarantees for total variation minimization. IEEE Transactions on Image Processing 22.10 (2013): 3941.

Krahmer, Ward. Beyond incoherence: stable and robust sampling strategies for compressive imaging. arXiv preprint arXiv:1210.2380 (2012).