Optimization-based sparse recovery: Compressed sensing vs. super-resolution


Optimization-based sparse recovery: Compressed sensing vs. super-resolution. Carlos Fernandez-Granda. Google Computational Photography and Intelligent Cameras, IPAM, 2/5/2014.

This work was supported by a Fundación La Caixa Fellowship and a Fundación Caja Madrid Fellowship. Joint work with Emmanuel Candès (Stanford).

Optimization-based recovery. [Diagram: Object → Sensing system → Data; optimization recovers the object from the data]

Outline. Two inverse problems. When is the problem well posed? When do optimization-based methods succeed?

Two inverse problems When is the problem well posed? When do optimization-based methods succeed?

Compressed sensing. Object: compressible. Data: randomized. Mathematical model: random Fourier coefficients of a sparse signal.

Super-resolution. Object: point sources. Data: low-pass blur. Mathematical model: low-pass Fourier coefficients of a sparse signal. (Figures courtesy of V. Morgenshtern)

Motivation: limits of resolution in imaging. "The resolving power of lenses, however perfect, is limited" (Lord Rayleigh). Diffraction imposes a fundamental limit on the resolution of optical systems.

Super-resolution. [Figure: object, its spectrum, and the data]

Super-resolution. Optics: data-acquisition techniques to overcome the diffraction limit. Image processing: methods to upsample images onto a finer grid while preserving edges and hallucinating textures. This talk: estimation/deconvolution from low-pass measurements.

Compressed sensing: spectrum interpolation. Super-resolution: spectrum extrapolation.

Two inverse problems When is the problem well posed? When do optimization-based methods succeed?

Compressed sensing. x and its spectrum: the Fourier series of a measure/function x with domain [0, 1].

Compressed sensing. Spectrum of x: the discrete Fourier transform (DFT) of a vector x ∈ R^N.

Compressed sensing. Data: random DFT coefficients of x.

Compressed sensing. What is the effect of the measurement operator on sparse vectors?

Compressed sensing. Crucial insight: the restricted operator is well conditioned when acting upon any sparse signal (restricted isometry property) [Candès, Tao 2006].

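A quick numerical sketch of this insight. This is an illustration, not a proof: the restricted isometry property is a statement about all sparse supports, while the snippet below only samples random ones; the dimensions are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, s = 128, 40, 5        # ambient dimension, measurements, sparsity (made up)

# Measurement operator: m random rows of the N-point DFT,
# scaled so that the columns have unit norm.
rows = rng.choice(N, size=m, replace=False)
A = np.fft.fft(np.eye(N))[rows] / np.sqrt(m)

# Sample random s-sparse supports and record the extreme singular values
# of the restricted operator A[:, S]; under the RIP they stay close to 1.
smins, smaxs = [], []
for _ in range(200):
    S = rng.choice(N, size=s, replace=False)
    sv = np.linalg.svd(A[:, S], compute_uv=False)
    smins.append(sv[-1])
    smaxs.append(sv[0])
smin, smax = min(smins), max(smaxs)
print(f"singular values over 200 random supports: [{smin:.2f}, {smax:.2f}]")
```

The printed interval stays bounded away from 0 and well below 2, i.e. the operator acts almost like an isometry on these sparse supports.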
Super-resolution. Signal: superposition of spikes (Dirac measures) in the unit interval,
x = Σ_j a_j δ_{t_j},   a_j ∈ C, t_j ∈ [0, 1].
Data: low-pass Fourier coefficients with cut-off frequency f_c,
y(k) = ∫₀¹ e^{−i2πkt} x(dt) = Σ_j a_j e^{−i2πk t_j},   k ∈ Z, |k| ≤ f_c.
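This measurement model can be simulated in a few lines; the spike locations and amplitudes below are made up for illustration.

```python
import numpy as np

# Hypothetical spike train (locations and amplitudes made up for illustration).
fc = 20
t = np.array([0.1, 0.43, 0.78])   # spike locations t_j in [0, 1]
a = np.array([1.0, -0.5, 2.0])    # amplitudes a_j

# Low-pass measurements y(k) = sum_j a_j exp(-i 2 pi k t_j), |k| <= f_c.
k = np.arange(-fc, fc + 1)
y = np.exp(-2j * np.pi * np.outer(k, t)) @ a

# Viewing the data in signal space amounts to convolving the spikes with
# the periodic Dirichlet kernel (the low-pass point-spread function).
grid = np.linspace(0, 1, 512, endpoint=False)
blurred = (np.exp(2j * np.pi * np.outer(grid, k)) @ y).real / (2 * fc + 1)
print(y.shape, blurred.shape)
```

Note that y(0) is just the sum of the amplitudes, which is a handy sanity check.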

Super-resolution. x and its spectrum: the Fourier series of a measure x with domain [0, 1].

Super-resolution. Data: low-pass Fourier coefficients y.

Super-resolution. Problem: if the support is clustered, the problem may be ill posed. In super-resolution, sparsity is not enough!

Super-resolution. If the support is spread out, there is still hope. We need conditions beyond sparsity.

Minimum separation. The minimum separation Δ of the support of x is
Δ = inf_{(t, t') ∈ support(x), t ≠ t'} |t − t'|.

Conditioning of the submatrix with respect to Δ. If Δ < λ_c := 1/f_c, the problem is ill posed. λ_c is the diameter of the main lobe of the point-spread function (twice the Rayleigh distance).

Example: 25 spikes, f_c = 10^3, Δ = 0.8/f_c. [Figure: two different signals and the corresponding data in signal space]

Example: 25 spikes, f_c = 10^3, Δ = 0.8/f_c. The difference between the two signals is almost in the null space of the measurement operator. [Figure: the difference and its spectrum]

Two inverse problems When is the problem well posed? When do optimization-based methods succeed?

Estimation via convex programming. Measurement model: underdetermined linear system A x = y (we need further assumptions). Idea: impose nonparametric assumptions on structure by minimizing a cost function:
min_x̃ C(x̃) subject to A x̃ = y.
In our case: the l1 norm, to enforce sparsity.
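A minimal sketch of this recipe for the l1 norm (basis pursuit), recast as a linear program; a generic random Gaussian matrix stands in for the sensing operator, and the dimensions are made up.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, m, s = 60, 30, 4      # dimension, measurements, sparsity (made up)

A = rng.standard_normal((m, N)) / np.sqrt(m)   # stand-in for the sensing matrix
x_true = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x_true[support] = rng.standard_normal(s)
y = A @ x_true

# Basis pursuit, min ||x||_1 subject to Ax = y, as a linear program:
# write x = u - v with u, v >= 0 and minimize 1^T (u + v).
res = linprog(c=np.ones(2 * N),
              A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]
print("max recovery error:", np.max(np.abs(x_hat - x_true)))
```

At this sparsity level, recovery is exact with high probability, so the reported error is at the level of the solver tolerance.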

Subgradients. Definition: a vector g_x is a subgradient of a convex function C at x if for any vector v
C(x + v) ≥ C(x) + ⟨g_x, v⟩.
Lemma: if there is a subgradient of C at x in the range of A*, i.e. g_x = A* u, and A x = y, then x is a solution of
min_x̃ C(x̃) subject to A x̃ = y.
Proof: for all x̃ such that A x̃ = y, so that A(x̃ − x) = 0,
C(x̃) ≥ C(x) + ⟨g_x, x̃ − x⟩ = C(x) + ⟨A* u, x̃ − x⟩ = C(x) + ⟨u, A(x̃ − x)⟩ = C(x).

Dual certificate for the l1 norm. Lemma: x is a solution to min_x̃ ||x̃||_1 subject to A x̃ = y if A x = y and there exists g_x = A* u such that
g_x(j) = sign(x(j)) if j ∈ support(x),
|g_x(j)| ≤ 1 if j ∉ support(x).

Dual certificate for the l1 norm. Lemma: x is the unique solution to min_x̃ ||x̃||_1 subject to A x̃ = y if A x = y and there exists g_x = A* u such that
g_x(j) = sign(x(j)) if j ∈ support(x),
|g_x(j)| < 1 if j ∉ support(x).

Dual certificate for the l1 norm. Lemma: x is the unique solution to min_x̃ ||x̃||_1 subject to A x̃ = y if A x = y and there exists g_x = A* u such that
g_x(j) = sign(x(j)) if j ∈ support(x),
|g_x(j)| < 1 if j ∉ support(x).
The range of A* corresponds to: compressed sensing: random sinusoids; super-resolution: low-pass sinusoids.

Dual certificate for compressed sensing. Least-squares interpolator. [figure]

Dual certificate for compressed sensing. Works out for linear levels of sparsity (up to logarithmic factors) [Candès, Romberg, Tao 2006].
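The least-squares interpolator can be checked numerically. The sketch below uses a generic random Gaussian matrix as a stand-in for the random-sinusoid operator; at these made-up dimensions, the off-support bound |g(j)| < 1 holds with high probability.

```python
import numpy as np

rng = np.random.default_rng(2)
N, m, s = 100, 60, 3     # dimension, measurements, sparsity (made up)

A = rng.standard_normal((m, N)) / np.sqrt(m)   # stand-in for the sensing matrix
S = rng.choice(N, size=s, replace=False)       # support of x
signs = rng.choice([-1.0, 1.0], size=s)        # sign pattern of x on S

# Least-squares certificate: the minimum-norm u with (A^T u)_S = sign(x_S),
#   u = A_S (A_S^T A_S)^{-1} sign(x_S).
A_S = A[:, S]
u = A_S @ np.linalg.solve(A_S.T @ A_S, signs)
g = A.T @ u            # candidate dual certificate, in the range of A^T

on_support = np.max(np.abs(g[S] - signs))
off_support = np.max(np.abs(np.delete(g, S)))
print(f"interpolation error {on_support:.1e}, max off-support |g|: {off_support:.2f}")
```

Since on-support interpolation is exact and the off-support magnitudes stay below 1, the lemma above certifies that the sparse vector with this support and sign pattern is the unique l1-norm minimizer.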

Dual certificate for super-resolution. The least-squares interpolator does not work. [figure]

Dual certificate for super-resolution. 1st idea: interpolation with a low-frequency, fast-decaying kernel K:
g_x(t) = Σ_{t_j ∈ support(x)} α_j K(t − t_j).

Dual certificate for super-resolution. Problem: the magnitude of the polynomial locally exceeds 1. [figure]

Dual certificate for super-resolution. Problem: the magnitude of the polynomial locally exceeds 1. Solution: add a correction term and force g_x'(t_k) = 0 for all t_k ∈ support(x):
g_x(t) = Σ_{t_j ∈ support(x)} [α_j K(t − t_j) + β_j K'(t − t_j)].

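This construction can be reproduced numerically. The sketch below builds the interpolation kernel from a squared Fejér kernel (in the spirit of the construction in the referenced papers) and uses made-up spike locations separated by slightly more than 2/f_c.

```python
import numpy as np

# Kernel: squared Fejér kernel, built in the frequency domain so that its
# cut-off frequency is f_c.
fc = 128
n = fc // 2
tri = 1 - np.abs(np.arange(-n, n + 1)) / (n + 1)   # Fejér coefficients
c = np.convolve(tri, tri)                          # squared Fejér, |k| <= 2n = fc
c = c / c.sum()                                    # normalize so K(0) = 1
k = np.arange(-2 * n, 2 * n + 1)

def K(t, deriv=0):
    """Evaluate the kernel (deriv = 0) or its derivatives at the points t."""
    t = np.asarray(t, dtype=float)
    phase = np.exp(2j * np.pi * t[..., None] * k)
    return (phase @ ((2j * np.pi * k) ** deriv * c)).real

# Spikes separated by slightly more than 2 / f_c, arbitrary sign pattern
# (locations chosen to lie exactly on the evaluation grid below).
t_spikes = np.array([0.2, 0.216, 0.5, 0.8])
signs = np.array([1.0, -1.0, 1.0, 1.0])

# Solve for alpha, beta such that g(t_i) = sign_i and g'(t_i) = 0.
dt = t_spikes[:, None] - t_spikes[None, :]
M = np.block([[K(dt), K(dt, 1)],
              [K(dt, 1), K(dt, 2)]])
coef = np.linalg.solve(M, np.concatenate([signs, np.zeros(len(signs))]))
alpha, beta = np.split(coef, 2)

# Check that the certificate interpolates the signs and stays below 1 in
# magnitude everywhere else.
grid = np.linspace(0, 1, 4000, endpoint=False)
G = sum(a_ * K(grid - t_) + b_ * K(grid - t_, 1)
        for a_, b_, t_ in zip(alpha, beta, t_spikes))
print("max |g(t)| on the grid:", np.abs(G).max())
```

The maximum magnitude over the grid is attained at the spikes (where |g| = 1) and nowhere exceeded, which is exactly the certificate condition of the lemma.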
Guarantees for super-resolution. Theorem [Candès, F. 2012]: if the minimum separation of the signal support obeys Δ ≥ 2/f_c, then recovery via l1-norm minimization is exact.

Guarantees for super-resolution. Theorem [Candès, F. 2014]: if the minimum separation of the signal support obeys Δ ≥ 1.28/f_c, then recovery via l1-norm minimization is exact. Theorem [Candès, F. 2012]: in 2D, l1-norm minimization super-resolves point sources with a minimum separation of 2.38/f_c, where f_c is the cut-off frequency of the low-pass kernel.

Guarantees for super-resolution. The results hold for a continuous version of the l1 norm (no discretization). Numerical simulations show that the method works for Δ ≈ 1/f_c. Generalizations of the dual certificate allow us to prove robustness to noise [Candès, F. 2013], [F. 2013]. If the signal is sparse, we can randomly undersample the low-pass measurements [Tang, Bhaskar, Shah, Recht 2013].

Conclusion. Characterizing the interaction between the measurement operator and the structure of the object of interest is crucial to understand: when the problem is well posed (conditioning of the restricted operator), and when optimization-based methods succeed (dual certificates).

For more details:
Towards a mathematical theory of super-resolution. E. J. Candès and C. Fernandez-Granda. Communications on Pure and Applied Mathematics 67(6), 906-956.
Super-resolution from noisy data. E. J. Candès and C. Fernandez-Granda. Journal of Fourier Analysis and Applications 19(6), 1229-1254.
Support detection in super-resolution. C. Fernandez-Granda. Proceedings of SampTA 2013, 145-148.