Super-resolution via Convex Programming


Super-resolution via Convex Programming
Carlos Fernandez-Granda (joint work with Emmanuel Candès)
Structure and Randomness in System Identification and Learning, IPAM, 1/17/2013

Outline
1. Motivation
2. Sparsity is not enough
3. Exact recovery by convex optimization
4. Stability
5. Numerical algorithms
6. Related work and conclusion

1. Motivation

Motivation: Single-molecule imaging
[Figure: Frames 1-3]
- Microscope receives light from fluorescent molecules
- Few molecules are active in each frame, hence sparsity
- Multiple (~10,000) frames are recorded and processed individually
- Results from all frames are combined to reveal the underlying signal

Motivation: Single-molecule imaging
- Bad news: the resolution of our measurements is too low
- Good news: there is structure in the signal

Motivation: Limits of resolution in imaging
In any optical imaging system, diffraction imposes a fundamental limit on resolution.

Motivation: What is super-resolution?
- Retrieving fine-scale structure from low-resolution data
- Equivalently, extrapolating the high end of the spectrum

Motivation: Super-resolution of point sources
Super-resolution of structured signals from bandlimited data arises in many applications:
- Reconstruction of sub-wavelength structure in conventional imaging
- Reconstruction of sub-pixel structure in electronic imaging
- Spectral analysis of multitone signals
- Similar problems in signal processing, radar, spectroscopy, medical imaging, astronomy, geophysics, etc.
The signal of interest is often modeled as a superposition of point sources:
- Celestial bodies in astronomy
- Line spectra in speech analysis
- Fluorescent molecules in single-molecule microscopy

Motivation: Mathematical model
Signal: superposition of delta measures, $x = \sum_j a_j \delta_{t_j}$, $a_j \in \mathbb{C}$, $t_j \in T \subset [0,1]$.
Measurements: low-pass Fourier coefficients $y = F_n x$,
$y(k) = \int_0^1 e^{-i 2\pi k t}\, x(dt) = \sum_j a_j e^{-i 2\pi k t_j}$, $k \in \mathbb{Z}$, $|k| \le f_c$.
Number of measurements: $n = 2 f_c + 1$.
Under what conditions is it possible to recover x from y?
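The measurement model above is easy to simulate; a minimal sketch, with made-up spike amplitudes, locations, and cutoff frequency (none of these values are from the talk):

```python
import numpy as np

# Hypothetical spike train: amplitudes a_j in C and locations t_j in [0, 1).
a = np.array([1.0 + 0.5j, -0.8 + 0.2j, 0.6 - 1.0j])
t = np.array([0.12, 0.47, 0.81])
f_c = 10  # cutoff frequency

# Low-pass Fourier coefficients y(k) = sum_j a_j exp(-i 2 pi k t_j), |k| <= f_c.
k = np.arange(-f_c, f_c + 1)
y = np.exp(-2j * np.pi * np.outer(k, t)) @ a

n = y.size  # number of measurements: n = 2 f_c + 1
```

The k = 0 coefficient is the total mass $\sum_j a_j$, which gives a quick sanity check on the model.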

Motivation: Equivalent problem, line spectra estimation
Swapping time and frequency:
Signal: superposition of sinusoids $x(t) = \sum_j a_j e^{-i 2\pi \omega_j t}$, $a_j \in \mathbb{C}$, $\omega_j \in T \subset [0,1]$.
Measurements: equispaced samples $x(1), x(2), x(3), \ldots, x(n)$.

Motivation: Can you find the spikes?

2. Sparsity is not enough

Sparsity is not enough: Compressed sensing
Compressed sensing theory establishes robust recovery of spikes from random Fourier measurements [Candès, Romberg & Tao 2004].
Crucial insight: the measurement operator is well conditioned when acting upon sparse signals (restricted isometry property).
Is this the case in the super-resolution setting?
Simple experiment:
- discretize the support to N = 4096 points
- restrict the signal to an interval of 48 contiguous points
- measure n low-frequency DFT coefficients
- super-resolution factor (SRF) = N/n
- how well conditioned is the inverse problem if we know the support?

Sparsity is not enough: Simple experiment
[Figure: singular values of the restricted measurement operator for SRF = 2, 4, 8, 16]
For SRF = 4, measuring any unit-normed vector in a subspace of dimension 24 results in a signal with norm less than $10^{-7}$.
At an SNR of 145 dB, recovery is impossible by any method.
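The experiment can be reproduced in a few lines. This sketch uses the slide's stated parameters (N = 4096, a 48-point support); the exact frequency band kept for each SRF is our assumption:

```python
import numpy as np

N = 4096      # discretization grid
width = 48    # signal restricted to 48 contiguous points
j = np.arange(width)

def restricted_singular_values(srf):
    """Singular values of the low-frequency DFT map restricted to the support."""
    n = N // srf                          # keep roughly n = N / SRF coefficients
    k = np.arange(-(n // 2), n // 2 + 1)  # low-pass band
    A = np.exp(-2j * np.pi * np.outer(k, j) / N) / np.sqrt(k.size)
    return np.linalg.svd(A, compute_uv=False)

for srf in (2, 4, 8, 16):
    s = restricted_singular_values(srf)
    print(f"SRF={srf:2d}  s_max={s[0]:.1e}  s_min={s[-1]:.1e}")
```

The spectrum splits into a handful of singular values near one and a tail that plunges toward machine precision, which is the sharp transition discussed on the next slide.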

Sparsity is not enough: Prolate spheroidal sequences
There seems to be a sharp transition between signals that are preserved and signals that are completely suppressed.
This phenomenon can be characterized asymptotically using Slepian's prolate spheroidal sequences.
These sequences are the eigenvectors of the composition of a time-limiting operator $P_T$ and a frequency-limiting operator $P_W$, $0 < W \le 1/2$.
Asymptotically, about $WT$ eigenvalues cluster near one while the rest are almost zero [Slepian 1978].

Sparsity is not enough: Consequences of Slepian's work
- For any value of the SRF above one, there exist signals almost in the null space of the measurement operator.
- The norm of the measurements corresponding to k-sparse signals, for a fixed SRF, decreases exponentially in k.
- For SRF = 1.05 there is a 256-sparse signal such that $\|y\|_2 \le 10^{-7}$.
- For SRF = 4 there is a 48-sparse signal such that $\|y\|_2 \le 10^{-33}$.
- For any interval T of length k, there is an irretrievable subspace of dimension $(1 - 1/\mathrm{SRF})\, k$ supported on T.
- When SRF > 2, many clustered sparse signals are killed by the measurement process.

Sparsity is not enough: Compressed sensing vs. super-resolution
What is the difference?
- Compressed sensing: spectrum interpolation
- Super-resolution: spectrum extrapolation

Sparsity is not enough: Conclusion
- Robust super-resolution of arbitrary sparse signals is impossible.
- We can only hope to recover signals that are not too clustered.
- In this work, this structural assumption is captured by introducing the minimum separation of the support T of a signal:
$\Delta(T) = \inf_{(t, t') \in T \times T,\; t \ne t'} |t - t'|$
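The minimum separation is straightforward to compute; a small helper, assuming a plain (non-periodic) reading of the distance on [0, 1]:

```python
import numpy as np

def min_separation(support):
    """Delta(T): smallest distance between two distinct elements of T.

    Plain distance on the line; for a wrap-around (periodic) convention
    one would also compare against 1 - (max(T) - min(T)).
    """
    t = np.sort(np.asarray(support, dtype=float))
    return float(np.min(np.diff(t)))

# Example: with f_c = 100, the condition Delta(T) >= 2/f_c = 0.02 holds here.
delta = min_separation([0.1, 0.5, 0.53])
```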

3. Exact recovery by convex optimization

Exact recovery by convex optimization: Noiseless case
Recovery by solving
$\min_x \|x\|_{TV}$ subject to $F_n x = y$,
over all finite complex measures x supported on [0, 1].
Total-variation norm of a complex measure $\nu$:
$\|\nu\|_{TV} = \sup \sum_{j=1}^{\infty} |\nu(B_j)|$
(supremum over all finite partitions $B_j$ of [0, 1]).
- Not the total variation of a piecewise constant function
- Continuous counterpart of the $\ell_1$ norm
- If $x = \sum_j a_j \delta_{t_j}$, then $\|x\|_{TV} = \sum_j |a_j|$

Exact recovery by convex optimization: Noiseless recovery
$y(k) = \int_0^1 e^{-i 2\pi k t}\, x(dt) = \sum_j a_j e^{-i 2\pi k t_j}$, $k \in \mathbb{Z}$, $|k| \le f_c$
Theorem [Candès, F. 2012]: If the minimum separation of the signal obeys $\Delta(T) \ge 2/f_c := 2\,\lambda_c$, then x is recovered exactly by total-variation norm minimization.
- Infinite precision
- Recovers $(2\lambda_c)^{-1} = f_c/2 \approx n/4$ spikes from n low-frequency measurements
- If x is real, the condition improves to $\Delta(T) \ge 1.87/f_c := 1.87\,\lambda_c$

Exact recovery by convex optimization: Lower bound
- Consider signals supported on a finite grid of N points.
- We look for adversarial sign patterns.
- We plot the ratio N/n (x-axis) against the largest minimum distance (y-axis) at which $\ell_1$-norm minimization fails.
[Figure: failure curves for |T| = 2, 5, 10, 20, 50; the red line corresponds to $\Delta(T) = \lambda_c$]

Exact recovery by convex optimization: Higher dimensions
Signal: $x = \sum_j a_j \delta_{t_j}$, $a_j \in \mathbb{R}$, $t_j \in T \subset [0,1]^2$
Measurements: low-pass 2D Fourier coefficients
$y(k) = \int_{[0,1]^2} e^{-i 2\pi \langle k, t\rangle}\, x(dt) = \sum_j a_j e^{-i 2\pi \langle k, t_j\rangle}$, $k \in \mathbb{Z}^2$, $\|k\|_\infty \le f_c$
Theorem [Candès, F. 2012]: TV-norm minimization yields exact recovery if $\Delta(T) \ge 2.38/f_c := 2.38\,\lambda_c$.
In dimension d, the condition becomes $\Delta(T) \ge C_d\,\lambda_c$, where $C_d$ only depends on d.

Exact recovery by convex optimization: Extensions
Signal: an $(\ell - 1)$-times continuously differentiable piecewise-smooth function
$x = \sum_{t_j \in T} 1_{(t_{j-1},\, t_j)}\, p_j(t)$, $t_j \in T \subset [0,1]$,
where each $p_j(t)$ is a polynomial of degree $\ell$.
Measurements: low-pass Fourier coefficients $y(k) = \int_0^1 e^{-i 2\pi k t}\, x(dt)$, $k \in \mathbb{Z}$, $|k| \le f_c$.
Recovery: $\min_x \|x^{(\ell+1)}\|_{TV}$ subject to $F_n x = y$.
Corollary: TV-norm minimization yields exact recovery if $\Delta(T) \ge 2\,\lambda_c$.
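A quick numerical illustration of why this reduction works (the grid, knots, and slopes below are made up): for a continuous piecewise-linear signal (degree $\ell = 1$), the $(\ell+1)$-th finite difference vanishes away from the knots, so the object being minimized is again a sparse spike train.

```python
import numpy as np

N = 1000
grid = np.arange(N) / N
knots = [0.2, 0.5, 0.8]           # hypothetical breakpoints t_j
slopes = [1.0, -2.0, 0.5, 1.5]    # slope of each linear piece

# Build the piecewise-linear signal by integrating a piecewise-constant slope.
deriv = np.full(N, slopes[0])
for t0, s_next in zip(knots, slopes[1:]):
    deriv[grid >= t0] = s_next
x = np.cumsum(deriv) / N

# Second finite difference: nonzero only at the knots.
d2 = np.diff(x, n=2)
support = np.nonzero(np.abs(d2) > 1e-9)[0]
```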

Exact recovery by convex optimization: Sparse recovery
If we discretize the support, this becomes a sparse-recovery problem in an overcomplete Fourier dictionary.
- Previous theory based on dictionary incoherence is very weak, due to high column correlation.
- If N = 20,000 and n = 1,000: [Dossal 2005] guarantees recovery of 3 spikes; our result guarantees n/4 = 250 spikes.
- Additional structural assumptions allow for a more precise theoretical analysis.

Exact recovery by convex optimization: Sketch of the proof
Sufficient condition for exact recovery of a signal supported on T: for any $v \in \mathbb{C}^{|T|}$ with $|v_j| = 1$, there exists a low-frequency trigonometric polynomial
$q(t) = \sum_{k=-f_c}^{f_c} c_k\, e^{i 2\pi k t}$   (1)
obeying
$q(t_j) = v_j$ for $t_j \in T$, and $|q(t)| < 1$ for $t \in [0,1] \setminus T$.   (2)

Exact recovery by convex optimization: Sketch of the proof
Interpolating the sign pattern with a low-frequency polynomial becomes challenging if the minimum separation is small.
[Figure: dual polynomial interpolating the sign pattern +1, +1, -1, -1]

Exact recovery by convex optimization: Sketch of the proof
Interpolation with a low-frequency kernel K:
$q(t) = \sum_{t_j \in T} \alpha_j K(t - t_j)$,
where $\alpha$ is a vector of coefficients and q is constrained to satisfy
$q(t_k) = \sum_{t_j \in T} \alpha_j K(t_k - t_j) = v_k$, $t_k \in T$.
- If K is a sinc function, this is equivalent to a least-squares construction.
- Kernels with faster decay, such as the Fejér kernel or the Gaussian kernel [Kahane 2011], yield better results.
- However, this alone does not allow us to construct a valid continuous dual polynomial.

Exact recovery by convex optimization: Sketch of the proof
Adding a correction term,
$q(t) = \sum_{t_j \in T} \alpha_j K(t - t_j) + \beta_j K'(t - t_j)$,
and an extra constraint,
$q(t_k) = v_k$, $q'(t_k) = 0$, $t_k \in T$,
forces q to reach a local maximum at each element of T.
Choosing K to be the square of a Fejér kernel allows us to show that:
- there exist $\alpha$ and $\beta$ satisfying the constraints
- |q| is strictly bounded by one on the off-support
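This construction can be prototyped numerically. The sketch below builds a squared Fejér kernel from its Fourier coefficients (the autoconvolution of a triangle) and solves the interpolation constraints; the cutoff, spike locations, and sign pattern are made up, and the separation is taken generously above $2/f_c$, so this is an illustration rather than a proof-grade certificate:

```python
import numpy as np

f_c = 50
T = np.array([0.1, 0.34, 0.5, 0.73, 0.9])   # separations well above 2/f_c
v = np.array([1.0, -1.0, 1.0, 1.0, -1.0])   # sign pattern to interpolate

# K = square of a Fejér kernel, bandlimited to |k| <= f_c: its Fourier
# coefficients are the autoconvolution of a triangle, normalized so K(0) = 1.
m = f_c // 2 + 1
tri = 1.0 - np.abs(np.arange(-(m - 1), m)) / m
c = np.convolve(tri, tri)
c = c / c.sum()
k = np.arange(-(c.size // 2), c.size // 2 + 1)

def K(t, order=0):
    """Evaluate the kernel (order = 0) or its derivatives at the points t."""
    t = np.asarray(t, dtype=float)
    phase = np.exp(2j * np.pi * t[..., None] * k)
    return np.real(phase @ (((2j * np.pi * k) ** order) * c))

# Solve the constraints q(t_j) = v_j, q'(t_j) = 0 for alpha and beta.
D = T[:, None] - T[None, :]
A = np.block([[K(D), K(D, 1)],
              [K(D, 1), K(D, 2)]])
coef = np.linalg.solve(A, np.concatenate([v, np.zeros_like(v)]))
alpha, beta = coef[:T.size], coef[T.size:]

def q(t):
    d = np.asarray(t, dtype=float)[..., None] - T
    return K(d) @ alpha + K(d, 1) @ beta

fine = np.linspace(0.0, 1.0, 4000, endpoint=False)
print(np.abs(q(fine)).max())   # global max is 1, attained on T (up to numerics)
```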

4. Stability

Stability: Super-resolution factor, spatial viewpoint
$\mathrm{SRF} = \lambda_c / \lambda_f$

Stability: Super-resolution factor, spectral viewpoint
$\mathrm{SRF} = f / f_c$

Stability: Noise model
$F_n x(k) = \int_0^1 e^{-i 2\pi k t}\, x(dt)$, $|k| \le f_c$
Noise can be modeled in the spectral domain,
$y = F_n x + w$,
or in the spatial domain,
$s = F_n^* y = P_n x + z$,
where $P_n$ is the projection onto the first n Fourier modes.
Assumption: $\|z\|_{L_1} \le \delta$
Recovery algorithm: $\min_x \|x\|_{TV}$ subject to $\|P_n x - s\|_{L_1} \le \delta$

Stability: Robust recovery
Let $\phi_{\lambda_c}$ be a kernel with width $\lambda_c$ and cutoff frequency $f_c$. If $\|z\|_{L_1} \le \delta$, then
$\|\phi_{\lambda_c} * (x_{\mathrm{est}} - x)\|_{L_1} \lesssim \|z\|_{L_1} \le \delta$.
What can we expect at the finer scale $\phi_{\lambda_f}$?
Theorem [Candès, F. 2012]: If $\Delta(T) \ge 2/f_c$, then the solution $x_{\mathrm{est}}$ to the TV-norm minimization problem satisfies
$\|\phi_{\lambda_f} * (x_{\mathrm{est}} - x)\|_{L_1} \lesssim \mathrm{SRF}^2\, \delta$.

5. Numerical algorithms

Numerical algorithms

Practical implementation

TV-norm minimization is an infinite-dimensional optimization problem.

Primal: $\min_{\tilde{x}} \|\tilde{x}\|_{\mathrm{TV}}$ subject to $\mathcal{F}_n \tilde{x} = y$
First option: discretize $\tilde{x}$, yielding an $\ell_1$-norm minimization problem.

Dual: $\max_{u \in \mathbb{C}^n} \mathrm{Re}\,[y^* u]$ subject to $\|\mathcal{F}_n^* u\|_\infty \le 1$
Second option: recast the dual as a semidefinite program.
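The first option can be sketched in a few lines: on an $N$-point grid the primal becomes a finite-dimensional $\ell_1$ problem, solvable as a linear program. This is an illustrative sketch with assumed grid size, cut-off, and spike amplitudes; splitting $x = x^+ - x^-$ with $x^\pm \ge 0$ makes the objective linear:

```python
import numpy as np
from scipy.optimize import linprog

N, fc = 64, 15                            # grid size and cut-off (assumed values)
n = 2 * fc + 1
k = np.arange(-fc, fc + 1)
t = np.arange(N) / N
F = np.exp(-2j * np.pi * np.outer(k, t))  # n x N discretized Fourier operator

x_true = np.zeros(N)
x_true[[6, 29, 51]] = [1.0, -0.7, 1.3]    # spikes separated by more than 2/fc
y = F @ x_true                            # noiseless low-frequency data

# min ||x||_1  s.t.  F x = y, written as an LP via x = xp - xm, xp, xm >= 0,
# with the complex equality split into real and imaginary parts
c = np.ones(2 * N)
A = np.hstack([F, -F])
A_eq = np.vstack([A.real, A.imag])
b_eq = np.concatenate([y.real, y.imag])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
x_rec = res.x[:N] - res.x[N:]
print(np.max(np.abs(x_rec - x_true)))
```

With the spikes separated by more than $2/f_c$, the LP recovers the discretized signal essentially exactly, consistent with the recovery theorem.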

Numerical algorithms

Semidefinite representation

$\|\mathcal{F}_n^* u\|_\infty \le 1$ is equivalent to the existence of a Hermitian matrix $Q \in \mathbb{C}^{n \times n}$ such that

$\begin{bmatrix} Q & u \\ u^* & 1 \end{bmatrix} \succeq 0, \qquad \sum_{i=1}^{n-j} Q_{i,i+j} = \begin{cases} 1, & j = 0, \\ 0, & j = 1, 2, \ldots, n-1. \end{cases}$
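The easy direction of this equivalence can be sanity-checked numerically: for $u = e_1$ the dual polynomial is a single complex exponential with $|p(t)| \equiv 1$, and $Q = uu^*$ satisfies both constraints (a minimal sketch; the choice of $u$ and $f_c$ is illustrative):

```python
import numpy as np

fc = 5
n = 2 * fc + 1

# u = e_1 gives the dual polynomial p(t) = e^{-i 2 pi fc t}, so |p(t)| = 1 everywhere
u = np.zeros(n, dtype=complex)
u[0] = 1.0
Q = np.outer(u, u.conj())                # Hermitian; here simply e_1 e_1^*

# Block matrix [[Q, u], [u^*, 1]] is PSD since it equals v v^* with v = [u; 1]
M = np.block([[Q, u[:, None]], [u.conj()[None, :], np.ones((1, 1))]])
min_eig = np.linalg.eigvalsh(M).min()

# Diagonal-sum constraints: sum_i Q_{i,i+j} should be 1 for j = 0 and 0 otherwise
diag_sums = np.array([np.trace(Q, offset=j) for j in range(n)]).real

# ...and indeed |F_n^* u| <= 1 on a fine grid
t = np.linspace(0, 1, 2000, endpoint=False)
k = np.arange(-fc, fc + 1)
p_max = np.abs(np.exp(2j * np.pi * np.outer(t, k)) @ u).max()

print(min_eig, diag_sums[:2], p_max)
```

A general bounded $u$ requires a less trivial $Q$ (obtained e.g. from a spectral factorization of $1 - |p(t)|^2$), which is exactly what the SDP solver finds.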

Numerical algorithms

Support detection

By strong duality, $\mathcal{F}_n^* \hat{u}$ interpolates the sign of $\hat{x}$.

Numerical algorithms

Experiment: support recovery by solving the SDP

f_c              25            50            75            100
Average error    6.66 x 10^-9  1.70 x 10^-9  5.58 x 10^-10 2.96 x 10^-10
Maximum error    1.83 x 10^-7  8.14 x 10^-8  2.55 x 10^-8  2.31 x 10^-8

For each f_c: 100 random signals with |T| = f_c/4 and $\Delta(T) \ge 2/f_c$.


Related work and conclusion

Related work
- Super-resolution of spike trains by l1-norm minimization in seismic prospecting [Claerbout, Muir 1973; Levy, Fullagar 1981; Santosa, Symes 1986]
- Guarantees on recovery of positive k-sparse signals from 2k+1 Fourier coefficients by l1-norm minimization [Donoho, Tanner; Fuchs 2005]
- Recovery of lacunary Fourier series coefficients [Kahane 2011]
- Finite rate of innovation [Dragotti, Vetterli, Blu 2007]
- Bounds on the modulus of continuity of super-resolution [Donoho 1992]
- Super-resolution by greedy methods [Fannjiang, Liao 2012]
- Random undersampling of low-frequency coefficients [Tang et al. 2012]

Related work and conclusion

Conclusion

Stable super-resolution is possible via tractable non-parametric methods based on convex optimization.

Note on single-molecule imaging: joint work with E. Candès, V. Morgenshtern, and the Moerner lab at Stanford.

Thank you