Super-resolution via Convex Programming
Carlos Fernandez-Granda (joint work with Emmanuel Candès)
Structure and Randomness in System Identification and Learning, IPAM, 1/17/2013
Index
1 Motivation
2 Sparsity is not enough
3 Exact recovery by convex optimization
4 Stability
5 Numerical algorithms
6 Related work and conclusion
1 Motivation
Single-molecule imaging
[Figure: frames 1–3 of a fluorescence microscopy sequence.]
The microscope receives light from fluorescent molecules.
Few molecules are active in each frame (sparsity).
Multiple (~10,000) frames are recorded and processed individually.
The results from all frames are combined to reveal the underlying signal.
Single-molecule imaging
Bad news: the resolution of our measurements is too low.
Good news: there is structure in the signal.
Limits of resolution in imaging
In any optical imaging system, diffraction imposes a fundamental limit on resolution.
What is super-resolution?
Retrieving fine-scale structure from low-resolution data.
Equivalently, extrapolating the high end of the spectrum.
Super-resolution of point sources
Super-resolution of structured signals from bandlimited data arises in many applications:
- reconstruction of sub-wavelength structure in conventional imaging,
- reconstruction of sub-pixel structure in electronic imaging,
- spectral analysis of multitone signals,
- similar problems in signal processing, radar, spectroscopy, medical imaging, astronomy, geophysics, etc.
The signal of interest is often modeled as a superposition of point sources:
- celestial bodies in astronomy,
- line spectra in speech analysis,
- fluorescent molecules in single-molecule microscopy.
Mathematical model
Signal: a superposition of delta measures,
x = Σ_j a_j δ_{t_j},  a_j ∈ ℂ,  t_j ∈ T ⊂ [0, 1].
Measurements: low-pass Fourier coefficients, y = F_n x,
y(k) = ∫₀¹ e^{−i2πkt} x(dt) = Σ_j a_j e^{−i2πk t_j},  k ∈ ℤ, |k| ≤ f_c.
Number of measurements: n = 2 f_c + 1.
Under what conditions is it possible to recover x from y?
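As a concrete illustration of the measurement model (a minimal numpy sketch; the spike locations, amplitudes, and cutoff below are made-up example values, not from the slides):

```python
import numpy as np

def lowpass_measurements(t, a, fc):
    """Low-pass Fourier coefficients y(k) = sum_j a_j exp(-i 2 pi k t_j), |k| <= fc."""
    k = np.arange(-fc, fc + 1)                       # n = 2 fc + 1 frequencies
    return np.exp(-2j * np.pi * np.outer(k, t)) @ a

t = np.array([0.12, 0.34, 0.78])   # spike locations in [0, 1)
a = np.array([1.0, -0.5, 2.0])     # spike amplitudes
fc = 20
y = lowpass_measurements(t, a, fc)

print(len(y))   # n = 2*fc + 1 = 41 measurements
print(y[fc])    # the k = 0 coefficient is the total mass, sum_j a_j
```

Note that y[fc] corresponds to k = 0, since the frequencies are indexed from −f_c to f_c.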
Equivalent problem: line spectra estimation
Swapping time and frequency:
Signal: a superposition of sinusoids,
x(t) = Σ_j a_j e^{i2πω_j t},  a_j ∈ ℂ,  ω_j ∈ T ⊂ [0, 1].
Measurements: equispaced samples x(1), x(2), x(3), ..., x(n).
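The equivalence is easy to check numerically (an illustrative sketch; frequencies and amplitudes are made up): equispaced time samples of a superposition of sinusoids coincide, up to the sign convention in the exponent, with Fourier coefficients of a spike train supported at the frequencies.

```python
import numpy as np

omega = np.array([0.12, 0.34, 0.78])      # frequencies in [0, 1)
a = np.array([1.0 + 0.5j, -0.5, 2.0])     # complex amplitudes

# Line-spectra view: n equispaced time samples x(1), ..., x(n)
n = 41
samples = np.array([np.sum(a * np.exp(2j * np.pi * omega * t))
                    for t in range(1, n + 1)])

# Spike view: Fourier coefficients of sum_j a_j delta_{omega_j} at k = 1, ..., n
k = np.arange(1, n + 1)
coeffs = np.exp(2j * np.pi * np.outer(k, omega)) @ a

print(np.allclose(samples, coeffs))  # True: same data with time and frequency swapped
```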
Can you find the spikes?
[Figure: low-resolution data from which the spike locations must be inferred.]
2 Sparsity is not enough
Compressed sensing
Compressed sensing theory establishes robust recovery of spikes from random Fourier measurements [Candès, Romberg & Tao 2004].
Crucial insight: the measurement operator is well conditioned when acting upon sparse signals (restricted isometry property).
Is this the case in the super-resolution setting?
Simple experiment:
- discretize the support to N = 4096 points,
- restrict the signal to an interval of 48 contiguous points,
- measure n DFT coefficients,
- define the super-resolution factor (SRF) = N / n,
- ask how well conditioned the inverse problem is if we know the support.
Simple experiment
[Figure: singular values of the restricted measurement operator for SRF = 2, 4, 8, 16.]
For SRF = 4, measuring any unit-norm vector in a subspace of dimension 24 results in a signal with norm less than 10⁻⁷.
At an SNR of 145 dB, recovery is impossible by any method.
Prolate spheroidal sequences
There seems to be a sharp transition between signals that are preserved and signals that are completely suppressed.
This phenomenon can be characterized asymptotically using Slepian's prolate spheroidal sequences.
These sequences are the eigenvectors of the composition of a time-limiting operator P_T and a frequency-limiting operator P_W, 0 < W ≤ 1/2.
Asymptotically, WT eigenvalues cluster near one while the rest are almost zero [Slepian 1978].
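The clustering can be checked directly (a numerical sketch; the sizes are illustrative, and the code uses the convention that W is the one-sided bandwidth, so about 2WT eigenvalues concentrate near one): the discrete prolate spheroidal sequences are the eigenvectors of the T × T sinc kernel matrix.

```python
import numpy as np

T = 64        # time-limiting length
W = 0.125     # one-sided bandwidth, 0 < W <= 1/2

# Sinc kernel matrix whose eigenvectors are Slepian's prolate spheroidal sequences
i = np.arange(T)
d = i[:, None] - i[None, :]
with np.errstate(divide="ignore", invalid="ignore"):
    M = np.where(d == 0, 2 * W, np.sin(2 * np.pi * W * d) / (np.pi * d))

eig = np.sort(np.linalg.eigvalsh(M))[::-1]   # eigenvalues, largest first
print(np.sum(eig > 0.9))   # roughly 2*W*T eigenvalues cluster near one
print(np.sum(eig < 0.1))   # the rest are nearly zero, with a narrow transition
```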
Consequences of Slepian's work
For any value of the SRF above one, there exist signals almost in the null space of the measurement operator.
The norm of the measurements corresponding to k-sparse signals at a fixed SRF decreases exponentially in k:
- for SRF = 1.05 there is a 256-sparse signal such that ‖y‖₂ ≈ 10⁻⁷,
- for SRF = 4 there is a 48-sparse signal such that ‖y‖₂ ≈ 10⁻³³.
For any interval T of length k, there is an irretrievable subspace of dimension ≈ (1 − 1/SRF) k supported on T.
When SRF > 2, many clustered sparse signals are killed by the measurement process.
Compressed sensing vs super-resolution
What is the difference?
Compressed sensing: spectrum interpolation.
Super-resolution: spectrum extrapolation.
Conclusion
Robust super-resolution of arbitrary sparse signals is impossible.
We can only hope to recover signals that are not too clustered.
In this work, this structural assumption is captured by the minimum separation of the support T of a signal:
Δ(T) = inf_{(t, t′) ∈ T × T : t ≠ t′} |t − t′|.
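A small helper makes the definition concrete (a sketch; the slides write |t − t′|, and in the periodic setting this distance is usually taken with wrap-around on [0, 1), which is what the code below assumes):

```python
import numpy as np

def min_separation(T):
    """Minimum separation of a support T in [0, 1), with wrap-around distance."""
    t = np.sort(np.asarray(T, dtype=float))
    if len(t) < 2:
        return np.inf
    gaps = np.diff(t)               # gaps between consecutive sorted points
    wrap = t[0] + 1.0 - t[-1]       # gap across the 0/1 boundary
    return min(gaps.min(), wrap)

print(min_separation([0.1, 0.5, 0.95]))   # ~0.15: the wrap-around gap 0.1 + 1 - 0.95
```

The infimum over all distinct pairs is always attained by two adjacent points, so checking consecutive gaps (plus the wrap-around gap) suffices.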
3 Exact recovery by convex optimization
Noiseless case
Recovery by solving
min_x̃ ‖x̃‖_TV subject to F_n x̃ = y,
over all finite complex measures x̃ supported on [0, 1].
Total-variation norm of a complex measure ν:
‖ν‖_TV = sup Σ_j |ν(B_j)|,
the supremum being over all finite partitions {B_j} of [0, 1]. This is
- not the total variation of a piecewise constant function,
- the continuous counterpart of the ℓ₁ norm:
if x = Σ_j a_j δ_{t_j}, then ‖x‖_TV = Σ_j |a_j|.
Noiseless recovery
y(k) = ∫₀¹ e^{−i2πkt} x(dt) = Σ_j a_j e^{−i2πk t_j},  k ∈ ℤ, |k| ≤ f_c.
Theorem [Candès, F. 2012]: if the minimum separation of the signal obeys
Δ(T) ≥ 2/f_c := 2 λ_c,
then x is recovered exactly by total-variation norm minimization.
- Infinite precision: no discretization of the support is required.
- Recovers up to (2λ_c)⁻¹ = f_c/2 ≈ n/4 spikes from n low-frequency measurements.
- If x is real, the condition relaxes to Δ(T) ≥ 1.87 λ_c.
Lower bound
Consider signals supported on a finite grid of N points.
We look for adversarial sign patterns.
[Figure: ratio N/n (x-axis) against the largest minimum distance (y-axis) at which ℓ₁-norm minimization fails, for |T| = 2, 5, 10, 20, 50; the red line corresponds to Δ(T) = λ_c.]
Higher dimensions
Signal: x = Σ_j a_j δ_{t_j},  a_j ∈ ℝ,  t_j ∈ T ⊂ [0, 1]².
Measurements: low-pass 2D Fourier coefficients,
y(k) = ∫_{[0,1]²} e^{−i2π⟨k,t⟩} x(dt) = Σ_j a_j e^{−i2π⟨k,t_j⟩},  k ∈ ℤ², |k| ≤ f_c.
Theorem [Candès, F. 2012]: TV-norm minimization yields exact recovery if
Δ(T) ≥ 2.38/f_c := 2.38 λ_c.
In dimension d, Δ(T) ≥ C_d λ_c suffices, where C_d only depends on d.
Extensions
Signal: an (l − 1) times continuously differentiable piecewise smooth function,
x = Σ_{t_j ∈ T} 1_{(t_{j−1}, t_j)} p_j(t),  t_j ∈ T ⊂ [0, 1],
where each p_j(t) is a polynomial of degree l.
Measurements: low-pass Fourier coefficients,
y(k) = ∫₀¹ e^{−i2πkt} x(t) dt,  k ∈ ℤ, |k| ≤ f_c.
Recovery:
min_x̃ ‖x̃^{(l+1)}‖_TV subject to F_n x̃ = y.
Corollary: TV-norm minimization yields exact recovery if Δ(T) ≥ 2 λ_c.
Sparse recovery
If we discretize the support, we obtain a sparse recovery problem in an overcomplete Fourier dictionary.
Previous theory based on dictionary incoherence is very weak here, due to the high column correlation.
If N = 20,000 and n = 1,000: [Dossal 2005] guarantees recovery of 3 spikes; our result guarantees n/4 = 250 spikes.
Additional structural assumptions thus allow for a more precise theoretical analysis.
Sketch of the proof
A sufficient condition for exact recovery of a signal supported on T: for any v ∈ ℂ^{|T|} with |v_j| = 1, there exists a low-frequency trigonometric polynomial
q(t) = Σ_{k=−f_c}^{f_c} c_k e^{i2πkt}   (1)
obeying
q(t_j) = v_j for t_j ∈ T,  and  |q(t)| < 1 for t ∈ [0, 1] \ T.   (2)
Sketch of the proof
Interpolating the sign pattern (±1) with a low-frequency polynomial becomes challenging if the minimum separation is small.
Sketch of the proof
Interpolation with a low-frequency kernel K:
q(t) = Σ_{t_j ∈ T} α_j K(t − t_j),
where α is a vector of coefficients and q is constrained to satisfy
q(t_k) = Σ_{t_j ∈ T} α_j K(t_k − t_j) = v_k,  t_k ∈ T.
If K is a sinc function, this is equivalent to a least-squares construction.
Kernels with faster decay, such as the Fejér kernel or the Gaussian kernel [Kahane 2011], yield better results.
However, this alone does not suffice to construct a valid continuous dual polynomial.
Sketch of the proof
Adding a correction term,
q(t) = Σ_{t_j ∈ T} [α_j K(t − t_j) + β_j K′(t − t_j)],
and an extra constraint,
q(t_k) = v_k,  q′(t_k) = 0,  t_k ∈ T,
forces |q| to reach a local maximum at each element of T.
Choosing K to be the square of a Fejér kernel allows one to show that
- there exist α and β satisfying the constraints,
- q is strictly bounded by one on the off-support.
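The construction can be carried out numerically (a sketch of the proof's interpolation scheme; the cutoff, support, and sign pattern below are illustrative): take K to be the square of a Fejér kernel with cutoff f_c, solve the 2|T| × 2|T| linear system for α and β, and check that |q| stays below one away from the support.

```python
import numpy as np

fc = 50                                    # cutoff frequency (illustrative)
T = np.array([0.1, 0.2, 0.34, 0.5, 0.75])  # support with separation >= 2 / fc
v = np.array([1.0, -1.0, 1.0, 1.0, -1.0])  # sign pattern to interpolate

# Fourier coefficients of the squared Fejer kernel: a triangle convolved with
# itself, normalized so that K(0) = 1; frequencies run over |k| <= fc.
N = fc // 2 + 1
tri = 1.0 - np.abs(np.arange(-(N - 1), N)) / N
c = np.convolve(tri, tri)
c = c / c.sum()
k = np.arange(-fc, fc + 1)

def K(t, deriv=0):
    """Evaluate the kernel (or its deriv-th derivative) at the points t."""
    E = np.exp(2j * np.pi * np.outer(np.atleast_1d(t), k))
    return (E @ (c * (2j * np.pi * k) ** deriv)).real

# Linear system enforcing q(t_j) = v_j and q'(t_j) = 0 at every support point
D = T[:, None] - T[None, :]
K0 = K(D.ravel()).reshape(D.shape)
K1 = K(D.ravel(), 1).reshape(D.shape)
K2 = K(D.ravel(), 2).reshape(D.shape)
A = np.block([[K0, K1], [K1, K2]])
sol = np.linalg.solve(A, np.concatenate([v, np.zeros_like(v)]))
alpha, beta = sol[:len(T)], sol[len(T):]

def q(t):
    """Dual polynomial q(t) = sum_j alpha_j K(t - t_j) + beta_j K'(t - t_j)."""
    dt = np.atleast_1d(t)[:, None] - T[None, :]
    Q0 = K(dt.ravel()).reshape(dt.shape)
    Q1 = K(dt.ravel(), 1).reshape(dt.shape)
    return Q0 @ alpha + Q1 @ beta

grid = np.linspace(0, 1, 4000, endpoint=False)
print(np.max(np.abs(q(T) - v)))   # interpolation constraints hold
print(np.max(np.abs(q(grid))))    # |q| bounded by 1, reached only near T
```

With this choice of N, the product kernel has bandwidth 2(N − 1) = f_c, so q is a valid low-frequency polynomial.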
4 Stability
Super-resolution factor: spatial viewpoint
SRF = λ_c / λ_f.
Super-resolution factor: spectral viewpoint
SRF = f / f_c.
Noise model
F_n x(k) = ∫₀¹ e^{−i2πkt} x(dt),  |k| ≤ f_c.
Noise can be modeled in the spectral domain,
y = F_n x + w,
or in the spatial domain,
s = F_n* y = P_n x + z,
where P_n is the projection onto the first n Fourier modes.
Assumption: ‖z‖_{L¹} ≤ δ.
Recovery algorithm:
min_x̃ ‖x̃‖_TV subject to ‖P_n x̃ − s‖_{L¹} ≤ δ.
Robust recovery
Let φ_{λ_c} be a kernel of width λ_c and cutoff frequency f_c. If ‖z‖_{L¹} ≤ δ, then
‖φ_{λ_c} ∗ (x_est − x)‖_{L¹} ≲ ‖z‖_{L¹} ≤ δ.
What can we expect at the finer scale φ_{λ_f}?
Theorem [Candès, F. 2012]: if Δ(T) ≥ 2/f_c, then the solution x_est to the TV-norm minimization problem satisfies
‖φ_{λ_f} ∗ (x_est − x)‖_{L¹} ≲ SRF² δ.
5 Numerical algorithms
Practical implementation
TV-norm minimization is an infinite-dimensional optimization problem.
Primal:
min_x̃ ‖x̃‖_TV subject to F_n x̃ = y.
First option: discretize x and minimize the ℓ₁ norm.
Dual:
max_{u ∈ ℂⁿ} Re⟨y, u⟩ subject to ‖F_n* u‖_∞ ≤ 1.
Second option: recast the dual as a semidefinite program.
Semidefinite representation
The constraint ‖F_n* u‖_∞ ≤ 1 is equivalent to: there exists a Hermitian matrix Q ∈ ℂ^{n×n} such that
[ Q   u ]
[ u*  1 ]  ⪰ 0,
Σ_{i=1}^{n−j} Q_{i,i+j} = 1 if j = 0, and 0 for j = 1, 2, ..., n − 1.
Support detection
By strong duality, F_n* û interpolates the sign of x̂.
Experiment
Support recovery by solving the SDP:

f_c             25          50          75          100
Average error   6.66e-09    1.70e-09    5.58e-10    2.96e-10
Maximum error   1.83e-07    8.14e-08    2.55e-08    2.31e-08

For each f_c, 100 random signals with |T| = f_c/4 and Δ(T) ≥ 2/f_c.
6 Related work and conclusion
Related work
- Super-resolution of spike trains by ℓ₁-norm minimization in seismic prospecting [Claerbout, Muir 1973; Levy, Fullagar 1981; Santosa, Symes 1986].
- Guarantees on recovery of positive k-sparse signals from 2k + 1 Fourier coefficients by ℓ₁-norm minimization [Donoho, Tanner; Fuchs 2005].
- Recovery of lacunary Fourier series coefficients [Kahane 2011].
- Finite rate of innovation [Dragotti, Vetterli, Blu 2007].
- Bounds on the modulus of continuity of super-resolution [Donoho 1992].
- Super-resolution by greedy methods [Fannjiang, Liao 2012].
- Random undersampling of low-frequency coefficients [Tang et al. 2012].
Conclusion
Stable super-resolution is possible via tractable non-parametric methods based on convex optimization.
Note on single-molecule imaging: joint work with E. Candès, V. Morgenshtern and the Moerner lab at Stanford.
Thank you