Super-resolution via Convex Programming


1 Super-resolution via Convex Programming. Carlos Fernandez-Granda (joint work with Emmanuel Candès). Structure and Randomness in System Identification and Learning, IPAM, 1/17/2013.

2 Index
1. Motivation
2. Sparsity is not enough
3. Exact recovery by convex optimization
4. Stability
5. Numerical algorithms
6. Related work and conclusion

3 Motivation

4-8 Motivation: Single-molecule imaging
[Figure: three frames of single-molecule imaging data]
- Microscope receives light from fluorescent molecules
- Few molecules are active in each frame (sparsity)
- Multiple (~10,000) frames are recorded and processed individually
- Results from all frames are combined to reveal the underlying signal

9-11 Motivation: Single-molecule imaging
[Figure: measured single-molecule data]
- Bad news: the resolution of our measurements is too low
- Good news: there is structure in the signal

12-13 Motivation: Limits of resolution in imaging
In any optical imaging system, diffraction imposes a fundamental limit on resolution.

14-15 Motivation: What is super-resolution?
- Retrieving fine-scale structure from low-resolution data
- Equivalently, extrapolating the high end of the spectrum

16-19 Motivation: Super-resolution of point sources
Super-resolution of structured signals from bandlimited data arises in many applications:
- Reconstruction of sub-wavelength structure in conventional imaging
- Reconstruction of sub-pixel structure in electronic imaging
- Spectral analysis of multitone signals
- Similar problems in signal processing, radar, spectroscopy, medical imaging, astronomy, geophysics, etc.
The signal of interest is often modeled as a superposition of point sources:
- Celestial bodies in astronomy
- Line spectra in speech analysis
- Fluorescent molecules in single-molecule microscopy

20-22 Motivation: Mathematical model
Signal: superposition of delta measures
$$x = \sum_j a_j \delta_{t_j}, \qquad a_j \in \mathbb{C},\ t_j \in T \subset [0,1]$$
Measurements: low-pass Fourier coefficients $y = F_n x$,
$$y(k) = \int_0^1 e^{-i2\pi kt}\, x(dt) = \sum_j a_j e^{-i2\pi k t_j}, \qquad k \in \mathbb{Z},\ |k| \le f_c$$
Number of measurements: $n = 2f_c + 1$.
Under what conditions is it possible to recover $x$ from $y$?
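To make the measurement model concrete, here is a minimal numerical sketch; the spike locations, amplitudes, and cutoff are illustrative choices, not values from the talk:

    import numpy as np

    fc = 50                              # cutoff frequency
    n = 2 * fc + 1                       # number of measurements
    ks = np.arange(-fc, fc + 1)          # observed Fourier modes

    t = np.array([0.1, 0.42, 0.73])      # spike locations t_j in [0, 1)
    a = np.array([1.0, -0.6, 0.8])       # amplitudes a_j (real here for simplicity)

    # y(k) = sum_j a_j exp(-i 2 pi k t_j), |k| <= fc
    y = np.exp(-2j * np.pi * np.outer(ks, t)) @ a
    print(y.shape)                       # (101,) = n low-frequency coefficients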

23-25 Motivation: Equivalent problem — line spectra estimation
Swapping time and frequency:
Signal: superposition of sinusoids
$$x(t) = \sum_j a_j e^{i2\pi \omega_j t}, \qquad a_j \in \mathbb{C},\ \omega_j \in T \subset [0,1]$$
Measurements: equispaced samples $x(1), x(2), x(3), \dots, x(n)$

26-27 Motivation: Can you find the spikes?
[Figures: low-pass data, with and without the underlying spike train]

28 Sparsity is not enough

29-32 Sparsity is not enough: Compressed sensing
- Compressed sensing theory establishes robust recovery of spikes from random Fourier measurements [Candès, Romberg & Tao 2004]
- Crucial insight: the measurement operator is well conditioned when acting upon sparse signals (restricted isometry property)
- Is this the case in the super-resolution setting?
Simple experiment:
- discretize the support to N = 4096 points
- restrict the signal to an interval of 48 contiguous points
- measure n DFT coefficients
- super-resolution factor (SRF) = N/n
- how well conditioned is the inverse problem if we know the support?

33-34 Sparsity is not enough: Simple experiment
[Figure: singular values of the restricted measurement operator for SRF = 2, 4, 8, 16, on a log scale from 1e-17 to 1e-01]
- For SRF = 4, measuring any unit-normed vector in a subspace of dimension 24 results in a signal with norm less than $10^{-7}$
- At an SNR of 145 dB, recovery is impossible by any method
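A back-of-the-envelope version of this experiment; the grid size and interval width are taken from the slide, while the normalization and frequency indexing are my own choices (the point is the qualitative collapse of the small singular values):

    import numpy as np

    N = 4096                          # grid size
    width = 48                        # contiguous support interval
    for SRF in [2, 4, 8, 16]:
        n = N // SRF                  # number of low-frequency DFT coefficients
        fc = n // 2
        ks = np.arange(-fc, fc)       # n observed frequencies
        t = np.arange(width) / N      # the 48 contiguous grid points
        A = np.exp(-2j * np.pi * np.outer(ks, t)) / np.sqrt(n)
        sv = np.linalg.svd(A, compute_uv=False)
        print(SRF, sv.min(), (sv < 1e-7).sum())   # tiny singular values appear as SRF grows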

35-38 Sparsity is not enough: Prolate spheroidal sequences
- There seems to be a sharp transition between signals that are preserved and signals that are completely suppressed
- This phenomenon can be characterized asymptotically using Slepian's prolate spheroidal sequences
- These sequences are the eigenvectors of the concatenation of a time-limiting operator $P_T$ and a frequency-limiting operator $P_W$, $0 < W \le 1/2$
- Asymptotically, $WT$ eigenvalues cluster near one while the rest are almost zero [Slepian 1978]

39-44 Sparsity is not enough: Consequences of Slepian's work
- For any value of the SRF above one, there exist signals almost in the null space of the measurement operator
- The norm of the measurements corresponding to k-sparse signals for a fixed SRF decreases exponentially in k
- For SRF = 1.05 there is a 256-sparse signal whose measurement vector y has negligible norm
- For SRF = 4 there is a 48-sparse signal whose measurement vector y has negligible norm
- For any interval T of length k, there is an irretrievable subspace of dimension $(1 - 1/\mathrm{SRF})\,k$ supported on T
- When SRF > 2, many clustered sparse signals are killed by the measurement process

45-47 Sparsity is not enough: Compressed sensing vs super-resolution
What is the difference?
- Compressed sensing: spectrum interpolation
- Super-resolution: spectrum extrapolation

48-50 Sparsity is not enough: Conclusion
- Robust super-resolution of arbitrary sparse signals is impossible
- We can only hope to recover signals that are not too clustered
- In this work, this structural assumption is captured by the minimum separation of the support T of a signal:
$$\Delta(T) = \inf_{(t,t') \in T \,:\, t \neq t'} |t - t'|$$
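For intuition, the minimum separation of a finite support is easy to compute; a small sketch (treating [0, 1) with wrap-around distance is my assumption, natural since the signal is periodic here):

    import numpy as np

    def min_separation(T):
        """Minimum wrap-around distance between distinct support points in [0, 1)."""
        T = np.sort(np.asarray(T, dtype=float))
        gaps = np.diff(np.append(T, T[0] + 1.0))  # include the gap across 0/1
        return gaps.min()

    print(min_separation([0.1, 0.42, 0.73]))      # 0.31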

51 Exact recovery by convex optimization

52-56 Exact recovery by convex optimization: Noiseless case
Recovery by solving
$$\min_{\tilde x} \|\tilde x\|_{TV} \quad \text{subject to} \quad F_n \tilde x = y,$$
over all finite complex measures $\tilde x$ supported on [0, 1].
Total-variation norm of a complex measure $\nu$:
$$\|\nu\|_{TV} = \sup \sum_{j=1}^{\infty} |\nu(B_j)|$$
(supremum over all finite partitions $B_j$ of [0, 1])
- not the total variation of a piecewise constant function
- the continuous counterpart of the $\ell_1$ norm
- if $x = \sum_j a_j \delta_{t_j}$, then $\|x\|_{TV} = \sum_j |a_j|$

57-60 Exact recovery by convex optimization: Noiseless recovery
$$y(k) = \int_0^1 e^{-i2\pi kt}\, x(dt) = \sum_j a_j e^{-i2\pi k t_j}, \qquad k \in \mathbb{Z},\ |k| \le f_c$$
Theorem [Candès, F. 2012]. If the minimum separation of the signal obeys
$$\Delta(T) \ge 2/f_c := 2\,\lambda_c,$$
then x is recovered exactly by total-variation norm minimization.
- Infinite precision
- Recovers $(2\lambda_c)^{-1} = f_c/2 \approx n/4$ spikes from n low-frequency measurements
- If x is real, the condition relaxes to $\Delta(T) \ge 1.87\,\lambda_c$

61-64 Exact recovery by convex optimization: Lower bound
- Consider signals supported on a finite grid of N points
- We look for adversarial sign patterns
- We plot the ratio N/n (x-axis) against the largest minimum distance (y-axis) at which $\ell_1$-norm minimization fails
[Figure: failure curves for |T| = 2, 5, 10, 20, 50; the red line corresponds to $\Delta(T) = \lambda_c$]

65-67 Exact recovery by convex optimization: Higher dimensions
Signal:
$$x = \sum_j a_j \delta_{t_j}, \qquad a_j \in \mathbb{R},\ t_j \in T \subset [0,1]^2$$
Measurements: low-pass 2D Fourier coefficients
$$y(k) = \int_{[0,1]^2} e^{-i2\pi \langle k, t\rangle}\, x(dt) = \sum_j a_j e^{-i2\pi \langle k, t_j\rangle}, \qquad k \in \mathbb{Z}^2,\ \|k\|_\infty \le f_c$$
Theorem [Candès, F. 2012]. TV-norm minimization yields exact recovery if
$$\Delta(T) \ge 2.38/f_c := 2.38\,\lambda_c.$$
In dimension d, exact recovery holds if $\Delta(T) \ge C_d\,\lambda_c$, where $C_d$ only depends on d.

68-70 Exact recovery by convex optimization: Extensions
Signal: an $\ell - 1$ times continuously differentiable piecewise-smooth function
$$x = \sum_{t_j \in T} 1_{(t_{j-1},\, t_j)}\, p_j(t), \qquad t_j \in T \subset [0,1],$$
where $p_j(t)$ is a polynomial of degree $\ell$.
Measurements: low-pass Fourier coefficients
$$y(k) = \int_0^1 e^{-i2\pi kt}\, x(dt), \qquad k \in \mathbb{Z},\ |k| \le f_c$$
Recovery:
$$\min_{\tilde x} \|\tilde x^{(\ell+1)}\|_{TV} \quad \text{subject to} \quad F_n \tilde x = y$$
Corollary. TV-norm minimization yields exact recovery if $\Delta(T) \ge 2\,\lambda_c$.

71-74 Exact recovery by convex optimization: Sparse recovery
If we discretize the support, this becomes a sparse recovery problem in an overcomplete Fourier dictionary:
- Previous theory based on dictionary incoherence is very weak, due to high column correlation
- If N = 20,000 and n = 1,000: [Dossal 2005] guarantees recovery of 3 spikes; our result guarantees n/4 = 250 spikes
- Additional structural assumptions allow for a more precise theoretical analysis
A discretized sketch of the recovery program follows.
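Here is a minimal sketch of the discretized program: $\ell_1$ minimization over an N-point grid with exact low-frequency data. The grid size, cutoff, and spikes are illustrative, and the spikes are placed on the grid and well separated so that exact recovery is expected:

    import numpy as np
    import cvxpy as cp

    N, fc = 256, 20                                # grid size and cutoff (illustrative)
    ks = np.arange(-fc, fc + 1)
    grid = np.arange(N) / N
    F = np.exp(-2j * np.pi * np.outer(ks, grid))   # low-pass Fourier matrix

    idx = np.array([30, 120, 200])                 # on-grid spikes, separation > 2/fc
    a = np.array([1.0, -0.7, 0.5])
    y = F[:, idx] @ a

    x = cp.Variable(N, complex=True)
    prob = cp.Problem(cp.Minimize(cp.norm1(x)), [F @ x == y])
    prob.solve(solver=cp.SCS)
    print(np.nonzero(np.abs(x.value) > 1e-4)[0])   # recovered support: 30, 120, 200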

75-76 Exact recovery by convex optimization: Sketch of the proof
Sufficient condition for exact recovery of a signal supported on T: for any $v \in \mathbb{C}^{|T|}$ with $|v_j| = 1$, there exists a low-frequency trigonometric polynomial
$$q(t) = \sum_{k=-f_c}^{f_c} c_k e^{i2\pi kt} \qquad (1)$$
obeying
$$q(t_j) = v_j,\ t_j \in T, \qquad |q(t)| < 1,\ t \in [0,1] \setminus T. \qquad (2)$$
Interpolating the sign pattern with a low-frequency polynomial becomes challenging if the minimum separation is small.

77-82 Exact recovery by convex optimization: Sketch of the proof
Interpolation with a low-frequency kernel K:
$$q(t) = \sum_{t_j \in T} \alpha_j K(t - t_j),$$
where $\alpha$ is a vector of coefficients and q is constrained to satisfy
$$q(t_k) = \sum_{t_j \in T} \alpha_j K(t_k - t_j) = v_k, \qquad t_k \in T.$$
- If K is a sinc function, this is equivalent to a least-squares construction
- Kernels with faster decay, such as the Fejér kernel or the Gaussian kernel [Kahane 2011], yield better results
- However, this does not suffice to construct a valid continuous dual polynomial
Adding a correction term,
$$q(t) = \sum_{t_j \in T} \alpha_j K(t - t_j) + \beta_j K'(t - t_j),$$
and an extra constraint,
$$q(t_k) = v_k, \quad q'(t_k) = 0, \qquad t_k \in T,$$
forces q to reach a local maximum at each element of T. Choosing K to be the square of a Fejér kernel allows one to show that
- there exist α and β satisfying the constraints
- q is strictly bounded by one on the off-support
A numerical sketch of this construction follows.
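The construction can be carried out numerically. The sketch below follows the recipe above with a squared Fejér kernel; the cutoff, support, and sign pattern are my own illustrative choices, and the kernel derivatives are evaluated exactly through its Fourier series rather than in closed form:

    import numpy as np

    fc = 50                                   # cutoff (even)
    m = fc // 2 + 1                           # Fejer parameter: K is band-limited to |k| <= fc
    T = np.array([0.12, 0.38, 0.61, 0.82])    # support, separation > 2/fc
    v = np.array([1.0, -1.0, 1.0, -1.0])      # sign pattern to interpolate

    def K(t):
        """Squared Fejer kernel (fourth power of a Dirichlet-type ratio), K(0) = 1."""
        t = np.atleast_1d(np.asarray(t, dtype=float))
        out = np.ones_like(t)
        mask = np.abs(np.sin(np.pi * t)) > 1e-12
        g = np.sin(np.pi * m * t[mask]) / (m * np.sin(np.pi * t[mask]))
        out[mask] = g ** 4
        return out

    # K is a trigonometric polynomial of degree fc, so M > 2*fc equispaced
    # samples determine its Fourier coefficients exactly.
    M = 4 * fc
    c = np.fft.fft(K(np.arange(M) / M)) / M
    freqs = np.fft.fftfreq(M, d=1.0 / M)      # integer frequencies

    def K_deriv(t, order):
        """Evaluate the order-th derivative of K via its Fourier series."""
        t = np.asarray(t, dtype=float)[..., None]
        w = (2j * np.pi * freqs) ** order if order else np.ones(M)
        return np.real(np.exp(2j * np.pi * freqs * t) @ (w * c))

    # Solve for alpha, beta so that q(t_k) = v_k and q'(t_k) = 0 on T.
    D = T[:, None] - T[None, :]
    A = np.block([[K_deriv(D, 0), K_deriv(D, 1)],
                  [K_deriv(D, 1), K_deriv(D, 2)]])
    coef = np.linalg.solve(A, np.concatenate([v, np.zeros(len(T))]))
    alpha, beta = coef[:len(T)], coef[len(T):]

    # Check that |q| < 1 away from the support.
    tt = np.linspace(0, 1, 5000, endpoint=False)
    q = K_deriv(tt[:, None] - T[None, :], 0) @ alpha \
        + K_deriv(tt[:, None] - T[None, :], 1) @ beta
    print(np.max(np.abs(q)))                  # 1.0, attained only at the support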

83 Stability

84-85 Stability: Super-resolution factor
Spatial viewpoint: $\mathrm{SRF} = \lambda_c / \lambda_f$. Spectral viewpoint: $\mathrm{SRF} = f / f_c$.

86-90 Stability: Noise model
$$F_n x(k) = \int_0^1 e^{-i2\pi kt}\, x(dt), \qquad |k| \le f_c$$
Noise can be modeled in the spectral domain,
$$y = F_n x + w,$$
or in the spatial domain,
$$s = F_n^* y = P_n x + z,$$
where $P_n$ is the projection onto the first n Fourier modes.
Assumption: $\|z\|_{L_1} \le \delta$.
Recovery algorithm:
$$\min_{\tilde x} \|\tilde x\|_{TV} \quad \text{subject to} \quad \|P_n \tilde x - s\|_{L_1} \le \delta$$
A discretized sketch of the noisy program follows.
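A discretized sketch of the noisy program: on a grid the TV norm becomes an $\ell_1$ norm. For simplicity this sketch enforces fidelity in the spectral domain with an $\ell_2$ bound, a common variant that deviates from the spatial-domain $L_1$ constraint on the slide; all sizes are illustrative:

    import numpy as np
    import cvxpy as cp

    N, fc = 256, 20
    ks = np.arange(-fc, fc + 1)
    F = np.exp(-2j * np.pi * np.outer(ks, np.arange(N) / N))

    idx, a = np.array([30, 120, 200]), np.array([1.0, -0.7, 0.5])
    rng = np.random.default_rng(0)
    w = 0.01 * (rng.standard_normal(len(ks)) + 1j * rng.standard_normal(len(ks)))
    y = F[:, idx] @ a + w                          # noisy low-pass measurements
    delta = np.linalg.norm(w)

    x = cp.Variable(N, complex=True)
    prob = cp.Problem(cp.Minimize(cp.norm1(x)),
                      [cp.norm(F @ x - y, 2) <= delta])
    prob.solve(solver=cp.SCS)
    print(np.nonzero(np.abs(x.value) > 0.05)[0])   # clusters around the true spikes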

91-93 Stability: Robust recovery
Let $\phi_{\lambda_c}$ be a kernel with width $\lambda_c$ and cutoff frequency $f_c$. If $\|z\|_{L_1} \le \delta$, then
$$\|\phi_{\lambda_c} * (x_{\mathrm{est}} - x)\|_{L_1} \lesssim \|z\|_{L_1} \le \delta.$$
What can we expect for $\phi_{\lambda_f}$?
Theorem [Candès, F. 2012]. If $\Delta(T) \ge 2/f_c$, then the solution $x_{\mathrm{est}}$ to the TV-norm minimization problem satisfies
$$\|\phi_{\lambda_f} * (x_{\mathrm{est}} - x)\|_{L_1} \lesssim \mathrm{SRF}^2\, \delta.$$

94 Numerical algorithms

95-97 Numerical algorithms: Practical implementation
TV-norm minimization is an infinite-dimensional optimization problem.
Primal:
$$\min_{\tilde x} \|\tilde x\|_{TV} \quad \text{subject to} \quad F_n \tilde x = y$$
First option: discretize $\tilde x$ and solve an $\ell_1$-norm minimization problem.
Dual:
$$\max_{u \in \mathbb{C}^n} \mathrm{Re}\,[y^* u] \quad \text{subject to} \quad \|F_n^* u\|_\infty \le 1$$
Second option: recast the dual as a semidefinite program.

98 Numerical algorithms: Semidefinite representation
$\|F_n^* u\|_\infty \le 1$ is equivalent to the existence of a Hermitian matrix $Q \in \mathbb{C}^{n \times n}$ such that
$$\begin{bmatrix} Q & u \\ u^* & 1 \end{bmatrix} \succeq 0, \qquad \sum_{i=1}^{n-j} Q_{i,i+j} = \begin{cases} 1, & j = 0, \\ 0, & j = 1, 2, \dots, n-1. \end{cases}$$

99 Numerical algorithms: Support detection
By strong duality, $F_n^* \hat u$ interpolates the sign of $\hat x$, so the support can be read off from the points where the dual polynomial has unit modulus (see the sketch below).
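A minimal sketch of the SDP and the subsequent support detection; sign conventions, solver choice, and the detection threshold are my assumptions, and the threshold is coarse because SCS is a first-order solver:

    import numpy as np
    import cvxpy as cp

    fc = 15
    n = 2 * fc + 1
    ks = np.arange(-fc, fc + 1)
    t_true = np.array([0.1, 0.4, 0.75])
    a_true = np.array([1.0, -0.7, 0.5])
    y = np.exp(-2j * np.pi * np.outer(ks, t_true)) @ a_true

    u = cp.Variable((n, 1), complex=True)
    Q = cp.Variable((n, n), hermitian=True)
    X = cp.bmat([[Q, u], [u.H, np.ones((1, 1))]])
    constraints = [X >> 0, cp.trace(Q) == 1]
    for j in range(1, n):                 # off-diagonal sums of Q must vanish
        constraints.append(cp.sum(cp.hstack([Q[i, i + j] for i in range(n - j)])) == 0)
    obj = cp.Maximize(cp.real(cp.sum(cp.multiply(np.conj(y)[:, None], u))))
    cp.Problem(obj, constraints).solve(solver=cp.SCS)

    # The dual polynomial (F_n^* u)(t) has modulus one on the support.
    tt = np.linspace(0, 1, 8192, endpoint=False)
    p = np.exp(2j * np.pi * np.outer(tt, ks)) @ u.value[:, 0]
    print(tt[np.abs(np.abs(p) - 1) < 1e-2])   # clusters around 0.1, 0.4, 0.75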

100 Numerical algorithms: Experiment
Support recovery by solving the SDP. For each $f_c$, 100 random signals with $|T| = f_c/4$ and $\Delta(T) \ge 2/f_c$.
[Table: average and maximum support-detection errors for several values of $f_c$]

101 Related work and conclusion

102-108 Related work and conclusion: Related work
- Super-resolution of spike trains by $\ell_1$-norm minimization in seismic prospecting [Claerbout, Muir 1973; Levy, Fullagar 1981; Santosa, Symes 1986]
- Guarantees on recovery of positive k-sparse signals from 2k + 1 Fourier coefficients by $\ell_1$-norm minimization [Donoho, Tanner; Fuchs 2005]
- Recovery of lacunary Fourier series coefficients [Kahane 2011]
- Finite rate of innovation [Dragotti, Vetterli, Blu 2007]
- Bounds on the modulus of continuity of super-resolution [Donoho 1992]
- Super-resolution by greedy methods [Fannjiang, Liao 2012]
- Random undersampling of low-frequency coefficients [Tang et al. 2012]

109 Related work and conclusion: Conclusion
Stable super-resolution is possible via tractable non-parametric methods based on convex optimization.
Note on single-molecule imaging: joint work with E. Candès, V. Morgenshtern and the Moerner lab at Stanford.

110 Thank you


More information

COMPRESSIVE SENSING WITH AN OVERCOMPLETE DICTIONARY FOR HIGH-RESOLUTION DFT ANALYSIS. Guglielmo Frigo and Claudio Narduzzi

COMPRESSIVE SENSING WITH AN OVERCOMPLETE DICTIONARY FOR HIGH-RESOLUTION DFT ANALYSIS. Guglielmo Frigo and Claudio Narduzzi COMPRESSIVE SESIG WITH A OVERCOMPLETE DICTIOARY FOR HIGH-RESOLUTIO DFT AALYSIS Guglielmo Frigo and Claudio arduzzi University of Padua Department of Information Engineering DEI via G. Gradenigo 6/b, I

More information

Agenda. Applications of semidefinite programming. 1 Control and system theory. 2 Combinatorial and nonconvex optimization

Agenda. Applications of semidefinite programming. 1 Control and system theory. 2 Combinatorial and nonconvex optimization Agenda Applications of semidefinite programming 1 Control and system theory 2 Combinatorial and nonconvex optimization 3 Spectral estimation & super-resolution Control and system theory SDP in wide use

More information

Spin Glass Approach to Restricted Isometry Constant

Spin Glass Approach to Restricted Isometry Constant Spin Glass Approach to Restricted Isometry Constant Ayaka Sakata 1,Yoshiyuki Kabashima 2 1 Institute of Statistical Mathematics 2 Tokyo Institute of Technology 1/29 Outline Background: Compressed sensing

More information

IMAGE RECONSTRUCTION FROM UNDERSAMPLED FOURIER DATA USING THE POLYNOMIAL ANNIHILATION TRANSFORM

IMAGE RECONSTRUCTION FROM UNDERSAMPLED FOURIER DATA USING THE POLYNOMIAL ANNIHILATION TRANSFORM IMAGE RECONSTRUCTION FROM UNDERSAMPLED FOURIER DATA USING THE POLYNOMIAL ANNIHILATION TRANSFORM Anne Gelb, Rodrigo B. Platte, Rosie Renaut Development and Analysis of Non-Classical Numerical Approximation

More information

Multiscale Geometric Analysis: Thoughts and Applications (a summary)

Multiscale Geometric Analysis: Thoughts and Applications (a summary) Multiscale Geometric Analysis: Thoughts and Applications (a summary) Anestis Antoniadis, University Joseph Fourier Assimage 2005,Chamrousse, February 2005 Classical Multiscale Analysis Wavelets: Enormous

More information

Tractable Upper Bounds on the Restricted Isometry Constant

Tractable Upper Bounds on the Restricted Isometry Constant Tractable Upper Bounds on the Restricted Isometry Constant Alex d Aspremont, Francis Bach, Laurent El Ghaoui Princeton University, École Normale Supérieure, U.C. Berkeley. Support from NSF, DHS and Google.

More information

Compressed Sensing Off the Grid

Compressed Sensing Off the Grid Compressed Sensing Off the Grid Gongguo Tang, Badri Narayan Bhaskar, Parikshit Shah, and Benjamin Recht Department of Electrical and Computer Engineering Department of Computer Sciences University of Wisconsin-adison

More information

SUPER-RESOLUTION OF POSITIVE SOURCES: THE DISCRETE SETUP. Veniamin I. Morgenshtern Emmanuel J. Candès

SUPER-RESOLUTION OF POSITIVE SOURCES: THE DISCRETE SETUP. Veniamin I. Morgenshtern Emmanuel J. Candès SUPER-RESOLUTIO OF POSITIVE SOURCES: THE DISCRETE SETUP By Veniamin I. Morgenshtern Emmanuel J. Candès Technical Report o. 205-0 May 205 Department of Statistics STAFORD UIVERSITY Stanford, California

More information

Sparse Proteomics Analysis (SPA)

Sparse Proteomics Analysis (SPA) Sparse Proteomics Analysis (SPA) Toward a Mathematical Theory for Feature Selection from Forward Models Martin Genzel Technische Universität Berlin Winter School on Compressed Sensing December 5, 2015

More information

Various signal sampling and reconstruction methods

Various signal sampling and reconstruction methods Various signal sampling and reconstruction methods Rolands Shavelis, Modris Greitans 14 Dzerbenes str., Riga LV-1006, Latvia Contents Classical uniform sampling and reconstruction Advanced sampling and

More information

Sparse Solutions of an Undetermined Linear System

Sparse Solutions of an Undetermined Linear System 1 Sparse Solutions of an Undetermined Linear System Maddullah Almerdasy New York University Tandon School of Engineering arxiv:1702.07096v1 [math.oc] 23 Feb 2017 Abstract This work proposes a research

More information

Thresholds for the Recovery of Sparse Solutions via L1 Minimization

Thresholds for the Recovery of Sparse Solutions via L1 Minimization Thresholds for the Recovery of Sparse Solutions via L Minimization David L. Donoho Department of Statistics Stanford University 39 Serra Mall, Sequoia Hall Stanford, CA 9435-465 Email: donoho@stanford.edu

More information