MUSIC for Single-Snapshot Spectral Estimation: Stability and Super-resolution

Size: px
Start display at page:

Download "MUSIC for Single-Snapshot Spectral Estimation: Stability and Super-resolution"

Transcription

1 MUSIC for Single-Snapshot Spectral Estimation: Stability and Super-resolution Wenjing iao Albert Fannjiang April 5, 4 Abstract This paper studies the problem of line spectral estimation in the continuum of a bounded interval with one snapshot of array measurement. The single-snapshot measurement data is turned into a Hankel data matrix which admits the Vandermonde decomposition and is suitable for the MUSIC algorithm. The MUSIC algorithm amounts to finding the null space (the noise space) of the Hankel matrix, forming the noise-space correlation function and identifying the s smallest local minima of the noise-space correlation as the frequency set. In the noise-free case exact reconstruction is guaranteed for any arbitrary set of frequencies as long as the number of measurement data is at least twice the number of distinct frequencies to be recovered. In the presence of noise the stability analysis shows that the perturbation of the noise-space correlation is proportional to the spectral norm of the noise matrix as long as the latter is smaller than the smallest (nonzero) singular value of the noiseless Hankel data matrix. Under the assumption that the true frequencies are separated by at least twice the Rayleigh length, the stability of the noise-space correlation is proved by means of novel discrete Ingham inequalities which provide bounds on the largest and smallest nonzero singular values of the noiseless Hankel data matrix. The numerical performance of MUSIC is tested in comparison with other algorithms such as BO-OMP and SDP (TV-min). While BO-OMP is the stablest algorithm for frequencies separated above 4R, MUSIC becomes the best performing one for frequencies separated between R and 3R. Also, MUSIC is more efficient than other methods. MUSIC truly shines when the frequency separation drops to one R and below when all other methods fail. Indeed, the resolution of MUSIC apparently decreases to zero as noise decreases to zero. Keywords: MUSIC algorithm, spectral estimation, stability, super-resolution, discrete Ingham inequalities. Introduction The field of Compressive Sensing (CS) [3] has provided us with a new technology of reconstructing a signal from a small number of linear measurements. With a new exceptions, signals considered in Statistical and Applied Mathematical Sciences Institute (SAMSI) and Department of Mathematics, Duke Unviersity, Durham, NC. Wenjing iao is grateful to support from SAMSI and NSF DMS wjliao@math.duke.edu. Department of Mathematics, University of California, Davis, CA. fannjiang@math.ucdavis.edu.

2 the compressive sensing community are assumed to be sparse under a discrete, finite-dimensional dictionary. However, signals arising in applications such as radar [6], sonar and remote sensing [6] are represented by few parameters on a continuous domain. These signals are usually not sparse under any discrete dictionary but can be approximately sparsely represented by indicator functions on a discrete domain. An approximation error, called gridding error [8, 4] or basis mismatch [9, 4, 7] exists, manifesting the gap between the continuous world and the discrete world. This issue is well illustrated by the line spectral estimation problem [4] as follows. Suppose a signal y(t) consists of linear combinations of s time-harmonic components from the set {e πiω jt : ω j R, j =,..., s}. Consider the noisy signal model y ε (t) = y(t) + ε(t), y(t) = s x j e πiω jt where ε(t) is the external noise. The task of spectral estimation is to find out the frequency support set S = {ω,..., ω s } and the corresponding amplitudes x = [x,..., x s ] T from a finite data sampled at, say, t =,,,, M N. Because the signal y(t) depends nonlinearly on S, the main difficulty of spectral estimation lies in identifying S. The amplitudes x can be recovered by solving least squares once S is found. More explicitly, denote (with a slight abuse of notation) y = [y k ] M, ε = [ε k] M and yε = y + ε C M+, with y k = y(k), y ε k = yε (k) and ε k = ε(k). et j= φ M (ω) = [ e πiω e πiω... e πimω ] T C M+ () be the imaging vector of size M + at the frequency ω and define Φ M = [φ M (ω ) φ M (ω )... φ M (ω s )] C (M+) s. The single-snapshot formulation of spectral estimation takes the form () y ε = Φ M x + ε. (3) Again the main difficulty is in the (nonlinear) dependence of Φ M on the unknown frequencies in S. In addition, with the sampling times t =,,,, M N, one can only hope to determine frequencies on the torus T = [, ) with the natural metric d(ω j, ω l ) = min n Z ω j + n ω l. One can attempt to linearize (3) by expanding the matrix Φ M via setting up a grid { G = N, N,..., N } [, ), (4) N where N is some large integer, and writing the spectral estimation problem in the form a linear inversion problem y ε = Ax + ε (5)

3 where A := [ ( ) φ M N φ M ( N ) ( )] N... φ M C (M+) N N Discretizing [, ) as in (4) amounts to rounding frequencies on the continuum to the nearest grid points in G, giving rise to a gridding error which is roughly proportional to the grid spacing. On the other hand, as N increases, correlation among adjacent columns of A also increases dramatically [8]. A key unit of frequency separation is the Rayleigh ength, roughly the minimum resolvable separation of two objects with equal intensities in classical resolution theory [9]. Mathematically, the Rayleigh ength (R) is the distance between the center and the first zero of the Dirichlet kernel M/ D(ω) = e πitω sin πωm dt =. πω M/ Hence R = /M. The ratio F = N/M between R and the grid spacing is called the refinement factor in [8] and super-resolution factor in [4]. The higher F is, the more coherent the measurement matrix A becomes.. Single-snapshot MUSIC In this paper, to circumvent the gridding problem, we reformulate the spectral estimation problem (3) in the form of multiple measurement vectors suitable for the application of the MUltiple Signal Classification (MUSIC) algorithm [38, 37], widely used in signal processing[43, 8, 3] and array imaging [5,, 9]. Most state-of-the-art spectral estimation methods ([4] and references therein) assume many snapshots of array measurement as well as statistical assumptions on measurement noise. In contrast, we pursue below a deterministic approach to spectral estimation with a single snapshot of array measurement in common with [8]. Fixing a positive integer < M, we form the Hankel matrix y y... y M y y... y M + H = Hankel(y) =..... (6) y y +... y M Since its first appearance in Prony s method [35] the Hankel data matrix (6) plays an important role in modern methods such as the state space method [36] and the matrix pencil method [5]. It is straightforward to verify that Hankel(y) with y = Φ M x admits the Vandermonde decomposition H = Φ X(Φ M ) T, X = diag(x,..., x s ) (7) 3

4 with the Vandermonde matrix... e πiω e πiω... e πiωs Φ = (e πiω ) (e πiω )... (e πiωs )..... (e πiω ) (e πiω )... (e πiωs ) Here we use a special property of Fourier measurements: a time translation corresponds to a frequency phase modulation. et H ε = Hankel(y ε ) and E = Hankel(ε). The multiple measurement vector formulation of spectral estimation takes the form H ε = H + E = Φ X(Φ M ) T + E. (8) The crux of MUSIC is this: In the noiseless case with s and M + s the ranges of H and Φ coincide and are a proper subspace (the signal space) of C +. et the noise space be the orthogonal complement of the signal space in C +. Then S can be identified as the zero set of the orthogonal projection of the imaging vector φ (ω) of size + onto the noise space. More specifically, let the Singular Value Decomposition (SVD) of H be written as H = [ U }{{} (+) s U }{{} ] diag(σ, σ,..., σ s,,..., ) [ V }{{}}{{} (+) (+ s) (+) (M +) (M +) s V }{{} ] (M +) (M + s) with the singular values σ σ σ 3 σ s >. The signal and noise spaces are exactly the column spaces of U and U respectively. The orthogonal projection P onto the noise space is given by P w = U (U w), w C+. Under mild assumptions one can prove that ω S if and only if P φ (ω) =. Hence S can be identified as the zeros of the noise-space correlation function or the peaks of the imaging function R(ω) = P φ (ω) φ (ω) = U φ (ω) φ (ω), J(ω) = φ (ω) P φ (ω) = φ (ω) U φ (ω). The following fact is the basis for noiseless MUSIC (See Appendix A for proof). Theorem. Suppose ω k ω l k l. If then s, M + s, (9) ω S R(ω) = J(ω) =. Remark. Condition (9) says that the number of measurement data (M + ) s suffices to guarantee exact reconstruction by the MUSIC algorithm. 4

5 For the noisy data matrix H ε let the SVD be written as H ε = [ U ε }{{} (+) s U ε }{{} ] diag(σ, ε σ, ε..., σs, ε σs+, ε...) }{{} (+) (+ s) (+) (M +) [ V ε }{{} (M +) s V ε ] }{{} (M +) (M + s) with the singular values σ ε σε σε 3. The noise-space correlation function and imaging function become R ε (ω) = Pε φ (ω) φ = U ε φ (ω) (ω) φ (ω) and respectively with P ε = U ε (U ε ). The MUSIC algorithm is given by J ε (ω) = R ε (ω) = φ (ω) P ε φ (ω) = φ (ω) U ε φ (ω), MUSIC for Spectral Estimation Input: y ε C M+, s,. ) Form matrix H ε = Hankel(y ε ) C (+) (M +). ) SVD: H ε = [U ε U ε]diag(σε,..., σε s,...)[v ε V ε], where U ε C(+) s. 3) Compute imaging function J ε (ω) = φ (ω) / U ε φ (ω). Output: Ŝ = {ω corresponding to s largest local maxima of J ε (ω)}. Figure shows a noise-space correlation function and an imaging function in the noise-free case. True frequencies are exactly located where the noise-space correlation vanishes and the imaging function peaks. e. Noise space correlation 4.5 x 4 Imaging function (a) Noise-space correlation R(ω) (b) Imaging function J(ω) Figure : Plots of R(ω) and J(ω) when M =, = 5 and there are equally spaced objects on T represented by red dots. The MUSIC algorithm as formulated above requires the number of frequencies s as an input. There are some techniques [, 3] for evaluating how many objects are present in the event that such information is not available. When σ s E, s can be easily estimated based on the singular value distribution of H ε due to Weyl s theorem [44]. 5

6 Proposition (Weyl s Theorem). σ ε j σ j E, j =,,... As a result, σj ε E, j s + and σs ε σ s E. Hence σs ε σs+ ε, creating a gap between σs ε and {σj ε : j s + }. An example is shown in Figure 3. Before describing our main results, we pause to define notations to be used in the subsequent sections. For an m n matrix A, let σ max (A) and σ min (A) denote the maximum and minimum nonzero singular values of A, respectively. Denote the spectral norm and Frobenius norm of A by A and A F. et x max = max j=,...,s x j and x min = min j=,...,s x j. The dynamic range of x is defined as x max /x min. Fixing S = {ω,..., ω s } T, we define the matrix Φ N N such that For simplicity, we denote Φ M = Φ M.. Contribution of the present work Φ N N kj = e πikω j, k = N,..., N, j =,..., s. The main contribution of the paper is a stability analysis for the MUSIC algorithm with respect to general support set S and external noise. In the MUSIC algorithm frequency candidates are identified at s smallest local minima of the noise-space correlation which measures how much an imaging vector is correlated with the noise space. In noise-free case, the noise-space correlation function R(ω) vanishes exactly on S. For the noisy case we prove R ε (ω) R(ω) α E, α = 4σ + E (σ s E ) () which holds for any support set S T. To make the bounds () explicit and more meaningful, we prove the discrete Ingham inequalites (Corollary ) which implies σ s (M ) σ (M ) under the gap assumption x min x max q = min j l d(ω j, ω l ) > max ( π π q 4 ) ( ) π π(m ) q 4 M ( 4 π + π q + 3 ) ( 4 π + π(m ) q + 3 ) M ( ( π π 4 ) (, M π π 4 M () () ) ). (3) Furthermore, we prove that for every ω j S, there exists a local minimizer ˆω j of R ε such that ˆω j ω j as noise decreases to. To relax the restriction on the minimum separation between adjacent frequencies, condition (3) suggests that should be about M/ and then the resolving power of the present form of MUSIC is as good as /M = R. By the results of [], the spectral norm of the random Hankel matrix E from a zero mean, independently and identically distributed (i.i.d.) sequence of a finite variance is on the order of 6

7 M log M for M while σs is on the order of M (with M/). In this case the factor α in () is almost always positive for sufficiently large M regardless of the variance of noise and R ε (ω) R(ω) = O(M / ) up to a logarithmic factor. Our analysis can be easily extended to other settings where the MUSIC algorithm can be applied, such as the estimation of Directions of Arrivals (DOA) [3] and inverse scattering [5,, 9]..3 Comparison with other works Among existing works, [7] is most closely related to the present work. Central to the results of [7] is a stability criterion expressed in terms of the noise-to-signal ratio, the dynamic range and, when the objects are located exactly on a grid of spacing R, the restricted isometry constants from the theory of compressed sensing []. The emphasis there is on sparse (i.e. undersampling), and typically random, measurement. For the gridless setting considered in the present work, the implications of the analysis in [7] are not explicit due to lack of the restricted isometry property for a well-separated set in the continuum. This barrier is overcome in the present work by the discrete Ingham inequalities and the resulting bounds on singular values. Demanet et al. [8] propose another approach to spectral estimation with a single snap-shot array measurement. Their scheme involves a selection step and a pruning step. In the selection step, any ω satisfying sin (φ (ω), RangeH ε ) η for some judicious choice of η > is kept as a frequency candidate. This is based on their estimate sin (φ (ω j ), RangeH ε ) cs E x min σ min (Φ M ) φ (ω j ), ω j S for some constant c >. In comparison, our estimates ()-() are more comprehensive as they apply to T and more explicit due to discrete Ingham inequalities. In addition, the choice of the thresholding parameter η in [8] can affect the performance while MUSIC does not contain any thresholding parameter. In [8], we introduce the techniques of Band exclusion and ocal Optimization (BO) to enhance standard compressive sensing algorithms to mitigate the highly coherent, ill-conditioned sensing matrices A in (3) with arbitrarily large N. The performance guarantee in [8] assumes q 3R and ensures reconstruction of S to the accuracy of R. In [4, 3], Candès and Fernandez-Granda propose total variation minimization and show that, under the assumption of q 4R, that the minimizer yields an reconstruction error linearly proportional to noise level with a magnifying constant proportional to F (F = the refinement or super-resolution factor). In [4], Tang et al. propose the atomic norm (equivalent to the total variation norm in D) minimization for spectral estimation and show exact reconstruction with noiseless and compressed measurements. ike [7], a main emphasis in [4] is on sparse measurements. Unfortunately, the effect of noise is not considered in [4]. For numerical implementation a SemiDefinite Programming (SDP) on the dual problem is solved in [4, 3] where numerical efficiency, stable retrieval of primal solutions and how to identify frequencies from the SDP output may become a problem. Detailed numerical comparisons of the MUSIC algorithm with Band-excluded ocally Optimized Orthogonal Matching Pursuit (BOOMP) of [8], SemiDefinite Programming (SDP) of [4, 3, 4] and Matched filtering using prolates enhanced by the Band-excluded and ocally Optimized technique [5] are presented in Section 4. 7

8 As the SVD step being the primary computational cost, MUSIC has low computational complexity compared to other existing methods. As we will also see, MUSIC is also among the most accurate algorithms. Finally, MUSIC is the only algorithm that can resolve two frequencies closely spaced below R. Indeed, the resolution of MUSIC can be arbitrarily small for sufficiently small noise. The paper is organized as follows. We estimate nonzero singular values of rectangular Vandermonde matrices with nodes on the unit disk in Section. Perturbation theory is presented in in Section 3 and numerical experiments are provided in Section 4. Vandermonde matrices with nodes on the unit circle Performance of the MUSIC algorithm in the presence of noise is crucially dependent on σ and σ s. To pave a way for the stability analysis, we discuss the singular values of the rectangular Vandermonde matrix Φ in this section. Our estimate is motivated by the classical Ingham inequalities [6, 45, (pp.6-64)] for nonharmonic Fourier series whose exponents satisfy a gap condition. Specifically Ingham inequalities address the stability problem of complex exponential sums in the system {e πiωjt, t [ T/, T/], ω j R, j =,..., s}. Proposition. et s N and T > be given. If the ordered frequencies {ω,..., ω s } fulfill the gap condition ω j+ ω j q > /T, j =,..., s, (4) then the system of complex exponentials {e πiω jt, t [ T/, T/], j =,..., s} form a Riesz basis of its span in [ T/, T/], i.e., ( ) c π T q T T/ T/ for all complex vectors c = (c j ) s j= Cs. s c j e πiω jt dt 4 π j= ( + 4T q ) c (5) Remark. Ingham inequalities can be considered as a generalization of the Parseval s identity for non-harmonic Fourier series. The gap condition is necessary for a positive lower bound in (5) but the upper bound always holds. We prove the discrete version of Ingham inequalities. Theorem. Suppose S satisfies the gap condition When is an even integer, q = min j l d(ω j, ω l ) > π ( π 4 ( π π q 4 ) c ( 4 Φ c π + ). (6) π q + 3 ) c, c C s. (7) 8

9 In other words, and σ max(φ ) 4 π + π q + 3 (8) σ min(φ ) π π q 4. (9) When is an odd integer, ( π π q 4 ) c ( Φ c + ) ( 4 π + π( + ) q + 3 ) c +, c C s.() Proof of Theorem is provided in Appendix B. Remark 3. The difference between the bounds of the discrete and the continuous Ingham inequalities is O(/) which is negligible when is large. The upper bound in (7) holds even when the gap condition (6) is violated; however, (6) is necessary for the positivity of the lower bound. Remark 4. Some form of discrete Ingham inqualities are developed in [33, 34] for the analysis of the control/observation properties of numerical schemes of the -d wave equation. The main result therein is that when time integrals in (5) are replaced by discrete sums on a discrete mesh, discrete Ingham inequalities converge to the continuous one as the mesh becomes infinitely fine. Their asymptotic analysis, however, do not provide the non-asymptotic results stated in Theorem. 3 Perturbation of noise-space correlation In this section we use tools in classical matrix perturbation theory [4, 7] and develop a perturbation estimate on the noise-space correlation function, the key ingredient of the MUSIC algorithm. Our main results are presented in Theorem 3, Corollary and Theorem 4 and proofs are provided in Appendix C. and C.. Theorem 3. Suppose s, M + s and E < σ s. Then R ε (ω) R(ω) P ε P := sup φ C + P ε φ P φ φ 4σ + E (σ s E ) E. () In particular, for ω j S, R(ω j ) = and R ε (ω j ) E x min σ min ((Φ M ) T ) φ (ω j ), j =,,..., s. () Remark 5. While () is a general perturbation estimate valid on T, including S, () is a sharper estimate for ω j S. Remark 6. Suppose noise vector ε contains i.i.d. random variables of variance σ, E E F = O(σ). For fixed M and S, R ε (ω) R(ω) = O(σ). 9

10 Theorem 3 holds for all signal models with any support set S. In view of the Vandermonde decomposition (7) we next derive explicit bounds for the perturbation of noise-space correlation by combining Theorem and 3. Corollary. et and M be even integers. Suppose S satisfies the following gap condition ( ( q = min d(ω j, ω l ) > max j l π π 4 ) ( ), M π π 4 ). (3) M Then R ε (ω) R(ω) E (M ) 4α + ( ) α E (M ) E (M ), (4) where ( α = x max 4 π + π q + 3 ) ( 4 π + π(m ) q + 3 ) M and ( α = x min π π q 4 ) ( ) π π(m ) q 4. M Remark 7. Corollary is stated in the case that both and M are even integers. However, the results hold in other cases with a slightly different α based on (7) and (). Remark 8. As noted in Section., for i.i.d. noise E grows like M log M which is much smaller than (M ) with M as M. As a consequence, ( ) log M R ε (ω) R(ω) = O M under the gap condition (3). How close are the MUSIC estimates, namely the s lowest local minimizers of R ε (ω), to the true frequencies which are the zeros of R(ω)? While we can not at the moment answer this question directly, the following asymptotic result says that every true frequency has a near-by strict local minimizer of R ε in its vicinity converging to it. Denote [φ (ω)] = dφ (ω)/dω. Theorem 4. Suppose P [φ (ω)] ω=ωj, ω j S. (5) When E is sufficiently small (details in (47) and (48)), there exists a strict local minimizer ˆω j of R ε (ω) near ω j such that ˆω j ω j min ξ (ω j,ˆω j ) Q (ξ) 4αη() E (6) where Q(ω) = R (ω), η() = π / + and α = 4σ + E (σ s E ).

11 Remark 9. For i.i.d. random variables of variance σ, E = O(σ) for fixed M. Hence as σ. ˆω j ω j = O(σ) Remark. In view of the identity Q (ω j ) = + P [φ (ω)], ω=ωj j cf. (43), assumption (5) is a generic condition that says the true frequencies are not degenerate minimizers of R. Figure shows P [φ (ω)] / [φ (ω)] for various M and suggests that for q 4R, C [φ (ω)] P [φ (ω)] C [φ (ω)], ω [, ) (7) for some constants C, C > independent of M unit: R unit: R unit: R (a) M = 64 (b) M = 8 (c) M = 56 Figure : Function P [φ (ω)] / [φ (ω)] with varied M in the case that frequencies in S are separated by 4R. = M/ and S is marked by red dots. Remark. Under the assumption (3), the right hand side of (6) scales like σ M log M with = M/ for i.i.d. random noise of variance σ. Under the assumption (7), Q (ω j ) = + P [φ (ω j )] C [φ (ω j )] + = C ( ) + = O( ). As shown in the proof of Theorem 4 (Step ), min ξ (ω j,ˆω j ) Q (ξ) > Q (ω j ) = O(M ), M =. As a result ( log M ˆω j ω j = O M 3/ ).

12 Strong stability in the case of well separated objects is demonstrated in Figure 3. et M = 64, = 3 and then R = /64. External noise is i.i.d. Gaussian noise, i.e. ε N(, σ I). Noise to Signal Ratio (NSR) = E( ε )/ y = %. We display singular values of H and H ε in Figure 3(a), noise-space correlation function R(ω) and R ε (ω) in Figure 3(b), and imaging function J ε (ω) in Figure 3(c). With a minimum separation of 4 R imposed on S, R ε (ω) is stably perturbed from R(ω). More importantly, every peak of J ε (ω) is near a true object in S. Simply extracting s largest local maxima of J ε (ω) yields a good reconstruction. 4 singular values exact noisy.9 R(ω) and R ε (ω) exact noisy J ε (ω) unit: R unit: R (a) Singular values of H and H ε (b) R(ω) and R ε (ω) (c) J ε (ω) Figure 3: Perturbation of noise-space correlation and imaging function in the case where objects are separated by 4 R and Noise to Signal Ratio (NSR) = %. In (c) true objects are located at red dots. 4 Numerical experiments A systematic numerical simulation is performed on MUSIC, BOOMP, SDP and Matched Filtering using prolates in this section, showing that MUSIC combines the advantages of strong stability and low computation complexity for the detection of well-separated frequencies and furthermore only MUSIC yields an exact reconstruction in the noise-free case regardless of the distribution of true frequencies and processes the capability of resolving closely spaced frequencies. 4. Algorithms to be tested. We compare the performances of various algorithms on the line spectral estimation problem () with M = and i.i.d. Gaussian noise, i.e. ε N(, σ I) + in(, σ I). Define the Noise-to-Signal Ratio (NSR) = E( ε )/ y = σ (M + )/ y. We test and compare the following algorithms.. The MUSIC algorithm: As suggested by (4) in Corollary we set M =.. Band-excluded and ocally Optimized Orthogonal Matching Pursuit (BOOMP) [8]: BOOMP works with an arbitrarily fine grid with grid spacing l = R/F where F is the refinement/superresolution factor. In the presence of noise it is unnecessary to set an extremely large F. A rule of thumb for the problem of spectral estimation is that F SNR gives rise to a gridding

13 error comparable to the external noise. For instance F = is adequate when SNR 5%. When frequencies are separated above 3R (i.e., in Fig. 4(b), Fig. 5(b) and Fig. 6(a)(b)), we can set the radii of excluded band and local optimization to be R and R, respectively. When frequencies are separated between R and 3R (i.e., in Fig. 6(c)(d), the radii of excluded band and local optimization is set to be R. 3. SemiDefinite Programming (SDP) [3, 4]: The code is from ~cfgranda/superres_sdp_noisy.m where SDP is solved through CVX, a package for specifying and solving convex programs [, 3]. Output of SDP is the dual solution of total variation minimization. In the code, frequencies are identified through root findings of a polynomial and amplitudes are solved through least squares. et ω = [ ω j ] n j= Rn be the frequencies retrieved from the code and x = [ x j ] n j= Cn be the corresponding amplitudes. Usually n is greater than s. A straightforward way of extracting s reconstructed frequencies is by hard thresholding, i.e., picking the frequencies corresponding to the s largest amplitudes in x. We also test if the Band Excluded Thresholding (BET) technique introduced in [8] can improve on hard thresholding and enhance the performance of SDP (Figure 6). BET amounts to trimming ω R n to ˆω R s as follows. Band Excluded Thresholding (BET) Input: ω, x, s, r (radius of excluded band). Initialization: ˆω = [ ]. Iteration: for k =,..., s ) Find j such that x j = max i x i. If x j =, then go to Output. ) Update the support vector: ˆω = [ˆω ; ω j ]. 3) For i = : n If ω i ( ω j r, ω j + r), set x i =. Output: ˆω. When frequencies are separated by at least 4R, we choose r = R. 4. Matched Filtering (MF) using prolates: In [5], Eftekhari and Wakin use matched filtering windowed by the Discrete Prolate Spheroidal (Slepian) Sequence [39] for the same problem while frequencies are extracted by band-excluded and locally optimized thresholding proposed in [8]. In its current form, MF using prolates can not deal with complex-valued amplitudes so it is tested with real-valued amplitudes only. Reconstruction error is measured by Hausdorff distance between the exact (S) and the recovered (Ŝ) sets of frequencies: } min d(ˆω, ω), max d(ˆω, ω). (8) 4. Noise-free case d(ŝ, S) = max {max ˆω Ŝ ω S min ω S ˆω Ŝ In the noise-free case only MUSIC processes a theory of exact reconstruction regardless of the distribution of true frequencies. In theory, BOOMP requires a separation of 3R for approximate 3

14 support recovery while SDP requires a separation of 4R for exact recovery. In this test we use the four algorithms to recover 5 real-valued amplitudes separated by R. Figure 4 shows that MUSIC achieves the accuracy of about.4r while BOOMP, SDP and MF using prolates essentially fail, which implies that certain separation condition is necessary for BOOMP, SDP and MF using prolates..8.6 exact recovered MUSIC exact recovered BOOMP dist =.463R (a) MUSIC. Red: exact; Blue: recovered. d(ŝ, S).4R. exact SDP SDP dist =.838R (b) BOOMP. Red: exact; Blue: recovered. d(ŝ, S).8R. exact DPSS BO based DPSS MF dist =.794R (c) SDP. Red: exact; Blue: recovered. d(ŝ, S).7R (d) MF using prolates. Red: exact; Blue: inverse Fourier transform of y windowed by the first DPSS sequence. Figure 4: Reconstruction of 5 real-valued frequencies separated by R. Dynamic range = and NSR = %. 4.3 Detection of well-separated frequencies Figure 5 shows reconstructions of 5 real-valued frequencies separated by 4R. By extracting 5 largest local maximizers of the imaging function J ε (ω), MUSIC yields a reconstruction distance of.6r. As predicted by the theory in [8], every recovered object of BOOMP is within R 4

15 MUSIC BOOMP 6 exact recovered 6 exact recovered dist =.5549R (a) MUSIC. Red: exact; Blue: recovered. d(ŝ, S).6R. SDP dist =.533R (b) BOOMP. Red: exact; Blue: recovered. d(ŝ, S).5R. BO based DPSS MF 6 4 exact SDP hard thresholding 6 4 exact DPSS picked dist = R (c) SDP. Red: exact; Blue: Primal solution of SDP. Hard thresholding (green) yields d(ŝ, S) 3.94R. The true amplitude around 33R is recovered as two amplitudes and the BET technique can be used to eliminate the smaller one in the step of frequency selection dist =.9668R (d) MF using prolates. Red: exact; Blue: inverse Fourier transform of y ε windowed by the first DPSS sequence; Green: frequencies selected by the BO technique. d(ŝ, S).R. Figure 5: Reconstruction of 5 real-valued amplitudes separated by 4R. Dynamic range = and NSR = %. 5

16 distance from a true one. Indeed, in this simulation BOOMP achieves the best accuracy of.5 R among tested algorithms. The primal solution of SDP is usually not s-sparse and the recovered frequencies tend to cluster around the true ones [] which degrades the accuracy. The Hausdorff distance between the frequencies with the s strongest amplitudes and the true frequencies is 3.94R in this simulation. The BET technique can be used to enhance the accuracy of reconstruction and achieve the accuracy of.3r. Similarly the BO technique introduced in [8] can be applied to improve the result of Matched filtering windowed by the DPSS sequence (the blue curve in Figure 5(d)) and achieve the accuracy of.r. Figure 6 shows the average errors of trials by SDP, BET-enhanced SDP, BOOMP and MUSIC for complex-valued objects separated between 4R and 5R (Fig. 6(a)(b)) or separated between R and 3R (Fig. 6(c)(d)) versus NSR when dynamic range = (Fig. 6(a)(c)) and when dynamic range = (Fig. 6(b)(d)). In this simulation [, ) are fully occupied by frequencies satisfying the separation condition and amplitudes x are complex-valued with random phases. Refinement factor F in BOOMP is adaptive according to the rule: F = max(5, min(/nsr, )). Figure 6 shows that BOOMP is the stablest algorithm while frequencies are separated above 4R and MUSIC becomes the stablest one while frequencies are separated between R and 3R. Simply extracting s largest amplitudes from the SDP solution (black curve) is not a good idea and the BET technique (green curve) can mitigate the problem with SDP. The average running time in Figure 6 shows that MUSIC takes about.33s for one experiment and is the most efficient one among all methods being tested. SDP needs about.5s for one experiment and is computationally most expensive. Running time of BOOMP is dependent on sparsity s and refinement factor F. The running time of BOOMP in Fig. 6(c)(d) is more than the time in Fig. 6(a)(b) as s [33, 5] in Fig. 6(c)(d) and s [, 5] in Fig. 6(a)(b). 4.4 Detection of closely spaced objects Super-resolution in optics refers to the ability of distinguishing objects separated below R. Its simplest version, two-point resolution, is widely used as metric for resolving power of imaging systems [9]. The MUSIC algorithm is known to enjoy the property of high resolution reconstruction in spectral estimation, DOA estimation and sensor array imaging [43, 8, 3]. It is of interest to explore the two-point resolution limit of the MUSIC algorithm in the presence of noise. We run MUSIC algorithm on reconstructions of two randomly phased complex objects with varied separation and varied NSR over trials and then record the average ratio between d(s, Ŝ) and the separation of two objects. Figure 7 displays the color plot of the logarithm to the base of the average ratio between d(s, Ŝ) and the separation of two objects with respect to NSR (x-axis) and object separation (y-axis) in the unit of R. Two-point localization is almost successful if d(s, Ŝ) is below half of the separation (from green to black). A phase transition occurs, manifesting MUSIC s capability of distinguishing two closely spaced complex-valued objects if NSR is below certain level. Exact reconstruction by MUSIC of arbitrarily close objects in the noise-free case is reflected by the blue column near the vertical axis. 6

17 Frequencies separated between 4R and 5R Distance (unit:r) versus NSR Distance (unit:r) versus NSR Average distance (unit:r) in trials SDP SDP with BET BOOMP MUSIC Average distance (unit:r) in trials SDP SDP with BET BOOMP MUSIC NSR 5 5 NSR (a) Dynamic range =. Average running time for SDP and MUSIC in one experiment is.3583s and.367s while the average running time for BOOMP is 6.34s (F = ), 3.788s (F = ) and.76(f = 5). Frequencies separated between R and 3R (b) Dynamic range =. Average running time for SDP and MUSIC in one experiment is.593s and.366s while the average running time for BOOMP is 6.63s (F = ), 3.33s (F = ) and.754s (F = 5). Average distance (unit:r) in trials Distance (unit:r) versus NSR SDP SDP with BET BOOMP MUSIC Average distance (unit:r) in trials Distance (unit:r) versus NSR SDP SDP with BET BOOMP MUSIC NSR (c) Dynamic range =. Average running time for SDP and MUSIC in one experiment is.675s and.3334s while the average running time for BOOMP is s (F = ),.349s (F = 9) and 8.85s (F = 5). 5 5 NSR (d) Dynamic range =. Average running time for SDP and MUSIC in one experiment is.57s and.33s while the average running time for BOOMP is 9.933s (F = ),.59s (F = 9) and 8.54s (F = 5). Figure 6: Average error by SDP, BET-enhanced SDP, BOOMP and MUSIC on complex-valued objects separated between 4R and 5R (a)(b) or separated between R and 3R (c)(d) versus NSR when dynamic range = (a)(c) and when dynamic range = (b)(d). MF using prolates is not included since it is not designed for complex amplitudes. 7

18 The logarithm to the base of (distance error / separation) Separation Noise to signal ratio (%) The logarithm to the base of (distance error / separation) Separation Noise to signal ratio (%) (a) Resolution versus two-object separation (b) Zoom in around origin Figure 7: The color represents the logarithm to the base of the average ratio between d(s, Ŝ) and two-object separation with respect to NSR (x-axis) and object separation (y-axis) in the unit of R. Two-point resolution is successful if the ratio is less than / (from green to black). 5 Conclusions We have provided a stability analysis of the MUSIC algorithm for single-snapshot spectral estimation off the grid. We have proved that perturbation of the noise-space correlation by external noise is roughly proportional to the spectral norm of the noise Hankel matrix with a magnification factor given in terms of maximum and minimum nonzero singular values of the Hankel matrix constructed from the noiseless measurements. Under the assumption of frequency separation R, the magnification factor is explicitly estimated by means of a new version of discrete Ingham inequalities. A systematic numerical study has shown that the MUSIC algorithm enjoys strong stability and low computation complexity for the reconstruction of well-separated frequencies. MUSIC is the only algorithm that can recover arbitrarily closely spaced frequencies as long as the noise is sufficiently small. Acknowledgement Wenjing iao would like to thank Armin Eftekhari for providing their codes and helpful discussions at SAMSI. Appendix A Proof of Theorem In the noise-free case Range(H) and Range(Φ ) coincide if the matrix X(Φ M ) T has full row rank, i.e., Rank (Φ M ) = s, which is guaranteed on the condition that M + s and the frequencies in S are pairwise distinct. emma. If Rank (Φ M ) = s, then Range(H) = Range(Φ ). emma. Rank (Φ ) = s if + s and ω k ω l, k l. 8

19 Proof. If + s, then s s square submatrix Ψ of Φ is a square Vandermonde matrix whose determinant is given by det (Ψ) = (e iπω j e iπω i ). i<j s Clearly, det (Ψ) if and only if ω i ω j, i j. Hence Rank (Ψ) = s which implies Rank (Φ ) = s. Similarly, if + s +, the extended matrix Φ ω = [Φ φ (ω)] has full column rank for any ω / S. As a consequence, ω S if and only if φ (ω) belongs to Range(Φ ). Appendix B Proof of Theorem The proof of Theorem combines techniques used in [6] and [3]. We take and let G(ω) = g(t) = cos π(t.5) g( k )eπikω. (9) (a) g(t) (b) G(ω) / (c) Real part of G(ω)/ Figure 8: Function g(t) and G(ω)/ when = 8. Graphs of g(t), G(ω) / and the real part of G(ω)/ are shown in Figure 8. Function G has the following properties. emma 3.. G(ω + n) = G(ω) for n Z.. G( ω) = e πiω G(ω) and G( ω) = G(ω). 3. ( π ) G() ( π + ). 4. G(ω) π 4 ω + 8 π for ω [, /]. 9

20 Proof.. G(ω + n) = g( k )eπik(ω+n) = g( k )eπikω = G(ω).. G( ω) = = = g( k )e πikω = l= ( ) l g e πi( l)ω by letting l = k l= [cos π( l.5) ] e πi( l)ω = l= [cos π( l.5) ] e πi( l)ω g( l )e πi( l)ω = e πiω g( l )eπilω = e πiω G(ω). l= l= 3. On the one hand, On the other hand, and g = G() = 4. According to the Poisson summation formula, G(ω) = g( k )eπikω = cos π(t.5)dt = π. g( k ), (G() ) g (G() + ). r= g(z)e πi(ω r)z dz = π r= cos π(ω r) 4 (ω r) eiπ(ω r). (3)

21 Hence G(ω) π π r= 4 ω + π cos π(ω r) 4 (ω r) r π 4 ω + π r π 4 ω + 4 [ π 4 (r ω) 4 (r ω) 4 ( + ) 4 () + 4 ( 3 + ] ) 4 () +... as ω [, ], π 4 ω + 4 [ π ] π 4 ω + 4 π = π 4 ω + 8 π. In (3) the difference between the discrete and the continuous case lies in cos π(ω r) π 4 (ω r) r which is bounded above by 8/(π ), and is therefore negligible when is sufficiently large. We start with the following lemma, which paves the way for the proof of Theorem. emma 4. Suppose objects in S satisfy the gap condition Then d(ω j, ω l ) q > ( π π q 8s π ) c for all c C s. ( π π 8s ). (3) π g( k ) (Φ c) k ( π + + π q + 8s π ) c (3)

22 Proof. g( k (Φ ) c) k = g( k s s ) c j e πikω j c l e πikω l = s s c j c l j= l= = G() c + j= g( k )eπik(ω j ω l ) = s G(ω j ω l )c j c l. j= l j j= l= l= s s G(ω j ω l )c j c l It follows from the triangle inequality that s G() c G(ω j ω l )c j c l g( k (Φ ) s c) k G() c + G(ω j ω l )c j c l, j= l j j= l j where s G(ω j ω l )c j c l can be estimated through Property 4 in emma 3. j= l j s G(ω j ω l )c j c l j= l j = s G(ω j ω l ) c j + c l = j= l j s c j G(d(ω j, ω l )) l j j= s c j [ π l j j= s c j G(ω j ω l ) l j j= 4 d (ω j, ω l ) + 8 π s c j 4 s [ π 4 n q + 4 ] as frequencies in S are pairwise separated by q > j= s c j 4 π j= j= n= [ s + n= s c j 4 [ s π + q = c π ( q + 4s ), ] 4 n q s c j 4 [ s π + q j= ( n )] = n + n= j= n= ] ] 4n s c j 4 [ s π + ] q where s denotes the nearest integer smaller than or equal to s/. Kernel g in (3) is crucial for the convergence of the series in (33). Therefore G() c π ( q + 4s ) c The equation above along with Property 3 in emma 3 yields ( π π q 8s ) π c g( k (Φ ) c) k G() c + ( π q + 4s ) c. g( k ) (Φ c) k ( π + + π q + 8s π ) c. (33)

23 The gap condition (3) is derived from the positivity condition of the lower bound, i.e., π π q 8s π >. Proof of Theorem is given below. Proof. Given that S = {ω,..., ω s } [, ) and frequencies are separated above /, there are no more than frequencies in S, i.e., s <. The lower bound in Theorem 4 follows from emma 4 as Φ c = (Φ c) k = (Φ c) k g( k (Φ ) c) k ( π π q 8s ) ( π > π π q 4 ). The gap condition (6) in Theorem 4 is derived from the positivity condition of the lower bound, i.e., ( π π q 4 ) >. We prove the upper bound in Theorem 4 in two cases: is even or is odd. Case : is even. First we substitute with in (3) and obtain g( k (Φ ) ( c) k π + + 4π q + 8s ) 4π c. (34) et D = diag(e πiω, e πiω,..., e πiωs ) and D = (D ). On the one hand, g( k ) (Φ D c)k = 3/ k=/ = (Φ c) k. 3/ k=/ (Φ D c)k = On the other hand, (34) implies g( k ) (Φ D c)k 3/ k=/ g( 4 ) (Φ D c)k (Φ 3 D c)k = (Φ D D c)k g( k (Φ ) D ( c)k π + + 4π q + 8s ) 4π D c ( 4 = π + + π q + 4s ) π c. 3

24 As a result Φ c = (Φ ( 4 s ) ( 4 ) c) k π + + π q +4 π c < π + π q +3 c. When is an even integer, ( π π q 4 ) c ( 4 Φ c π + π q + 3 ) c. Case : is odd. Φ c = + (Φ c) k (Φ + c) k ( 4 < ( + ) π + π( + ) q + 3 ) c +. (35) Eq. (35) above follows from Case as + is an even integer. In summary, when is an odd integer, ( π π q 4 ) c ( Φ c + ) ( 4 π + π( + ) q + 3 ) c +. Appendix C Proof of Theorems in Section 3 C. Proof of Theorem 3 Proof. et H = HH, H ε = H ε H ε and E = HE + EH + EE. Then H ε = H + E, (36) and H = [ U U ] [ Σ Σ ] [ U U ], (37) H ε = [ U ε U ε ] [ Σ ε Σε ] [ ] U ε Σ ε Σε, (38) where Σ = diag(σ,..., σ s ), Σ ε = diag(σε,..., σε s), and Σ ε = diag(σε s+, σε s+,...). Combining (36) and (38) yields [ ] U (H + E) [ [ U ε U ε ] U = U εσε Σε U U εσε Σε ] U U εσε Σε U U εσε Σε. (39) U 4 U ε

25 On the one hand, the (,) entries on both sides of (39) are equal such that U HU ε + U EU ε = U U ε Σ ε Σ ε. Based on (37), we obtain U H = and therefore which implies U EU ε (Σ ε Σ ε ) = U U ε U U ε E (σ ε s) (4) On the other hand, the (,) entries on both sides of (39) are equal such that U HU ε + U EU ε = U U ε Σ ε Σ ε. Based on (37), we obtain U H = Σ Σ U and therefore Σ Σ U U ε + U EU ε = U U ε Σ ε Σ ε. For any φ C + s, so σ s U U ε φ Σ Σ U U ε φ U EU ε φ + U U ε Σ ε Σ ε φ E φ + U U ε (σ ε s+) φ, U U εφ E + (σs+ ε ) U U ε. φ By taking the supremum over φ C + s on the left, we obtain and then σ s U U ε E + (σs+ ε ) U U ε σs, U U ε E σs (σs+ ε. (4) ) et P and P ε be orthogonal projects onto the subspace spanned by columns of U and U ε respectively. For any φ C + P ε φ P φ φ = P P ε φ + P P ε φ P φ φ = P P ε φ P P ε φ φ = U U U ε U ε φ U U U ε U ε φ φ U U ε + U U ε. (4) Eq. (4) together with (4) and (4) imply R ε (ω) R(ω) P ε P ε P = sup φ P φ φ C + φ 5 [ ] (σs) ε + σs (σs+ ε E. )

26 Meanwhile and therefore E H E + E (σ + E ) E, [ ] P ε P (σ + E ) (σs) ε + σs (σs+ ε E. ) According to Proposition, σ ε s σ s E and σ ε s+ E. Then which implies that (σ ε s) + σ s (σ ε s+ ) (σ s E ), P ε P (σ + E ) (σ s E ) E. In particular, while ω is restricted on S, a sharper upper bound in () is derived as follows: Since M + s and true frequencies are pairwise distinct, X(Φ M ) T has full row rank. Denote Y ε = U εσε V ε + U εσε V ε where Σ ε = diag(σε,..., σε s) and Σ ε = diag(σε s+, σε s+,...). Multiplying U ε on the left and the pseudo-inverse of X(Φ M ) T on the right of U ε Σ ε V ε + U ε Σ ε V ε = Φ X(Φ M ) T + E yields Then and Σ ε V ε [X(Φ M ) T ] = U ε Φ + U ε E[X(Φ M ) T ]. U ε φ (ω j ) = U ε Φ e j = U ε E[X(Φ M ) T ] e j Σ ε V ε [X(Φ M ) T ] e j, P ε φ (ω j ) = U ε φ (ω j ) E + σ ε s+ σ min (X(Φ M ) T ) E σ min (X(Φ M ) T ) E x min σ min ((Φ M ) T ). Therefore R ε (ω j ) = Pε φ (ω j ) φ (ω j ) E x min σ min ((Φ M ) T ) φ (ω j ). C. Proof of Theorem 4 Proof. et Q(ω) = R (ω) = φ (ω) U U U U φ (ω) φ (ω) = φ (ω) U U U U φ (ω) + and Q ε (ω) = [R ε (ω)] = φ (ω) U εu ε U εu ε φ (ω). + 6

27 Both Q(ω) and Q ε (ω) are smooth functions and Q (ω) = φ (ω) U U U U [φ (ω)] + [φ (ω)] U U U U φ (ω) + = P φ (ω), P [φ (ω)] + P [φ (ω)], P φ (ω), + Q (ω) = φ (ω) U U U U [φ (ω)] + [φ (ω)] U U U U [φ (ω)] + [φ (ω)] U U U U φ (ω) + = P φ (ω), P [φ (ω)] + P [φ (ω)] + P [φ (ω)], P φ (ω). (43) + et D(ω) = Q ε (ω) Q(ω) and then [Q ε (ω)] = Q (ω) + D (ω), [Q ε (ω)] = Q (ω) + D (ω). First, we derive an upper bound of D (ω) and D (ω) in terms of α, and E. ( + ) D (ω) = P ε φ (ω), P ε [φ (ω)] P φ (ω), P [φ (ω)] + P ε [φ (ω)], P ε φ (ω) P [φ (ω)], P φ (ω) = P ε φ (ω), P ε [φ (ω)] P ε φ (ω), P [φ (ω)] + P ε φ (ω), P [φ (ω)] P φ (ω), P [φ (ω)] + P ε [φ (ω)], P ε φ (ω) P ε [φ (ω)], P φ (ω) + P ε [φ (ω)], P φ (ω) P [φ (ω)], P φ (ω) P ε φ (ω) P ε P [φ (ω)] + P [φ (ω)] P ε P φ (ω) + P ε [φ (ω)] P ε P φ (ω) + P φ (ω) P ε P [φ (ω)] 4 P ε P φ (ω) [φ (ω)]. Since [φ (ω)] = πi[ e πiω e πiω 3e πi3ω... e πiω ] T, we have [φ (ω)] = η() + where η() = π / + and then by Theorem 3. Applying the same technique to D (ω) yields D (ω) 4αη() E, (44) D (ω) 4α [ η () + ζ() ] E, (45) where ζ() = (π) / +. Next we prove that for each ω j S, there exists a strict local minimizer ˆω j of R ε near ω j satisfying (6). Since ω j is a strict local minimizer of Q(ω) and P [φ (ω)] ω=ωj, ω j S (46) 7

28 we have Q (ω j ) = and Q (ω j ) >. We break the following argument into several steps: Step et Q (ω j ) = m j. m j > due to assumption (46). Thanks to the smoothness of Q, there exists δ j > such that For sufficiently small noise satisfying Q (ω) > m j, ω (ω j δ j, ω j + δ j ). 4α[η () + ζ()] E < m j /, (47) we have [Q ε (ω)] > m j /, ω (ω j δ j, ω j + δ j ). Step Consider Q (ω j δ j ) and Q (ω j + δ j ). There exist κ, κ (, δ j ) such that Q (ω j δ j ) = Q (ω j ) + Q (ω j κ )( δ j ) Q (ω j + δ j ) = Q (ω j ) + Q (ω j + κ )δ j. From Step, Q (ω j κ ) > m j and Q (ω j + κ ) > m j, so Q (ω j δ j ) < m j δ j and Q (ω j + δ j ) > m j δ j. According to (44), when noise is sufficiently small such that we have 4αη() E < m j δ j /, (48) [Q ε (ω j δ )] < m j δ j / < and [Q ε (ω j + δ 3 )] > m j δ j / >. [Q ε (ω)] is a smooth function so there exists ˆω j (ω j δ j, ω j + δ j ) such that [Q ε (ˆω j )] = by intermediate value theorem. Step 3 From Step and Step 3 we have obtained an open interval containing ω j : (ω j δ j, ω j + δ j ) such that Q (ω) > m j and [Q ε (ω)] > m j /, ω (ω j δ j, ω j + δ j ). Also there exists ˆω j (ω j δ j, ω j + δ j ) such that [Q ε (ˆω j )] = and [Q ε (ˆω j )] >, so ˆω j is a strict local minimizer of Q ε (ω) and R ε (ω). Step 4 Furthermore = [Q ε (ˆω j )] = Q (ˆω j ) + D (ˆω j ) = Q (ω j ) + Q (ξ j )(ˆω j ω j ) + D (ˆω j ), for some ξ j (ω j, ˆω j ), through Taylor expansion of Q (ω) at ω = ω j. Since Q (ω j ) =, ˆω j ω j min ξ (ω j,ˆω j ) Q (ξ) ˆω j ω j Q (ξ j ) D (ˆω j ) 4αη() E (49) 8

29 Proof of Theorem 4 ends here. Next we provide the argument for the validation of Remark 9 and Remark. Remark 9 Suppose noise vector ε contains i.i.d. random variables of variance σ. For fixed M, E E F = O(σ). Since min ξ (ω j,ˆω j ) Q (ξ) > m j, we have ˆω j ω j as σ and ˆω j ω j = O(σ). Remark The asymptotic rate of ˆω j ω j as M = in the case of q 4R is discussed in Remark. Here we show that condition (47) and (48) hold as M = under assumption (7). The left hand side (l.h.s.) of (47) scales like M M log M. On the right hand side (r.h.s.), m j = P [φ (ω j )] + C + [φ (ω j )] M, as M =. Therefore the r.h.s. of (47) grows faster than the l.h.s. and (47) holds as M =. Next we show that (48) is also valid as M. First and then Q (ω) = P φ (ω), P [φ (ω)] + P [φ (ω)], P φ (ω) P [φ (ω)], P [φ (ω)] + 3 P [φ (ω)], P [φ (ω)] + Q (ω) φ (ω) [φ (ω)] + 6 [φ (ω)] [φ (ω)] + = = O( 3 ) as = O(M 3 ) as M =. In Step, for all ω (ω j δ j, ω j + δ j ), Q (ω) Q (ω j ) can be as large as m j = O(M ). Meanwhile Q (ω) = Q (ω j )+(ω ω j )Q (κ) for some κ (ω j, ω) and then Q (ω) Q (ω j ) = ω ω j Q (κ). Since Q (ω) Q (ω j ) can be as large as O(M ) and Q (κ) O(M 3 ), ω ω j can be as large as O(/M). In other words, δ j = O(/M) in Step. In (48), the l.h.s. scales like M log M and the r.h.s. = m j δ j / = O(M /M) = O(M) as M =. As a result, (48) holds while M = under assumption (7). References [] R. Adamczak, A few remarks on the operator norm of random Toeplitz matrices, Journal of Theoretical Probability 3, pp.85-8,. 9

30 [] E. J. Candès, The restricted isometry property and its implications for compressed sensing, Comptes Rendus Mathematique 346(9), pp , 8. [3] E. J. Candès and C. Fernandez-Granda, Super-resolution from noisy data, Journal of Fourier Analysis and Applications 9(6), pp.9-54, 3. [4] E. J. Candès and C. Fernandez-Granda, Towards a mathematical theory of super-resolution, Communications on Pure and Applied Mathematics 67(6), pp , June 4. [5] M. Cheney, The linear sampling method and the MUSIC algorithm, Inverse Problems 7(4), pp.59,. [6] M. Cheney, A mathematical tutorial on synthetic aperture radar, SIAM review 43(), pp.3-3,. [7] Y. Chi, A. Pezeshki,. Scharf, A. Pezeshki, and R. Calderbank, Sensitivity to basis mismatch in compressed sensing, IEEE Transactions on Signal Processing 59(5), pp.8-95,. [8]. Demanet, D. Needell and N. Nguyen, Super-resolution via superset selection and pruning, Proceedings of the th International Conference on Sampling Theory and Applications, 3. [9] A. J. Den Dekker and A. van den Bos, Resolution: a survey, Journal of Optical Society of America A 4(3), pp , 997. [] A. J. Devaney, Super-resolution processing of multi-static data using time reversal and MU- SIC, Journal of the Acoustical Society of America,. [] A. Di, Multiple source location - a matrix decomposition approach, IEEE Transactions on Acoustics, Speech and Signal Processing 33(5), pp. 86-9, 985. [] D.. Donoho, Superresolution via sparsity constraints, SIAM Journal on Mathematical Analysis 3, pp.39-33, 99. [3] D.. Donoho, Compressed sensing, IEEE Transactions on Information Theory 5(4), pp.89-36, 6. [4] M.F. Duarte and R.G. Baraniuk, Spectral compressive sensing, Applied and Computational Harmonic Analysis 35(), pp. -9, 3. [5] A. Eftekhari and M. B. Wakin, Greed is super: a new iterative method for super-resolution, in preparation. [6] A. Fannjiang, T. Strohmer and P. Yan, Compressed remote sensing of sparse objects, SIAM Journal on Imaging Sciences 3(3), pp ,. [7] A. Fannjiang, The MUSIC algorithm for sparse objects: a compressed sensing analysis, Inverse Problems 7, 353,. [8] A. Fannjiang, and W. iao, Coherence pattern-guided compressive sensing with unresolved grids, SIAM Journal on Imaging Sciences 5(), pp.79-,. 3

31 [9] A. Fannjiang and W. iao, Mismatch and resolution in compressive imaging, Proceedings of SPIE on Wavelets and Sparsity XIV 838,. [] A. Fannjiang and W. iao, Super-resolution by compressive sensing algorithms, Asilomar conference on signals, systems and computers, IEEE Computer Society,. [] C. Fernandez-Granda, Support detection in super-resolution, arxiv:3.39, 3. [] M. Grant and S. Boyd, CVX: Matlab software for disciplined convex programming, version. beta. September 3. [3] M. Grant and S. Boyd, Graph implementations for nonsmooth convex programs, Recent Advances in earning and Control (a tribute to M. Vidyasagar), V. Blondel, S. Boyd, and H. Kimura, editors, pp.95-, ecture Notes in Control and Information Sciences, Springer, 8. [4] M. Herman and T. Strohmer, General Deviants: An Analysis of Perturbations in Compressed Sensing, IEEE Journal of Selected Topics in Signal Processing: Special Issue on Compressive Sensing 4(), pp ,. [5] Y. Hua and T. K. Sarkar, Matrix pencil method for estimating parameters of exponentially damped/undamped sinusoids in noise, IEEE Transactions on Acoustics, Speech and Signal Processing 38(5), pp.84-84, 99. [6] A. E. Ingham, Some trigonometrical inequalities with applications to the theory of series, Mathematische Zeitschrift 4(), pp , 936. [7] I. Ipsen, The eigenproblem and invariant subspaces: perturbation theory, GW Stewart. Birkhuser Boston, pp.7-93,. [8] S. M. Kay and S.. Marple Jr, Spectrum analysis - a modern perspective, Proceedings of the IEEE 69(), pp.38-49, 98. [9] A. Kirsch, The MUSIC algorithm and the factorization method in inverse scattering theory for inhomogeneous media, Inverse problems 8, pp. 5-4,. [3] H. Krim and J. H. Cozzens, A data-based enumeration technique for fully correlated signals, IEEE Transactions on Signal Processing 4(7), pp , 994. [3] H. Krim and M. Viberg, Two decades of array signal processing research: the parametric approach, IEEE Signal Processing Magazine 3(4), pp.67-94, 996. [3] S. Kunis and D. Potts, Stability results for scattered data interpolation by trigonometric polynomials, SIAM Journal on Scientific Computing 9(4), pp , 7. [33] M. Negreanu and E. Zuazua, Discrete Ingham inequalities and applications, Comptes Rendus Mathematique 338(4), pp.8-86, 4. [34] M. Negreanu and E. Zuazua, Discrete Ingham inequalities and applications, SIAM Journal on Numerical Analysis 44(), pp.4-448, 6. 3

MUSIC for Single-Snapshot Spectral Estimation: Stability and Super-resolution

MUSIC for Single-Snapshot Spectral Estimation: Stability and Super-resolution MUSIC for Single-Snapshot Spectral Estimation: Stability and Super-resolution Wenjing iao Albert Fannjiang November 9, 4 Abstract This paper studies the problem of line spectral estimation in the continuum

More information

Combining Sparsity with Physically-Meaningful Constraints in Sparse Parameter Estimation

Combining Sparsity with Physically-Meaningful Constraints in Sparse Parameter Estimation UIUC CSL Mar. 24 Combining Sparsity with Physically-Meaningful Constraints in Sparse Parameter Estimation Yuejie Chi Department of ECE and BMI Ohio State University Joint work with Yuxin Chen (Stanford).

More information

Information and Resolution

Information and Resolution Information and Resolution (Invited Paper) Albert Fannjiang Department of Mathematics UC Davis, CA 95616-8633. fannjiang@math.ucdavis.edu. Abstract The issue of resolution with complex-field measurement

More information

MUSIC for multidimensional spectral estimation: stability and super-resolution

MUSIC for multidimensional spectral estimation: stability and super-resolution MUSIC for multidimensional spectral estimation: stability and super-resolution Wenjing Liao Abstract This paper presents a performance analysis of the MUltiple SIgnal Classification MUSIC algorithm applied

More information

Super-resolution via Convex Programming

Super-resolution via Convex Programming Super-resolution via Convex Programming Carlos Fernandez-Granda (Joint work with Emmanuel Candès) Structure and Randomness in System Identication and Learning, IPAM 1/17/2013 1/17/2013 1 / 44 Index 1 Motivation

More information

Towards a Mathematical Theory of Super-resolution
