SIAM J. MATRIX ANAL. APPL. Vol. 31, No. 5, pp. 882–898. © 2010 Society for Industrial and Applied Mathematics

IMPROVED BOUNDS ON RESTRICTED ISOMETRY CONSTANTS FOR GAUSSIAN MATRICES

BUBACARR BAH AND JARED TANNER

Abstract. The restricted isometry constant (RIC) of a matrix A measures how close to an isometry is the action of A on vectors with few nonzero entries, measured in the ℓ2 norm. Specifically, the upper and lower RICs of a matrix A of size n × N are the maximum and the minimum deviation from unity (one) of the largest and smallest, respectively, square of singular values of all (N choose k) matrices formed by taking k columns from A. Calculation of the RIC is intractable for most matrices due to its combinatorial nature; however, many random matrices typically have bounded RIC in some range of problem sizes (k, n, N). We provide the best known bound on the RIC for Gaussian matrices, which is also the smallest known bound on the RIC for any large rectangular matrix. Our results are built on the prior bounds of Blanchard, Cartis, and Tanner [SIAM Rev., to appear], with improvements achieved by grouping submatrices that share a substantial number of columns.

Key words. Wishart matrices, compressed sensing, sparse approximation, restricted isometry constant, phase transitions, Gaussian matrices, singular values of random matrices

AMS subject classifications. Primary, 15B52, 60F10, 94A20; Secondary, 94A12, 90C25

Received by the editors March 16, 2010; accepted for publication (in revised form) by A. J. Wathen August 19, 2010; published electronically November 30, 2010. Maxwell Institute and School of Mathematics, University of Edinburgh, Edinburgh, EH9 3JZ, UK (b.bah@sms.ed.ac.uk, jared.tanner@ed.ac.uk). The second author's work was supported in part by the Leverhulme Trust.

1. Introduction. Interest in parsimonious solutions to underdetermined systems of equations has seen a spike with the introduction of compressed sensing [10, 7, 6]. Much of the analysis in this new topic has relied upon a new matrix quantity, the restricted isometry constant (RIC), also referred to as the restricted isometry property (RIP) constant. Let A be a matrix of size n × N and define the set of N-vectors with at most k nonzero entries as

(1.1)  χ^N(k) := {x ∈ R^N : ‖x‖_0 ≤ k}.

The upper and lower RICs of A, U(k, n, N; A) and L(k, n, N; A), respectively, are defined as [5, 2]

(1.2)  U(k, n, N; A) := min_{c ≥ 0} c subject to (1 + c)‖x‖_2^2 ≥ ‖Ax‖_2^2 for all x ∈ χ^N(k),

(1.3)  L(k, n, N; A) := min_{c ≥ 0} c subject to (1 − c)‖x‖_2^2 ≤ ‖Ax‖_2^2 for all x ∈ χ^N(k).

RICs differ from standard singular values (squared) in their combinatorial nature. U(k, n, N; A) and L(k, n, N; A) measure the maximum and the minimum deviation from unity (one) of the largest and smallest, respectively, square of the singular values of all (N choose k) submatrices of A of size n × k constructed by taking k columns from A. The RICs can be equivalently defined as

(1.4)  U(k, n, N; A) := max_{K ⊂ Ω, |K| = k} λ^max(A_K^*A_K) − 1

and

(1.5)  L(k, n, N; A) := 1 − min_{K ⊂ Ω, |K| = k} λ^min(A_K^*A_K),

where Ω := {1, 2, ..., N}, A_K is the restriction of the columns of A to a support set K ⊂ Ω with cardinality |K| = k, and λ^max(B) and λ^min(B) are the largest and smallest eigenvalues of B, respectively. The standard notion of general position is L(n, n, N; A) < 1, and the Kruskal rank [19] is the largest k such that L(k, n, N; A) < 1.

Many of the theorems in compressed sensing rely upon a sensing matrix having suitable bounds on its RIC. Unfortunately, computing the RICs of a matrix A is in general NP-hard. Efforts are underway to design algorithms that compute accurate bounds on the RICs of a matrix [9, 17], but to date these algorithms have had limited success, with the bounds effective only for k on the order of n^{1/2}. Lacking the ability to efficiently calculate the RICs of a given matrix, the research community is actively computing probabilistic bounds for various random matrix ensembles. These efforts have followed three research programs:

- Determination of the largest ensemble of matrices such that as the problem sizes (k, n, N) grow, the RICs U(k, n, N; A) remain bounded and L(k, n, N; A) is bounded away from 1 [21].
- Computing bounds as accurate as possible for particular ensembles, such as the Gaussian ensemble [5, 2], where the entries of A are drawn independently and identically distributed (i.i.d.) from the standard Gaussian normal N(0, 1/n); in part as a model for i.i.d. mean zero ensembles.
- Computing bounds as accurate as possible for the partial Fourier ensemble [24], where A is formed from random rows, j, or samples, t_l, of a Fourier matrix with entries F_{j,l} = e^{2πi j t_l}; in part as a model for matrices possessing a fast matrix-vector product.

This manuscript focuses on the second of these research programs, accurate bounds for the Gaussian/Wishart ensemble. Candès and Tao derived the first set of RIC bounds for the Gaussian ensemble using a union bound over all (N choose k) submatrices and bounding the singular values of each submatrix using concentration of measure bounds [7]. Blanchard, Cartis, and Tanner derived the second set of RIC bounds for the Gaussian ensemble [2], similarly using a union bound over all submatrices, but achieved substantial improvements by using more accurate bounds on the probability density function of Wishart matrices. These bounds are presented here in Theorems 2.8 and 2.10, respectively. This paper presents yet further improved bounds for the Gaussian ensemble (see Theorem 2.3 and Figure 2.1) by grouping submatrices with overlapping support sets (say, A_K and A_{K'} with |K ∩ K'| ≫ 1) for which we expect the singular values to be highly correlated. These are the first RIC bounds that exploit this structure. In addition to asymptotic bounds for large problem sizes, we present bounds valid for finite values of (k, n, N).

This paper is organized as follows: Our improved asymptotic bounds are stated in section 2.1, and their derivation is described in section 2.2. Prior bounds are presented in section 2.3 and are compared with those in Theorem 2.3. Bounds valid for finite values are presented in section 2.4, and the implications of the RIC bounds for compressed sensing are given in section 2.5. Proofs of technical lemmas used or assumed in our discussion come in the appendix.

2. RIC bounds. We focus our attention on bounding the RIC for the Gaussian ensemble in the setting of proportional-growth asymptotics.

Definition 2.1 (proportional-growth asymptotics). A sequence of problem sizes (k, n, N) is said to follow proportional-growth asymptotics if

(2.1)  k/n = ρ_n → ρ and n/N = δ_n → δ for (δ, ρ) ∈ (0, 1)^2 as (k, n, N) → ∞.

In this asymptotic we provide quantitative values which it is exponentially unlikely that the RICs will exceed. In section 2.4 we show how our derivation of these bounds can also supply probabilities for specified bounds and finite values of (k, n, N).

2.1. Improved RIP bounds. The probability density functions (pdf) of the RICs for the Gaussian ensemble are currently unknown, but asymptotic probabilistic bounds have been proven. Our bounds, and earlier ones, for the RIC of the Gaussian ensemble are built upon the bounds of the pdf's of the extreme eigenvalues of Gaussian (Wishart) matrices due to Edelman [13, 12]. All earlier bounds on the RIC have been derived using union bounds that consider each of the (N choose k) submatrices of size n × k individually [2, 7]. We consider groups of submatrices where the columns of the submatrices in a group are drawn from at most m distinct columns of A. We present our improved bounds in Theorem 2.3, preceded by Definition 2.2, which defines the terms used in the theorem. Plots of these bounds are displayed in Figure 2.1.

Definition 2.2. Let δ, ρ ∈ (0, 1), γ ∈ [ρ, δ^{-1}], and denote the Shannon entropy with base e logarithms as H(p) := p ln(1/p) + (1 − p) ln(1/(1 − p)). Let

(2.2)  ψ_min(λ, γ) := H(γ) + (1/2)[(1 − γ) ln λ + γ ln γ + 1 − γ − λ],

(2.3)  ψ_max(λ, γ) := (1/2)[(1 + γ) ln λ − γ ln γ + 1 + γ − λ].

Define λ^min(δ, ρ; γ) and λ^max(δ, ρ; γ) as the solutions to (2.4) and (2.5), respectively:

(2.4)  δ ψ_min(λ^min(δ, ρ; γ), γ) + H(ρδ) − δγ H(ρ/γ) = 0 for λ^min(δ, ρ; γ) ≤ 1 − γ,

(2.5)  δ ψ_max(λ^max(δ, ρ; γ), γ) + H(ρδ) − δγ H(ρ/γ) = 0 for λ^max(δ, ρ; γ) ≥ 1 + γ.

Let λ^min(δ, ρ) := max_γ λ^min(δ, ρ; γ) and λ^max(δ, ρ) := min_γ λ^max(δ, ρ; γ), and define

(2.6)  L^BT(δ, ρ) := 1 − λ^min(δ, ρ) and U^BT(δ, ρ) := λ^max(δ, ρ) − 1.

That for each (δ, ρ; γ), (2.4) and (2.5) have a unique solution λ^min(δ, ρ; γ) and λ^max(δ, ρ; γ), respectively, was proven in [2]. That λ^min(δ, ρ; γ) and λ^max(δ, ρ; γ) have unique maxima and minima, respectively, over γ ∈ [ρ, δ^{-1}] is established in Lemma 2.6.

Theorem 2.3. Let A be a matrix of size n × N whose entries are drawn i.i.d. from N(0, 1/n). Let δ and ρ be defined as in (2.1), and let L^BT(δ, ρ) and U^BT(δ, ρ) be defined as in Definition 2.2. For any fixed ɛ > 0, in the proportional-growth asymptotic,

P(L(k, n, N) < L^BT(δ, ρ) + ɛ) → 1 and P(U(k, n, N) < U^BT(δ, ρ) + ɛ) → 1

exponentially in n. In the spirit of reproducible research, software and web forms that evaluate L^BT(δ, ρ) and U^BT(δ, ρ) are publicly available at [14].
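The quantities in Definition 2.2 are straightforward to evaluate numerically: for each γ, equations (2.4) and (2.5) are one-dimensional root-finding problems, and the optimization over γ ∈ [ρ, δ^{-1}] can be carried out on a grid. The following Python sketch illustrates one such evaluation; it is not the released software of [14], and the function names, the bracketing strategy, and the grid over γ are our own choices under the reconstructed forms of ψ_min and ψ_max above.

```python
# Minimal sketch for evaluating U^BT and L^BT from Definition 2.2 (not the software of [14]).
import numpy as np
from scipy.optimize import brentq

def H(p):
    """Shannon entropy with base-e logarithms, H(p) = p ln(1/p) + (1-p) ln(1/(1-p))."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log(p) - (1.0 - p) * np.log(1.0 - p)

def psi_max(lam, gamma):
    # (2.3): psi_max(lambda, gamma) = (1/2)[(1+gamma) ln(lambda) - gamma ln(gamma) + 1 + gamma - lambda]
    return 0.5 * ((1 + gamma) * np.log(lam) - gamma * np.log(gamma) + 1 + gamma - lam)

def psi_min(lam, gamma):
    # (2.2): psi_min(lambda, gamma) = H(gamma) + (1/2)[(1-gamma) ln(lambda) + gamma ln(gamma) + 1 - gamma - lambda]
    return H(gamma) + 0.5 * ((1 - gamma) * np.log(lam) + gamma * np.log(gamma) + 1 - gamma - lam)

def lambda_max_gamma(delta, rho, gamma):
    """Solve (2.5): delta*psi_max(lam, gamma) + H(rho*delta) - delta*gamma*H(rho/gamma) = 0 with lam >= 1 + gamma."""
    c = H(rho * delta) - delta * gamma * H(rho / gamma)
    f = lambda lam: delta * psi_max(lam, gamma) + c
    lo, hi = 1.0 + gamma, 2.0 + gamma
    while f(hi) > 0:              # expand the bracket until the sign changes
        hi *= 2.0
    return brentq(f, lo, hi)

def lambda_min_gamma(delta, rho, gamma):
    """Solve (2.4): delta*psi_min(lam, gamma) + H(rho*delta) - delta*gamma*H(rho/gamma) = 0 with lam <= 1 - gamma."""
    c = H(rho * delta) - delta * gamma * H(rho / gamma)
    f = lambda lam: delta * psi_min(lam, gamma) + c
    lo = 1e-300                   # ln(lo) is still finite in double precision
    if f(lo) >= 0:                # root is numerically indistinguishable from zero
        return 0.0
    return brentq(f, lo, 1.0 - gamma)

def U_BT(delta, rho, ngrid=400):
    """U^BT(delta, rho) = min over gamma in (rho, 1/delta) of lambda^max(delta, rho; gamma) - 1."""
    gammas = np.linspace(rho, 1.0 / delta, ngrid + 2)[1:-1]
    return min(lambda_max_gamma(delta, rho, g) for g in gammas) - 1.0

def L_BT(delta, rho, ngrid=400):
    """L^BT(delta, rho) = 1 - max over gamma of lambda^min(delta, rho; gamma); (2.4) requires gamma < 1."""
    gmax = min(1.0 / delta, 1.0)
    gammas = np.linspace(rho, gmax, ngrid + 2)[1:-1]
    return 1.0 - max(lambda_min_gamma(delta, rho, g) for g in gammas)

if __name__ == "__main__":
    print("U_BT(0.5, 0.1) ~", U_BT(0.5, 0.1))
    print("L_BT(0.5, 0.1) ~", L_BT(0.5, 0.1))
```

For the lower bound the constraint λ^min(δ, ρ; γ) ≤ 1 − γ restricts the useful range of γ to γ < 1, which the sketch enforces by truncating the grid.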

Fig. 2.1. U^BT(δ, ρ) (left panel) and L^BT(δ, ρ) (right panel) from Definition 2.2 for (δ, ρ) ∈ (0, 1)^2.

Fig. 2.2. Empirically observed lower bounds on the RICs for A Gaussian: observed lower bounds of U(k, n, N; A) (left panel) and L(k, n, N; A) (right panel). Although there is no computationally tractable method for calculating the RICs of a matrix, there are efficient algorithms that perform local searches for extremal eigenvalues of submatrices, allowing for observable lower bounds on the RICs. Algorithms for observing L(k, n, N) [11] and U(k, n, N) [16] were applied to hundreds of matrices A drawn i.i.d. N(0, 1/n) with n = 400 and N increasing from 420 to 8000.

Sharpness of the bounds can be probed by comparison with empirically observed lower bounds on the RICs for finite-dimensional draws from the Gaussian ensemble. There exist efficient algorithms for calculating lower bounds of RICs [11, 16]; these algorithms perform local searches for submatrices with extremal eigenvalues. The new bounds in Theorem 2.3 (see Figure 2.1) can be compared with the empirical data displayed in Figure 2.2. To further demonstrate the sharpness of our bounds, we compute the maximum and minimum sharpness ratios of the bounds in Theorem 2.3 to empirically observed lower bounds; for each ρ, the maximum and minimum of the ratio are taken over all δ ∈ [0.05, 0.954], the same δ values used in Figure 2.2. These ratios are shown in the left panel of Figure 2.3; the bounds in Theorem 2.3 are within a factor of 1.57 of the empirically observed lower bounds on L(k, n, N) and U(k, n, N) observed with n = 400.

Fig. 2.3. Left panel: the maximum and minimum over δ of the sharpness ratios U^BT(δ, ρ)/U(k, n, N; A) and L^BT(δ, ρ)/L(k, n, N; A), as functions of ρ, with the maximum and minimum taken over all δ ∈ [0.05, 0.954], the same δ values used in Figure 2.2. Right panel: the maximum and minimum over δ of the improvement ratios over the previous best known bounds, U^BCT(δ, ρ)/U^BT(δ, ρ) and L^BCT(δ, ρ)/L^BT(δ, ρ), as functions of ρ, with the maximum and minimum also taken over δ ∈ [0.05, 0.954].
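The local-search algorithms of [11] and [16] used for Figure 2.2 are beyond the scope of a short illustration, but the principle of observable lower bounds is simple: every support set K of cardinality k yields valid lower bounds on the RICs through the extreme eigenvalues of A_K^*A_K. The following Python fragment uses random supports in place of the cited local searches and is therefore much weaker; it is illustrative only, and all names are ours.

```python
# Crude Monte Carlo sketch of observable lower bounds on the RICs: each sampled support K
# gives valid lower bounds U >= lambda_max(A_K^T A_K) - 1 and L >= 1 - lambda_min(A_K^T A_K).
# The algorithms of [11] and [16] replace random sampling with local searches over supports
# and produce far tighter observed bounds; this is only illustrative.
import numpy as np

def empirical_ric_lower_bounds(A, k, trials=2000, rng=None):
    rng = np.random.default_rng(rng)
    n, N = A.shape
    U_lo, L_lo = 0.0, 0.0
    for _ in range(trials):
        K = rng.choice(N, size=k, replace=False)      # random support set of size k
        w = np.linalg.eigvalsh(A[:, K].T @ A[:, K])   # eigenvalues of the k x k Gram matrix
        U_lo = max(U_lo, w[-1] - 1.0)
        L_lo = max(L_lo, 1.0 - w[0])
    return U_lo, L_lo

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, N, k = 400, 2000, 20
    A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, N))  # i.i.d. N(0, 1/n) entries
    print(empirical_ric_lower_bounds(A, k, trials=500, rng=rng))
```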

2.2. Discussion on the construction of improved RIC bounds. The bounds in Theorem 2.3 improve upon the earlier results of [2] by grouping matrices A_K and A_{K'} that share a significant number of columns from A. This is manifest in Definition 2.2 through the introduction of the free parameter γ associated with the number of groups considered. In this section we first discuss the way in which we construct these groups and the sense in which the bounds in Theorem 2.3 are optimal for this construction. Equipped with a suitable construction of groups, we discuss the way in which this grouping is employed to improve the RIC bounds from [2].

2.2.1. Construction of groups. We construct our groups of A_K by selecting a subset M_i from {1, 2, ..., N} of cardinality |M_i| = m and setting G_i := {K : K ⊂ M_i, |K| = k}, the set of all sets K ⊂ M_i of cardinality k. The group G_i has (m choose k) members, with any two members sharing at least 2k − m elements. Hence, the quantity γ = m/n in Definition 2.2 is associated with the cardinality of the groups G_i. In order to calculate bounds on the RIC of a matrix, we need a collection of groups whose union includes all sets of cardinality k from Ω := {1, 2, ..., N}; that is, we need {G_i}_{i=1}^u such that G := ∪_{i=1}^u G_i with |G| = (N choose k). From simple counting, the minimum number of groups G_i needed for this covering is at least r := (N choose k)(m choose k)^{-1}. Although the construction of a minimal covering is an open question [18], even a simple random construction of the G_i's typically requires only a polynomial multiple of r groups, hence achieving the optimal large deviation rate.

Lemma 2.4 (see [18]). Set r = (N choose k)(m choose k)^{-1}, and draw u := Nr sets M_i, each of cardinality m, drawn uniformly at random from the (N choose m) possible sets of cardinality m. With G defined as above,

(2.7)  P(|G| < (N choose k)) < C(k/N) N^{-1/2} e^{-N(1 − ln 2)},

where C(p) := (5/4)[2π p(1 − p)]^{-1/2}.

Proof. Select one set K ⊂ Ω of cardinality |K| = k prior to the draw of the sets M_i. The probability that it is contained in one set M_i is 1/r, and, with each M_i drawn independently, the probability that it is not contained in any of the u sets M_i

is (1 − r^{-1})^u ≤ e^{-u/r}. Applying a union bound over all (N choose k) sets K yields P(|G| < (N choose k)) < (N choose k) e^{-u/r}. Noting from Stirling's inequality that

[2πNp(1 − p)]^{-1/2} e^{NH(p)} ≤ (N choose pN) ≤ (5/4)[2πNp(1 − p)]^{-1/2} e^{NH(p)},

with H(p) ≤ ln 2 for p ∈ (0, 1), and substituting the selected value of u completes the proof.

Note that an exponentially small probability can be obtained with u just larger than rNH(δρ), but the smaller polynomial factor is negligible for our purposes.

Corollary 2.5. Given Lemma 2.4, as (k, n, N) → ∞ in the proportional-growth asymptotics, the probability that all the k-subsets of {1, 2, ..., N} are covered by G converges to one exponentially in n.

2.2.2. Decreasing the combinatorial term. We illustrate the way the groups G_i are used to improve the upper RIC bound U(k, n, N; A); the bounds for L(k, n, N; A) follow by a suitable replacement of maximizations/minimizations and sign changes. All previous bounds on the RIC for the Gaussian ensemble have overcome the combinatorial maximization/minimization by use of a union bound over all (N choose k) sets K ⊂ Ω and then using a tail bound on the pdf of the extreme eigenvalues of A_K^*A_K; for some λ^max > 0,

(2.8)  P( max_{K ⊂ Ω, |K| = k} λ^max(A_K^*A_K) > λ^max ) ≤ (N choose k) P( λ^max(A_K^*A_K) > λ^max ).

The fact that the random variables λ^max(A_K^*A_K) are treated as independent is the principal deficiency of this bound. To exploit dependencies of this variable for K and K' with significant overlap, we exploit the groupings G_i, which, at least for m moderately larger than k, contain sets with significant overlap. For the moment we assume the groups {G_i}_{i=1}^u cover all K ⊂ Ω, and we replace the above maximization over K with a double maximization

P( max_{K ⊂ Ω, |K| = k} λ^max(A_K^*A_K) > λ^max ) = P( max_{i=1,...,u} max_{K ∈ G_i, |K| = k} λ^max(A_K^*A_K) > λ^max ).

The outer maximization can be bounded over all u sets G_i, again, using a simple union bound, however, with a smaller combinatorial term. The expected dependencies between λ^max(A_K^*A_K) for K ∈ G_i can, at times, be better controlled by replacing the maximization over K ∈ G_i by λ^max(A_{M_i}^*A_{M_i}), where M_i is the subset of cardinality m containing all K ∈ G_i:

(2.9)  P( max_{i=1,...,u} max_{K ∈ G_i, |K| = k} λ^max(A_K^*A_K) > λ^max ) ≤ u P( λ^max(A_M^*A_M) > λ^max ).

Selecting m = k recovers the usual union bound with u equal to N(N choose k). Larger values of m decrease the combinatorial term at the cost of increasing λ^max(A_M^*A_M). The efficacy of this approach depends on the interplay between these two competing factors. In the proportional-growth asymptotic, this interplay is observed through the optimization over m/n = γ ∈ [ρ, δ^{-1}]. Definition 2.2 uses the tail bounds on the extreme eigenvalues of Wishart matrices derived by Edelman [12] to bound P(λ^max(A_M^*A_M) > λ^max).
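The competition in (2.9) between the smaller combinatorial term and the larger Wishart matrices can be made concrete numerically. The short Python sketch below compares, on a logarithmic scale, the (N choose k) terms of the standard union bound with the u = N(N choose k)(m choose k)^{-1} groups of Lemma 2.4 for a few values of m; the problem sizes used are arbitrary illustrative choices of ours.

```python
# Sketch of the combinatorial trade-off behind (2.9): grouping supports into supersets of m
# columns reduces the log of the number of union-bound events from ln(N choose k) to
# ln(u) = ln(N) + ln(N choose k) - ln(m choose k), at the price of bounding the extreme
# eigenvalues of the larger m x m Wishart matrices A_M^* A_M.
from math import lgamma, log

def log_binom(N, k):
    """ln(N choose k) via log-gamma."""
    return lgamma(N + 1) - lgamma(k + 1) - lgamma(N - k + 1)

def log_terms(N, k, m):
    """Log of the number of events: all k-subsets vs. the u = N*(N choose k)/(m choose k) groups."""
    single = log_binom(N, k)                      # union over all (N choose k) supports
    grouped = log(N) + single - log_binom(m, k)   # union over the u groups of Lemma 2.4
    return single, grouped

if __name__ == "__main__":
    N, n, k = 8000, 400, 40
    for m in (k, 2 * k, 4 * k, 8 * k):            # m = k recovers the usual union bound (up to the ln N factor)
        s, g = log_terms(N, k, m)
        print(f"m = {m:4d}: ln(#terms) single union = {s:9.1f}, grouped = {g:9.1f}")
```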

Fig. 2.4. Left panel: the relationship between the new bound U^BT(δ, ρ), Theorem 2.3, and the previous smallest bound U^BCT(δ, ρ), Theorem 2.10, where in the two bounds λ^max(δ, ρ; γ) is evaluated at γ = γ_min and γ = ρ, respectively. Right panel: the relationship between the new bound L^BT(δ, ρ), Theorem 2.3, and the previous smallest bound L^BCT(δ, ρ), Theorem 2.10, where in the two bounds λ^min(δ, ρ; γ) is evaluated at γ = γ_max and γ = ρ, respectively.

Fig. 2.5. Optimal choice of γ ≥ ρ for U^BT(δ, ρ) (left panel) and L^BT(δ, ρ) (right panel).

The previously best known bound on the RIC for the Gaussian ensemble is recovered by selecting γ = ρ in Definition 2.2. This is illustrated in Figure 2.4. The innovation of the bounds in Theorem 2.3 follows from there always being a unique γ > ρ such that λ^max(δ, ρ; γ) is less than λ^max(δ, ρ; ρ).

Lemma 2.6. Suppose λ^min(δ, ρ; γ) and λ^max(δ, ρ; γ) are solutions to (2.4) and (2.5), respectively. For any fixed (δ, ρ) there exist a unique γ_min ∈ [ρ, δ^{-1}] that minimizes λ^max(δ, ρ; γ) and a unique γ_max ∈ [ρ, δ^{-1}] that maximizes λ^min(δ, ρ; γ). Furthermore, γ_min and γ_max are strictly larger than ρ.

The optimal choices of γ ≥ ρ for U^BT(δ, ρ) and L^BT(δ, ρ) in (δ, ρ) ∈ (0, 1)^2 are displayed in Figure 2.5. The proof of Lemma 2.6 is presented in Appendix A.1.

2.3. Prior RIP bounds. There have been two previous quantitative bounds for the RIC of the Gaussian ensemble in the proportional-growth asymptotics. The first bounds on the RIC of the Gaussian ensemble were supplied in [7] by Candès and Tao using union bounds and concentration of measure bounds on the extreme eigenvalues of Wishart matrices from [20]. These bounds are stated in Theorem 2.8, with Definition 2.7 defining some of the terms used in the theorem, and plots of these bounds are displayed in Figure 2.6.

Fig. 2.6. U^CT(δ, ρ) (left panel) and L^CT(δ, ρ) (right panel) from Definition 2.7 for (δ, ρ) ∈ (0, 1)^2.

Definition 2.7. Let δ, ρ ∈ (0, 1), and define

U^CT(δ, ρ) := [1 + √ρ + (2δ^{-1} H(δρ))^{1/2}]^2 − 1

and

L^CT(δ, ρ) := 1 − max{0, 1 − √ρ − (2δ^{-1} H(δρ))^{1/2}}^2.

Theorem 2.8 (Candès and Tao [7]). Let A be a matrix of size n × N whose entries are drawn i.i.d. from N(0, 1/n). Let δ and ρ be defined as in (2.1), and let L^CT(δ, ρ) and U^CT(δ, ρ) be defined as in Definition 2.7. For any fixed ɛ > 0, in the proportional-growth asymptotic, P(L(k, n, N) < L^CT(δ, ρ) + ɛ) → 1 and P(U(k, n, N) < U^CT(δ, ρ) + ɛ) → 1 exponentially in n.

The bounds in Theorem 2.3 follow the construction of the second bounds on the RIC for the Gaussian ensemble, presented in [2]. Removing the optimization of γ over [ρ, δ^{-1}] in Definition 2.2 and fixing γ = ρ recovers the bounds on L(k, n, N; A) and the first of two bounds on U(k, n, N; A) presented in [2]. The first bound on U(k, n, N; A) in [2] suffers from excessive overestimation when δρ ≈ 1/2 because of the combinatorial term. In fact, this overestimation is so severe that for some (δ, ρ) with δρ ≈ 1/2, smaller bounds are obtained at (δ, 1). This overestimation is somewhat ameliorated by noting the monotonicity of U(k, n, N; A) in k, obtaining the improved bound; see (2.12). These bounds are stated in Theorem 2.10, with Definition 2.9 defining some of the terms used in the theorem, and plots of these bounds are displayed in Figure 2.7.

Definition 2.9. Let δ, ρ ∈ (0, 1), and denote the Shannon entropy with base e logarithms as H(p) := p ln(1/p) + (1 − p) ln(1/(1 − p)). Let ψ_min(λ, ρ) and ψ_max(λ, ρ) be defined as in (2.2) and (2.3), respectively. Define λ^min_BCT(δ, ρ) and λ^max_BCT(δ, ρ) as the solutions to (2.10) and (2.11), respectively:

(2.10)  δ ψ_min(λ^min_BCT(δ, ρ), ρ) + H(ρδ) = 0 for λ^min_BCT(δ, ρ) ≤ 1 − ρ,

(2.11)  δ ψ_max(λ^max_BCT(δ, ρ), ρ) + H(ρδ) = 0 for λ^max_BCT(δ, ρ) ≥ 1 + ρ.

Fig. 2.7. U^BCT(δ, ρ) (left panel) and L^BCT(δ, ρ) (right panel) from Definition 2.9 for (δ, ρ) ∈ (0, 1)^2.

Define

(2.12)  L^BCT(δ, ρ) := 1 − λ^min_BCT(δ, ρ) and U^BCT(δ, ρ) := min_{ν ∈ [ρ, 1]} λ^max_BCT(δ, ν) − 1.

Theorem 2.10 (Blanchard, Cartis, and Tanner [2]). Let A be a matrix of size n × N whose entries are drawn i.i.d. from N(0, 1/n). Let δ and ρ be defined as in (2.1), and let L^BCT(δ, ρ) and U^BCT(δ, ρ) be defined as in Definition 2.9. For any fixed ɛ > 0, in the proportional-growth asymptotic, P(L(k, n, N; A) < L^BCT(δ, ρ) + ɛ) → 1 and P(U(k, n, N; A) < U^BCT(δ, ρ) + ɛ) → 1 exponentially in n.

Figures 2.6 and 2.7 show that the bounds in Theorem 2.10 are a substantial improvement on those in Theorem 2.8. The bounds presented here in Definition 2.2 and Theorem 2.3 are a further improvement over those in [2], as implied by Lemma 2.6.

Corollary 2.11. Let L^BT(δ, ρ) and U^BT(δ, ρ) be defined as in Definition 2.2, and let L^BCT(δ, ρ) and U^BCT(δ, ρ) be defined as in Definition 2.9. For any fixed (δ, ρ) ∈ (0, 1)^2, L^BT(δ, ρ) < L^BCT(δ, ρ) and U^BT(δ, ρ) < U^BCT(δ, ρ).

The right panel of Figure 2.3 shows the ratio of the previously best known bounds, Theorem 2.10, to the new bounds, Theorem 2.3; for each ρ, the ratio is maximized over δ ∈ [0.05, 0.954].

2.4. Finite interpretations. The method of proof we have used to obtain the proportional-growth asymptotic bounds in Theorem 2.3 also provides, albeit less elegantly, bounds valid for finite values of (k, n, N) and specified probabilities of the bound being satisfied. For a specified problem instance (k, n, N) and ɛ, bounds on the probabilities P(U(k, n, N) > U^BT(δ_n, ρ_n) + ɛ) and P(L(k, n, N) > L^BT(δ_n, ρ_n) + ɛ) for discrete values of δ and ρ are given in Propositions 2.12 and 2.13, respectively.

Proposition 2.12. Let A be a matrix of size n × N whose entries are drawn i.i.d. from N(0, 1/n). Define U^BT(δ_n, ρ_n) as in Definition 2.2 for discrete values of δ

and ρ. Then for any ɛ > 0,

(2.13)  P(U(k, n, N) > U^BT(δ_n, ρ_n) + ɛ) ≤ p'_max(n, λ^max(δ_n, ρ_n) + ɛ) exp( nɛ (d/dλ) ψ_U(λ^max(δ_n, ρ_n)) ) + (5/4)[2πNρ_nδ_n(1 − ρ_nδ_n)]^{-1/2} exp(−N(1 − ln 2)),

where

(2.14)  p'_max(n, λ) := (8/π)^{1/2} n^{7/2} γ λ^3 [ (5/4) n(γ − ρ) / (γδ(1 − ρδ)) ]^{1/2}

and

ψ_U(λ, γ) := δ^{-1}[H(ρδ) − δγ H(ρ/γ)] + ψ_max(λ, γ)

for ψ_max(λ, γ) defined in (2.3).

Proposition 2.13. Let A be a matrix of size n × N whose entries are drawn i.i.d. from N(0, 1/n). Define L^BT(δ_n, ρ_n) as in Definition 2.2 for discrete values of δ and ρ. Then for any ɛ > 0,

(2.15)  P(L(k, n, N) > L^BT(δ_n, ρ_n) + ɛ) ≤ p'_min(n, λ^min(δ_n, ρ_n) + ɛ) exp( −nɛ (d/dλ) ψ_L(λ^min(δ_n, ρ_n)) ) + (5/4)[2πNρ_nδ_n(1 − ρ_nδ_n)]^{-1/2} exp(−N(1 − ln 2)),

where

(2.16)  p'_min(n, λ) := (eλ/π) [ n(γ − ρ) / (γδ(1 − ρδ)) ]^{1/2}

and

ψ_L(λ, γ) := δ^{-1}[H(ρδ) − δγ H(ρ/γ)] + ψ_min(λ, γ)

for ψ_min(λ, γ) defined in (2.2).

The proofs of Propositions 2.12 and 2.13 are presented in Appendix A.2 and also serve as the proof of Theorem 2.3, which follows by taking the appropriate limits. From Propositions 2.12 and 2.13 we calculated bounds for a few example values of (k, n, N) and ɛ. Table 2.1 shows bounds on P(U(k, n, N) > U^BT(δ_n, ρ_n) + ɛ) for a few values of (k, n, N) with two different choices of ɛ. It is remarkable that these probabilities are already close to zero for these small values of (k, n, N) and even for ɛ ≪ 1. Table 2.2 shows bounds on P(L(k, n, N) > L^BT(δ_n, ρ_n) + ɛ) for the same values of (k, n, N) as in Table 2.1, but with even smaller values for ɛ. Again, it is remarkable that these probabilities are extremely small, even for relatively small values of (k, n, N) and ɛ.

Table 2.1. Prob is an upper bound on P(U(k, n, N) > U^BT(δ_n, ρ_n) + ɛ) for the specified (k, n, N) and ɛ.

Table 2.2. Prob is an upper bound on P(L(k, n, N) > L^BT(δ_n, ρ_n) + ɛ) for the specified (k, n, N) and ɛ.

2.5. Implications for sparse approximation and compressed sensing. The RIC was introduced by Candès and Tao [7] as a technique to prove that under certain conditions the sparsest solution of an underdetermined system of equations Ax = b (A of size n × N with n < N) can be found using linear programming. The RIC is now a widely used technique in the study of sparse approximation algorithms, allowing their analysis without specifying the measurement matrix A. For instance, in [5] it was proven that if max(L(2k, n, N; A), U(2k, n, N; A)) < √2 − 1, then, if Ax = b has a unique k-sparse solution as its sparsest solution, arg min ‖z‖_1 subject to Az = b will be this k-sparse solution. A host of other RIC-based conditions have been derived for this and other sparsifying algorithms. However, the values of (k, n, N) when these conditions on the RIC are satisfied can be determined only once the measurement matrix A has been specified [1]. The RIC bounds for the Gaussian ensemble discussed here allow one to state values of (k, n, N) when sparse approximation recovery conditions are satisfied, and, from these, guarantee the recovery of k-sparse vectors from (A, b). Unfortunately, all existing sparse approximation bounds on the RICs are sufficiently small that they are satisfied only for ρ ≪ 1. Although the bounds presented here are a strict improvement over the previously best known bounds, and for some (δ, ρ) achieve as much as a 20% decrease (see Figure 2.3), the improvements for ρ ≪ 1 are meager, approximately 0.5–1%. This limited improvement for compressed sensing algorithms is in large part due to the previous bounds being within 30% of empirically observed lower bounds on the RIC for n = 400 when ρ < 10^{-2}. In [3], using RIC bounds from [2], lower bounds on the phase transitions for exact recovery of k-sparse signals for three greedy algorithms and ℓ1-minimization were presented. These curves are functions ρ_S^{sp}(δ) for subspace pursuit (SP) [8], ρ_S^{csp}(δ) for compressive sampling matching pursuit (CoSaMP) [23], ρ_S^{iht}(δ) for iterative hard thresholding (IHT) [4], and ρ_S^{l1}(δ) for ℓ1-minimization [15]. Figure 1 in [3] shows a plot of these phase transition curves, which are raised by approximately 0.5–1% when the bounds from [2] are replaced with the improved bounds from Theorem 2.3.
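To illustrate how such phase-transition lower bounds are obtained (this is not the computation carried out in [3]), one can combine the RIC condition of [5] with the bounds of Theorem 2.3: for a given δ, the largest ρ with max(L^BT(δ, 2ρ), U^BT(δ, 2ρ)) < √2 − 1 is a value below which ℓ1-minimization is guaranteed to recover every k-sparse vector. The Python sketch below assumes the functions U_BT and L_BT from the sketch in section 2.1 are in scope and that these bounds are nondecreasing in ρ; the bisection tolerance is our own choice.

```python
# Sketch of a lower bound on the l1 phase transition implied by the RIC condition of [5],
# max(L(2k,n,N;A), U(2k,n,N;A)) < sqrt(2) - 1, evaluated with the Gaussian RIC bounds.
# Assumes U_BT(delta, rho) and L_BT(delta, rho) from the earlier sketch are already defined.
import math

def l1_phase_transition_lower_bound(delta, tol=1e-6):
    target = math.sqrt(2.0) - 1.0
    def worst_ric(rho):                      # RIC of order 2k corresponds to 2*rho at this delta
        return max(U_BT(delta, 2.0 * rho), L_BT(delta, 2.0 * rho))
    lo, hi = 1e-8, 0.5 - 1e-8                # need 2*rho < 1 for the order-2k constants
    if worst_ric(lo) >= target:
        return 0.0                           # condition fails even for vanishing sparsity
    while hi - lo > tol:                     # bisection on rho (assumes monotonicity in rho)
        mid = 0.5 * (lo + hi)
        if worst_ric(mid) < target:
            lo = mid
        else:
            hi = mid
    return lo

if __name__ == "__main__":
    print("rho lower bound at delta = 0.5:", l1_phase_transition_lower_bound(0.5))
```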

A. Appendix. Here we present the proofs of the key theorems and lemmas stated in this paper. For theorems and lemmas stated or used without proof, see the appendix of [2].

A.1. Proof of Lemma 2.6. We start by showing that λ^max(δ, ρ; γ) has a unique minimum for each fixed (δ, ρ) and γ ∈ [ρ, δ^{-1}]. Equation (2.5) gives the implicit relation between λ^max and γ as

δ ψ_max(λ^max, γ) + H(ρδ) − δγ H(ρ/γ) = 0 for λ^max ≥ 1 + γ,

where ψ_max(λ^max, γ) = (1/2)[(1 + γ) ln λ^max − γ ln γ + 1 + γ − λ^max]. Therefore,

dλ^max/dγ = [λ^max / (λ^max − 1 − γ)] ln[ λ^max (γ − ρ)^2 / γ^3 ]

is equal to zero when

(A.1)  λ^max (γ − ρ)^2 = γ^3.

Let γ_min satisfy (A.1). Since λ^max − (1 + γ) > 0, dλ^max/dγ is negative for γ ∈ (ρ, γ_min), is zero at γ_min, and is positive for γ ∈ (γ_min, δ^{-1}); hence equation (2.5) has a unique minimum over γ ∈ [ρ, δ^{-1}], and the γ that attains the minimum is strictly greater than ρ.

Similarly, we show that λ^min(δ, ρ; γ) has a unique maximum for each fixed (δ, ρ) and γ ∈ [ρ, δ^{-1}]. Equation (2.4) gives the implicit relation between λ^min and γ as

δ ψ_min(λ^min, γ) + H(ρδ) − δγ H(ρ/γ) = 0 for λ^min ≤ 1 − γ,

where ψ_min(λ^min, γ) := H(γ) + (1/2)[(1 − γ) ln λ^min + γ ln γ + 1 − γ − λ^min]. Therefore,

dλ^min/dγ = [λ^min / (λ^min − (1 − γ))] ln[ (1 − γ)^2 (γ − ρ)^2 / (γ^3 λ^min) ]

is equal to zero when

(A.2)  γ^3 λ^min = (1 − γ)^2 (γ − ρ)^2.

Let γ_max satisfy (A.2). Since 0 < λ^min ≤ 1 − γ, dλ^min/dγ is positive for γ ∈ (ρ, γ_max), is zero at γ_max, and is negative for γ ∈ (γ_max, δ^{-1}); hence equation (2.4) has a unique maximum over γ ∈ [ρ, δ^{-1}], and the γ that attains the maximum is strictly greater than ρ.

A.2. Proof of main results, Theorem 2.3 and Propositions 2.12 and 2.13. Here we give a proof similar to that in [2], but we take great care with the nonexponential terms necessary for the calculation of bounds on probabilities for finite values of (k, n, N) in section 2.4. We present the proof for U^BT(δ, ρ) in detail and sketch the proof of L^BT(δ, ρ), which follows similarly. The following lemma regarding the bound on the probability distribution function of the maximum eigenvalue of a Wishart matrix, due to Edelman [13, 12], is central to our proof.

Lemma A.1 (see [12], presented in this form in [2]). Let A_M be a matrix of size n × m whose entries are drawn i.i.d. from N(0, 1/n). Let f_max(m, n; λ) denote the distribution function for the largest eigenvalue of the derived Wishart matrix A_M^*A_M of size m × m. Then f_max(m, n; λ) satisfies

(A.3)  f_max(m, n; λ) ≤ π^{-1/2} (nλ)^{-3/2} (nλ/2)^{(n+m)/2} [Γ(m/2) Γ(n/2)]^{-1} e^{-nλ/2} =: g_max(m, n; λ).

It is helpful at this stage to rewrite Lemma A.1, separating the exponential and polynomial parts (with respect to n) of g_max(m, n; λ) as follows.

Lemma A.2. Let γ_n = m/n ∈ [ρ_n, δ_n^{-1}], and define

(A.4)  ψ_max(λ, γ) := (1/2)[(1 + γ) ln λ − γ ln γ + 1 + γ − λ].

Then

(A.5)  f_max(m, n; λ) ≤ g_max(m, n; λ) ≤ p_max(n, λ; γ) exp( n ψ_max(λ, γ) ),

where p_max(n, λ; γ) is a polynomial in n, λ, and γ, given by

(A.6)  p_max(n, λ; γ) := (8/π)^{1/2} γ^{-1} n^{7/2} λ^{3/2}.

Proof. Let γ_n = m/n and write

(1/n) ln g_max(m, n; λ) = Φ_1(m, n; λ) + Φ_2(m, n; λ) + Φ_3(m, n; λ),

where

Φ_1(m, n; λ) = −(1/(2n)) ln π − (3/(2n)) ln(nλ),
Φ_2(m, n; λ) = ((1 + γ_n)/2) ln(nλ/2) − λ/2,
Φ_3(m, n; λ) = −(1/n) ln[ Γ(m/2) Γ(n/2) ].

We simplify Φ_3(m, n; λ) by using the second Binet's log gamma formula [25],

(A.7)  ln Γ(z) ≥ (z − 1/2) ln z − z + ln √(2π).

Thus we have

Φ_2(m, n; λ) + Φ_3(m, n; λ) ≤ (1/2)[(1 + γ_n) ln λ − γ_n ln γ_n + 1 + γ_n − λ] + n^{-1} ln[ 2 (π γ_n n^2)^{-1/2} ].

Incorporating Φ_1(m, n; λ) and n^{-1} ln[2(π γ_n n^2)^{-1/2}] into p_max(n, λ; γ) and defining the exponent (A.4) as

ψ_max(λ, γ) := lim_{n→∞} (1/n) ln g_max(m, n; λ) = (1/2)[(1 + γ) ln λ − γ ln γ + γ + 1 − λ]

completes the proof.

The upper RIC bound U^BT(δ, ρ) is obtained by constructing the groups G_i according to Lemma 2.4, taking a union bound over all u = Nr groups, and bounding the extreme eigenvalues within a group by the extreme eigenvalues of the Wishart matrices A_{M_i}^*A_{M_i}; see (2.9). In preparation for bounding the right-hand side of (2.9), we compute a bound on Nr g_max(m, n; λ). From Lemma A.2 and (2.8) we have

(A.8)  N (N choose k)(m choose k)^{-1} g_max(m, n; λ) ≤ p'_max(n, λ) e^{n ψ_U(λ, γ)},

where

ψ_U(λ, γ) := δ^{-1}[H(ρδ) − δγ H(ρ/γ)] + ψ_max(λ, γ)

and

(A.9)  p'_max(n, λ) := λ^3 [ (5/4) n(γ − ρ) / (γδ(1 − ρδ)) ]^{1/2} p_max(n, λ; γ).

The proof of Proposition 2.12 then follows.

Proof of Proposition 2.12. For ɛ > 0, with λ^max(δ, ρ) = min_γ λ^max(δ, ρ; γ) being the optimal solution to (2.5),

P(U(k, n, N) > U(δ_n, ρ_n) + ɛ) = P(U(k, n, N) > λ^max(δ_n, ρ_n) − 1 + ɛ) = P(1 + U(k, n, N) > λ^max(δ_n, ρ_n) + ɛ)

(A.10)  ≤ N (N choose k)(m choose k)^{-1} ∫_{λ^max(δ_n,ρ_n)+ɛ}^∞ f_max(m, n; λ) dλ ≤ N (N choose k)(m choose k)^{-1} ∫_{λ^max(δ_n,ρ_n)+ɛ}^∞ g_max(m, n; λ) dλ.

To bound the final integral in (A.10), we write g_max(m, n; λ) as a product of two separate functions, one of λ and the other of n and γ_n, as g_max(m, n; λ) = ϕ(n, γ_n) λ^{-3/2} λ^{n(1+γ_n)/2} e^{-nλ/2}, where

ϕ(n, γ_n) = π^{-1/2} n^{-3/2} (n/2)^{n(1+γ_n)/2} [Γ(nγ_n/2) Γ(n/2)]^{-1}.

With this, and using the fact that λ^max(δ_n, ρ_n) > 1 + γ_n and that λ^{n(1+γ_n)/2} e^{-nλ/2} is strictly decreasing in λ on [λ^max(δ_n, ρ_n), ∞), we can bound the integral in (A.10) as follows:

(A.11)  ∫_{λ^max(δ_n,ρ_n)+ɛ}^∞ g_max(m, n; λ) dλ ≤ ϕ(n, γ_n) [λ^max(δ_n, ρ_n) + ɛ]^{n(1+γ_n)/2} e^{-n[λ^max(δ_n,ρ_n)+ɛ]/2} ∫_{λ^max(δ_n,ρ_n)+ɛ}^∞ λ^{-3/2} dλ
        = [λ^max(δ_n, ρ_n) + ɛ]^{3/2} g_max(m, n; λ^max(δ_n, ρ_n) + ɛ) ∫_{λ^max(δ_n,ρ_n)+ɛ}^∞ λ^{-3/2} dλ
        = 2[λ^max(δ_n, ρ_n) + ɛ] g_max(m, n; λ^max(δ_n, ρ_n) + ɛ).

Thus (A.10) and (A.11) together give

(A.12)  P(U(k, n, N) > U(δ_n, ρ_n) + ɛ) ≤ 2[λ^max(δ_n, ρ_n) + ɛ] Nr g_max(m, n; λ^max(δ_n, ρ_n) + ɛ)
        ≤ p'_max(n, λ^max(δ_n, ρ_n) + ɛ) exp( n ψ_U(λ^max(δ_n, ρ_n) + ɛ) )
        ≤ p'_max(n, λ^max(δ_n, ρ_n) + ɛ) exp( nɛ (d/dλ) ψ_U(λ^max(δ_n, ρ_n)) ),

where r = (N choose k)(m choose k)^{-1}, and the last inequality is due to ψ_U(λ) being strictly concave.

The following is a corollary to Proposition 2.12.

Corollary A.3. Let (δ, ρ) ∈ (0, 1)^2, and let A be a matrix of size n × N whose entries are drawn i.i.d. from N(0, 1/n). Define U^BT(δ, ρ) = λ^max(δ, ρ) − 1, where λ^max(δ, ρ; γ) is the solution of (2.5) for each γ ∈ [ρ, δ^{-1}] and λ^max(δ, ρ) := min_γ λ^max(δ, ρ; γ). Then for any ɛ > 0, in the proportional-growth asymptotics,

P(U(k, n, N) > U^BT(δ, ρ) + ɛ) → 0

exponentially in n.

Proof. From (A.12), since (d/dλ)ψ_U(λ^max(δ_n, ρ_n)) < 0 is strictly bounded away from zero and all the limits of (δ_n, ρ_n) are smoothly varying functions, we conclude for any ɛ > 0

lim_{n→∞} P(U(k, n, N) > U^BT(δ, ρ) + ɛ) = 0.

Thus we finish the proof for U^BT(δ, ρ). We sketch the similar proof for Proposition 2.13 and L^BT(δ, ρ). Bounds on the probability distribution function of the minimum eigenvalue of a Wishart matrix are given in the following lemma.

Lemma A.4 (see [12], presented in this form in [2]). Let A_M be a matrix of size n × m whose entries are drawn i.i.d. from N(0, 1/n). Let f_min(m, n; λ) denote the distribution function for the smallest eigenvalue of the derived Wishart matrix A_M^*A_M, of size m × m. Then f_min(m, n; λ) satisfies

(A.13)  f_min(m, n; λ) ≤ π^{-1} nλ (nλ/2)^{(n−m)/2} Γ((n + 1)/2) [Γ(m/2) Γ((n − m + 1)/2) Γ((n − m + 2)/2)]^{-1} e^{-nλ/2} =: g_min(m, n; λ).

Again an explicit expression of f_min(m, n; λ) in terms of exponential and polynomial parts leads to the following lemma.

Lemma A.5. Let γ_n = m/n, and define

(A.14)  ψ_min(λ, γ) := H(γ) + (1/2)[(1 − γ) ln λ + γ ln γ + 1 − γ − λ].

Then

(A.15)  f_min(m, n; λ) ≤ g_min(m, n; λ) ≤ p_min(n, λ; γ) exp( n ψ_min(λ, γ) ),

where p_min(n, λ; γ) is a polynomial in n, λ, and γ, given by

(A.16)  p_min(n, λ; γ) := (e/(2π)) λ.

The proof of Lemma A.5 follows that of Lemma A.2 and is omitted for brevity. Equipped with Lemma A.5, a large deviation analysis yields

(A.17)  N (N choose k)(m choose k)^{-1} g_min(m, n; λ) ≤ p'_min(n, λ) e^{n ψ_L(λ, γ)},

where

ψ_L(λ, γ) := δ^{-1}[H(ρδ) − δγ H(ρ/γ)] + ψ_min(λ, γ)

and

(A.18)  p'_min(n, λ) := λ^3 [ (5/4) n(γ − ρ) / (γδ(1 − ρδ)) ]^{1/2} p_min(n, λ; γ).

With Lemma A.5 and (A.17), Proposition 2.13 follows similarly to the proof of Proposition 2.12 stated earlier in this section. The bound L^BT(δ, ρ) is a corollary of Proposition 2.13 (Corollary A.6), the proof of which is the same as the proof of Corollary A.3.

Corollary A.6. Let (δ, ρ) ∈ (0, 1)^2, and let A be a matrix of size n × N whose entries are drawn i.i.d. from N(0, 1/n). Define L^BT(δ, ρ) := 1 − λ^min(δ, ρ), where λ^min(δ, ρ; γ) is the solution of (2.4) for each γ ∈ [ρ, δ^{-1}] and λ^min(δ, ρ) := max_γ λ^min(δ, ρ; γ). Then for any ɛ > 0, in the proportional-growth asymptotic,

P(L(k, n, N) > L^BT(δ, ρ) + ɛ) → 0

exponentially in n.

REFERENCES

[1] J. D. Blanchard, C. Cartis, and J. Tanner, Decay properties for restricted isometry constants, IEEE Signal Process. Lett.
[2] J. D. Blanchard, C. Cartis, and J. Tanner, Compressed sensing: How sharp is the RIP?, SIAM Rev., accepted.
[3] J. D. Blanchard, C. Cartis, J. Tanner, and A. Thompson, Phase transitions for greedy sparse approximation algorithms, Appl. Comput. Harmon. Anal., to appear.
[4] T. Blumensath and M. E. Davies, Iterative hard thresholding for compressed sensing, Appl. Comput. Harmon. Anal., 27 (2009).
[5] E. J. Candès, The restricted isometry property and its implications for compressed sensing, C. R. Math. Acad. Sci. Paris.
[6] E. J. Candès, J. Romberg, and T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory, 52 (2006).
[7] E. J. Candès and T. Tao, Decoding by linear programming, IEEE Trans. Inform. Theory.
[8] W. Dai and O. Milenkovic, Subspace pursuit for compressive sensing signal reconstruction, IEEE Trans. Inform. Theory.
[9] A. d'Aspremont, F. Bach, and L. El Ghaoui, Optimal solutions for sparse principal component analysis, J. Mach. Learn. Res., 9 (2008).
[10] D. L. Donoho, Compressed sensing, IEEE Trans. Inform. Theory, 52 (2006).
[11] C. Dossal, G. Peyré, and J. Fadili, A numerical exploration of compressed sampling recovery, Linear Algebra Appl.
[12] A. Edelman, Eigenvalues and condition numbers of random matrices, SIAM J. Matrix Anal. Appl.
[13] A. Edelman and N. R. Rao, Random matrix theory, Acta Numer.
[14] Edinburgh Compressed Sensing Research Group.

[15] S. Foucart and M.-J. Lai, Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1, Appl. Comput. Harmon. Anal., 26 (2009).
[16] M. Journée, Y. Nesterov, P. Richtárik, and R. Sepulchre, Generalized power method for sparse principal component analysis, J. Mach. Learn. Res.
[17] A. Juditsky and A. Nemirovski, On verifiable sufficient conditions for sparse signal recovery via ℓ1 minimization, Math. Program. Ser. B, submitted.
[18] D. Kleitman, private communication.
[19] J. B. Kruskal, Three-way arrays: Rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics, Linear Algebra Appl.
[20] M. Ledoux, The Concentration of Measure Phenomenon, Math. Surveys Monogr. 89, American Mathematical Society, Providence, RI, 2001.
[21] S. Mendelson, A. Pajor, and N. Tomczak-Jaegermann, Reconstruction and subgaussian operators in asymptotic geometric analysis, Geom. Funct. Anal.
[22] B. K. Natarajan, Sparse approximate solutions to linear systems, SIAM J. Comput.
[23] D. Needell and J. Tropp, CoSaMP: Iterative signal recovery from incomplete and inaccurate samples, Appl. Comput. Harmon. Anal., 26 (2009).
[24] H. Rauhut, Compressive sensing and structured random matrices, in Theoretical Foundations and Numerical Methods for Sparse Recovery, Radon Series Comp. Appl. Math. 9, Walter de Gruyter, Berlin, 2010.
[25] E. T. Whittaker and G. N. Watson, A Course of Modern Analysis, 4th ed., Cambridge University Press, London, 1946.


More information

Optimization for Compressed Sensing

Optimization for Compressed Sensing Optimization for Compressed Sensing Robert J. Vanderbei 2014 March 21 Dept. of Industrial & Systems Engineering University of Florida http://www.princeton.edu/ rvdb Lasso Regression The problem is to solve

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee227c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee227c@berkeley.edu

More information

Sparse Optimization Lecture: Sparse Recovery Guarantees

Sparse Optimization Lecture: Sparse Recovery Guarantees Those who complete this lecture will know Sparse Optimization Lecture: Sparse Recovery Guarantees Sparse Optimization Lecture: Sparse Recovery Guarantees Instructor: Wotao Yin Department of Mathematics,

More information

Uniqueness Conditions for A Class of l 0 -Minimization Problems

Uniqueness Conditions for A Class of l 0 -Minimization Problems Uniqueness Conditions for A Class of l 0 -Minimization Problems Chunlei Xu and Yun-Bin Zhao October, 03, Revised January 04 Abstract. We consider a class of l 0 -minimization problems, which is to search

More information

CoSaMP. Iterative signal recovery from incomplete and inaccurate samples. Joel A. Tropp

CoSaMP. Iterative signal recovery from incomplete and inaccurate samples. Joel A. Tropp CoSaMP Iterative signal recovery from incomplete and inaccurate samples Joel A. Tropp Applied & Computational Mathematics California Institute of Technology jtropp@acm.caltech.edu Joint with D. Needell

More information

sparse and low-rank tensor recovery Cubic-Sketching

sparse and low-rank tensor recovery Cubic-Sketching Sparse and Low-Ran Tensor Recovery via Cubic-Setching Guang Cheng Department of Statistics Purdue University www.science.purdue.edu/bigdata CCAM@Purdue Math Oct. 27, 2017 Joint wor with Botao Hao and Anru

More information

EXPONENTIAL BOUNDS IMPLYING CONSTRUCTION OF COMPRESSED SENSING MATRICES, ERROR-CORRECTING CODES AND NEIGHBORLY POLYTOPES BY RANDOM SAMPLING

EXPONENTIAL BOUNDS IMPLYING CONSTRUCTION OF COMPRESSED SENSING MATRICES, ERROR-CORRECTING CODES AND NEIGHBORLY POLYTOPES BY RANDOM SAMPLING EXPONENTIAL BOUNDS IMPLYING CONSTRUCTION OF COMPRESSED SENSING MATRICES, ERROR-CORRECTING CODES AND NEIGHBORLY POLYTOPES BY RANDOM SAMPLING DAVID L. DONOHO AND JARED TANNER Abstract. In [12] the authors

More information

Interpolation via weighted l 1 -minimization

Interpolation via weighted l 1 -minimization Interpolation via weighted l 1 -minimization Holger Rauhut RWTH Aachen University Lehrstuhl C für Mathematik (Analysis) Mathematical Analysis and Applications Workshop in honor of Rupert Lasser Helmholtz

More information

Compressive Sensing with Random Matrices

Compressive Sensing with Random Matrices Compressive Sensing with Random Matrices Lucas Connell University of Georgia 9 November 017 Lucas Connell (University of Georgia) Compressive Sensing with Random Matrices 9 November 017 1 / 18 Overview

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

The Analysis Cosparse Model for Signals and Images

The Analysis Cosparse Model for Signals and Images The Analysis Cosparse Model for Signals and Images Raja Giryes Computer Science Department, Technion. The research leading to these results has received funding from the European Research Council under

More information

IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER

IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER 2015 1239 Preconditioning for Underdetermined Linear Systems with Sparse Solutions Evaggelia Tsiligianni, StudentMember,IEEE, Lisimachos P. Kondi,

More information

Observability of a Linear System Under Sparsity Constraints

Observability of a Linear System Under Sparsity Constraints 2372 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 58, NO 9, SEPTEMBER 2013 Observability of a Linear System Under Sparsity Constraints Wei Dai and Serdar Yüksel Abstract Consider an -dimensional linear

More information

Does Compressed Sensing have applications in Robust Statistics?

Does Compressed Sensing have applications in Robust Statistics? Does Compressed Sensing have applications in Robust Statistics? Salvador Flores December 1, 2014 Abstract The connections between robust linear regression and sparse reconstruction are brought to light.

More information

Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization

Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization Forty-Fifth Annual Allerton Conference Allerton House, UIUC, Illinois, USA September 26-28, 27 WeA3.2 Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization Benjamin

More information

An Introduction to Sparse Approximation

An Introduction to Sparse Approximation An Introduction to Sparse Approximation Anna C. Gilbert Department of Mathematics University of Michigan Basic image/signal/data compression: transform coding Approximate signals sparsely Compress images,

More information

Near Optimal Signal Recovery from Random Projections

Near Optimal Signal Recovery from Random Projections 1 Near Optimal Signal Recovery from Random Projections Emmanuel Candès, California Institute of Technology Multiscale Geometric Analysis in High Dimensions: Workshop # 2 IPAM, UCLA, October 2004 Collaborators:

More information

The Stability of Low-Rank Matrix Reconstruction: a Constrained Singular Value Perspective

The Stability of Low-Rank Matrix Reconstruction: a Constrained Singular Value Perspective Forty-Eighth Annual Allerton Conference Allerton House UIUC Illinois USA September 9 - October 1 010 The Stability of Low-Rank Matrix Reconstruction: a Constrained Singular Value Perspective Gongguo Tang

More information

Some Formulas for the Principal Matrix pth Root

Some Formulas for the Principal Matrix pth Root Int. J. Contemp. Math. Sciences Vol. 9 014 no. 3 141-15 HIKARI Ltd www.m-hiari.com http://dx.doi.org/10.1988/ijcms.014.4110 Some Formulas for the Principal Matrix pth Root R. Ben Taher Y. El Khatabi and

More information

ORTHOGONAL matching pursuit (OMP) is the canonical

ORTHOGONAL matching pursuit (OMP) is the canonical IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 9, SEPTEMBER 2010 4395 Analysis of Orthogonal Matching Pursuit Using the Restricted Isometry Property Mark A. Davenport, Member, IEEE, and Michael

More information

Sparse Solutions of Systems of Equations and Sparse Modelling of Signals and Images

Sparse Solutions of Systems of Equations and Sparse Modelling of Signals and Images Sparse Solutions of Systems of Equations and Sparse Modelling of Signals and Images Alfredo Nava-Tudela ant@umd.edu John J. Benedetto Department of Mathematics jjb@umd.edu Abstract In this project we are

More information

Concentration Inequalities for Random Matrices

Concentration Inequalities for Random Matrices Concentration Inequalities for Random Matrices M. Ledoux Institut de Mathématiques de Toulouse, France exponential tail inequalities classical theme in probability and statistics quantify the asymptotic

More information

COMPRESSED SENSING IN PYTHON

COMPRESSED SENSING IN PYTHON COMPRESSED SENSING IN PYTHON Sercan Yıldız syildiz@samsi.info February 27, 2017 OUTLINE A BRIEF INTRODUCTION TO COMPRESSED SENSING A BRIEF INTRODUCTION TO CVXOPT EXAMPLES A Brief Introduction to Compressed

More information

Exponential decay of reconstruction error from binary measurements of sparse signals

Exponential decay of reconstruction error from binary measurements of sparse signals Exponential decay of reconstruction error from binary measurements of sparse signals Deanna Needell Joint work with R. Baraniuk, S. Foucart, Y. Plan, and M. Wootters Outline Introduction Mathematical Formulation

More information

On the Observability of Linear Systems from Random, Compressive Measurements

On the Observability of Linear Systems from Random, Compressive Measurements On the Observability of Linear Systems from Random, Compressive Measurements Michael B Wakin, Borhan M Sanandaji, and Tyrone L Vincent Abstract Recovering or estimating the initial state of a highdimensional

More information

On Optimal Frame Conditioners

On Optimal Frame Conditioners On Optimal Frame Conditioners Chae A. Clark Department of Mathematics University of Maryland, College Park Email: cclark18@math.umd.edu Kasso A. Okoudjou Department of Mathematics University of Maryland,

More information

Compressive Sensing and Beyond

Compressive Sensing and Beyond Compressive Sensing and Beyond Sohail Bahmani Gerorgia Tech. Signal Processing Compressed Sensing Signal Models Classics: bandlimited The Sampling Theorem Any signal with bandwidth B can be recovered

More information

Hard Thresholding Pursuit Algorithms: Number of Iterations

Hard Thresholding Pursuit Algorithms: Number of Iterations Hard Thresholding Pursuit Algorithms: Number of Iterations Jean-Luc Bouchot, Simon Foucart, Pawel Hitczenko Abstract The Hard Thresholding Pursuit algorithm for sparse recovery is revisited using a new

More information

Fast Hard Thresholding with Nesterov s Gradient Method

Fast Hard Thresholding with Nesterov s Gradient Method Fast Hard Thresholding with Nesterov s Gradient Method Volkan Cevher Idiap Research Institute Ecole Polytechnique Federale de ausanne volkan.cevher@epfl.ch Sina Jafarpour Department of Computer Science

More information