A New Estimate of Restricted Isometry Constants for Sparse Solutions


A New Estimate of Restricted Isometry Constants for Sparse Solutions

Ming-Jun Lai and Louis Y. Liu

January 12, 2011

Abstract. We show that as long as the restricted isometry constant $\delta_{2k} < 1/2$, there exists a value $q_0 \in (0, 1]$ such that for any $q < q_0$, each minimizer of the nonconvex $\ell_q$ minimization for the sparse solution of any underdetermined linear system is the sparse solution.

1 Introduction

Let us start with one of the basic problems in compressed sensing: seek the minimizer $x \in \mathbb{R}^n$ solving

    $\min\{\|x\|_0 : \Phi x = b\}$,    (1)

where $\|x\|_0$ stands for the number of nonzero entries of the vector $x$ and $\Phi$ is a matrix of size $m \times n$ with $m \ll n$. That is, the purpose of the research is to find the sparse solution of the under-determined linear system $\Phi x = b$, i.e., a solution with $\|x\|_0$ as small as possible. A key concept used to describe the solution of (1) is the restricted isometry constant of a matrix $\Phi$, introduced in [5].

Definition 1. For each integer $k = 1, 2, \ldots$, let $\delta_k$ be the smallest number such that

    $(1 - \delta_k)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta_k)\|x\|_2^2$    (2)

(Footnotes: mjlai@math.uga.edu, Department of Mathematics, The University of Georgia, Athens, GA 30602; this author is partly supported by the National Science Foundation under grant DMS. yliu@marlboro.edu, Department of Mathematics, Marlboro College, Marlboro, Vermont.)

holds for all $k$-sparse vectors $x \in \mathbb{R}^n$, i.e., all $x$ with $\|x\|_0 \le k$, where $\|x\|_2$ is the standard $\ell_2$ norm of the vector $x$. $\delta_k$ is called the restricted isometry constant.

One of the standard approaches to find the minimizer $x$ is to seek the minimizer $x^1 \in \mathbb{R}^n$ solving

    $\min\{\|x\|_1 : \Phi x = b, \ x \in \mathbb{R}^n\}$,    (3)

where $\|x\|_1$ is the standard $\ell_1$ norm of the vector $x$. Suppose that $\|x\|_0 = k$. Let $T \subset \{1, 2, \ldots, n\}$ be the subset of indices of the $k$ largest entries of $x$. For any vector $x$, let $x_T$ denote the vector whose entries agree with those of $x$ at the indices in $T$ and are zero elsewhere. Many researchers have established the following result in various forms in the literature:

Theorem 1 (Noiseless Recovery). For appropriate $\delta_{2k} > 0$, the solution $x^1$ of the minimization problem (3) satisfies

    $\|x - x^1\|_2 \le C k^{-1/2} \|x - x_T\|_1$    (4)

for any $x$ with $\Phi x = b$, where $C$ is a positive constant dependent on $\delta_{2k}$. In particular, if $x$ is $k$-sparse, the recovery is exact.

It is known from Candès, 2008 [4] that the above result holds when $\delta_{2k} < \sqrt{2} - 1 \approx 0.4142$. This condition is improved in Foucart and Lai, 2009 [11] to $\delta_{2k} < 2/(3 + \sqrt{2}) \approx 0.4531$. Subsequently, this condition is further improved in Cai, Wang, and Xu, 2010 [2] for special $k$ (a multiple of 4) to $\delta_{2k} < 2/(2 + \sqrt{5}) \approx 0.4721$, as well as in Foucart, 2010 [10] to $\delta_{2k} < 3/(4 + \sqrt{6}) \approx 0.4652$ and, for large $k$, $\delta_{2k} < 0.4734$. Recently, Li and Mo proposed another approach in [15] and showed that the inequality (4) holds as long as $\delta_{2k} < 0.4931$.

The problem (3) was extended in [12] by seeking a minimizer $x^q \in \mathbb{R}^n$ for a number $q \in (0, 1)$ which solves

    $\min\{\|x\|_q^q : \Phi x = b, \ x \in \mathbb{R}^n\}$,    (5)

where $\|x\|_q$ is the standard $\ell_q$ quasi-norm of the vector $x$. See also [7], [11], [6], [14] for studies of the nonconvex $\ell_q$ minimization problem (5). In Foucart and Lai, 2009 [11], the following result was established.
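As a concrete aside, the constant $\delta_k$ of Definition 1 can be computed by brute force for tiny matrices, which makes conditions like $\delta_{2k} < \sqrt{2} - 1$ tangible. The sketch below is my own illustration (the matrix size, random seed, and Gaussian normalization are arbitrary choices), not part of the paper:

```python
# Estimate the restricted isometry constant delta_k of a small matrix
# by brute force over all k-column submatrices.
import itertools
import numpy as np

def restricted_isometry_constant(Phi, k):
    """Smallest delta with (1-delta)|x|^2 <= |Phi x|^2 <= (1+delta)|x|^2
    for all k-sparse x: the largest deviation from 1 of the eigenvalues of
    Phi_S^T Phi_S, over all supports S of size k."""
    n = Phi.shape[1]
    delta = 0.0
    for S in itertools.combinations(range(n), k):
        G = Phi[:, list(S)].T @ Phi[:, list(S)]   # k x k Gram matrix
        eig = np.linalg.eigvalsh(G)               # ascending eigenvalues
        delta = max(delta, abs(eig[0] - 1), abs(eig[-1] - 1))
    return delta

rng = np.random.default_rng(0)
m, n = 8, 12
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # normalized Gaussian matrix
d1 = restricted_isometry_constant(Phi, 1)
d2 = restricted_isometry_constant(Phi, 2)
print(d1, d2)   # delta_k is nondecreasing in k
```

The enumeration over all $\binom{n}{k}$ supports is exponential in $k$, which is why such constants are bounded rather than computed at realistic sizes.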

Theorem 2. Suppose that $\delta_{2k} < 2(3 - \sqrt{2})/7 \approx 0.4531$. Then for any $q \in (0, 1]$,

    $\|x - x^q\|_q^q \le C \|x - x_T\|_q^q$    (6)

for any $x$ with $\Phi x = b$, where $C$ is a positive constant dependent on $\delta_{2k}$. In particular, if $x = x^0$ is $k$-sparse, the recovery is exact.

To improve the result in Theorem 2, our main result in this paper is

Theorem 3. Suppose that $\delta_{2k} < 1/2$. There exists a number $q_0 \in (0, 1]$ such that for any $q < q_0$, each minimizer $x^q$ of the $\ell_q$ minimization (5) is the sparse solution $x^0$ of (1). Furthermore, there exists a positive constant $C_q$ such that for any $x \in \mathbb{R}^n$ with $\Phi x = b$,

    $\|x^q - x\|_q^q \le C_q \|x - x_T\|_q^q$,

where $C_q$ depends on $q$ and $\delta_{2k}$ and $T$ is the index set of the $k$ nonzero entries of the sparse solution $x^0$.

Under the assumption that the $\ell_q$ minimization (5) can be computed, the sensing matrix $\Phi$ requires a more relaxed condition on the restricted isometry constant for finding the sparse solution than the conditions listed above for Theorem 1. For simplicity, we only discuss the sparse solution for noiseless recovery in this paper. We leave the discussion of noisy recovery to the interested reader.

After we establish an elementary inequality in the preliminary Section 2, we prove our main result in Section 3. Finally we give a few remarks in Section 4.

2 Preliminary Results

Let $x = (x_1, \ldots, x_n)^T$ be a vector in $\mathbb{R}^n$, and let $\|x\|_p$ denote the standard norm of $x$ for any $p \ge 1$ and the standard quasi-norm when $0 < p < 1$. Recall that we have the standard inequality

    $\|x\|_1 \le \sqrt{n}\, \|x\|_2$, or $\|x\|_2 \ge \frac{\|x\|_1}{\sqrt{n}}$,    (7)

by the well-known Cauchy-Schwarz inequality. A converse of the above inequality is

    $\|x\|_2 \le \|x\|_1$,

which can be seen directly after dividing both sides by $\|x\|_\infty = \max\{|x_i|, i = 1, \ldots, n\}$. Recently, Cai, Wang and Xu proved the following interesting inequality in [3].

Lemma 1 (Cai, Wang and Xu [3]). For any $x \in \mathbb{R}^n$,

    $\|x\|_2 \le \frac{\|x\|_1}{\sqrt{n}} + \frac{\sqrt{n}}{4}\Big(\max_{1 \le i \le n} |x_i| - \min_{1 \le i \le n} |x_i|\Big)$.    (8)

We now extend the inequality to the setting of the quasi-norm $\|x\|_q$ with $q \in (0, 1)$. It is easy to see that for $0 < q < 1$,

    $\|x\|_2 \ge \frac{\|x\|_q}{n^{1/q - 1/2}}$    (9)

by using Hölder's inequality. The converse inequality $\|x\|_2 \le \|x\|_q$, $x \in \mathbb{R}^n$, is often used in the literature. Motivated by the new inequality (8), we would like to find a converse of the inequality (9).

Lemma 2. Fix $0 < q < 1$. For any $x \in \mathbb{R}^n$,

    $\|x\|_2 \le \frac{\|x\|_q}{n^{1/q - 1/2}} + \sqrt{n}\Big(\max_{1 \le i \le n} |x_i| - \min_{1 \le i \le n} |x_i|\Big)$.    (10)

Proof. Without loss of generality, we may assume that $x_1 \ge x_2 \ge \cdots \ge x_n \ge 0$ and that not all $x_i$ are equal. Let $f(x) = \|x\|_2 - \|x\|_q\, n^{1/2 - 1/q}$. Let us fix $x_1$ and find an upper bound for $f(x)$. Note that

    $\frac{\partial f}{\partial x_i} = \frac{x_i}{\|x\|_2} - n^{1/2 - 1/q}\, \frac{x_i^{q-1}}{\|x\|_q^{q-1}}$

is an increasing function of $x_i$. Indeed, it is easy to see that both functions

    $\frac{\|x\|_2}{x_i} = \Big(\sum_{j=1}^n \big(\tfrac{x_j}{x_i}\big)^2\Big)^{1/2}$ and $\Big(\frac{\|x\|_q}{x_i}\Big)^{1-q} = \Big(\sum_{j=1}^n \big(\tfrac{x_j}{x_i}\big)^q\Big)^{(1-q)/q}$

of $x_i$ are decreasing. Thus, $\partial f/\partial x_i$ is an increasing function of $x_i$. It follows that $f(x)$ is convex as a function of $x_i$ for each $i = 2, \ldots, n-1$, so the maximum is achieved at either $x_i = x_{i-1}$ or $x_i = x_{i+1}$. It follows that when $f$ achieves its maximum, $x$ must be of the form $x_1 = x_2 = \cdots = x_k$ and $x_{k+1} = \cdots = x_n$ for some $1 \le k < n$. Thus,

    $f(x) = \sqrt{k(x_1^2 - x_n^2) + n x_n^2} - \big(k(x_1^q - x_n^q) + n x_n^q\big)^{1/q}\, n^{1/2 - 1/q}$.

It is easy to see that

    $f(x) \le \sqrt{n(x_1^2 - x_n^2) + n x_n^2} - (n x_n^q)^{1/q}\, n^{1/2 - 1/q} = \sqrt{n}\,(x_1 - x_n)$.

To find a better upper bound for some $q < 1$, see Remark 4.4. One can see from Remark 4.4 that it is not an easy task to find out which $k$ maximizes the function

    $g(k) = \sqrt{k(x_1^2 - x_n^2) + n x_n^2} - \big(k(x_1^q - x_n^q) + n x_n^q\big)^{1/q}\, n^{1/2 - 1/q}$.

Anyway, the result in Lemma 2 is good enough for our application in the next section.

3 Main Results and Proofs

To describe our results, we need more notation. We use Null($\Phi$) to denote the null space of $\Phi$ and $S(x)$ to denote the support of $x \in \mathbb{R}^n$, i.e., $S(x) = \{i : x_i \ne 0\}$ for $x = (x_1, \ldots, x_n)^T$. Recall that $x^0$ is a sparse solution, i.e., $\Phi x^0 = b$ with $S(x^0) \subset T$ and the cardinality of $T$ less than or equal to $k$. Let $x^q$ be the solution of the minimization problem (5). Recall from [12] that $x^q$ is the unique sparse solution $x^0$ if and only if

    $\|h_T\|_q^q < \|h_{T^c}\|_q^q$    (11)

for all nonzero vectors $h$ in the null space of $\Phi$. This is called the null space property. Indeed, we have

    $\|x^0\|_q^q = \|x^0_T\|_q^q \le \|x^0_T + h_T\|_q^q + \|h_T\|_q^q < \|x^0_T + h_T\|_q^q + \|h_{T^c}\|_q^q = \|x^0 + h\|_q^q$

by (11) for any nonzero vector $h$ in the null space of $\Phi$. Thus, $x^0$ is the solution of (5). Another way to show the sufficiency is to let $x^q$ be the solution of (5) and let $h = x^q - x^0$, which is in the null space of $\Phi$. If $h \ne 0$, we have

    $\|x^0_T\|_q^q = \|x^0\|_q^q \ge \|x^q\|_q^q = \|x^0_T + h_T\|_q^q + \|h_{T^c}\|_q^q \ge \|x^0_T\|_q^q - \|h_T\|_q^q + \|h_{T^c}\|_q^q$.

It follows that $\|h_{T^c}\|_q^q \le \|h_T\|_q^q < \|h_{T^c}\|_q^q$, which is a contradiction, where we have used (11). Thus, $x^q$ is the sparse solution.

The necessity of the null space property (11) can be seen as follows: suppose that there is a nonzero vector $h \in$ Null($\Phi$) such that $\|h_{T^c}\|_q^q \le \|h_T\|_q^q$. Let $x^0 = h_T$ and $b = \Phi x^0$. If $\|h_{T^c}\|_q^q < \|h_T\|_q^q$, then $-h_{T^c}$ satisfies $\Phi(-h_{T^c}) = b$ and the minimization (5) should find a solution $x^q$ which is not $h_T$, the sparse solution for this vector $b$, which contradicts the assumption that $x^q$ is the unique sparse solution $x^0$. Similarly, if $\|h_{T^c}\|_q^q = \|h_T\|_q^q$, the minimization (5) may find the two solutions $h_T$ and $-h_{T^c}$, which is again a contradiction.

In fact, one can find the smallest constant $\rho_q < 1$ such that

    $\|h_T\|_q^q \le \rho_q\, \|h_{T^c}\|_q^q$ for all $h \in$ Null($\Phi$).

Indeed, it is easy to see that the following equality holds:

    $\sup_{h \in \text{Null}(\Phi)} \frac{\sum_{i \in T} |h_i|^q}{\sum_{i \notin T} |h_i|^q} = \max_{h \in \text{Null}(\Phi),\, \|h\|_2 = 1} \frac{\sum_{i \in T} |h_i|^q}{\sum_{i \notin T} |h_i|^q}$,

which is denoted by $\rho_q$. In general, for $h = x^q - x^0$, let

    $\|h_T\|_q^q = \tau(h, q)\, \|h_{T^c}\|_q^q$.    (12)

The purpose of the study is to show how to make $\tau(h, q) < 1$ for all nonzero vectors $h$ in the null space of $\Phi$.

For any nonzero vector $h$ in the null space of $\Phi$, we rewrite $h$ as a sum of vectors $h_T, h_{T_1}, h_{T_2}, \ldots$, each of sparsity at most $k$. Here, $T$ corresponds to the locations of the $k$ largest entries of $x^0$; $T_1$ to the locations of the $k$ largest entries of $h_{T^c}$; $T_2$ to the locations of the next $k$ largest entries of $h_{T^c}$, and so on, where $T^c$ stands for the complement of the index set $T$ in $\{1, 2, \ldots, n\}$. Without loss of generality, we may assume that $h = (h_T, h_{T_1}, h_{T_2}, \ldots)^T$ with the cardinality of each $T_i$ equal to $k$. Let us introduce another ratio $t := t(h, q) \in [0, 1]$, a number such that

    $\|h_{T_1}\|_q^q = t \sum_{i \ge 1} \|h_{T_i}\|_q^q$.

First of all, we have

Lemma 3. For $q \in (0, 1)$, we have

    $\sum_{i \ge 2} \|h_{T_i}\|_2^2 \le (1 - t)\, t^{(2-q)/q}\, k^{-(2-q)/q} \Big(\sum_{i \ge 1} \|h_{T_i}\|_q^q\Big)^{2/q}$.    (13)

Proof. Every entry of $h_{T_i}$ for $i \ge 2$ is bounded by $|h_{2k+1}| \le (\|h_{T_1}\|_q^q / k)^{1/q}$. It is easy to see that

    $\sum_{i \ge 2} \|h_{T_i}\|_2^2 \le |h_{2k+1}|^{2-q} \sum_{i \ge 2} \|h_{T_i}\|_q^q \le \Big(\frac{\|h_{T_1}\|_q^q}{k}\Big)^{(2-q)/q} (1 - t) \sum_{i \ge 1} \|h_{T_i}\|_q^q = (1 - t) \Big(\frac{t}{k}\Big)^{(2-q)/q} \Big(\sum_{i \ge 1} \|h_{T_i}\|_q^q\Big)^{2/q}$.

The result in (13) follows. Next we have

Lemma 4. For $q \in (0, 1)$, we have

    $\sum_{i \ge 2} \|h_{T_i}\|_2 \le \frac{1}{k^{1/q - 1/2}} \Big(\sum_{i \ge 1} \|h_{T_i}\|_q^q\Big)^{1/q}$.    (14)

Proof. By Lemma 2 applied to each $k$-vector $h_{T_i}$, we have

    $\|h_{T_i}\|_2 \le k^{1/2 - 1/q}\, \|h_{T_i}\|_q + k^{1/2}\big(|h_{ik+1}| - |h_{ik+k}|\big)$

for $i \ge 2$. Since the entries decrease in magnitude, the differences telescope, and it follows that

    $\sum_{i \ge 2} \|h_{T_i}\|_2 \le k^{1/2 - 1/q} \sum_{i \ge 2} \|h_{T_i}\|_q + k^{1/2}\, |h_{2k+1}| \le k^{1/2 - 1/q} \sum_{i \ge 2} \|h_{T_i}\|_q + k^{1/2}\, \frac{\|h_{T_1}\|_q}{k^{1/q}} \le k^{1/2 - 1/q} \sum_{i \ge 1} \|h_{T_i}\|_q \le \frac{1}{k^{1/q - 1/2}} \Big(\sum_{i \ge 1} \|h_{T_i}\|_q^q\Big)^{1/q}$,

since $\sum_{i \ge 1} \|h_{T_i}\|_q \le \big(\sum_{i \ge 1} \|h_{T_i}\|_q^q\big)^{1/q}$ for $q \le 1$. Furthermore, we have

Lemma 5. For $q \in (0, 1)$, we have

    $\|\Phi(h_T + h_{T_1})\|_2^2 \ge \frac{1 - \delta_{2k}}{k^{2/q - 1}} \big(\tau(h, q)^{2/q} + t^{2/q}\big) \Big(\sum_{i \ge 1} \|h_{T_i}\|_q^q\Big)^{2/q}$.    (15)

Proof. By the definition of $\delta_{2k}$ and using (9) (with $n$ replaced by $k$), we have

    $\|\Phi(h_T + h_{T_1})\|_2^2 \ge (1 - \delta_{2k})\, \|h_T + h_{T_1}\|_2^2 = (1 - \delta_{2k})\big(\|h_T\|_2^2 + \|h_{T_1}\|_2^2\big) \ge (1 - \delta_{2k})\, \frac{\|h_T\|_q^2 + \|h_{T_1}\|_q^2}{k^{2/q - 1}} = \frac{1 - \delta_{2k}}{k^{2/q - 1}} \big(\tau(h, q)^{2/q} + t^{2/q}\big) \Big(\sum_{i \ge 1} \|h_{T_i}\|_q^q\Big)^{2/q}$.

It is easy to see that $\Phi(h_T + h_{T_1}) = \Phi h - \Phi\big(\sum_{j \ge 2} h_{T_j}\big) = -\Phi\big(\sum_{j \ge 2} h_{T_j}\big)$, since $\Phi h = 0$. We have the following estimate:

Lemma 6. For $q \in (0, 1)$, we have

    $\|\Phi(h_T + h_{T_1})\|_2^2 = \Big\|\Phi\Big(\sum_{j \ge 2} h_{T_j}\Big)\Big\|_2^2 \le \frac{(1 - t)\, t^{(2-q)/q} + \delta_{2k}}{k^{(2-q)/q}} \Big(\sum_{i \ge 1} \|h_{T_i}\|_q^q\Big)^{2/q}$.    (16)

Proof. A straightforward calculation shows

    $\Big\|\Phi\Big(\sum_{j \ge 2} h_{T_j}\Big)\Big\|_2^2 = \sum_{i, j \ge 2} \langle \Phi h_{T_i}, \Phi h_{T_j} \rangle = \sum_{j \ge 2} \|\Phi h_{T_j}\|_2^2 + 2 \sum_{2 \le i < j} \langle \Phi h_{T_i}, \Phi h_{T_j} \rangle \le (1 + \delta_k) \sum_{i \ge 2} \|h_{T_i}\|_2^2 + 2\, \delta_{2k} \sum_{2 \le i < j} \|h_{T_i}\|_2\, \|h_{T_j}\|_2 \le \sum_{i \ge 2} \|h_{T_i}\|_2^2 + \delta_{2k} \Big(\sum_{i \ge 2} \|h_{T_i}\|_2\Big)^2 \le \Big(\frac{(1 - t)\, t^{(2-q)/q}}{k^{(2-q)/q}} + \frac{\delta_{2k}}{k^{2/q - 1}}\Big) \Big(\sum_{i \ge 1} \|h_{T_i}\|_q^q\Big)^{2/q}$,

where the last step uses Lemmas 3 and 4 and $k^{(2-q)/q} = k^{2/q - 1}$.

By using (15) and (16), we have

    $(1 - \delta_{2k})\big(\tau(h, q)^{2/q} + t^{2/q}\big) \le (1 - t)\, t^{(2-q)/q} + \delta_{2k}$,

or, since $(1 - t)\, t^{(2-q)/q} = t^{(2-q)/q} - t^{2/q}$,

    $\tau(h, q)^{2/q} \le \frac{\delta_{2k} + t^{(2-q)/q} - (2 - \delta_{2k})\, t^{2/q}}{1 - \delta_{2k}}$.    (17)
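Inequality (13), which enters the estimate above, can be spot-checked numerically. This is my own sketch, not part of the paper; the tail length, block size $k = 4$, and exponent $q = 1/2$ are arbitrary test choices:

```python
# Numeric spot-check of Lemma 3 / inequality (13) for random tail vectors
# sorted into blocks of size k.
import numpy as np

def check_13(h_tail, k, q):
    """h_tail plays the role of h restricted to T^c; after sorting by
    magnitude, consecutive groups of k entries are the blocks T_1, T_2, ..."""
    h = np.sort(np.abs(h_tail))[::-1]
    blocks = h.reshape(-1, k)                     # rows are T_1, T_2, ...
    sigma = (h ** q).sum()                        # sum over i of |h_{T_i}|_q^q
    t = (blocks[0] ** q).sum() / sigma            # the ratio t(h, q)
    lhs = (blocks[1:] ** 2).sum()                 # sum_{i>=2} |h_{T_i}|_2^2
    rhs = (1 - t) * t ** ((2 - q) / q) * k ** (-(2 - q) / q) * sigma ** (2 / q)
    return lhs <= rhs + 1e-9

rng = np.random.default_rng(5)
ok = all(check_13(rng.standard_normal(24), k=4, q=0.5) for _ in range(500))
print(ok)   # the bound holds on every sample
```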

Let us study the maximum of the right-hand side as a function of $t \in [0, 1]$. Letting $s = (2 - q)/(2 - \delta_{2k})$, with $s \le 2$, it is easy to see that the maximum happens at $t = s/2$, and

    $\tau(h, q)^{2/q} \le \frac{\delta_{2k} + (s/2)^{(2-q)/q} - (2 - \delta_{2k})(s/2)^{2/q}}{1 - \delta_{2k}} = \frac{\delta_{2k}\, s + q\,(s/2)^{2/q}}{s\,(1 - \delta_{2k})}$.

If the term on the right-hand side of the inequality is less than 1, then we will have $\tau(h, q) < 1$ and hence $x^q$ is the sparse solution of (1). To see the admissible range of values of $\delta_{2k}$, we continue with the following simple analysis:

    $\delta_{2k}\, s + q\,(s/2)^{2/q} < s - \delta_{2k}\, s$, or $2\,\delta_{2k} + \frac{q}{s}\Big(\frac{s}{2}\Big)^{2/q} < 1$.

Further simplification yields

    $\delta_{2k} + \frac{q\,(2 - \delta_{2k})}{2\,(2 - q)} \Big(\frac{2 - q}{2\,(2 - \delta_{2k})}\Big)^{2/q} < \frac{1}{2}$.    (18)

Since

    $\frac{q\,(2 - \delta_{2k})}{2\,(2 - q)} \Big(\frac{2 - q}{2\,(2 - \delta_{2k})}\Big)^{2/q} \le \frac{q\,(2 - \delta_{2k})}{2\,(2 - q)} \Big(\frac{2 - q}{2}\Big)^{2/q}$ and $\Big(\frac{2 - q}{2}\Big)^{2/q} \to \frac{1}{e}$,

the second term on the left-hand side of (18) goes to zero as $q \to 0^+$ as long as $\delta_{2k} < 1$, and we can establish the results in Theorem 3.

Proof of Theorem 3. Based on the proofs of Lemmas 5 and 6, we have

    $\|h_T\|_q^2 \le \rho_q^2 \Big(\sum_{i \ge 1} \|h_{T_i}\|_q^q\Big)^{2/q}$, where $\rho_q^{2/q} := \frac{\delta_{2k}\, s + q\,(s/2)^{2/q}}{s\,(1 - \delta_{2k})}$.

That is,

    $\|h_T\|_q^q \le \rho_q \sum_{i \ge 1} \|h_{T_i}\|_q^q$.    (19)
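The limiting behavior behind (18) can be seen numerically. In the sketch below (my own illustration; the sample value $\delta_{2k} = 0.49$ and the grid of $q$ values are arbitrary, and the second term is evaluated as read from (18)), the left-hand side of (18) decreases toward $\delta_{2k}$ and drops below $1/2$ once $q$ is small enough:

```python
# Evaluate the left-hand side of (18) for a fixed delta_2k and decreasing q.
def lhs_18(q, delta):
    # second term of (18): q(2-d)/(2(2-q)) * ((2-q)/(2(2-d)))^(2/q)
    second = q * (2 - delta) / (2 * (2 - q)) * ((2 - q) / (2 * (2 - delta))) ** (2 / q)
    return delta + second

delta = 0.49
vals = [lhs_18(q, delta) for q in (1.0, 0.5, 0.2, 0.05)]
print(vals)   # decreasing toward delta; below 1/2 for the smaller q
```

This mirrors the argument in the text: for any $\delta_{2k} < 1/2$ the condition fails for $q$ near 1 in general but always holds once $q$ is sufficiently small.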

Since $\delta_{2k} < 1/2$, there exists a $q_0$ such that (18) holds for all $q < q_0$, and hence we will have $\rho_q < 1$ for any $q < q_0$. As $x^q$ is a minimizer of (5), for any $x$ which is a solution of the under-determined linear equations $\Phi x = b$, we let $h = x^q - x$ and compute

    $\|x_T\|_q^q + \|x_{T^c}\|_q^q = \|x\|_q^q \ge \|x^q\|_q^q = \|x + h\|_q^q = \sum_{i \in T} |x_i + h_i|^q + \sum_{i \in T^c} |x_i + h_i|^q \ge \|x_T\|_q^q - \|h_T\|_q^q + \|h_{T^c}\|_q^q - \|x_{T^c}\|_q^q$.

Thus, we have

    $\|h_{T^c}\|_q^q \le \|h_T\|_q^q + 2\,\|x_{T^c}\|_q^q$.    (20)

Together with (19), we conclude

    $\sum_{i \ge 1} \|h_{T_i}\|_q^q \le \rho_q \sum_{i \ge 1} \|h_{T_i}\|_q^q + 2\,\|x_{T^c}\|_q^q$.

That is,

    $\sum_{i \ge 1} \|h_{T_i}\|_q^q \le \frac{2}{1 - \rho_q}\, \|x_{T^c}\|_q^q$.

By (19), we have

    $\|h\|_q^q = \|h_T\|_q^q + \sum_{i \ge 1} \|h_{T_i}\|_q^q \le (\rho_q + 1) \sum_{i \ge 1} \|h_{T_i}\|_q^q \le \frac{2\,(1 + \rho_q)}{1 - \rho_q}\, \|x_{T^c}\|_q^q$.

This completes the proof.

4 Remarks

We have a few remarks in order.

Remark 4.1. Clearly, the results in Theorem 3 can be extended to the noisy recovery setting as in [4] and [11]. We leave the discussion to the interested reader.

Remark 4.2. The results in Theorem 3 can also be extended to deal with sparse solutions for multiple measurement vectors as discussed in [13]. We omit the details.

Remark 4.3. Recently the block sparse solution of compressed sensing problems was introduced and studied in [8], [1]; it has many practical applications, such as DNA microarrays [17], multiband signals [16], and magnetoencephalography (MEG) [9]. In recovering the sparse solution $x$ from $\Phi x = b$, the entries of $x$ are grouped into blocks. That is, $x = (x_{t_1}, x_{t_2}, \ldots, x_{t_l})$ with $x_{t_i}$ being a block of entries for each $i$. One looks for the fewest number of nonzero blocks $x_{t_i}$ such that $\Phi x = b$. Letting

    $\|x\|_{2,q} = \Big(\sum_{i=1}^{l} \|x_{t_i}\|_2^q\Big)^{1/q}$

be a mixed norm, where $\|x_{t_i}\|_2$ is the standard $\ell_2$ norm of the block $x_{t_i}$, one finds the block sparse solution by

    $\min\{\|x\|_{2,q} : \Phi x = b\}$.

(Cf. [8] for $q = 1$.) The concept of restricted isometry constant was extended to this mixed norm minimization with $q = 1$ in [8]. Our study in Section 3 can be generalized to this setting. We leave the details to the interested reader.

Remark 4.4. In order to find a better upper bound in Lemma 2, we need to find out which $k$ maximizes $f(x)$. Let us treat the right-hand side of the equation at the end of the proof of Lemma 2 as a function

    $g(k) = \sqrt{k(x_1^2 - x_n^2) + n x_n^2} - n^{1/2 - 1/q}\big(k(x_1^q - x_n^q) + n x_n^q\big)^{1/q}$.

Note that $g(0) = 0$ and $g(n) = 0$, so the maximum of $g$ must occur at a $k$ between 1 and $n - 1$. The derivative of $g$ is

    $g'(k) = \frac{x_1^2 - x_n^2}{2\sqrt{k(x_1^2 - x_n^2) + n x_n^2}} - \frac{n^{1/2 - 1/q}\,(x_1^q - x_n^q)}{q}\big(k(x_1^q - x_n^q) + n x_n^q\big)^{1/q - 1}$.

The critical point satisfies

    $\sqrt{k(x_1^2 - x_n^2) + n x_n^2}\; \big(k(x_1^q - x_n^q) + n x_n^q\big)^{1/q - 1} = \frac{q\,(x_1^2 - x_n^2)}{2\,(x_1^q - x_n^q)}\, n^{1/q - 1/2}$.    (21)

The critical point $k$ is not easy to find except for $q = 1$. Let us try the particular value $q = 2/3$. In this case, we have

Lemma 7. For any $x \in \mathbb{R}^n$, one has

    $\|x\|_2 \le \frac{\|x\|_{2/3}}{n} + \frac{2}{3\sqrt{3}}\,\sqrt{n}\, \Big(\max_{1 \le i \le n} |x_i|^{2/3} - \min_{1 \le i \le n} |x_i|^{2/3}\Big)^{3/2}$    (22)

for $q = 2/3$. In particular, one has

    $\|x\|_2 \le \frac{\|x\|_{2/3}}{n} + \frac{2}{3\sqrt{3}}\,\sqrt{n}\, \Big(\max_{1 \le i \le n} |x_i| - \min_{1 \le i \le n} |x_i|\Big)$.    (23)

Proof. A standard calculation shows that $g'(k) = 0$ is achieved at

    $k^* = n\, \frac{\sqrt{p(s, t)} - 3\,t\,(s^2 + s t + 2 t^2)}{6\,(x_1^2 - x_n^2)}$,

where $s := x_1^{2/3}$, $t := x_n^{2/3}$, and

    $p(s, t) := 4 s^6 + 12 s^5 t + 33 s^4 t^2 + 46 s^3 t^3 + 33 s^2 t^4 + 12 s t^5 + 4 t^6$.

We then see that the maximum of $g$ is

    $g(k^*) = \sqrt{\frac{n}{6}}\, F(s, t)$, where $F(s, t) := \sqrt{\sqrt{p(s, t)} - 3 s t\,(s + t)} - \frac{1}{6}\Big(\frac{\sqrt{p(s, t)} + 3 s t\,(s + t)}{s^2 + s t + t^2}\Big)^{3/2}$.

To find an upper bound of $F(s, t)$, we may consider $F(1, y)$ with $y = t/s$ for a fixed $s$, since $F$ is homogeneous of degree $3/2$. It is easy to plot $F(1, y)$ and $\frac{4\sqrt{2}}{6}(1 - y)^{3/2}$ together with their difference; see Fig. 1. Hence, the inequality (22) follows. Furthermore, by the quasi-triangle inequality for $q = 2/3$,

    $\Big(\max_{1 \le i \le n} |x_i|^q - \min_{1 \le i \le n} |x_i|^q\Big)^{1/q} \le \max_{1 \le i \le n} |x_i| - \min_{1 \le i \le n} |x_i|$,

one obtains the inequality in (23). The analysis above just shows that a better estimate than Lemma 2 is hard to find.
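Inequality (23) can be spot-checked numerically. The sketch below is my own illustration, with the constant $2/(3\sqrt{3})$ as read from the lemma and random nonnegative test vectors of varying length:

```python
# Numeric spot-check of inequality (23) for q = 2/3:
# |x|_2 <= |x|_{2/3} / n + (2/(3*sqrt(3))) * sqrt(n) * (max|x_i| - min|x_i|).
import numpy as np

def rhs_23(x):
    n = len(x)
    q = 2 / 3
    lq = (np.abs(x) ** q).sum() ** (1 / q)        # the l_{2/3} quasi-norm
    spread = np.abs(x).max() - np.abs(x).min()
    return lq / n + 2 / (3 * np.sqrt(3)) * np.sqrt(n) * spread

rng = np.random.default_rng(6)
ok = all(np.linalg.norm(x) <= rhs_23(x) + 1e-9
         for x in (rng.random(rng.integers(2, 30)) for _ in range(1000)))
print(ok)   # the bound holds on every sample
```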

Figure 1: The graphs of $F(1, t)$ and $\frac{4\sqrt{2}}{6}(1 - t)^{3/2}$ (left) and the graph of their difference (right).

References

[1] R. G. Baraniuk, V. Cevher, M. F. Duarte, C. Hegde, Model-based compressive sensing, IEEE Trans. Inform. Theory 56 (2010).

[2] T. Cai, L. Wang, G. Xu, Shifting inequality and recovery of sparse signals, IEEE Trans. Signal Process. 58 (2010).

[3] T. Cai, L. Wang, and G. Xu, New bounds for restricted isometry constants, IEEE Trans. Inform. Theory (2010).

[4] E. Candès, The restricted isometry property and its implications for compressed sensing, C. R. Acad. Sci. Ser. I 346 (2008).

[5] E. Candès and T. Tao, Decoding by linear programming, IEEE Trans. Inform. Theory 51 (2005).

[6] M. Davies and R. Gribonval, Restricted isometry constants where $\ell_p$ sparse recovery can fail for $0 < p \le 1$, IEEE Trans. Inform. Theory, in press.

[7] R. Chartrand, Exact reconstruction of sparse signals via nonconvex minimization, IEEE Signal Process. Lett. 14 (2007).

[8] Y. C. Eldar, M. Mishali, Robust recovery of signals from a structured union of subspaces, IEEE Trans. Inform. Theory 55 (2009).

[9] Y. C. Eldar, P. Kuppinger, H. Bolcskei, Block-sparse signals: uncertainty relations and efficient recovery, IEEE Trans. Signal Process. 58 (2010).

[10] S. Foucart, A note on guaranteed sparse recovery via $\ell_1$-minimization, Appl. Comput. Harmon. Anal. 29 (2010).

[11] S. Foucart, M. J. Lai, Sparsest solutions of underdetermined linear systems via $\ell_q$-minimization for $0 < q \le 1$, Appl. Comput. Harmon. Anal. 26 (2009).

[12] R. Gribonval and M. Nielsen, Sparse decompositions in unions of bases, IEEE Trans. Inform. Theory 49 (2003).

[13] M. J. Lai and Louis Y. Liu, The null space property for sparse recovery from multiple measurement vectors, to appear in Appl. Comput. Harmon. Anal., 2010.

[14] M. J. Lai and J. Wang, An unconstrained $\ell_q$ minimization for sparse solutions of underdetermined linear systems, to appear in SIAM J. Optim., 2010.

[15] S. Li and Q. Mo, New bounds on the restricted isometry constant $\delta_{2k}$, submitted, 2010.

[16] M. Mishali, Y. C. Eldar, Blind multi-band signal reconstruction: compressed sensing for analog signals, IEEE Trans. Signal Process. 57 (2009).

[17] F. Parvaresh, H. Vikalo, S. Misra, B. Hassibi, Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays, IEEE J. Sel. Top. Signal Process. 2 (2008).


More information

Fast Hard Thresholding with Nesterov s Gradient Method

Fast Hard Thresholding with Nesterov s Gradient Method Fast Hard Thresholding with Nesterov s Gradient Method Volkan Cevher Idiap Research Institute Ecole Polytechnique Federale de ausanne volkan.cevher@epfl.ch Sina Jafarpour Department of Computer Science

More information

Model-Based Compressive Sensing for Signal Ensembles. Marco F. Duarte Volkan Cevher Richard G. Baraniuk

Model-Based Compressive Sensing for Signal Ensembles. Marco F. Duarte Volkan Cevher Richard G. Baraniuk Model-Based Compressive Sensing for Signal Ensembles Marco F. Duarte Volkan Cevher Richard G. Baraniuk Concise Signal Structure Sparse signal: only K out of N coordinates nonzero model: union of K-dimensional

More information

SCRIBERS: SOROOSH SHAFIEEZADEH-ABADEH, MICHAËL DEFFERRARD

SCRIBERS: SOROOSH SHAFIEEZADEH-ABADEH, MICHAËL DEFFERRARD EE-731: ADVANCED TOPICS IN DATA SCIENCES LABORATORY FOR INFORMATION AND INFERENCE SYSTEMS SPRING 2016 INSTRUCTOR: VOLKAN CEVHER SCRIBERS: SOROOSH SHAFIEEZADEH-ABADEH, MICHAËL DEFFERRARD STRUCTURED SPARSITY

More information

Signal Recovery, Uncertainty Relations, and Minkowski Dimension

Signal Recovery, Uncertainty Relations, and Minkowski Dimension Signal Recovery, Uncertainty Relations, and Minkowski Dimension Helmut Bőlcskei ETH Zurich December 2013 Joint work with C. Aubel, P. Kuppinger, G. Pope, E. Riegler, D. Stotz, and C. Studer Aim of this

More information

Stability and robustness of l 1 -minimizations with Weibull matrices and redundant dictionaries

Stability and robustness of l 1 -minimizations with Weibull matrices and redundant dictionaries Stability and robustness of l 1 -minimizations with Weibull matrices and redundant dictionaries Simon Foucart, Drexel University Abstract We investigate the recovery of almost s-sparse vectors x C N from

More information

Constructing Explicit RIP Matrices and the Square-Root Bottleneck

Constructing Explicit RIP Matrices and the Square-Root Bottleneck Constructing Explicit RIP Matrices and the Square-Root Bottleneck Ryan Cinoman July 18, 2018 Ryan Cinoman Constructing Explicit RIP Matrices July 18, 2018 1 / 36 Outline 1 Introduction 2 Restricted Isometry

More information

A Note on the Complexity of L p Minimization

A Note on the Complexity of L p Minimization Mathematical Programming manuscript No. (will be inserted by the editor) A Note on the Complexity of L p Minimization Dongdong Ge Xiaoye Jiang Yinyu Ye Abstract We discuss the L p (0 p < 1) minimization

More information

Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation

Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 19, NO. 12, DECEMBER 2008 2009 Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation Yuanqing Li, Member, IEEE, Andrzej Cichocki,

More information

Thresholds for the Recovery of Sparse Solutions via L1 Minimization

Thresholds for the Recovery of Sparse Solutions via L1 Minimization Thresholds for the Recovery of Sparse Solutions via L Minimization David L. Donoho Department of Statistics Stanford University 39 Serra Mall, Sequoia Hall Stanford, CA 9435-465 Email: donoho@stanford.edu

More information

Abstract This paper is about the efficient solution of large-scale compressed sensing problems.

Abstract This paper is about the efficient solution of large-scale compressed sensing problems. Noname manuscript No. (will be inserted by the editor) Optimization for Compressed Sensing: New Insights and Alternatives Robert Vanderbei and Han Liu and Lie Wang Received: date / Accepted: date Abstract

More information

Sparse Optimization Lecture: Sparse Recovery Guarantees

Sparse Optimization Lecture: Sparse Recovery Guarantees Those who complete this lecture will know Sparse Optimization Lecture: Sparse Recovery Guarantees Sparse Optimization Lecture: Sparse Recovery Guarantees Instructor: Wotao Yin Department of Mathematics,

More information

Pre-weighted Matching Pursuit Algorithms for Sparse Recovery

Pre-weighted Matching Pursuit Algorithms for Sparse Recovery Journal of Information & Computational Science 11:9 (214) 2933 2939 June 1, 214 Available at http://www.joics.com Pre-weighted Matching Pursuit Algorithms for Sparse Recovery Jingfei He, Guiling Sun, Jie

More information

Robust Principal Component Analysis

Robust Principal Component Analysis ELE 538B: Mathematics of High-Dimensional Data Robust Principal Component Analysis Yuxin Chen Princeton University, Fall 2018 Disentangling sparse and low-rank matrices Suppose we are given a matrix M

More information

A Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases

A Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases 2558 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 48, NO 9, SEPTEMBER 2002 A Generalized Uncertainty Principle Sparse Representation in Pairs of Bases Michael Elad Alfred M Bruckstein Abstract An elementary

More information

Reconstruction from Anisotropic Random Measurements

Reconstruction from Anisotropic Random Measurements Reconstruction from Anisotropic Random Measurements Mark Rudelson and Shuheng Zhou The University of Michigan, Ann Arbor Coding, Complexity, and Sparsity Workshop, 013 Ann Arbor, Michigan August 7, 013

More information

Shifting Inequality and Recovery of Sparse Signals

Shifting Inequality and Recovery of Sparse Signals Shifting Inequality and Recovery of Sparse Signals T. Tony Cai Lie Wang and Guangwu Xu Astract In this paper we present a concise and coherent analysis of the constrained l 1 minimization method for stale

More information

Information-Theoretic Limits of Matrix Completion

Information-Theoretic Limits of Matrix Completion Information-Theoretic Limits of Matrix Completion Erwin Riegler, David Stotz, and Helmut Bölcskei Dept. IT & EE, ETH Zurich, Switzerland Email: {eriegler, dstotz, boelcskei}@nari.ee.ethz.ch Abstract We

More information

Error Correction via Linear Programming

Error Correction via Linear Programming Error Correction via Linear Programming Emmanuel Candes and Terence Tao Applied and Computational Mathematics, Caltech, Pasadena, CA 91125 Department of Mathematics, University of California, Los Angeles,

More information

On the Reconstruction of Block-Sparse Signals With an Optimal Number of Measurements Mihailo Stojnic, Farzad Parvaresh, and Babak Hassibi

On the Reconstruction of Block-Sparse Signals With an Optimal Number of Measurements Mihailo Stojnic, Farzad Parvaresh, and Babak Hassibi IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 57, NO 8, AUGUST 2009 3075 On the Reconstruction of Block-Sparse Signals With an Optimal Number of Measurements Mihailo Stojnic, Farzad Parvaresh, Babak Hassibi

More information

Compressed Sensing and Sparse Recovery

Compressed Sensing and Sparse Recovery ELE 538B: Sparsity, Structure and Inference Compressed Sensing and Sparse Recovery Yuxin Chen Princeton University, Spring 217 Outline Restricted isometry property (RIP) A RIPless theory Compressed sensing

More information

INDUSTRIAL MATHEMATICS INSTITUTE. B.S. Kashin and V.N. Temlyakov. IMI Preprint Series. Department of Mathematics University of South Carolina

INDUSTRIAL MATHEMATICS INSTITUTE. B.S. Kashin and V.N. Temlyakov. IMI Preprint Series. Department of Mathematics University of South Carolina INDUSTRIAL MATHEMATICS INSTITUTE 2007:08 A remark on compressed sensing B.S. Kashin and V.N. Temlyakov IMI Preprint Series Department of Mathematics University of South Carolina A remark on compressed

More information

An algebraic perspective on integer sparse recovery

An algebraic perspective on integer sparse recovery An algebraic perspective on integer sparse recovery Lenny Fukshansky Claremont McKenna College (joint work with Deanna Needell and Benny Sudakov) Combinatorics Seminar USC October 31, 2018 From Wikipedia:

More information

Interpolation via weighted l 1 minimization

Interpolation via weighted l 1 minimization Interpolation via weighted l 1 minimization Rachel Ward University of Texas at Austin December 12, 2014 Joint work with Holger Rauhut (Aachen University) Function interpolation Given a function f : D C

More information

Sparse Solutions of an Undetermined Linear System

Sparse Solutions of an Undetermined Linear System 1 Sparse Solutions of an Undetermined Linear System Maddullah Almerdasy New York University Tandon School of Engineering arxiv:1702.07096v1 [math.oc] 23 Feb 2017 Abstract This work proposes a research

More information

A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang

A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES. Fenghui Wang A NEW ITERATIVE METHOD FOR THE SPLIT COMMON FIXED POINT PROBLEM IN HILBERT SPACES Fenghui Wang Department of Mathematics, Luoyang Normal University, Luoyang 470, P.R. China E-mail: wfenghui@63.com ABSTRACT.

More information

Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery

Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery Jorge F. Silva and Eduardo Pavez Department of Electrical Engineering Information and Decision Systems Group Universidad

More information

Conditions for a Unique Non-negative Solution to an Underdetermined System

Conditions for a Unique Non-negative Solution to an Underdetermined System Conditions for a Unique Non-negative Solution to an Underdetermined System Meng Wang and Ao Tang School of Electrical and Computer Engineering Cornell University Ithaca, NY 14853 Abstract This paper investigates

More information

Signal Recovery from Permuted Observations

Signal Recovery from Permuted Observations EE381V Course Project Signal Recovery from Permuted Observations 1 Problem Shanshan Wu (sw33323) May 8th, 2015 We start with the following problem: let s R n be an unknown n-dimensional real-valued signal,

More information

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis Lecture 7: Matrix completion Yuejie Chi The Ohio State University Page 1 Reference Guaranteed Minimum-Rank Solutions of Linear

More information

Robust Sparse Recovery via Non-Convex Optimization

Robust Sparse Recovery via Non-Convex Optimization Robust Sparse Recovery via Non-Convex Optimization Laming Chen and Yuantao Gu Department of Electronic Engineering, Tsinghua University Homepage: http://gu.ee.tsinghua.edu.cn/ Email: gyt@tsinghua.edu.cn

More information

Solution-recovery in l 1 -norm for non-square linear systems: deterministic conditions and open questions

Solution-recovery in l 1 -norm for non-square linear systems: deterministic conditions and open questions Solution-recovery in l 1 -norm for non-square linear systems: deterministic conditions and open questions Yin Zhang Technical Report TR05-06 Department of Computational and Applied Mathematics Rice University,

More information

Going off the grid. Benjamin Recht Department of Computer Sciences University of Wisconsin-Madison

Going off the grid. Benjamin Recht Department of Computer Sciences University of Wisconsin-Madison Going off the grid Benjamin Recht Department of Computer Sciences University of Wisconsin-Madison Joint work with Badri Bhaskar Parikshit Shah Gonnguo Tang We live in a continuous world... But we work

More information

We will now focus on underdetermined systems of equations: data acquisition system

We will now focus on underdetermined systems of equations: data acquisition system l 1 minimization We will now focus on underdetermined systems of equations: # samples = resolution/ bandwidth data acquisition system unknown signal/image Suppose we observe y = Φx 0, and given y we attempt

More information

Phase Transition Phenomenon in Sparse Approximation

Phase Transition Phenomenon in Sparse Approximation Phase Transition Phenomenon in Sparse Approximation University of Utah/Edinburgh L1 Approximation: May 17 st 2008 Convex polytopes Counting faces Sparse Representations via l 1 Regularization Underdetermined

More information

Blind Deconvolution Using Convex Programming. Jiaming Cao

Blind Deconvolution Using Convex Programming. Jiaming Cao Blind Deconvolution Using Convex Programming Jiaming Cao Problem Statement The basic problem Consider that the received signal is the circular convolution of two vectors w and x, both of length L. How

More information

Non-convex Optimization for Linear System with Pregaussian Matrices. and Recovery from Multiple Measurements. Yang Liu

Non-convex Optimization for Linear System with Pregaussian Matrices. and Recovery from Multiple Measurements. Yang Liu Non-convex Optimization for Linear System with Pregaussian Matrices and Recovery from Multiple Measurements by Yang Liu Under the direction of Ming-Jun Lai Abstract The extremal singular values of random

More information

Self-Calibration and Biconvex Compressive Sensing

Self-Calibration and Biconvex Compressive Sensing Self-Calibration and Biconvex Compressive Sensing Shuyang Ling Department of Mathematics, UC Davis July 12, 2017 Shuyang Ling (UC Davis) SIAM Annual Meeting, 2017, Pittsburgh July 12, 2017 1 / 22 Acknowledgements

More information

A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization of Analog Transmission

A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization of Analog Transmission Li and Kang: A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing 1 A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization

More information

Sparse Recovery over Graph Incidence Matrices

Sparse Recovery over Graph Incidence Matrices Sparse Recovery over Graph Incidence Matrices Mengnan Zhao, M. Devrim Kaba, René Vidal, Daniel P. Robinson, and Enrique Mallada Abstract Classical results in sparse recovery guarantee the exact reconstruction

More information

IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER

IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER 2015 1239 Preconditioning for Underdetermined Linear Systems with Sparse Solutions Evaggelia Tsiligianni, StudentMember,IEEE, Lisimachos P. Kondi,

More information

COMPRESSED Sensing (CS) has recently emerged as a. Sparse Recovery from Combined Fusion Frame Measurements

COMPRESSED Sensing (CS) has recently emerged as a. Sparse Recovery from Combined Fusion Frame Measurements IEEE TRANSACTION ON INFORMATION THEORY, VOL 57, NO 6, JUNE 2011 1 Sparse Recovery from Combined Fusion Frame Measurements Petros Boufounos, Member, IEEE, Gitta Kutyniok, and Holger Rauhut Abstract Sparse

More information

Construction of Multivariate Compactly Supported Orthonormal Wavelets

Construction of Multivariate Compactly Supported Orthonormal Wavelets Construction of Multivariate Compactly Supported Orthonormal Wavelets Ming-Jun Lai Department of Mathematics The University of Georgia Athens, GA 30602 April 30, 2004 Dedicated to Professor Charles A.

More information

Constrained optimization

Constrained optimization Constrained optimization DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_fall17/index.html Carlos Fernandez-Granda Compressed sensing Convex constrained

More information

COMPRESSED SENSING IN PYTHON

COMPRESSED SENSING IN PYTHON COMPRESSED SENSING IN PYTHON Sercan Yıldız syildiz@samsi.info February 27, 2017 OUTLINE A BRIEF INTRODUCTION TO COMPRESSED SENSING A BRIEF INTRODUCTION TO CVXOPT EXAMPLES A Brief Introduction to Compressed

More information

AN INTRODUCTION TO COMPRESSIVE SENSING

AN INTRODUCTION TO COMPRESSIVE SENSING AN INTRODUCTION TO COMPRESSIVE SENSING Rodrigo B. Platte School of Mathematical and Statistical Sciences APM/EEE598 Reverse Engineering of Complex Dynamical Networks OUTLINE 1 INTRODUCTION 2 INCOHERENCE

More information

Reconstruction of Block-Sparse Signals by Using an l 2/p -Regularized Least-Squares Algorithm

Reconstruction of Block-Sparse Signals by Using an l 2/p -Regularized Least-Squares Algorithm Reconstruction of Block-Sparse Signals by Using an l 2/p -Regularized Least-Squares Algorithm Jeevan K. Pant, Wu-Sheng Lu, and Andreas Antoniou University of Victoria May 21, 2012 Compressive Sensing 1/23

More information

Jointly Low-Rank and Bisparse Recovery: Questions and Partial Answers

Jointly Low-Rank and Bisparse Recovery: Questions and Partial Answers Jointly Low-Rank and Bisparse Recovery: Questions and Partial Answers Simon Foucart, Rémi Gribonval, Laurent Jacques, and Holger Rauhut Abstract This preprint is not a finished product. It is presently

More information

Recovery Based on Kolmogorov Complexity in Underdetermined Systems of Linear Equations

Recovery Based on Kolmogorov Complexity in Underdetermined Systems of Linear Equations Recovery Based on Kolmogorov Complexity in Underdetermined Systems of Linear Equations David Donoho Department of Statistics Stanford University Email: donoho@stanfordedu Hossein Kakavand, James Mammen

More information

Fast Angular Synchronization for Phase Retrieval via Incomplete Information

Fast Angular Synchronization for Phase Retrieval via Incomplete Information Fast Angular Synchronization for Phase Retrieval via Incomplete Information Aditya Viswanathan a and Mark Iwen b a Department of Mathematics, Michigan State University; b Department of Mathematics & Department

More information

Tractable Upper Bounds on the Restricted Isometry Constant

Tractable Upper Bounds on the Restricted Isometry Constant Tractable Upper Bounds on the Restricted Isometry Constant Alex d Aspremont, Francis Bach, Laurent El Ghaoui Princeton University, École Normale Supérieure, U.C. Berkeley. Support from NSF, DHS and Google.

More information

Conditions for Robust Principal Component Analysis

Conditions for Robust Principal Component Analysis Rose-Hulman Undergraduate Mathematics Journal Volume 12 Issue 2 Article 9 Conditions for Robust Principal Component Analysis Michael Hornstein Stanford University, mdhornstein@gmail.com Follow this and

More information

Recovery of Sparse Signals Using Multiple Orthogonal Least Squares

Recovery of Sparse Signals Using Multiple Orthogonal Least Squares Recovery of Sparse Signals Using Multiple Orthogonal east Squares Jian Wang, Ping i Department of Statistics and Biostatistics arxiv:40.505v [stat.me] 9 Oct 04 Department of Computer Science Rutgers University

More information

An Overview of Compressed Sensing

An Overview of Compressed Sensing An Overview of Compressed Sensing Nathan Schneider November 18, 2009 Abstract In a large number of applications, the system will be designed to sample at a rate equal to at least the frequency bandwidth

More information

Inverse problems and sparse models (1/2) Rémi Gribonval INRIA Rennes - Bretagne Atlantique, France

Inverse problems and sparse models (1/2) Rémi Gribonval INRIA Rennes - Bretagne Atlantique, France Inverse problems and sparse models (1/2) Rémi Gribonval INRIA Rennes - Bretagne Atlantique, France remi.gribonval@inria.fr Structure of the tutorial Session 1: Introduction to inverse problems & sparse

More information