Two Results on the Schatten p-quasi-norm Minimization for Low-Rank Matrix Recovery


Ming-Jun Lai, Song Li, Louis Y. Liu and Huimin Wang

(Ming-Jun Lai: mjlai@math.uga.edu, Department of Mathematics, The University of Georgia, Athens, GA. Song Li: songli@math.zju.edu.cn, Department of Mathematics, Zhejiang University, Hangzhou, P. R. China. Louis Y. Liu: yliu15@wm.edu, Dept. of Mathematics, College of William and Mary, P.O. Box 8795, Williamsburg, VA. Huimin Wang: glanme@126.com, Department of Mathematics, Shaoxing University, Shaoxing, Zhejiang, P. R. China.)

August 14, 2012

Abstract

We shall provide a sufficient condition explaining when the Schatten $\ell_p$-quasi-norm minimization can be used for low-rank matrix recovery. The condition is given in terms of the restricted isometry property in its matrix version. More precisely, when the restricted isometry constant satisfies $\delta_{3s} < 1$, there exists a real number $p_0 < 1$ such that any solution of the $\ell_p$ minimization is the minimal-rank solution for any $p \le p_0$. This result can be improved to $\delta_{2s} < 1$ if the conjectured inequality for matrices in the Schatten $\ell_p$ quasi-norm raised in [17] is true. The second part of the paper is a proof of the conjecture in the special case when the matrices are nonnegative definite, as well as in several other cases.

Keywords and Phrases: singular values, Schatten p-norm, restricted isometry property, low-rank matrix recovery

Subject Classifications: [AMS 2000] 15A04; 46B09; 15A60

1 Introduction

In this paper, we consider matrices of size $m \times n$ for integers $m, n$. Let $\Omega$ be a given subset of the set of all indices $\{(i, j) : i = 1, \dots, m,\ j = 1, \dots, n\}$. Suppose the entries of a matrix $M$ with indices in $\Omega$ are given; we look for this matrix $M$. This is the so-called matrix completion

problem. Of course, there are many such matrices. We thus seek a matrix $X$ of size $m \times n$ with the smallest rank such that the entries of $X$ in $\Omega$ agree with the entries of $M$ in $\Omega$. In general, we solve the following rank minimization problem:
$$\min_{X \in \mathbb{R}^{m\times n}} \operatorname{rank}(X) \quad \text{such that } \mathcal{A}(X) = \mathcal{A}(M), \tag{1}$$
where $\mathcal{A} : \mathbb{R}^{m\times n} \to \mathbb{R}^l$ is a linear mapping. For example, in the matrix completion setting, $\mathcal{A}(X) = \big(x_{ij},\ (i, j) \in \Omega\big)$ for any matrix $X = [x_{ij}]_{1\le i\le m,\, 1\le j\le n} \in \mathbb{R}^{m\times n}$. For a general $\mathcal{A}$, problem (1) is the so-called low-rank matrix recovery problem, which has recently become an extremely active research area. Within this topic there has been significant work on extending the conditions for recovering sparse vectors to conditions for recovering low-rank matrices, including research on developing various algorithms for recovering low-rank matrices. We refer the interested reader to [4], [18], [5], [15], [16], [12], [17] for theoretical understanding and to [6], [13], and the references therein for computational approaches.

In [17], Oymak, Mohan, Fazel and Hassibi extended recovery conditions from sparse vectors to low-rank matrices in a simple and transparent way. That is, they observed a key equivalence between sparse vector recovery and low-rank matrix recovery using the $\ell_1$ minimization, in terms of the well-known restricted isometry property and the null space conditions. However, when extending the necessary and sufficient condition in terms of the null space property for sparse vector recovery in the $\ell_p$ quasi-norm with $p \in (0, 1)$ (cf., e.g., [8] and [7]) to the setting of recovering low-rank matrices by the Schatten $\ell_p$ quasi-norm minimization
$$\text{minimize } \operatorname{Tr}\big(|X|^p\big) \quad \text{subject to } \mathcal{A}(X) = y, \tag{2}$$
they could not obtain a condition which is both necessary and sufficient for recovering low-rank matrices (cf. [17]), where $p \in (0, 1)$ and $y = \mathcal{A}(X^0)$ with $X^0$ the low-rank solution to be recovered. Their sufficient condition for the recovery of a low-rank matrix of rank $s$ is that
$$\sum_{i=1}^{2s} \sigma_i^p(W) \le \sum_{i=2s+1}^{n} \sigma_i^p(W) \tag{3}$$
for all $W \in \mathcal{N}(\mathcal{A})$ with $W \ne 0$, where $\sigma_i(W)$ is the $i$-th singular value of $W$ and $\mathcal{N}(\mathcal{A})$ is the null space of $\mathcal{A}$. Their necessary condition for the recovery is that
$$\sum_{i=1}^{s} \sigma_i^p(W) \le \sum_{i=s+1}^{n} \sigma_i^p(W) \tag{4}$$
for all $W \in \mathcal{N}(\mathcal{A}_\Omega)\setminus\{0\}$. To bridge the gap between these two conditions, they would need the following inequality:
$$\sum_{i=1}^{k} \big|\sigma_i^p(A) - \sigma_i^p(B)\big| \le \sum_{i=1}^{k} \sigma_i^p(A - B), \quad k = 1, \dots, l := \min\{m, n\}. \tag{5}$$
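As an aside (our own illustration, not part of the paper), the conjectured inequality (5) is easy to probe numerically in the spirit of the experiments reported in [17]. The sketch below, in which all function names are ours, checks every partial sum $k$ for a batch of random Gaussian pairs.

```python
import numpy as np

def conjecture_5_holds(A, B, p, tol=1e-10):
    """Check (5): for every k, sum_{i<=k} |sigma_i(A)^p - sigma_i(B)^p|
    is at most sum_{i<=k} sigma_i(A - B)^p."""
    sa = np.linalg.svd(A, compute_uv=False)   # singular values, decreasing
    sb = np.linalg.svd(B, compute_uv=False)
    sd = np.linalg.svd(A - B, compute_uv=False)
    lhs = np.cumsum(np.abs(sa**p - sb**p))    # partial sums over k = 1, ..., l
    rhs = np.cumsum(sd**p)
    return bool(np.all(lhs <= rhs + tol))     # tolerance absorbs round-off

rng = np.random.default_rng(0)
p = 0.5
results = [conjecture_5_holds(rng.standard_normal((6, 8)),
                              rng.standard_normal((6, 8)), p)
           for _ in range(1000)]
print(all(results))  # no counterexample found in this random batch
```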

As pointed out by these researchers, the inequalities are true for $p = 1$ and $p = 0$, and they hold for general $p \in (0, 1)$ in their numerical experiments. Thus they conjectured that this inequality holds for all matrices $A$ and $B$ of the same size. This is a well-known conjecture in the community of compressed sensing and low-rank matrix recovery. As we shall use the case $p = 1$ later in our effort to establish the conjecture for nonnegative definite matrices, we present it first. The following Lemma 1.1 can be found in [9], [10], [11], [14] and [2].

Lemma 1.1 Let $\sigma_i(A)$ be the $i$-th singular value of an $n$ by $n$ matrix $A$, and similarly for $\sigma_i(B)$, $1 \le i \le n$. Then
$$\sum_{i=1}^{k} \big|\sigma_i(A) - \sigma_i(B)\big| \le \sum_{i=1}^{k} \sigma_i(A - B)$$
for $k = 1, \dots, n$, for all real matrices $A$ and $B$ of size $n$ by $n$.

The inequality in (5) is stronger than the following one:
$$\sum_{i=1}^{k} \big(\sigma_i^p(A) - \sigma_i^p(B)\big) \le \sum_{i=1}^{k} \sigma_i^p(A - B). \tag{6}$$

Lemma 1.2 For any two matrices $A, B$ of the same size, say $m \times n$ with $m \le n$, it holds that
$$\sum_{i=1}^{k} \big(\sigma_i^p(A) - \sigma_i^p(B)\big) \le \sum_{i=1}^{k} \sigma_i^p(A - B) \tag{7}$$
for all $k = 1, \dots, m$, where $0 \le p \le 1$.

The inequality in (7) was known to be true as early as [19] (for an English version, see [20]). See [22] for a general version with a short proof; it is also a consequence of the more general result in [23]. We shall present a self-contained proof in this paper, and we will use it to justify the following Theorem 1.1, which is one of the main results in this paper.

Theorem 1.1 Let $p \in [0, 1]$. For all real nonnegative definite matrices $A$ and $B$ of size $m$ by $m$,
$$\sum_{i=1}^{k} \big|\sigma_i^p(A) - \sigma_i^p(B)\big| \le \sum_{i=1}^{k} \sigma_i^p(A - B) \tag{8}$$
for $k = 1, 2, \dots, m$, where $\sigma_1(A) \ge \sigma_2(A) \ge \cdots \ge \sigma_m(A) \ge 0$, $\sigma_1(B) \ge \sigma_2(B) \ge \cdots \ge \sigma_m(B) \ge 0$ and $\sigma_1(A-B) \ge \sigma_2(A-B) \ge \cdots \ge \sigma_m(A-B) \ge 0$ are the singular values of $A$, $B$ and $A - B$, respectively.

In addition to proving the conjecture in this special case, we extend the proof to establish the conjecture in several other cases.

To describe when the minimizer of (2) is the solution of (1) when $p = 1$, the researchers in [18] introduced the following concept, similar to the restricted isometry property in the study of compressed sensing.

Definition 1.1 Let $\mathcal{A} : \mathbb{R}^{m\times n} \to \mathbb{R}^l$ be a linear map. Let $\delta_s$ be the smallest number such that
$$(1 - \delta_s)\|X\|_F^2 \le \|\mathcal{A}(X)\|_2^2 \le (1 + \delta_s)\|X\|_F^2$$
holds for all matrices $X$ of rank at most $s$. Then $\delta_s$ is called the restricted isometry constant (RIC).

Note that our definition is slightly different from the one in [18]. The researchers in [18] showed that the RIP holds with overwhelming probability for any fixed RIC $\delta_s < 1$. Many results have been obtained based on the RIP. For example, in [4], Candès and Plan showed that when $\delta_{4s} < \sqrt{2} - 1$, the solution of the minimization (2) with $p = 1$ is unique and is the low-rank solution of (1). The matrix RIC bound has been improved several times: to $\delta_{5s} < 0.607$, $\delta_{4s} < 0.558$, and a bound on $\delta_{3s}$ in Mohan and Fazel's work [15], to bounds on $\delta_{2s}$ and $\delta_s$ in Wang and Li's work [24], and to $\delta_{2s} < 1/2$ and $\delta_s < 1/3$ in Cai and Zhang's work [3]. In fact, $\delta_s < 1/3$ is the best estimate. The result in [17] is worth mentioning: the researchers showed that any sufficient condition for recovery of the sparse solution in the compressed sensing setting can be translated into a sufficient condition to recover the minimal-rank solution in the matrix completion setting.

It is also possible to use the Schatten p-quasi-norm to describe a sufficient condition under which a minimizer of (2) is the solution of (1). Another one of the main results in this paper is the following

Theorem 1.2 Suppose that $\delta_{3s} < 1$. Then there exists a number $p_0 \in (0, 1)$ such that for any $p \le p_0$, each minimizer $X^*$ of the Schatten $\ell_p$ quasi-norm minimization (2) is the lowest-rank solution of (1).

Similarly, we shall discuss the noisy recovery case. Let $X^*$ be the optimal solution of the following problem:
$$\min_{X \in \mathbb{R}^{m\times n}} \sum_{i=1}^{m} \sigma_i^p(X) \quad \text{such that } \|\mathcal{A}(X) - \mathcal{A}(M)\|_2 \le \eta, \tag{9}$$
where $\eta > 0$ is a noise level. We can establish the following result.

Theorem 1.3 If $\delta_{3s} < 1$, there exists some $p_0 < 1$ such that for any $p \le p_0$ we have
$$\|X^* - X^0\|_p \le \frac{2 \cdot 3^{1/p}\, s^{1/p - 1/2}\, \eta}{\sqrt{1 - \delta_{3s}}\,\big(1 - \frac{1}{2\sqrt{s}}\big)}.$$

These two results can be extended to $\delta_{2s} < 1$ if the conjecture (5) is true; we shall make this remark at the end of this paper. In particular, in the vector setting the inequality in (5) holds for any vectors $x$ and $y$, so we can recover sparse vectors using $\ell_p$ minimization for a real number $p$ small enough whenever the restricted isometry constant satisfies $\delta_{2s} < 1$.

The remaining part of this paper is devoted to proving Theorem 1.2 in Section 2 and Theorem 1.1 in Section 3. Finally, we end this paper with a few remarks.
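To make the objectives in (2) and (9) concrete, here is a minimal sketch (ours, with assumed helper names) that evaluates the Schatten $\ell_p$ quasi-norm $\|X\|_p^p = \sum_i \sigma_i^p(X)$; as $p \to 0^+$ this quantity approaches $\operatorname{rank}(X)$, which is why small $p$ serves as a surrogate for the rank objective in (1).

```python
import numpy as np

def schatten_p(X, p, tol=1e-12):
    """Return ||X||_p^p = sum_i sigma_i(X)^p; singular values below tol
    are treated as exact zeros so that p -> 0+ recovers the rank."""
    s = np.linalg.svd(X, compute_uv=False)
    s = s[s > tol]
    return float(np.sum(s**p))

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 30))  # rank 3

print(np.linalg.matrix_rank(X))        # 3
print(schatten_p(X, 1.0))              # p = 1: the nuclear norm
for p in (0.5, 0.1, 0.01):
    print(p, schatten_p(X, p))         # tends to rank(X) = 3 as p -> 0+
```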

2 Main Result on Low-Rank Matrix Recovery

In this section, we discuss low-rank matrix recovery without using the inequality in (5). Before we prove Theorem 1.2, we first introduce some preliminary results. The following inequality can be found in [9, p. 423].

Lemma 2.1 For any two matrices $A, B$ it holds that
$$\sigma_{t+s-1}(A + B) \le \sigma_t(A) + \sigma_s(B), \tag{10}$$
where $t, s \le m$ and $t + s - 1 \le m$.

With Lemma 2.1, it is easy to see

Lemma 2.2 For any two matrices $A, B \in \mathbb{R}^{m\times n}$ with $B$ of rank $r$ and any $p > 0$, it holds that
$$\sum_{i=r+1}^{m-r} \sigma_i^p(A - B) \ge \sum_{i=2r+1}^{m} \sigma_i^p(A). \tag{11}$$

Proof. Indeed, for each $i \ge r + 1$ we use Lemma 2.1 (with $t = i - r$ and $s = r + 1$) and $\sigma_{r+1}(B) = 0$ to get $\sigma_i(A) \le \sigma_{i-r}(A - B) + \sigma_{r+1}(B) = \sigma_{i-r}(A - B)$, that is, $\sigma_i^p(A) \le \sigma_{i-r}^p(A - B)$ for $i = r+1, \dots, m$. Summing these inequalities over $i = 2r+1, \dots, m$ gives the desired result. $\Box$

For convenience, we let $\|X\|_q^q = \sum_{i=1}^{m} \sigma_i(X)^q$, where $\sigma_1(X) \ge \sigma_2(X) \ge \cdots \ge \sigma_m(X)$ are the singular values of the matrix $X$. Now, with the Lemmas above, we can prove the following result.

Lemma 2.3 Let $X^0$ be the minimizer of (1) with rank $s$ and let $X^*$ be a global minimizer of (2). Recall that both of them satisfy $\mathcal{A}(X^0) = \mathcal{A}(X^*)$. Write $H = X^0 - X^*$, which is in the null space of $\mathcal{A}$. Then
$$\sum_{i=2s+1}^{m} \sigma_i^p(H) \le \sum_{i=1}^{s} \sigma_i^p(H). \tag{12}$$

Proof. By Lemma 2.2 we have
$$\|X^0\|_p^p \ge \|X^*\|_p^p = \|X^0 - H\|_p^p = \sum_{i=1}^{m} \sigma_i^p(X^0 - H) = \sum_{i=1}^{s} \sigma_i^p(X^0 - H) + \sum_{i=s+1}^{m} \sigma_i^p(X^0 - H) \ge \sum_{i=1}^{s} \big(\sigma_i^p(X^0) - \sigma_i^p(H)\big) + \sum_{i=2s+1}^{m} \sigma_i^p(H)$$

since $\sigma_i(X^0) = 0$ for $i \ge s + 1$. After rearranging the above inequality, since $\|X^0\|_p^p = \sum_{i=1}^{s} \sigma_i^p(X^0)$, we obtain the result. $\Box$

Next we need an inequality which can be found in [4].

Lemma 2.4 For any matrices $X$ and $Y$ with $\langle X, Y\rangle = \operatorname{trace}(X^T Y) = 0$, $\operatorname{rank}(X) \le r$ and $\operatorname{rank}(Y) \le s$,
$$\big|\langle \mathcal{A}(X), \mathcal{A}(Y)\rangle\big| \le \delta_{r+s}\, \|X\|_F \|Y\|_F, \tag{13}$$
where $\|X\|_F$ stands for the Frobenius norm of $X$, and similarly for $\|Y\|_F$.

We are now ready to prove Theorem 1.2.

Proof of Theorem 1.2. Let $H = X^0 - X^*$ and write $H = U \Sigma_H V^T$ in SVD format. We decompose $\Sigma_H = \Sigma_0 + \Sigma_1 + \Sigma_2 + \cdots$ such that $\Sigma_0$ contains the first $s$ singular values of $\Sigma_H$, $\Sigma_1$ the second $s$ singular values of $\Sigma_H$, and so on. Similarly, write $U = [U_0\ U_1\ U_2\ \cdots]$ and $V = [V_0\ V_1\ V_2\ \cdots]$ accordingly. Thus, $H = H_0 + H_1 + H_2 + \cdots$ with $H_i = U_i \Sigma_i V_i^T$. Clearly, $\langle H_i, H_j\rangle = 0$ if $i \ne j$. Thus, we can use the definition of the matrix-version RIP and Lemma 2.4 to get
$$\begin{aligned}
(1 - \delta_{3s})\big(\|H_0\|_F^2 + \|H_1\|_F^2 + \|H_2\|_F^2\big)
&\le \|\mathcal{A}(H_0 + H_1 + H_2)\|_2^2 = \Big\|\mathcal{A}\Big(\sum_{j \ge 3} H_j\Big)\Big\|_2^2 = \sum_{i, j \ge 3} \langle \mathcal{A}(H_i), \mathcal{A}(H_j)\rangle \\
&\le (1 + \delta_s) \sum_{i \ge 3} \|H_i\|_F^2 + \sum_{\substack{i, j \ge 3\\ i \ne j}} \big|\langle \mathcal{A}(H_i), \mathcal{A}(H_j)\rangle\big| \\
&\le (1 + \delta_{2s}) \sum_{i \ge 3} \|H_i\|_F^2 + \delta_{2s} \sum_{\substack{i, j \ge 3\\ i \ne j}} \|H_i\|_F \|H_j\|_F \\
&= \sum_{i \ge 3} \|H_i\|_F^2 + \delta_{2s} \Big(\sum_{j \ge 3} \|H_j\|_F\Big)^2 \le (1 + \delta_{2s}) \Big(\sum_{j \ge 3} \|H_j\|_F\Big)^2.
\end{aligned}$$
If $\|H_3\|_F = 0$, the right-hand side of the inequality above is zero and hence $H = 0$, so $X^* = X^0$; that is, the minimizer is the low-rank solution of (1). Otherwise, the above inequality can be rewritten as
$$\|H_0\|_F \le \left(\frac{1 + \delta_{2s}}{1 - \delta_{3s}}\right)^{1/2} \sum_{j \ge 3} \|H_j\|_F \tag{14}$$
under the assumption that $\|H_3\|_F \ne 0$. In this situation, $\sigma_{3s+1}(H) \ne 0$. We have

$$(3s)^{-1/p}\, \|H_0 + H_1 + H_2 + H_3\|_p = (3s)^{-1/p}\, \sigma_{3s+1}(H) \left(\sum_{i=1}^{4s} \Big(\frac{\sigma_i(H)}{\sigma_{3s+1}(H)}\Big)^p\right)^{1/p} \ge (3s)^{-1/p}\, \sigma_{3s+1}(H) \left(\sum_{i=1}^{3s+1} \Big(\frac{\sigma_i(H)}{\sigma_{3s+1}(H)}\Big)^p\right)^{1/p} \ge \sigma_{3s+1}(H)\, (3s)^{-1/p}\, (3s+1)^{1/p} = \sigma_{3s+1}(H) \Big(1 + \frac{1}{3s}\Big)^{1/p}. \tag{15}$$
On the other hand, we have
$$\sum_{j \ge 3} \|H_j\|_F \le \frac{n}{s}\, \|H_3\|_F \le \frac{n}{s}\, \sqrt{s}\, \sigma_{3s+1}(H) = \frac{n}{\sqrt{s}}\, \sigma_{3s+1}(H). \tag{16}$$
Now we claim that for any $\epsilon \in (0, 1)$, there exists a real number $p_\epsilon \le 1$ such that for $p \le p_\epsilon$ we have
$$\left(\frac{1 + \delta_{2s}}{1 - \delta_{3s}}\right)^{1/2} (3s)^{1/p} \sum_{j \ge 3} \|H_j\|_F \le \epsilon\, \|H_0 + H_1 + H_2 + H_3\|_p. \tag{17}$$
For convenience, let
$$C_s = \left(\frac{1 + \delta_{2s}}{1 - \delta_{3s}}\right)^{1/2}.$$
We consider the following function; using (16) and (15),
$$f(p, H) = \frac{C_s\, (3s)^{1/p} \sum_{j \ge 3} \|H_j\|_F}{\|H_0 + H_1 + H_2 + H_3\|_p} \le \frac{C_s\, (3s)^{1/p}\, \frac{n}{\sqrt{s}}\, \sigma_{3s+1}(H)}{\sigma_{3s+1}(H)\, (3s+1)^{1/p}} = \frac{C_s\, \frac{n}{\sqrt{s}}}{\big(1 + \frac{1}{3s}\big)^{1/p}}.$$
For $0 < \epsilon < 1$, since $\big(1 + \frac{1}{3s}\big)^{1/p} \to \infty$ as $p \to 0^+$, we have $f(p, H) \le \epsilon$ for every $H \in \mathbb{R}^{m\times n}$ once $p$ is small enough; that is, we have (17). Finally, we use the Hölder inequality, the inequality in (14), and then the inequality in (17) to get

$$\|H_0\|_p^p = \sum_{i=1}^{s} \sigma_i^p(H) \le s^{1 - p/2} \Big(\sum_{i=1}^{s} \sigma_i^2(H)\Big)^{p/2} = s^{1 - p/2}\, \|H_0\|_F^p \le s^{1 - p/2} \left(\left(\frac{1 + \delta_{2s}}{1 - \delta_{3s}}\right)^{1/2} \sum_{j \ge 3} \|H_j\|_F\right)^p \le s^{1 - p/2} \big(\epsilon\, (3s)^{-1/p}\, \|H_0 + H_1 + H_2 + H_3\|_p\big)^p = \frac{\epsilon^p}{3\, s^{p/2}}\, \|H_0 + H_1 + H_2 + H_3\|_p^p \le \frac{\epsilon^p}{3\, s^{p/2}} \Big(\|H_0\|_p^p + \|H_1\|_p^p + \sum_{i \ge 2s+1} \sigma_i^p(H)\Big) \le \frac{\epsilon^p}{3\, s^{p/2}}\, 3\, \|H_0\|_p^p = \frac{\epsilon^p}{s^{p/2}}\, \|H_0\|_p^p$$
by using Lemma 2.3 and the fact that $\|H_1\|_p^p \le \|H_0\|_p^p$. Choosing $\epsilon \le 1/2$ and $p$ such that (17) holds, we get a contradiction if $\|H_0\|_F \ne 0$, since $t = \epsilon^p / s^{p/2} < 1$. Thus, $\|H_0\|_F = 0$. Therefore, $H = X^0 - X^* = 0$ and hence the minimizer of (2) is the minimal-rank solution of (1). $\Box$

Next we consider low-rank matrix recovery with noise.

Proof of Theorem 1.3. Let $H = X^0 - X^*$. We have $\|\mathcal{A}(H)\|_2 \le 2\eta$. We decompose $H = H_0 + H_1 + H_2 + \cdots$ as in the proof of Theorem 1.2. Using the definition of the matrix-version RIP we have
$$(1 - \delta_{3s})\big(\|H_0\|_F^2 + \|H_1\|_F^2 + \|H_2\|_F^2\big) \le \|\mathcal{A}(H_0 + H_1 + H_2)\|_2^2 = \Big\|\mathcal{A}\Big(H - \sum_{j \ge 3} H_j\Big)\Big\|_2^2 \le \Big(2\eta + \sum_{j \ge 3} \|\mathcal{A}(H_j)\|_2\Big)^2. \tag{18--20}$$
Hence, we have
$$\|H_0\|_F \le \frac{2\eta + \sum_{j \ge 3} \|\mathcal{A}(H_j)\|_2}{\sqrt{1 - \delta_{3s}}}. \tag{21}$$
Then, by the same argument as for (14), we have
$$\Big(\sum_{j \ge 3} \|\mathcal{A}(H_j)\|_2\Big)^2 \le (1 + \delta_{2s}) \Big(\sum_{j \ge 3} \|H_j\|_F\Big)^2.$$
Thus, we have
$$\|H_0\|_F \le \frac{2\eta + \sqrt{1 + \delta_{2s}}\, \sum_{j \ge 3} \|H_j\|_F}{\sqrt{1 - \delta_{3s}}}. \tag{22}$$
If $\sigma_{3s+1}(H) \ne 0$, then by (17), for any $0 < \epsilon < 1$ there exists some $0 < p_0 < 1$ such that for any $0 < p \le p_0$,
$$\left(\frac{1 + \delta_{2s}}{1 - \delta_{3s}}\right)^{1/2} \frac{(3s)^{1/p} \sum_{j \ge 3} \|H_j\|_F}{\|H_0 + H_1 + H_2 + H_3\|_p} \le \epsilon.$$
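As a numeric aside (ours; the values of $s$, $n$ and the RIC constants below are illustrative assumptions, not quantities from the paper), one can tabulate the upper bound on $f(p, H)$ derived above, $C_s (n/\sqrt{s}) / (1 + \frac{1}{3s})^{1/p}$, to watch it collapse as $p \to 0^+$:

```python
import numpy as np

# Decay of the bound on f(p, H) as p -> 0+; s, n and the RIC values are
# illustrative assumptions, not quantities taken from the paper.
s, n = 5, 100
delta_2s, delta_3s = 0.8, 0.9
C_s = np.sqrt((1 + delta_2s) / (1 - delta_3s))
for p in (0.5, 0.1, 0.05, 0.01, 0.005):
    bound = C_s * (n / np.sqrt(s)) / (1 + 1 / (3 * s))**(1 / p)
    print(p, bound)   # eventually < epsilon for any fixed epsilon in (0, 1)
```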

Using the Hölder inequality, we have
$$\|H_0\|_p^p \le s^{1 - p/2}\, \|H_0\|_F^p \le s^{1 - p/2} \left(\frac{2\eta + \sqrt{1 + \delta_{2s}}\, \sum_{j \ge 3} \|H_j\|_F}{\sqrt{1 - \delta_{3s}}}\right)^p,$$
that is,
$$\|H_0\|_p \le s^{1/p - 1/2} \left(\frac{2\eta}{\sqrt{1 - \delta_{3s}}} + \left(\frac{1 + \delta_{2s}}{1 - \delta_{3s}}\right)^{1/2} \sum_{j \ge 3} \|H_j\|_F\right) \tag{23}$$
$$\le s^{1/p - 1/2}\, \frac{2\eta}{\sqrt{1 - \delta_{3s}}} + \epsilon\, (3s)^{-1/p}\, s^{1/p - 1/2}\, \|H_0 + H_1 + H_2 + H_3\|_p. \tag{24}$$
By Lemma 2.3, we have
$$\|H_0 + H_1 + H_2 + H_3\|_p^p \le \|H\|_p^p = \|H_0\|_p^p + \|H_1\|_p^p + \sum_{i \ge 2s+1} \sigma_i^p(H) \le 3\, \|H_0\|_p^p.$$
Substituting the above inequality into (23)--(24), we have
$$\|H_0\|_p \le \frac{s^{1/p - 1/2}\, 2\eta}{\sqrt{1 - \delta_{3s}}} + \frac{\epsilon}{\sqrt{s}}\, \|H_0\|_p.$$
Then, by direct computation,
$$\|H_0\|_p \le \frac{s^{1/p - 1/2}\, 2\eta}{\sqrt{1 - \delta_{3s}}\, \big(1 - \frac{\epsilon}{\sqrt{s}}\big)}.$$
Now we can estimate the Schatten $p$ quasi-norm of $H$:
$$\|H\|_p \le \frac{3^{1/p}\, s^{1/p - 1/2}\, 2\eta}{\sqrt{1 - \delta_{3s}}\, \big(1 - \frac{\epsilon}{\sqrt{s}}\big)}.$$
Letting $\epsilon = 1/2$, we obtain the result in Theorem 1.3.

When $\sigma_{3s+1}(H) = 0$, we have $H = H_0 + H_1 + H_2$, and by (22),
$$\|H_0\|_F \le \frac{2\eta + \sqrt{1 + \delta_{2s}}\, \sum_{j \ge 3} \|H_j\|_F}{\sqrt{1 - \delta_{3s}}} = \frac{2\eta}{\sqrt{1 - \delta_{3s}}}. \tag{25}$$
Finally, we give an estimate in terms of the Schatten $p$-norm. It is easy to see that
$$\|H_0\|_p^p \le s^{1 - p/2}\, \|H_0\|_F^p \le s^{1 - p/2} \left(\frac{2\eta}{\sqrt{1 - \delta_{3s}}}\right)^p. \tag{26}$$
Hence,
$$\|H\|_p^p \le 3\, \|H_0\|_p^p \le 3\, s^{1 - p/2} \left(\frac{2\eta}{\sqrt{1 - \delta_{3s}}}\right)^p \quad\text{or}\quad \|H\|_p \le 3^{1/p}\, s^{1/p - 1/2}\, \frac{2\eta}{\sqrt{1 - \delta_{3s}}}.$$
This completes the proof. $\Box$
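Both proofs in this section rest on the SVD block decomposition $H = H_0 + H_1 + H_2 + \cdots$, where $H_j$ carries the $j$-th group of $s$ singular values. A short sketch of this construction (our illustration; the names are ours) also confirms the orthogonality $\langle H_i, H_j\rangle = 0$ that feeds into Lemma 2.4.

```python
import numpy as np

def svd_blocks(H, s):
    """Split H = H_0 + H_1 + ... where H_j carries singular values
    js+1, ..., (j+1)s of H (the grouping used in the proofs above)."""
    U, sig, Vt = np.linalg.svd(H, full_matrices=False)
    blocks = []
    for start in range(0, len(sig), s):
        sl = slice(start, start + s)
        blocks.append(U[:, sl] * sig[sl] @ Vt[sl, :])
    return blocks

rng = np.random.default_rng(2)
H = rng.standard_normal((12, 15))
blocks = svd_blocks(H, s=3)

print(np.allclose(sum(blocks), H))                  # H = H_0 + H_1 + ...
print(abs(np.sum(blocks[0] * blocks[1])) < 1e-10)   # <H_0, H_1> = 0
```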

3 Proof of Theorem 1.1

We shall use the Ky Fan $k$-norm for matrices (cf. [9] and [10]) in this section. That is,
$$\|A\|_{(k)} := \sum_{i=1}^{k} \sigma_i(A),$$
where $\sigma_1(A) \ge \sigma_2(A) \ge \cdots \ge \sigma_m(A) \ge 0$ are the singular values of a matrix $A$ of size $m \times n$ with $m \le n$. Let $|A| = \sqrt{A^T A}$ be the square root of the positive semi-definite matrix $A^T A$. When $f(x)$ is a nonnegative function defined on $[0, \infty)$, we define $f(A) = U f(\Sigma_A) V^T$, where $f(\Sigma_A) = \operatorname{diag}\big(f(\sigma_1(A)), f(\sigma_2(A)), \dots, f(\sigma_m(A))\big)$ for any matrix $A = U \Sigma_A V^T$ in the SVD representation. Here, $\Sigma_A = \operatorname{diag}(\sigma_1(A), \dots, \sigma_m(A))$. Also, we use the standard notation $A \succeq B$ if both $A$ and $B$ are symmetric and $A - B$ is positive semi-definite.

Let us start with the following preparatory results.

Lemma 3.1 Let $f(x)$ be a non-negative and increasing function. Then
$$\sigma_i\big(f(|A|)\big) = f\big(\sigma_i(A)\big), \quad i = 1, \dots, m, \tag{27}$$
for all real matrices $A$ of size $m$ by $n$. In general, let $F(x, t)$ be a continuous nonnegative function which is nondecreasing in the first variable. Then for $0 \le a < b$,
$$\sum_{i=1}^{k} \sigma_i\left(\int_a^b F(|A|, \xi I)\, d\xi\right) \le \int_a^b \sum_{i=1}^{k} \sigma_i\big(F(|A|, \xi I)\big)\, d\xi, \tag{28}$$
$k = 1, \dots, m$, for all real matrices $A$ of size $m$ by $n$.

Proof. Let $U \operatorname{diag}(\sigma_1(A), \dots, \sigma_m(A)) V^T$ be the singular value decomposition of $A$. As explained above,
$$|A| = \sqrt{A^T A} = \big(V \Sigma_A^T \Sigma_A V^T\big)^{1/2} = V \operatorname{diag}(\sigma_1(A), \dots, \sigma_n(A))\, V^T \tag{29}$$
and
$$f(|A|) = V \operatorname{diag}\big(f(\sigma_1(A)), \dots, f(\sigma_m(A))\big)\, V^T. \tag{30}$$
Then (27) follows since $f(\sigma_1(A)) \ge \cdots \ge f(\sigma_m(A)) \ge 0$.

With the equalities in (27) in mind, for a continuous and positive function $F(x, y)$ on the first quadrant $\{(x, y) : x \ge 0,\ y \ge 0\}$ which is increasing in the first variable, we have
$$\sum_{i=1}^{k} \sigma_i\left(\int_a^b F(|A|, \xi I)\, d\xi\right) = \left\|\int_a^b F(|A|, \xi I)\, d\xi\right\|_{(k)}. \tag{31}$$

By the triangle inequality for the Ky Fan $k$-norm, one can obtain
$$\left\|\int_a^b F(|A|, \xi I)\, d\xi\right\|_{(k)} \le \int_a^b \big\|F(|A|, \xi I)\big\|_{(k)}\, d\xi. \tag{32}$$
Combining (31) and (32) gives the right-hand side of (28). $\Box$

Recall from [10, Theorem 3.3.4] the following

Lemma 3.2 Let $A$ and $B$ be two matrices of size $m \times l$ and $l \times n$, and let $k = \min\{l, m, n\}$. Denote the ordered singular values of $A$, $B$, and $AB$ by $\sigma_1(A) \ge \cdots \ge \sigma_k(A) \ge 0$, $\sigma_1(B) \ge \cdots \ge \sigma_k(B) \ge 0$ and $\sigma_1(AB) \ge \cdots \ge \sigma_k(AB) \ge 0$. Then
$$\prod_{i=1}^{j} \sigma_i(AB) \le \prod_{i=1}^{j} \sigma_i(A)\, \sigma_i(B), \quad j = 1, \dots, k.$$

We are now ready to prove the following critical inequality.

Lemma 3.3 For any real positive semidefinite symmetric matrices $A$ and $B$ of size $n$ by $n$, let $\sigma_i(A)$ be the $i$-th singular value of $A$, and similarly $\sigma_i(B)$, $1 \le i \le n$. Given any $p \in [0, 1]$, if $A - B$ is positive semi-definite, then
$$\sum_{i=1}^{k} \sigma_i\big(A^p - B^p\big) \le \sum_{i=1}^{k} \sigma_i^p(A - B) \tag{33}$$
for $k = 1, \dots, n$.

Proof. First of all, using the method of contour integration to evaluate improper integrals, for any $0 < p < 1$ one can obtain
$$\int_0^\infty \frac{\xi^{p-1}}{x + \xi}\, d\xi = \frac{\pi}{\sin(p\pi)}\, x^{p-1}, \tag{34}$$
that is,
$$x^p = \frac{\sin(p\pi)}{\pi} \int_0^\infty \frac{x\, \xi^{p-1}}{x + \xi}\, d\xi. \tag{35}$$
Therefore, using the integral representation (35) and (27), one has
$$\sigma_i^p(A - B) = \frac{\sin(p\pi)}{\pi} \int_0^\infty \frac{\sigma_i(A - B)\, \xi^{p-1}}{\sigma_i(A - B) + \xi}\, d\xi = \frac{\sin(p\pi)}{\pi} \int_0^\infty \sigma_i\big((A - B)(A - B + \xi I)^{-1}\big)\, \xi^{p-1}\, d\xi, \tag{36}$$
where $I$ is the $n$ by $n$ identity matrix; this gives the right-hand side of the inequality (33).
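As an aside (our own check, not part of the proof), the representation (35) is easy to confirm with numerical quadrature:

```python
import numpy as np
from scipy.integrate import quad

def xp_via_integral(x, p):
    """Evaluate x^p through (35):
    x^p = sin(p*pi)/pi * integral_0^inf x * xi^(p-1) / (x + xi) dxi."""
    val, _ = quad(lambda xi: x * xi**(p - 1) / (x + xi), 0, np.inf, limit=200)
    return np.sin(p * np.pi) / np.pi * val

for x, p in [(2.0, 0.5), (5.0, 0.3), (0.7, 0.9)]:
    print(x**p, xp_via_integral(x, p))   # the two values agree
```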

Using (35) again, due to the positive semi-definiteness of $A$ and $B$, we have
$$A^p - B^p = \frac{\sin(p\pi)}{\pi} \int_0^\infty \big(A(A + \xi I)^{-1} - B(B + \xi I)^{-1}\big)\, \xi^{p-1}\, d\xi.$$
Now by (27), the left-hand side of (33) is
$$\sum_{i=1}^{k} \sigma_i\big(A^p - B^p\big) = \frac{\sin(p\pi)}{\pi} \sum_{i=1}^{k} \sigma_i\left(\int_0^\infty \big(A(A + \xi I)^{-1} - B(B + \xi I)^{-1}\big)\, \xi^{p-1}\, d\xi\right) \le \frac{\sin(p\pi)}{\pi} \int_0^\infty \sum_{i=1}^{k} \sigma_i\big(A(A + \xi I)^{-1} - B(B + \xi I)^{-1}\big)\, \xi^{p-1}\, d\xi.$$
Thus it suffices to show
$$\sum_{i=1}^{k} \sigma_i\big(A(A + \xi I)^{-1} - B(B + \xi I)^{-1}\big) \le \sum_{i=1}^{k} \sigma_i\big((A - B)(A - B + \xi I)^{-1}\big) \tag{37}$$
for all $\xi > 0$, and furthermore it suffices to show
$$\sum_{i=1}^{k} \sigma_i\big(A(A + I)^{-1} - B(B + I)^{-1}\big) \le \sum_{i=1}^{k} \sigma_i\big((A - B)(A - B + I)^{-1}\big) \tag{38}$$
since one can substitute $A$ and $B$ in (38) by $\frac{1}{\xi} A$ and $\frac{1}{\xi} B$ to get (37). Since $X(X + I)^{-1} = I - (X + I)^{-1}$ for any $n$ by $n$ matrix $X$, (38) is equivalent to
$$\sum_{i=1}^{k} \sigma_i\big((B + I)^{-1} - (A + I)^{-1}\big) \le \sum_{i=1}^{k} \sigma_i\big(I - (A - B + I)^{-1}\big), \tag{39}$$
so we just need to prove (39). By matrix operations,
$$(B + I)^{-1} - (A + I)^{-1} = (B + I)^{-\frac12} \Big(I - \big((B + I)^{-\frac12}(A - B)(B + I)^{-\frac12} + I\big)^{-1}\Big) (B + I)^{-\frac12}. \tag{40}$$
Note that the leading singular value (i.e., the operator norm) of $(B + I)^{-\frac12}$ is at most $1$, since $B$ is positive semidefinite. By using Lemma 3.2, it follows that
$$\sum_{i=1}^{k} \sigma_i\big((B + I)^{-1} - (A + I)^{-1}\big) \le \sum_{i=1}^{k} \sigma_i\Big(I - \big((B + I)^{-\frac12}(A - B)(B + I)^{-\frac12} + I\big)^{-1}\Big). \tag{41}$$

Again, since $B$ and $A - B$ are positive semidefinite, $(B + I)^{-\frac12}(A - B)(B + I)^{-\frac12} \preceq A - B$ (here $X \preceq Y$ means that $Y - X$ is positive semidefinite), which implies
$$I - \big((B + I)^{-\frac12}(A - B)(B + I)^{-\frac12} + I\big)^{-1} \preceq I - (A - B + I)^{-1}. \tag{42}$$
Combining (42) with (41), we obtain (39) as needed. Thus, the inequality in (33) holds for positive semi-definite matrices $A$, $B$ with $A - B$ positive semi-definite. $\Box$

Furthermore, we need another two preparatory results.

Lemma 3.4 Suppose that $X$ is a real symmetric matrix of size $n$ by $n$. Let $\sigma_i(X)$ be the $i$-th singular value of $X$, $1 \le i \le n$. Then
$$\sum_{i=1}^{k} \sigma_i^p\big(X + |X|\big) + \sum_{i=1}^{k} \sigma_i^p\big(|X| - X\big) = 2^p \sum_{i=1}^{k} \sigma_i^p(X), \tag{43}$$
$k = 1, \dots, n$.

Proof. Let $U \operatorname{diag}(x_1, \dots, x_n)\, U^T$ be a diagonalization of $X$ with $x_1 \ge x_2 \ge \cdots \ge x_n$. Then
$$|X| = U \operatorname{diag}(|x_1|, \dots, |x_n|)\, U^T. \tag{44}$$
It follows that
$$X + |X| = U \operatorname{diag}(x_1 + |x_1|, \dots, x_n + |x_n|)\, U^T \tag{45}$$
and
$$|X| - X = U \operatorname{diag}(|x_1| - x_1, \dots, |x_n| - x_n)\, U^T. \tag{46}$$
Thus we have
$$\sum_{i=1}^{k} \sigma_i^p\big(X + |X|\big) + \sum_{i=1}^{k} \sigma_i^p\big(|X| - X\big) = \sum_{i=1}^{k} \big((x_i + |x_i|)^p + (|x_i| - x_i)^p\big) = \sum_{i=1}^{k} 2^p |x_i|^p = 2^p \sum_{i=1}^{k} \sigma_i^p(X),$$
and (43) follows. $\Box$

Lemma 3.5 Let $X$ be a real symmetric matrix and $Y$ a positive semidefinite symmetric matrix of size $n$ by $n$. Suppose $X \preceq Y$. Then
$$\sigma_i\big(X + |X|\big) \le 2\, \sigma_i(Y) \tag{47}$$
for $i = 1, \dots, n$.

Proof. Let
$$\lambda_1(X) \ge \cdots \ge \lambda_m(X) > 0 \ge \lambda_{m+1}(X) \ge \cdots \ge \lambda_n(X), \tag{48}$$
$0 \le m \le n$, and $\lambda_1(Y) \ge \cdots \ge \lambda_n(Y) \ge 0$ be the eigenvalues of $X$ and $Y$, respectively. By using the minimax characterization of eigenvalues, we have
$$\lambda_i(X) = \min_{\dim(V) = i}\ \max_{v \in V:\, \|v\| = 1} v X v^T \le \min_{\dim(V) = i}\ \max_{v \in V:\, \|v\| = 1} v Y v^T = \lambda_i(Y) \tag{49}$$
for $i = 1, \dots, m$. Then
$$\sigma_i\big(X + |X|\big) = 2\lambda_i(X) \le 2\lambda_i(Y) = 2\sigma_i(Y) \tag{50}$$
for $i = 1, \dots, m$, and (47) obviously holds for $i = m+1, \dots, n$, since $\sigma_i(X + |X|) = 0$ for $i = m+1, \dots, n$. $\Box$

In the following lemma, we do not require that $A - B$ be positive semi-definite. This is the difference from Lemma 3.3.

Lemma 3.6 For any real positive semidefinite symmetric matrices $A$ and $B$ of size $n$ by $n$, let $\sigma_i(A)$ be the $i$-th singular value of $A$, and similarly $\sigma_i(B)$, $1 \le i \le n$. Given any $p \in [0, 1]$, we have (33) for $k = 1, \dots, n$.

Proof. Let $C := A - B$. Note that $C$ may not be positive semi-definite. Since $C \preceq \frac{C + |C|}{2}$, we have $A \preceq B + \frac{C + |C|}{2}$. It follows that
$$A^p - B^p \preceq \Big(B + \frac{C + |C|}{2}\Big)^p - B^p. \tag{51}$$
By using Lemma 3.5, we obtain
$$\sum_{i=1}^{k} \sigma_i\left(\frac{A^p - B^p + |A^p - B^p|}{2}\right) \le \sum_{i=1}^{k} \sigma_i\left(\Big(B + \frac{C + |C|}{2}\Big)^p - B^p\right). \tag{52}$$
Since $B$ and $\frac{C + |C|}{2}$ are both positive semidefinite, we use Lemma 3.3 to obtain
$$\sum_{i=1}^{k} \sigma_i\left(\Big(B + \frac{C + |C|}{2}\Big)^p - B^p\right) \le \sum_{i=1}^{k} \sigma_i^p\left(\frac{C + |C|}{2}\right). \tag{53}$$
It follows from (52) and (53) that
$$\sum_{i=1}^{k} \sigma_i\big(A^p - B^p + |A^p - B^p|\big) \le 2^{1-p} \sum_{i=1}^{k} \sigma_i^p\big(C + |C|\big). \tag{54}$$

Similarly, we also have
$$\sum_{i=1}^{k} \sigma_i\big(B^p - A^p + |B^p - A^p|\big) \le 2^{1-p} \sum_{i=1}^{k} \sigma_i^p\big(-C + |C|\big) = 2^{1-p} \sum_{i=1}^{k} \sigma_i^p\big(|C| - C\big) \tag{55}$$
by swapping the positions of $A$ and $B$ in (54). By Lemma 3.4, we have
$$\sum_{i=1}^{k} \sigma_i\big(A^p - B^p + |A^p - B^p|\big) + \sum_{i=1}^{k} \sigma_i\big(B^p - A^p + |B^p - A^p|\big) = 2 \sum_{i=1}^{k} \sigma_i\big(A^p - B^p\big) \tag{56}$$
and
$$\sum_{i=1}^{k} \sigma_i^p\big(C + |C|\big) + \sum_{i=1}^{k} \sigma_i^p\big(|C| - C\big) = 2^p \sum_{i=1}^{k} \sigma_i^p(C). \tag{57}$$
Thus, adding the left-hand sides and the right-hand sides of (54) and (55) together, we get
$$\sum_{i=1}^{k} \sigma_i\big(A^p - B^p\big) \le \sum_{i=1}^{k} \sigma_i^p(C),$$
which is the inequality in (33) for any real positive semidefinite symmetric matrices $A$ and $B$ of size $n$ by $n$, as desired. $\Box$

With the preparation above, we are able to prove Lemma 1.2, with help from the following result; see [22] or [2].

Lemma 3.7 (Thompson, 1976) For any real square matrices $A$ and $B$ of the same size, there exist orthonormal matrices $U$ and $V$ such that
$$|A + B| \preceq U |A| U^T + V |B| V^T, \tag{58}$$
namely $U |A| U^T + V |B| V^T - |A + B|$ is positive semidefinite.

Remark 3.1 In general, we do not have $|A| \preceq |A - B| + |B|$.

Proof of Lemma 1.2. By Lemma 3.7, there exist orthonormal matrices $U$ and $V$ such that
$$|A| \preceq U |A - B| U^T + V |B| V^T, \tag{59}$$
namely $U |A - B| U^T + V |B| V^T - |A|$ is positive semidefinite. If we use $U |A - B| U^T + V |B| V^T$ and $V |B| V^T$ in place of $A$ and $B$, we first use Lemma 1.1 and then use Lemma 3.3 to get

$$\sum_{i=1}^{k} \Big(\sigma_i^p\big(U |A - B| U^T + V |B| V^T\big) - \sigma_i^p\big(V |B| V^T\big)\Big) = \sum_{i=1}^{k} \Big(\sigma_i\big((U |A - B| U^T + V |B| V^T)^p\big) - \sigma_i\big((V |B| V^T)^p\big)\Big) \le \sum_{i=1}^{k} \sigma_i^p\big(U |A - B| U^T\big) = \sum_{i=1}^{k} \sigma_i^p\big(|A - B|\big) = \sum_{i=1}^{k} \sigma_i^p(A - B).$$
Next we notice that
$$\sigma_i\big(|A|^p\big) \le \sigma_i\big((U |A - B| U^T + V |B| V^T)^p\big) \tag{60}$$
for $i = 1, \dots, k$, since the function $x^p$ is increasing. It follows that
$$\sum_{i=1}^{k} \big(\sigma_i^p(A) - \sigma_i^p(B)\big) = \sum_{i=1}^{k} \Big(\sigma_i\big(|A|^p\big) - \sigma_i\big(|B|^p\big)\Big) = \sum_{i=1}^{k} \Big(\sigma_i\big(|A|^p\big) - \sigma_i\big((V |B| V^T)^p\big)\Big) \le \sum_{i=1}^{k} \Big(\sigma_i\big((U |A - B| U^T + V |B| V^T)^p\big) - \sigma_i\big((V |B| V^T)^p\big)\Big).$$
Combining the above inequalities for $i = 1, \dots, k$ in (60) with the previous inequality, we obtain the inequality (7) for general matrices $A$ and $B$. $\Box$

However, the proof of Theorem 1.1 is much more difficult; we have to divide it into several cases. Let us start with an easy case.

Theorem 3.1 Suppose that $|A| \succeq |B|$ or $|B| \succeq |A|$. Then we have the desired inequalities in (5).

Proof. Let us say the size of the matrices $A$ and $B$ is $n \times n$. Let $\lambda_i(|A|)$ be the $i$-th eigenvalue of $|A|$, with all eigenvalues arranged in decreasing order. Suppose that $|A| \succeq |B|$. Then we know that $\lambda_i(|A|) \ge \lambda_i(|B|)$ for all $i = 1, \dots, n$. As $|A|$ and $|B|$ are symmetric, the singular values of $|A|$ are the eigenvalues of $|A|$. It follows that $\sigma_i(|A|) \ge \sigma_i(|B|)$ for all $i$ and hence $\sigma_i(A) \ge \sigma_i(B)$ for all $i = 1, \dots, n$. Since $p \in (0, 1]$, we have $\sigma_i^p(A) \ge \sigma_i^p(B)$ for all $i = 1, \dots, n$. Therefore,
$$\sum_{i=1}^{k} \big|\sigma_i^p(A) - \sigma_i^p(B)\big| = \sum_{i=1}^{k} \big(\sigma_i^p(A) - \sigma_i^p(B)\big) \le \sum_{i=1}^{k} \sigma_i^p(A - B)$$
by using Lemma 1.2. $\Box$

We are now ready to prove the conjecture for symmetric positive semi-definite matrices $A$ and $B$.

Proof of Theorem 1.1. By Lemma 3.6, we have
$$\sum_{i=1}^{k} \sigma_i\big(A^p - B^p\big) \le \sum_{i=1}^{k} \sigma_i^p(A - B). \tag{61}$$
Then by Lemma 1.1,
$$\sum_{i=1}^{k} \big|\sigma_i^p(A) - \sigma_i^p(B)\big| = \sum_{i=1}^{k} \big|\sigma_i(A^p) - \sigma_i(B^p)\big| \le \sum_{i=1}^{k} \sigma_i\big(A^p - B^p\big). \tag{62}$$

Thus, combining (61) with (62), we obtain (5) as desired. $\Box$

We have the following corollaries.

Corollary 3.1 Suppose that both $A$ and $B$ are symmetric and negative semi-definite; then we have the inequalities in (5). Similarly, if one of $A$ and $B$ is symmetric positive semi-definite and the other is symmetric negative semi-definite, we have the inequalities in (5).

Proof. Without loss of generality, we may assume that $A$ is positive semi-definite while $B$ is negative semi-definite. Then
$$\sum_{i=1}^{k} \big|\sigma_i^p(A) - \sigma_i^p(B)\big| = \sum_{i=1}^{k} \big|\sigma_i^p(|A|) - \sigma_i^p(|B|)\big| \le \sum_{i=1}^{k} \sigma_i^p\big(|A| - |B|\big).$$
Since $A = |A|$ and $B = -|B|$, we have $\sigma_i^p(|A| - |B|) = \sigma_i^p(A + B)$. Now $|A + B| \preceq |A| + |B| = A - B$, and all eigenvalues of $A - B$ are singular values of $A - B$. We have
$$\sum_{i=1}^{k} \sigma_i^p(A + B) \le \sum_{i=1}^{k} \sigma_i^p(A - B).$$
Combining the discussions above concludes the proof. $\Box$

Corollary 3.2 Suppose that $A$ and $B$ have polar decompositions $A = PU$ and $B = QU$ for positive semi-definite matrices $P$ and $Q$ with the same unitary matrix $U$. Then we have the inequalities in (5).

Proof. Since $\sigma_i(A) = \sigma_i(P)$ and $\sigma_i(B) = \sigma_i(Q)$, we have
$$\sum_{i=1}^{k} \big|\sigma_i^p(A) - \sigma_i^p(B)\big| = \sum_{i=1}^{k} \big|\sigma_i^p(P) - \sigma_i^p(Q)\big| \le \sum_{i=1}^{k} \sigma_i^p(P - Q) = \sum_{i=1}^{k} \sigma_i^p(A - B)$$
by using Theorem 1.1. $\Box$

We now consider symmetric matrices $A$ and $B$. It is known that we can write $A = A_+ - A_-$ with $A_+$ and $A_-$ symmetric and positive semi-definite, and similarly for $B$. Thus, $|A| = A_+ + A_-$. Next we need to prove the following critical result.

Theorem 3.2 Suppose that both $A$ and $B$ are symmetric and have the same size. If $A$, $B$ have the following property: $(A - B)_+ = A_+ - B_+$ and $(A - B)_- = A_- - B_-$, then we have the inequalities in (5).

Proof. We mainly use the proof of Lemma 3.4 to get
$$\sum_{i=1}^{k} \sigma_i^p\big(|A| + A\big) + \sum_{i=1}^{k} \sigma_i^p\big(|A| - A\big) = 2^p \sum_{i=1}^{k} \sigma_i^p(A).$$
If we write $A = A_+ - A_-$ with $A_+$ and $A_-$ symmetric and positive semi-definite, the above identity reads
$$\sum_{i=1}^{k} \sigma_i^p(A_+) + \sum_{i=1}^{k} \sigma_i^p(A_-) = \sum_{i=1}^{k} \sigma_i^p(A).$$

Similarly for $B$, and hence we have
$$\sum_{i=1}^{k} \big|\sigma_i^p(A) - \sigma_i^p(B)\big| = \sum_{i=1}^{k} \big|\sigma_i^p(A_+) - \sigma_i^p(B_+) + \sigma_i^p(A_-) - \sigma_i^p(B_-)\big| \le \sum_{i=1}^{k} \big|\sigma_i^p(A_+) - \sigma_i^p(B_+)\big| + \sum_{i=1}^{k} \big|\sigma_i^p(A_-) - \sigma_i^p(B_-)\big|.$$
Then we use Theorem 1.1 to conclude
$$\sum_{i=1}^{k} \big|\sigma_i^p(A) - \sigma_i^p(B)\big| \le \sum_{i=1}^{k} \sigma_i^p\big(A_+ - B_+\big) + \sum_{i=1}^{k} \sigma_i^p\big(A_- - B_-\big).$$
Now, if $A_+ - B_+ = (A - B)_+$ and $A_- - B_- = (A - B)_-$, we have
$$\sum_{i=1}^{k} \sigma_i^p\big(A_+ - B_+\big) + \sum_{i=1}^{k} \sigma_i^p\big(A_- - B_-\big) = \sum_{i=1}^{k} \sigma_i^p\big((A - B)_+\big) + \sum_{i=1}^{k} \sigma_i^p\big((A - B)_-\big) = \sum_{i=1}^{k} \sigma_i^p(A - B).$$
This completes the proof. $\Box$

Next we consider the case where the two symmetric matrices commute, i.e., $AB = BA$.

Theorem 3.3 Suppose that both $A$ and $B$ are symmetric and have the same size. If $AB = BA$, then we have the inequalities in (5).

Proof. Since the real symmetric matrices $A$ and $B$ of size $n$ by $n$ commute, $A$ and $B$ are simultaneously diagonalizable: there exists an orthogonal matrix $U$ such that $U A U^T = \Lambda_A$ and $U B U^T = \Lambda_B$, where $\Lambda_A$ and $\Lambda_B$ are diagonal matrices. Thus,
$$\sum_{i=1}^{k} \big|\sigma_i^p(A) - \sigma_i^p(B)\big| = \sum_{i=1}^{k} \big|\sigma_i^p(\Lambda_A) - \sigma_i^p(\Lambda_B)\big| = \sum_{i=1}^{k} \big|\sigma_i^p(|\Lambda_A|) - \sigma_i^p(|\Lambda_B|)\big| \le \sum_{i=1}^{k} \sigma_i^p\big(|\Lambda_A| - |\Lambda_B|\big)$$
by using Theorem 1.1. It is easy to see that $\sigma_i(|\Lambda_A| - |\Lambda_B|) \le \sigma_i(\Lambda_A - \Lambda_B)$ for each $i$. It thus follows that
$$\sum_{i=1}^{k} \sigma_i^p\big(|\Lambda_A| - |\Lambda_B|\big) \le \sum_{i=1}^{k} \sigma_i^p\big(\Lambda_A - \Lambda_B\big) = \sum_{i=1}^{k} \sigma_i^p(A - B).$$
Combining the above two estimates concludes the result of this theorem. $\Box$
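Theorem 3.3 is also easy to illustrate numerically (a sketch of ours, with assumed names): build a commuting symmetric pair from one random orthogonal eigenbasis and two random spectra, then test (5) for all partial sums.

```python
import numpy as np

def check_commuting_case(n=6, p=0.5, trials=500, seed=3):
    """Test inequality (5) for commuting symmetric A = U D U^T, B = U E U^T."""
    rng = np.random.default_rng(seed)
    ok = True
    for _ in range(trials):
        U, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal
        A = U @ np.diag(rng.standard_normal(n)) @ U.T      # commuting pair:
        B = U @ np.diag(rng.standard_normal(n)) @ U.T      # same eigenbasis
        sa = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]  # singular values
        sb = np.sort(np.abs(np.linalg.eigvalsh(B)))[::-1]
        sd = np.linalg.svd(A - B, compute_uv=False)
        ok &= bool(np.all(np.cumsum(np.abs(sa**p - sb**p))
                          <= np.cumsum(sd**p) + 1e-10))
    return ok

print(check_commuting_case())
```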

Corollary 3.3 Let $a = (a_1, a_2, \dots, a_n)$ and $b = (b_1, b_2, \dots, b_n)$ be two sequences of nonnegative real numbers. Let $r(a)$ be the decreasing rearrangement of $a$; that is, $r(a) = (r_1(a), \dots, r_n(a))$ with each $r_i(a)$ an entry of $a$ and $r_i(a) \ge r_{i+1}(a)$ for all $i = 1, \dots, n-1$. Similarly, let $r(b)$ be the decreasing rearrangement of $b$, and let $r(|a - b|)$ be the decreasing rearrangement of $(|a_1 - b_1|, \dots, |a_n - b_n|)$. Then
$$\sum_{i=1}^{k} \big|r_i^p(a) - r_i^p(b)\big| \le \sum_{i=1}^{k} r_i^p\big(|a - b|\big) \tag{63}$$
for all $k = 1, \dots, n$.

Proof. Consider the matrices $A = \operatorname{diag}(a_1, \dots, a_n)$ and $B = \operatorname{diag}(b_1, \dots, b_n)$. Then $A$ and $B$ commute, and the singular values of $A - B$ are the numbers $|a_i - b_i|$ arranged in decreasing order. Using Theorem 3.3, we conclude (63). $\Box$

Remark 3.2 To prove the result in Theorem 1.1 for two symmetric matrices, our first thought was to use the following idea:
$$\sum_{i=1}^{k} \big|\sigma_i^p(A) - \sigma_i^p(B)\big| = \sum_{i=1}^{k} \big|\sigma_i^p(|A|) - \sigma_i^p(|B|)\big| \le \sum_{i=1}^{k} \sigma_i^p\big(|A| - |B|\big).$$
We have tried very hard to show $\sum_{i=1}^{k} \sigma_i(|A| - |B|) \le \sum_{i=1}^{k} \sigma_i(A - B)$. It turns out that this inequality does not hold in general. Here is a counterexample: $A = \cdots$ and $B = \cdots$ when $k = 3$ and $k = 4$. We do not even have
$$\sum_{i=1}^{k} \sigma_i^p\big(|A| - |B|\big) \le \sum_{i=1}^{k} \sigma_i^p(A - B).$$

Finally, we have

Proposition 3.1 Suppose that the inequalities in (5) hold for any symmetric matrices $A$ and $B$. Then the inequalities hold for any matrices $A$ and $B$ of size $m \times n$.

Proof. For any matrices $A$ and $B$ of size $m \times n$, we consider the $(m+n) \times (m+n)$ symmetric matrices
$$\tilde{A} = \begin{bmatrix} 0 & A \\ A^T & 0 \end{bmatrix} \quad\text{and}\quad \tilde{B} = \begin{bmatrix} 0 & B \\ B^T & 0 \end{bmatrix}.$$

Then it is known that the eigenvalues of $\tilde{A}$ are $\pm\sigma_1(A), \dots, \pm\sigma_l(A)$ together with $m + n - 2l$ zeros, where $l = \min\{m, n\}$; similarly for $\tilde{B}$. For $k \le l$, we use the assumption to get
$$\sum_{i=1}^{k} \big|\sigma_i^p(A) - \sigma_i^p(B)\big| = \sum_{i=1}^{k} \big|\sigma_i^p(\tilde{A}) - \sigma_i^p(\tilde{B})\big| \le \sum_{i=1}^{k} \sigma_i^p\big(\tilde{A} - \tilde{B}\big) = \sum_{i=1}^{k} \sigma_i^p(A - B).$$
This completes the proof. $\Box$

4 Remarks

Remark 4.1 If the conjecture is true, i.e., the inequality in (5) holds for general matrices $A$ and $B$, we can improve the bound to $\delta_{2s}$ in Theorem 1.2. That is, we have

Theorem 4.1 Suppose that we have the inequality in (5) for general matrices $A$ and $B$, and suppose that the linear map $\mathcal{A}$ has matrix RIP constant $\delta_{2s} < 1$. Then there exists a number $p_0 \in (0, 1)$ such that for any $p \le p_0$, each minimizer $X_p^*$ of the Schatten $\ell_p$ quasi-norm minimization (2) is the lowest-rank solution of (1).

Firstly, we need the following lemma.

Lemma 4.1 Suppose we have the inequality in (5). Let $X^0$ be the minimizer of (1) with rank $s$ and let $X^*$ be a global minimizer of (2). Recall that both of them satisfy $\mathcal{A}(X^0) = \mathcal{A}(X^*)$. Let $H = X^0 - X^*$, which is in the null space of $\mathcal{A}$. Then
$$\sum_{i=s+1}^{m} \sigma_i^p(H) \le \sum_{i=1}^{s} \sigma_i^p(H). \tag{64}$$

Proof. We use (5) to get
$$\|X^0\|_p^p \ge \|X^*\|_p^p = \|X^0 - H\|_p^p = \sum_{i=1}^{m} \sigma_i^p(X^0 - H) \ge \sum_{i=1}^{m} \big|\sigma_i^p(X^0) - \sigma_i^p(H)\big| \ge \sum_{i=1}^{s} \big(\sigma_i^p(X^0) - \sigma_i^p(H)\big) + \sum_{i=s+1}^{m} \sigma_i^p(H),$$
since $\sigma_i(X^0) = 0$ for $i \ge s + 1$. After rearranging the above inequality, since $\|X^0\|_p^p = \sum_{i=1}^{s} \sigma_i^p(X^0)$, we obtain the result. $\Box$

The proof of Theorem 4.1 is similar to that of Theorem 1.2; we now outline it.

Proof of Theorem 4.1. Let $H = X^0 - X^*$ and write $H = U \Sigma_H V^T$ in SVD format. We decompose $H = H_0 + H_1 + H_2 + \cdots$ as in the proof of Theorem 1.2. As before, we have
$$(1 - \delta_{2s})\big(\|H_0\|_F^2 + \|H_1\|_F^2\big) \le (1 + \delta_{2s}) \Big(\sum_{j \ge 2} \|H_j\|_F\Big)^2.$$

If $\|H_2\|_F = 0$, it follows that $H = 0$ and hence $X^* = X^0$; that is, the minimizer is the low-rank solution of (1). Otherwise, we have
$$\|H_0\|_F \le \left(\frac{1 + \delta_{2s}}{1 - \delta_{2s}}\right)^{1/2} \sum_{j \ge 2} \|H_j\|_F \tag{65}$$
under the assumption that $\|H_2\|_F \ne 0$. In this situation, $\sigma_{2s+1}(H) \ne 0$. We note that for all $H \in \mathcal{N}(\mathcal{A}_\Omega)$ with $\sigma_{2s+1}(H) \ne 0$, similarly to the proof of (15), we have
$$(2s)^{-1/p}\, \|H_0 + H_1 + H_2\|_p \ge (2s)^{-1/p}\, \sigma_{2s+1}(H) \left(\sum_{i=1}^{2s+1} \Big(\frac{\sigma_i(H)}{\sigma_{2s+1}(H)}\Big)^p\right)^{1/p} \ge \sigma_{2s+1}(H) \Big(1 + \frac{1}{2s}\Big)^{1/p}. \tag{66}$$
As in the proof of Theorem 1.2, we can prove that for any $\epsilon \in (0, 1)$, there exists a real number $p_\epsilon \le 1$ such that for $p \le p_\epsilon$ we have
$$\left(\frac{1 + \delta_{2s}}{1 - \delta_{2s}}\right)^{1/2} (2s)^{1/p} \sum_{j \ge 2} \|H_j\|_F \le \epsilon\, \|H_0 + H_1 + H_2\|_p. \tag{67}$$
Finally, we use the Hölder inequality, the inequality in (65), and then the inequality in (67) to get
$$\|H_0\|_p^p \le s^{1 - p/2}\, \|H_0\|_F^p \le s^{1 - p/2} \big(\epsilon\, (2s)^{-1/p}\, \|H_0 + H_1 + H_2\|_p\big)^p = \frac{\epsilon^p}{2\, s^{p/2}}\, \|H_0 + H_1 + H_2\|_p^p \le \frac{\epsilon^p}{2\, s^{p/2}} \Big(\|H_0\|_p^p + \sum_{i \ge s+1} \sigma_i^p(H)\Big) \le \frac{\epsilon^p}{2\, s^{p/2}}\, 2\, \|H_0\|_p^p = \frac{\epsilon^p}{s^{p/2}}\, \|H_0\|_p^p$$
by using Lemma 4.1. Under the assumption that we used the $\ell_p$ minimization (2) with $p \le p_\epsilon$ and $\epsilon < 1/2$, we get a contradiction if $\|H_0\|_F \ne 0$, since $t = \epsilon^p / s^{p/2} < 1$. Thus, $\|H_0\|_F = 0$. Therefore, $H = X^0 - X^* = 0$ and hence the minimizer of (2) is the minimal-rank solution of (1). $\Box$

Remark 4.2 From the proof, we can see that $\epsilon < \sqrt{s}$ is enough.

Remark 4.3 Also, with the conjecture, we can improve Theorem 1.3 in the same way; the details are omitted.

Remark 4.4 In the vector case, we have $\sum_{i=1}^{n} |x_i - y_i|^p \ge \sum_{i=1}^{n} \big(|x_i|^p - |y_i|^p\big)$. So Theorem 4.1 is true in the compressed sensing setting. Thus, we recover a known result in [21], i.e., Theorem 1.2. Our proof is much simpler.
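The vector inequality in Remark 4.4 can be confirmed in a few lines (our sketch): entrywise, $\big||x_i|^p - |y_i|^p\big| \le |x_i - y_i|^p$ for $0 < p \le 1$, and summing gives the stated bound.

```python
import numpy as np

rng = np.random.default_rng(4)
p = 0.5
ok = True
for _ in range(10000):
    x = rng.standard_normal(8)
    y = rng.standard_normal(8)
    lhs = np.sum(np.abs(x - y)**p)              # sum_i |x_i - y_i|^p
    rhs = np.sum(np.abs(x)**p - np.abs(y)**p)   # sum_i (|x_i|^p - |y_i|^p)
    ok &= bool(lhs >= rhs - 1e-10)
print(ok)
```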

References

[1] T. Ando, Comparison of norms |||f(A) − f(B)||| and |||f(|A − B|)|||, Mathematische Zeitschrift, 197(3).

[2] R. Bhatia, Matrix Analysis, Springer-Verlag, 1997.

[3] T. Cai and A. Zhang, Sharp RIP bound for sparse signal and low-rank matrix recovery, Applied Comput. Harmonic Analysis, to appear.

[4] E. J. Candès and Y. Plan, Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements, IEEE Transactions on Information Theory, 57 (2011).

[5] K. Dvijotham and M. Fazel, A nullspace analysis of the nuclear norm heuristic for rank minimization, in Proc. of ICASSP.

[6] M. Fornasier, H. Rauhut, and R. Ward, Low-rank matrix recovery via iteratively reweighted least squares minimization, SIAM J. Optim., 21 (2011), no. 4.

[7] S. Foucart and M.-J. Lai, Sparsest solutions of under-determined linear systems via $\ell_q$-minimization for $0 < q \le 1$, Appl. Comput. Harmon. Anal., 26 (2009).

[8] R. Gribonval and M. Nielsen, Sparse decompositions in unions of bases, IEEE Trans. Inform. Theory, 49 (2003).

[9] R. Horn and C. Johnson, Matrix Analysis, Cambridge Univ. Press, 1985.

[10] R. Horn and C. Johnson, Topics in Matrix Analysis, Cambridge Univ. Press, 1991.

[11] T. Kato, Perturbation Theory for Linear Operators, Springer-Verlag.

[12] L. Kong and N. Xiu, New bounds for restricted isometry constants in low-rank matrix recovery, Optimization Online Digest.

[13] M.-J. Lai, Y. Xu, and W. Yin, Low-rank matrix recovery using unconstrained smoothed-$\ell_q$ minimization, submitted.

[14] V. B. Lidskii, C. D. Benster, and G. E. Forsythe, The Proper Values of the Sum and Product of Symmetric Matrices, United States Office of Naval Research.

[15] K. Mohan and M. Fazel, New restricted isometry results for noisy low-rank recovery, in Proc. ISIT, Austin, TX.

[16] S. Oymak and B. Hassibi, New null space results and recovery thresholds for matrix rank minimization, available online.

[17] S. Oymak, K. Mohan, M. Fazel, and B. Hassibi, A simplified approach to recovery conditions for low rank matrices, submitted (Mar. 27).

[18] B. Recht, M. Fazel, and P. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Review, 52 (2010).

[19] S. Yu. Rotfel'd, Remarks on the singular numbers of a sum of completely continuous operators, Functional Anal. Appl., 1 (1967).

[20] S. Yu. Rotfel'd, The singular numbers of the sum of completely continuous operators, in Topics in Mathematical Physics, Vol. 3: Spectral Theory, edited by M. Sh. Birman (1969); English version published by Consultants Bureau, New York.

[21] Q. Sun, Recovery of sparsest signals via $\ell_q$-minimization, Applied Comput. Harmonic Analysis, 32 (2012).

[22] R. C. Thompson, Convex and concave functions of singular values of matrix sums, Pacific Journal of Mathematics, 66(1), 1976.

[23] M. Uchiyama, Subadditivity of eigenvalue sums, Proc. American Mathematical Society (2005).

[24] H. Wang and S. Li, The bounds of restricted isometry constants for low rank matrices recovery, Sci. China Ser. A, accepted.

Jointly Low-Rank and Bisparse Recovery: Questions and Partial Answers Jointly Low-Rank and Bisparse Recovery: Questions and Partial Answers Simon Foucart, Rémi Gribonval, Laurent Jacques, and Holger Rauhut Abstract This preprint is not a finished product. It is presently

More information

Moore Penrose inverses and commuting elements of C -algebras

Moore Penrose inverses and commuting elements of C -algebras Moore Penrose inverses and commuting elements of C -algebras Julio Benítez Abstract Let a be an element of a C -algebra A satisfying aa = a a, where a is the Moore Penrose inverse of a and let b A. We

More information

PCA with random noise. Van Ha Vu. Department of Mathematics Yale University

PCA with random noise. Van Ha Vu. Department of Mathematics Yale University PCA with random noise Van Ha Vu Department of Mathematics Yale University An important problem that appears in various areas of applied mathematics (in particular statistics, computer science and numerical

More information

ECE G: Special Topics in Signal Processing: Sparsity, Structure, and Inference

ECE G: Special Topics in Signal Processing: Sparsity, Structure, and Inference ECE 18-898G: Special Topics in Signal Processing: Sparsity, Structure, and Inference Low-rank matrix recovery via nonconvex optimization Yuejie Chi Department of Electrical and Computer Engineering Spring

More information

Assignment 1: From the Definition of Convexity to Helley Theorem

Assignment 1: From the Definition of Convexity to Helley Theorem Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x

More information

RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY

RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY ILSE C.F. IPSEN Abstract. Absolute and relative perturbation bounds for Ritz values of complex square matrices are presented. The bounds exploit quasi-sparsity

More information

We describe the generalization of Hazan s algorithm for symmetric programming

We describe the generalization of Hazan s algorithm for symmetric programming ON HAZAN S ALGORITHM FOR SYMMETRIC PROGRAMMING PROBLEMS L. FAYBUSOVICH Abstract. problems We describe the generalization of Hazan s algorithm for symmetric programming Key words. Symmetric programming,

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Universal low-rank matrix recovery from Pauli measurements

Universal low-rank matrix recovery from Pauli measurements Universal low-rank matrix recovery from Pauli measurements Yi-Kai Liu Applied and Computational Mathematics Division National Institute of Standards and Technology Gaithersburg, MD, USA yi-kai.liu@nist.gov

More information

A Note on Eigenvalues of Perturbed Hermitian Matrices

A Note on Eigenvalues of Perturbed Hermitian Matrices A Note on Eigenvalues of Perturbed Hermitian Matrices Chi-Kwong Li Ren-Cang Li July 2004 Let ( H1 E A = E H 2 Abstract and à = ( H1 H 2 be Hermitian matrices with eigenvalues λ 1 λ k and λ 1 λ k, respectively.

More information

On the Hermitian solutions of the

On the Hermitian solutions of the Journal of Applied Mathematics & Bioinformatics vol.1 no.2 2011 109-129 ISSN: 1792-7625 (print) 1792-8850 (online) International Scientific Press 2011 On the Hermitian solutions of the matrix equation

More information

A Counterexample for the Validity of Using Nuclear Norm as a Convex Surrogate of Rank

A Counterexample for the Validity of Using Nuclear Norm as a Convex Surrogate of Rank A Counterexample for the Validity of Using Nuclear Norm as a Convex Surrogate of Rank Hongyang Zhang, Zhouchen Lin, and Chao Zhang Key Lab. of Machine Perception (MOE), School of EECS Peking University,

More information

Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise

Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. Citation As Published

More information

Singular Value Decomposition and Polar Form

Singular Value Decomposition and Polar Form Chapter 14 Singular Value Decomposition and Polar Form 14.1 Singular Value Decomposition for Square Matrices Let f : E! E be any linear map, where E is a Euclidean space. In general, it may not be possible

More information

Sparse and Low Rank Recovery via Null Space Properties

Sparse and Low Rank Recovery via Null Space Properties Sparse and Low Rank Recovery via Null Space Properties Holger Rauhut Lehrstuhl C für Mathematik (Analysis), RWTH Aachen Convexity, probability and discrete structures, a geometric viewpoint Marne-la-Vallée,

More information

arxiv: v2 [stat.ml] 1 Jul 2013

arxiv: v2 [stat.ml] 1 Jul 2013 A Counterexample for the Validity of Using Nuclear Norm as a Convex Surrogate of Rank Hongyang Zhang, Zhouchen Lin, and Chao Zhang Key Lab. of Machine Perception (MOE), School of EECS Peking University,

More information

Recovery of Sparse Signals from Noisy Measurements Using an l p -Regularized Least-Squares Algorithm

Recovery of Sparse Signals from Noisy Measurements Using an l p -Regularized Least-Squares Algorithm Recovery of Sparse Signals from Noisy Measurements Using an l p -Regularized Least-Squares Algorithm J. K. Pant, W.-S. Lu, and A. Antoniou University of Victoria August 25, 2011 Compressive Sensing 1 University

More information

Recent Developments in Compressed Sensing

Recent Developments in Compressed Sensing Recent Developments in Compressed Sensing M. Vidyasagar Distinguished Professor, IIT Hyderabad m.vidyasagar@iith.ac.in, www.iith.ac.in/ m vidyasagar/ ISL Seminar, Stanford University, 19 April 2018 Outline

More information

Norm Inequalities of Positive Semi-Definite Matrices

Norm Inequalities of Positive Semi-Definite Matrices Norm Inequalities of Positive Semi-Definite Matrices Antoine Mhanna To cite this version: Antoine Mhanna Norm Inequalities of Positive Semi-Definite Matrices 15 HAL Id: hal-11844 https://halinriafr/hal-11844v1

More information

j=1 u 1jv 1j. 1/ 2 Lemma 1. An orthogonal set of vectors must be linearly independent.

j=1 u 1jv 1j. 1/ 2 Lemma 1. An orthogonal set of vectors must be linearly independent. Lecture Notes: Orthogonal and Symmetric Matrices Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk Orthogonal Matrix Definition. Let u = [u

More information

Abstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization relations.

Abstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization relations. HIROSHIMA S THEOREM AND MATRIX NORM INEQUALITIES MINGHUA LIN AND HENRY WOLKOWICZ Abstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization

More information

Positive definite preserving linear transformations on symmetric matrix spaces

Positive definite preserving linear transformations on symmetric matrix spaces Positive definite preserving linear transformations on symmetric matrix spaces arxiv:1008.1347v1 [math.ra] 7 Aug 2010 Huynh Dinh Tuan-Tran Thi Nha Trang-Doan The Hieu Hue Geometry Group College of Education,

More information

arxiv: v1 [math.ra] 11 Aug 2014

arxiv: v1 [math.ra] 11 Aug 2014 Double B-tensors and quasi-double B-tensors Chaoqian Li, Yaotang Li arxiv:1408.2299v1 [math.ra] 11 Aug 2014 a School of Mathematics and Statistics, Yunnan University, Kunming, Yunnan, P. R. China 650091

More information

On Convergence of the Alternating Projection Method for Matrix Completion and Sparse Recovery Problems

On Convergence of the Alternating Projection Method for Matrix Completion and Sparse Recovery Problems On Convergence of the Alternating Projection Method for Matrix Completion and Sparse Recovery Problems Ming-Jun Lai Department of Mathematics, University of Georgia, Athens, GA, USA 30602 mjlai@ugaedu

More information

Uniqueness Conditions For Low-Rank Matrix Recovery

Uniqueness Conditions For Low-Rank Matrix Recovery Claremont Colleges Scholarship @ Claremont CMC Faculty Publications and Research CMC Faculty Scholarship 3-28-2011 Uniqueness Conditions For Low-Rank Matrix Recovery Yonina C. Eldar Israel Institute of

More information

An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems

An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems Kim-Chuan Toh Sangwoon Yun March 27, 2009; Revised, Nov 11, 2009 Abstract The affine rank minimization

More information

Signal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit

Signal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit Signal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit Deanna Needell and Roman Vershynin Abstract We demonstrate a simple greedy algorithm that can reliably

More information