Matrix Completion from Noisy Entries

Raghunandan H. Keshavan, Andrea Montanari, and Sewoong Oh
Department of Electrical Engineering and Department of Statistics, Stanford University

June 10, 2009

Abstract

Given a matrix M of low rank, we consider the problem of reconstructing it from noisy observations of a small, random subset of its entries. The problem arises in a variety of applications, from collaborative filtering (the "Netflix problem") to structure-from-motion and positioning. We study a low-complexity algorithm introduced in [KMO09], based on a combination of spectral techniques and manifold optimization, that we call here OptSpace. We prove performance guarantees that are order-optimal in a number of circumstances.

1 Introduction

Spectral techniques are an authentic workhorse in machine learning, statistics, numerical analysis, and signal processing. Given a matrix $M$, its largest singular values and the associated singular vectors explain the most significant correlations in the underlying data source. A low-rank approximation of $M$ can further be used for low-complexity implementations of a number of linear algebra algorithms [FKV04].

In many practical circumstances we have access only to a sparse subset of the entries of an $m\times n$ matrix $M$. It has recently been discovered that, if the matrix $M$ has rank $r$, and unless it is too structured, a small random subset of its entries allows one to reconstruct it exactly. This result was first proved by Candès and Recht [CR08] by analyzing a convex relaxation introduced by Fazel [Faz02]. A tighter analysis of the same convex relaxation was carried out in [CT09]. A number of iterative schemes to solve the convex optimization problem appeared soon thereafter [CCS08, MGC09, TY09] (see also [WGRM09] for a generalization).

In an alternative line of work, the authors of [KMO09] attacked the same problem using a combination of spectral techniques and manifold optimization; we will refer to their algorithm as OptSpace. OptSpace is intrinsically of low complexity, the most complex operation being the computation of $r$ singular values of a sparse $m\times n$ matrix. The performance guarantees proved in [KMO09] are comparable with the information-theoretic lower bound: roughly $nr\max\{r,\log n\}$ random entries are needed to reconstruct $M$ exactly (here we assume $m$ of order $n$). A related approach was also developed in [LB09], although without performance guarantees for matrix completion.

The above results crucially rely on the assumption that $M$ is exactly a rank-$r$ matrix. For many applications of interest, this assumption is unrealistic and it is therefore important to investigate the robustness of these methods. Can the above approaches be generalized when the underlying data is well approximated by a rank-$r$ matrix? This question was addressed in [CP09] within the convex relaxation approach

of [CR08]. The present paper proves a similar robustness result for OptSpace. Remarkably, the guarantees we obtain are order-optimal in a variety of circumstances, and improve over the analogous results of [CP09].

1.1 Model Definition

Let $M$ be an $m\times n$ matrix of rank $r$, that is,
$$ M = U\Sigma V^T, \qquad (1) $$
where $U$ has dimensions $m\times r$, $V$ has dimensions $n\times r$, and $\Sigma$ is a diagonal $r\times r$ matrix. We assume that each entry of $M$ is perturbed, thus producing an approximately low-rank matrix $N$, with
$$ N_{ij} = M_{ij} + Z_{ij}, $$
where the matrix $Z$ will be assumed to be small in an appropriate sense.

Out of the $m\times n$ entries of $N$, a subset $E\subseteq[m]\times[n]$ is revealed. We let $N^E$ be the $m\times n$ matrix that contains the revealed entries of $N$, and is filled with 0's in the other positions:
$$ N^E_{ij} = \begin{cases} N_{ij} & \text{if } (i,j)\in E,\\ 0 & \text{otherwise.} \end{cases} \qquad (2) $$
The set $E$ will be uniformly random given its size $|E|$.

1.2 Algorithm

For the reader's convenience, we recall the algorithm introduced in [KMO09], which we will analyze here. The basic idea is to minimize the cost function $F(X,Y)$, defined by
$$ F(X,Y) \equiv \min_{S\in\mathbb{R}^{r\times r}} F(X,Y,S), \qquad (3) $$
$$ F(X,Y,S) \equiv \frac{1}{2}\sum_{(i,j)\in E}\big(N_{ij}-(XSY^T)_{ij}\big)^2. \qquad (4) $$
Here $X\in\mathbb{R}^{m\times r}$, $Y\in\mathbb{R}^{n\times r}$ are orthogonal matrices, normalized by $X^TX = m\mathbf{1}$, $Y^TY = n\mathbf{1}$.

Minimizing $F(X,Y)$ is an a priori difficult task, since $F$ is a non-convex function. The key insight is that the singular value decomposition (SVD) of $N^E$ provides an excellent initial guess, and that the minimum can be found with high probability by standard gradient descent after this initialization. Two caveats must be added to this description: (1) in general the matrix $N^E$ must be trimmed to eliminate over-represented rows and columns; (2) for technical reasons, we consider a slightly modified cost function, to be denoted by $\widetilde F(X,Y)$.

OptSpace( matrix $N^E$ )
1: Trim $N^E$, and let $\widetilde N^E$ be the output;
2: Compute the rank-$r$ projection of $\widetilde N^E$, $T_r(\widetilde N^E) = X_0 S_0 Y_0^T$;
3: Minimize $\widetilde F(X,Y)$ through gradient descent, with initial condition $(X_0, Y_0)$.

We may note here that the rank of the matrix $M$, if not known, can be reliably estimated from $\widetilde N^E$.
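To make step 3 concrete, note that the inner minimization over $S$ in Eq. (3) is an ordinary least-squares problem once $X$ and $Y$ are fixed. The following is a minimal NumPy sketch of the cost evaluation (our illustration, not the authors' implementation; the function name and interface are assumptions of this example).

```python
import numpy as np

def optspace_cost(X, Y, rows, cols, vals):
    """Evaluate F(X, Y) = min_S 0.5 * sum_{(i,j) in E} (N_ij - (X S Y^T)_ij)^2.

    X: (m, r) and Y: (n, r) are the candidate factors; rows, cols, vals hold
    the revealed entries N_ij.  For fixed (X, Y) each observation is linear in
    vec(S), with design row kron(Y[j], X[i]), so the optimal S solves a
    least-squares problem of size |E| x r^2.
    """
    r = X.shape[1]
    A = np.einsum('kp,kq->kpq', X[rows], Y[cols]).reshape(len(vals), r * r)
    s, *_ = np.linalg.lstsq(A, vals, rcond=None)
    resid = vals - A @ s
    return 0.5 * np.sum(resid ** 2), s.reshape(r, r)
```

Step 3 of OptSpace decreases this value by moving $(X,Y)$ along the manifold, starting from the SVD-based initialization computed in step 2.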

The various steps of the above algorithm are defined as follows.

Trimming. We say that a row is over-represented if it contains more than $2|E|/m$ revealed entries (i.e., more than twice the average number of revealed entries). Analogously, a column is over-represented if it contains more than $2|E|/n$ revealed entries. The trimmed matrix $\widetilde N^E$ is obtained from $N^E$ by setting to 0 the over-represented rows and columns.

Rank-$r$ projection. Let
$$ \widetilde N^E = \sum_{i=1}^{\min(m,n)} \sigma_i\, x_i y_i^T \qquad (5) $$
be the singular value decomposition of $\widetilde N^E$, with singular values $\sigma_1\ge\sigma_2\ge\dots$. We then define
$$ T_r(\widetilde N^E) = \frac{mn}{|E|}\sum_{i=1}^{r}\sigma_i\, x_i y_i^T. $$
Apart from an overall normalization, $T_r(\widetilde N^E)$ is the best rank-$r$ approximation to $\widetilde N^E$ in Frobenius norm.

Minimization. The modified cost function $\widetilde F$ is defined as
$$ \widetilde F(X,Y) = F(X,Y) + \rho\, G(X,Y) \qquad (6) $$
$$ \equiv F(X,Y) + \rho\sum_{i=1}^{m} G_1\!\left(\frac{\|X^{(i)}\|^2}{3\mu_0 r}\right) + \rho\sum_{j=1}^{n} G_1\!\left(\frac{\|Y^{(j)}\|^2}{3\mu_0 r}\right), \qquad (7) $$
where $X^{(i)}$ denotes the $i$-th row of $X$, and $Y^{(j)}$ the $j$-th row of $Y$. The function $G_1:\mathbb{R}_+\to\mathbb{R}$ is such that $G_1(z)=0$ if $z\le 1$ and $G_1(z)=e^{(z-1)^2}-1$ otherwise. Further, we can choose $\rho = \Theta(n\epsilon)$.

Let us stress that the regularization term is mainly introduced for our proof technique to work (and a broad family of functions $G_1$ would work as well). In numerical experiments we did not find any performance loss in setting $\rho = 0$.

One important feature of OptSpace is that $F(X,Y)$ and $\widetilde F(X,Y)$ are regarded as functions of the $r$-dimensional subspaces of $\mathbb{R}^m$ and $\mathbb{R}^n$ generated (respectively) by the columns of $X$ and $Y$. This interpretation is justified by the fact that $F(X,Y)=F(XA,YB)$ for any two orthogonal matrices $A,B\in\mathbb{R}^{r\times r}$ (the same property holds for $\widetilde F$). The set of $r$-dimensional subspaces of $\mathbb{R}^m$ is a differentiable Riemannian manifold $G(m,r)$ (the Grassmann manifold). The gradient descent algorithm is applied to the function $\widetilde F : M(m,n) \equiv G(m,r)\times G(n,r)\to\mathbb{R}$. For further details on optimization by gradient descent on matrix manifolds we refer to [EAS99, AMS08].

1.3 Main Results

Our first result shows that, in great generality, the rank-$r$ projection of $\widetilde N^E$ provides a reasonable approximation of $M$. Throughout this paper, without loss of generality, we assume $\alpha \equiv m/n \ge 1$.

Theorem 1.1. Let $N = M+Z$, where $M$ has rank $r$, and assume that the subset of revealed entries $E\subseteq[m]\times[n]$ is uniformly random with size $|E|$. Then there exist numerical constants $C$ and $C'$ such that
$$ \frac{1}{\sqrt{mn}}\,\big\|M - T_r(\widetilde N^E)\big\|_F \le C M_{\max}\left(\frac{nr\alpha^{3/2}}{|E|}\right)^{1/2} + C'\,\frac{n\sqrt{r\alpha}}{|E|}\,\|Z^E\|_2, $$
with probability larger than $1-1/n^3$.
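For concreteness, here is a minimal NumPy sketch (ours, not the authors' code) of the trimming step and of the estimator $T_r(\widetilde N^E)$ whose error Theorem 1.1 controls; the boolean mask encoding the revealed set $E$ is an assumption of this example.

```python
import numpy as np

def trim(NE, mask):
    """Zero out over-represented rows/columns of N^E, i.e. those holding more
    than twice the average number of revealed entries (2|E|/m resp. 2|E|/n)."""
    m, n = NE.shape
    E_size = mask.sum()
    out = NE.copy()
    out[mask.sum(axis=1) > 2 * E_size / m, :] = 0.0
    out[:, mask.sum(axis=0) > 2 * E_size / n] = 0.0
    return out

def rank_r_projection(NE_trimmed, E_size, r):
    """T_r(N~^E): the best rank-r approximation of the trimmed matrix,
    rescaled by the factor mn/|E|."""
    m, n = NE_trimmed.shape
    U, s, Vt = np.linalg.svd(NE_trimmed, full_matrices=False)
    return (m * n / E_size) * (U[:, :r] * s[:r]) @ Vt[:r, :]
```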

Projection onto rank-$r$ matrices through SVD is pretty standard (although trimming is crucial for achieving the above guarantee). The key point here is that a much better approximation is obtained by minimizing the cost $\widetilde F(X,Y)$ (step 3 in the pseudocode above), provided $M$ satisfies an appropriate incoherence condition. Let $M = U\Sigma V^T$ be a low-rank matrix, and assume, without loss of generality, $U^TU = m\mathbf{1}$ and $V^TV = n\mathbf{1}$. We say that $M$ is $(\mu_0,\mu_1)$-incoherent if the following conditions hold.

A1. For all $i\in[m]$, $j\in[n]$ we have $\sum_{k=1}^r U_{ik}^2 \le \mu_0 r$ and $\sum_{k=1}^r V_{jk}^2 \le \mu_0 r$.

A2. There exists $\mu_1$ such that $\big|\sum_{k=1}^r U_{ik}(\Sigma_k/\Sigma_1)V_{jk}\big| \le \mu_1 r^{1/2}$.

Theorem 1.2. Let $N = M+Z$, where $M$ is a $(\mu_0,\mu_1)$-incoherent matrix of rank $r$, and assume that the subset of revealed entries $E\subseteq[m]\times[n]$ is uniformly random with size $|E|$. Further, let $\Sigma_{\max} = \Sigma_1 \ge \dots \ge \Sigma_r = \Sigma_{\min}$ with $\Sigma_{\max}/\Sigma_{\min} \equiv \kappa$. Let $\widehat M$ be the output of OptSpace on input $N^E$. Then there exist numerical constants $C$ and $C'$ such that if
$$ |E| \ge C\,n\sqrt{\alpha}\,\kappa^2\max\big\{\mu_0 r\sqrt{\alpha}\log n\;;\;\mu_0^2 r^2\alpha\kappa^4\;;\;\mu_1^2 r^2\alpha\kappa^4\big\}, $$
then, with probability at least $1-1/n^3$,
$$ \frac{1}{\sqrt{mn}}\,\|\widehat M - M\|_F \le C'\,\kappa^2\,\frac{n\sqrt{\alpha r}}{|E|}\,\|Z^E\|_2, \qquad (8) $$
provided that the right-hand side is smaller than $\Sigma_{\min}$.

Apart from capturing the effect of additive noise, these two theorems improve over the work of [KMO09] even in the noiseless case. Indeed they provide quantitative bounds in finite dimensions, while the results of [KMO09] were only asymptotic.

1.4 Noise Models

In order to make sense of the above results, it is convenient to consider a couple of simple models for the noise matrix $Z$:

Independent entries model. We assume that the entries of $Z$ are independent random variables with zero mean, $\mathbb{E}\{Z_{ij}\}=0$, and sub-Gaussian tails. The latter means that
$$ \mathbb{P}\{|Z_{ij}|\ge x\} \le 2\,e^{-\frac{x^2}{2\sigma^2}}, $$
for some bounded constant $\sigma^2$.

Worst case model. In this model $Z$ is arbitrary, but we have a uniform bound on the size of its entries: $|Z_{ij}|\le Z_{\max}$.

The basic parameter entering our main results is the operator norm of $Z^E$, which is bounded as follows.

Theorem 1.3. If $Z$ is a random matrix drawn according to the independent entries model, then there is a constant $C$ such that, with probability at least $1-1/n^3$,
$$ \|\widetilde Z^E\|_2 \le C\sigma\left(\frac{\sqrt{\alpha}\,|E|\log|E|}{n}\right)^{1/2}. $$

If $Z$ is a matrix from the worst case model, then
$$ \|\widetilde Z^E\|_2 \le \frac{2|E|}{n\sqrt{\alpha}}\,Z_{\max}, \qquad (9) $$
for any realization of $E$.

Note that for $|E| = \Omega(n\log n)$, no row or column is over-represented with high probability. It follows that, in the regime of $|E|$ for which the conditions of Theorem 1.2 are satisfied, we have $\widetilde Z^E = Z^E$. Then, among other things, this result implies that for the independent entries model the right-hand side of our error estimate, Eq. (8), is with high probability smaller than $\Sigma_{\min}$, if $|E| \ge C r\alpha^{3/2} n\log n\,\kappa^4(\sigma/\Sigma_{\min})^2$. For the worst case model, the same statement is true if $Z_{\max} \le \Sigma_{\min}/(C\sqrt{r}\kappa^3)$.

[Figure 1: Root mean square error achieved by OptSpace for reconstructing a random rank-2 matrix, as a function of the number of observed entries $|E|$ and of the number of line minimizations (1, 2, 3, and 10 iterations are shown, together with the rank-$r$ projection). The performance of nuclear norm minimization (convex relaxation) and an information-theoretic lower bound are also shown; the horizontal axis is $|E|/n$.]

1.5 Comparison with Related Work

Let us begin by mentioning that a statement analogous to our preliminary Theorem 1.1 was proved in [AM07]. Our result however applies to any number of revealed entries, while the one of [AM07] requires $|E| \ge (8\log n)^4 n$ (which, unless $n$ is extremely large, is larger than $n^2$).

As for Theorem 1.2, we will mainly compare our algorithm with the convex relaxation approach recently analyzed in [CP09]. Our basic setting is indeed the same, while the algorithms are rather different.

Figure 1 compares the average root mean square error for the two algorithms as a function of $|E|$. Here $M$ is a random rank $r=2$ matrix of dimension $m=n=600$, generated by letting $M = \widetilde U\widetilde V^T$ with $\widetilde U_{ij},\widetilde V_{ij}$ i.i.d. $N(0,20/\sqrt{n})$. The noise is distributed according to the independent entries model with $Z_{ij}\sim N(0,1)$.
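The random instance just described can be generated in a few lines. The sketch below (ours, not the authors' experimental code) builds $M$, $Z$ and the revealed set $E$, and reports the RMSE of the rank-$r$ projection baseline; the particular value of $|E|/n$ is an arbitrary choice for illustration, and trimming is skipped since at this sampling density rows and columns are essentially never over-represented.

```python
import numpy as np

rng = np.random.default_rng(0)
m = n = 600
r = 2
E_per_row = 80                      # |E|/n: an arbitrary point on the x-axis of Figure 1

# M = U~ V~^T with i.i.d. N(0, 20/sqrt(n)) factor entries (20/sqrt(n) is the variance),
# plus independent N(0, 1) noise Z, as described in the text.
std = np.sqrt(20.0 / np.sqrt(n))
M = (std * rng.standard_normal((m, r))) @ (std * rng.standard_normal((n, r))).T
Z = rng.standard_normal((m, n))

# Reveal |E| entries uniformly at random and form N^E.
E_size = E_per_row * n
mask = np.zeros(m * n, dtype=bool)
mask[rng.choice(m * n, size=E_size, replace=False)] = True
mask = mask.reshape(m, n)
NE = np.where(mask, M + Z, 0.0)

# RMSE of the rescaled rank-r projection (the estimator of Theorem 1.1).
U, s, Vt = np.linalg.svd(NE, full_matrices=False)
M_hat = (m * n / E_size) * (U[:, :r] * s[:r]) @ Vt[:r, :]
print("RMSE:", np.sqrt(np.mean((M - M_hat) ** 2)))
```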

This example is taken from Figure 2 in [CP09], from which we took the data for the convex relaxation approach, as well as the information-theoretic lower bound. After one iteration, OptSpace has a smaller root mean square error than [CP09], and in about 10 iterations it becomes indistinguishable from the information-theoretic lower bound.

Next, let us compare our main result with the performance guarantee of Theorem 7 in [CP09]. Let us stress that we require some bound on the condition number $\kappa$, while the analyses of [CP09] and [CT09] require a stronger incoherence assumption. As far as the error bound is concerned, [CP09] proved
$$ \frac{1}{\sqrt{mn}}\,\|\widehat M - M\|_F \le 7\sqrt{\frac{n}{|E|}}\,\|Z^E\|_F + \frac{2}{n\sqrt{\alpha}}\,\|Z^E\|_F. \qquad (10) $$
(The constant in front of the first term is in fact slightly smaller than 7 in [CP09], but in any case larger than $4\sqrt{2}$.)

Theorem 1.2 improves over this result in several respects: (1) we do not have the second term on the right-hand side of (10), which actually increases with the number of observed entries; (2) our error decreases as $n/|E|$ rather than $(n/|E|)^{1/2}$; (3) the noise enters Theorem 1.2 through the operator norm $\|Z^E\|_2$ instead of its Frobenius norm $\|Z^E\|_F \ge \|Z^E\|_2$. For $E$ uniformly random, one expects $\|Z^E\|_F$ to be roughly of order $\|Z^E\|_2\sqrt{n}$. For instance, within the independent entries model with bounded variance $\sigma$, $\|Z^E\|_F = \Theta(\sqrt{|E|})$ while $\|Z^E\|_2$ is of order $\sqrt{|E|/n}$ (up to logarithmic terms).

2 Some Notations

The matrix $M$ to be reconstructed takes the form (1) where $U\in\mathbb{R}^{m\times r}$, $V\in\mathbb{R}^{n\times r}$. We write $U = [u_1,u_2,\dots,u_r]$ and $V = [v_1,v_2,\dots,v_r]$ for the columns of the two factors, with $\|u_i\| = \sqrt{m}$, $\|v_i\| = \sqrt{n}$, and $u_i^Tu_j = 0$, $v_i^Tv_j = 0$ for $i\neq j$ (there is no loss of generality in this, since normalizations can be absorbed by redefining $\Sigma$). We shall write $\Sigma = \mathrm{diag}(\Sigma_1,\dots,\Sigma_r)$ with $\Sigma_1\ge\Sigma_2\ge\dots\ge\Sigma_r > 0$. The maximum and minimum singular values will also be denoted by $\Sigma_{\max} = \Sigma_1$ and $\Sigma_{\min} = \Sigma_r$. Further, the maximum size of an entry of $M$ is $M_{\max} \equiv \max_{ij}|M_{ij}|$.

Probability is taken with respect to the uniformly random subset $E\subseteq[m]\times[n]$ given $|E|$ and (eventually) the noise matrix $Z$. Define $\epsilon \equiv |E|/\sqrt{mn}$. In the case when $m=n$, $\epsilon$ corresponds to the average number of revealed entries per row or column. It is then convenient to work with a model in which each entry is revealed independently with probability $\epsilon/\sqrt{mn}$. Since, with high probability, $|E|\in[\epsilon\sqrt{\alpha}\,n - A\sqrt{n\log n},\ \epsilon\sqrt{\alpha}\,n + A\sqrt{n\log n}]$, any guarantee on the algorithm performance that holds within one model holds within the other model as well, if we allow for a vanishing shift in $\epsilon$.

We will use $C$, $C'$, etc. to denote universal numerical constants. Given a vector $x\in\mathbb{R}^n$, $\|x\|$ will denote its Euclidean norm. For a matrix $X$, $\|X\|_F$ is its Frobenius norm, and $\|X\|_2$ its operator norm (i.e., $\|X\|_2 = \sup_{u\neq 0}\|Xu\|/\|u\|$). The standard scalar product between vectors or matrices will sometimes be indicated by $\langle x,y\rangle$ or $\langle X,Y\rangle$, respectively. Finally, we use the standard combinatorics notation $[N] = \{1,2,\dots,N\}$ to denote the set of the first $N$ integers.

3 Proof of Theorem 1.1

As explained in the introduction, the crucial idea is to consider the singular value decomposition of the trimmed matrix $\widetilde N^E$ instead of the original matrix $N^E$. Apart from a trivial rescaling, these singular values are close to the ones of the original matrix $M$.
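Before turning to the formal statement, here is a quick numerical illustration of this rescaling (a sketch under the Bernoulli sampling model of Section 2; the sizes and the sampling rate are arbitrary choices, and trimming is skipped because at this rate no rows or columns are over-represented with high probability).

```python
import numpy as np

rng = np.random.default_rng(1)
m = n = 1000
r, p = 2, 0.3                      # p = eps/sqrt(mn): Bernoulli sampling rate

M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
Z = 0.2 * rng.standard_normal((m, n))
mask = rng.random((m, n)) < p
NE = np.where(mask, M + Z, 0.0)

# Lemma 3.1, rescaled by sqrt(mn): sigma_q(N^E) * mn/|E| concentrates around
# sigma_q(M) for q <= r, while the values for q > r stay comparatively small.
scale = m * n / mask.sum()
print("sigma_q(M)            :", np.linalg.svd(M, compute_uv=False)[: r + 2])
print("sigma_q(N^E) * mn/|E| :", scale * np.linalg.svd(NE, compute_uv=False)[: r + 2])
```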

Lemma 3.1. There exists a numerical constant $C$ such that, with probability greater than $1-1/n^3$,
$$ \left|\frac{\sigma_q}{\epsilon} - \Sigma_q\right| \le C M_{\max}\sqrt{\frac{\alpha}{\epsilon}} + \frac{1}{\epsilon}\,\|Z^E\|_2, \qquad (11) $$
where it is understood that $\Sigma_q = 0$ for $q > r$.

Proof. For any matrix $A$, let $\sigma_q(A)$ denote its $q$-th singular value. Then $\sigma_q(A+B)\le\sigma_q(A)+\sigma_1(B)$, whence
$$ \left|\frac{\sigma_q}{\epsilon} - \Sigma_q\right| \le \left|\frac{\sigma_q(\widetilde M^E)}{\epsilon} - \Sigma_q\right| + \frac{\sigma_1(\widetilde Z^E)}{\epsilon} \le C M_{\max}\sqrt{\frac{\alpha}{\epsilon}} + \frac{1}{\epsilon}\,\|Z^E\|_2, $$
where the second inequality follows from the next lemma, as shown in [KMO09] (here $\widetilde M^E$ and $\widetilde Z^E$ denote the matrices obtained from $M^E$ and $Z^E$ by setting to zero the same rows and columns that are trimmed in $\widetilde N^E$, so that $\widetilde N^E = \widetilde M^E + \widetilde Z^E$).

Lemma 3.2 (Keshavan, Montanari, Oh, 2009). There exists a numerical constant $C$ such that, with probability larger than $1-1/n^3$,
$$ \frac{1}{\sqrt{mn}}\left\|M - \frac{\sqrt{mn}}{\epsilon}\,\widetilde M^E\right\|_2 \le C M_{\max}\sqrt{\frac{\alpha}{\epsilon}}. \qquad (12) $$

We will now prove Theorem 1.1.

Proof (Theorem 1.1). For any matrix $A$ of rank at most $2r$, $\|A\|_F\le\sqrt{2r}\,\|A\|_2$, whence
$$ \frac{1}{\sqrt{mn}}\big\|M - T_r(\widetilde N^E)\big\|_F \le \frac{\sqrt{2r}}{\sqrt{mn}}\left\|M - \frac{\sqrt{mn}}{\epsilon}\,\widetilde N^E + \frac{\sqrt{mn}}{\epsilon}\sum_{i\ge r+1}\sigma_i\, x_iy_i^T\right\|_2 $$
$$ \le \frac{\sqrt{2r}}{\sqrt{mn}}\left( \left\|M - \frac{\sqrt{mn}}{\epsilon}\,\widetilde M^E\right\|_2 + \frac{\sqrt{mn}}{\epsilon}\,\|\widetilde Z^E\|_2 + \frac{\sqrt{mn}}{\epsilon}\,\sigma_{r+1} \right) $$
$$ \le \frac{2C M_{\max}\sqrt{2\alpha r}}{\sqrt{\epsilon}} + \frac{2\sqrt{2r}}{\epsilon}\,\|Z^E\|_2 \le C\left(\frac{nr\alpha^{3/2}}{|E|}\right)^{1/2} M_{\max} + 2\sqrt{2}\,\frac{n\sqrt{r\alpha}}{|E|}\,\|Z^E\|_2. $$
This proves our claim.

4 Proof of Theorem 1.2

Recall that the cost function is defined over the Riemannian manifold $M(m,n)\equiv G(m,r)\times G(n,r)$. The proof of Theorem 1.2 consists in controlling the behavior of $F$ in a neighborhood of $u = (U,V)$ (the point corresponding to the matrix $M$ to be reconstructed). Throughout the proof we let $K(\mu)$ be the set of matrix couples $(X,Y)\in\mathbb{R}^{m\times r}\times\mathbb{R}^{n\times r}$ such that $\|X^{(i)}\|^2\le\mu r$, $\|Y^{(j)}\|^2\le\mu r$ for all $i,j$.
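With the normalization $U^TU = m\mathbf{1}$, $V^TV = n\mathbf{1}$, assumption A1 and membership in $K(\mu)$ are simple row-norm conditions. The following small sketch (ours, for illustration only) computes the smallest admissible $\mu_0$ and checks membership in $K(\mu)$.

```python
import numpy as np

def mu0(U, V, r):
    """Smallest mu_0 satisfying A1 for factors normalized as U^T U = m*1,
    V^T V = n*1: every row of U and V has squared norm at most mu_0 * r."""
    return max((U ** 2).sum(axis=1).max(), (V ** 2).sum(axis=1).max()) / r

def in_K(X, Y, mu, r):
    """Membership in K(mu): all rows of X and Y have squared norm at most mu * r."""
    return ((X ** 2).sum(axis=1) <= mu * r).all() and \
           ((Y ** 2).sum(axis=1) <= mu * r).all()
```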

4.1 Preliminary Remarks and Definitions

Given $x_1 = (X_1,Y_1)$ and $x_2 = (X_2,Y_2)\in M(m,n)$, two points on this manifold, their distance is defined as $d(x_1,x_2) = \sqrt{d(X_1,X_2)^2 + d(Y_1,Y_2)^2}$, where, letting $(\cos\theta_1,\dots,\cos\theta_r)$ be the singular values of $X_1^TX_2/m$,
$$ d(X_1,X_2) = \|\theta\|_2. $$

Given $S$ achieving the minimum in Eq. (3), it is also convenient to introduce the notations
$$ d_-(x,u) \equiv \sqrt{\Sigma_{\min}^2\,d(x,u)^2 + \|S-\Sigma\|_F^2}, \qquad d_+(x,u) \equiv \sqrt{\Sigma_{\max}^2\,d(x,u)^2 + \|S-\Sigma\|_F^2}. $$

4.2 Auxiliary Lemmas and Proof of Theorem 1.2

The proof is based on the following two lemmas, which generalize and sharpen analogous bounds in [KMO09].

Lemma 4.1. There exist numerical constants $C_0,C_1,C_2$ such that the following happens. Assume $\epsilon \ge C_0\mu_0 r\sqrt{\alpha}\max\{\log n\,;\,\mu_0 r\sqrt{\alpha}(\Sigma_{\min}/\Sigma_{\max})^4\}$ and $\delta \le \Sigma_{\min}/(C_0\Sigma_{\max})$. Then
$$ F(x) - F(u) \ge C_1\, n\epsilon\sqrt{\alpha}\,d_-(x,u)^2 - C_1\, n\sqrt{r\alpha}\,\|Z^E\|_2\,d_+(x,u), \qquad (13) $$
$$ F(x) - F(u) \le C_2\, n\epsilon\sqrt{\alpha}\,\Sigma_{\max}^2\,d(x,u)^2 + C_2\, n\sqrt{r\alpha}\,\|Z^E\|_2\,d_+(x,u), \qquad (14) $$
for all $x\in M(m,n)\cap K(4\mu_0)$ such that $d(x,u)\le\delta$, with probability at least $1-1/n^4$. Here $S\in\mathbb{R}^{r\times r}$ is the matrix realizing the minimum in Eq. (3).

Corollary 4.2. There exists a constant $C$ such that, under the hypotheses of Lemma 4.1,
$$ \|S-\Sigma\|_F \le C\,\Sigma_{\max}\,d(x,u) + C\,\frac{\sqrt{r}}{\epsilon}\,\|Z^E\|_2. $$
Further, for an appropriate choice of the constants in Lemma 4.1, we have
$$ \sigma_{\max}(S) \le 2\,\Sigma_{\max} + C\,\frac{\sqrt{r}}{\epsilon}\,\|Z^E\|_2, \qquad (15) $$
$$ \sigma_{\min}(S) \ge \frac{1}{2}\,\Sigma_{\min} - C\,\frac{\sqrt{r}}{\epsilon}\,\|Z^E\|_2. \qquad (16) $$

Lemma 4.3. There exist numerical constants $C_0,C_1,C_2$ such that the following happens. Assume $\epsilon \ge C_0\mu_0 r\sqrt{\alpha}\,(\Sigma_{\max}/\Sigma_{\min})^2\max\{\log n\,;\,\mu_0 r\sqrt{\alpha}(\Sigma_{\max}/\Sigma_{\min})^4\}$ and $\delta \le \Sigma_{\min}/(C_0\Sigma_{\max})$. Then
$$ \big\|\mathrm{grad}\,\widetilde F(x)\big\|^2 \ge C_1\,n\epsilon^2\,\Sigma_{\min}^4\left[\, d(x,u) - C_2\,\frac{\sqrt{r}\,\Sigma_{\max}}{\epsilon\,\Sigma_{\min}}\,\frac{\|Z^E\|_2}{\Sigma_{\min}} \,\right]_+^2, \qquad (17) $$
for all $x\in M(m,n)\cap K(4\mu_0)$ such that $d(x,u)\le\delta$, with probability at least $1-1/n^4$. (Here $[a]_+ \equiv \max(a,0)$.)
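The distance introduced in Section 4.1 can be evaluated directly from the principal angles between the column spans. Here is a minimal sketch (ours, assuming the factors are normalized as $X^TX = m\mathbf{1}$, $Y^TY = n\mathbf{1}$):

```python
import numpy as np

def d_grassmann(X1, X2):
    """d(X_1, X_2) = ||theta||_2, where cos(theta_i) are the singular values of
    X_1^T X_2 / m (principal angles between the two column spans)."""
    m = X1.shape[0]
    cosines = np.clip(np.linalg.svd(X1.T @ X2 / m, compute_uv=False), -1.0, 1.0)
    return np.linalg.norm(np.arccos(cosines))

def d_manifold(x1, x2):
    """d(x_1, x_2) = sqrt(d(X_1, X_2)^2 + d(Y_1, Y_2)^2) on M(m, n)."""
    (X1, Y1), (X2, Y2) = x1, x2
    return np.hypot(d_grassmann(X1, X2), d_grassmann(Y1, Y2))
```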

We can now turn to the proof of our main theorem.

Proof (Theorem 1.2). Let $\delta = \Sigma_{\min}/(C_0\Sigma_{\max})$ with $C_0$ large enough so that the hypotheses of Lemmas 4.1 and 4.3 are verified. Call $\{x_k\}_{k\ge 0}$ the sequence of pairs $(X_k,Y_k)\in M(m,n)$ generated by gradient descent. By assumption, the following is true with a large enough constant $C$:
$$ \|Z^E\|_2 \le \frac{\epsilon}{C\sqrt{r}}\left(\frac{\Sigma_{\min}}{\Sigma_{\max}}\right)^2\Sigma_{\min}. \qquad (18) $$
Further, by using Corollary 4.2 in Eqs. (13) and (14) we get
$$ F(x) - F(u) \ge C_1\,n\epsilon\sqrt{\alpha}\,\Sigma_{\min}^2\big\{d(x,u)^2 - \delta_{0,-}^2\big\}, \qquad (19) $$
$$ F(x) - F(u) \le C_2\,n\epsilon\sqrt{\alpha}\,\Sigma_{\max}^2\big\{d(x,u)^2 + \delta_{0,+}^2\big\}, \qquad (20) $$
where
$$ \delta_{0,-} \equiv C\,\frac{\sqrt{r}\,\Sigma_{\max}}{\epsilon\,\Sigma_{\min}}\,\frac{\|Z^E\|_2}{\Sigma_{\min}}, \qquad \delta_{0,+} \equiv C\,\frac{\sqrt{r}\,\Sigma_{\max}}{\epsilon\,\Sigma_{\min}}\,\frac{\|Z^E\|_2}{\Sigma_{\max}}. $$
By Eq. (18), we can assume $\delta_{0,+}\le\delta_{0,-}\le\delta/10$.

For $\epsilon \ge C\alpha\mu_1^2 r^2(\Sigma_{\max}/\Sigma_{\min})^4$ as per our assumptions, using Eq. (18) in Theorem 1.1, together with the bound $d(u,x_0)\le\|M - X_0S_0Y_0^T\|_F/(n\sqrt{\alpha}\,\Sigma_{\min})$, we get
$$ d(u,x_0) \le \frac{\delta}{10}. $$
We make the following claims:

1. $x_k\in K(4\mu_0)$ for all $k$. Indeed, without loss of generality we can assume $x_0\in K(3\mu_0)$ (because otherwise we can rescale those rows of $X_0$, $Y_0$ that violate the constraint). Therefore $\widetilde F(x_0) = F(x_0) \le 4C_2\,n\epsilon\sqrt{\alpha}\,\Sigma_{\max}^2\,\delta^2/100$. On the other hand, $\widetilde F(x) \ge \rho(e^{1/9}-1)$ for $x\notin K(4\mu_0)$. Since $\widetilde F(x_k)$ is a non-increasing sequence, the claim follows provided we take $\rho \ge C_2\,n\epsilon\sqrt{\alpha}\,\Sigma_{\min}^2$.

2. $d(x_k,u)\le\delta/10$ for all $k$. Assuming $\epsilon \ge C\alpha\mu_1^2 r^2(\Sigma_{\max}/\Sigma_{\min})^6$, we have $d(x_0,u)^2 \le (\Sigma_{\min}^2/C'\,\Sigma_{\max}^2)(\delta/10)^2$. Also, assuming Eq. (18) with a large enough $C$, we can show the following: for all $x_k$ such that $d(x_k,u)\in[\delta/10,\delta]$, we have $\widetilde F(x_k) \ge F(x_k) > F(x_0) = \widetilde F(x_0)$. This contradicts the monotonicity of $\widetilde F(x_k)$, and thus proves the claim.

Since the cost function is twice differentiable, and because of the above, the sequence $\{x_k\}$ converges to
$$ \Omega = \big\{x\in K(4\mu_0)\cap M(m,n)\,:\,d(x,u)\le\delta,\ \mathrm{grad}\,\widetilde F(x) = 0\big\}. $$
By Lemma 4.3, for any $x\in\Omega$,
$$ d(x,u) \le C\,\frac{\sqrt{r}\,\Sigma_{\max}}{\epsilon\,\Sigma_{\min}}\,\frac{\|Z^E\|_2}{\Sigma_{\min}}, $$
which implies the thesis using Corollary 4.2.

4.3 Proof of Lemma 4.1 and Corollary 4.2

Proof (Lemma 4.1). The proof is based on the analogous bound in the noiseless case, i.e., Lemma 5.3 in [KMO09]. For the reader's convenience, the result is reported in Appendix A, Lemma A.1. For the proof of these lemmas, we refer to [KMO09].

In order to prove the lower bound, we start by noticing that
$$ F(u) \le \frac{1}{2}\,\|P_E(Z)\|_F^2, $$
which is simply proved by using $S = \Sigma$ in Eq. (3). On the other hand, we have
$$ F(x) = \frac{1}{2}\,\|P_E(XSY^T - M - Z)\|_F^2 \qquad (21) $$
$$ = \frac{1}{2}\,\|P_E(Z)\|_F^2 + \frac{1}{2}\,\|P_E(XSY^T - M)\|_F^2 - \big\langle P_E(Z),\,XSY^T - M\big\rangle \qquad (22) $$
$$ \ge F(u) + C\,n\epsilon\sqrt{\alpha}\,d_-(x,u)^2 - \sqrt{2r}\,\|Z^E\|_2\,\|XSY^T - M\|_F, \qquad (23) $$
where in the last step we used Lemma A.1. Now, by the triangle inequality,
$$ \|XSY^T - M\|_F^2 \le 3\,\|X(S-\Sigma)Y^T\|_F^2 + 3\,\|X\Sigma(Y-V)^T\|_F^2 + 3\,\|(X-U)\Sigma V^T\|_F^2 $$
$$ \le 3\,nm\,\|S-\Sigma\|_F^2 + 3\,n^2\alpha\,\Sigma_{\max}^2\left(\frac{1}{m}\|X-U\|_F^2 + \frac{1}{n}\|Y-V\|_F^2\right) \le C\,n^2\alpha\,d_+(x,u)^2. $$
In order to prove the upper bound, we proceed as above to get
$$ F(x) \le \frac{1}{2}\,\|P_E(Z)\|_F^2 + C\,n\epsilon\sqrt{\alpha}\,\Sigma_{\max}^2\,d(x,u)^2 + C\,n\sqrt{2r\alpha}\,\|Z^E\|_2\,d_+(x,u). $$
Further, by replacing $x$ with $u$ in Eq. (22),
$$ F(u) \ge \frac{1}{2}\,\|P_E(Z)\|_F^2 - \big\langle P_E(Z),\,U(S-\Sigma)V^T\big\rangle \ge \frac{1}{2}\,\|P_E(Z)\|_F^2 - C\,n\sqrt{2r\alpha}\,\|Z^E\|_2\,d_+(x,u). $$
By taking the difference of these inequalities we get the desired upper bound.

Proof (Corollary 4.2). By putting together Eqs. (13) and (14), and using the definitions of $d_+(x,u)$ and $d_-(x,u)$, we get
$$ \|S-\Sigma\|_F^2 \le \frac{C_2}{C_1}\,\Sigma_{\max}^2\,d(x,u)^2 + \frac{C_2\sqrt{r}}{C_1\,\epsilon}\,\|Z^E\|_2\,\sqrt{\Sigma_{\max}^2\,d(x,u)^2 + \|S-\Sigma\|_F^2}. $$
Without loss of generality, assume $C_2\ge C_1$, and call $x\equiv\|S-\Sigma\|_F$, $a^2\equiv(C_2/C_1)\,\Sigma_{\max}^2\,d(x,u)^2$, and $b\equiv(C_2\sqrt{r}/C_1\epsilon)\,\|Z^E\|_2$. The above inequality then takes the form
$$ x^2 \le a^2 + b\sqrt{x^2+a^2} \le a^2 + ab + bx, $$
which implies our claim $x\le a+b$.

The singular value bounds (15) and (16) follow by the triangle inequality. For instance,
$$ \sigma_{\min}(S) \ge \Sigma_{\min} - C\,\Sigma_{\max}\,d(x,u) - C\,\frac{\sqrt{r}}{\epsilon}\,\|Z^E\|_2, $$
which implies inequality (16) for $d(x,u)\le\delta = \Sigma_{\min}/(C_0\Sigma_{\max})$ and $C_0$ large enough. An analogous argument proves Eq. (15).

4.4 Proof of Lemma 4.3

Without loss of generality we will assume $\delta\le 1$, $C_2\ge 1$ and
$$ \frac{\sqrt{r}}{\epsilon}\,\|Z^E\|_2 \le \Sigma_{\min}, \qquad (24) $$
because otherwise the lower bound (17) is trivial for all $d(x,u)\le\delta$.

Denote by $t\mapsto x(t)$, $t\in[0,1]$, the geodesic on $M(m,n)$ such that $x(0) = u$ and $x(1) = x$, parametrized proportionally to the arclength. Let $\hat w = \dot x(1)$ be its final velocity, with $\hat w = (\widehat W,\widehat Q)$. Obviously $\hat w\in T_x$ (with $T_x$ the tangent space of $M(m,n)$ at $x$) and
$$ \frac{1}{m}\,\|\widehat W\|^2 + \frac{1}{n}\,\|\widehat Q\|^2 = d(x,u)^2, $$
because $t\mapsto x(t)$ is parametrized proportionally to the arclength. Explicit expressions for $\hat w$ can be obtained in terms of $w = \dot x(0) = (W,Q)$ [KMO09]. If we let $W = L\Theta R^T$ be the singular value decomposition of $W$, we obtain
$$ \widehat W = -UR\,\Theta\sin\Theta\,R^T + L\,\Theta\cos\Theta\,R^T. \qquad (25) $$

It was proved in [KMO09] that $\langle\mathrm{grad}\,G(x),\hat w\rangle \ge 0$. It is therefore sufficient to lower bound the scalar product $\langle\mathrm{grad}\,F(x),\hat w\rangle$. By computing the gradient of $F$ we get
$$ \langle\mathrm{grad}\,F(x),\hat w\rangle = \big\langle P_E(XSY^T - N),\,XS\widehat Q^T + \widehat W S Y^T\big\rangle $$
$$ = \big\langle P_E(XSY^T - M),\,XS\widehat Q^T + \widehat W S Y^T\big\rangle - \big\langle P_E(Z),\,XS\widehat Q^T + \widehat W S Y^T\big\rangle $$
$$ = \langle\mathrm{grad}\,F_0(x),\hat w\rangle - \big\langle P_E(Z),\,XS\widehat Q^T + \widehat W S Y^T\big\rangle, \qquad (26) $$
where $F_0(x)$ is the cost function in the absence of noise, namely
$$ F_0(X,Y) = \min_{S\in\mathbb{R}^{r\times r}}\frac{1}{2}\sum_{(i,j)\in E}\big((XSY^T)_{ij} - M_{ij}\big)^2. \qquad (27) $$
As proved in [KMO09],
$$ \langle\mathrm{grad}\,F_0(x),\hat w\rangle \ge C\,n\epsilon\sqrt{\alpha}\,\Sigma_{\min}^2\,d(x,u)^2 \qquad (28) $$
(see Lemma A.3 in the Appendix). We are therefore left with the task of upper bounding $\langle P_E(Z),\,XS\widehat Q^T + \widehat W S Y^T\rangle$. Since $XS\widehat Q^T$ has rank at most $r$, we have
$$ \big\langle P_E(Z),\,XS\widehat Q^T\big\rangle \le \sqrt{r}\,\|Z^E\|_2\,\|XS\widehat Q^T\|_F. $$
Since $X^TX = m\mathbf{1}$, we get
$$ \|XS\widehat Q^T\|_F^2 = m\,\mathrm{tr}\big(S^TS\,\widehat Q^T\widehat Q\big) \le n\alpha\,\sigma_{\max}(S)^2\,\|\widehat Q\|_F^2 $$
$$ \le C\,n^2\alpha\left(\Sigma_{\max} + \frac{\sqrt{r}}{\epsilon}\,\|Z^E\|_2\right)^2 d(x,u)^2 \qquad (29) $$
$$ \le 4C\,n^2\alpha\,\Sigma_{\max}^2\,d(x,u)^2, \qquad (30) $$

where, in inequality (29), we used Corollary 4.2 and, in the last step, we used Eq. (24). Proceeding analogously for $\langle P_E(Z),\,\widehat W S Y^T\rangle$, we get
$$ \big\langle P_E(Z),\,XS\widehat Q^T + \widehat W S Y^T\big\rangle \le C\,n\,\Sigma_{\max}\sqrt{r\alpha}\,\|Z^E\|_2\,d(x,u). $$
Together with Eqs. (26) and (28) this implies
$$ \langle\mathrm{grad}\,F(x),\hat w\rangle \ge C_1\,n\epsilon\sqrt{\alpha}\,\Sigma_{\min}^2\,d(x,u)\left\{ d(x,u) - C_2\,\frac{\sqrt{r}\,\Sigma_{\max}}{\epsilon\,\Sigma_{\min}}\,\frac{\|Z^E\|_2}{\Sigma_{\min}} \right\}, $$
which implies Eq. (17) by the Cauchy-Schwarz inequality.

5 Proof of Theorem 1.3

Proof (Independent entries model). The proof is analogous to the proof of Lemma 3.2 in [KMO09], where the matrix $M$ to be reconstructed plays the role of the noise matrix $Z$. We want to show that
$$ |x^T\widetilde Z^E y| \le C\sigma\sqrt{\alpha\,\epsilon\log|E|} $$
for all $x\in\mathbb{R}^m$ and $y\in\mathbb{R}^n$ such that $\|x\| = 1$ and $\|y\| = 1$. This will be done by first reducing ourselves to $x$ and $y$ belonging to finite discretized sets. We define
$$ T^n = \left\{ x\in\left(\frac{\Delta}{\sqrt{n}}\,\mathbb{Z}\right)^{\!n}\,:\,\|x\|\le 1 \right\}, $$
for a small constant $\Delta$. Notice that $T^n\subseteq S^n \equiv \{x\in\mathbb{R}^n\,:\,\|x\|\le 1\}$. The next remark is proved in [FKS89, FO05], and relates the original problem to the discretized one.

Remark 5.1. Let $R\in\mathbb{R}^{m\times n}$ be a matrix. If $|x^TRy|\le B$ for all $x\in T^m$ and $y\in T^n$, then $|x^TRy|\le(1-\Delta)^{-2}B$ for all $x\in S^m$ and $y\in S^n$.

Hence it is enough to show that, with high probability, $|x^T\widetilde Z^E y| \le C\sigma\sqrt{\alpha\,\epsilon\log n}$ for all $x\in T^m$ and $y\in T^n$.

Given $x\in S^m$, $y\in S^n$, we define the set of light couples $L$ as
$$ L = \Big\{ (i,j)\,:\,|x_iy_j| \le \sqrt{\epsilon/(2mn)} \Big\}, $$
and let $\bar L$ be its complement, which we call the heavy couples. In the following, we will prove that for any $x\in T^m$ and any $y\in T^n$, with high probability, the contribution of the light couples, $\sum_{L}x_iZ^E_{ij}y_j$, is bounded by $C\sigma\sqrt{\alpha\epsilon}$, and the contribution of the heavy couples, $\sum_{\bar L}x_iZ^E_{ij}y_j$, is bounded by $C\sigma\sqrt{\alpha\epsilon\log|E|}$. Together with Remark 5.1, this proves the desired thesis.

Bounding the contribution of the light couples. Let us denote by $A_l$ and $A_r$ the subsets of row and column indices that have not been trimmed. Notice that $A = (A_l,A_r)$ is a function of the random set $E$. For any $E\subseteq[m]\times[n]$ and $A = (A_l,A_r)$ with $A_l\subseteq[m]$, $A_r\subseteq[n]$, we define $Z^{E,A}$ by setting to zero the entries of $Z$ that are not in $E$, those whose row index is not in $A_l$, and those whose column index is not in $A_r$. Let $X^{E,A}_L = \sum_{(i,j)\in L}x_iZ^{E,A}_{ij}y_j$ be the contribution of the light couples; then we need to bound the error event
$$ H(E,A) = \Big\{ \exists\,(x,y)\in T^m\times T^n\,:\,\big|X^{E,A}_L\big| > C\sigma\sqrt{\epsilon} \Big\}. \qquad (31) $$
Note that $\widetilde Z^E = Z^{E,A}$, and hence we want to bound $\mathbb{P}\{H(E,A)\}$. We use the following remark, which is a slight modification of the proof for bounding the contribution of the light couples given in [KMO09].

Remark 5.2. There exist constants $C_1$ and $C_2$ depending only on $\alpha$ such that the following is true:
$$ \mathbb{P}\{H(E,A)\} \le 2^{(n+m)H(\delta)}\max_{|A_l|\ge m(1-\delta),\,|A_r|\ge n(1-\delta)}\mathbb{P}\{H(E;A)\} + \frac{1}{n^4}, \qquad (32) $$
with $\delta \equiv \max\{e^{-C_1\epsilon},\,C_2\alpha\}$ and $H(x)$ the binary entropy function.

Now we are left with the task of bounding $\mathbb{P}\{H(E;A)\}$ by a concentration-of-measure inequality for the sub-Gaussian random matrix $Z$.

Lemma 5.3. For any $x\in S^m$ and $y\in S^n$,
$$ \mathbb{P}\Big\{ X^{E,A}_L > C\sigma\sqrt{\epsilon} \Big\} \le \exp\big\{-(C-2)\sqrt{\alpha}\,n\big\}. $$

Proof. We will bound the probability using a Chernoff bound. To this end, we first upper bound $\mathbb{E}\big[e^{\lambda X^{E,A}_L}\big]$. Note that for sub-Gaussian $Z_{ij}$ we have $\mathbb{E}\big[e^{kZ_{ij}}\big]\le e^{k^2\sigma^2}$, whence
$$ \mathbb{E}\big[e^{\lambda X^{E,A}_L}\big] \le \prod_{(i,j)\in L}\left(1 + \frac{\epsilon}{\sqrt{mn}}\,\big(e^{\lambda^2x_i^2y_j^2\sigma^2}-1\big)\right) \qquad (33) $$
$$ \le \prod_{(i,j)\in L}\left(1 + \frac{\epsilon}{\sqrt{mn}}\,2\lambda^2x_i^2y_j^2\sigma^2\right) \qquad (34) $$
$$ \le \exp\left(2\lambda^2\sigma^2\,\frac{\epsilon}{\sqrt{mn}}\sum_{ij}x_i^2y_j^2\right). \qquad (35) $$
Equation (34) follows from the fact that $e^x\le 1+2x$ for $0\le x\le\frac{1}{2}$, which is ensured by choosing $\lambda = \sigma^{-1}\sqrt{mn/\epsilon}$, together with the definition of the light couples. The thesis follows from the Chernoff bound after some calculus.

We can now finish the upper bound on the light-couples contribution. Consider the error event in Eq. (31). A simple volume calculation shows that $|T^m|\le(10/\Delta)^m$. We can apply a union bound over $T^m$ and $T^n$ to Eq. (32) to obtain
$$ \mathbb{P}\{H(E,A)\} \le \exp\Big\{\log 2 + (1+\alpha)\big(H(\delta)\log 2 + \log(20/\Delta)\big)\,n - (C-2)\sqrt{\alpha}\,n\Big\} + \frac{1}{n^4}. $$
Hence, assuming $\alpha\ge 1$, there exists a numerical constant $C'$ such that, for $C\ge C'\sqrt{\alpha}$, the first term is of order $e^{-\Theta(n)}$, and this finishes the proof.

Bounding the contribution of the heavy couples. The heavy-couples contribution is bounded by
$$ \Big|\sum_{(i,j)\in\bar L}x_iZ^E_{ij}y_j\Big| \le Z_{\max}\sum_{(i,j)\in\bar L}Q_{ij}\,|x_iy_j|, $$
where $Z_{\max} = \max_{(i,j)\in E}|Z_{ij}|$ and $Q$ is the adjacency matrix of the underlying graph. Using the following remark, we get that this contribution is bounded by $C\,Z_{\max}\sqrt{\alpha\epsilon}$ with probability larger than $1 - 1/(2n^3)$.

Remark 5.4. Given vectors $x\in T^m$ and $y\in T^n$, let $\bar L = \{(i,j)\,:\,|x_iy_j|\ge C\sqrt{\epsilon/mn}\}$. Then there exists a constant $C'$ such that
$$ \sum_{(i,j)\in\bar L}Q_{ij}\,|x_iy_j| \le C'\sqrt{\alpha\epsilon}, $$
with probability larger than $1 - 1/(2n^3)$.

For the proof of this remark we refer to [KMO09]. Further, for $Z$ drawn from the independent entries model, we have
$$ \mathbb{P}\big(Z_{\max}\ge L\sigma\sqrt{\log|E|}\big) \le |E|^{1-\frac{L^2}{2}}. $$
For $L$ larger than 4, we get the desired thesis.

Proof (Worst case model). Let $D$ be the $m\times n$ all-ones matrix. Then, for any matrix $Z$ from the worst case model, we have $\|\widetilde Z^E\|_2 \le Z_{\max}\,\|\widetilde D^E\|_2$, since
$$ |x^T\widetilde Z^E y| \le \sum_{i,j} Z_{\max}\,|x_i|\,\widetilde D^E_{ij}\,|y_j|, $$
which follows from the fact that the $Z_{ij}$'s are uniformly bounded. Further, $\widetilde D^E$ is the adjacency matrix of a bipartite graph with bounded degrees. Then, for any choice of $E$, the following is true for all positive integers $k$:
$$ \|\widetilde D^E\|_2^{2k} = \max_{x,\,\|x\|=1} x^T\big((\widetilde D^E)^T\widetilde D^E\big)^k x \le \mathrm{Tr}\Big(\big((\widetilde D^E)^T\widetilde D^E\big)^k\Big) \le n\,(2\epsilon)^{2k}. $$
Taking $k$ large, we get the desired thesis.

Acknowledgements

This work was partially supported by a Terman fellowship, an NSF CAREER award (CCF) and an NSF grant (DMS).

A Three Lemmas on the Noiseless Problem

Lemma A.1. There exist numerical constants $C_0,C_1,C_2$ such that the following happens. Assume $\epsilon\ge C_0\mu_0 r\sqrt{\alpha}\max\{\log n\,;\,\mu_0 r\sqrt{\alpha}(\Sigma_{\min}/\Sigma_{\max})^4\}$ and $\delta\le\Sigma_{\min}/(C_0\Sigma_{\max})$. Then
$$ C_1\sqrt{\alpha}\,\Sigma_{\min}^2\,d(x,u)^2 + C_1\sqrt{\alpha}\,\|S_0-\Sigma\|_F^2 \le \frac{1}{n\epsilon}\,F_0(x) \le C_2\sqrt{\alpha}\,\Sigma_{\max}^2\,d(x,u)^2, $$
for all $x\in M(m,n)\cap K(4\mu_0)$ such that $d(x,u)\le\delta$, with probability at least $1-1/n^4$. Here $S_0\in\mathbb{R}^{r\times r}$ is the matrix realizing the minimum in Eq. (27).

Lemma A.2. There exist numerical constants $C_0$ and $C$ such that the following happens. Assume $\epsilon\ge C_0\mu_0 r\sqrt{\alpha}\,(\Sigma_{\max}/\Sigma_{\min})^2\max\{\log n\,;\,\mu_0 r\sqrt{\alpha}(\Sigma_{\max}/\Sigma_{\min})^4\}$ and $\delta\le\Sigma_{\min}/(C_0\Sigma_{\max})$. Then
$$ \|\mathrm{grad}\,F_0(x)\|^2 \ge C\,n\epsilon^2\,\Sigma_{\min}^4\,d(x,u)^2, $$
for all $x\in M(m,n)\cap K(4\mu_0)$ such that $d(x,u)\le\delta$, with probability at least $1-1/n^4$.

Lemma A.3. Define $\hat w$ as in Eq. (25). Then there exist numerical constants $C_0$ and $C$ such that the following happens. Under the hypotheses of Lemma A.2,
$$ \langle\mathrm{grad}\,F_0(x),\hat w\rangle \ge C\,n\epsilon\sqrt{\alpha}\,\Sigma_{\min}^2\,d(x,u)^2, $$
for all $x\in M(m,n)\cap K(4\mu_0)$ such that $d(x,u)\le\delta$, with probability at least $1-1/n^4$.

References

[AM07] D. Achlioptas and F. McSherry, Fast computation of low-rank matrix approximations, J. ACM 54 (2007), no. 2.
[AMS08] P.-A. Absil, R. Mahony, and R. Sepulchre, Optimization algorithms on matrix manifolds, Princeton University Press, 2008.
[CCS08] J.-F. Cai, E. J. Candès, and Z. Shen, A singular value thresholding algorithm for matrix completion, arXiv preprint, 2008.

[CP09] E. J. Candès and Y. Plan, Matrix completion with noise, arXiv preprint, 2009.
[CR08] E. J. Candès and B. Recht, Exact matrix completion via convex optimization, arXiv preprint, 2008.
[CT09] E. J. Candès and T. Tao, The power of convex relaxation: Near-optimal matrix completion, arXiv preprint, 2009.
[EAS99] A. Edelman, T. A. Arias, and S. T. Smith, The geometry of algorithms with orthogonality constraints, SIAM J. Matr. Anal. Appl. 20 (1999).
[Faz02] M. Fazel, Matrix rank minimization with applications, Ph.D. thesis, Stanford University, 2002.
[FKS89] J. Friedman, J. Kahn, and E. Szemerédi, On the second eigenvalue in random regular graphs, Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing (Seattle, Washington, USA), ACM, May 1989.
[FKV04] A. Frieze, R. Kannan, and S. Vempala, Fast Monte-Carlo algorithms for finding low-rank approximations, J. ACM 51 (2004), no. 6.
[FO05] U. Feige and E. Ofek, Spectral techniques applied to sparse random graphs, Random Struct. Algorithms 27 (2005), no. 2.
[KMO09] R. H. Keshavan, A. Montanari, and S. Oh, Matrix completion from a few entries, arXiv preprint, January 2009.
[LB09] K. Lee and Y. Bresler, ADMiRA: Atomic decomposition for minimum rank approximation, arXiv preprint, 2009.
[MGC09] S. Ma, D. Goldfarb, and L. Chen, Fixed point and Bregman iterative methods for matrix rank minimization, arXiv preprint, 2009.
[TY09] K. Toh and S. Yun, An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems, preprint, 2009.
[WGRM09] J. Wright, A. Ganesh, S. Rao, and Y. Ma, Robust principal component analysis: Exact recovery of corrupted low-rank matrices, arXiv preprint, 2009.


More information

2 Two-Point Boundary Value Problems

2 Two-Point Boundary Value Problems 2 Two-Point Boundary Value Problems Another fundamental equation, in addition to the heat eq. and the wave eq., is Poisson s equation: n j=1 2 u x 2 j The unknown is the function u = u(x 1, x 2,..., x

More information

8.1 Concentration inequality for Gaussian random matrix (cont d)

8.1 Concentration inequality for Gaussian random matrix (cont d) MGMT 69: Topics in High-dimensional Data Analysis Falll 26 Lecture 8: Spectral clustering and Laplacian matrices Lecturer: Jiaming Xu Scribe: Hyun-Ju Oh and Taotao He, October 4, 26 Outline Concentration

More information

A combinatorial algorithm minimizing submodular functions in strongly polynomial time

A combinatorial algorithm minimizing submodular functions in strongly polynomial time A combinatorial algorithm minimizing submodular functions in strongly polynomial time Alexander Schrijver 1 Abstract We give a strongly polynomial-time algorithm minimizing a submodular function f given

More information

A Randomized Algorithm for the Approximation of Matrices

A Randomized Algorithm for the Approximation of Matrices A Randomized Algorithm for the Approximation of Matrices Per-Gunnar Martinsson, Vladimir Rokhlin, and Mark Tygert Technical Report YALEU/DCS/TR-36 June 29, 2006 Abstract Given an m n matrix A and a positive

More information

Linear Algebra. Session 12

Linear Algebra. Session 12 Linear Algebra. Session 12 Dr. Marco A Roque Sol 08/01/2017 Example 12.1 Find the constant function that is the least squares fit to the following data x 0 1 2 3 f(x) 1 0 1 2 Solution c = 1 c = 0 f (x)

More information

arxiv: v3 [math.co] 10 Mar 2018

arxiv: v3 [math.co] 10 Mar 2018 New Bounds for the Acyclic Chromatic Index Anton Bernshteyn University of Illinois at Urbana-Champaign arxiv:1412.6237v3 [math.co] 10 Mar 2018 Abstract An edge coloring of a graph G is called an acyclic

More information

THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR

THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR 1. Definition Existence Theorem 1. Assume that A R m n. Then there exist orthogonal matrices U R m m V R n n, values σ 1 σ 2... σ p 0 with p = min{m, n},

More information

Optimization methods

Optimization methods Lecture notes 3 February 8, 016 1 Introduction Optimization methods In these notes we provide an overview of a selection of optimization methods. We focus on methods which rely on first-order information,

More information

Singular value decomposition (SVD) of large random matrices. India, 2010

Singular value decomposition (SVD) of large random matrices. India, 2010 Singular value decomposition (SVD) of large random matrices Marianna Bolla Budapest University of Technology and Economics marib@math.bme.hu India, 2010 Motivation New challenge of multivariate statistics:

More information

EXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1)

EXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1) EXERCISE SET 5. 6. The pair (, 2) is in the set but the pair ( )(, 2) = (, 2) is not because the first component is negative; hence Axiom 6 fails. Axiom 5 also fails. 8. Axioms, 2, 3, 6, 9, and are easily

More information

CS598 Machine Learning in Computational Biology (Lecture 5: Matrix - part 2) Professor Jian Peng Teaching Assistant: Rongda Zhu

CS598 Machine Learning in Computational Biology (Lecture 5: Matrix - part 2) Professor Jian Peng Teaching Assistant: Rongda Zhu CS598 Machine Learning in Computational Biology (Lecture 5: Matrix - part 2) Professor Jian Peng Teaching Assistant: Rongda Zhu Feature engineering is hard 1. Extract informative features from domain knowledge

More information

1 Directional Derivatives and Differentiability

1 Directional Derivatives and Differentiability Wednesday, January 18, 2012 1 Directional Derivatives and Differentiability Let E R N, let f : E R and let x 0 E. Given a direction v R N, let L be the line through x 0 in the direction v, that is, L :=

More information

CSC 576: Variants of Sparse Learning

CSC 576: Variants of Sparse Learning CSC 576: Variants of Sparse Learning Ji Liu Department of Computer Science, University of Rochester October 27, 205 Introduction Our previous note basically suggests using l norm to enforce sparsity in

More information

Discreteness of Transmission Eigenvalues via Upper Triangular Compact Operators

Discreteness of Transmission Eigenvalues via Upper Triangular Compact Operators Discreteness of Transmission Eigenvalues via Upper Triangular Compact Operators John Sylvester Department of Mathematics University of Washington Seattle, Washington 98195 U.S.A. June 3, 2011 This research

More information