Information-Theoretic Limits of Matrix Completion
Erwin Riegler, David Stotz, and Helmut Bölcskei
Dept. IT & EE, ETH Zurich, Switzerland
{eriegler, dstotz, ...}

Abstract: We propose an information-theoretic framework for matrix completion. The theory goes beyond the low-rank structure and applies to general matrices of low description complexity. Specifically, we consider random matrices $X \in \mathbb{R}^{m \times n}$ of arbitrary distribution (continuous, discrete, discrete-continuous mixture, or even singular). With $S \subseteq \mathbb{R}^{m \times n}$ an $\varepsilon$-support set of $X$, i.e., $P[X \in S] \geq 1 - \varepsilon$, and $\underline{\dim}_B(S)$ denoting the lower Minkowski dimension of $S$, we show that $k > \underline{\dim}_B(S)$ measurements of the form $\langle A_i, X \rangle$, with $A_i$ denoting the measurement matrices, suffice to recover $X$ with probability of error at most $\varepsilon$. The result holds for Lebesgue a.a. $A_i$ and does not need incoherence between the $A_i$ and the unknown matrix $X$. We furthermore show that $k > \underline{\dim}_B(S)$ measurements also suffice to recover the unknown matrix $X$ from measurements taken with rank-one $A_i$; again, this applies to a.a. rank-one $A_i$. Rank-one measurement matrices are attractive as they require less storage space than general measurement matrices and can be applied faster. Particularizing our results to the recovery of low-rank matrices, we find that $k > (m + n - r)r$ measurements are sufficient to recover matrices of rank at most $r$. Finally, we construct a class of rank-$r$ matrices that can be recovered with arbitrarily small probability of error from $k < (m + n - r)r$ measurements.

I. INTRODUCTION

Matrix completion refers to the recovery of a low-rank matrix from a (small) subset of its entries or a (small) number of linear combinations of its entries. This problem arises in a wide range of applications, including quantum state tomography, face recognition, recommender systems, and sensor localization (see, e.g., [1] and references therein). The formal problem statement is as follows.
Suppose we have $k$ linear measurements of the $m \times n$ matrix $X$ with $\mathrm{rank}(X) \leq r$ in the form

$$y = (\langle A_1, X \rangle, \ldots, \langle A_k, X \rangle)^T \in \mathbb{R}^k \quad (1)$$

where the $A_i \in \mathbb{R}^{m \times n}$ denote the measurement matrices and $\langle \cdot, \cdot \rangle$ stands for the standard trace inner product between matrices in $\mathbb{R}^{m \times n}$. The number of measurements $k$ is typically much smaller than the total number of entries, $mn$, of $X$. Depending on the $A_i$, the measurements can simply be individual entries of $X$ or general linear combinations thereof.

The vast literature on matrix completion (for a highly incomplete list, see [1]-[8]) provides guarantees for the recovery of the unknown low-rank matrix $X$ from the measurements $y$ under various assumptions on the measurement matrices $A_i$ and the low-rank models generating $X$. For example, in [2] the $A_i$ are assumed to be chosen randomly from an orthonormal basis for $\mathbb{R}^{n \times n}$, and it is shown that an unknown $n \times n$ matrix $X$ of rank at most $r$ can be recovered with high probability if $k = O(n r \nu \ln^2 n)$. Here, $\nu$ quantifies the incoherence between the unknown matrix $X$ and the orthonormal basis for $\mathbb{R}^{n \times n}$ the $A_i$ are drawn from. The setting in [3] assumes random measurement matrices $A_i$ each with a single nonzero entry whose position is chosen uniformly at random. It is shown that almost all (a.a.) matrices (with respect to the random orthogonal model [3, Def. 2.1]) of rank at most $r$ can be recovered with high probability (with respect to the measurement matrices) provided that the number of measurements satisfies $k \geq C n^{1.25} r \ln n$, where $C$ is a numerical constant. In [1] it is shown that for measurement matrices $A_i$ containing i.i.d. entries (that are, e.g., Gaussian), a matrix $X$ of rank at most $r$ can be recovered with high probability from $k \geq C(m + n)r$ measurements, where $C$ is a constant. The recovery guarantees in [1]-[8] all pertain to recovery through nuclear norm minimization. In [9], measurement matrices $A_i$ containing i.i.d. entries drawn from an absolutely continuous (with respect to Lebesgue measure) distribution are considered.
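The measurement model in (1) amounts to $k$ trace inner products with the unknown matrix. A minimal numerical sketch follows; the sizes and the Gaussian choice of the $A_i$ are illustrative assumptions, not prescribed by the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r, k = 8, 6, 2, 24  # illustrative sizes (assumptions)

# Rank-r unknown matrix X = U V^T.
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# k generic measurement matrices A_i with i.i.d. Gaussian entries.
A = rng.standard_normal((k, m, n))

# y_i = <A_i, X> = tr(A_i^T X), the trace inner product in (1).
y = np.array([np.trace(Ai.T @ X) for Ai in A])

# Equivalently, each measurement is a Euclidean inner product of the
# flattened matrices, i.e., a linear map from R^{m x n} to R^k.
y_flat = A.reshape(k, -1) @ X.reshape(-1)
assert np.allclose(y, y_flat)
```

The second form makes explicit that (1) is an ordinary underdetermined linear system in the $mn$ entries of $X$ whenever $k < mn$.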
It is shown that rank minimization (which is NP-hard, in general) recovers an $n \times n$ matrix $X$ of rank at most $r$ with probability one if $k > (2n - r)r$. It is furthermore shown in [9] that all matrices $X$ of rank at most $n/2$ can be recovered, again with probability one, provided that $k \geq 4nr - 4r^2$. The recovery thresholds in [1], [9] do not exhibit a $\log n$ term, but assume significant richness in the random measurement matrices $A_i$. Storing and applying such measurement matrices is costly in terms of memory and computation time. To overcome this problem, [8] considers rank-one measurement matrices of the form $A_i = a_i b_i^T$, where $a_i \in \mathbb{R}^m$ and $b_i \in \mathbb{R}^n$ are independent with i.i.d. Gaussian or sub-Gaussian entries, and shows that nuclear norm minimization succeeds under the same recovery threshold as in [1], namely $k \geq C(m + n)r$.

Contributions: Inspired by the work of Wu and Verdú on analog signal compression [10], we formulate an information-theoretic framework for almost lossless matrix completion. The theory is general in the sense of going beyond the low-rank structure and applying to general matrices of low description complexity. Specifically, we consider random matrices $X \in \mathbb{R}^{m \times n}$ of arbitrary distribution (continuous, discrete, discrete-continuous mixture, or even singular). With $S \subseteq \mathbb{R}^{m \times n}$ an $\varepsilon$-support set of $X$, i.e., $P[X \in S] \geq 1 - \varepsilon$, and $\underline{\dim}_B(S)$ denoting the lower Minkowski dimension (see Definition 3) of $S$, we show that $k > \underline{\dim}_B(S)$ measurements suffice to recover $X$ with probability of error no more than $\varepsilon$. The result holds for Lebesgue a.a. measurement matrices $A_i$ and does not need any incoherence between the $A_i$ and the unknown matrix $X$. What is more, we show that $k > \underline{\dim}_B(S)$ measurements also suffice for recovery from measurements
taken with rank-one $A_i$; again, this applies to a.a. rank-one $A_i$. Particularizing our results to low-rank matrices $X$, we show that $X$ of rank at most $r$ can be recovered from $k > (m + n - r)r$ measurements taken with either general $A_i$ or with rank-one $A_i$. Perhaps surprisingly, it turns out that, depending on the specific distribution of the low-rank matrix $X$, even fewer than $(m + n - r)r$ measurements can suffice. We construct a class of examples that illuminates this phenomenon.

Notation: Roman letters $A, B, \ldots$ designate deterministic matrices and $a, b, \ldots$ deterministic vectors. Boldface letters $\mathbf{A}, \mathbf{B}, \ldots$ and $\mathbf{a}, \mathbf{b}, \ldots$ denote random matrices and vectors, respectively. For the distribution of a random matrix $\mathbf{A}$ we write $\mu_{\mathbf{A}}$, and we use $\mu_{\mathbf{a}}$ to designate the distribution of a random vector $\mathbf{a}$. The superscript $T$ stands for transposition. For $A = (a_1, \ldots, a_n) \in \mathbb{R}^{m \times n}$ we let $\mathrm{vec}(A) = (a_1^T, \ldots, a_n^T)^T$. For a rank-$r$ matrix $A \in \mathbb{R}^{m \times n}$ with ordered singular values $\sigma_1(A) \geq \ldots \geq \sigma_r(A)$, we set $\pi(A) = \prod_{i=1}^{r} \sigma_i(A)$. For a matrix $A$, $\mathrm{tr}(A)$ denotes its trace. For matrices $A, B$ of the same dimensions, $\langle A, B \rangle = \mathrm{tr}(A^T B)$ is the trace inner product between $A$ and $B$. We write $\|A\|_2 = \sqrt{\langle A, A \rangle}$ for the Euclidean norm of the matrix $A$. For the Euclidean space $(\mathbb{R}^k, \|\cdot\|_2)$, we denote the open ball of radius $s$ centered at $u \in \mathbb{R}^k$ by $\mathcal{B}_k(u, s)$; $V(k, s)$ and $A(k - 1, s)$ stand for its volume and the surface area of its closure, respectively. Similarly, for the Euclidean space $(\mathbb{R}^{m \times n}, \|\cdot\|_2)$, we denote the open ball of radius $s$ centered at $A \in \mathbb{R}^{m \times n}$ by $\mathcal{B}_{m \times n}(A, s)$. We write $\mathcal{M}_r^{m \times n}$ and $\mathcal{N}_r^{m \times n}$ for the sets of matrices $A \in \mathbb{R}^{m \times n}$ with $\mathrm{rank}(A) \leq r$ and $\mathrm{rank}(A) = r$, respectively.

II. ALMOST LOSSLESS MATRIX COMPLETION

We start by formulating the almost lossless matrix completion framework.

Definition 1.
For a random matrix $X \in \mathbb{R}^{m \times n}$ of arbitrary distribution $\mu_X$ with Lebesgue decomposition $\mu_X = \mu_X^c + \mu_X^d + \mu_X^s$ (continuous, discrete, and singular components, respectively), an $(m \times n, k)$ code consists of (i) linear measurements $(\langle A_1, \cdot \rangle, \ldots, \langle A_k, \cdot \rangle)^T : \mathbb{R}^{m \times n} \to \mathbb{R}^k$; and (ii) a measurable decoder $g : \mathbb{R}^k \to \mathbb{R}^{m \times n}$. For given measurement matrices $A_i$, we say that a decoder $g$ achieves error probability $\varepsilon$ if $P[g((\langle A_1, X \rangle, \ldots, \langle A_k, X \rangle)^T) \neq X] \leq \varepsilon$.

Definition 2. For $\varepsilon \geq 0$, we call a nonempty bounded set $S \subseteq \mathbb{R}^{m \times n}$ an $\varepsilon$-support set of the random matrix $X \in \mathbb{R}^{m \times n}$ if $P[X \in S] \geq 1 - \varepsilon$.

Definition 3. (Minkowski dimension) Let $S$ be a nonempty bounded set in $\mathbb{R}^{m \times n}$. The lower Minkowski dimension of $S$ is defined as

$$\underline{\dim}_B(S) = \liminf_{\rho \to 0} \frac{\log N_S(\rho)}{\log \frac{1}{\rho}}$$

and the upper Minkowski dimension as

$$\overline{\dim}_B(S) = \limsup_{\rho \to 0} \frac{\log N_S(\rho)}{\log \frac{1}{\rho}}$$

where $N_S(\rho)$ denotes the covering number of $S$ given by

$$N_S(\rho) = \min\Big\{k \in \mathbb{N} : S \subseteq \bigcup_{i \in \{1, \ldots, k\}} \mathcal{B}_{m \times n}(M_i, \rho),\ M_i \in \mathbb{R}^{m \times n}\Big\}.$$

If $\underline{\dim}_B(S) = \overline{\dim}_B(S) =: \dim_B(S)$, we simply say that $\dim_B(S)$ is the Minkowski dimension of $S$. (This quantity is sometimes also referred to as box-counting dimension, which is the origin of the subscript B in the notation $\dim_B(\cdot)$.)

III. MAIN RESULTS

The following result formalizes the statement that the operationally relevant description complexity is given by the lower Minkowski dimension of $\varepsilon$-support sets of $X$.

Theorem 1. Let $S \subseteq \mathbb{R}^{m \times n}$ be an $\varepsilon$-support set of $X \in \mathbb{R}^{m \times n}$. Then, for Lebesgue a.a. measurement matrices $A_i$, $i = 1, \ldots, k$, there exists a decoder achieving error probability $\varepsilon$, provided that $k > \underline{\dim}_B(S)$.

Proof. See Section V.

Remark 1. The central conceptual element in the proof of Theorem 1 is the following probabilistic null-space property, first reported in [11] in the context of almost lossless analog signal separation. For a.a. measurement matrices $A_i$, $i = 1, \ldots, k$, the kernel of the mapping $X \mapsto (\langle A_1, X \rangle, \ldots, \langle A_k, X \rangle)^T$ has dimension $mn - k$. If the lower Minkowski dimension of a set $S$ is smaller than $k$, the set $S$ intersects the kernel of this mapping at most trivially.
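Definition 3 can be explored numerically. The sketch below (an illustration, not from the paper; it uses axis-aligned boxes in place of balls, which changes the covering number only by a constant factor and hence leaves the dimension unchanged) estimates the Minkowski dimension of a bounded line segment of $2 \times 2$ matrices, whose dimension is 1:

```python
import numpy as np

def covering_number(points, rho):
    # Number of axis-aligned boxes of side rho that the set touches --
    # a proxy for the covering number N_S(rho) in Definition 3.
    return len({tuple(np.floor(p / rho).astype(int)) for p in points})

# A bounded one-parameter family of 2x2 matrices (a segment, dim_B = 1),
# represented by the vectorized matrices as points in R^4.
t = np.linspace(0.0, 1.0, 200_000)
points = np.stack([t, 2.0 * t, -t, 0.5 * t], axis=1)

# log N_S(rho) / log(1/rho) should approach 1 as rho -> 0.
for rho in (0.1, 0.01, 0.001):
    N = covering_number(points, rho)
    print(rho, N, np.log(N) / np.log(1.0 / rho))
```

The convergence is slow (the ratio overshoots 1 at coarse scales), which is why the definition takes a limit inferior rather than the value at any fixed radius.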
What is remarkable here is that the notions of Euclidean dimension (for the kernel of the mapping) and of lower Minkowski dimension (for $S$) are compatible.

We next particularize Theorem 1 to low-rank matrices. To this end, we first establish an upper bound on $\overline{\dim}_B(S)$ for nonempty bounded subsets of $\mathcal{M}_r^{m \times n}$.

Lemma 1. Let $S \subseteq \mathcal{M}_r^{m \times n}$ be a nonempty bounded set. Then $\overline{\dim}_B(S) \leq (m + n - r)r$.

Proof. We can decompose $\mathcal{M}_r^{m \times n}$ according to

$$\mathcal{M}_r^{m \times n} = \bigcup_{i=0}^{r} \mathcal{N}_i^{m \times n}.$$

By [12, Ex. 5.30], $\mathcal{N}_i^{m \times n}$ is an embedded submanifold of $\mathbb{R}^{m \times n}$ of dimension $(m + n - i)i$, $i = 1, \ldots, r$. Let $I = \{i \in \{1, \ldots, r\} : S \cap \mathcal{N}_i^{m \times n} \neq \emptyset\}$. Then, for each $i \in I$, $S \cap \mathcal{N}_i^{m \times n}$ is a nonempty bounded set and, therefore, $\overline{\dim}_B(S \cap \mathcal{N}_i^{m \times n})$ is well-defined. By [13, Sec. 3.2, Properties (i) and (ii)], $\overline{\dim}_B(S \cap \mathcal{N}_i^{m \times n}) \leq (m + n - i)i$, $i \in I$. Since the upper Minkowski dimension is finitely stable [13, Sec. 3.2, Property (iii)], we get

$$\overline{\dim}_B(S) = \max_{i \in I} \overline{\dim}_B(S \cap \mathcal{N}_i^{m \times n}) \leq (m + n - r)r$$

where in the last step we used the monotonicity of $f(s) = (m + n - s)s$ on $s \in [0, (m + n)/2]$ together with $r \leq (m + n)/2$, which in turn follows from $r \leq \min(m, n)$.
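The dimension $(m + n - r)r$ of $\mathcal{N}_r^{m \times n}$ appearing in Lemma 1 can also be made plausible by a heuristic parameter count (a sketch for intuition, not part of the proof above):

```latex
% Every X of rank r factors as X = U V^T with U \in R^{m x r}, V \in R^{n x r},
% giving mr + nr real parameters. The factorization is invariant under
% (U, V) \mapsto (U G^{-T}, V G) for any invertible G \in GL(r), since
% (U G^{-T})(V G)^T = U G^{-T} G^T V^T = U V^T.
% Removing the r^2 parameters of this GL(r) gauge freedom leaves
\dim \mathcal{N}_r^{m \times n} = mr + nr - r^2 = (m + n - r)\, r.
```

This is the same count that underlies the thresholds of [9] discussed in the introduction.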
We can now put the pieces together to obtain the desired statement on low-rank matrices.

Remark 2. Lemma 1 together with $\underline{\dim}_B(\cdot) \leq \overline{\dim}_B(\cdot)$, when used in Theorem 1, implies that for $X \in \mathcal{M}_r^{m \times n}$ and every $\varepsilon > 0$, there exists a decoder that achieves error probability $\varepsilon$ for Lebesgue a.a. measurement matrices $A_i$, $i = 1, \ldots, k$, provided that $k > (m + n - r)r$.

While the sufficient condition $k > (m + n - r)r$ in Remark 2 is intuitively appealing, as $(m + n - r)r$ is the dimension of the manifold $\mathcal{N}_r^{m \times n}$, it is actually the lower Minkowski dimension of $\varepsilon$-support sets of $X \in \mathcal{M}_r^{m \times n}$ that is of operational significance. Specifically, depending on the distribution of $X$, a smaller number of measurements (than $(m + n - r)r$) may suffice for recovery of $X$ with probability of error at most $\varepsilon$. The following example illuminates this phenomenon.

Example 1. Let $X = X_1^T X_2 \in \mathcal{M}_r^{m \times n}$, where $X_1 \in \mathbb{R}^{r \times m}$ and $X_2 \in \mathbb{R}^{r \times n}$ are independent. Suppose that $X_1$ has $l_1$ columns, at positions drawn uniformly at random, containing i.i.d. Gaussian entries, with all other columns equal to zero, and that $X_2$ likewise has $l_2$ columns, at positions drawn uniformly at random, containing i.i.d. Gaussian entries, with all other columns equal to zero. Suppose further that $r \leq l_1 < m/2$ and $r \leq l_2 \leq n/2 - 1/r$. The assumptions $l_i \geq r$, $i = 1, 2$, guarantee that $P[\mathrm{rank}(X) = r] = 1$. Next, we construct an $\varepsilon$-support set $T$ for $X$ with $\underline{\dim}_B(T) \leq (l_1 + l_2)r$, which by Theorem 1 together with $(l_1 + l_2)r < (m + n)r/2 - 1 \leq (m + n - r)r - 1$ proves that we can recover the rank-$r$ matrix $X$ with probability of error at most $\varepsilon$ from strictly fewer than $(m + n - r)r$ measurements. Let $\mathcal{A}_l^{r \times m} \subseteq \mathcal{M}_r^{r \times m}$ be the set of $r \times m$ matrices with no more than $l$ nonzero columns. Choose $L \in \mathbb{N}$ sufficiently large for (i) $S_1 = \mathcal{A}_{l_1}^{r \times m} \cap \mathcal{B}_{r \times m}(0, L)$ to be an $\varepsilon/2$-support set of $X_1$ and (ii) $S_2 = \mathcal{A}_{l_2}^{r \times n} \cap \mathcal{B}_{r \times n}(0, L)$ to be an $\varepsilon/2$-support set of $X_2$. By [13, Sec. 3.2, Properties (i) and (iii)], we have

$$\overline{\dim}_B(S_i) = l_i r \quad (2)$$

which is simply the maximum number of nonzero entries of $X_i \in S_i$, $i = 1, 2$.
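The construction in Example 1 can be sketched numerically. The sizes and the seed below are illustrative assumptions, chosen to satisfy the example's conditions $r \leq l_1 < m/2$ and $r \leq l_2 \leq n/2 - 1/r$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r, l1, l2 = 12, 12, 2, 3, 4  # satisfies r <= l1 < m/2 and r <= l2 <= n/2 - 1/r

def column_sparse_factor(cols, l):
    # r x cols factor with l i.i.d. Gaussian columns at random positions
    # and all other columns zero -- the structure of X_1 and X_2.
    F = np.zeros((r, cols))
    support = rng.choice(cols, size=l, replace=False)
    F[:, support] = rng.standard_normal((r, l))
    return F

X1 = column_sparse_factor(m, l1)
X2 = column_sparse_factor(n, l2)
X = X1.T @ X2  # rank-r matrix in R^{m x n} (almost surely, since l1, l2 >= r)

# X is described by (l1 + l2) r nonzero parameters (plus column positions),
# strictly fewer than the manifold dimension (m + n - r) r.
print(np.linalg.matrix_rank(X), (l1 + l2) * r, (m + n - r) * r)
```

With these numbers, $(l_1 + l_2)r = 14$ while $(m + n - r)r = 44$, illustrating the gap the example exploits.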
Set $T = \{X_1^T X_2 : X_i \in S_i,\ i = 1, 2\}$. Then,

$$P[X \in T] = P[X_1 \in S_1, X_2 \in S_2] = P[X_1 \in S_1]\, P[X_2 \in S_2] \geq 1 - \varepsilon.$$

The triangle inequality implies that for all $X_i, \tilde{X}_i \in S_i$, $i = 1, 2$, we have

$$\|X_1^T X_2 - \tilde{X}_1^T \tilde{X}_2\|_2 \leq \|X_1^T X_2 - \tilde{X}_1^T X_2\|_2 + \|\tilde{X}_1^T X_2 - \tilde{X}_1^T \tilde{X}_2\|_2 \leq L\big(\|X_1 - \tilde{X}_1\|_2 + \|X_2 - \tilde{X}_2\|_2\big) \quad (3)$$

where we used $S_1 \subseteq \mathcal{B}_{r \times m}(0, L)$ and $S_2 \subseteq \mathcal{B}_{r \times n}(0, L)$. Let $N_{S_i}(\rho)$ be the covering number of $S_i$, $i = 1, 2$. We can cover $S_i$ by $N_{S_i}(\rho)$ balls of radius $\rho$ with centers $X_i^{(j_i)}$, $j_i = 1, \ldots, N_{S_i}(\rho)$, $i = 1, 2$. Therefore, (3) implies that $T$ can be covered by $N_{S_1}(\rho) N_{S_2}(\rho)$ balls of radius $2L\rho$ centered at $(X_1^{(j_1)})^T X_2^{(j_2)}$, $j_i = 1, \ldots, N_{S_i}(\rho)$, $i = 1, 2$. This yields $N_T(2L\rho) \leq N_{S_1}(\rho) N_{S_2}(\rho)$, and, since rescaling the covering radius by the constant $2L$ does not affect Minkowski dimensions, we finally get

$$\underline{\dim}_B(T) = \liminf_{\rho \to 0} \frac{\log N_T(\rho)}{\log \frac{1}{\rho}} \leq \limsup_{\rho \to 0} \frac{\log N_{S_1}(\rho)}{\log \frac{1}{\rho}} + \limsup_{\rho \to 0} \frac{\log N_{S_2}(\rho)}{\log \frac{1}{\rho}} = l_1 r + l_2 r$$

where we used (2) in the last step.

Remark 3. The derivation of the recovery thresholds in [9] is also based on a null-space property similar to the one discussed in Remark 1. The relevant dimension in [9] is the dimension $(m + n - r)r$ of the manifold $\mathcal{N}_r^{m \times n}$. Example 1 above, however, shows that $k < (m + n - r)r$ measurements can suffice for recovery of rank-$r$ matrices, thereby corroborating the operational significance of the lower Minkowski dimension of $\varepsilon$-support sets of $X$.

IV. RANK-ONE MEASUREMENT MATRICES

Rank-one measurement matrices, i.e., matrices $A_i = a_i b_i^T$ with $a_i \in \mathbb{R}^m$ and $b_i \in \mathbb{R}^n$, $i = 1, \ldots, k$, are attractive as they require less storage space than general measurement matrices and can also be applied faster. Interestingly, Theorem 1 continues to hold for rank-one measurement matrices, although they exhibit much less richness than general measurement matrices. The technical challenges in establishing this result are quite different from those encountered in the case of general measurement matrices. In particular, we need a stronger concentration of measure inequality (cf. Lemma 4).

Theorem 2. Let $S \subseteq \mathbb{R}^{m \times n}$ be an $\varepsilon$-support set of $X \in \mathbb{R}^{m \times n}$. Then, for Lebesgue a.a.
$a_i \in \mathbb{R}^m$ and $b_i \in \mathbb{R}^n$ and corresponding measurement matrices $A_i = a_i b_i^T$, $i = 1, \ldots, k$, there exists a decoder achieving error probability $\varepsilon$, provided that $k > \underline{\dim}_B(S)$.

Proof. See Section V.

Remark 4. Example 1 can be shown to carry over to rank-one measurement matrices $A_i$.

Remark 5. Theorem 2, when used in combination with Lemma 1, implies that for $X \in \mathcal{M}_r^{m \times n}$ and every $\varepsilon > 0$, there exists a decoder achieving error probability $\varepsilon$ for Lebesgue a.a. $a_i \in \mathbb{R}^m$ and $b_i \in \mathbb{R}^n$, provided that $k > (m + n - r)r$. In contrast, the threshold $k \geq C(m + n)r$ in [8] for rank-one measurements requires $a_i$ and $b_i$ to be independent random vectors containing i.i.d. Gaussian or sub-Gaussian entries. In addition, the constant $C$ in [8] remains unspecified.

V. PROOFS OF THEOREMS 1 AND 2

For both proofs, we first construct a measurable map $g : \mathbb{R}^k \to \mathbb{R}^{m \times n}$ such that

$$P\big[g\big((\langle A_1, X \rangle, \ldots, \langle A_k, X \rangle)^T\big) \neq X\big] \leq P\big[\exists Z \in S_X \setminus \{0\} : (\langle A_1, Z \rangle, \ldots, \langle A_k, Z \rangle)^T = 0,\ X \in S\big] + \varepsilon \quad (4)$$
with $S_X = \{W - X : W \in S\}$ for $X \in S$. The proofs are then concluded by showing that

$$P\big[\exists Z \in S_X \setminus \{0\} : (\langle A_1, Z \rangle, \ldots, \langle A_k, Z \rangle)^T = 0,\ X \in S\big] = 0 \quad (5)$$

for Lebesgue a.a. matrices $A_i \in \mathbb{R}^{m \times n}$, $i = 1, \ldots, k$, in the case of Theorem 1, and for Lebesgue a.a. vectors $a_i \in \mathbb{R}^m$ and $b_i \in \mathbb{R}^n$ with $A_i = a_i b_i^T$, $i = 1, \ldots, k$, in the case of Theorem 2.

Proof of (4): Let $S \subseteq \mathbb{R}^{m \times n}$ be an $\varepsilon$-support set of $X$ with $\underline{\dim}_B(S) < k$. We define a measurable map $g$ as follows:

$$g(y) = \begin{cases} Z, & \text{if } \{W \in S : (\langle A_1, W \rangle, \ldots, \langle A_k, W \rangle)^T = y\} = \{Z\} \\ E, & \text{else} \end{cases}$$

where $E$ is an arbitrary, but fixed, matrix in $\mathbb{R}^{m \times n} \setminus S$ (used to declare a decoding error). Then, we have

$$P\big[g\big((\langle A_1, X \rangle, \ldots, \langle A_k, X \rangle)^T\big) \neq X\big] \leq P\big[g\big((\langle A_1, X \rangle, \ldots, \langle A_k, X \rangle)^T\big) \neq X,\ X \in S\big] + P[X \notin S]$$
$$\leq P\big[g\big((\langle A_1, X \rangle, \ldots, \langle A_k, X \rangle)^T\big) \neq X,\ X \in S\big] + \varepsilon \quad (6)$$
$$= P\big[g\big((\langle A_1, X \rangle, \ldots, \langle A_k, X \rangle)^T\big) = E,\ X \in S\big] + \varepsilon \quad (7)$$
$$= P\big[\exists Z \in S_X \setminus \{0\} : (\langle A_1, Z \rangle, \ldots, \langle A_k, Z \rangle)^T = 0,\ X \in S\big] + \varepsilon$$

where (6) is a consequence of $S$ being an $\varepsilon$-support set, and in (7) we used that the decoder declares an error if and only if $|\{W \in S : (\langle A_1, W \rangle, \ldots, \langle A_k, W \rangle)^T = y\}| > 1$ for $y = (\langle A_1, X \rangle, \ldots, \langle A_k, X \rangle)^T$ with $X \in S$.

Finishing the proof of Theorem 1: Let $s > 0$ and suppose that the $A_i$, $i = 1, \ldots, k$, are independent and uniformly distributed on $\mathcal{B}_{m \times n}(0, s)$. Then, we have

$$P\big[\exists Z \in S_X \setminus \{0\} : (\langle A_1, Z \rangle, \ldots, \langle A_k, Z \rangle)^T = 0,\ X \in S\big] = \int_S P\big[\exists Z \in S_X \setminus \{0\} : (\langle A_1, Z \rangle, \ldots, \langle A_k, Z \rangle)^T = 0\big]\, d\mu_X \quad (8)$$
$$= 0 \quad (9)$$

where (8) is a consequence of Fubini's theorem for nonnegative measurable functions and (9) follows from Lemma 2 below. With $\mathbb{R}^{m \times n} = \bigcup_{s \in \mathbb{N}} \mathcal{B}_{m \times n}(0, s)$ and since $s$ is arbitrary, (5) holds for Lebesgue a.a. measurement matrices $A_i$, which concludes the proof of Theorem 1.

Finishing the proof of Theorem 2: Let $s > 0$ and suppose that $A = [a_1, \ldots, a_k] \in \mathbb{R}^{m \times k}$ and $B = [b_1, \ldots, b_k] \in \mathbb{R}^{n \times k}$ are independent random matrices with columns $a_i$ independent and uniformly distributed on $\mathcal{B}_m(0, s)$ and columns $b_i$ independent and uniformly distributed on $\mathcal{B}_n(0, s)$. Then, we have
$$P\big[\exists Z \in S_X \setminus \{0\} : (a_1^T Z b_1, \ldots, a_k^T Z b_k)^T = 0,\ X \in S\big] = \int_S P\big[\exists Z \in S_X \setminus \{0\} : (a_1^T Z b_1, \ldots, a_k^T Z b_k)^T = 0\big]\, d\mu_X \quad (10)$$
$$= 0 \quad (11)$$

where (10) is a consequence of Fubini's theorem for nonnegative measurable functions and (11) follows from Lemma 3 below. Again, with $\mathbb{R}^l = \bigcup_{s \in \mathbb{N}} \mathcal{B}_l(0, s)$ and since $s$ is arbitrary, (5) holds for Lebesgue a.a. vectors $a_i \in \mathbb{R}^m$ and $b_i \in \mathbb{R}^n$, thereby finishing the proof of Theorem 2.

Lemma 2. Let $s > 0$ and let $A_1, \ldots, A_k$ be independent and uniformly distributed on $\mathcal{B}_{m \times n}(0, s)$. Suppose that $U \subseteq \mathbb{R}^{m \times n}$ is a nonempty bounded set with $\underline{\dim}_B(U) < k$. Then, we have

$$P\big[\exists X \in U \setminus \{0\} : (\langle A_1, X \rangle, \ldots, \langle A_k, X \rangle)^T = 0\big] = 0. \quad (12)$$

Proof. Follows from rewriting the trace inner products $\langle A_i, X \rangle$, $i = 1, \ldots, k$, as inner products between vectors in $\mathbb{R}^{mn}$ and subsequent application of [11, Prop. 1].

Lemma 3. Let $s > 0$ and take $A = [a_1, \ldots, a_k] \in \mathbb{R}^{m \times k}$ and $B = [b_1, \ldots, b_k] \in \mathbb{R}^{n \times k}$ to be independent random matrices with columns $a_i$, $i = 1, \ldots, k$, independent and uniformly distributed on $\mathcal{B}_m(0, s)$ and columns $b_i$, $i = 1, \ldots, k$, independent and uniformly distributed on $\mathcal{B}_n(0, s)$. Suppose that $U \subseteq \mathbb{R}^{m \times n}$ is a nonempty bounded set with $\underline{\dim}_B(U) < k$. Then, we have

$$P := P\big[\exists X \in U \setminus \{0\} : (a_1^T X b_1, \ldots, a_k^T X b_k)^T = 0\big] = 0.$$

Proof. Let $R = \max_{X \in U} \mathrm{rank}(X)$ and set

$$U_{L,r} = \Big\{X \in U : \pi(X) > \frac{1}{L},\ \sigma_1(X) < L,\ \mathrm{rank}(X) = r\Big\}$$

for $L \in \mathbb{N}$ and $r = 1, \ldots, R$. By the union bound, we have

$$P \leq \sum_{L \in \mathbb{N}} \sum_{r=1}^{R} P_{L,r} \quad (13)$$

where $P_{L,r} = P\big[\exists X \in U_{L,r} : (a_1^T X b_1, \ldots, a_k^T X b_k)^T = 0\big]$. We now prove by contradiction that $P_{L,r} = 0$ for all $L \in \mathbb{N}$ and all $r \in \{1, \ldots, R\}$. Suppose that there exist an $L \in \mathbb{N}$ and an $r \in \{1, \ldots, R\}$ such that $P_{L,r} > 0$ (by definition, $P_{L,r} \geq 0$). For this pair $(L, r)$, since $P_{L,r}$ does not depend on $\rho$, we would then have

$$\liminf_{\rho \to 0} \frac{\log P_{L,r}}{\log \frac{1}{\rho}} = 0. \quad (14)$$

For $\rho > 0$, let $N_{U_{L,r}}(\rho)$ be the covering number of the set $U_{L,r}$ and denote corresponding covering balls centered at $M_i(\rho) \in \mathbb{R}^{m \times n}$ by $\mathcal{B}_{m \times n}(M_i(\rho), \rho)$, $i = 1, \ldots, N_{U_{L,r}}(\rho)$. We now fix matrices $X_i(\rho) \in \mathcal{B}_{m \times n}(M_i(\rho), \rho) \cap U_{L,r}$, $i = 1, \ldots, N_{U_{L,r}}(\rho)$ (discarding covering balls that do not intersect $U_{L,r}$). Since $X_i(\rho) \in \mathcal{B}_{m \times n}(M_i(\rho), \rho)$, we have

$$\mathcal{B}_{m \times n}(M_i(\rho), \rho) \subseteq \mathcal{B}_{m \times n}(X_i(\rho), 2\rho), \quad i = 1, \ldots, N_{U_{L,r}}(\rho), \quad (15)$$
by the triangle inequality. Using the union bound together with (15), we get

$$P_{L,r} \leq \sum_{i=1}^{N_{U_{L,r}}(\rho)} P\big[\exists X \in \mathcal{B}_{m \times n}(X_i(\rho), 2\rho) : (a_1^T X b_1, \ldots, a_k^T X b_k)^T = 0\big] \leq \sum_{i=1}^{N_{U_{L,r}}(\rho)} P\big[\exists X \in \mathcal{B}_{m \times n}(X_i(\rho), 2\rho) : \|(a_1^T X b_1, \ldots, a_k^T X b_k)^T\|_2 \leq \rho\big], \quad \rho > 0. \quad (16)$$

Now, for $a_i \in \mathcal{B}_m(0, s)$, $b_i \in \mathcal{B}_n(0, s)$, and $X \in \mathcal{B}_{m \times n}(X_i(\rho), 2\rho)$, we have

$$\|(a_1^T X_i(\rho) b_1, \ldots, a_k^T X_i(\rho) b_k)^T\|_2 \leq \|(a_1^T (X_i(\rho) - X) b_1, \ldots, a_k^T (X_i(\rho) - X) b_k)^T\|_2 + \|(a_1^T X b_1, \ldots, a_k^T X b_k)^T\|_2 \leq \sqrt{\sum_{j=1}^{k} \|a_j\|_2^2\, \|X - X_i(\rho)\|_2^2\, \|b_j\|_2^2} + \|(a_1^T X b_1, \ldots, a_k^T X b_k)^T\|_2 \leq 2s^2\sqrt{k}\,\rho + \|(a_1^T X b_1, \ldots, a_k^T X b_k)^T\|_2, \quad \rho > 0. \quad (17)$$

Inserting (17) into (16) allows us to further upper-bound $P_{L,r}$ according to

$$P_{L,r} \leq \sum_{i=1}^{N_{U_{L,r}}(\rho)} P\big[\|(a_1^T X_i(\rho) b_1, \ldots, a_k^T X_i(\rho) b_k)^T\|_2 \leq (1 + 2s^2\sqrt{k})\rho\big] \leq N_{U_{L,r}}(\rho)\, g(L, r, k, m, n, s, \rho)^k\, \rho^k, \quad \rho > 0 \quad (18)$$

where

$$g(L, r, k, m, n, s, \rho) = \frac{2(2s)^{m+n-r-1}(1 + 2s^2\sqrt{k})L}{V(m, s)\, V(n, s)} \cdot \begin{cases} \log\max\Big(\frac{s^2 L}{(1 + 2s^2\sqrt{k})\rho}, 1\Big), & \text{if } r = 1 \\ 2^r\Big((1 + 2s^2\sqrt{k})^{r-1} s + \frac{A(r-1, 1)(sL)^{r-1}}{r-1}\Big), & \text{if } r > 1. \end{cases}$$

Here, we applied the concentration of measure inequality in Lemma 4 below with $\delta = (1 + 2s^2\sqrt{k})\rho$ and used the facts that $1/\pi(X_i(\rho)) < L$, $\sigma_1(X_i(\rho)) < L$, and $\mathrm{rank}(X_i(\rho)) = r$ (recall that the $X_i(\rho)$ were chosen in the set $U_{L,r}$). With the upper bound on $P_{L,r}$ in (18), we now get

$$\liminf_{\rho \to 0} \frac{\log P_{L,r}}{\log \frac{1}{\rho}} \leq \liminf_{\rho \to 0} \frac{\log N_{U_{L,r}}(\rho) - k \log \frac{1}{\rho} + k \log g(L, r, k, m, n, s, \rho)}{\log \frac{1}{\rho}} = \liminf_{\rho \to 0} \frac{\log N_{U_{L,r}}(\rho)}{\log \frac{1}{\rho}} - k \leq \underline{\dim}_B(U) - k \quad (19)$$
$$< 0 \quad (20)$$

where we used $\lim_{\rho \to 0} \log g(L, r, k, m, n, s, \rho)/\log\frac{1}{\rho} = 0$, (19) follows from $U_{L,r} \subseteq U$, and in the last step we used $\underline{\dim}_B(U) < k$, by assumption. Since (20) contradicts (14), we must have $P_{L,r} = 0$ for all $L \in \mathbb{N}$ and all $r \in \{1, \ldots, R\}$. By (13), this establishes $P = 0$.

Lemma 4. Let $A = [a_1, \ldots, a_k] \in \mathbb{R}^{m \times k}$ and $B = [b_1, \ldots, b_k] \in \mathbb{R}^{n \times k}$ be independent random matrices, with columns $a_i$, $i = 1, \ldots, k$, independent and uniformly distributed on $\mathcal{B}_m(0, s)$ and columns $b_i$, $i = 1, \ldots, k$, independent and uniformly distributed on $\mathcal{B}_n(0, s)$. Suppose that $X \in \mathbb{R}^{m \times n}$ with $r = \mathrm{rank}(X) > 0$.
Then, we have

$$P\big[\|(a_1^T X b_1, \ldots, a_k^T X b_k)^T\|_2 \leq \delta\big] \leq \delta^k f(X, s, \delta)^k$$

with

$$f(X, s, \delta) = \frac{2(2s)^{m+n-r-1}}{\pi(X)\, V(m, s)\, V(n, s)} \cdot \begin{cases} \log\max\Big(\frac{s^2 \sigma_1(X)}{\delta}, 1\Big), & \text{if } r = 1 \\ 2^r\Big(\delta^{r-1} s + \frac{A(r-1, 1)(s\sigma_1(X))^{r-1}}{r-1}\Big), & \text{if } r > 1. \end{cases} \quad (21)$$

Proof. Omitted due to space limitations.

REFERENCES

[1] E. J. Candès and Y. Plan, "Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements," IEEE Trans. Inf. Theory, vol. 57, no. 4, Apr. 2011.
[2] D. Gross, "Recovering low-rank matrices from few coefficients in any basis," IEEE Trans. Inf. Theory, vol. 57, no. 3, Mar. 2011.
[3] E. J. Candès and B. Recht, "Exact matrix completion via convex optimization," Found. Comput. Math., vol. 9, no. 6, 2009.
[4] E. J. Candès and T. Tao, "The power of convex relaxation: Near-optimal matrix completion," IEEE Trans. Inf. Theory, vol. 56, no. 5, May 2010.
[5] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, 2010.
[6] B. Recht, "A simpler approach to matrix completion," J. Mach. Learn. Res., vol. 12, 2011.
[7] T. T. Cai and A. Zhang, "Sparse representation of a polytope and recovery of sparse signals and low-rank matrices," IEEE Trans. Inf. Theory, vol. 60, no. 1, Jan. 2014.
[8] T. T. Cai and A. Zhang, "ROP: Matrix recovery via rank-one projections," Ann. Stat., vol. 43, no. 1, 2015.
[9] Y. C. Eldar, D. Needell, and Y. Plan, "Uniqueness conditions for low-rank matrix recovery," Appl. Comput. Harmon. Anal., vol. 33, no. 2, 2012.
[10] Y. Wu and S. Verdú, "Rényi information dimension: Fundamental limits of almost lossless analog compression," IEEE Trans. Inf. Theory, vol. 56, no. 8, Aug. 2010.
[11] D. Stotz, E. Riegler, and H. Bölcskei, "Almost lossless analog signal separation," in Proc. IEEE Int. Symp. Inf. Theory, Istanbul, Turkey, Jul. 2013.
[12] J. M. Lee, Introduction to Smooth Manifolds, 1st ed. New York, NY: Springer, 2003.
[13] K. Falconer, Fractal Geometry, 1st ed. New York, NY: Wiley, 1990.
Solution-recovery in l 1 -norm for non-square linear systems: deterministic conditions and open questions Yin Zhang Technical Report TR05-06 Department of Computational and Applied Mathematics Rice University,
More informationMatrix Completion from a Few Entries
Matrix Completion from a Few Entries Raghunandan H. Keshavan and Sewoong Oh EE Department Stanford University, Stanford, CA 9434 Andrea Montanari EE and Statistics Departments Stanford University, Stanford,
More informationExistence and Uniqueness
Chapter 3 Existence and Uniqueness An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect
More informationTwo Results on the Schatten p-quasi-norm Minimization for Low-Rank Matrix Recovery
Two Results on the Schatten p-quasi-norm Minimization for Low-Rank Matrix Recovery Ming-Jun Lai, Song Li, Louis Y. Liu and Huimin Wang August 14, 2012 Abstract We shall provide a sufficient condition to
More informationOn the l 1 -Norm Invariant Convex k-sparse Decomposition of Signals
On the l 1 -Norm Invariant Convex -Sparse Decomposition of Signals arxiv:1305.6021v2 [cs.it] 11 Nov 2013 Guangwu Xu and Zhiqiang Xu Abstract Inspired by an interesting idea of Cai and Zhang, we formulate
More informationDegrees of Freedom Region of the Gaussian MIMO Broadcast Channel with Common and Private Messages
Degrees of Freedom Region of the Gaussian MIMO Broadcast hannel with ommon and Private Messages Ersen Ekrem Sennur Ulukus Department of Electrical and omputer Engineering University of Maryland, ollege
More informationConstructing Explicit RIP Matrices and the Square-Root Bottleneck
Constructing Explicit RIP Matrices and the Square-Root Bottleneck Ryan Cinoman July 18, 2018 Ryan Cinoman Constructing Explicit RIP Matrices July 18, 2018 1 / 36 Outline 1 Introduction 2 Restricted Isometry
More information1 Regression with High Dimensional Data
6.883 Learning with Combinatorial Structure ote for Lecture 11 Instructor: Prof. Stefanie Jegelka Scribe: Xuhong Zhang 1 Regression with High Dimensional Data Consider the following regression problem:
More information1-Bit Matrix Completion
1-Bit Matrix Completion Mark A. Davenport School of Electrical and Computer Engineering Georgia Institute of Technology Yaniv Plan Mary Wootters Ewout van den Berg Matrix Completion d When is it possible
More informationRecovering any low-rank matrix, provably
Recovering any low-rank matrix, provably Rachel Ward University of Texas at Austin October, 2014 Joint work with Yudong Chen (U.C. Berkeley), Srinadh Bhojanapalli and Sujay Sanghavi (U.T. Austin) Matrix
More informationSTAT 200C: High-dimensional Statistics
STAT 200C: High-dimensional Statistics Arash A. Amini May 30, 2018 1 / 57 Table of Contents 1 Sparse linear models Basis Pursuit and restricted null space property Sufficient conditions for RNS 2 / 57
More informationMATRIX COMPLETION AND TENSOR RANK
MATRIX COMPLETION AND TENSOR RANK HARM DERKSEN Abstract. In this paper, we show that the low rank matrix completion problem can be reduced to the problem of finding the rank of a certain tensor. arxiv:1302.2639v2
More informationSection 3.9. Matrix Norm
3.9. Matrix Norm 1 Section 3.9. Matrix Norm Note. We define several matrix norms, some similar to vector norms and some reflecting how multiplication by a matrix affects the norm of a vector. We use matrix
More informationLecture 9: Matrix approximation continued
0368-348-01-Algorithms in Data Mining Fall 013 Lecturer: Edo Liberty Lecture 9: Matrix approximation continued Warning: This note may contain typos and other inaccuracies which are usually discussed during
More informationReconstruction from Anisotropic Random Measurements
Reconstruction from Anisotropic Random Measurements Mark Rudelson and Shuheng Zhou The University of Michigan, Ann Arbor Coding, Complexity, and Sparsity Workshop, 013 Ann Arbor, Michigan August 7, 013
More informationNotes 6 : First and second moment methods
Notes 6 : First and second moment methods Math 733-734: Theory of Probability Lecturer: Sebastien Roch References: [Roc, Sections 2.1-2.3]. Recall: THM 6.1 (Markov s inequality) Let X be a non-negative
More informationObservability of a Linear System Under Sparsity Constraints
2372 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 58, NO 9, SEPTEMBER 2013 Observability of a Linear System Under Sparsity Constraints Wei Dai and Serdar Yüksel Abstract Consider an -dimensional linear
More informationZeros of lacunary random polynomials
Zeros of lacunary random polynomials Igor E. Pritsker Dedicated to Norm Levenberg on his 60th birthday Abstract We study the asymptotic distribution of zeros for the lacunary random polynomials. It is
More informationNew Coherence and RIP Analysis for Weak. Orthogonal Matching Pursuit
New Coherence and RIP Analysis for Wea 1 Orthogonal Matching Pursuit Mingrui Yang, Member, IEEE, and Fran de Hoog arxiv:1405.3354v1 [cs.it] 14 May 2014 Abstract In this paper we define a new coherence
More informationIEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER 2011 7255 On the Performance of Sparse Recovery Via `p-minimization (0 p 1) Meng Wang, Student Member, IEEE, Weiyu Xu, and Ao Tang, Senior
More informationCompressed Sensing and Sparse Recovery
ELE 538B: Sparsity, Structure and Inference Compressed Sensing and Sparse Recovery Yuxin Chen Princeton University, Spring 217 Outline Restricted isometry property (RIP) A RIPless theory Compressed sensing
More informationUncertainty Relations and Sparse Signal Recovery for Pairs of General Signal Sets
Uncertainty Relations and Sparse Signal Recovery for Pairs of General Signal Sets Patrick Kuppinger, Student Member, IEEE, Giuseppe Durisi, Member, IEEE, and elmut Bölcskei, Fellow, IEEE Abstract We present
More informationLebesgue Measure on R n
8 CHAPTER 2 Lebesgue Measure on R n Our goal is to construct a notion of the volume, or Lebesgue measure, of rather general subsets of R n that reduces to the usual volume of elementary geometrical sets
More informationTHEOREMS, ETC., FOR MATH 515
THEOREMS, ETC., FOR MATH 515 Proposition 1 (=comment on page 17). If A is an algebra, then any finite union or finite intersection of sets in A is also in A. Proposition 2 (=Proposition 1.1). For every
More informationLow-Rank Matrix Recovery
ELE 538B: Mathematics of High-Dimensional Data Low-Rank Matrix Recovery Yuxin Chen Princeton University, Fall 2018 Outline Motivation Problem setup Nuclear norm minimization RIP and low-rank matrix recovery
More informationRui ZHANG Song LI. Department of Mathematics, Zhejiang University, Hangzhou , P. R. China
Acta Mathematica Sinica, English Series May, 015, Vol. 31, No. 5, pp. 755 766 Published online: April 15, 015 DOI: 10.1007/s10114-015-434-4 Http://www.ActaMath.com Acta Mathematica Sinica, English Series
More informationLeast squares regularized or constrained by L0: relationship between their global minimizers. Mila Nikolova
Least squares regularized or constrained by L0: relationship between their global minimizers Mila Nikolova CMLA, CNRS, ENS Cachan, Université Paris-Saclay, France nikolova@cmla.ens-cachan.fr SIAM Minisymposium
More information1-Bit Matrix Completion
1-Bit Matrix Completion Mark A. Davenport School of Electrical and Computer Engineering Georgia Institute of Technology Yaniv Plan Mary Wootters Ewout van den Berg Matrix Completion d When is it possible
More information1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer 11(2) (1989),
Real Analysis 2, Math 651, Spring 2005 April 26, 2005 1 Real Analysis 2, Math 651, Spring 2005 Krzysztof Chris Ciesielski 1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer
More informationarxiv: v1 [math.na] 26 Nov 2009
Non-convexly constrained linear inverse problems arxiv:0911.5098v1 [math.na] 26 Nov 2009 Thomas Blumensath Applied Mathematics, School of Mathematics, University of Southampton, University Road, Southampton,
More informationSPARSE signal representations have gained popularity in recent
6958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 10, OCTOBER 2011 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE Abstract The fundamental principle underlying
More informationNear Ideal Behavior of a Modified Elastic Net Algorithm in Compressed Sensing
Near Ideal Behavior of a Modified Elastic Net Algorithm in Compressed Sensing M. Vidyasagar Cecil & Ida Green Chair The University of Texas at Dallas M.Vidyasagar@utdallas.edu www.utdallas.edu/ m.vidyasagar
More informationHausdorff Measure. Jimmy Briggs and Tim Tyree. December 3, 2016
Hausdorff Measure Jimmy Briggs and Tim Tyree December 3, 2016 1 1 Introduction In this report, we explore the the measurement of arbitrary subsets of the metric space (X, ρ), a topological space X along
More informationRobust Principal Component Analysis
ELE 538B: Mathematics of High-Dimensional Data Robust Principal Component Analysis Yuxin Chen Princeton University, Fall 2018 Disentangling sparse and low-rank matrices Suppose we are given a matrix M
More informationLeast Sparsity of p-norm based Optimization Problems with p > 1
Least Sparsity of p-norm based Optimization Problems with p > Jinglai Shen and Seyedahmad Mousavi Original version: July, 07; Revision: February, 08 Abstract Motivated by l p -optimization arising from
More informationCourse 212: Academic Year Section 1: Metric Spaces
Course 212: Academic Year 1991-2 Section 1: Metric Spaces D. R. Wilkins Contents 1 Metric Spaces 3 1.1 Distance Functions and Metric Spaces............. 3 1.2 Convergence and Continuity in Metric Spaces.........
More informationMATH 51H Section 4. October 16, Recall what it means for a function between metric spaces to be continuous:
MATH 51H Section 4 October 16, 2015 1 Continuity Recall what it means for a function between metric spaces to be continuous: Definition. Let (X, d X ), (Y, d Y ) be metric spaces. A function f : X Y is
More informationBlock-Sparse Recovery via Convex Optimization
1 Block-Sparse Recovery via Convex Optimization Ehsan Elhamifar, Student Member, IEEE, and René Vidal, Senior Member, IEEE arxiv:11040654v3 [mathoc 13 Apr 2012 Abstract Given a dictionary that consists
More informationLecture 2: Linear Algebra Review
EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1
More informationThe oblique derivative problem for general elliptic systems in Lipschitz domains
M. MITREA The oblique derivative problem for general elliptic systems in Lipschitz domains Let M be a smooth, oriented, connected, compact, boundaryless manifold of real dimension m, and let T M and T
More informationIEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER
IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER 2015 1239 Preconditioning for Underdetermined Linear Systems with Sparse Solutions Evaggelia Tsiligianni, StudentMember,IEEE, Lisimachos P. Kondi,
More informationPCA with random noise. Van Ha Vu. Department of Mathematics Yale University
PCA with random noise Van Ha Vu Department of Mathematics Yale University An important problem that appears in various areas of applied mathematics (in particular statistics, computer science and numerical
More informationConstrained optimization
Constrained optimization DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_fall17/index.html Carlos Fernandez-Granda Compressed sensing Convex constrained
More informationStrengthened Sobolev inequalities for a random subspace of functions
Strengthened Sobolev inequalities for a random subspace of functions Rachel Ward University of Texas at Austin April 2013 2 Discrete Sobolev inequalities Proposition (Sobolev inequality for discrete images)
More informationTools from Lebesgue integration
Tools from Lebesgue integration E.P. van den Ban Fall 2005 Introduction In these notes we describe some of the basic tools from the theory of Lebesgue integration. Definitions and results will be given
More information5 Measure theory II. (or. lim. Prove the proposition. 5. For fixed F A and φ M define the restriction of φ on F by writing.
5 Measure theory II 1. Charges (signed measures). Let (Ω, A) be a σ -algebra. A map φ: A R is called a charge, (or signed measure or σ -additive set function) if φ = φ(a j ) (5.1) A j for any disjoint
More informationSparse and Low-Rank Matrix Decompositions
Forty-Seventh Annual Allerton Conference Allerton House, UIUC, Illinois, USA September 30 - October 2, 2009 Sparse and Low-Rank Matrix Decompositions Venkat Chandrasekaran, Sujay Sanghavi, Pablo A. Parrilo,
More informationDense Error Correction for Low-Rank Matrices via Principal Component Pursuit
Dense Error Correction for Low-Rank Matrices via Principal Component Pursuit Arvind Ganesh, John Wright, Xiaodong Li, Emmanuel J. Candès, and Yi Ma, Microsoft Research Asia, Beijing, P.R.C Dept. of Electrical
More informationROP: MATRIX RECOVERY VIA RANK-ONE PROJECTIONS 1. BY T. TONY CAI AND ANRU ZHANG University of Pennsylvania
The Annals of Statistics 2015, Vol. 43, No. 1, 102 138 DOI: 10.1214/14-AOS1267 Institute of Mathematical Statistics, 2015 ROP: MATRIX RECOVERY VIA RANK-ONE PROJECTIONS 1 BY T. TONY CAI AND ANRU ZHANG University
More informationThe Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1
The Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1 Simon Foucart Department of Mathematics Vanderbilt University Nashville, TN 3784. Ming-Jun Lai Department of Mathematics,
More informationMATHS 730 FC Lecture Notes March 5, Introduction
1 INTRODUCTION MATHS 730 FC Lecture Notes March 5, 2014 1 Introduction Definition. If A, B are sets and there exists a bijection A B, they have the same cardinality, which we write as A, #A. If there exists
More informationMASS TRANSFERENCE PRINCIPLE: FROM BALLS TO ARBITRARY SHAPES. 1. Introduction. limsupa i = A i.
MASS TRANSFERENCE PRINCIPLE: FROM BALLS TO ARBITRARY SHAPES HENNA KOIVUSALO AND MICHA L RAMS arxiv:1812.08557v1 [math.ca] 20 Dec 2018 Abstract. The mass transference principle, proved by Beresnevich and
More informationLow-rank Matrix Completion with Noisy Observations: a Quantitative Comparison
Low-rank Matrix Completion with Noisy Observations: a Quantitative Comparison Raghunandan H. Keshavan, Andrea Montanari and Sewoong Oh Electrical Engineering and Statistics Department Stanford University,
More informationLebesgue Measure on R n
CHAPTER 2 Lebesgue Measure on R n Our goal is to construct a notion of the volume, or Lebesgue measure, of rather general subsets of R n that reduces to the usual volume of elementary geometrical sets
More informationEquivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 19, NO. 12, DECEMBER 2008 2009 Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation Yuanqing Li, Member, IEEE, Andrzej Cichocki,
More informationCHEBYSHEV INEQUALITIES AND SELF-DUAL CONES
CHEBYSHEV INEQUALITIES AND SELF-DUAL CONES ZDZISŁAW OTACHEL Dept. of Applied Mathematics and Computer Science University of Life Sciences in Lublin Akademicka 13, 20-950 Lublin, Poland EMail: zdzislaw.otachel@up.lublin.pl
More informationRank minimization via the γ 2 norm
Rank minimization via the γ 2 norm Troy Lee Columbia University Adi Shraibman Weizmann Institute Rank Minimization Problem Consider the following problem min X rank(x) A i, X b i for i = 1,..., k Arises
More informationIntroduction to Real Analysis Alternative Chapter 1
Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces
More informationUniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit
Claremont Colleges Scholarship @ Claremont CMC Faculty Publications and Research CMC Faculty Scholarship 6-5-2008 Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit
More informationLecture: Introduction to Compressed Sensing Sparse Recovery Guarantees
Lecture: Introduction to Compressed Sensing Sparse Recovery Guarantees http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html Acknowledgement: this slides is based on Prof. Emmanuel Candes and Prof. Wotao Yin
More informationOrthogonal Matching Pursuit for Sparse Signal Recovery With Noise
Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. Citation As Published
More information