Sparse and Low Rank Recovery via Null Space Properties


Sparse and Low Rank Recovery via Null Space Properties
Holger Rauhut, Lehrstuhl C für Mathematik (Analysis), RWTH Aachen
Convexity, probability and discrete structures, a geometric viewpoint, Marne-la-Vallée, October 29, 2015
Based on joint works with Sjoerd Dirksen, Guillaume Lecué; Maryia Kabanava, Richard Kueng, Ulrich Terstiege

Overview
- Sparse and low rank recovery
- Null space properties
- Restricted isometry property and its limitations
- Implications for quantized compressive sensing
- Null space property for random matrices with a finite number of finite moments
- Low rank matrix recovery from rank-one measurements related to 4-designs

Recovery problem
Basic problem: recover a vector $x \in \mathbb{R}^N$ from measurements $y = Ax$, where $A \in \mathbb{R}^{m \times N}$ and $m \ll N$.
Sparse recovery: assume that $\|x\|_0 = \#\{l : x_l \neq 0\} \leq s \ll N$, i.e., $x$ is $s$-sparse.
Low rank recovery: assume that $x = X \in \mathbb{R}^{n_1 \times n_2}$ ($N = n_1 n_2$) with $\operatorname{rank}(X) = r \ll \min\{n_1, n_2\}$. In this case, we also write $y = \mathcal{A}(X)$ for $\mathcal{A} : \mathbb{R}^{n_1 \times n_2} \to \mathbb{R}^m$.

Approximate sparsity and noise
In practice we only have approximate sparsity / approximately low rank:
$$\sigma_s(x)_q := \inf_{z : \|z\|_0 \leq s} \|x - z\|_q \quad \text{is small / decays rapidly in } s,$$
$$\sigma_r(X)_{S_q} := \inf_{Z : \operatorname{rank}(Z) \leq r} \|X - Z\|_{S_q} \quad \text{is small / decays rapidly in } r.$$
Here, $\|X\|_{S_p} := \big(\sum_j \sigma_j(X)^p\big)^{1/p}$ denotes the Schatten $p$-norm (with $\sigma_j(X)$ being the singular values of $X$); below $q = 1$.
Noisy measurements: $y = Ax + e$, respectively $y = \mathcal{A}(X) + e$, with $e \in \mathbb{R}^m$. Assumption: $\|e\|_p \leq \eta$ for some $\eta \geq 0$ and some $p \in [1, \infty]$.
- $p = 2$: standard, Gaussian noise
- $p = 1$: Poisson noise, doubly exponential noise
- $p = \infty$: quantized compressive sensing
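To make these approximation errors concrete, here is a minimal numpy sketch (numpy assumed available; the function names are ours, chosen for illustration) computing $\sigma_s(x)_q$ and $\sigma_r(X)_{S_q}$:

```python
import numpy as np

def sigma_s(x, s, q=1):
    """Best s-term approximation error sigma_s(x)_q:
    the l_q-norm of x with its s largest-magnitude entries removed."""
    idx = np.argsort(np.abs(x))[::-1]        # indices sorted by decreasing magnitude
    tail = x[idx[s:]]                        # entries outside the best s-term support
    return np.linalg.norm(tail, ord=q)

def sigma_r_schatten(X, r, q=1):
    """Best rank-r approximation error sigma_r(X)_{S_q}:
    the Schatten-q norm of the singular values beyond the r-th."""
    sv = np.linalg.svd(X, compute_uv=False)  # singular values, descending
    return np.linalg.norm(sv[r:], ord=q)

x = np.zeros(100); x[:5] = [3, -2, 1.5, 0.5, 0.1]
print(sigma_s(x, 3))                         # = 0.5 + 0.1 = 0.6

X = np.outer(np.arange(1, 5), np.arange(1, 4)).astype(float)  # rank-one matrix
print(sigma_r_schatten(X, 1))                # ~ 0, X is already rank one
```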

Recovery via convex optimization
The naive approaches of $\ell_0$-minimization and rank minimization,
$$\min_{z : Az = y} \|z\|_0, \qquad \min_{Z : \mathcal{A}(Z) = y} \operatorname{rank}(Z),$$
are NP-hard.
Convex relaxations:
$\ell_p$-constrained $\ell_1$-minimization:
$$\min_{z \in \mathbb{R}^N} \|z\|_1 \quad \text{subject to } \|Az - y\|_p \leq \eta$$
$\ell_p$-constrained nuclear norm minimization:
$$\min_{Z \in \mathbb{R}^{n_1 \times n_2}} \|Z\|_* \quad \text{subject to } \|\mathcal{A}(Z) - y\|_p \leq \eta$$
Here, $\|Z\|_* = \|Z\|_{S_1} = \sum_{j=1}^{\min\{n_1, n_2\}} \sigma_j(Z)$ denotes the nuclear norm. Noiseless case: $\eta = 0$.
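A minimal sketch of both convex programs, assuming the cvxpy modeling package is available (problem sizes, seeds and variable names are illustrative, not taken from the slides):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# l_2-constrained l_1-minimization for a sparse vector
m, N, s, eta = 40, 100, 5, 1e-3
A = rng.standard_normal((m, N)) / np.sqrt(m)
x = np.zeros(N)
x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
e = rng.standard_normal(m)
e *= eta / np.linalg.norm(e)                  # noise with ||e||_2 = eta
y = A @ x + e

z = cp.Variable(N)
cp.Problem(cp.Minimize(cp.norm(z, 1)), [cp.norm(A @ z - y, 2) <= eta]).solve()
print("l1 recovery error:", np.linalg.norm(z.value - x))

# nuclear norm minimization for a low rank matrix (noiseless case, eta = 0)
n1, n2, r, mM = 15, 15, 1, 120
X = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
Bs = [rng.standard_normal((n1, n2)) / np.sqrt(mM) for _ in range(mM)]
yM = np.array([np.sum(Bj * X) for Bj in Bs])  # A(X)_j = <B_j, X>

Z = cp.Variable((n1, n2))
constraints = [cp.hstack([cp.sum(cp.multiply(Bj, Z)) for Bj in Bs]) == yM]
cp.Problem(cp.Minimize(cp.norm(Z, "nuc")), constraints).solve()
print("nuclear norm recovery error:", np.linalg.norm(Z.value - X, "fro"))
```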

Null space properties for sparse recovery
$A \in \mathbb{R}^{m \times n}$ satisfies the (stable) null space property (NSP) of order $s$ with constant $\rho \in (0, 1)$ if
$$\|v_S\|_1 \leq \rho \|v_{S^c}\|_1 \quad \text{for all } v \in \ker(A) \setminus \{0\} \text{ and all } S \subset [n],\ \#S = s.$$
This characterizes exact reconstruction of all $s$-sparse vectors $x \in \mathbb{R}^n$ from $y = Ax$ via $\ell_1$-minimization.
Stability: for $y = Ax$ with $x \in \mathbb{R}^n$ and $\hat{x} = \arg\min_{z : Az = y} \|z\|_1$:
$$\|x - \hat{x}\|_1 \leq \frac{2(1 + \rho)}{1 - \rho} \sigma_s(x)_1.$$
Noisy measurements: $A \in \mathbb{R}^{m \times n}$ satisfies the $\ell_p$-robust stable NSP ($1 \leq p \leq \infty$) of order $s$ with constants $\rho \in (0, 1)$ and $\tau > 0$ if
$$\|x_S\|_2 \leq \frac{\rho}{\sqrt{s}} \|x_{S^c}\|_1 + \tau \|Ax\|_p \quad \text{for all } x \in \mathbb{R}^n \text{ and all } S \subset [n],\ \#S = s.$$
If $y = Ax + e$ with $\|e\|_p \leq \eta$ and $\hat{x} = \arg\min_{z : \|Az - y\|_p \leq \eta} \|z\|_1$, then
$$\|x - \hat{x}\|_2 \leq \frac{2(1 + \rho)^2}{1 - \rho} \frac{\sigma_s(x)_1}{\sqrt{s}} + \frac{\tau(3 + \rho)}{1 - \rho} \eta.$$
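The NSP cannot in general be certified numerically, but it can be probed. The sketch below (numpy and scipy assumed available) samples vectors from $\ker(A)$ and tests the defining inequality on the worst support for each sample, namely the $s$ largest-magnitude entries; it can only falsify the NSP, never certify it.

```python
import numpy as np
from scipy.linalg import null_space

def nsp_violated(A, s, rho, num_samples=2000, seed=0):
    """Sampled test of the stable NSP of order s with constant rho.
    For a fixed v in ker(A), the worst set S consists of the s largest-magnitude
    entries, so it suffices to check ||v_S||_1 <= rho * ||v_{S^c}||_1 there.
    Returns True if a violating null space vector is found (the NSP fails);
    False does NOT certify the NSP, it only means no violation was sampled."""
    V = null_space(A)                        # orthonormal basis of ker(A)
    rng = np.random.default_rng(seed)
    for _ in range(num_samples):
        v = V @ rng.standard_normal(V.shape[1])
        idx = np.argsort(np.abs(v))[::-1]
        head = np.abs(v[idx[:s]]).sum()      # ||v_S||_1 on the worst support
        tail = np.abs(v[idx[s:]]).sum()      # ||v_{S^c}||_1
        if head > rho * tail:
            return True
    return False

A = np.random.default_rng(1).standard_normal((40, 100)) / np.sqrt(40)
print(nsp_violated(A, s=5, rho=0.9))
```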

Null space properties for low rank recovery
A linear map $\mathcal{A} : \mathbb{R}^{n_1 \times n_2} \to \mathbb{R}^m$ satisfies the $\ell_p$-robust stable rank NSP of order $r$ with constants $\rho \in (0, 1)$ and $\tau > 0$ if
$$\|X_r\|_F \leq \frac{\rho}{\sqrt{r}} \|X - X_r\|_* + \tau \|\mathcal{A}(X)\|_p \quad \text{for all } X \in \mathbb{R}^{n_1 \times n_2}.$$
Here, $X_r$ denotes the best rank-$r$ approximation to $X$, so that $\|X_r\|_F^2 = \sum_{j=1}^r \sigma_j(X)^2$ and $\|X - X_r\|_* = \sum_{j=r+1}^{\min\{n_1, n_2\}} \sigma_j(X)$.
If $y = \mathcal{A}(X) + e$ for $e \in \mathbb{R}^m$ with $\|e\|_p \leq \eta$ and $\hat{X} = \arg\min_{Z : \|\mathcal{A}(Z) - y\|_p \leq \eta} \|Z\|_*$, then
$$\|X - \hat{X}\|_F \leq \frac{2(1 + \rho)^2}{1 - \rho} \frac{\sigma_r(X)_{S_1}}{\sqrt{r}} + \frac{\tau(3 + \rho)}{1 - \rho} \eta.$$

NSP via Restricted Isometry Property
The restricted isometry constant $\delta_s$ of a matrix $A \in \mathbb{R}^{m \times n}$ is the smallest number $\delta$ such that
$$(1 - \delta) \|x\|_2^2 \leq \|Ax\|_2^2 \leq (1 + \delta) \|x\|_2^2 \quad \text{for all } s\text{-sparse } x \in \mathbb{R}^n.$$
Theorem. Suppose $A$ satisfies $\delta_{2s} < 1/\sqrt{2}$. Then $A$ satisfies the $\ell_p$-robust stable NSP of order $s$ for $p = 2$ and some constants $\rho \in (0, 1)$ and $\tau > 0$.
The (rank) restricted isometry constant $\delta_r$ of $\mathcal{A} : \mathbb{R}^{n_1 \times n_2} \to \mathbb{R}^m$ is the smallest number $\delta$ such that, for all $X \in \mathbb{R}^{n_1 \times n_2}$ with $\operatorname{rank}(X) \leq r$,
$$(1 - \delta) \|X\|_F^2 \leq \|\mathcal{A}(X)\|_2^2 \leq (1 + \delta) \|X\|_F^2.$$
Theorem. Suppose $\mathcal{A}$ satisfies $\delta_{2r} < 1/\sqrt{2}$. Then $\mathcal{A}$ satisfies the $\ell_p$-robust stable rank NSP of order $r$ for $p = 2$ and some constants $\rho \in (0, 1)$ and $\tau > 0$.
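Computing $\delta_s$ exactly is intractable in general; the following sketch (numpy assumed available) only estimates a lower bound on $\delta_s$ by sampling random $s$-sparse unit vectors:

```python
import numpy as np

def sampled_rip_defect(A, s, trials=5000, seed=0):
    """Monte Carlo lower bound on the restricted isometry constant delta_s:
    the maximum of | ||Ax||_2^2 - 1 | over sampled s-sparse unit vectors x."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    worst = 0.0
    for _ in range(trials):
        x = np.zeros(n)
        supp = rng.choice(n, size=s, replace=False)
        x[supp] = rng.standard_normal(s)
        x /= np.linalg.norm(x)
        worst = max(worst, abs(np.linalg.norm(A @ x) ** 2 - 1.0))
    return worst

A = np.random.default_rng(0).standard_normal((200, 400)) / np.sqrt(200)
print(sampled_rip_defect(A, s=10))   # lower bound on delta_10 for this draw
```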

RIP for subgaussian random measurements
$A$ is called a subgaussian random matrix if its entries are independent, mean zero, variance one subgaussian random variables. Examples: Gaussian and Bernoulli matrices.
The map $\mathcal{A} : \mathbb{R}^{n_1 \times n_2} \to \mathbb{R}^m$ is called subgaussian if in $\mathcal{A}(X)_j = \sum_{k,l} A_{j,k,l} X_{k,l}$ the entries $A_{j,k,l}$ are independent, mean zero, variance one subgaussian random variables.
Theorem. Let $A \in \mathbb{R}^{m \times n}$ be a random draw of a subgaussian matrix and $\mathcal{A} : \mathbb{R}^{n_1 \times n_2} \to \mathbb{R}^m$ be a random draw of a subgaussian random measurement map. Further, let $\delta, \varepsilon \in (0, 1)$.
(a) If $m \geq C \delta^{-2} (s \log(en/s) + \log(\varepsilon^{-1}))$ then $A$ satisfies $\delta_s \leq \delta$ with probability at least $1 - \varepsilon$.
(b) (Fazel, Parrilo, Recht 09; Candès, Plan 11) If $m \geq C \delta^{-2} (r(n_1 + n_2) + \log(\varepsilon^{-1}))$ then $\mathcal{A}$ satisfies $\delta_r \leq \delta$ with probability at least $1 - \varepsilon$.
Consequence: sparse recovery from $m \gtrsim s \log(en/s)$ random measurements; low rank recovery from $m \gtrsim r(n_1 + n_2)$ random measurements.

Questions
- What about noisy measurements with error bound in $\ell_p$ for $p \neq 2$? Guarantees for reconstruction via $\min_z \|z\|_1$ subject to $\|Az - y\|_p \leq \eta$?
- Can we relax the subgaussian assumption on the distribution of the entries of $A$, $\mathcal{A}$?
Approach: analyze the $\ell_p$-robust NSP directly!

$\ell_p$-constrained $\ell_1$-minimization via RIP
Previous approaches:
RIP$_{p,2}$ (Jacques, Hammond, Fadili 2011): $A$ satisfies RIP$_{p,2}$ ($p \geq 2$) of order $s$ if
$$(1 - \delta)^{1/2} \|x\|_2 \leq \|Ax\|_p \leq (1 + \delta)^{1/2} \|x\|_2 \quad \text{for all } s\text{-sparse } x.$$
RIP$_{p,2}$ of order $3s$ for sufficiently small $\delta$ implies an error bound for $\hat{x} = \arg\min_{z : \|Az - Ax\|_p \leq \eta} \|z\|_1$:
$$\|x - \hat{x}\|_2 \leq C \frac{\sigma_s(x)_1}{\sqrt{s}} + D_p \eta.$$
A (rescaled) Gaussian matrix $A \in \mathbb{R}^{m \times n}$ satisfies RIP$_{p,2}$ of order $s$ if
$$m \geq C \big( \delta^{-2} s \log(en/(\delta s)) + \delta^{-2} \log(\eta^{-1}) \big)^{p/2} + (p - 1) 2^p.$$
Lower bound for RIP$_{p,2}$ ($p > 2$): $m \gtrsim s^{p/2}$.

RIP$_{p,p}$ (Allen-Zhu, Gelashvili, Razenshteyn 2014): $A$ satisfies RIP$_{p,p}$ ($p \geq 2$) of order $s$ if
$$(1 - \delta) \|x\|_p^p \leq \|Ax\|_p^p \leq (1 + \delta) \|x\|_p^p \quad \text{for all } s\text{-sparse } x.$$
This implies a recovery guarantee for $\hat{x} = \arg\min_{z : \|Az - y\|_p \leq \eta} \|z\|_1$ of the form
$$\|x - \hat{x}\|_p \leq C \frac{\sigma_s(x)_1}{s^{1 - 1/p}} + D \eta.$$
Lower bound: $m \geq C_p s^p$ for $p > 2$. Upper bounds hold for adjacency matrices of random bipartite graphs.

Towards analyzing the NSP
Recall the NSP:
$$\|x_S\|_2 \leq \frac{\rho}{\sqrt{s}} \|x_{S^c}\|_1 + \tau \|Ax\|_p \quad \text{for all } x \in \mathbb{R}^n.$$
Introduce the cone
$$T_{\rho,s} = \Big\{ x \in \mathbb{R}^n : \exists S \subset [n],\ \#S = s : \|x_S\|_2 \geq \frac{\rho}{\sqrt{s}} \|x_{S^c}\|_1 \Big\}.$$
Lemma. If $A$ satisfies
$$\inf_{x \in T_{\rho,s},\ \|x\|_2 = 1} \|Ax\|_p \geq \frac{1}{\tau} > 0,$$
then $A$ satisfies the $\ell_p$-robust NSP of order $s$ with constants $\rho \in (0, 1)$ and $\tau > 0$.

Low rank recovery case
$$T_{\rho,r} = \Big\{ X \in \mathbb{R}^{n_1 \times n_2} : \|X_r\|_F \geq \frac{\rho}{\sqrt{r}} \|X - X_r\|_* \Big\}$$
Lemma. If $\mathcal{A} : \mathbb{R}^{n_1 \times n_2} \to \mathbb{R}^m$ satisfies
$$\inf_{X \in T_{\rho,r},\ \|X\|_F = 1} \|\mathcal{A}(X)\|_p \geq \frac{1}{\tau} > 0,$$
then $\mathcal{A}$ satisfies the $\ell_p$-robust rank NSP of order $r$ with constants $\rho \in (0, 1)$ and $\tau > 0$.

$B_2^n := \{x \in \mathbb{R}^n : \|x\|_2 \leq 1\}$, $B_F^{n_1 \times n_2} := \{X \in \mathbb{R}^{n_1 \times n_2} : \|X\|_F \leq 1\}$
Lemma. It holds that
(a) $T_{\rho,s} \cap B_2^n \subset (2 + \rho^{-1}) \operatorname{conv}\{x \in \mathbb{R}^n : \|x\|_2 = 1,\ \|x\|_0 \leq s\}$,
(b) $T_{\rho,r} \cap B_F^{n_1 \times n_2} \subset (2 + \rho^{-1}) \operatorname{conv}\{X \in \mathbb{R}^{n_1 \times n_2} : \|X\|_F = 1,\ \operatorname{rank}(X) \leq r\}$.
Consequence: Need to show a lower bound over a slightly larger set than for the lower bound in the restricted isometry property. Advantage: No need to show an upper bound.

Main tool: Mendelson's small ball method
Lemma (Koltchinskii, Mendelson 13). For $1 \leq p < \infty$ and a class $\mathcal{F}$ of functions from $\mathbb{R}^n$ into $\mathbb{R}$ introduce
$$Q_{\mathcal{F}}(u) = \inf_{f \in \mathcal{F}} \mathbb{P}(|f(X)| \geq u) \quad \text{and} \quad R_m(\mathcal{F}) = \mathbb{E} \sup_{f \in \mathcal{F}} \Big| \frac{1}{m} \sum_{i=1}^m \varepsilon_i f(X_i) \Big|,$$
where $(\varepsilon_i)_{i \geq 1}$ is a Rademacher sequence. Let $u > 0$ and $t > 0$. Then, with probability at least $1 - 2e^{-2t^2}$,
$$\inf_{f \in \mathcal{F}} \frac{1}{m} \sum_{i=1}^m |f(X_i)|^p \geq u^p \Big( Q_{\mathcal{F}}(2u) - \frac{4}{u} R_m(\mathcal{F}) - \frac{t}{\sqrt{m}} \Big).$$
(Koltchinskii and Mendelson 13 proved this for $p = 2$, but the extension to arbitrary $p$ is immediate.)
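As a reading of how this lemma combines with the lemma on the cone $T_{\rho,s}$ above (a sketch on our part, with constants kept schematic): apply the bound to the class of linear functionals indexed by the cone, $\mathcal{F} = \{a \mapsto \langle a, x \rangle : x \in T_{\rho,s},\ \|x\|_2 = 1\}$, with $X_1, \ldots, X_m$ the rows of $A$. Since $\|Ax\|_p^p = \sum_{i=1}^m |\langle X_i, x \rangle|^p$, the lemma yields, with probability at least $1 - 2e^{-2t^2}$,
$$\inf_{x \in T_{\rho,s},\ \|x\|_2 = 1} \frac{1}{m} \|Ax\|_p^p \geq u^p \Big( Q_{\mathcal{F}}(2u) - \frac{4}{u} R_m(\mathcal{F}) - \frac{t}{\sqrt{m}} \Big).$$
Whenever the right-hand side is positive, writing it as $1/(\tau^p m)$ gives $\inf_{x \in T_{\rho,s},\ \|x\|_2 = 1} \|Ax\|_p \geq 1/\tau > 0$, and the earlier lemma then yields the $\ell_p$-robust NSP of order $s$ with constants $\rho$ and $\tau$. A small ball assumption on the entries bounds $Q_{\mathcal{F}}(2u)$ from below, while the convex hull inclusion for $T_{\rho,s} \cap B_2^n$ is what allows one to control $R_m(\mathcal{F})$.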

Main result: sparse recovery
Theorem (Dirksen, Lecué, R 15). Fix $1 \leq p \leq \infty$ and let $A$ be an $m \times n$ random matrix with i.i.d. mean zero, variance one entries $A_{i,j}$ which satisfy
$$(\mathbb{E} |A_{i,j}|^r)^{1/r} \leq \lambda r^{\alpha} \quad \text{for all } 2 \leq r \leq \log(n)$$
for some $\lambda > 0$ and $\alpha \geq 1/2$, and
$$\mathbb{P}(|\langle A_{i,\cdot}, x \rangle| \geq u) \geq \beta \quad \text{for all } x \in S^{n-1}$$
for some $u, \beta > 0$. Let $x \in \mathbb{R}^n$, $y = Ax + e$ with $\|e\|_p \leq \eta$, and $\hat{x} = \arg\min_{z : \|Az - y\|_p \leq \eta} \|z\|_1$. If
$$m \geq C \max\Big\{ \frac{\lambda^2 e^{4\alpha - 2}}{u^2 \beta^2} \, s \log(en/s),\ \frac{\log(\varepsilon^{-1})}{\beta^2},\ (\log n)^{2\alpha - 1} \Big\},$$
then with probability at least $1 - \varepsilon$, for $1 \leq q \leq 2$,
$$\|x - \hat{x}\|_q \leq C_1 \frac{\sigma_s(x)_1}{s^{1 - 1/q}} + C_2 \frac{s^{1/q - 1/2}}{\beta^{1/p} u} \frac{\eta}{m^{1/p}}.$$
Lecué, Mendelson 14: $p = 2$ (with a worse exponent in the $\log(n)$-term).

Distributions satisfying the assumptions of the theorem
- Gaussian and, more generally, subgaussian random variables
- Exponential random variables (see also Adamczak, Latała, Litvak, Pajor, Tomczak-Jaegermann 11; Foucart 14)
- Random variables with density $p(x) = \frac{\gamma - 1}{2\gamma} \min\{1, |x|^{-\gamma}\}$, $x \in \mathbb{R}$, for $\gamma \geq \max\{\log(n) + 2, 6\}$
- Independence of the entries can be relaxed to independence of the rows: the theorem holds true if the rows are samples of an isotropic, unconditional, log-concave vector on $\mathbb{R}^n$
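For illustration, a minimal inverse-CDF sampler for the heavy-tailed density in the third item (numpy assumed; the CDF inversion is our own computation, and any rescaling needed to match a unit-variance normalization is omitted):

```python
import numpy as np

def sample_heavy_tailed(size, gamma, rng=None):
    """Inverse-CDF sampling from the symmetric density
    p(x) = (gamma - 1) / (2 * gamma) * min(1, |x|**(-gamma)):
    flat on [-1, 1] with polynomially decaying tails."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=size)
    sign = rng.choice([-1.0, 1.0], size=size)
    thresh = (gamma - 1.0) / gamma           # P(|X| <= 1)
    t = np.where(u <= thresh,
                 u * gamma / (gamma - 1.0),                      # |X| <= 1 branch
                 (gamma * (1.0 - u)) ** (-1.0 / (gamma - 1.0)))  # tail branch
    return sign * t

n = 1000
gamma = max(np.log(n) + 2, 6)
samples = sample_heavy_tailed((50, n), gamma)
print(samples.std(), np.abs(samples).max())
```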

Application to quantized compressive sensing
In practice, measurements are often subject to quantization. Uniform quantizer with bin width $\theta$ (rounding):
$$Q_\theta(v) = \theta \lfloor v/\theta \rfloor + \theta/2, \quad v \in \mathbb{R}.$$
Observations: $y = Q_\theta(Ax) = (Q_\theta((Ax)_i))_i = Ax + e$ with $e = Q_\theta(Ax) - Ax$, so that $\|e\|_\infty \leq \theta/2$.
Desired: quantization consistency for a reconstruction $\hat{x}$ from $y$: $Q_\theta(A\hat{x}) = Q_\theta(Ax)$.

Reconstruction
Reconstruction of $x$ from $y = Q_\theta(Ax)$ via $\ell_\infty$-constrained $\ell_1$-minimization,
$$\min \|z\|_1 \quad \text{subject to } \|Az - y\|_\infty \leq \theta/2,$$
is quantization consistent!
For Gaussian random matrices (for instance), the reconstruction satisfies
$$\|x - \hat{x}\|_2 \leq C s^{-1/2} \sigma_s(x)_1 + D\theta$$
with high probability if $m \geq C s \log(en/s)$. (This extends to other random matrices.)
In contrast, $\ell_p$-constrained $\ell_1$-minimization is not quantization consistent for $p < \infty$.
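A minimal end-to-end sketch of quantized measurements and $\ell_\infty$-constrained recovery, assuming cvxpy is available (sizes, seed and bin width are illustrative):

```python
import numpy as np
import cvxpy as cp

def quantize(v, theta):
    """Uniform quantizer with bin width theta: Q_theta(v) = theta * floor(v/theta) + theta/2."""
    return theta * np.floor(v / theta) + theta / 2.0

rng = np.random.default_rng(0)
m, N, s, theta = 120, 300, 8, 0.05
A = rng.standard_normal((m, N)) / np.sqrt(m)
x = np.zeros(N)
x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
y = quantize(A @ x, theta)                    # quantized measurements

# l_infty-constrained l_1-minimization
z = cp.Variable(N)
cp.Problem(cp.Minimize(cp.norm(z, 1)), [cp.norm(A @ z - y, "inf") <= theta / 2]).solve()
x_hat = z.value

print("reconstruction error:", np.linalg.norm(x_hat - x))
# quantization consistency check (up to solver tolerance; values exactly on a
# bin boundary can flip to the neighboring bin)
print("quantization consistent:", np.allclose(quantize(A @ x_hat, theta), y))
```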

Main result: low rank recovery
Theorem (Kabanava, Kueng, R, Terstiege 15). Fix $1 \leq p \leq \infty$. Let $\mathcal{A} : \mathbb{R}^{n_1 \times n_2} \to \mathbb{R}^m$ have independent mean zero, variance one entries satisfying $\mathbb{E} |A_{j,k,l}|^4 \leq D$ for all $j, k, l$. For $0 < \rho < 1$ and $r \leq \min\{n_1, n_2\}$ assume that
$$m \geq c_1 \rho^{-2} r (n_1 + n_2).$$
Then with probability at least $1 - e^{-c_2 m}$, for any $X \in \mathbb{R}^{n_1 \times n_2}$ and $y = \mathcal{A}(X) + e$ with $\|e\|_p \leq \eta$, the solution $\hat{X}$ of
$$\min_{Z \in \mathbb{R}^{n_1 \times n_2}} \|Z\|_* \quad \text{subject to } \|\mathcal{A}(Z) - y\|_p \leq \eta$$
satisfies
$$\|X - \hat{X}\|_F \leq \frac{2(1 + \rho)^2}{1 - \rho} \frac{\sigma_r(X)_{S_1}}{\sqrt{r}} + C_\rho \frac{\eta}{m^{1/p}},$$
where $C_\rho$ depends only on $\rho$. An error bound is also available in the $S_q$-norms, $1 \leq q \leq 2$.

Quantum state tomography from rank-one measurements
The state of a (finite-dimensional) quantum system is described by a Hermitian positive semidefinite matrix $X \in \mathbb{C}^{n \times n}$ with $\operatorname{tr} X = 1$. Quantum measurements are often taken with respect to rank-one matrices:
$$y_j = \mathcal{A}(X)_j := a_j^* X a_j = \operatorname{tr}(X a_j a_j^*), \quad j = 1, \ldots, m.$$
Pure states: $\operatorname{rank}(X) = 1$; mixed states: $\operatorname{rank}(X) = r \ll n$.
Special case: phase retrieval via PhaseLift. Phaseless measurements of $x \in \mathbb{C}^n$,
$$y_j = |\langle a_j, x \rangle|^2 = a_j^* x x^* a_j = \operatorname{tr}(x x^* a_j a_j^*),$$
can be interpreted as rank-one measurements of the rank-one matrix $X = x x^* \in \mathbb{C}^{n \times n}$.

Recovery from Gaussian rank-one measurements
Theorem (Kueng, R, Terstiege 14; Kabanava, Kueng, R, Terstiege 15). Let $\mathcal{A} : \mathbb{C}^{n \times n} \to \mathbb{C}^m$ be generated by a sequence $a_j \in \mathbb{C}^n$, $j = 1, \ldots, m$, of independent standard Gaussian random vectors and assume that
$$m \geq C \rho^{-2} r n.$$
Then with probability at least $1 - e^{-cm}$, for any Hermitian matrix $X \in \mathbb{C}^{n \times n}$, $y = \mathcal{A}(X) + e$ with $\|e\|_p \leq \eta$, and $\hat{X} = \arg\min_{Z : \|\mathcal{A}(Z) - y\|_p \leq \eta} \|Z\|_*$, it holds that
$$\|X - \hat{X}\|_F \leq C_\rho \frac{\sigma_r(X)_{S_1}}{\sqrt{r}} + D_\rho \frac{\eta}{m^{1/p}}.$$
The proof is via Mendelson's small ball method.
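A short numpy sketch of how such rank-one measurements of a low-rank Hermitian matrix can be generated (dimensions and the normalization to unit trace are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 30, 2, 600

# random rank-r Hermitian PSD "state", normalized to unit trace as for a quantum state
B = rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r))
X = B @ B.conj().T
X /= np.trace(X).real

# rank-one measurements y_j = a_j^* X a_j = tr(X a_j a_j^*), complex Gaussian a_j
a = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
y = np.real(np.einsum("ji,ik,jk->j", a.conj(), X, a))   # a_j^* X a_j, real since X is Hermitian

print(y.shape, y[:3])
```

Such measurements can then be fed to a nuclear norm program as above, or to the positive semidefinite least squares formulation sketched further below.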

Towards real quantum experiments: t-designs
Definition. A weighted set $\{p_i, w_i\}_{i=1}^N$ of vectors with $\|w_i\|_2 = 1$ is called an approximate $t$-design of $p$-norm accuracy $\theta_p$ if
$$\Big\| \sum_{i=1}^N p_i (w_i w_i^*)^{\otimes t} - \binom{n + t - 1}{t}^{-1} \int_{\|w\|_{\ell_2} = 1} (w w^*)^{\otimes t} \, dw \Big\|_p \leq \theta_p.$$
If $\theta_p = 0$ then $\{p_i, w_i\}_{i=1}^N$ is called an exact design. Certain constructions of approximate $t$-designs can be implemented efficiently in quantum experiments (in principle?).

Recovery with 4-designs
Theorem (Kueng, R, Terstiege 14; Kabanava, Kueng, R, Terstiege 15). Let $\{p_i, w_i\}_{i=1}^N$ be an approximate 4-design with either $\theta \leq 1/(16 r^2)$, or $\theta_1 \leq 1/4$ together with $\big\| \sum_{i=1}^N p_i w_i w_i^* - \frac{1}{n} \mathrm{Id} \big\| \leq \frac{1}{n}$. Suppose that the measurement operator $\mathcal{A}$ is generated by
$$m \geq C_4 \rho^{-2} n r \log n$$
measurement matrices $A_j = \sqrt{n(n+1)}\, a_j a_j^*$, where each $a_j$ is drawn independently from $\{p_i, w_i\}_{i=1}^N$. Then, with probability at least $1 - e^{-C_5 m}$, for every Hermitian matrix $X \in \mathbb{C}^{n \times n}$, measurements $y = \mathcal{A}(X) + e = (\operatorname{tr}(A_j X))_j + e$ with $\|e\|_p \leq \eta$, and $\hat{X} = \arg\min_{Z : \|\mathcal{A}(Z) - y\|_p \leq \eta} \|Z\|_*$, it holds that
$$\|X - \hat{X}\|_F \leq C_\rho \frac{\sigma_r(X)_{S_1}}{\sqrt{r}} + D_\rho \frac{\eta}{m^{1/p}}.$$
This improves and generalizes a previous result for phase retrieval by Gross, Krahmer, Kueng.

Positive semidefinite case
In the positive semidefinite case with rank-one measurements, we can replace nuclear norm minimization,
$$\min_{Z \succeq 0} \operatorname{tr}(Z) \quad \text{subject to } \|\mathcal{A}(Z) - y\|_2 \leq \eta,$$
by positive semidefinite least squares,
$$\min_{Z \succeq 0} \|\mathcal{A}(Z) - y\|_2.$$
Advantage: no estimate of the noise level $\eta$ is required.
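A minimal sketch of the positive semidefinite least squares formulation with cvxpy, in the real symmetric case for simplicity (the slides' quantum setting is complex Hermitian; sizes are illustrative):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, r, m = 12, 1, 80

B = rng.standard_normal((n, r))
X = B @ B.T                                   # rank-r PSD ground truth
a = rng.standard_normal((m, n))
y = np.einsum("ji,ik,jk->j", a, X, a)         # rank-one measurements a_j^T X a_j
y += 0.01 * rng.standard_normal(m)            # noise, level not assumed known

# positive semidefinite least squares: min_{Z >= 0} ||A(Z) - y||_2
Z = cp.Variable((n, n), PSD=True)
AZ = cp.hstack([cp.sum(cp.multiply(np.outer(a[j], a[j]), Z)) for j in range(m)])
cp.Problem(cp.Minimize(cp.norm(AZ - y, 2))).solve()
print("recovery error:", np.linalg.norm(Z.value - X, "fro"))
```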

Thank you! Questions?
