Low-Rank Matrix Recovery


1 ELE 538B: Mathematics of High-Dimensional Data Low-Rank Matrix Recovery Yuxin Chen Princeton University, Fall 2018

2 Outline
Motivation
Problem setup
Nuclear norm minimization
RIP and low-rank matrix recovery
Phase retrieval / solving random quadratic systems of equations
Matrix completion
Matrix recovery 11-2

3 Motivation Matrix recovery 11-3

4 Motivation 1: recommendation systems
Netflix challenge: Netflix provides highly incomplete ratings from 0.5 million users for 17,770 movies. How to predict unseen user ratings for movies? Matrix recovery 11-4

5 In general, we cannot infer missing ratings
Underdetermined system (more unknowns than observations). Matrix recovery 11-5

6 ... unless rating matrix has other structure
A few factors explain most of the data. Matrix recovery 11-6

7 ... unless rating matrix has other structure
A few factors explain most of the data: low-rank approximation. How to exploit (approx.) low-rank structure in prediction? Matrix recovery 11-6

8 Motivation 2: sensor localization
$n$ sensors / points $x_j \in \mathbb{R}^3$, $j = 1, \ldots, n$. Observe partial information about pairwise distances
$$D_{i,j} = \|x_i - x_j\|_2^2 = \|x_i\|_2^2 + \|x_j\|_2^2 - 2 x_i^\top x_j$$
Goal: infer distance between every pair of nodes. Matrix recovery 11-7

9 Motivation 2: sensor localization
Introduce $X = [x_1, x_2, \ldots, x_n]^\top \in \mathbb{R}^{n \times 3}$; then the distance matrix $D = [D_{i,j}]_{1 \le i,j \le n}$ can be written as
$$D = \underbrace{\begin{bmatrix} \|x_1\|_2^2 \\ \vdots \\ \|x_n\|_2^2 \end{bmatrix} \mathbf{1}^\top}_{\text{rank 1}} + \underbrace{\mathbf{1} \begin{bmatrix} \|x_1\|_2^2, \ldots, \|x_n\|_2^2 \end{bmatrix}}_{\text{rank 1}} - \underbrace{2XX^\top}_{\text{rank} \le 3}$$
so $\mathrm{rank}(D) \le 5 \ll n$: low-rank matrix completion. Matrix recovery 11-8
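
To make the rank structure concrete, here is a small numpy sketch (our own illustrative setup, not from the lecture) that builds the squared-distance matrix for random 3D points and confirms its rank is at most 5:

```python
import numpy as np

np.random.seed(0)
n = 50
X = np.random.randn(n, 3)                        # n points in R^3

sq_norms = np.sum(X**2, axis=1, keepdims=True)   # column vector of ||x_i||_2^2
ones = np.ones((n, 1))

# D = ||x_i||^2 1^T + 1 ||x_j||^2 - 2 X X^T  (rank 1 + rank 1 + rank <= 3)
D = sq_norms @ ones.T + ones @ sq_norms.T - 2 * X @ X.T

print(np.linalg.matrix_rank(D))                  # 5, i.e. rank(D) <= 5 << n
```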

10 Motivation 3: structure from motion
Given multiple images and a few correspondences between image features, how to estimate the locations of 3D points? (Snavely, Seitz, & Szeliski)
Structure from motion: reconstruct 3D scene geometry (structure) and camera poses (motion) from multiple images. Matrix recovery 11-9

11 Motivation 3: structure from motion
Tomasi and Kanade's factorization: consider $n$ 3D points $\{p_j \in \mathbb{R}^3\}_{1 \le j \le n}$ observed in $m$ different 2D frames, with $x_{i,j} \in \mathbb{R}^{2 \times 1}$ the location of the $j$th point in the $i$th frame:
$$x_{i,j} = \underbrace{M_i}_{\text{projection matrix} \,\in\, \mathbb{R}^{2 \times 3}} \underbrace{p_j}_{\text{3D position} \,\in\, \mathbb{R}^3}$$
Matrix recovery 11-10

12 Motivation 3: structure from motion
Tomasi and Kanade's factorization: the matrix of all 2D locations factors as
$$X = \begin{bmatrix} x_{1,1} & \cdots & x_{1,n} \\ \vdots & \ddots & \vdots \\ x_{m,1} & \cdots & x_{m,n} \end{bmatrix} = \underbrace{\begin{bmatrix} M_1 \\ \vdots \\ M_m \end{bmatrix} \begin{bmatrix} p_1, \ldots, p_n \end{bmatrix}}_{\text{low-rank factorization}} \in \mathbb{R}^{2m \times n}$$
Goal: fill in missing entries of $X$ given a small number of entries. Matrix recovery 11-10

13 Motivation 4: missing phase problem
Detectors record intensities of diffracted rays: electric field $x(t_1, t_2)$, Fourier transform $\hat{x}(f_1, f_2)$. (Fig. credit: Stanford SLAC)
Intensity of electric field:
$$|\hat{x}(f_1, f_2)|^2 = \left| \int x(t_1, t_2)\, e^{-i 2\pi (f_1 t_1 + f_2 t_2)} \, \mathrm{d}t_1 \mathrm{d}t_2 \right|^2$$
Phase retrieval: recover signal $x(t_1, t_2)$ from intensity $|\hat{x}(f_1, f_2)|^2$. Matrix recovery 11-11

14 A discrete-time model: solving quadratic systems
Solve for $x \in \mathbb{R}^n$ in $m$ quadratic equations
$$y_k = |a_k^\top x|^2, \quad k = 1, \ldots, m \qquad \text{or} \qquad y = |Ax|^2$$
where $|z|^2 := [\,|z_1|^2, \ldots, |z_m|^2\,]^\top$. Matrix recovery 11-12

15 An equivalent view: low-rank factorization
Lifting: introduce $X = xx^\top$ to linearize the constraints:
$$y_k = |a_k^\top x|^2 = a_k^\top (xx^\top) a_k \;\Longrightarrow\; y_k = a_k^\top X a_k = \langle a_k a_k^\top, X \rangle \qquad (11.1)$$
$$\text{find } X \succeq 0 \quad \text{s.t.} \quad y_k = \langle a_k a_k^\top, X \rangle, \; k = 1, \ldots, m, \quad \mathrm{rank}(X) = 1$$
Matrix recovery 11-13
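
As a sanity check on the lifting identity (11.1), here is a minimal numpy sketch (synthetic data of our choosing) verifying that $|a_k^\top x|^2 = \langle a_k a_k^\top, xx^\top \rangle$:

```python
import numpy as np

np.random.seed(1)
n, m = 20, 5
x = np.random.randn(n)
A = np.random.randn(m, n)                  # rows are the sensing vectors a_k

X = np.outer(x, x)                         # lifted variable X = x x^T
for a in A:
    lhs = (a @ x) ** 2                     # quadratic measurement |a_k^T x|^2
    rhs = np.sum(np.outer(a, a) * X)       # linear measurement <a_k a_k^T, X>
    assert np.isclose(lhs, rhs)
print("lifting identity verified")
```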

16 Problem setup Matrix recovery 11-14

17 Setup
Consider $M \in \mathbb{R}^{n \times n}$ with $\mathrm{rank}(M) = r \ll n$.
Singular value decomposition (SVD) of $M$:
$$M = \underbrace{U \Sigma V^\top}_{(2n - r)r \text{ degrees of freedom}} = \sum_{i=1}^{r} \sigma_i u_i v_i^\top$$
where $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_r)$ contains all singular values $\{\sigma_i\}$; $U := [u_1, \ldots, u_r]$, $V := [v_1, \ldots, v_r]$ consist of singular vectors. Matrix recovery 11-15

18 Low-rank matrix completion
Observed entries: $M_{i,j}$, $(i, j) \in \Omega$ (the sampling set).
Completion via rank minimization:
$$\text{minimize}_X \;\; \mathrm{rank}(X) \quad \text{s.t.} \quad X_{i,j} = M_{i,j}, \;\; (i, j) \in \Omega$$
Matrix recovery 11-16

19 Low-rank matrix completion
Observed entries: $M_{i,j}$, $(i, j) \in \Omega$ (the sampling set).
An operator $\mathcal{P}_\Omega$: orthogonal projection onto the subspace of matrices supported on $\Omega$.
Completion via rank minimization:
$$\text{minimize}_X \;\; \mathrm{rank}(X) \quad \text{s.t.} \quad \mathcal{P}_\Omega(X) = \mathcal{P}_\Omega(M)$$
Matrix recovery 11-16

20 More general: low-rank matrix recovery
Linear measurements: $y_i = \langle A_i, M \rangle = \mathrm{Tr}(A_i^\top M)$, $i = 1, \ldots, m$.
In operator form: $y = \mathcal{A}(M) := [\langle A_1, M \rangle, \ldots, \langle A_m, M \rangle]^\top \in \mathbb{R}^m$.
Recovery via rank minimization:
$$\text{minimize}_X \;\; \mathrm{rank}(X) \quad \text{s.t.} \quad y = \mathcal{A}(X)$$
Matrix recovery 11-17

21 Nuclear norm minimization Matrix recovery 11-18

22 Convex relaxation
$$\text{minimize}_{X \in \mathbb{R}^{n \times n}} \;\; \underbrace{\mathrm{rank}(X)}_{\text{nonconvex}} \quad \text{s.t.} \quad \mathcal{P}_\Omega(X) = \mathcal{P}_\Omega(M)$$
$$\text{minimize}_{X \in \mathbb{R}^{n \times n}} \;\; \underbrace{\mathrm{rank}(X)}_{\text{nonconvex}} \quad \text{s.t.} \quad \mathcal{A}(X) = \mathcal{A}(M)$$
Question: what is the convex surrogate for $\mathrm{rank}(\cdot)$? Matrix recovery 11-19

23 Nuclear norm
Definition 11.1: the nuclear norm of $X$ is
$$\|X\|_* := \sum_{i=1}^{n} \sigma_i(X), \qquad \sigma_i(X) \text{ the } i\text{th largest singular value}$$
The nuclear norm is a counterpart of the $\ell_1$ norm for the rank function.
Relations among different norms (for $\mathrm{rank}(X) \le r$):
$$\|X\| \le \|X\|_F \le \|X\|_* \le \sqrt{r}\, \|X\|_F \le r\, \|X\|$$
(Tightness) $\{X : \|X\|_* \le 1\}$ is the convex hull of rank-1 matrices $uv^\top$ obeying $\|uv^\top\| \le 1$ (Fazel '02). Matrix recovery 11-20
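
A quick numerical check of this chain of norm inequalities on a random rank-$r$ matrix (illustrative only):

```python
import numpy as np

np.random.seed(2)
n, r = 30, 4
X = np.random.randn(n, r) @ np.random.randn(r, n)    # a rank-r matrix

sv = np.linalg.svd(X, compute_uv=False)
spec, fro, nuc = sv[0], np.linalg.norm(sv), np.sum(sv)

assert spec <= fro <= nuc                  # ||X|| <= ||X||_F <= ||X||_*
assert nuc <= np.sqrt(r) * fro             # ||X||_* <= sqrt(r) ||X||_F
assert fro <= np.sqrt(r) * spec            # equivalent to sqrt(r) ||X||_F <= r ||X||
print(spec, fro, nuc)
```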

24 Additivity of nuclear norm
Fact 11.2: Let $A$ and $B$ be matrices of the same dimensions. If $AB^\top = 0$ and $A^\top B = 0$, then $\|A + B\|_* = \|A\|_* + \|B\|_*$. Equivalently: if the row (resp. column) spaces of $A$ and $B$ are orthogonal, then $\|A + B\|_* = \|A\|_* + \|B\|_*$.
Similar to the $\ell_1$ norm: when $x$ and $y$ have disjoint support, $\|x + y\|_1 = \|x\|_1 + \|y\|_1$ (a key to studying $\ell_1$-min under RIP). Matrix recovery 11-21

25 Proof of Fact 11.2
Suppose $A = U_A \Sigma_A V_A^\top$ and $B = U_B \Sigma_B V_B^\top$, which gives
$$AB^\top = 0, \;\; A^\top B = 0 \quad \Longrightarrow \quad V_A^\top V_B = 0, \;\; U_A^\top U_B = 0$$
Thus, one can write
$$A = [U_A, U_B, U_C] \begin{bmatrix} \Sigma_A & & \\ & 0 & \\ & & 0 \end{bmatrix} [V_A, V_B, V_C]^\top, \qquad B = [U_A, U_B, U_C] \begin{bmatrix} 0 & & \\ & \Sigma_B & \\ & & 0 \end{bmatrix} [V_A, V_B, V_C]^\top$$
and hence
$$\|A + B\|_* = \left\| [U_A, U_B] \begin{bmatrix} \Sigma_A & \\ & \Sigma_B \end{bmatrix} [V_A, V_B]^\top \right\|_* = \|A\|_* + \|B\|_*$$
Matrix recovery 11-22

26 Dual norm
Definition 11.3 (Dual norm): for a given norm $\|\cdot\|_A$, the dual norm is defined as
$$\|X\|_{A^*} := \max\{ \langle X, Y \rangle : \|Y\|_A \le 1 \}$$
Dual pairs: $\ell_1$ norm $\leftrightarrow$ $\ell_\infty$ norm; nuclear norm $\leftrightarrow$ spectral norm; $\ell_2$ norm $\leftrightarrow$ $\ell_2$ norm; Frobenius norm $\leftrightarrow$ Frobenius norm. Matrix recovery 11-23

27 Representing nuclear norm via SDP
Since the spectral norm is the dual norm of the nuclear norm,
$$\|X\|_* = \max\{ \langle X, Y \rangle : \|Y\| \le 1 \}$$
The constraint $\|Y\| \le 1$ is equivalent to $Y^\top Y \preceq I$, which by the Schur complement is equivalent to
$$\begin{bmatrix} I & Y \\ Y^\top & I \end{bmatrix} \succeq 0$$
Fact 11.4:
$$\|X\|_* = \max_Y \left\{ \langle X, Y \rangle \;:\; \begin{bmatrix} I & Y \\ Y^\top & I \end{bmatrix} \succeq 0 \right\}$$
Matrix recovery 11-24

28 Representing nuclear norm via SDP
Since the spectral norm is the dual norm of the nuclear norm, $\|X\|_* = \max\{ \langle X, Y \rangle : \|Y\| \le 1 \}$, and the constraint $\|Y\| \le 1$ is equivalent (via the Schur complement) to $\begin{bmatrix} I & Y \\ Y^\top & I \end{bmatrix} \succeq 0$.
Fact 11.5 (Dual characterization):
$$\|X\|_* = \min_{W_1, W_2} \left\{ \frac{1}{2} \mathrm{Tr}(W_1) + \frac{1}{2} \mathrm{Tr}(W_2) \;:\; \begin{bmatrix} W_1 & X \\ X^\top & W_2 \end{bmatrix} \succeq 0 \right\}$$
Optimal point: $W_1 = U \Sigma U^\top$, $W_2 = V \Sigma V^\top$ (where $X = U \Sigma V^\top$). Matrix recovery 11-24

29 Aside: dual of semidefinite program
$$\text{(primal)} \quad \text{minimize}_X \;\; \langle C, X \rangle \quad \text{s.t.} \quad \langle A_i, X \rangle = b_i, \; 1 \le i \le m, \;\; X \succeq 0$$
$$\text{(dual)} \quad \text{maximize}_y \;\; b^\top y \quad \text{s.t.} \quad \sum_{i=1}^{m} y_i A_i + S = C, \;\; S \succeq 0$$
Exercise: use this to verify Fact 11.5. Matrix recovery 11-25

30 Nuclear norm minimization via SDP
Convex relaxation of rank minimization:
$$\hat{M} = \arg\min_X \|X\|_* \quad \text{s.t.} \quad y = \mathcal{A}(X)$$
This is solvable via SDP:
$$\text{minimize}_{X, W_1, W_2} \;\; \frac{1}{2} \mathrm{Tr}(W_1) + \frac{1}{2} \mathrm{Tr}(W_2) \quad \text{s.t.} \quad y = \mathcal{A}(X), \;\; \begin{bmatrix} W_1 & X \\ X^\top & W_2 \end{bmatrix} \succeq 0$$
Matrix recovery 11-26
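
A hedged cvxpy sketch of this convex relaxation, recovering a random low-rank matrix from Gaussian linear measurements (the problem sizes and data are our own; cvxpy reformulates the nuclear-norm objective internally as an SDP along the lines shown above):

```python
import cvxpy as cp
import numpy as np

np.random.seed(3)
n, r, m = 12, 2, 150
M = np.random.randn(n, r) @ np.random.randn(r, n)         # ground-truth rank-r matrix
A_mats = np.random.randn(m, n, n) / np.sqrt(m)             # Gaussian measurement matrices A_i
y = np.tensordot(A_mats, M, axes=([1, 2], [0, 1]))         # y_i = <A_i, M>

X = cp.Variable((n, n))
constraints = [cp.sum(cp.multiply(A_mats[i], X)) == y[i] for i in range(m)]
prob = cp.Problem(cp.Minimize(cp.norm(X, "nuc")), constraints)   # min ||X||_* s.t. y = A(X)
prob.solve()

print("relative recovery error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))
```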

31 RIP and low-rank matrix recovery Matrix recovery 11-27

32 RIP for low-rank matrices
Almost parallel results to compressed sensing...
Definition 11.6: the $r$-restricted isometry constants $\delta_r^{\mathrm{ub}}(\mathcal{A})$ and $\delta_r^{\mathrm{lb}}(\mathcal{A})$ are the smallest quantities s.t.
$$(1 - \delta_r^{\mathrm{lb}}) \|X\|_F \le \|\mathcal{A}(X)\|_F \le (1 + \delta_r^{\mathrm{ub}}) \|X\|_F, \qquad \forall X : \mathrm{rank}(X) \le r$$
Footnote: one can also define RIP w.r.t. $\|\cdot\|_F^2$ rather than $\|\cdot\|_F$. Matrix recovery 11-28

33 RIP and low-rank matrix recovery
Theorem 11.7 (Recht, Fazel, Parrilo '10; Candes, Plan '11): Suppose $\mathrm{rank}(M) = r$. For any fixed integer $K > 0$, if
$$\frac{1 + \delta_{Kr}^{\mathrm{ub}}}{1 - \delta_{(2+K)r}^{\mathrm{lb}}} < \sqrt{\frac{K}{2}},$$
then nuclear norm minimization is exact.
It allows $\delta_{Kr}^{\mathrm{ub}}$ to be larger than 1.
Can be easily extended to account for the noisy case and approximately low-rank matrices. Matrix recovery 11-29

34 Geometry of nuclear norm ball
Level set of the nuclear norm ball: $\left\{ \begin{bmatrix} x & y \\ y & z \end{bmatrix} : \left\| \begin{bmatrix} x & y \\ y & z \end{bmatrix} \right\|_* \le 1 \right\}$. Fig. credit: Candes '14. Matrix recovery 11-30

35 Some notation
Recall $M = U \Sigma V^\top$.
Let $T$ (called the tangent space) be the span of matrices of the form
$$T = \{ U X^\top + Y V^\top : X, Y \in \mathbb{R}^{n \times r} \}$$
Let $\mathcal{P}_T$ be the orthogonal projection onto $T$:
$$\mathcal{P}_T(X) = U U^\top X + X V V^\top - U U^\top X V V^\top$$
Its complement $\mathcal{P}_{T^\perp} = \mathcal{I} - \mathcal{P}_T$:
$$\mathcal{P}_{T^\perp}(X) = (I - U U^\top) X (I - V V^\top)$$
Note that $M \, \mathcal{P}_{T^\perp}(X)^\top = 0$ and $M^\top \mathcal{P}_{T^\perp}(X) = 0$. Matrix recovery 11-31
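
The projections $\mathcal{P}_T$ and $\mathcal{P}_{T^\perp}$ are straightforward to implement; the following numpy sketch (helper names are ours) also checks the two orthogonality identities on the last line:

```python
import numpy as np

def proj_T(Z, U, V):
    """Orthogonal projection onto the tangent space T at M = U Sigma V^T."""
    PU, PV = U @ U.T, V @ V.T
    return PU @ Z + Z @ PV - PU @ Z @ PV

def proj_T_perp(Z, U, V):
    """Orthogonal projection onto the complement of T."""
    PU, PV = U @ U.T, V @ V.T
    n = Z.shape[0]
    return (np.eye(n) - PU) @ Z @ (np.eye(n) - PV)

np.random.seed(4)
n, r = 20, 3
M = np.random.randn(n, r) @ np.random.randn(r, n)
U, _, Vt = np.linalg.svd(M)
U, V = U[:, :r], Vt[:r].T

Z = np.random.randn(n, n)
assert np.allclose(proj_T(Z, U, V) + proj_T_perp(Z, U, V), Z)   # P_T + P_{T-perp} = identity
Zp = proj_T_perp(Z, U, V)
print(np.allclose(M @ Zp.T, 0), np.allclose(M.T @ Zp, 0))        # both True
```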

36 Proof of Theorem 11.7
Suppose $X = M + H$ is feasible and obeys $\|M + H\|_* \le \|M\|_*$. The goal is to show that $H = 0$ under RIP. The key is to decompose $H$ into
$$H = H_0 + \underbrace{H_1 + H_2 + \cdots}_{= H_c}$$
$H_0 = \mathcal{P}_T(H)$ (rank $\le 2r$)
$H_c = \mathcal{P}_{T^\perp}(H)$ (obeying $M H_c^\top = 0$ and $M^\top H_c = 0$)
$H_1$: the best rank-$(Kr)$ approximation of $H_c$
$H_2$: the best rank-$(Kr)$ approximation of $H_c - H_1$
... ($K$ is a constant) Matrix recovery 11-32

37 Proof of Theorem 11.7
Informally, the proof proceeds by showing that
1. $H_0$ dominates $\sum_{i \ge 2} H_i$ (by the objective function) (see Step 1)
2. (converse) $\sum_{i \ge 2} H_i$ dominates $H_0 + H_1$ (by RIP + feasibility) (see Step 2)
These cannot happen simultaneously unless $H = 0$. Matrix recovery 11-33

38 Proof of Theorem 11.7
Step 1 (which does not rely on RIP). Show that
$$\sum_{j \ge 2} \|H_j\|_F \le \|H_0\|_* / \sqrt{Kr}. \qquad (11.2)$$
This follows immediately by combining the following 2 observations:
(i) Since $M + H$ is assumed to be a better estimate:
$$\|M\|_* \ge \|M + H\|_* \ge \|M + H_c\|_* - \|H_0\|_* \qquad (11.3)$$
$$\|M + H_c\|_* \underbrace{=}_{\text{Fact 11.2 } (M H_c^\top = 0 \text{ and } M^\top H_c = 0)} \|M\|_* + \|H_c\|_* \qquad (11.4)$$
and hence $\|H_c\|_* \le \|H_0\|_*$.
(ii) Since the nonzero singular values of $H_{j-1}$ dominate those of $H_j$ ($j \ge 2$):
$$\|H_j\|_F \le \sqrt{Kr}\, \|H_j\| \le \sqrt{Kr} \left[ \|H_{j-1}\|_* / (Kr) \right] = \|H_{j-1}\|_* / \sqrt{Kr}$$
$$\Longrightarrow \quad \sum_{j \ge 2} \|H_j\|_F \le \frac{1}{\sqrt{Kr}} \sum_{j \ge 2} \|H_{j-1}\|_* \le \frac{1}{\sqrt{Kr}} \|H_c\|_* \qquad (11.5)$$

39 Proof of Theorem 11.7
Step 2 (using feasibility + RIP). Show that $\exists\, \rho < \sqrt{K/2}$ s.t.
$$\|H_0 + H_1\|_F \le \rho \sum_{j \ge 2} \|H_j\|_F \qquad (11.6)$$
If this claim holds, then
$$\|H_0 + H_1\|_F \le \rho \sum_{j \ge 2} \|H_j\|_F \overset{(11.2)}{\le} \rho \frac{1}{\sqrt{Kr}} \|H_0\|_* \le \rho \frac{1}{\sqrt{Kr}} \sqrt{2r}\, \|H_0\|_F = \rho \sqrt{\frac{2}{K}}\, \|H_0\|_F \le \rho \sqrt{\frac{2}{K}}\, \|H_0 + H_1\|_F \qquad (11.7)$$
where the last step holds since, by construction, $H_0$ and $H_1$ lie in orthogonal subspaces (so $\|H_0\|_F \le \|H_0 + H_1\|_F$). This bound (11.7) cannot hold with $\rho < \sqrt{K/2}$ unless $H_0 + H_1 = 0$ (equivalently, $H_0 = H_1 = 0$). Matrix recovery 11-35

40 Proof of Theorem 11.7
We now prove (11.6). To connect $H_0 + H_1$ with $\sum_{j \ge 2} H_j$, we use feasibility:
$$\mathcal{A}(H) = 0 \quad \Longrightarrow \quad \mathcal{A}(H_0 + H_1) = - \sum_{j \ge 2} \mathcal{A}(H_j),$$
which taken collectively with RIP yields
$$(1 - \delta_{(2+K)r}^{\mathrm{lb}}) \|H_0 + H_1\|_F \le \|\mathcal{A}(H_0 + H_1)\|_F = \Big\| \sum_{j \ge 2} \mathcal{A}(H_j) \Big\|_F \le \sum_{j \ge 2} \|\mathcal{A}(H_j)\|_F \le (1 + \delta_{Kr}^{\mathrm{ub}}) \sum_{j \ge 2} \|H_j\|_F$$
This establishes (11.6) as long as $\rho := \frac{1 + \delta_{Kr}^{\mathrm{ub}}}{1 - \delta_{(2+K)r}^{\mathrm{lb}}} < \sqrt{\frac{K}{2}}$. Matrix recovery 11-36

41 Gaussian sampling operators satisfy RIP
If the entries of $\{A_i\}_{i=1}^m$ are i.i.d. $\mathcal{N}(0, 1/m)$, then $\delta_{5r}(\mathcal{A})$ is below a small constant with high prob., provided that $m \gtrsim nr$ (near-optimal sample size).
This satisfies the assumption of Theorem 11.7 with $K = 3$. Matrix recovery 11-37

42 Precise phase transition
Using the statistical dimension machinery, we can locate the precise phase transition (Amelunxen, Lotz, McCoy & Tropp '13):
$$\text{nuclear norm min} \begin{cases} \text{works if } m > \text{stat-dim}\big( \mathcal{D}(\|\cdot\|_*, X) \big) \\ \text{fails if } m < \text{stat-dim}\big( \mathcal{D}(\|\cdot\|_*, X) \big) \end{cases}$$
where $\text{stat-dim}\big( \mathcal{D}(\|\cdot\|_*, X) \big) \approx n^2 \, \psi\big( \tfrac{r}{n} \big)$ and
$$\psi(\rho) = \inf_{\tau \ge 0} \left\{ \rho + (1 - \rho) \left[ \rho (1 + \tau^2) + (1 - \rho) \int_{\tau}^{2} (u - \tau)^2 \, \frac{\sqrt{4 - u^2}}{\pi} \, \mathrm{d}u \right] \right\}$$
Matrix recovery 11-38
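
The boundary curve $\psi(\rho)$ can be evaluated numerically from the formula above; here is a short scipy sketch (our own code):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def psi(rho):
    """Normalized statistical dimension psi(rho) from the display above."""
    def objective(tau):
        integral, _ = quad(lambda u: (u - tau)**2 * np.sqrt(4 - u**2) / np.pi, tau, 2)
        return rho + (1 - rho) * (rho * (1 + tau**2) + (1 - rho) * integral)
    # the minimizing tau lies in [0, 2] since the integrand vanishes beyond u = 2
    return minimize_scalar(objective, bounds=(0, 2), method="bounded").fun

for rho in [0.05, 0.1, 0.2, 0.4]:
    print(rho, psi(rho))   # m / n^2 must exceed psi(r/n) for nuclear norm min to succeed
```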

43 Aside: subgradient of nuclear norm
The subdifferential (set of subgradients) of $\|\cdot\|_*$ at $M$ is
$$\partial \|M\|_* = \left\{ UV^\top + W : \mathcal{P}_T(W) = 0, \; \|W\| \le 1 \right\}$$
It does not depend on the singular values of $M$.
Equivalently, $Z \in \partial \|M\|_*$ iff $\mathcal{P}_T(Z) = UV^\top$ and $\|\mathcal{P}_{T^\perp}(Z)\| \le 1$. Matrix recovery 11-39

44 Derivation of the statistical dimension
WLOG, suppose $X = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}$; then $\partial \|X\|_* = \left\{ \begin{bmatrix} I_r & 0 \\ 0 & W \end{bmatrix} : \|W\| \le 1 \right\}$.
Let $G = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix}$ have i.i.d. standard Gaussian entries. From the convex geometry lecture, we know that
$$\text{stat-dim}\big( \mathcal{D}(\|\cdot\|_*, X) \big) \approx \inf_{\tau \ge 0} \mathbb{E} \left[ \inf_{Z \in \partial \|X\|_*} \|G - \tau Z\|_F^2 \right] = \inf_{\tau \ge 0} \mathbb{E} \left[ \inf_{W : \|W\| \le 1} \left\| \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix} - \tau \begin{bmatrix} I_r & 0 \\ 0 & W \end{bmatrix} \right\|_F^2 \right]$$
Matrix recovery 11-40

45 Derivation of statistical dimension
Observe that
$$\mathbb{E} \left[ \inf_{W : \|W\| \le 1} \left\| \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix} - \tau \begin{bmatrix} I_r & 0 \\ 0 & W \end{bmatrix} \right\|_F^2 \right] = \mathbb{E} \left[ \|G_{11} - \tau I_r\|_F^2 + \|G_{21}\|_F^2 + \|G_{12}\|_F^2 + \inf_{\|W\| \le 1} \|G_{22} - \tau W\|_F^2 \right] = r \left( 2n - r + \tau^2 \right) + \mathbb{E} \left[ \sum_{i=1}^{n - r} \big( \sigma_i(G_{22}) - \tau \big)_+^2 \right].$$
(Figure: empirical distribution of $\{\sigma_i(G_{22}) / \sqrt{n - r}\}$.) Matrix recovery 11-41

46 Derivation of statistical dimension
Continuing from the previous slide, recall from random matrix theory (the Marchenko-Pastur law) that
$$\frac{1}{n - r} \, \mathbb{E} \left[ \sum_{i=1}^{n - r} \big( \sigma_i(\tilde{G}_{22}) - \tau \big)_+^2 \right] \to \int_{\tau}^{2} (u - \tau)^2 \, \frac{\sqrt{4 - u^2}}{\pi} \, \mathrm{d}u,$$
where $\tilde{G}_{22}$ has i.i.d. $\mathcal{N}\big(0, \tfrac{1}{n - r}\big)$ entries. Taking $\rho = r/n$ and minimizing over $\tau$ lead to the closed-form expression for the phase transition boundary. Matrix recovery 11-41

47 Numerical phase transition (n = 30) Figure credit: Amelunxen, Lotz, McCoy, & Tropp 13 Matrix recovery 11-42

48 Sampling operators that do NOT satisfy RIP Unfortunately, many sampling operators fail to satisfy RIP (e.g. none of the 4 motivating examples in this lecture satisfies RIP) Two important examples: Phase retrieval / solving random quadratic systems of equations Matrix completion Matrix recovery 11-43

49 Phase retrieval / solving random quadratic systems of equations Matrix recovery 11-44

50 Rank-one measurements
Measurements (see (11.1)):
$$y_i = a_i^\top \underbrace{xx^\top}_{:= M} a_i = \langle \underbrace{a_i a_i^\top}_{:= A_i}, M \rangle, \qquad 1 \le i \le m$$
$$\mathcal{A}(X) = \begin{bmatrix} \langle A_1, X \rangle \\ \langle A_2, X \rangle \\ \vdots \\ \langle A_m, X \rangle \end{bmatrix} = \begin{bmatrix} \langle a_1 a_1^\top, X \rangle \\ \langle a_2 a_2^\top, X \rangle \\ \vdots \\ \langle a_m a_m^\top, X \rangle \end{bmatrix}$$
Matrix recovery 11-45

51 Rank-one measurements
Suppose $a_i \overset{\text{ind.}}{\sim} \mathcal{N}(0, I_n)$.
If $x$ is independent of $\{a_i\}$, then
$$\langle a_i a_i^\top, xx^\top \rangle = |a_i^\top x|^2 \asymp \|x\|_2^2 \quad \Longrightarrow \quad \|\mathcal{A}(xx^\top)\|_F \asymp \sqrt{m}\, \|xx^\top\|_F$$
Consider $A_i = a_i a_i^\top$: with high prob.,
$$\langle a_i a_i^\top, A_i \rangle = \|a_i\|_2^4 \asymp n\, \|a_i a_i^\top\|_F \quad \Longrightarrow \quad \|\mathcal{A}(A_i)\|_F \ge \langle a_i a_i^\top, A_i \rangle \gtrsim n\, \|A_i\|_F$$
Matrix recovery 11-46

52 Rank-one measurements
Suppose $a_i \overset{\text{ind.}}{\sim} \mathcal{N}(0, I_n)$. If the sample size $m \asymp n$ (information limit) and $K \asymp 1$, then
$$\frac{\max_{X: \mathrm{rank}(X) = 1} \|\mathcal{A}(X)\|_F / \|X\|_F}{\min_{X: \mathrm{rank}(X) = 1} \|\mathcal{A}(X)\|_F / \|X\|_F} \gtrsim \frac{n}{\sqrt{m}} \asymp \sqrt{n}$$
$$\Longrightarrow \quad \frac{1 + \delta_K^{\mathrm{ub}}}{1 - \delta_{2+K}^{\mathrm{lb}}} \gtrsim \sqrt{n} \gg \sqrt{K}$$
This violates the RIP condition in Theorem 11.7 unless $K$ is exceedingly large. Matrix recovery 11-46

53 Why do we lose RIP?
Problems:
Some low-rank matrices $X$ (e.g. $a_i a_i^\top$) might be too aligned with some (rank-1) measurement matrices: loss of incoherence in some measurements.
Some measurements $\langle A_i, X \rangle$ might have too high a leverage on $\|\mathcal{A}(X)\|_F$ when measured in $\|\cdot\|_F$.
Solution: replace $\|\cdot\|_F$ by other norms! Matrix recovery 11-47

54 Mixed-norm RIP
Solution: modify RIP appropriately...
Definition 11.8 (RIP-$\ell_2/\ell_1$): let $\xi_r^{\mathrm{ub}}(\mathcal{A})$ and $\xi_r^{\mathrm{lb}}(\mathcal{A})$ be the smallest quantities s.t.
$$(1 - \xi_r^{\mathrm{lb}}) \|X\|_F \le \|\mathcal{A}(X)\|_1 \le (1 + \xi_r^{\mathrm{ub}}) \|X\|_F, \qquad \forall X : \mathrm{rank}(X) \le r$$
More generally, it only requires $\mathcal{A}$ to satisfy
$$\frac{\sup_{X: \mathrm{rank}(X) \le r} \|\mathcal{A}(X)\|_1 / \|X\|_F}{\inf_{X: \mathrm{rank}(X) \le r} \|\mathcal{A}(X)\|_1 / \|X\|_F} \le \frac{1 + \xi_r^{\mathrm{ub}}}{1 - \xi_r^{\mathrm{lb}}} \qquad (11.8)$$
Matrix recovery 11-48

55 Analyzing phase retrieval via RIP-$\ell_2/\ell_1$
Theorem 11.9 (Chen, Chi, Goldsmith '15): Theorem 11.7 continues to hold if we replace $\delta_r^{\mathrm{ub}}$ and $\delta_r^{\mathrm{lb}}$ with $\xi_r^{\mathrm{ub}}$ and $\xi_r^{\mathrm{lb}}$ (defined in (11.8)), respectively.
This follows the same proof as for Theorem 11.7, except that $\|\cdot\|_F$ is replaced by $\|\cdot\|_1$ in the RIP step of the proof. Matrix recovery 11-49

56 Analyzing phase retrieval via RIP-$\ell_2/\ell_1$
Theorem 11.9 (Chen, Chi, Goldsmith '15): Theorem 11.7 continues to hold if we replace $\delta_r^{\mathrm{ub}}$ and $\delta_r^{\mathrm{lb}}$ with $\xi_r^{\mathrm{ub}}$ and $\xi_r^{\mathrm{lb}}$ (defined in (11.8)), respectively.
Back to the example on Slide 11-46: if $x$ is independent of $\{a_i\}$, then
$$\langle a_i a_i^\top, xx^\top \rangle = |a_i^\top x|^2 \asymp \|x\|_2^2 \quad \Longrightarrow \quad \|\mathcal{A}(xx^\top)\|_1 \asymp m\, \|xx^\top\|_F$$
$$\|\mathcal{A}(A_i)\|_1 = \langle a_i a_i^\top, A_i \rangle + \sum_{j: j \ne i} \langle a_j a_j^\top, A_i \rangle \asymp (n + m)\, \|A_i\|_F$$
For both cases, $\|\mathcal{A}(X)\|_1 / \|X\|_F$ are of the same order if $m \gtrsim n$. Matrix recovery 11-49

57 Analyzing phase retrieval via RIP-$\ell_2/\ell_1$
Informally, a debiased operator
$$\mathcal{B}(X) := \begin{bmatrix} \langle A_1 - A_2, X \rangle \\ \langle A_3 - A_4, X \rangle \\ \vdots \end{bmatrix} \in \mathbb{R}^{m/2}$$
satisfies the RIP condition of Theorem 11.9 when $m \gtrsim nr$ (Chen, Chi, Goldsmith '15).
Debiasing is crucial when $r \gg 1$.
This is a consequence of the Hanson-Wright inequality for quadratic forms (Hanson & Wright '71; Rudelson & Vershynin '13). Matrix recovery 11-50
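
For concreteness, here is a small numpy sketch (helper name is ours) of such a debiased operator acting on a matrix $X$, assuming the sensing vectors are stacked as rows:

```python
import numpy as np

def debiased_op(a_vecs, X):
    """B(X): pairwise differences of rank-one measurements, <A_1 - A_2, X>, <A_3 - A_4, X>, ..."""
    vals = np.einsum('ij,jk,ik->i', a_vecs, X, a_vecs)   # a_i^T X a_i = <a_i a_i^T, X> for each i
    return vals[0::2] - vals[1::2]

np.random.seed(7)
n, m = 20, 200
a_vecs = np.random.randn(m, n)                            # sensing vectors stacked as rows
L = np.random.randn(n, 3)
X = L @ L.T                                               # a rank-3 test matrix
print(debiased_op(a_vecs, X).shape)                       # (m/2,) = (100,)
```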

58 Theoretical guarantee for phase retrieval
(PhaseLift)
$$\text{minimize}_{X \in \mathbb{R}^{n \times n}} \;\; \underbrace{\mathrm{tr}(X)}_{= \|X\|_* \text{ for PSD matrices}} \quad \text{s.t.} \quad y_i = a_i^\top X a_i, \;\; 1 \le i \le m, \quad X \succeq 0 \;\; (\text{since } X = xx^\top)$$
Theorem (Candès, Strohmer, Voroninski '13; Candès, Li '14): Suppose $a_i \overset{\text{ind.}}{\sim} \mathcal{N}(0, I)$. With high prob., PhaseLift recovers $xx^\top$ exactly as soon as $m \gtrsim n$. Matrix recovery 11-51
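
A hedged cvxpy sketch of the PhaseLift program on synthetic data (sizes and seed are ours; with $m$ a few times $n$, the recovered lifted matrix is typically rank-1 and matches $xx^\top$):

```python
import cvxpy as cp
import numpy as np

np.random.seed(5)
n, m = 10, 60                                    # m a few times n
x = np.random.randn(n)
A = np.random.randn(m, n)                        # sensing vectors a_i as rows
y = (A @ x) ** 2                                 # y_i = |a_i^T x|^2

X = cp.Variable((n, n), PSD=True)                # lifted variable X >= 0
constraints = [cp.sum(cp.multiply(np.outer(a, a), X)) == yi for a, yi in zip(A, y)]
prob = cp.Problem(cp.Minimize(cp.trace(X)), constraints)
prob.solve()

# x is recovered up to global sign from the leading eigenvector of the solution
w, V = np.linalg.eigh(X.value)
x_hat = np.sqrt(max(w[-1], 0)) * V[:, -1]
print(min(np.linalg.norm(x_hat - x), np.linalg.norm(x_hat + x)))
```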

59 Extension of phase retrieval
(PhaseLift)
$$\text{minimize}_{X \in \mathbb{R}^{n \times n}} \;\; \underbrace{\mathrm{tr}(X)}_{= \|X\|_* \text{ for PSD matrices}} \quad \text{s.t.} \quad a_i^\top X a_i = a_i^\top M a_i, \;\; 1 \le i \le m, \quad X \succeq 0$$
Theorem (Chen, Chi, Goldsmith '15; Cai, Zhang '15): Suppose $M \succeq 0$, $\mathrm{rank}(M) = r$, and $a_i \overset{\text{ind.}}{\sim} \mathcal{N}(0, I)$. With high prob., PhaseLift recovers $M$ exactly as soon as $m \gtrsim nr$. Matrix recovery 11-52

60 Matrix completion Matrix recovery 11-53

61 Sampling operators for matrix completion
Observation operator (projection onto matrices supported on $\Omega$): $Y = \mathcal{P}_\Omega(M)$, where $(i, j) \in \Omega$ with prob. $p$ (random sampling).
$\mathcal{P}_\Omega$ does NOT satisfy RIP when $p \ll 1$! For example, a low-rank $M$ supported entirely on unobserved entries gives $\|\mathcal{P}_\Omega(M)\|_F = 0$, or equivalently,
$$\frac{1 + \delta_K^{\mathrm{ub}}}{1 - \delta_{2+K}^{\mathrm{lb}}} = \infty$$
Matrix recovery 11-54

62 Which sampling pattern?
Consider a sampling pattern in which some rows / columns are not sampled at all: then recovery is impossible. Matrix recovery 11-55

63 Which low-rank matrices can we recover?
Compare the following rank-1 matrices. First, a matrix whose only nonzero entry sits in the top-left corner, observed with a few entries missing: if we miss the top-left entry, then we cannot hope to recover the matrix. Matrix recovery 11-56

64 Which low-rank matrices can we recover?
Compare the following rank-1 matrices. Next, the all-ones matrix, observed with a few entries missing: it is possible to fill in all missing entries by exploiting the rank-1 structure. Matrix recovery 11-57

65 Which low-rank matrices can we recover?
Compare the following rank-1 matrices: $e_1 e_1^\top$ (hard) vs. the all-ones matrix $\mathbf{1}\mathbf{1}^\top$ (easy).
Takeaway: column / row spaces cannot be aligned with canonical basis vectors. Matrix recovery 11-58

66 Coherence
Definition: the coherence parameter $\mu$ of $M = U \Sigma V^\top$ is the smallest quantity s.t.
$$\max_i \|U^\top e_i\|_2^2 \le \frac{\mu r}{n} \quad \text{and} \quad \max_i \|V^\top e_i\|_2^2 \le \frac{\mu r}{n}$$
$\mu \ge 1$ (since $\sum_{i=1}^{n} \|U^\top e_i\|_2^2 = \|U\|_F^2 = r$)
$\mu = 1$ if $U = V = \frac{1}{\sqrt{n}} \mathbf{1}$ (most incoherent)
$\mu = \frac{n}{r}$ if $e_i \in U$ (most coherent) Matrix recovery 11-59
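
A small numpy sketch (our own helper) computing the coherence parameter $\mu$ of a given matrix, which reproduces the extreme cases above:

```python
import numpy as np

def coherence(M, r):
    """Coherence parameter mu of a rank-r matrix M, per the definition above."""
    n = M.shape[0]
    U, _, Vt = np.linalg.svd(M)
    U, V = U[:, :r], Vt[:r].T
    mu_U = (n / r) * np.max(np.sum(U**2, axis=1))    # (n/r) * max_i ||U^T e_i||_2^2
    mu_V = (n / r) * np.max(np.sum(V**2, axis=1))
    return max(mu_U, mu_V)

n = 100
print(coherence(np.ones((n, n)), 1))                 # ~1: most incoherent
e1 = np.zeros(n); e1[0] = 1
print(coherence(np.outer(e1, e1), 1))                # n: most coherent
L, R = np.random.randn(n, 2), np.random.randn(n, 2)
print(coherence(L @ R.T, 2))                         # random factors: small coherence
```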

67 Performance guarantee
Theorem (Candes & Recht '09; Candes & Tao '10; Gross '11; ...): nuclear norm minimization is exact and unique with high probability, provided that
$$m \gtrsim \mu n r \log^2 n$$
This result is optimal up to a logarithmic factor. Established via a RIPless theory. Matrix recovery 11-60

68 Numerical performance of nuclear-norm minimization n = 50 Fig. credit: Candes, Recht 09 Matrix recovery 11-61

69 KKT condition
Lagrangian:
$$\mathcal{L}(X, \Lambda) = \|X\|_* + \langle \Lambda, \mathcal{P}_\Omega(X) - \mathcal{P}_\Omega(M) \rangle = \|X\|_* + \langle \mathcal{P}_\Omega(\Lambda), X - M \rangle$$
When $M$ is the minimizer, the KKT condition reads
$$0 \in \partial_X \mathcal{L}(X, \Lambda) \big|_{X = M} \;\Longleftrightarrow\; \exists \Lambda \text{ s.t. } -\mathcal{P}_\Omega(\Lambda) \in \partial \|M\|_* \;\Longleftrightarrow\; \exists W \text{ s.t. } UV^\top + W \text{ is supported on } \Omega, \; \mathcal{P}_T(W) = 0, \text{ and } \|W\| \le 1$$
Matrix recovery 11-62

70 Optimality condition via dual certificate
A slightly stronger condition than KKT guarantees uniqueness:
Lemma: $M$ is the unique minimizer of nuclear norm minimization if
(1) the sampling operator $\mathcal{P}_\Omega$ restricted to $T$ is injective, i.e. $\mathcal{P}_\Omega(H) \ne 0$ for all nonzero $H \in T$;
(2) $\exists W$ s.t. $UV^\top + W$ is supported on $\Omega$, $\mathcal{P}_T(W) = 0$, and $\|W\| < 1$.
Matrix recovery 11-63

71 Proof of Lemma
Consider any feasible $X = M + H$, so that $\mathcal{P}_\Omega(H) = 0$. For any $W_0$ obeying $\|W_0\| \le 1$ and $\mathcal{P}_T(W_0) = 0$, one has
$$\|M + H\|_* \ge \|M\|_* + \langle UV^\top + W_0, H \rangle = \|M\|_* + \langle UV^\top + W, H \rangle + \langle W_0 - W, H \rangle$$
$$= \|M\|_* + \langle \mathcal{P}_\Omega(UV^\top + W), H \rangle + \langle \mathcal{P}_{T^\perp}(W_0 - W), H \rangle = \|M\|_* + \langle UV^\top + W, \mathcal{P}_\Omega(H) \rangle + \langle W_0 - W, \mathcal{P}_{T^\perp}(H) \rangle$$
If we take $W_0$ s.t. $\langle W_0, \mathcal{P}_{T^\perp}(H) \rangle = \|\mathcal{P}_{T^\perp}(H)\|_*$ (exercise: how to find such a $W_0$), then since $\mathcal{P}_\Omega(H) = 0$,
$$\|M + H\|_* \ge \|M\|_* + \|\mathcal{P}_{T^\perp}(H)\|_* - \|W\|\, \|\mathcal{P}_{T^\perp}(H)\|_* = \|M\|_* + (1 - \|W\|)\, \|\mathcal{P}_{T^\perp}(H)\|_* > \|M\|_*$$
unless $\mathcal{P}_{T^\perp}(H) = 0$. But if $\mathcal{P}_{T^\perp}(H) = 0$, then $H \in T$ and hence $H = 0$ by injectivity. Thus, $\|M + H\|_* > \|M\|_*$ for any feasible $H \ne 0$. This concludes the proof. Matrix recovery 11-64

72 Constructing dual certificates Use the golfing scheme to produce an approximate dual certificate (Gross 11) Think of it as an iterative algorithm (with sample splitting) to find a solution satisfying the KKT condition Matrix recovery 11-65

73 (Optional) Proximal algorithm
In the presence of noise, one needs to solve
$$\text{minimize}_X \;\; \frac{1}{2} \|y - \mathcal{A}(X)\|_2^2 + \lambda \|X\|_*$$
which can be solved via proximal methods.
Proximal operator:
$$\mathrm{prox}_{\lambda \|\cdot\|_*}(X) = \arg\min_Z \left\{ \frac{1}{2} \|Z - X\|_F^2 + \lambda \|Z\|_* \right\} = U \mathcal{T}_\lambda(\Sigma) V^\top$$
where the SVD of $X$ is $X = U \Sigma V^\top$ with $\Sigma = \mathrm{diag}(\{\sigma_i\})$, and $\mathcal{T}_\lambda(\Sigma) = \mathrm{diag}(\{(\sigma_i - \lambda)_+\})$. Matrix recovery 11-66

74 (Optional) Proximal algorithm
Algorithm 11.1 (Proximal gradient method): for $t = 0, 1, \ldots$:
$$X^{t+1} = \mathcal{T}_{\lambda \mu_t} \Big( X^t - \mu_t\, \mathcal{A}^*\big( \mathcal{A}(X^t) - y \big) \Big)$$
where $\mu_t$ is the step size / learning rate. Matrix recovery 11-67
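
Putting the pieces together, here is a minimal numpy sketch of this proximal gradient iteration for noisy matrix completion, where the prox is singular value soft-thresholding; in this case $\mathcal{A} = \mathcal{P}_\Omega$, so the gradient step uses $\mathcal{P}_\Omega(X) - Y$ and a unit step size is valid. The data, regularization level, and iteration count are our own illustrative choices:

```python
import numpy as np

def svt(X, lam):
    """Singular value soft-thresholding: the proximal operator of lam * ||.||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0)) @ Vt

np.random.seed(6)
n, r, p, lam, mu = 60, 2, 0.4, 1.0, 1.0
M = np.random.randn(n, r) @ np.random.randn(r, n)    # ground-truth rank-r matrix
mask = np.random.rand(n, n) < p                      # observed index set Omega
Y = mask * (M + 0.1 * np.random.randn(n, n))         # noisy observations P_Omega(M + noise)

X = np.zeros((n, n))
for t in range(300):
    grad = mask * X - Y                              # gradient of 0.5 * ||P_Omega(X) - Y||_F^2
    X = svt(X - mu * grad, lam * mu)                 # proximal (SVT) step; mu = 1 works since P_Omega is a projection

print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```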

75 Reference
[1] Lecture notes, Advanced topics in signal processing (ECE 8201), Y. Chi.
[2] Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, B. Recht, M. Fazel, P. Parrilo, SIAM Review, 2010.
[3] Exact matrix completion via convex optimization, E. Candes and B. Recht, Foundations of Computational Mathematics, 2009.
[4] Matrix completion from a few entries, R. Keshavan, A. Montanari, S. Oh, IEEE Transactions on Information Theory, 2010.
[5] Matrix rank minimization with applications, M. Fazel, Ph.D. Thesis, 2002.
[6] Modeling the world from internet photo collections, N. Snavely, S. Seitz, and R. Szeliski, International Journal of Computer Vision, 2008.
Matrix recovery 11-68

76 Reference
[7] Computer vision: algorithms and applications, R. Szeliski, Springer, 2010.
[8] Shape and motion from image streams under orthography: a factorization method, C. Tomasi and T. Kanade, International Journal of Computer Vision, 1992.
[9] Topics in random matrix theory, T. Tao, American Mathematical Society, 2012.
[10] A singular value thresholding algorithm for matrix completion, J. Cai, E. Candes, Z. Shen, SIAM Journal on Optimization, 2010.
[11] Recovering low-rank matrices from few coefficients in any basis, D. Gross, IEEE Transactions on Information Theory, 2011.
[12] Incoherence-optimal matrix completion, Y. Chen, IEEE Transactions on Information Theory, 2015.
Matrix recovery 11-69

77 Reference
[13] The power of convex relaxation: Near-optimal matrix completion, E. Candes and T. Tao, IEEE Transactions on Information Theory, 2010.
[14] Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements, E. Candes and Y. Plan, IEEE Transactions on Information Theory, 2011.
[15] PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming, E. Candes, T. Strohmer, and V. Voroninski, Communications on Pure and Applied Mathematics, 2013.
[16] Solving quadratic equations via PhaseLift when there are about as many equations as unknowns, E. Candes and X. Li, Foundations of Computational Mathematics, 2014.
[17] A bound on tail probabilities for quadratic forms in independent random variables, D. Hanson and F. Wright, Annals of Mathematical Statistics, 1971.
Matrix recovery 11-70

78 Reference
[18] Hanson-Wright inequality and sub-Gaussian concentration, M. Rudelson and R. Vershynin, Electronic Communications in Probability, 2013.
[19] Exact and stable covariance estimation from quadratic sampling via convex programming, Y. Chen, Y. Chi, and A. Goldsmith, IEEE Transactions on Information Theory, 2015.
[20] ROP: Matrix recovery via rank-one projections, T. Cai and A. Zhang, Annals of Statistics, 2015.
[21] Low rank matrix recovery from rank one measurements, R. Kueng, H. Rauhut, and U. Terstiege, Applied and Computational Harmonic Analysis, 2017.
[22] Mathematics of sparsity (and a few other things), E. Candes, International Congress of Mathematicians, 2014.
Matrix recovery 11-71
