BEYOND MATRIX COMPLETION

1 BEYOND MATRIX COMPLETION ANKUR MOITRA MASSACHUSETTS INSTITUTE OF TECHNOLOGY Based on joint work with Boaz Barak (Harvard)

2 Part I: Recommendation systems and partially observed matrices

3 THE NETFLIX PROBLEM movies users

4 THE NETFLIX PROBLEM movies users 1-5 stars

5 THE NETFLIX PROBLEM movies users 480,189 17,770

6 THE NETFLIX PROBLEM movies users 1% observed 480,189 17,770

7 THE NETFLIX PROBLEM movies users 1% observed 480,189 17,770 Can we (approximately) fill in the missing entries?

8 MATRIX COMPLETION Let M be an unknown, approximately low-rank matrix

9 MATRIX COMPLETION Let M be an unknown, approximately low-rank matrix [figure: M drawn as a sum of rank-one components]

10 MATRIX COMPLETION Let M be an unknown, approximately low-rank matrix [figure: M with a rank-one component labeled "drama"]

11 MATRIX COMPLETION Let M be an unknown, approximately low-rank matrix [figure: M with rank-one components labeled "drama", "comedy"]

12 MATRIX COMPLETION Let M be an unknown, approximately low-rank matrix [figure: M with rank-one components labeled "drama", "comedy", "sports"]

13 MATRIX COMPLETION Let M be an unknown, approximately low-rank matrix [figure: M with rank-one components labeled "drama", "comedy", "sports"] Model: we are given random observations M_{i,j} for all (i,j) ∈ Ω

14 MATRIX COMPLETION Let M be an unknown, approximately low-rank matrix [figure: M with rank-one components labeled "drama", "comedy", "sports"] Model: we are given random observations M_{i,j} for all (i,j) ∈ Ω Is there an efficient algorithm to recover M?
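
To make the observation model concrete, here is a minimal numpy sketch (a toy instance of my own, not code from the talk): build a rank-r matrix from user and movie factors and reveal a random subset Ω of its entries.

```python
import numpy as np

# Toy instance of the observation model: M is exactly rank r, and the
# algorithm only sees the entries of M indexed by a random set Omega.
rng = np.random.default_rng(0)
n_users, n_movies, r = 200, 150, 5

U = rng.normal(size=(n_users, r))      # user factors (taste for drama, comedy, sports, ...)
V = rng.normal(size=(n_movies, r))     # movie factors
M = U @ V.T                            # unknown low-rank matrix

p = 0.01                               # fraction of entries observed (1%, as in the Netflix data)
omega = rng.random(M.shape) < p        # boolean mask playing the role of Omega
observed = np.where(omega, M, np.nan)  # what the algorithm actually gets to see
print(omega.sum(), "of", M.size, "entries observed")
```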

15 MATRIX COMPLETION The natural formulation is non-convex, and NP-hard: min rank(X) s.t. (1/|Ω|) Σ_{(i,j)∈Ω} |X_{i,j} - M_{i,j}| ≤ η

16 MATRIX COMPLETION The natural formulation is non-convex, and NP-hard: min rank(X) s.t. (1/|Ω|) Σ_{(i,j)∈Ω} |X_{i,j} - M_{i,j}| ≤ η There is a powerful, convex relaxation

17 THE NUCLEAR NORM Consider the singular value decomposition of X: X = U Σ V^T (U orthogonal, Σ diagonal, V orthogonal)

18 THE NUCLEAR NORM Consider the singular value decomposition of X: X = U Σ V^T (U orthogonal, Σ diagonal, V orthogonal) Let σ_1 ≥ σ_2 ≥ ... ≥ σ_r > σ_{r+1} = ... = σ_m = 0 be the singular values

19 THE NUCLEAR NORM Consider the singular value decomposition of X: X = U Σ V^T (U orthogonal, Σ diagonal, V orthogonal) Let σ_1 ≥ σ_2 ≥ ... ≥ σ_r > σ_{r+1} = ... = σ_m = 0 be the singular values Then rank(X) = r, and ‖X‖_* = σ_1 + σ_2 + ... + σ_r (nuclear norm)
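
As a quick sanity check of these definitions, the numpy sketch below (an illustration I added, not from the slides) computes the singular values of a low-rank matrix and reads off rank(X) and ‖X‖_*.

```python
import numpy as np

# rank(X) = number of nonzero singular values; ||X||_* = their sum.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 40))   # a rank-3 matrix

sigma = np.linalg.svd(X, compute_uv=False)   # singular values, sorted in decreasing order
rank = int(np.sum(sigma > 1e-10))            # numerical rank
nuclear_norm = sigma.sum()                   # sigma_1 + ... + sigma_r
print(rank, nuclear_norm)
```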

20 This yields a convex relaxation that can be solved efficiently: min ‖X‖_* s.t. (1/|Ω|) Σ_{(i,j)∈Ω} |X_{i,j} - M_{i,j}| ≤ η (P) [Fazel], [Srebro, Shraibman], [Recht, Fazel, Parrilo], [Candes, Recht], [Candes, Tao], [Candes, Plan], [Recht], ...

21 This yields a convex relaxation that can be solved efficiently: min ‖X‖_* s.t. (1/|Ω|) Σ_{(i,j)∈Ω} |X_{i,j} - M_{i,j}| ≤ η (P) [Fazel], [Srebro, Shraibman], [Recht, Fazel, Parrilo], [Candes, Recht], [Candes, Tao], [Candes, Plan], [Recht], ... Theorem: If M is n x n and has rank r, and is C-incoherent, then (P) recovers M exactly from C^6 n r log^2 n observations

22 This yields a convex relaxation that can be solved efficiently: min ‖X‖_* s.t. (1/|Ω|) Σ_{(i,j)∈Ω} |X_{i,j} - M_{i,j}| ≤ η (P) [Fazel], [Srebro, Shraibman], [Recht, Fazel, Parrilo], [Candes, Recht], [Candes, Tao], [Candes, Plan], [Recht], ... Theorem: If M is n x n and has rank r, and is C-incoherent, then (P) recovers M exactly from C^6 n r log^2 n observations This is nearly optimal, since there are 2nr parameters

23 This yields a convex relaxation that can be solved efficiently: min ‖X‖_* s.t. (1/|Ω|) Σ_{(i,j)∈Ω} |X_{i,j} - M_{i,j}| ≤ η (P) [Fazel], [Srebro, Shraibman], [Recht, Fazel, Parrilo], [Candes, Recht], [Candes, Tao], [Candes, Plan], [Recht], ... Theorem: If M is n x n and has rank r, and is C-incoherent, then (P) recovers M exactly from C^6 n r log^2 n observations This is nearly optimal, since there are 2nr parameters Many other approaches, e.g. alternating minimization: [Keshavan, Montanari, Oh], [Jain, Netrapalli, Sanghavi], [Hardt], ...
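
The relaxation (P) is easy to prototype. Below is a hedged sketch using the cvxpy modeling library (my choice of tooling, not something the talk prescribes): it minimizes the nuclear norm subject to an average-error bound on the observed entries.

```python
import cvxpy as cp
import numpy as np

# Sketch of (P): min ||X||_* s.t. (1/|Omega|) sum_{(i,j) in Omega} |X_ij - M_ij| <= eta.
rng = np.random.default_rng(2)
n, r = 30, 2
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))   # ground-truth low-rank matrix
W = (rng.random((n, n)) < 0.5).astype(float)            # 0/1 indicator of Omega
eta = 1e-3

X = cp.Variable((n, n))
avg_err = cp.sum(cp.abs(cp.multiply(W, X - M))) / W.sum()
prob = cp.Problem(cp.Minimize(cp.norm(X, "nuc")), [avg_err <= eta])
prob.solve()
print("relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))
```

With enough observed entries the recovered X matches M to within the noise level η, which is the content of the theorem above (here with η essentially zero).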

24 Part II: Higher order structure?

25 TENSOR PREDICTION Can using more than two attributes lead to better recommendations?

26 TENSOR PREDICTION Can using more than two attributes lead to better recommendations? users x activities (e.g. Groupon)

27 TENSOR PREDICTION Can using more than two attributes lead to better recommendations? users x activities x time (e.g. Groupon); time: season, time of day, weekday/weekend, etc.

28 TENSOR PREDICTION Can using more than two attributes lead to better recommendations? users x activities x time (e.g. Groupon), T = Σ_{i=1}^r (rank-one components, e.g. one for "snow sports"); time: season, time of day, weekday/weekend, etc.

29 TENSOR PREDICTION Can using more than two attributes lead to better recommendations? users x activities x time: T = Σ_{i=1}^r a_i ⊗ b_i ⊗ c_i

30 TENSOR PREDICTION Can using more than two attributes lead to better recommendations? users x activities x time: T = Σ_{i=1}^r a_i ⊗ b_i ⊗ c_i Can we (approximately) fill in the missing entries?

31 TENSOR PREDICTION Can using more than two attributes lead to better recommendations? users x activities x time: T = Σ_{i=1}^r a_i ⊗ b_i ⊗ c_i Can we (approximately) fill in the missing entries? More attributes lead to better recommendations, but more complex objects
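
For concreteness, here is a small numpy sketch of the model T = Σ_i a_i ⊗ b_i ⊗ c_i and its partial observations (again a toy instance I constructed, not code from the talk):

```python
import numpy as np

# Build a rank-r third-order tensor over users x activities x time and
# reveal only a small random fraction of its entries.
rng = np.random.default_rng(3)
n1, n2, n3, r = 40, 30, 20, 4

A = rng.normal(size=(n1, r))   # user factors a_i (columns of A)
B = rng.normal(size=(n2, r))   # activity factors b_i
C = rng.normal(size=(n3, r))   # time factors c_i

# T[i,j,k] = sum_l A[i,l] * B[j,l] * C[k,l], i.e. T = sum_l a_l (x) b_l (x) c_l
T = np.einsum("il,jl,kl->ijk", A, B, C)

omega = rng.random(T.shape) < 0.05   # observed entries
print(T.shape, omega.sum(), "observed entries")
```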

32 THE TROUBLE WITH TENSORS Natural approach (suggested by many authors): min ‖X‖_* (tensor nuclear norm) s.t. (1/|Ω|) Σ_{(i,j,k)∈Ω} |X_{i,j,k} - T_{i,j,k}| ≤ η (P)

33 THE TROUBLE WITH TENSORS Natural approach (suggested by many authors): min ‖X‖_* (tensor nuclear norm) s.t. (1/|Ω|) Σ_{(i,j,k)∈Ω} |X_{i,j,k} - T_{i,j,k}| ≤ η (P) The tensor nuclear norm is NP-hard to compute! [Gurvits], [Liu], [Harrow, Montanaro]

34 In fact, most of the linear algebra toolkit is ill-posed or computationally hard for tensors

35 In fact, most of the linear algebra toolkit is ill-posed or computationally hard for tensors e.g. [Hillar, Lim], "Most Tensor Problems are NP-Hard"

36 In fact, most of the linear algebra toolkit is ill-posed or computationally hard for tensors e.g. [Hillar, Lim], "Most Tensor Problems are NP-Hard"
Table I. Tractability of Tensor Problems (Problem: Complexity)
Bivariate Matrix Functions over R, C: Undecidable (Proposition 12.2)
Bilinear System over R, C: NP-hard (Theorems 2.6, 3.7, 3.8)
Eigenvalue over R: NP-hard (Theorem 1.3)
Approximating Eigenvector over R: NP-hard (Theorem 1.5)
Symmetric Eigenvalue over R: NP-hard (Theorem 9.3)
Approximating Symmetric Eigenvalue over R: NP-hard (Theorem 9.6)
Singular Value over R, C: NP-hard (Theorem 1.7)
Symmetric Singular Value over R: NP-hard (Theorem 10.2)
Approximating Singular Vector over R, C: NP-hard (Theorem 6.3)
Spectral Norm over R: NP-hard (Theorem 1.10)
Symmetric Spectral Norm over R: NP-hard (Theorem 10.2)
Approximating Spectral Norm over R: NP-hard (Theorem 1.11)
Nonnegative Definiteness: NP-hard (Theorem 11.2)
Best Rank-1 Approximation: NP-hard (Theorem 1.13)
Best Symmetric Rank-1 Approximation: NP-hard (Theorem 10.2)
Rank over R or C: NP-hard (Theorem 8.2)
Enumerating Eigenvectors over R: #P-hard (Corollary 1.16)
Combinatorial Hyperdeterminant: NP-, #P-, VNP-hard (Theorems 4.1, 4.2, Corollary 4.3)
Geometric Hyperdeterminant: Conjectures 1.9, 13.1
Symmetric Rank: Conjecture 13.2
Bilinear Programming: Conjecture 13.4
Bilinear Least Squares: Conjecture 13.5
Note: Except for positive definiteness and the combinatorial hyperdeterminant, which apply to -tensors,

37 FLATTENING A TENSOR Many tensor methods rely on flattening:

38 FLATTENING A TENSOR Many tensor methods rely on flattening: flat( ) maps an n_1 x n_2 x n_3 tensor to an n_1 x (n_2 n_3) matrix

39 FLATTENING A TENSOR Many tensor methods rely on flattening: flat( ) maps the users x activities x time tensor to an n_1 x (n_2 n_3) matrix

40 FLATTENING A TENSOR Many tensor methods rely on flattening: flat(T) is the n_1 x (n_2 n_3) matrix with entries flat(T)_{i,(j,k)} = T_{i,j,k}

41 FLATTENING A TENSOR Many tensor methods rely on flattening: flat(T) is the n_1 x (n_2 n_3) matrix with entries flat(T)_{i,(j,k)} = T_{i,j,k} This is a rearrangement of the entries into a matrix that does not increase the rank

42 FLATTENING A TENSOR Many tensor methods rely on flattening: flat(T)_{i,(j,k)} = T_{i,j,k}, and flat( Σ_{i=1}^r a_i ⊗ b_i ⊗ c_i ) = Σ_{i=1}^r a_i vec(b_i ⊗ c_i)^T, where each vec(b_i ⊗ c_i) is an (n_2 n_3)-dimensional vector
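
The sketch below illustrates the flattening map and the fact that it cannot increase the rank (my own numpy example, with a small made-up shape):

```python
import numpy as np

# flat(T)[i, (j,k)] = T[i,j,k]: group the last two indices into one column index.
rng = np.random.default_rng(4)
n1, n2, n3, r = 40, 30, 20, 4
A, B, C = (rng.normal(size=(n, r)) for n in (n1, n2, n3))
T = np.einsum("il,jl,kl->ijk", A, B, C)        # rank-r tensor

flat_T = T.reshape(n1, n2 * n3)                # the n1 x (n2*n3) flattening
# flat(sum_i a_i (x) b_i (x) c_i) = sum_i a_i vec(b_i (x) c_i)^T, so rank(flat_T) <= r
print(np.linalg.matrix_rank(flat_T))           # prints 4 here
```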

43 Let n_1 = n_2 = n_3 = n. We would need O(n^2 r) observations to fill in flat(T)

44 Let n_1 = n_2 = n_3 = n. We would need O(n^2 r) observations to fill in flat(T) There are many other variants of flattening, but with comparable guarantees [Liu, Musialski, Wonka, Ye], [Gandy, Recht, Yamada], [Signoretto, De Lathauwer, Suykens], [Tomioka, Hayashi, Kashima], [Mu, Huang, Wright, Goldfarb], ...

45 Let n_1 = n_2 = n_3 = n. We would need O(n^2 r) observations to fill in flat(T) There are many other variants of flattening, but with comparable guarantees [Liu, Musialski, Wonka, Ye], [Gandy, Recht, Yamada], [Signoretto, De Lathauwer, Suykens], [Tomioka, Hayashi, Kashima], [Mu, Huang, Wright, Goldfarb], ... Can we beat flattening?

46 Let n_1 = n_2 = n_3 = n. We would need O(n^2 r) observations to fill in flat(T) There are many other variants of flattening, but with comparable guarantees [Liu, Musialski, Wonka, Ye], [Gandy, Recht, Yamada], [Signoretto, De Lathauwer, Suykens], [Tomioka, Hayashi, Kashima], [Mu, Huang, Wright, Goldfarb], ... Can we beat flattening? Can we make better predictions than we do by treating each activity x time pair as unrelated?

47 Part III: Nearly optimal algorithms for tensor prediction

48 OUR RESULTS T = Σ_{i=1}^r σ_i a_i ⊗ b_i ⊗ c_i + noise (σ_i: standard Gaussian r.v.)

49 OUR RESULTS T = Σ_{i=1}^r σ_i a_i ⊗ b_i ⊗ c_i + noise (σ_i: standard Gaussian r.v.) Theorem: Suppose var(T_{i,j,k}) ≤ r. Then there is an efficient algorithm that outputs X which satisfies: X_{i,j,k} = (1 ± o(1)) T_{i,j,k} for a 1 - o(1) fraction of entries, provided m = Ω(n^{3/2} r)

50 OUR RESULTS T = Σ_{i=1}^r σ_i a_i ⊗ b_i ⊗ c_i + noise (σ_i: standard Gaussian r.v.) Theorem: Suppose var(T_{i,j,k}) ≤ r. Then there is an efficient algorithm that outputs X which satisfies: X_{i,j,k} = (1 ± o(1)) T_{i,j,k} for a 1 - o(1) fraction of entries, provided m = Ω(n^{3/2} r) This variance bound holds for random tensors, but also for tensors where the factors (the a_i's, b_i's, c_i's) have large inner product

51 OUR RESULTS T = Σ_{i=1}^r σ_i a_i ⊗ b_i ⊗ c_i + noise (σ_i: standard Gaussian r.v.) Theorem: Suppose var(T_{i,j,k}) ≤ r. Then there is an efficient algorithm that outputs X which satisfies: X_{i,j,k} = (1 ± o(1)) T_{i,j,k} for a 1 - o(1) fraction of entries, provided m = Ω(n^{3/2} r) Even for r = n^{3/2 - δ} (highly overcomplete), we only need to observe an o(1) fraction of the entries to predict almost everything

52 LOWER BOUNDS Not only is the tensor nuclear norm hard to compute, but

53 LOWER BOUNDS Not only is the tensor nuclear norm hard to compute, but Tensor prediction with m observations ⇒ Refute random 3-SAT with m clauses

54 LOWER BOUNDS Not only is the tensor nuclear norm hard to compute, but Tensor prediction with m observations ⇒ Refute random 3-SAT with m clauses The best known algorithms require m = n^{3/2} clauses, and even powerful SDP hierarchies fail with fewer clauses [Shor], [Nesterov], [Parrilo], [Lasserre], [Grigoriev], [Schoenebeck]

55 LOWER BOUNDS Not only is the tensor nuclear norm hard to compute, but Tensor prediction with m observations ⇒ Refute random 3-SAT with m clauses The best known algorithms require m = n^{3/2} clauses, and even powerful SDP hierarchies fail with fewer clauses [Shor], [Nesterov], [Parrilo], [Lasserre], [Grigoriev], [Schoenebeck] Corollary [informal]: Any algorithm for solving tensor prediction in the sum-of-squares hierarchy that uses m = n^{3/2 - δ} r observations must run in exponential time

56 Part IV: Our techniques: connections to random CSPs

57 Can we distinguish between low-rank and random?

58 Can we distinguish between low-rank and random? Case #1: Approximately low-rank, aa^T + noise

59 Can we distinguish between low-rank and random? Case #1: Approximately low-rank, aa^T + noise For each (i,j) ∈ Ω, M_{i,j} = a_i a_j w/ probability 3/4, random ±1 w/ probability 1/4, where each a_i = ±1

60 Can we distinguish between low-rank and random?

61 Can we distinguish between low-rank and random? Case #2: Random For each (i,j) ∈ Ω, M_{i,j} = random ±1

62 Can we distinguish between low-rank and random? Case #2: Random For each (i,j) ∈ Ω, M_{i,j} = random ±1 In Case #1 the entries are (somewhat) predictable, but in Case #2 they are completely unpredictable
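
A toy generator for the two cases (my own illustration, with made-up sizes) makes the distinguishing problem concrete; the spectral certificate sketched after the refutation lemma below is one way to tell them apart.

```python
import numpy as np

# Case #1: entries correlated with a hidden rank-one +/-1 matrix a a^T.
# Case #2: completely random +/-1 entries on the same observation set Omega.
rng = np.random.default_rng(5)
n, m = 500, 20000
a = rng.choice([-1, 1], size=n)                                # hidden +/-1 vector
rows, cols = rng.integers(0, n, size=m), rng.integers(0, n, size=m)

keep = rng.random(m) < 0.75                                    # agree with a_i * a_j w.p. 3/4
case1 = np.where(keep, a[rows] * a[cols], rng.choice([-1, 1], size=m))
case2 = rng.choice([-1, 1], size=m)                            # pure noise
print(case1[:10], case2[:10])
```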

63 There are two very different communities that (essentially) attacked this same distinguishing problem:

64 There are two very different communities that (essentially) attacked this same distinguishing problem: The community working on matrix completion

65 There are two very different communities that (essentially) attacked this same distinguishing problem: The community working on matrix completion The community working on refuting random CSPs

66 AN INTERPRETATION We can interpret (i_1, j_1; σ_1), (i_2, j_2; σ_2), ..., (i_m, j_m; σ_m), where each σ_ℓ is a ±1 r.v., as a random 2-XOR formula ψ

67 AN INTERPRETATION We can interpret (i_1, j_1; σ_1), (i_2, j_2; σ_2), ..., (i_m, j_m; σ_m), where each σ_ℓ is a ±1 r.v., as a random 2-XOR formula ψ In particular, each observation/function value maps to a clause: (i, j, σ) ↦ v_i · v_j = σ (v_i, v_j are the variables, σ the constraint)

68 AN INTERPRETATION We can interpret (i_1, j_1; σ_1), (i_2, j_2; σ_2), ..., (i_m, j_m; σ_m), where each σ_ℓ is a ±1 r.v., as a random 2-XOR formula ψ (and vice versa) In particular, each observation/function value maps to a clause: (i, j, σ) ↦ v_i · v_j = σ (v_i, v_j are the variables, σ the constraint)
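
The dictionary is literal enough to write down. The hypothetical helper below (my own illustration) reads each observation (i, j; σ) as the clause v_i · v_j = σ and scores an assignment by the fraction of clauses it satisfies.

```python
import numpy as np

def satisfied_fraction(clauses, v):
    """clauses: list of (i, j, sigma) with sigma in {-1, +1}; v: a +/-1 assignment vector."""
    return float(np.mean([v[i] * v[j] == sigma for i, j, sigma in clauses]))

# Two contradictory clauses on the same pair of variables: no assignment beats 1/2.
clauses = [(0, 1, +1), (0, 1, -1)]
print(satisfied_fraction(clauses, np.array([+1, +1])))   # 0.5
```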

69 STRONG REFUTATION We will say that an algorithm strongly refutes* random 2-XOR with m clauses if:

70 STRONG REFUTATION We will say that an algorithm strongly refutes* random 2-XOR with m clauses if: (1) On any 2-XOR formula ψ, it outputs val(ψ) where: OPT(ψ) ≤ val(ψ)

71 STRONG REFUTATION We will say that an algorithm strongly refutes* random 2-XOR with m clauses if: (1) On any 2-XOR formula ψ, it outputs val(ψ) where: OPT(ψ) ≤ val(ψ), and OPT(ψ) is the largest fraction of clauses of ψ that can be satisfied

72 STRONG REFUTATION We will say that an algorithm strongly refutes* random 2-XOR with m clauses if: (1) On any 2-XOR formula ψ, it outputs val(ψ) where: OPT(ψ) ≤ val(ψ)

73 STRONG REFUTATION We will say that an algorithm strongly refutes* random 2-XOR with m clauses if: (1) On any 2-XOR formula ψ, it outputs val(ψ) where: OPT(ψ) ≤ val(ψ) (2) With high probability (for random ψ with m clauses): val(ψ) = 1/2 + o(1)

74 Lemma: If (i_1, j_1; σ_1), ..., (i_m, j_m; σ_m) ↦ ψ and A is the n x n matrix with A_{i_ℓ, j_ℓ} = σ_ℓ, then OPT(ψ) ≤ 1/2 + (n/2m) ‖A‖

75 Lemma: If (i_1, j_1; σ_1), ..., (i_m, j_m; σ_m) ↦ ψ and A is the n x n matrix with A_{i_ℓ, j_ℓ} = σ_ℓ, then OPT(ψ) ≤ 1/2 + (n/2m) ‖A‖ Proof: Map the assignment to a unit vector x so that x_i = ±1/√n and take the quadratic form on A

76 Lemma: If (i_1, j_1; σ_1), ..., (i_m, j_m; σ_m) ↦ ψ and A is the n x n matrix with A_{i_ℓ, j_ℓ} = σ_ℓ, then OPT(ψ) ≤ 1/2 + (n/2m) ‖A‖ Proof: Map the assignment to a unit vector x so that x_i = ±1/√n and take the quadratic form on A W.h.p. (1/m) ‖A‖ ≲ 1/√(mn)

77 Lemma: If (i_1, j_1; σ_1), ..., (i_m, j_m; σ_m) ↦ ψ and A is the n x n matrix with A_{i_ℓ, j_ℓ} = σ_ℓ, then OPT(ψ) ≤ 1/2 + (n/2m) ‖A‖ Proof: Map the assignment to a unit vector x so that x_i = ±1/√n and take the quadratic form on A W.h.p. (1/m) ‖A‖ ≲ 1/√(mn), so m = ω(n) ⇒ OPT(ψ) ≤ 1/2 + o(1)

78 Lemma: If (i_1, j_1; σ_1), ..., (i_m, j_m; σ_m) ↦ ψ and A is the n x n matrix with A_{i_ℓ, j_ℓ} = σ_ℓ, then OPT(ψ) ≤ 1/2 + (n/2m) ‖A‖ Proof: Map the assignment to a unit vector x so that x_i = ±1/√n and take the quadratic form on A W.h.p. (1/m) ‖A‖ ≲ 1/√(mn), so m = ω(n) ⇒ OPT(ψ) ≤ 1/2 + o(1) This solves the strong refutation problem
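
Here is a short sketch of that certificate as I read the lemma (the constants are my reconstruction of a garbled slide, so treat them as indicative): place σ_ℓ at entry (i_ℓ, j_ℓ) of an n x n matrix A and report 1/2 + (n/2m)·‖A‖ as an upper bound on OPT(ψ).

```python
import numpy as np

# For random clauses the spectral norm of A is small, so the certified upper
# bound on OPT(psi) approaches 1/2 as m/n grows.
rng = np.random.default_rng(6)
n, m = 300, 30000
i, j = rng.integers(0, n, size=m), rng.integers(0, n, size=m)
sigma = rng.choice([-1, 1], size=m)

A = np.zeros((n, n))
np.add.at(A, (i, j), sigma)                        # repeated (i,j) pairs accumulate

bound = 0.5 + n * np.linalg.norm(A, 2) / (2 * m)   # valid upper bound for every assignment
print("certified: OPT(psi) <=", round(bound, 3))
```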

79 There are two very different communities that (essentially) attacked this same distinguishing problem: The community working on matrix completion The community working on refuting random CSPs

80 There are two very different communities that (essentially) attacked this same distinguishing problem: The community working on matrix completion The community working on refuting random CSPs Noisy matrix completion with m observations <--Rademacher complexity--> Strongly refute* random 2-XOR/2-SAT with m clauses *Want an algorithm that certifies a formula is far from satisfiable

81 There are two very different communities that (essentially) attacked this same distinguishing problem: The community working on matrix completion The community working on refuting random CSPs Noisy tensor completion with m observations <--Rademacher complexity--> Strongly refute* random 3-XOR/3-SAT with m clauses *Want an algorithm that certifies a formula is far from satisfiable

82 There are two very different communities that (essentially) attacked this same distinguishing problem: The community working on matrix completion The community working on refuting random CSPs Noisy tensor completion with m observations <--Rademacher complexity--> Strongly refute* random 3-XOR/3-SAT with m clauses [Coja-Oghlan, Goerdt, Lanka] *Want an algorithm that certifies a formula is far from satisfiable

83 There are two very different communities that (essentially) attacked this same distinguishing problem: The community working on matrix completion The community working on refuting random CSPs Noisy tensor completion with m observations <--Embedding in SOS--> Strongly refute* random 3-XOR/3-SAT with m clauses [Coja-Oghlan, Goerdt, Lanka] We then embed this algorithm into the sixth level of the sum-of-squares hierarchy, to get a relaxation for tensor prediction *Want an algorithm that certifies a formula is far from satisfiable

84 GENERALIZATION BOUNDS Suppose we are given |Ω| = m noisy observations T_{i,j,k} ± η, and the factors of T are C-incoherent. Theorem: There is an efficient algorithm that, with prob 1 - δ, outputs X with (1/n^3) Σ_{i,j,k} |X_{i,j,k} - T_{i,j,k}| ≤ C^3 r √(n^{3/2} log^4 n / m) + 2 C^3 r √(ln(2/δ) / m) + 2η

85 GENERALIZATION BOUNDS Suppose we are given |Ω| = m noisy observations T_{i,j,k} ± η, and the factors of T are C-incoherent. Theorem: There is an efficient algorithm that, with prob 1 - δ, outputs X with (1/n^3) Σ_{i,j,k} |X_{i,j,k} - T_{i,j,k}| ≤ C^3 r √(n^{3/2} log^4 n / m) + 2 C^3 r √(ln(2/δ) / m) + 2η This comes from giving an efficiently computable norm K whose Rademacher complexity is asymptotically smaller than the trivial bound whenever m = ω(n^{3/2} log^4 n)

86 SUMMARY New Algorithm: We gave an algorithm for 3rd-order tensor prediction that uses m = n^{3/2} r log^4 n observations

87 SUMMARY New Algorithm: We gave an algorithm for 3rd-order tensor prediction that uses m = n^{3/2} r log^4 n observations An Inefficient Algorithm: (via the tensor nuclear norm) There is an inefficient algorithm that uses m = n r log n observations

88 SUMMARY New Algorithm: We gave an algorithm for 3rd-order tensor prediction that uses m = n^{3/2} r log^4 n observations An Inefficient Algorithm: (via the tensor nuclear norm) There is an inefficient algorithm that uses m = n r log n observations A Phase Transition: Even for n^δ rounds of the powerful sum-of-squares hierarchy, no norm solves tensor prediction with m = n^{3/2 - δ} r observations

89 Epilogue: New directions in linear inverse problems

90 Robust PCA [Candes et al.], [Chandrasekaran et al.], ... Can we recover a low-rank matrix from sparse corruptions?

91 Robust PCA [Candes et al.], [Chandrasekaran et al.], ... Can we recover a low-rank matrix from sparse corruptions? Superresolution, compressed sensing off the grid [Candes, Fernandez-Granda], [Tang et al.], ... Can we recover well-separated points f_1, f_2, f_3, f_4 from low-frequency measurements?

92 DISCUSSION Convex programs are unreasonably effective for linear inverse problems!

93 DISCUSSION Convex programs are unreasonably effective for linear inverse problems! But we gave simple linear inverse problems that exhibit striking gaps between efficient and inefficient estimators

94 DISCUSSION Convex programs are unreasonably effective for linear inverse problems! But we gave simple linear inverse problems that exhibit striking gaps between efficient and inefficient estimators Where else are there computational vs. statistical tradeoffs?

95 DISCUSSION Convex programs are unreasonably effective for linear inverse problems! But we gave simple linear inverse problems that exhibit striking gaps between efficient and inefficient estimators Where else are there computational vs. statistical tradeoffs? New Direction: Explore computational vs. statistical tradeoffs through the powerful sum-of-squares hierarchy
