BEYOND MATRIX COMPLETION

ANKUR MOITRA, MASSACHUSETTS INSTITUTE OF TECHNOLOGY
Based on joint work with Boaz Barak (MSR)

Part I: Recommendation systems and partially observed matrices

THE NETFLIX PROBLEM

A matrix of users x movies (480,189 users, 17,770 movies), with ratings of 1-5 stars, and only about 1% of the entries observed. Can we (approximately) fill in the missing entries?

MATRIX COMPLETION

Let M be an unknown, approximately low-rank matrix: a sum of a few rank-one factors (drama + comedy + sports + ...).

Model: we are given random observations M_{i,j} for all (i,j) ∈ Ω. Is there an efficient algorithm to recover M?

MATRIX COMPLETION

The natural formulation is non-convex, and NP-hard:

min rank(X)  s.t.  (1/|Ω|) Σ_{(i,j) ∈ Ω} |X_{i,j} - M_{i,j}| ≤ η

But there is a powerful, convex relaxation.

THE NUCLEAR NORM

Consider the singular value decomposition of X:

X = U Σ V^T,  with U and V orthogonal and Σ diagonal

Let σ_1 ≥ σ_2 ≥ ... ≥ σ_r > σ_{r+1} = ... = σ_m = 0 be the singular values. Then rank(X) = r, and ||X||_* = σ_1 + σ_2 + ... + σ_r (the nuclear norm).
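
A quick numpy sanity check of these definitions (a toy example of my own, not from the talk): the rank is the number of nonzero singular values, and the nuclear norm is their sum.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 5))  # rank 3 by construction

sigma = np.linalg.svd(X, compute_uv=False)   # singular values, in descending order
print("rank:", int(np.sum(sigma > 1e-10)))   # -> 3
print("nuclear norm:", sigma.sum())          # sigma_1 + ... + sigma_r
```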

This yields a convex relaxation that can be solved efficiently:

min ||X||_*  s.t.  (1/|Ω|) Σ_{(i,j) ∈ Ω} |X_{i,j} - M_{i,j}| ≤ η   (P)

[Fazel], [Srebro, Shraibman], [Recht, Fazel, Parrilo], [Candes, Recht], [Candes, Tao], [Candes, Plan], [Recht], ...

Theorem: If M is n x n, has rank r, and is C-incoherent, then (P) recovers M exactly from C^6 nr log^2 n observations.

This is nearly optimal, since there are 2nr parameters.

Many other approaches, e.g. alternating minimization: [Keshavan, Montanari, Oh], [Jain, Netrapalli, Sanghavi], [Hardt], ...
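
As an illustration, here is a minimal sketch of (P) in the noiseless case (η = 0), written with cvxpy; the toy sizes, sampling rate, and solver defaults are my own assumptions, not part of the talk.

```python
import numpy as np
import cvxpy as cp

n, r = 30, 2
rng = np.random.default_rng(0)
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r ground truth
mask = (rng.random((n, n)) < 0.5).astype(float)                # Omega as a 0/1 matrix

X = cp.Variable((n, n))
objective = cp.Minimize(cp.normNuc(X))               # the nuclear norm ||X||_*
constraints = [cp.multiply(mask, X) == mask * M]     # agree with M on observed entries
cp.Problem(objective, constraints).solve()

print("relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))
```

With enough observations (as in the theorem above), the relative error is essentially zero.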

Part II: Higher-order structure?

TENSOR PREDICTION

Can using more than two attributes lead to better recommendations? E.g. Groupon: users x activities x time, where time captures the season, time of day, weekday/weekend, etc.

T = Σ_{i=1}^{r} a_i ⊗ b_i ⊗ c_i   (e.g. a "snow sports" factor)

Can we (approximately) fill in the missing entries? More attributes lead to better recommendations, but also to more complex objects.
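
For concreteness, a small numpy sketch (sizes and random factors are my own assumptions) that builds such a rank-r third-order tensor:

```python
import numpy as np

n, r = 20, 3
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((n, r)) for _ in range(3))

# T[j, k, l] = sum_i A[j, i] * B[k, i] * C[l, i], i.e. sum_i a_i (x) b_i (x) c_i
T = np.einsum("ji,ki,li->jkl", A, B, C)
print(T.shape)  # (20, 20, 20)
```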

THE TROUBLE WITH TENSORS

Natural approach (suggested by many authors):

min ||X||_*  s.t.  (1/|Ω|) Σ_{(i,j,k) ∈ Ω} |X_{i,j,k} - T_{i,j,k}| ≤ η   (P)

where ||·||_* is now the tensor nuclear norm. But the tensor nuclear norm is NP-hard to compute! [Gurvits], [Liu], [Harrow, Montanaro]

In fact, most of the linear algebra toolkit is ill-posed, or computationally hard, for tensors. E.g. [Hillar, Lim], "Most Tensor Problems are NP-Hard":

Table I. Tractability of Tensor Problems

Problem                                      Complexity
Bivariate Matrix Functions over R, C         Undecidable (Proposition 12.2)
Bilinear System over R, C                    NP-hard (Theorems 2.6, 3.7, 3.8)
Eigenvalue over R                            NP-hard (Theorem 1.3)
Approximating Eigenvector over R             NP-hard (Theorem 1.5)
Symmetric Eigenvalue over R                  NP-hard (Theorem 9.3)
Approximating Symmetric Eigenvalue over R    NP-hard (Theorem 9.6)
Singular Value over R, C                     NP-hard (Theorem 1.7)
Symmetric Singular Value over R              NP-hard (Theorem 10.2)
Approximating Singular Vector over R, C      NP-hard (Theorem 6.3)
Spectral Norm over R                         NP-hard (Theorem 1.10)
Symmetric Spectral Norm over R               NP-hard (Theorem 10.2)
Approximating Spectral Norm over R           NP-hard (Theorem 1.11)
Nonnegative Definiteness                     NP-hard (Theorem 11.2)
Best Rank-1 Approximation                    NP-hard (Theorem 1.13)
Best Symmetric Rank-1 Approximation          NP-hard (Theorem 10.2)
Rank over R or C                             NP-hard (Theorem 8.2)
Enumerating Eigenvectors over R              #P-hard (Corollary 1.16)
Combinatorial Hyperdeterminant               NP-, #P-, VNP-hard (Theorems 4.1, 4.2, Corollary 4.3)
Geometric Hyperdeterminant                   Conjectures 1.9, 13.1
Symmetric Rank                               Conjecture 13.2
Bilinear Programming                         Conjecture 13.4
Bilinear Least Squares                       Conjecture 13.5

Note: Except for positive definiteness and the combinatorial hyperdeterminant, which apply to -tensors,

FLATTENING A TENSOR

Many tensor methods rely on flattening: an n_1 x n_2 x n_3 tensor (users x activities x time) is rearranged into an n_1 x n_2 n_3 matrix flat(T), whose (i, (j,k)) entry is T_{i,j,k}. This rearrangement of the entries into a matrix does not increase the rank:

flat( Σ_{i=1}^{r} a_i ⊗ b_i ⊗ c_i ) = Σ_{i=1}^{r} a_i vec(b_i ⊗ c_i)^T

where vec(b_i ⊗ c_i) is an n_2 n_3-dimensional vector.
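
In numpy, flattening is just a reshape. A quick sketch (reusing the toy tensor from before, with assumed sizes) checking that the matrix rank of the flattening is at most r:

```python
import numpy as np

n, r = 20, 3
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((n, r)) for _ in range(3))
T = np.einsum("ji,ki,li->jkl", A, B, C)      # the rank-r tensor from before

flat_T = T.reshape(n, n * n)                 # entry (i, (j,k)) is T[i, j, k]
print(np.linalg.matrix_rank(flat_T))         # prints 3 (= r), generically
```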

Let n_1 = n_2 = n_3 = n. We would need O(n^2 r) observations to fill in flat(T). There are many other variants of flattening, but with comparable guarantees: [Liu, Musialski, Wonka, Ye], [Gandy, Recht, Yamada], [Signoretto, De Lathauwer, Suykens], [Tomioka, Hayashi, Kashima], [Mu, Huang, Wright, Goldfarb], ...

Can we beat flattening? Can we make better predictions than we do by treating each activity x time pair as unrelated?

Part III: Nearly optimal algorithms for tensor prediction

OUR RESULTS

T = Σ_{i=1}^{r} σ_i a_i ⊗ b_i ⊗ c_i + noise,  where each σ_i is a standard Gaussian r.v.

Theorem: Suppose var(T_{i,j,k}) ≤ (r/n^3) polylog(n). Then there is an efficient algorithm that outputs an X which satisfies X_{i,j,k} = (1 ± o(1)) T_{i,j,k} for a 1 - o(1) fraction of the entries, provided m = Ω(n^{3/2} r).

This variance bound holds for random tensors, but also for tensors where the factors (the a_i's, b_i's, and c_i's) have large inner products. Even for r = n^{3/2-δ} (highly overcomplete), we only need to observe an o(1) fraction of the entries to predict almost everything.
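
A sketch of this observation model in numpy (toy sizes, and the factor/noise scalings are my own assumptions, not the paper's):

```python
import numpy as np

n, r = 20, 3
rng = np.random.default_rng(3)
A, B, C = (rng.standard_normal((n, r)) / np.sqrt(n) for _ in range(3))
sigma = rng.standard_normal(r)                         # standard Gaussian r.v.s

T = np.einsum("i,ji,ki,li->jkl", sigma, A, B, C)       # sum_i sigma_i a_i (x) b_i (x) c_i
T_noisy = T + 0.1 * T.std() * rng.standard_normal(T.shape)

m = int(n ** 1.5 * r)                                  # ~ n^{3/2} r observations
obs = rng.choice(n ** 3, size=m, replace=False)        # indices of observed entries
print(f"observing {m} of {n ** 3} entries")
```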

LOWER BOUNDS

Not only is the tensor nuclear norm hard to compute, but:

Tensor prediction with m observations → Refute random 3-SAT with m clauses

The best known refutation algorithms require m = n^{3/2} clauses, and even powerful SDP hierarchies fail with fewer: [Shor], [Nesterov], [Parrilo], [Lasserre], [Grigoriev], [Schoenebeck]

Corollary [informal]: Any algorithm in the sum-of-squares hierarchy that solves tensor prediction with m = n^{3/2-δ} r observations must run in exponential time.

Part IV: Our techniques: connections to random CSPs

Can we distinguish between low-rank and random?

Case #1: Approximately low-rank (a a^T + noise). For each (i,j) ∈ Ω,

M_{i,j} = a_i a_j with probability 3/4, and a random ±1 with probability 1/4,

where each a_i = ±1.

Case #2: Random. For each (i,j) ∈ Ω, M_{i,j} = a random ±1.

In Case #1 the entries are (somewhat) predictable, but in Case #2 they are completely unpredictable.
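
A sketch of the two cases in numpy (toy parameters of my own, and the spectral test at the end is just one natural distinguisher, not the talk's algorithm):

```python
import numpy as np

n, p_obs = 200, 0.1
rng = np.random.default_rng(2)
a = rng.choice([-1, 1], size=n)
obs = rng.random((n, n)) < p_obs                    # the observed set Omega

# Case #1: M_ij = a_i * a_j w.p. 3/4, a random sign w.p. 1/4
planted = np.where(rng.random((n, n)) < 0.75,
                   np.outer(a, a),
                   rng.choice([-1, 1], size=(n, n)))

# Case #2: every entry is an independent random sign
random_case = rng.choice([-1, 1], size=(n, n))

# The planted case leaves a noticeably larger spectral norm on Omega
for name, M in [("planted", planted), ("random", random_case)]:
    print(name, np.linalg.norm(np.where(obs, M, 0), ord=2))
```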

There are two very different communities that (essentially) attacked this same distinguishing problem: the community working on matrix completion, and the community working on refuting random CSPs.

Matrix prediction with m observations ↔ Strongly refute* random 2-XOR with m clauses

Tensor prediction with m observations ↔ Strongly refute* random 3-XOR with m clauses [Coja-Oghlan, Goerdt, Lanka]

(The connection goes through Rademacher complexity.) We then embed this algorithm into the sixth level of the sum-of-squares hierarchy, to get a relaxation for tensor prediction.

*We want an algorithm that certifies a formula is far from satisfiable.

SUMMARY

New Algorithm: We gave an algorithm for 3rd-order tensor prediction that uses m = n^{3/2} r log^4 n observations.

An Inefficient Algorithm: (via the tensor nuclear norm) There is an inefficient algorithm that uses m = nr log n observations.

A Phase Transition: Even for n^δ rounds of the powerful sum-of-squares hierarchy, no norm solves tensor prediction with m = n^{3/2-δ} r observations.

DISCUSSION

Convex programs are unreasonably effective for linear inverse problems! But we gave simple linear inverse problems that exhibit striking gaps between efficient and inefficient estimators.

Where else are there computational vs. statistical tradeoffs?

New Direction: Explore computational vs. statistical tradeoffs through the powerful sum-of-squares hierarchy.
