
1 Matrix completion: Fundamental limits and efficient algorithms. Sewoong Oh, Stanford University

2 Low-rank matrix completion
[Figure: a low-rank data matrix and its sparse sampled matrix]
Complete the matrix from a small number of sampled entries.

3 Example 1. Recommendation system
[Figure: users × movies rating matrix with a few known entries (1, 4, ...) and many unknown '?' entries]
Better recommendations improve customer experience.
Less than 1% of the entries are known.
Goal: predict the (9.9 billion) unknown ratings. 10^6 queries.

4 Example 2. Positioning
[Figure: sensor network and its distance matrix]
Location information is needed to route packets, for habitat monitoring, etc.
Only distances between close-by sensors are measured.
Q: How can we find the sensor positions up to a rigid motion?
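The low-rank structure here comes from geometry: the matrix of squared pairwise distances of points in d dimensions has rank at most d + 2. A quick numpy check of that fact (my own illustration, not from the talk):

```python
import numpy as np

# Squared-distance matrices of points in d dimensions have rank at most d + 2,
# which is what makes positioning a low-rank matrix completion problem.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))            # 200 sensors in the unit square
G = X @ X.T                               # Gram matrix of positions
sq_norms = np.diag(G)
D = sq_norms[:, None] + sq_norms[None, :] - 2 * G   # D_ij = ||x_i - x_j||^2
print(np.linalg.matrix_rank(D))           # prints 4 (= d + 2 for d = 2)
```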

5 Low-rank matrix completion
More applications:
Ultrasound tomography: calibration
Computer vision: structure-from-motion
Molecular biology: microarray
Theoretical computer science: fast low-rank approximations
etc.

6 Outline 1 Background 2 Algorithm and main results 3 Applications

7 Background

8 The model
Low-rank: M = U Σ V^T, where M is an αn × n matrix of rank r, U is αn × r, and V is n × r.
Random sample set E (uniformly random for fixed |E|).
Sample noise matrix Z.
Sample matrix N^E = (M + Z)_E:
N^E_ij = M_ij + Z_ij if (i, j) ∈ E, and 0 otherwise.
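As a concrete reference, a minimal numpy sketch of this sampling model; the function name and the singular-value range are my own choices, not from the talk:

```python
import numpy as np

def sample_low_rank(alpha_n, n, r, num_samples, sigma_z=0.0, seed=0):
    """Draw a rank-r matrix M = U @ S @ V.T of size alpha_n x n, a uniformly
    random sample set E of the given size, and the observed values of
    N^E = (M + Z)_E, returned as (row, col, value) triplets."""
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((alpha_n, r))
    V = rng.standard_normal((n, r))
    M = U @ np.diag(rng.uniform(1.0, 2.0, size=r)) @ V.T
    # Uniformly random sample set E, drawn without replacement.
    idx = rng.choice(alpha_n * n, size=num_samples, replace=False)
    rows, cols = np.unravel_index(idx, (alpha_n, n))
    vals = M[rows, cols] + sigma_z * rng.standard_normal(num_samples)
    return M, rows, cols, vals
```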

10 Pathological example
M = [1, 0, ..., 0]^T [1, 0, ..., 0], i.e., M_11 = 1 and every other entry is 0.
P(sampling M_11) = |E|/(αn²), so unless |E| is of order n², the samples almost surely miss the only informative entry.

11 Pathological example (continued)
We focus on incoherent matrices M = U Σ V^T, which have well-balanced singular vectors U and V:
A0. Σ_{k=1}^r U_ik² ≤ μr/(αn) and Σ_{k=1}^r V_jk² ≤ μr/n, for all i, j
A1. |Σ_{k=1}^r U_ik V_jk| ≤ μ√r/n, for all i, j
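For condition A0, a small helper (mine, not from the talk) that computes the smallest μ making the row-norm bounds hold for the rank-r SVD of a given matrix:

```python
import numpy as np

def incoherence_mu(M, r):
    """Smallest mu such that A0 holds for the rank-r SVD of M:
    sum_k U_ik^2 <= mu*r/(alpha*n) and sum_k V_jk^2 <= mu*r/n,
    where M has shape (alpha*n, n)."""
    alpha_n, n = M.shape
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    U, V = U[:, :r], Vt[:r].T
    mu_U = (U ** 2).sum(axis=1).max() * alpha_n / r
    mu_V = (V ** 2).sum(axis=1).max() * n / r
    return max(mu_U, mu_V)
```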

12 Previous work
[Candès, Recht 08] Semidefinite programming (SDP) for matrix completion [Fazel 02].
In the noiseless case, SDP reconstructs M exactly with high probability if |E| ≥ C(α, μ) r n^{6/5} log n.
Open problems:
1. Complexity: SDP is computationally complex
2. Optimality: n^{1/5} gap from the lower bound Ω(n log n)
3. Noise: cannot deal with noise

13 Main contributions: OptSpace 1 Complexity: low complexity 2 Optimality: order-optimal 3 Noise: robust against noise

14 Example: rank-8 random matrix (noiseless)
[Figure, animated over several slides: the low-rank matrix M, the sampled matrix M^E, the OptSpace output M̂, and the squared error (M̂ − M)² as the percentage of sampled entries grows]

21 Algorithmic aspect

22 Naïve approach fails
Singular value decomposition (SVD): compute the rank-r approximation M_SVD using the SVD. SVD is the optimal thing to do if we have the complete matrix:
N^E = Σ_{k=1}^n σ_k x_k y_k^T
M_SVD = (αn²/|E|) Σ_{k=1}^r σ_k x_k y_k^T
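In code, the rescaled rank-r projection might look like the sketch below, using scipy's sparse SVD; the factor αn²/|E| compensates for the zeroed-out entries (function name is mine):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

def rank_r_svd_estimate(rows, cols, vals, shape, r, num_samples):
    """M_SVD = (alpha*n^2/|E|) * sum_{k<=r} sigma_k x_k y_k^T, computed from
    the top-r singular triplets of the sparse sample matrix."""
    alpha_n, n = shape
    NE = csr_matrix((vals, (rows, cols)), shape=shape)
    x, s, yt = svds(NE, k=r)                 # top-r singular triplets
    return (alpha_n * n / num_samples) * (x * s) @ yt
```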

25 Naïve approach fails
Define deg(row_i) = # of samples in row i.
deg(row_i) are i.i.d. Binom(n, p), where p = |E|/(αn²).
For |E| = O(n), the maximum degree is Ω(log n/(log log n)), so N^E has spurious singular values of order Ω(√(log n/(log log n))).

26 Trimming
Ñ^E_ij = 0 if deg(row_i) > 2|E|/(αn); 0 if deg(col_j) > 2|E|/n; N^E_ij otherwise.
deg(·) is the number of samples in that row/column.
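A direct transcription of this rule into numpy (function name mine), operating on the (row, col, value) sample triplets:

```python
import numpy as np

def trim(rows, cols, vals, shape, num_samples):
    """Drop samples in rows with deg > 2|E|/(alpha*n) and in columns with
    deg > 2|E|/n, i.e. zero out over-represented rows and columns."""
    alpha_n, n = shape
    row_deg = np.bincount(rows, minlength=alpha_n)
    col_deg = np.bincount(cols, minlength=n)
    keep = (row_deg[rows] <= 2 * num_samples / alpha_n) & \
           (col_deg[cols] <= 2 * num_samples / n)
    return rows[keep], cols[keep], vals[keep]
```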

29 The algorithm: OptSpace
Input: sample indices E, sample values N^E, rank r
Output: estimate M̂
1: Trimming
2: SVD (M_SVD can be computed efficiently for sparse matrices)
3: Greedy minimization

31 Main results
Theorem 1 (Keshavan, Montanari, Oh 09). For any E, M_SVD achieves, with high probability,
RMSE ≤ C M_max √(nr/|E|)  [missing entries]  +  C′ (n√r/|E|) ‖Z^E‖₂  [sample noise]
where RMSE = ( (1/(αn²)) Σ_{i,j} (M − M_SVD)_ij² )^{1/2}, M_max = max_{i,j} |M_ij|, and ‖·‖₂ is the spectral norm.
Keshavan, Montanari, Oh, NIPS 09; Keshavan, Montanari, Oh, Journal of Machine Learning Research (submitted).
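The RMSE in the theorem is just the entrywise root mean square error, since αn² is the number of entries; for reference:

```python
import numpy as np

def rmse(M, M_hat):
    """( (1/(alpha*n^2)) * sum_ij (M - M_hat)_ij^2 )^(1/2), i.e. the
    entrywise RMS error over all alpha*n^2 entries."""
    return float(np.sqrt(np.mean((M - M_hat) ** 2)))
```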

33 Noiseless case
Theorem 1 (Keshavan, Montanari, Oh 09). For any E, M_SVD achieves, with high probability, RMSE ≤ C M_max √(nr/|E|)  [missing entries].
SVD [Achlioptas, McSherry 07]: if |E| ≥ n(8 log n)^4, with high probability, RMSE ≤ 4 M_max √(nr/|E|).
For n = 10^5, n(8 log n)^4 exceeds the n² entries of the matrix, so the condition asks for more samples than there are entries.
Keshavan, Montanari, Oh, NIPS 09; Keshavan, Montanari, Oh, Journal of Machine Learning Research (submitted).

34 Noiseless case
Netflix dataset: a single user rated 17,000 movies; Miss Congeniality alone received 200,000 ratings. Real sample sets are far from uniform, which is what the trimming step guards against.

35 Can we do better?

36 Greedy minimization
minimize F(X, Y) subject to X^T X = I, Y^T Y = I
F(X, Y) ≡ min_{S ∈ R^{r×r}} Σ_{(i,j)∈E} (N^E_ij − (X S Y^T)_ij)²
F(X, Y) only depends on the column spaces of X and Y, so perform gradient descent on the Grassmann manifold. Each step can be computed efficiently for sparse matrices; see the sketch of the inner minimization below.
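The inner minimization over S is an ordinary least-squares problem, since each observed entry of X S Y^T is linear in the r² entries of S. A minimal sketch (function name mine); the outer Grassmann gradient step is not shown:

```python
import numpy as np

def F_value(X, Y, rows, cols, vals):
    """F(X, Y) = min_{S in R^{r x r}} sum_{(i,j) in E} (N^E_ij - (X S Y^T)_ij)^2.

    For sample k, (X S Y^T)_ij = sum_{a,b} X[i,a] * S[a,b] * Y[j,b], which is
    linear in vec(S), so the optimal S solves a least-squares problem."""
    r = X.shape[1]
    # One row of the design matrix per observed entry, one column per S entry.
    A = np.einsum('ka,kb->kab', X[rows], Y[cols]).reshape(len(vals), r * r)
    s_flat, *_ = np.linalg.lstsq(A, vals, rcond=None)
    residual = vals - A @ s_flat
    return float(residual @ residual), s_flat.reshape(r, r)
```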

38 The algorithm: OptSpace
Input: sample indices E, sample values N^E, rank r
Output: estimate M̂
1: Trimming
2: SVD
3: Greedy minimization
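Putting the pieces together, a compact end-to-end sketch built from the helpers defined above. Note the refinement step here alternates least-squares updates of the two factors, a simplification standing in for the talk's gradient descent on the Grassmann manifold:

```python
import numpy as np

def optspace_sketch(rows, cols, vals, shape, r, n_iter=30):
    """End-to-end sketch: trim, rescaled rank-r SVD, then refinement over the
    observed entries via alternating least squares (a stand-in for the
    Grassmann-manifold gradient descent used in the talk)."""
    rows, cols, vals = trim(rows, cols, vals, shape, len(vals))
    M0 = rank_r_svd_estimate(rows, cols, vals, shape, r, len(vals))
    U, _, Vt = np.linalg.svd(M0, full_matrices=False)
    A, B = U[:, :r].copy(), Vt[:r].T.copy()   # initial factors from step 2
    for _ in range(n_iter):
        for i in range(shape[0]):             # update row factor, B fixed
            m = rows == i
            if m.any():
                A[i] = np.linalg.lstsq(B[cols[m]], vals[m], rcond=None)[0]
        for j in range(shape[1]):             # update column factor, A fixed
            m = cols == j
            if m.any():
                B[j] = np.linalg.lstsq(A[rows[m]], vals[m], rcond=None)[0]
    return A @ B.T
```

With the sampler from the model slide, something like `M, rows, cols, vals = sample_low_rank(500, 500, 8, 30000)` followed by `optspace_sketch(rows, cols, vals, (500, 500), 8)` should roughly mimic the rank-8 experiment shown earlier.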

39 Main results
Theorem 1 (Trimming + SVD). For any E, M_SVD achieves, with high probability,
RMSE ≤ C M_max √(nr/|E|)  [missing entries]  +  C′ (n√r/|E|) ‖Z^E‖₂  [sample noise]
Theorem 2 (Trimming + SVD + Greedy minimization). For |E| ≥ C n r max{r, log n}, OptSpace achieves, with high probability,
RMSE ≤ C′ (n√r/|E|) ‖Z^E‖₂,
provided that the RHS is smaller than σ_r(M).
Keshavan, Montanari, Oh, NIPS 09; Keshavan, Montanari, Oh, Journal of Machine Learning Research (submitted).

40 Noiseless case

42 Noiseless case
Theorem 2 (OptSpace). For |E| ≥ C n log n, OptSpace achieves, with high probability, RMSE = 0 (exact reconstruction), assuming the rank of M does not depend on n.
Lower bound (coupon collector's problem): if |E| ≤ C′ n log n, then exact reconstruction is impossible.
Convex relaxation [Candès, Recht 08; Candès, Tao 09; Recht 09; Gross et al. 09]: if |E| ≥ C n (log n)², then exact reconstruction by SDP.
OptSpace is order-optimal.
Keshavan, Montanari, Oh, IEEE Trans. Information Theory, 2010.

44 Noiseless case
[Figure: probability of successful reconstruction vs. sampling rate for a rank-10 matrix M, comparing OptSpace, FPCA, SVT, ADMiRA, and the lower bound]
Lower bound [Singer, Cucuringu 09]; FPCA [Ma, Goldfarb, Chen 09]; SVT [Cai, Candès, Shen 08]; ADMiRA [Lee, Bresler 09].

45 Noisy case

47 Gaussian noise case
Theorem 2 (OptSpace). For |E| ≥ C n r max{r, log n} and Gaussian noise of standard deviation σ_z, OptSpace achieves, with high probability,
RMSE ≤ C σ_z √(nr/|E|)
Lower bound [Candès, Plan 09]: RMSE ≥ σ_z √(2nr/|E|)
OptSpace is order-optimal.

48 Gaussian noise case
rank-4 matrix M, Gaussian noise with σ_z = 1; example from [Candès, Plan 09]
[Figure: RMSE vs. sampling rate for Trim+SVD, OptSpace, FPCA, ADMiRA, and the lower bound]
FPCA [Ma, Goldfarb, Chen 09]; ADMiRA [Lee, Bresler 09].

50 Proof strategy (noiseless case)

51 Proof strategy
Proving Theorem 1: RMSE(M, M_SVD) ≤ C M_max √(nr/|E|), via spectral bounds for adjacency matrices of Erdős–Rényi graphs [Friedman, Kahn, Szemerédi 1989], [Feige, Ofek 2005].
Proving Theorem 2: for |E| ≥ C n r max{r, log n}, OptSpace reconstructs M exactly.
Step 1: prove d(M, M_SVD) ≤ δ.
Step 2: in the δ-neighborhood, F(X, Y) has a unique local minimum at M = U Σ V^T.

52 Applications

53 Ultrasound tomography
[Figure: 256 sensors and the reconstructed image]
Time-of-flight measurements are used for image reconstruction; accurate sensor positions are needed to get more accurate images.
Images courtesy of Hormati, Jovanovic, Roy, Vetterli.

54 Ultrasound tomography
[Figure: ideal vs. real time-of-flight measurements with water]
Ideal: complete time-of-flight measurements. Reality: noisy measurements with missing entries.
M_ij = T_0 + d_ij / v_water + Z_ij, where T_0 is the transmission delay.
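As a toy illustration of this measurement model (the ring geometry and all numbers below are mine, not from the talk):

```python
import numpy as np

# Simulate M_ij = T0 + d_ij / v_water + Z_ij for sensors on a ring.
rng = np.random.default_rng(1)
theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
P = np.column_stack([np.cos(theta), np.sin(theta)])   # 256 ring sensors
d = np.linalg.norm(P[:, None] - P[None, :], axis=-1)  # pairwise distances
T0, v_water = 1e-3, 1500.0          # illustrative delay (s) and speed (m/s)
M = T0 + d / v_water + 1e-6 * rng.standard_normal(d.shape)
```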

55 Ultrasound tomography
Iterative algorithm for sensor positioning using OptSpace.
[Figure: estimated sensor positions; water image with and without calibration]
Parhizkar, Karbasi, Oh, Vetterli, Intern. Congress on Acoustics.

57 Conclusion
[Diagram: matrix completion at the intersection of theory, algorithms, and applications]
Open challenges:
Non-uniform sampling (e.g., positioning)
Noiseless: sub-optimality in the high-rank regime (e.g., r = Ω(√n))
Noisy: regularization in the high-noise regime

58 Thank you!
