Matrix Completion: Fundamental Limits and Efficient Algorithms


1 Matrix Completion: Fundamental Limits and Efficient Algorithms
Sewoong Oh
PhD Defense, Stanford University, July 23

2 Matrix completion
Find the missing entries in a huge data matrix.

3 Example 1. Recommendation systems
[Figure: users × movies rating matrix, ~10^6 queries, most entries unknown]
Given less than 1% of the movie ratings.
Goal: predict the missing ratings.

4 Example 2. Positioning
[Figure: sensor network and its partially observed distance matrix]
Only distances between close-by sensors are measured.
Goal: find the sensor positions up to a rigid motion.

5 Matrix completion
More applications:
- Computer vision: structure-from-motion
- Molecular biology: microarray data
- Numerical linear algebra: fast low-rank approximations
- etc.

6 Outline
1. Background
2. Algorithm and main results
3. Applications in positioning

7 Background

8 The model
[Figure: rank-r factorization M = U Σ V^T, with M of size αn × n, U of size αn × r, V of size n × r]
- Rank-r matrix M
- Random uniform sample set E
- Sample matrix M^E, with
  M^E_{ij} = M_{ij} if (i, j) ∈ E, and M^E_{ij} = 0 otherwise

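To make the sampling model concrete, here is a minimal Python sketch (my own illustration, not code from the talk) that draws a rank-r matrix and reveals a uniformly random fraction of its entries; the dimensions and `sample_rate` are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha, r = 1000, 1.0, 8
m = int(alpha * n)          # M is alpha*n x n on the slides; square here

# Rank-r matrix M = U V^T (standing in for U Sigma V^T)
U = rng.standard_normal((m, r))
V = rng.standard_normal((n, r))
M = U @ V.T

# Random uniform sample set E, kept as a boolean mask
sample_rate = 0.05
E = rng.random(M.shape) < sample_rate

# Sample matrix M^E: observed entries of M, zero elsewhere
M_E = np.where(E, M, 0.0)
```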


11 Which matrices?
Pathological example: M = [1, 0, ..., 0]^T [1, 0, ..., 0], i.e. M = e_1 e_1^T, which is zero except for a single entry; any sampling scheme that misses that entry cannot recover M.
[Candès, Recht 08] M = U Σ V^T has coherence µ if
A0. max_{1 ≤ i ≤ αn} Σ_{k=1}^r U_{ik}² ≤ µ r / n and max_{1 ≤ j ≤ n} Σ_{k=1}^r V_{jk}² ≤ µ r / n
A1. max_{i,j} |Σ_{k=1}^r U_{ik} V_{jk}| ≤ µ √r / n
Intuition: µ is small if the singular vectors are well balanced.
We need low coherence for matrix completion.
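As a concrete check of condition A0, a small sketch (my own, not from the talk) that computes the smallest µ satisfying A0 for a given matrix and target rank; normalizing each factor by its own dimension is my assumption.

```python
import numpy as np

def coherence_A0(M, r):
    """Smallest mu satisfying condition A0 for the rank-r factors of M.

    Sketch of A0 only (A1 can be checked from the entries of U @ V.T).
    """
    n_rows, n_cols = M.shape
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    U, V = U[:, :r], Vt[:r, :].T
    mu_U = (U**2).sum(axis=1).max() * n_rows / r
    mu_V = (V**2).sum(axis=1).max() * n_cols / r
    return max(mu_U, mu_V)

# A well-balanced random low-rank matrix has small mu; the
# pathological e_1 e_1^T example above has mu = n, the worst case.
```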


13 Previous work
Rank minimization:
  minimize rank(X) subject to X_{ij} = M_{ij} for (i, j) ∈ E    (NP-hard)
Heuristic [Fazel 02], a convex relaxation:
  minimize ||X||_* subject to X_{ij} = M_{ij} for (i, j) ∈ E
Nuclear norm: ||X||_* = Σ_{i=1}^n σ_i(X)
Can be solved using semidefinite programming (SDP).
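For illustration, the relaxation can be posed in a few lines with the cvxpy modeling package; a minimal sketch (my own, assuming cvxpy with an SDP-capable default solver), not part of the talk:

```python
import cvxpy as cp
import numpy as np

def complete_nuclear_norm(M_E, E):
    """Nuclear norm heuristic: min ||X||_* s.t. X matches M on E.

    M_E holds the observed entries (zero elsewhere); E is a boolean
    mask of the observed positions.
    """
    X = cp.Variable(M_E.shape)
    mask = E.astype(float)
    # Equality on observed entries only: off E both sides are zero
    problem = cp.Problem(cp.Minimize(cp.normNuc(X)),
                         [cp.multiply(mask, X) == M_E])
    problem.solve()
    return X.value
```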


16 Previous work
[Candès, Recht 08] Nuclear norm minimization reconstructs M exactly, with high probability, if
  |E| ≥ C µ r n^{6/5} log n
Surprise? The degrees of freedom are only (1 + α) r n.
Open questions:
- Optimality: do we really need n^{6/5} log n samples?
- Complexity: SDP is computationally expensive.
- Noise: cannot deal with noise.
A new approach to matrix completion: OptSpace.

17 Example: rank-8 random matrix
[Figure sequence over increasing sampling fractions: the low-rank matrix M, the sampled matrix M^E, the OptSpace output M̂, and the squared error (M̂ − M)²]


24 Algorithm

25 Naïve approach fails
Singular value decomposition (SVD):
  M^E = Σ_{i=1}^n σ_i x_i y_i^T
Compute the rank-r approximation:
  M_SVD = (αn² / |E|) Σ_{i=1}^r σ_i x_i y_i^T
[Figure: M^E and the resulting M_SVD]

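Since M^E is sparse, the top singular triplets can be computed without forming a dense matrix; a minimal sketch (mine, using scipy's svds) of the rescaled rank-r projection:

```python
import numpy as np
from scipy.sparse.linalg import svds

def rank_r_projection(M_E_sparse, r, num_samples):
    """Rescaled rank-r SVD approximation of the sampled matrix.

    The alpha*n^2 / |E| factor compensates, in expectation, for the
    entries that were zeroed out rather than observed.
    """
    n_rows, n_cols = M_E_sparse.shape
    x, s, yt = svds(M_E_sparse, k=r)      # top-r singular triplets
    scale = (n_rows * n_cols) / num_samples
    return scale * (x * s) @ yt
```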

28 Trimming
M̃^E_{ij} = 0 if deg(row_i) > 2|E|/(αn)
M̃^E_{ij} = 0 if deg(col_j) > 2|E|/n
M̃^E_{ij} = M^E_{ij} otherwise
deg(·) is the number of samples in that row/column.

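A minimal sketch of the trimming step (my own code; it phrases the two thresholds as twice the average row/column degree, which matches the slide when M is αn × n):

```python
import numpy as np

def trim(M_E, E):
    """Zero out over-represented rows and columns of the sample matrix.

    A row (column) is dropped when its number of samples exceeds
    twice the average row (column) degree, i.e. 2|E|/#rows (2|E|/#cols).
    """
    num_samples = E.sum()
    keep_rows = E.sum(axis=1) <= 2 * num_samples / E.shape[0]
    keep_cols = E.sum(axis=0) <= 2 * num_samples / E.shape[1]
    M_trim = M_E * keep_rows[:, None] * keep_cols[None, :]
    E_trim = E & keep_rows[:, None] & keep_cols[None, :]
    return M_trim, E_trim
```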

31 Algorithm: OptSpace
Input: sample indices E, sample values M^E, rank r
Output: estimate M̂
1. Trimming
2. Compute M_SVD using SVD (can be computed efficiently for sparse matrices)
3. Greedy minimization of the residual error


33 Main results
Theorem: For any E, M_SVD achieves, with high probability,
  RMSE ≤ C M_max √(nr / |E|)
where
  RMSE = ( (1 / αn²) Σ_{i,j} (M − M_SVD)_{ij}² )^{1/2},  M_max = max_{i,j} |M_{ij}|
[Achlioptas, McSherry 07] If |E| ≥ n (8 log n)^4, then with high probability
  RMSE ≤ 4 M_max √(nr / |E|)
But for n = 10^5, (8 log n)^4 ≈ 7 × 10^7, so the required |E| exceeds the total number of entries. And real data are far from uniformly sampled: in the Netflix dataset, a single user rated 17,000 movies, and "Miss Congeniality" has 200,000 ratings.
Keshavan, Montanari, Oh, IEEE Trans. Information Theory, 2010


36 Can we do better?

37 Greedy minimization of the residual error
Starting from (X_0, Y_0) given by M_SVD = X_0 S_0 Y_0^T, use gradient descent methods to solve
  minimize F(X, Y) subject to X^T X = I, Y^T Y = I
where
  F(X, Y) = min_{S ∈ R^{r×r}} Σ_{(i,j) ∈ E} (M^E_{ij} − (X S Y^T)_{ij})²
[Figure: factorization M̂ = X S Y^T, with tall orthonormal X, Y and an r × r matrix S]
Can be computed efficiently for sparse matrices.
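For fixed X and Y, the inner minimization over S is an ordinary linear least-squares problem in the r² entries of S. A minimal sketch of evaluating F (my own code; `residual_F` is a hypothetical name):

```python
import numpy as np

def residual_F(X, Y, rows, cols, vals):
    """Evaluate F(X, Y) = min_S sum_{(i,j) in E} (M^E_ij - (X S Y^T)_ij)^2.

    rows, cols index the observed entries and vals are their values.
    Since (X S Y^T)_ij = sum_{a,b} X_ia S_ab Y_jb, each observation is
    linear in S, so the minimizing S is a least-squares solution.
    """
    num_obs, r = len(vals), X.shape[1]
    # Design matrix: one row per observation, columns indexed by (a, b)
    A = (X[rows][:, :, None] * Y[cols][:, None, :]).reshape(num_obs, r * r)
    S_vec, *_ = np.linalg.lstsq(A, vals, rcond=None)
    F = np.sum((A @ S_vec - vals) ** 2)
    return F, S_vec.reshape(r, r)
```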

38 Algorithm: OptSpace (recap)
Input: sample indices E, sample values M^E, rank r
Output: estimate M̂
1. Trimming
2. Compute M_SVD using SVD
3. Greedy minimization of the residual error
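To tie steps 1 and 2 together: a sketch (my own wrapper, hypothetical name `optspace_init`) that chains the trimming and rescaled-SVD sketches above and factors the result into the starting point for step 3; the manifold gradient descent itself is omitted here.

```python
import numpy as np
from scipy.sparse import csr_matrix

def optspace_init(M_E, E, r):
    """Steps 1-2: trimming, then the rescaled rank-r SVD.

    Reuses trim() and rank_r_projection() from the sketches above and
    returns the starting point (X0, S0, Y0) for step 3's descent.
    """
    M_trim, E_trim = trim(M_E, E)
    M_svd = rank_r_projection(csr_matrix(M_trim), r, E_trim.sum())
    X0, s0, Y0t = np.linalg.svd(M_svd, full_matrices=False)
    return X0[:, :r], np.diag(s0[:r]), Y0t[:r, :].T
```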

39 Main results
Theorem (Trimming + SVD): M_SVD achieves, with high probability,
  RMSE ≤ C M_max √(nr / |E|)
Theorem (Trimming + SVD + greedy minimization): OptSpace reconstructs M exactly, with high probability, if
  |E| ≥ C µ r n max{µr, log n}
Keshavan, Montanari, Oh, IEEE Trans. Information Theory, 2010

40 OptSpace is order-optimal
Theorem: If µ and r are bounded, OptSpace reconstructs M exactly, with high probability, if
  |E| ≥ C n log n
Lower bound (coupon collector's problem): if |E| ≤ C' n log n, then exact reconstruction is impossible.
Nuclear norm minimization [Candès, Recht 08; Candès, Tao 09; Recht 09; Gross et al. 09]: exact reconstruction by SDP if |E| ≥ C n (log n)²


42 Comparison
[Figure: probability of success vs. sampling rate for a rank-10 matrix M, comparing the fundamental limit, OptSpace, FPCA, SVT, and ADMiRA]
Fundamental limit [Singer, Cucuringu 09], FPCA [Ma, Goldfarb, Chen 09], SVT [Cai, Candès, Shen 08], ADMiRA [Lee, Bresler 09]

43 Story so far
OptSpace reconstructs M from a few sampled entries when M is exactly low-rank and the samples are exact.
In reality, M is only approximately low-rank, and the samples are corrupted by noise.

44 The model with noise
[Figure: rank-r factorization M = U Σ V^T and the sampled matrix N^E of size αn × n]
- Rank-r matrix M
- Random sample set E
- Sample noise Z^E
- Sample matrix N^E = M^E + Z^E

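Continuing the earlier sampling sketch, the noisy observations are just the sampled entries plus noise supported on E; using i.i.d. Gaussian noise here is my own choice for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_z = 1.0

# Noise supported on the sample set E (E, M_E from the earlier sketch):
Z_E = np.where(E, sigma_z * rng.standard_normal(E.shape), 0.0)
N_E = M_E + Z_E
```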

46 Main results
Theorem: For |E| ≥ C µ r n max{µr, log n}, OptSpace achieves, with high probability,
  RMSE ≤ C' (n √r / |E|) ||Z^E||_2,
provided that the RHS is smaller than σ_r(M)/n.
||·||_2 is the spectral norm.
Keshavan, Montanari, Oh, Journal of Machine Learning Research, 2010

47 OptSpace is order-optimal when the noise is i.i.d. Gaussian
Theorem: For |E| ≥ C µ r n max{µr, log n}, OptSpace achieves, with high probability,
  RMSE ≤ C σ_z √(r n / |E|),
provided that the RHS is smaller than σ_r(M)/n.
Lower bound [Candès, Plan 09]:
  RMSE ≥ σ_z √(2 r n / |E|)
Trimming + SVD:
  RMSE ≤ C M_max √(r n / |E|) + C' σ_z √(r n / |E|)
(first term: missing entries; second term: sample noise)

48 Comparison
[Figure: RMSE vs. sampling rate for a rank-4 matrix M with Gaussian noise, σ_z = 1, example from [Candès, Plan 09]; curves for Trim+SVD, OptSpace, FPCA, ADMiRA, and the lower bound]
FPCA [Ma, Goldfarb, Chen 09], ADMiRA [Lee, Bresler 09]


50 Positioning

51 The model
[Figure: sensors in a region with radio range R, and the partially observed distance matrix D]
- n wireless devices uniformly distributed in a bounded convex region
- Distance measurements between devices within radio range R
- Find the locations up to a rigid motion

52 The model
How is it related to matrix completion?
- Need to find the missing entries
- rank(D) = 4, where D is the matrix of squared distances between points in the plane
How is it different?
- Non-uniform sampling
- Rich information not used in matrix completion
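The rank claim is easy to check numerically: D_{ij} = ||x_i||² + ||x_j||² − 2⟨x_i, x_j⟩ is a sum of two rank-1 terms and one rank-2 term, hence rank(D) ≤ 4. A small self-contained check (my own):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.random((n, 2))                 # positions in the unit square

# D_ij = |x_i|^2 + |x_j|^2 - 2 <x_i, x_j>: two rank-1 terms plus a
# rank-2 term, so rank(D) <= 4 (and = 4 for generic positions).
sq = (x**2).sum(axis=1)
D = sq[:, None] + sq[None, :] - 2 * (x @ x.T)

print(np.linalg.matrix_rank(D))        # prints 4
```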

53 The model
MDS-MAP [Shang et al. 03]:
1. Fill in the missing entries with shortest-path distances
2. Compute the rank-4 approximation
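A minimal sketch of MDS-MAP (my own code, not the authors'); it uses scipy's shortest-path routine for step 1 and, for step 2, the classical multidimensional-scaling form of the rank truncation (double-centering the squared distances and keeping the top two eigenpairs), which returns positions up to a rigid motion:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def mds_map(dist):
    """MDS-MAP sketch for sensors in the plane.

    dist[i, j] holds the measured distance when devices i and j are
    within radio range, and np.inf where no measurement exists.
    """
    # 1. Fill in missing entries with shortest-path distances
    #    (np.inf marks non-edges for dense csgraph input)
    D_hat = shortest_path(dist, method="D") ** 2
    # 2. Classical MDS: double-center the squared distances and keep
    #    the top two eigenpairs of the resulting Gram matrix
    n = D_hat.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ D_hat @ J
    w, v = np.linalg.eigh(G)            # eigenvalues in ascending order
    return v[:, -2:] * np.sqrt(np.maximum(w[-2:], 0.0))
```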

54 Main results
Theorem: For R > C √(log n / n), with high probability,
  RMSE ≤ C' √(log n) / (R √n) + o(1),
where RMSE = ( (1/n²) Σ_{i,j} (D̂ − D)_{ij}² )^{1/2}.
Lower bound: if R < √(log n / (πn)), then the graph is disconnected.
Generalized to quantized measurements and to distributed algorithms; a greedy minimization step can be added.
Oh, Karbasi, Montanari, Information Theory Workshop, 2010
Karbasi, Oh, ACM SIGMETRICS

55 Numerical simulation
[Figure: RMSE vs. normalized radio range R / √(log n / (πn)), for n = 500; 1,000; 2,000]

56 Conclusion
- Matrix completion is an important problem with many practical applications.
- OptSpace is an efficient algorithm for matrix completion.
- OptSpace achieves performance close to the fundamental limit.


62 Special thanks to:
Officemates: Morteza, Yash, Jose, Raghu, Satish
Friends: Mohsen, Adel, Farshid, Fernando, Arian, Haim, Sachin, Ivana, Ahn, Cha, Choi, Rhee, Kang, Kim, Lee, Na, Park, Ra, Seok, Song

63 Special thanks to: My family and Kyung Eun

64 Thank you!
