Matrix Completion from Fewer Entries
Transcription
1 Matrix Completion from Fewer Entries. Stanford University, March 30, 2009.
2 Outline: 1. The problem, a look at the data, and some results (slides); 2. Proofs (blackboard). arXiv:
3 The problem, a look at the data, and some results
4 Netflix dataset: a big (!) matrix M, movies × users, with ≈ 10^8 ratings.
5 A big (!) matrix, mostly unknown (????): users × ≈ 2·10^4 movies, with ≈ 10^6 queries to answer. M = ?
6-8 You get a prize if... RMSE < … ;) Is this possible?
9 A model: Incoherent low-rank matrices
10 The observations: M is an n × nα matrix (n movies, nα users) of rank r.
11 The observations: M^E, the same n × nα matrix, revealed only at |E| = nε uniformly random positions.
12-13 You need some structure! Low rank: M = U V^T, with U an n × r matrix and V an nα × r matrix, r ≪ n.
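A minimal NumPy sketch of this setup (function names such as make_low_rank and sample_entries are illustrative, not from the talk): generate a rank-r matrix M = U V^T of size n × nα and reveal |E| = nε entries at uniformly random positions.

```python
import numpy as np

def make_low_rank(n, alpha, r, rng):
    """Random rank-r matrix M = U V^T of size n x (n*alpha)."""
    U = rng.standard_normal((n, r))
    V = rng.standard_normal((int(n * alpha), r))
    return U @ V.T

def sample_entries(M, eps, rng):
    """Reveal |E| = n*eps entries of M at uniformly random positions;
    return the zero-filled matrix M^E and the boolean mask of E."""
    n, m = M.shape
    mask = np.zeros(n * m, dtype=bool)
    mask[rng.choice(n * m, size=int(n * eps), replace=False)] = True
    mask = mask.reshape(n, m)
    return np.where(mask, M, 0.0), mask

rng = np.random.default_rng(0)
M = make_low_rank(n=1000, alpha=1.0, r=3, rng=rng)
ME, mask = sample_entries(M, eps=30, rng=rng)   # roughly 3% of the entries revealed
```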
14 Unstructured factors. A1. Bounded entries: |M_ia| ≤ M_max = μ_0 √r. A2. Incoherence: Σ_{k=1}^r U_ik² ≤ μ r and Σ_{k=1}^r V_ak² ≤ μ r. [Candès, Recht 2008]
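Assumption A2 just says no single row of the factors carries much more energy than the average row. A small sketch of how one might check it numerically (the exact normalization of U, V in the talk may differ, so treat the returned μ as a proxy):

```python
import numpy as np

def incoherence_mu(U, V):
    """Smallest mu such that sum_k U[i,k]^2 <= mu*r and sum_k V[a,k]^2 <= mu*r
    for every row i, a (assumption A2, up to the factors' normalization)."""
    r = U.shape[1]
    worst_row = max((U ** 2).sum(axis=1).max(), (V ** 2).sum(axis=1).max())
    return worst_row / r
```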
15 Metric (RMSE): D(M, M̂) ≡ [ (1 / (n² M_max²)) Σ_{i,a} (M_ia − M̂_ia)² ]^{1/2}.
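The metric is simply a Frobenius distance normalized by the matrix size and the entry scale M_max = max_{i,a} |M_ia|; a small sketch:

```python
import numpy as np

def rmse(M, M_hat):
    """D(M, M_hat): Frobenius distance normalized by n^2 * M_max^2."""
    n = M.shape[0]
    M_max = np.abs(M).max()
    return np.sqrt(((M - M_hat) ** 2).sum() / (n ** 2 * M_max ** 2))
```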
16-20 Previous work. Theorem (Candès, Recht, 2008). If ε ≥ C r n^{1/5} log n, then w.h.p.: 1. M is unique given the observed entries; 2. M is the unique minimum of an SDP. Cf. also [Recht, Fazel, Parrilo 2007].
21-24 Great, but... 1. n^{1/5} observations for 1 bit of information? 2. RMSE = 0? 3. SDP complexity O(n^{…6}). Substitute n = …
25 O(n) entries are enough (practice)
26 A movie
27 Rank = 1: Bayes optimal vs. Belief Propagation. [Plot: reconstruction error D vs. ε, for n = 100, 1000, 10000.]
28 Rank = 2: Belief Propagation. [Plot: D vs. ε, for n = 100, 1000, 10000.]
29 Rank = 3: Belief Propagation. [Plot: D vs. ε, for n = 100, 1000, 10000.]
30 Rank = 4: Belief Propagation. [Plot: D vs. ε, for n = 100, 1000, 10000.]
31 O(n) entries are enough (theory)
32-33 Naive spectral algorithm. Set M^E_ia = M_ia if (i, a) ∈ E, and 0 otherwise. Write the SVD M^E = Σ_{i=1}^n σ_i x_i y_i^T with σ_1 ≥ σ_2 ≥ …, and project to rank r: T_r(M^E) = (nα/ε) Σ_{i=1}^r σ_i x_i y_i^T.
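A sketch of this zero-fill-and-project step (names are mine): the rescaling by nα/ε is the inverse of the observed fraction ε/(nα), so the projection is an unbiased guess for the full matrix.

```python
import numpy as np

def rank_r_projection(ME, r, alpha, eps):
    """T_r(M^E) = (n*alpha/eps) * sum_{i<=r} sigma_i x_i y_i^T."""
    X, s, Yt = np.linalg.svd(ME, full_matrices=False)
    scale = ME.shape[0] * alpha / eps   # inverse of the observed fraction eps/(n*alpha)
    return scale * (X[:, :r] * s[:r]) @ Yt[:r, :]
```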
34-35 Troubles and solution. If ε = O(1), there are spurious singular values of order Ω(√(log n / log log n)). Trimming: set M̃^E_ia = M^E_ia if deg(i) ≤ 2 E[deg(i)] and deg(a) ≤ 2 E[deg(a)], and 0 otherwise.
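Trimming just zeroes out over-represented rows and columns, i.e. those whose degree (number of revealed entries) exceeds twice the average degree; these are what create the spurious singular values. A hedged sketch, using the empirical mean degree in place of E[deg]:

```python
def trim(ME, mask):
    """Zero out rows/columns whose degree (number of revealed entries)
    exceeds twice the average degree, as in the trimming step."""
    row_deg = mask.sum(axis=1)
    col_deg = mask.sum(axis=0)
    ME_t = ME.copy()
    ME_t[row_deg > 2 * row_deg.mean(), :] = 0.0
    ME_t[:, col_deg > 2 * col_deg.mean()] = 0.0
    return ME_t
```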
36 Not-as-naive spectral algorithm. Spectral(matrix M^E): 1: Trim M^E, and let M̃^E be the output; 2: Project M̃^E to T_r(M̃^E); 3: Clean residual errors by gradient descent in the factors.
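Putting the three steps together. Step 3 is sketched here as plain gradient descent on the squared error over the observed entries, parameterized by the factors (X, Y) and initialized from the projection; this is my paraphrase of the cleaning step, not the exact procedure from the paper, which works with a manifold optimization over the factors.

```python
import numpy as np

def spectral_complete(ME, mask, r, alpha, eps, n_steps=200, lr=1e-3):
    """Trim -> rank-r projection -> clean by gradient descent in the factors.
    Reuses trim() and rank_r_projection() from the sketches above."""
    M0 = rank_r_projection(trim(ME, mask), r, alpha, eps)   # steps 1 and 2
    X, s, Yt = np.linalg.svd(M0, full_matrices=False)
    X = X[:, :r] * np.sqrt(s[:r])                           # initial factors
    Y = Yt[:r, :].T * np.sqrt(s[:r])
    for _ in range(n_steps):                                # step 3: cleaning
        R = mask * (X @ Y.T - ME)       # residual on the observed entries only
        X, Y = X - lr * (R @ Y), Y - lr * (R.T @ X)
    return X @ Y.T
```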
37-38 Theorem (Keshavan, M., Oh, 2009). Assume r ≤ n^{1/2} and bounded entries. Then (1/(n M_max)) ||M − T_r(M̃^E)||_F = RMSE ≤ C √(r/ε), with probability larger than 1 − exp(−Bn). Theorem (Keshavan, M., Oh, 2009). Assume r = O(1), bounded entries and incoherent factors, with Σ_min, Σ_max uniformly bounded away from 0 and ∞. If ε ≥ C log n, then Spectral returns, w.h.p., the matrix M.
39-40 A comparison. Theorem (Achlioptas, McSherry 2007). Assume ε ≥ (8 log n)^4 and bounded entries. Then (1/(n M_max)) ||M − T_r(M^E)||_F = RMSE ≲ √(r/ε), with probability larger than 1 − exp(−9 (log n)^4). (For n = 10^6, (8 log n)^4 ≈ …)
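A rough order-of-magnitude check of that last remark (my own arithmetic, with the natural logarithm assumed):

```latex
% assuming natural logs:
(8 \log n)^4 \Big|_{n=10^6} = (8 \ln 10^6)^4 \approx (110.5)^4 \approx 1.5 \times 10^{8}
% with base-10 logarithms instead: (8 \cdot 6)^4 = 48^4 \approx 5.3 \times 10^{6}
```

Either way, the condition asks for millions of revealed entries per row, far from the ε = O(1) regime the talk is after.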
41-43 One more comparison. Theorem (Candès, Tao, March 8, 2009). Assume bounded entries and strongly incoherent factors. If ε ≥ C r (log n)^6, then Semidefinite Programming returns, w.h.p., the matrix M. A2. Strong incoherence: Σ_{k=1}^r U_ik² ≤ μ r and |Σ_{k=1}^r U_ik U_jk| ≤ μ √r.
44-45 Our approach: graph theory. Bipartite graph with movies i (n of them) on one side and users a (nα of them) on the other; (i, a) ∈ E iff user a rated movie i.
46 Back to the data
47 Random, r = …, n = 10000, ε = … [Plot: singular values σ_1, σ_2, σ_3, … of the observation matrix.]
48 Netflix data (trimmed)
49 Is Netflix a random low-rank matrix? Compare coordinate descent (SimonFunk).
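The comparison on the next slides runs SimonFunk-style incremental fitting of the factors. A hedged sketch of one common variant (per-entry stochastic updates of the two factors; the learning rate and regularization values are illustrative, not taken from the talk):

```python
import numpy as np

def simonfunk_sgd(entries, n, m, r, n_epochs=30, lr=0.01, reg=0.02, seed=0):
    """Fit M ~ X Y^T from (movie i, user a, rating) triples by stochastic
    gradient descent on the factors, SimonFunk style."""
    rng = np.random.default_rng(seed)
    X = 0.1 * rng.standard_normal((n, r))
    Y = 0.1 * rng.standard_normal((m, r))
    for _ in range(n_epochs):
        for i, a, rating in entries:
            xi = X[i].copy()
            err = rating - xi @ Y[a]
            X[i] += lr * (err * Y[a] - reg * xi)
            Y[a] += lr * (err * xi - reg * Y[a])
    return X, Y
```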
50 Rank = 3. [Plots: fit error and prediction error D vs. steps, for low-rank Netflix data vs. random data U[−1, 1].]
51 Rank = 4. [Plots: fit and prediction error vs. steps, Netflix vs. random data.]
52 Rank = 5. [Plots: fit and prediction error vs. steps, Netflix vs. random data.]
53 Proofs (blackboard)