Binary matrix completion

1 Binary matrix completion
Yaniv Plan, University of Michigan
SAMSI, LDHD workshop, 2013
Joint work with Mark Davenport, Ewout van den Berg, and Mary Wootters

2–4 Low-rank matrices
Example: the Netflix matrix $M = (M_{i,j})$, where $M_{i,j}$ = how much user $i$ likes movie $j$.
Rank-1 model: $a_j$ = amount of action in movie $j$, $x_i$ = how much user $i$ likes action, and $M_{i,j} = x_i a_j$.
Rank-2 model: add $b_j$ = amount of comedy in movie $j$ and $y_i$ = how much user $i$ likes comedy, so that $M_{i,j} = x_i a_j + y_i b_j$.
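
The rank-1 and rank-2 models above are just sums of outer products. A minimal numpy sketch, with made-up numbers for the latent characteristics:

```python
# A minimal sketch of the rank-2 Netflix-style model. All numbers are
# hypothetical, chosen purely for illustration.
import numpy as np

a = np.array([0.9, 0.1, 0.7, 0.3])   # amount of action in each of 4 movies
b = np.array([0.2, 0.8, 0.1, 0.6])   # amount of comedy in each movie
x = np.array([1.0, -0.5, 0.3])       # how much each of 3 users likes action
y = np.array([-0.2, 1.2, 0.8])       # how much each user likes comedy

# M[i, j] = x_i * a_j + y_i * b_j -- a sum of two outer products, so rank <= 2.
M = np.outer(x, a) + np.outer(y, b)
assert np.linalg.matrix_rank(M) <= 2
print(M)
```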

5 Low-rank assumption
The number of characteristics that determine user preferences should be smaller than the number of movies or users.
$M$ should depend linearly on those characteristics (so a nonlinear dependence such as $M_{i,j} = \exp(x_i a_j)$ is ruled out).
These characteristics need not be known.

6 Matrix completion
Completion of $M$ from a subset of the entries:

    ?        M_{1,2}  ?                  M_{1,1}  M_{1,2}  M_{1,3}
    ?        ?        M_{2,3}    -->     M_{2,1}  M_{2,2}  M_{2,3}
    M_{3,1}  ?        M_{3,3}            M_{3,1}  M_{3,2}  M_{3,3}
    M_{4,1}  M_{4,2}  ?                  M_{4,1}  M_{4,2}  M_{4,3}

[Incomplete set of researchers: Srebro, Fazel, Candès, Recht, Rennie, Jaakkola, Montanari, Soo, Wainwright, Negahban, Yu, Koltchinskii, Lounici, Tsybakov, Klopp, Cai, Zhou, P., present]

7 Binary matrix completion: motivation

8–16 Binary data with missing entries
The same partially observed yes/no matrix appears in many settings (rows × columns):
- Senate voting: senators × Senate bills
- Mathoverflow: math enthusiasts × math questions
- Pandora: people × songs
- Research literature: researchers × papers
- Netflix: people × movies
- Reddit: people × articles
- Incomplete binary survey: people × survey questions
- Sensor triangulation: sensors × sensors
In each case we observe a binary matrix $Y$ with many missing entries.

17 A classical numerical experiment with voting data.

18–20 Senate voting
$Y$: voting history of US senators on 299 bills.
Figures: first singular vector of $Y$; Senate party affiliations.
A low-rank model? What matrix has low rank?

21 Low-rank assumption
Consider the Senate voting example. Can the voting preferences of a given senator be predicted from only a few characteristics of that senator? Does $Y$ depend linearly on these characteristics?

22–24 Generalized linear model
$P(Y_{i,j} = +1) = f(M_{i,j})$, where $M$ is the (unobserved) preference matrix and $Y$ the binary data matrix.
$M$ is unknown and has (approximately) low rank.
$f : \mathbb{R} \to [0, 1]$ is a known function (e.g., the logistic curve).
$M \in \mathbb{R}^{d \times d}$, $Y \in \{-1, +1\}^{d \times d}$.
$\Omega \subset \{1, 2, \ldots, d\} \times \{1, 2, \ldots, d\}$: you see $Y_\Omega$, the entries of $Y$ indexed by $\Omega$.

25 Latent variable formulation
$Y_{i,j} = \mathrm{sign}(M_{i,j} + Z_{i,j}) = \begin{cases} +1 & \text{if } M_{i,j} + Z_{i,j} \ge 0 \\ -1 & \text{if } M_{i,j} + Z_{i,j} < 0 \end{cases}$
$Z$ is an iid noise matrix, and $f(x) := P(Z_{1,1} \ge -x)$, so that $P(Y_{i,j} = +1) = f(M_{i,j})$.
You see $Y_\Omega$.
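
A short numerical sketch of the latent-variable view, assuming the standard logistic distribution for the noise $Z$; the rank-1 matrix $M$ and the 30% sampling rate are arbitrary illustration choices:

```python
# Binary observations Y = sign(M + Z) with iid logistic noise Z reproduce
# P(Y_ij = +1) = f(M_ij) for the logistic link f.
import numpy as np

rng = np.random.default_rng(0)
d = 50
M = np.outer(rng.standard_normal(d), rng.standard_normal(d)) / np.sqrt(d)

def f(x):
    # Logistic link: f(x) = P(Z >= -x) for standard logistic noise Z.
    return 1.0 / (1.0 + np.exp(-x))

# Empirical check at a single entry: the frequency of +1 over many noise
# draws should match the link value f(M[0, 0]).
Z = rng.logistic(size=100_000)
print(np.mean(M[0, 0] + Z >= 0), f(M[0, 0]))

# One observed data set: a single binary matrix Y, revealed on a uniformly
# random subset Omega of the entries (here ~30% of them).
Y = np.sign(M + rng.logistic(size=(d, d)))
mask = rng.random((d, d)) < 0.3
```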

26 Main assumption, main goal
Goal: efficiently approximate $M$ and/or $f(M)$.
Data: $Y_\Omega$, a partially observed binary matrix.
Assumption: $M$ has (approximately) low rank.

27 Approximately low-rank
Assumption: $\|M\|_* \le \sqrt{r}\, d$, i.e., $M \in \mathrm{conv}\{\text{rank-}r \text{ matrices with Frobenius norm } d\}$ (a scaled nuclear-norm ball).
$\|M\|_* = \sum_i \sigma_i(M) = \|(\sigma_1, \sigma_2, \ldots, \sigma_d)\|_1$.
A robust extension of the rank [Chatterjee 2013]; allows for convex programming reconstruction.
Figure: the nuclear-norm ball in high dimensions.
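
A quick numerical check of the two facts used here, on an arbitrary test matrix: the nuclear norm is the $\ell_1$ norm of the singular values, and the nuclear norm is at most $\sqrt{\mathrm{rank}}$ times the Frobenius norm:

```python
# Sanity checks for the nuclear norm on a hypothetical random matrix.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 6))

sigma = np.linalg.svd(X, compute_uv=False)
nuclear_norm = sigma.sum()                       # ||X||_* = sum_i sigma_i(X)
print(nuclear_norm, np.linalg.norm(sigma, 1))    # identical by definition

# ||X||_* <= sqrt(rank(X)) * ||X||_F, by Cauchy-Schwarz on the singular values.
r = np.linalg.matrix_rank(X)
assert nuclear_norm <= np.sqrt(r) * np.linalg.norm(X, 'fro') + 1e-9
```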

28 Maximum likelihood estimation
Take our estimate $\hat{M}$ to be the solution of the convex program
$\max_X F_{\Omega,Y}(X)$ such that $\frac{1}{d}\|X\|_* \le \sqrt{r}$,
where $F_{\Omega,Y}$ is the log-likelihood function.
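
A minimal sketch of how such a program can be solved, assuming the logistic link; for brevity it uses projected gradient ascent with a radial rescaling into the nuclear-norm ball instead of the exact Euclidean projection, and the step size and iteration count are arbitrary, untuned choices rather than anything from the talk:

```python
# Sketch: nuclear-norm-constrained maximum-likelihood estimation for the
# logistic link, via (approximately) projected gradient ascent.
import numpy as np

def logistic(x):
    # Clipping avoids overflow warnings in exp for extreme arguments.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

def fit(Y, mask, radius, step=1.0, iters=500):
    """Maximize the observed-data log-likelihood over {X : ||X||_* <= radius}.

    Y    : matrix with entries in {-1, +1} (entries off `mask` are ignored)
    mask : boolean matrix, True where the corresponding entry is observed
    """
    X = np.zeros(Y.shape)
    pos = Y > 0
    for _ in range(iters):
        # Gradient of the logistic log-likelihood on the observed entries.
        grad = np.where(mask, pos - logistic(X), 0.0)
        X = X + step * grad
        # Rescale back into the nuclear-norm ball (a cheap surrogate for the
        # exact projection, which would instead project the singular values
        # onto an l1 ball).
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        if s.sum() > radius:
            X = (U * (s * radius / s.sum())) @ Vt
    return X

# Hypothetical usage, with Y and mask as on the previous slides and r a
# guessed rank:
# M_hat = fit(Y, mask, radius=np.sqrt(r) * d)
```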

29 Estimation of the distribution, $f(M)$
Theorem (upper bound, achieved by convex programming). Let $f$ be the logistic function. Assume that $\frac{1}{d}\|M\|_* \le \sqrt{r}$. Suppose the sampling set is chosen at random with $\mathbb{E}|\Omega| = m \ge d \log d$. Then with high probability,
$\frac{1}{d^2} \sum_{i,j} d_H^2\big(f(\hat{M}_{i,j}), f(M_{i,j})\big) \le C \min\Big(\sqrt{\tfrac{rd}{m}},\, 1\Big).$
Here $d_H(p, q)^2 := (\sqrt{p} - \sqrt{q})^2 + (\sqrt{1-p} - \sqrt{1-q})^2$ is the squared Hellinger distance.
Is this bound tight?
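
The squared Hellinger distance between two Bernoulli parameters, written out directly as a sanity check of the definition:

```python
# d_H(p, q)^2 for Bernoulli(p) vs Bernoulli(q), as defined on this slide.
import numpy as np

def hellinger_sq(p, q):
    """d_H(p, q)^2 = (sqrt(p) - sqrt(q))^2 + (sqrt(1-p) - sqrt(1-q))^2."""
    return (np.sqrt(p) - np.sqrt(q)) ** 2 + (np.sqrt(1 - p) - np.sqrt(1 - q)) ** 2

print(hellinger_sq(0.5, 0.5))   # 0: identical distributions
print(hellinger_sq(0.0, 1.0))   # 2: maximally separated distributions
```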

30 Estimation of the distribution, $f(M)$ (continued)
Theorem (lower bound, applying to any estimator). In the setup of the theorem above,
$\inf_{\hat{M}(Y)} \sup_M \mathbb{E}\, \frac{1}{d^2} \sum_{i,j} d_H^2\big(f(\hat{M}_{i,j}), f(M_{i,j})\big) \ge c \min\Big(\sqrt{\tfrac{rd}{m}},\, 1\Big).$
So the upper bound is tight up to constants.

31 Estimation of $M$
Additional assumption: $\|M\|_\infty \le \alpha$.
$\max_X F_{\Omega,Y}(X)$ such that $\frac{1}{d\alpha}\|X\|_* \le \sqrt{r}$ and $\|X\|_\infty \le \alpha$.

32 1-bit matrix completion vs. noisy matrix completion: error bounds
Let $Y^0 := M + Z$, where $Z$ is a matrix of iid Gaussian noise with variance $\sigma^2$, and let $f$ be the corresponding probit link. Set $Y_{i,j} := \mathrm{sign}(Y^0_{i,j})$.
Question: how much harder is it to estimate $M$ from $Y$ than from $Y^0$?
Two regimes: high SNR and low SNR.
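
A sketch of the two observation models being compared, assuming an arbitrary rank-1 $M$: Gaussian noise followed by sign quantization induces the probit link $f(x) = \Phi(x/\sigma)$:

```python
# The unquantized model Y0 = M + Z and its 1-bit quantization Y = sign(Y0).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
d, sigma = 40, 0.5
M = np.outer(rng.standard_normal(d), rng.standard_normal(d)) / np.sqrt(d)

Y0 = M + sigma * rng.standard_normal((d, d))   # noisy, unquantized observations
Y = np.sign(Y0)                                # 1-bit observations

# The induced link: P(Y_ij = +1) = P(sigma * Z >= -M_ij) = Phi(M_ij / sigma).
print(norm.cdf(M[0, 0] / sigma))
```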

33 Case 1: $\sigma \lesssim \alpha$ (high signal-to-noise ratio)
Theorem (upper bound, convex programming, quantized input $Y$). Let $f$ be the probit function. Assume that $\frac{1}{d\alpha}\|M\|_* \le \sqrt{r}$ and $\|M\|_\infty \le \alpha$. Suppose the sampling set is chosen at random with $\mathbb{E}|\Omega| = m \ge d \log d$. Then with high probability,
$\frac{1}{d^2}\|\hat{M} - M\|_F^2 \le C \alpha^2 \exp\Big(\frac{\alpha^2}{2\sigma^2}\Big) \sqrt{\frac{rd}{m}}.$
Theorem (lower bound, applying to any estimator, unquantized input $Y^0$). In the setup of the above theorem, and under mild technical conditions,
$\inf_{\hat{M}(Y^0)} \sup_M \mathbb{E}\, \frac{1}{d^2}\|\hat{M} - M\|_F^2 \ge c\, \alpha \sigma \sqrt{\frac{rd}{m}}.$

34 Case 2: $\sigma \gtrsim \alpha$ (low signal-to-noise ratio)
Theorem (upper bound, convex programming, quantized input $Y$). Let $f$ be the probit function. Assume that $\frac{1}{d\alpha}\|M\|_* \le \sqrt{r}$ and $\|M\|_\infty \le \alpha$. Suppose the sampling set is chosen at random with $\mathbb{E}|\Omega| = m \ge d \log d$. Then with high probability,
$\frac{1}{d^2}\|\hat{M} - M\|_F^2 \le C \alpha \sigma \sqrt{\frac{rd}{m}}.$
Theorem (lower bound, applying to any estimator, unquantized input $Y^0$). In the setup of the above theorem, and under mild technical conditions,
$\inf_{\hat{M}(Y^0)} \sup_M \mathbb{E}\, \frac{1}{d^2}\|\hat{M} - M\|_F^2 \ge c\, \alpha \sigma \sqrt{\frac{rd}{m}}.$
The two bounds match: in this regime, quantized data are essentially as informative as unquantized data.

35 Dithering: noise helps!
Conclusion: when the noise is larger than the signal, quantizing to a single bit loses almost no information. When the noise is (significantly) smaller than the signal, increasing the noise improves recovery from quantized measurements!
Figure: relative error $\|\hat{M} - M\|_F^2 / \|M\|_F^2$ versus $\log_{10}\sigma$ for the two programs (labelled (3) and (4) in the talk).
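
An entrywise toy illustration of the dithering phenomenon (an illustration only, not the matrix-completion estimator itself): with repeated 1-bit measurements of a single value, added Gaussian noise makes the value recoverable by inverting the probit link, while without noise only its sign survives:

```python
# Dithering toy: recover a scalar mu from 1-bit measurements sign(mu + noise).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
mu, sigma, n = 0.3, 1.0, 200_000

bits = np.sign(mu + sigma * rng.standard_normal(n))  # dithered 1-bit data
p_hat = np.mean(bits > 0)                            # estimates Phi(mu / sigma)
mu_hat = sigma * norm.ppf(p_hat)                     # invert the probit link
print(mu_hat)                                        # close to mu = 0.3

bits_noiseless = np.sign(np.full(n, mu))             # without noise: all +1
print(np.unique(bits_noiseless))                     # only sign(mu) is left
```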

36–40 Why we need noise
Take $Y_{i,j} = \mathrm{sign}(M_{i,j} + Z_{i,j})$, and now remove the noise: $Y_{i,j} = \mathrm{sign}(M_{i,j})$.
Claim: accurate reconstruction of $M$ is now impossible!
Suppose every entry of $M$ equals the same constant $\lambda$, i.e., $M = \lambda \mathbf{1}\mathbf{1}^T$. Then $Y_{i,j} = \mathrm{sign}(\lambda)$ for all $(i, j)$: the magnitude of $\lambda$ is invisible, so approximation of $M$ is impossible even if every entry of $Y$ is seen.
More generally, suppose we know that $M$ has rank 1, so $M = uv^T$ for some vectors $u, v \in \mathbb{R}^d$. Then $\tilde{M} = \tilde{u}\tilde{v}^T$ leads to exactly the same binary data as $M$ whenever the signs of $\tilde{u}$ match the signs of $u$, and similarly for $\tilde{v}$ and $v$. Approximation of $M$ is impossible even if every entry of $Y$ is seen.
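
A direct check of the claim, for a few hypothetical values of $\lambda$:

```python
# Without noise, every lambda produces exactly the same binary matrix,
# so lambda is unidentifiable from Y = sign(M).
import numpy as np

for lam in [0.01, 1.0, 100.0]:
    M = lam * np.ones((4, 4))          # every entry of M equals lambda
    Y = np.sign(M)                     # noiseless 1-bit observations
    print(lam, np.array_equal(Y, np.ones((4, 4))))   # True for every lambda
```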

41 Related literature
[Srebro, Rennie, Jaakkola et al. '04] Model-free: if an estimate has low nuclear norm and matches the signs of the observed entries by a significant margin, then the error on unobserved entries is small.
Our results: if the model is correct, then the overall error is nearly minimax. Noise helps!
[Cai, Zhou] Extension to non-uniform sampling.

42 General $f$
Let $L_\alpha := \sup_{|x| \le \alpha} \dfrac{|f'(x)|}{f(x)(1 - f(x))}$ and $\beta_\alpha := \sup_{|x| \le \alpha} \dfrac{f(x)(1 - f(x))}{(f'(x))^2}$.
Theorem (upper bound, achieved by convex programming).
$\frac{1}{d^2}\|\hat{M} - M\|_F^2 \le C \alpha L_\alpha \beta_\alpha \sqrt{\frac{rd}{m}}.$
Theorem (lower bound, applying to any estimator).
$\frac{1}{d^2}\|\hat{M} - M\|_F^2 \ge c\, \alpha \sqrt{\beta_\alpha} \sqrt{\frac{rd}{m}}.$
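
A numerical evaluation of $L_\alpha$ and $\beta_\alpha$ for the logistic link over a grid on $[-\alpha, \alpha]$ (with $\alpha = 2$ as an arbitrary choice); for the logistic $f$ one has $f' = f(1-f)$, so $L_\alpha = 1$ exactly:

```python
# Evaluate the constants L_alpha and beta_alpha for the logistic link.
import numpy as np

def f(x):
    return 1.0 / (1.0 + np.exp(-x))

def fprime(x):
    return f(x) * (1.0 - f(x))   # derivative of the logistic function

alpha = 2.0
x = np.linspace(-alpha, alpha, 10_001)
L_alpha = np.max(np.abs(fprime(x)) / (f(x) * (1.0 - f(x))))
beta_alpha = np.max(f(x) * (1.0 - f(x)) / fprime(x) ** 2)
print(L_alpha, beta_alpha)   # L_alpha = 1; beta_alpha = 1 / min f(1 - f)
```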

43 General $f$ (continued)
Hellinger-distance versions of the same bounds:
Theorem (upper bound, achieved by convex programming).
$\frac{1}{d^2} \sum_{i,j} d_H^2\big(f(\hat{M}_{i,j}), f(M_{i,j})\big) \le C \alpha L_\alpha \sqrt{\frac{rd}{m}}.$
Theorem (lower bound, applying to any estimator).
$\frac{1}{d^2} \sum_{i,j} d_H^2\big(f(\hat{M}_{i,j}), f(M_{i,j})\big) \ge c\, \alpha L_\alpha^{-1} \sqrt{\frac{rd}{m}}.$

44–49 Voting simulation
Binary data: voting history of US senators on 299 bills.
Figures: first singular vector of $\hat{M}$; Senate party affiliations; first singular vector of $Y_\Omega$.
The first five singular values of $\hat{M}$: 463, 216, 40, 32, 29 (so $\hat{M}$ is approximately rank 2).
The experiment is then repeated after randomly deleting 90% and then 95% of the entries, with the same three figures shown in each case.

50 Voting simulation
With 95% of votes deleted, 86% of the missing votes were correctly predicted (averaged over 20 experiments).
Figure: percent of missed predictions versus model rank $r$, comparing the rank-$r$ approximation of $Y_\Omega$ with nuclear-norm-constrained maximum-likelihood estimation.

51 MovieLens data set
100,000 movie ratings on a scale from 1 to 5. Convert to binary outcomes by comparing each rating to the average rating in the data set; evaluate by checking whether the correct sign is predicted. Training on 95,000 ratings and testing on the remainder:

                                  overall   rating 1   2     3     4     5
    Standard matrix completion:   60%       64%        56%   44%   65%   74%
    Binary matrix completion:     73%       79%        73%   58%   75%   89%
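
A sketch of the preprocessing described on this slide, with randomly generated ratings standing in for the MovieLens 100k data (the split sizes match the slide; everything else is an illustration):

```python
# Binarize 1-5 ratings against the global mean and score sign predictions.
import numpy as np

rng = np.random.default_rng(4)
ratings = rng.integers(1, 6, size=100_000)      # stand-in for MovieLens ratings
mean_rating = ratings.mean()
y = np.where(ratings > mean_rating, 1, -1)      # binary outcomes

# Split 95,000 / 5,000 as in the experiment; a real pipeline would fit a
# matrix-completion model on the training part and score the test part.
train, test = y[:95_000], y[95_000:]

def sign_accuracy(y_true, y_pred):
    """Fraction of entries whose predicted sign matches the true binary label."""
    return np.mean(np.sign(y_pred) == y_true)
```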

52 Restaurant recommendations [REU with Gao, Wootters, Vershynin]
100 restaurants, 107 users, 11 yes/no answers per user.
> 75% success rate in recommending 1 restaurant per user (estimated using cross-validation).

53 Methods of proof
Upper bounds: probability in Banach spaces / random matrix theory.
Lower bounds: information-theoretic techniques (Fano's inequality).

54 Tiny sketch of the upper bound proof
Recall: $F_{\Omega,Y}(X)$ is the log-likelihood of $X$ (we maximize it).
1. For a fixed matrix $X$, $\mathbb{E}\big(F_{\Omega,Y}(M) - F_{\Omega,Y}(X)\big) = c\, D\big(f(X) \,\|\, f(M)\big)$.
2. Lemma: for all $X$ satisfying $\frac{1}{d\alpha}\|X\|_* \le \sqrt{r}$, $\big|F_{\Omega,Y}(X) - \mathbb{E}\, F_{\Omega,Y}(X)\big| \le \delta$.
3. The maximizer $\hat{M}$ satisfies $F_{\Omega,Y}(\hat{M}) \ge F_{\Omega,Y}(M)$.
4. Therefore $0 \ge F_{\Omega,Y}(M) - F_{\Omega,Y}(\hat{M}) \ge \mathbb{E}\big(F_{\Omega,Y}(M) - F_{\Omega,Y}(\hat{M})\big) - 2\delta = c\, D\big(f(\hat{M}) \,\|\, f(M)\big) - 2\delta.$
Thus $D\big(f(\hat{M}) \,\|\, f(M)\big) \le \frac{2}{c}\,\delta$.
Key step: the proof of the lemma.

55 Thank you!
