Sketching as a Tool for Numerical Linear Algebra

1 Sketching as a Tool for Numerical Linear Algebra
David P. Woodruff
Presented by Sepehr Assadi
o(n) Big Data Reading Group, University of Pennsylvania
February 2015

2 Goal
New survey by David Woodruff: Sketching as a Tool for Numerical Linear Algebra
Topics:
- Subspace Embeddings
- Least Squares Regression
- Least Absolute Deviation Regression
- Low Rank Approximation
- Graph Sparsification
- Sketching Lower Bounds

4 Introduction
You have Big Data!
- Computationally expensive to deal with
- Excessive storage requirement
- Hard to communicate
...

5 Introduction
You have Big Data!
- Computationally expensive to deal with
- Excessive storage requirement
- Hard to communicate
...
Summarize your data:
- Sampling: a representative subset of the data
- Sketching: an aggregate summary of the whole data

6 Model
Input: matrix $A \in \mathbb{R}^{n \times d}$, vector $b \in \mathbb{R}^n$
Output: a function $F(A, b, \ldots)$, e.g. least squares regression
Different goals:
- Faster algorithms
- Streaming
- Distributed

7 Linear Sketching
Input: matrix $A \in \mathbb{R}^{n \times d}$
Let $r \ll n$ and let $S \in \mathbb{R}^{r \times n}$ be a random matrix
Let $SA$ be the sketch
Compute $F(SA)$ instead of $F(A)$

8 Linear Sketching (cont.)
Pros:
- Compute on an $r \times d$ matrix instead of $n \times d$
- Smaller representation and faster computation
- Linearity: $S(A + B) = SA + SB$, so we can compose linear sketches!
Cons:
- $F(SA)$ is an approximation of $F(A)$
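The following minimal NumPy sketch is not part of the slides; the sizes and the dense Gaussian choice of S are illustrative assumptions. It simply forms the sketch SA and checks the linearity property that makes sketches composable.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 10_000, 20, 400                      # r << n

A = rng.standard_normal((n, d))
B = rng.standard_normal((n, d))
S = rng.standard_normal((r, n)) / np.sqrt(r)   # one possible random sketching matrix

SA = S @ A                                     # the sketch: an r x d matrix instead of n x d
# Linearity: the sketch of a sum is the sum of the sketches.
assert np.allclose(S @ (A + B), SA + S @ B)
print(SA.shape)                                # (400, 20)
```

Because the map is linear, sketches of data arriving in pieces can simply be added together, which is what makes the streaming and distributed settings possible.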

9 Least Squares Regression (ℓ2-regression)
Input: matrix $A \in \mathbb{R}^{n \times d}$ (full column rank), vector $b \in \mathbb{R}^n$
Output $x^* \in \mathbb{R}^d$: $x^* = \arg\min_x \|Ax - b\|_2$
Closed-form solution: $x^* = (A^T A)^{-1} A^T b$
$\Theta(nd^2)$-time algorithm using naive matrix multiplication
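As a quick reference point (an illustration, not from the slides), the closed-form solution can be computed directly and cross-checked against a library least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5_000, 30
A = rng.standard_normal((n, d))          # with probability 1 this has full column rank
b = rng.standard_normal(n)

# Closed-form solution x* = (A^T A)^{-1} A^T b, Theta(n d^2) time with naive multiplication.
x_closed = np.linalg.solve(A.T @ A, A.T @ b)

# Cross-check against a library least-squares solver.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x_closed, x_lstsq)
```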

10 Approximate ℓ2-regression
Input: matrix $A \in \mathbb{R}^{n \times d}$ (full column rank), vector $b \in \mathbb{R}^n$, parameter $0 < \varepsilon < 1$
Output $\hat{x} \in \mathbb{R}^d$ such that
$\|A\hat{x} - b\|_2 \le (1 + \varepsilon) \min_x \|Ax - b\|_2$

11 Approximate ℓ2-regression (cont.)
A sketching algorithm:
1. Sample a random matrix $S \in \mathbb{R}^{r \times n}$
2. Compute $SA$ and $Sb$
3. Output $\hat{x} = \arg\min_x \|(SA)x - (Sb)\|_2$
Which randomized family of matrices S, and what value of r?

12 Approximate ℓ2-regression (cont.)
An introductory construction:
- Let $r = \Theta(d/\varepsilon^2)$
- Let $S \in \mathbb{R}^{r \times n}$ be a matrix of i.i.d. normal random variables with mean zero and variance $1/r$
Proof sketch: on the board.
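Here is a toy end-to-end run of this sketch-and-solve construction (illustrative only; the constant folded into $r = \Theta(d/\varepsilon^2)$ and the synthetic data are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, eps = 20_000, 10, 0.5
r = 4 * int(d / eps**2)                  # r = Theta(d / eps^2); the constant 4 is a guess

A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

S = rng.normal(0.0, np.sqrt(1.0 / r), size=(r, n))   # i.i.d. entries, mean 0, variance 1/r

x_opt, *_ = np.linalg.lstsq(A, b, rcond=None)        # exact solution
x_hat, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None) # sketched solution

opt = np.linalg.norm(A @ x_opt - b)
apx = np.linalg.norm(A @ x_hat - b)
print(apx / opt)                         # typically well below 1 + eps
```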

13 Approximate ℓ2-regression (cont.)
Problems:
- Computing $SA$ takes $\Theta(nrd)$ time
- Constructing S requires $\Theta(nr)$ space
Different constructions for S:
- Fast Johnson-Lindenstrauss transforms: $O(nd \log d) + \mathrm{poly}(d/\varepsilon)$ time [Sarlos, FOCS 06]
- Optimal $O(\mathrm{nnz}(A)) + \mathrm{poly}(d/\varepsilon)$ time algorithm [Clarkson, Woodruff, STOC 13]
- Random sign matrices with $\Theta(d)$-wise independent entries: $O(d^2/\varepsilon \cdot \log(nd))$-space streaming algorithm [Clarkson, Woodruff, STOC 09]
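The input-sparsity-time result rests on a very sparse sketching matrix. The snippet below is a rough CountSketch-style illustration of that idea (one ad hoc implementation under my own parameter choices, not the paper's pseudocode): each row of A is sent to a single random row of the sketch with a random sign, so applying S costs O(nnz(A)).

```python
import numpy as np

def countsketch(A, r, rng):
    """Apply a CountSketch-style sparse embedding S (r x n) to A without forming S.

    Row i of A is added to one uniformly random output row h(i), multiplied by a
    random sign s(i); equivalently, each column of S has a single +-1 entry.
    """
    n = A.shape[0]
    h = rng.integers(0, r, size=n)          # bucket for each input row
    s = rng.choice([-1.0, 1.0], size=n)     # random sign for each input row
    SA = np.zeros((r, A.shape[1]))
    np.add.at(SA, h, s[:, None] * A)        # O(nnz(A)) work overall
    return SA

rng = np.random.default_rng(3)
A = rng.standard_normal((10_000, 15))
SA = countsketch(A, r=1_000, rng=rng)
print(SA.shape)                             # (1000, 15)
```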

14 Subspace Embedding
Definition (ℓ2-subspace embedding). A $(1 \pm \varepsilon)$ ℓ2-subspace embedding for a matrix $A \in \mathbb{R}^{n \times d}$ is a matrix S for which, for all $x \in \mathbb{R}^d$,
$\|SAx\|_2^2 = (1 \pm \varepsilon) \|Ax\|_2^2$
It is actually a subspace embedding for the column space of A.
Oblivious ℓ2-subspace embedding: the distribution from which S is chosen is oblivious to A.
One very common tool for (oblivious) ℓ2-subspace embeddings is the Johnson-Lindenstrauss transform (JLT).

15 Johnson-Lindenstrauss transform
Definition (JLT(ε, δ, f)). A random matrix $S \in \mathbb{R}^{r \times n}$ forms a JLT(ε, δ, f) if, with probability at least $1 - \delta$, for any f-element subset $V \subseteq \mathbb{R}^n$, it holds that
$|\langle Sv, Sv' \rangle - \langle v, v' \rangle| \le \varepsilon \|v\|_2 \|v'\|_2$ for all $v, v' \in V$.

16 Johnson-Lindenstrauss transform
Definition (JLT(ε, δ, f), restated). A random matrix $S \in \mathbb{R}^{r \times n}$ forms a JLT(ε, δ, f) if, with probability at least $1 - \delta$, for any f-element subset $V \subseteq \mathbb{R}^n$,
$|\langle Sv, Sv' \rangle - \langle v, v' \rangle| \le \varepsilon \|v\|_2 \|v'\|_2$ for all $v, v' \in V$.
Usual statement (i.e. the original Johnson-Lindenstrauss lemma):
Lemma (JLL). Given N points $q_1, \ldots, q_N \in \mathbb{R}^n$, there exists a matrix $S \in \mathbb{R}^{t \times n}$ (a linear map) with $t = \Theta(\log N / \varepsilon^2)$ such that, with high probability, simultaneously for all pairs $q_i$ and $q_j$,
$\|S(q_i - q_j)\|_2 = (1 \pm \varepsilon) \|q_i - q_j\|_2$.

17 Johnson-Lindenstrauss transform (cont.)
A simple construction of JLT(ε, δ, f):
Theorem. Let $0 < \varepsilon, \delta < 1$ and $S = \frac{1}{\sqrt{r}} R \in \mathbb{R}^{r \times n}$, where the entries $R_{i,j}$ are independent standard normal random variables. If $r = \Omega(\varepsilon^{-2} \log(f/\delta))$, then S is a JLT(ε, δ, f).
Other constructions:
- Random sign matrices [Achlioptas, 03], [Clarkson, Woodruff, STOC 09]
- Random sparse matrices [Dasgupta, Kumar, Sarlos, STOC 10], [Kane, Nelson, J. ACM 14]
- Fast Johnson-Lindenstrauss transforms [Ailon, Chazelle, STOC 06]
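A small empirical check of this construction (not from the slides; the point set, δ, and the constant in $r = \Omega(\varepsilon^{-2}\log(f/\delta))$ are arbitrary assumptions) verifies the pairwise inner-product guarantee directly:

```python
import numpy as np

rng = np.random.default_rng(4)
n, f, eps, delta = 1_000, 50, 0.25, 0.1
r = int(np.ceil(8 * np.log(f / delta) / eps**2))   # constant 8 is a guess

V = rng.standard_normal((f, n))                    # an f-element subset of R^n, one point per row
S = rng.standard_normal((r, n)) / np.sqrt(r)       # S = (1/sqrt(r)) R
SV = V @ S.T                                       # row i is S v_i

# Check |<Sv, Sv'> - <v, v'>| <= eps * ||v||_2 * ||v'||_2 over all pairs.
err = np.abs(SV @ SV.T - V @ V.T)
norms = np.linalg.norm(V, axis=1)
print(float((err / (eps * np.outer(norms, norms))).max()))   # typically below 1
```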

18 JLT results in ℓ2-subspace embedding
Claim. $S = \mathrm{JLT}(\varepsilon, \delta, f)$ is an oblivious ℓ2-subspace embedding for $A \in \mathbb{R}^{n \times d}$.
Challenge:
- JLT(ε, δ, f) provides a guarantee for a single finite set in $\mathbb{R}^n$
- An ℓ2-subspace embedding requires the guarantee for an infinite set, i.e. the column space of A

19 JLT results in ℓ2-subspace embedding (cont.)
Let $\mathcal{S}$ be the unit sphere in the column space of A:
$\mathcal{S} = \{ y \in \mathbb{R}^n : y = Ax \text{ for some } x \in \mathbb{R}^d \text{ and } \|y\|_2 = 1 \}$
We seek a finite subset $\mathcal{N} \subseteq \mathcal{S}$ so that if
$\langle Sw, Sw' \rangle = \langle w, w' \rangle \pm \varepsilon$ for all $w, w' \in \mathcal{N}$,
then $\|Sy\|_2 = (1 \pm \varepsilon) \|y\|_2$ for all $y \in \mathcal{S}$.

20 JLT results in ℓ2-subspace embedding (cont.)
Lemma (1/2-net for $\mathcal{S}$). It suffices to choose any $\mathcal{N}$ such that for every $y \in \mathcal{S}$ there is $w \in \mathcal{N}$ with $\|y - w\|_2 \le 1/2$.
Proof sketch.
1. Decompose $y = y^{(0)} + y^{(1)} + y^{(2)} + \cdots$, where $\|y^{(i)}\|_2 \le 2^{-i}$ and each $y^{(i)}$ is a scalar multiple of a point of $\mathcal{N}$.
2. Then $\|Sy\|_2^2 = \|S(y^{(0)} + y^{(1)} + y^{(2)} + \cdots)\|_2^2 = 1 \pm O(\varepsilon)$.

21 1/2-net of $\mathcal{S}$
Lemma. There exists a 1/2-net $\mathcal{N}$ of $\mathcal{S}$ for which $|\mathcal{N}| \le 5^d$.
Proof sketch.
1. Find a set $\mathcal{N}'$ with the maximal number of points on the unit sphere of $\mathbb{R}^d$ so that no two points are within distance 1/2 of each other; a volume argument gives $|\mathcal{N}'| \le 5^d$.
2. Let U be the orthonormal basis matrix of the column space of A.
3. Set $\mathcal{N} = \{ y \in \mathbb{R}^n : y = Ux \text{ for some } x \in \mathcal{N}' \}$; each such y satisfies $\|y\|_2 = 1$.
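As a toy illustration of the counting step (my own heuristic construction, not the lemma's argument: random greedy sampling only approximates a maximal separated set), the following collects pairwise 1/2-separated unit vectors in low dimensions and compares the count to the $5^d$ bound:

```python
import numpy as np

def greedy_half_separated(d, rng, trials=20_000):
    """Greedily collect unit vectors in R^d that are pairwise more than 1/2 apart.

    A maximal 1/2-separated subset of the sphere is a 1/2-net; random sampling is
    only a heuristic stand-in for maximality, so this is an illustration, not a proof.
    """
    pts = []
    for _ in range(trials):
        x = rng.standard_normal(d)
        x /= np.linalg.norm(x)
        if all(np.linalg.norm(x - p) > 0.5 for p in pts):
            pts.append(x)
    return pts

rng = np.random.default_rng(6)
for d in (2, 3, 4):
    net = greedy_half_separated(d, rng)
    print(d, len(net), 5**d)            # the collected set stays below the 5^d bound
```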

22 Subspace Embedding via JLT
Theorem. Let $0 < \varepsilon, \delta < 1$ and let S be a JLT(ε, δ, $5^d$). For any fixed matrix $A \in \mathbb{R}^{n \times d}$, with probability $1 - \delta$, S is a $(1 \pm \varepsilon)$ ℓ2-subspace embedding for A, i.e.
for all $x \in \mathbb{R}^d$, $\|SAx\|_2 = (1 \pm \varepsilon) \|Ax\|_2$.
Results:
- $O(\mathrm{nnz}(A) \cdot \varepsilon^{-1} \log d)$-time algorithm using the column-sparsity transform of Kane and Nelson [Kane, Nelson, J. ACM 14]
- $O(nd \log n)$-time algorithm using the Fast Johnson-Lindenstrauss transform of Ailon and Chazelle [Ailon, Chazelle, STOC 06]
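A numerical check of the conclusion (an illustration with a dense Gaussian JLT and an arbitrary constant in r, not the theorem's exact parameters): S embeds the column space of A precisely when every singular value of $SU$ is close to 1, where U is an orthonormal basis of that column space.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, eps = 5_000, 8, 0.25
A = rng.standard_normal((n, d))

r = int(np.ceil(8 * d / eps**2))        # enough rows for a JLT on the 5^d-point net; constant 8 is a guess
S = rng.standard_normal((r, n)) / np.sqrt(r)

# S is a (1 +- eps) subspace embedding for A when every singular value of S @ U
# lies within the (1 +- eps) guarantee, where U spans the column space of A.
U, _ = np.linalg.qr(A)
sv = np.linalg.svd(S @ U, compute_uv=False)
print(sv.min(), sv.max())               # typically within [1 - eps, 1 + eps]
```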

23 Other Subspace Embedding Algorithms
Non-JLT-based subspace embeddings:
- $O(\mathrm{nnz}(A)) + \mathrm{poly}(d/\varepsilon)$ time algorithm [Clarkson, Woodruff, STOC 13]
Non-oblivious subspace embeddings:
- Based on leverage score sampling [Drineas, Mahoney, Muthukrishnan, SODA 06]

24 ℓ2-regression via Oblivious Subspace Embedding
Theorem. Let $S \in \mathbb{R}^{r \times n}$ be any oblivious subspace embedding matrix and $\hat{x} = \arg\min_x \|SAx - Sb\|_2$; then
$\|SA\hat{x} - Sb\|_2 \le (1 + \varepsilon) \min_x \|Ax - b\|_2$.
Proof sketch.
1. Let $U \in \mathbb{R}^{n \times (d+1)}$ be an orthonormal basis of the columns of A together with the vector b.
2. Suppose S is an ℓ2-subspace embedding for U.
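The sketch below (an illustration with a dense Gaussian stand-in for the oblivious embedding; the sketch size r and the synthetic data are assumptions) exercises both the proof ingredient, that S embeds the column space of [A, b], and the theorem's guarantee on the sketched residual:

```python
import numpy as np

rng = np.random.default_rng(7)
n, d, eps = 10_000, 10, 0.3
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + rng.standard_normal(n)

r = 2_000                                        # arbitrary sketch size for the illustration
S = rng.standard_normal((r, n)) / np.sqrt(r)     # stand-in for an oblivious subspace embedding

# Proof ingredient: S should embed the column space of A together with b.
U, _ = np.linalg.qr(np.column_stack([A, b]))     # n x (d+1) orthonormal basis
sv = np.linalg.svd(S @ U, compute_uv=False)
print("singular values of SU:", sv.min(), sv.max())

# Theorem's guarantee: sketched residual of x_hat vs. the true optimum.
x_hat, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
opt = np.linalg.norm(A @ np.linalg.lstsq(A, b, rcond=None)[0] - b)
print("||SA x_hat - Sb|| / min ||Ax - b|| =", np.linalg.norm(S @ (A @ x_hat - b)) / opt)
```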

25 Questions?
