Sketching as a Tool for Numerical Linear Algebra
Sketching as a Tool for Numerical Linear Algebra (Part 2)
David P. Woodruff
Presented by Sepehr Assadi
o(n) Big Data Reading Group, University of Pennsylvania
February 2015
Goal
New survey by David Woodruff: "Sketching as a Tool for Numerical Linear Algebra".
Topics:
- Subspace embeddings
- Least squares regression
- Least absolute deviation regression
- Low rank approximation
- Graph sparsification
- Sketching lower bounds
Introduction
You have big data!
- Computationally expensive to process
- Excessive storage requirements
- Hard to communicate
So, summarize your data:
- Sampling: a representative subset of the data
- Sketching: an aggregate summary of the whole data
Model
Input: a matrix A ∈ R^{n×d} and a vector b ∈ R^n.
Output: a function F(A, b, ...), e.g. least squares regression.
Different goals:
- Faster algorithms
- Streaming
- Distributed computation
Linear Sketching
Input: a matrix A ∈ R^{n×d}.
- Let r ≪ n and let S ∈ R^{r×n} be a random matrix.
- The sketch is S·A.
- Compute F(S·A) instead of F(A).
Linear Sketching (cont.)
Pros:
- We compute on an r×d matrix instead of an n×d one: a smaller representation and faster computation.
- Linearity: S·(A + B) = S·A + S·B, so we can compose linear sketches!
Cons:
- F(S·A) is only an approximation of F(A).
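As a small illustration (mine, not from the slides), a dense Gaussian sketch in NumPy shows both the size reduction and the linearity property:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 10_000, 20, 400       # r << n

A = rng.normal(size=(n, d))
B = rng.normal(size=(n, d))

# A dense Gaussian sketching matrix with entries N(0, 1/r), so that
# E[||S x||^2] = ||x||^2 for any fixed vector x.
S = rng.normal(scale=1.0 / np.sqrt(r), size=(r, n))

SA = S @ A                      # the sketch: r x d instead of n x d
assert SA.shape == (r, d)

# Linearity: sketches compose, S(A + B) = SA + SB.
assert np.allclose(S @ (A + B), SA + S @ B)
```

Linearity is what makes these sketches mergeable across streams or machines: each party can sketch its own update and the sketches simply add.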
Approximate ℓ2-Regression
Input:
- a matrix A ∈ R^{n×d} with full column rank
- a vector b ∈ R^n
- a parameter 0 < ε < 1
Output: a vector x̂ ∈ R^d such that
  ‖Ax̂ − b‖₂ ≤ (1 + ε)·min_x ‖Ax − b‖₂
Subspace Embedding
Definition (ℓ2-subspace embedding). A (1 ± ε) ℓ2-subspace embedding for a matrix A ∈ R^{n×d} is a matrix S for which, for all x ∈ R^d,
  ‖SAx‖₂² = (1 ± ε)·‖Ax‖₂²
Strictly speaking, S is a subspace embedding for the column space of A.
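A quick numerical sanity check (my own; it uses an i.i.d. Gaussian sketch, not necessarily the construction used in the talk): since every vector in the column space of A is Ux for an orthonormal basis U, the embedding condition is equivalent to all squared singular values of S·U lying in [1 − ε, 1 + ε].

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, r, eps = 5000, 10, 2000, 0.3

A = rng.normal(size=(n, d))
# An i.i.d. Gaussian sketch with variance 1/r is an oblivious
# subspace embedding once r is large enough.
S = rng.normal(scale=1.0 / np.sqrt(r), size=(r, n))

# U: orthonormal basis for the column space of A.
U, _, _ = np.linalg.svd(A, full_matrices=False)
sv = np.linalg.svd(S @ U, compute_uv=False)
assert np.all(sv**2 >= 1 - eps) and np.all(sv**2 <= 1 + eps)
```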
Previous Session
Oblivious ℓ2-subspace embeddings:
- The distribution from which S is chosen is oblivious to A.
- One very common tool: the Johnson-Lindenstrauss transform (JLT).
- Immediately yields an approximation to the ℓ2-regression problem.
Today
Non-oblivious ℓ2-subspace embeddings:
- The distribution from which S is chosen depends on A.
- One very common tool: leverage score sampling.
- Can still be used to approximate the ℓ2-regression problem.
Leverage Scores
Thin singular value decomposition (SVD) of A:
  A_{n×d} = U_{n×d} Σ_{d×d} Vᵀ_{d×d}
U is an orthonormal basis for the column space of A.
The leverage score of the i-th row of A is
  ℓᵢ² = ‖U_{(i)}‖₂²,
where U_{(i)} denotes the i-th row of U.
Properties:
- Independent of the choice of basis (a property of the column space).
- Forms a probability distribution after simple normalization (the ℓᵢ² sum to d).
- Let H = A(AᵀA)⁻¹Aᵀ; then ℓᵢ² = H_{i,i}.
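These properties are easy to verify numerically; the snippet below (mine) computes the leverage scores from the thin SVD and checks them against the diagonal of the hat matrix H:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 5
A = rng.normal(size=(n, d))

# Leverage scores via the thin SVD: l_i^2 = ||U_(i)||_2^2.
U, _, _ = np.linalg.svd(A, full_matrices=False)   # U is n x d with orthonormal columns
lev = np.sum(U**2, axis=1)

# The scores sum to d, so lev / d is a probability distribution.
assert np.isclose(lev.sum(), d)

# They equal the diagonal of the hat matrix H = A (A^T A)^{-1} A^T.
H = A @ np.linalg.solve(A.T @ A, A.T)
assert np.allclose(lev, np.diag(H))
```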
Leverage Score Sampling
Definition (SampleRescale(n, s, p)). The procedure S = SampleRescale(n, s, p) returns S_{s×n} = D·Ω, where each row of Ω is a standard basis vector of Rⁿ chosen independently according to the probability distribution p, and D is a diagonal matrix with D_{i,i} = 1/√(p_j·s) if e_j was chosen for the i-th row of Ω.
Leverage score sampling (p = LS-Sampling(A, β)):
- p = (p₁, ..., p_n) is a probability distribution satisfying pᵢ ≥ β·ℓᵢ²/d, where ℓᵢ is the i-th leverage score of A_{n×d}.
- Compute S = SampleRescale(n, s, p).
- Return S·A.
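A direct NumPy implementation of the procedure above (a sketch under the definition's assumptions; sampling is i.i.d. with replacement, and we use exact leverage scores, i.e. β = 1):

```python
import numpy as np

def sample_rescale(n, s, p, rng):
    """S = D * Omega: each row of Omega is a basis vector e_j drawn from p,
    and D_ii = 1 / sqrt(p_j * s) rescales it."""
    idx = rng.choice(n, size=s, p=p)        # i.i.d. draws from p
    scale = 1.0 / np.sqrt(p[idx] * s)
    S = np.zeros((s, n))
    S[np.arange(s), idx] = scale
    return S

rng = np.random.default_rng(3)
n, d, s = 2000, 5, 500

A = rng.normal(size=(n, d))
U, _, _ = np.linalg.svd(A, full_matrices=False)
p = np.sum(U**2, axis=1) / d                # exact leverage score distribution (beta = 1)

S = sample_rescale(n, s, p, rng)
SA = S @ A                                  # the s x d sketch
assert SA.shape == (s, d)

# The rescaling makes the sketch unbiased: E[(SA)^T (SA)] = A^T A.
err = np.linalg.norm(SA.T @ SA - A.T @ A) / np.linalg.norm(A.T @ A)
assert err < 0.5
```

In practice one never materializes the s×n matrix S: S·A is just the selected rows of A, rescaled.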
Subspace Embedding via LS-Sampling
Theorem. Let s = Θ(d·log d/(β·ε²)), let S = SampleRescale(n, s, p) for p = LS-Sampling(A, β), and let U be an orthonormal basis for the column space of A. Then, with probability 0.99, simultaneously for all i ∈ [d],
  1 − ε ≤ σᵢ²(S·U) ≤ 1 + ε.
This immediately implies the subspace embedding property.
Subspace Embedding via LS-Sampling (cont.)
Theorem (restated). Let s = Θ(d·log d/(β·ε²)), let S = SampleRescale(n, s, p) for p = LS-Sampling(A, β), and let U be an orthonormal basis for the column space of A. Then, with probability 0.99, simultaneously for all i ∈ [d],
  1 − ε ≤ σᵢ²(S·U) ≤ 1 + ε.
Proof tool (Matrix Chernoff): Suppose X₁, ..., X_s are independent copies of a symmetric random matrix X ∈ R^{d×d} with E[X] = 0, ‖X‖₂ ≤ γ, and ‖E[XᵀX]‖₂ ≤ σ², and let W = (1/s)·Σᵢ₌₁ˢ Xᵢ. Then
  Pr(‖W‖₂ > ε) ≤ 2d·exp(−s·ε²/(2σ² + 2γε/3)).
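Empirically the concentration kicks in quickly. The check below (mine) uses exact leverage scores (β = 1) and verifies that all squared singular values of S·U land well inside [1 − ε, 1 + ε]:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, s, eps = 5000, 4, 1500, 0.3

A = rng.normal(size=(n, d))
U, _, _ = np.linalg.svd(A, full_matrices=False)

# Exact leverage score distribution (beta = 1).
p = np.sum(U**2, axis=1) / d

# SampleRescale applied directly to U (equivalent to forming S and
# computing S @ U, without materializing the s x n matrix S).
idx = rng.choice(n, size=s, p=p)
SU = U[idx] / np.sqrt(p[idx, None] * s)

sv2 = np.linalg.svd(SU, compute_uv=False) ** 2
assert np.all(sv2 >= 1 - eps) and np.all(sv2 <= 1 + eps)
```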
Linear Regression via LS-Sampling
Theorem. Let s = Θ(d·log d/(β·ε²)), let S = SampleRescale(n, s, p) for p = LS-Sampling(A, β), and let x̂ = arg min_x ‖SAx − Sb‖₂. Then, with probability 0.99,
  ‖Ax̂ − b‖₂ ≤ (1 + ε)·min_x ‖Ax − b‖₂.
Linear Regression via LS-Sampling (cont.)
Theorem (Approximate Matrix Multiplication). For an orthonormal matrix C ∈ R^{n×m}, an arbitrary vector d ∈ Rⁿ, and probabilities p = (p₁, ..., p_n) such that
  pₖ ≥ β·‖C₍ₖ₎‖₂²/‖C‖_F²,
let S = SampleRescale(n, s, p). Then, with probability 0.99,
  ‖(SC)ᵀ(Sd) − Cᵀd‖_F ≤ O(1/√(s·β))·‖C‖_F·‖d‖_F.
Warning: this statement is neither fully general nor precise; see [DKM06].
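A rough numerical illustration of the sampled product (mine; the theorem's vector d is renamed v here to avoid clashing with the dimension d):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, s = 5000, 4, 1000

# An orthonormal C (n x m) and an arbitrary vector v in R^n.
C, _ = np.linalg.qr(rng.normal(size=(n, m)))
v = rng.normal(size=n)

# Row-norm probabilities: p_k = ||C_(k)||^2 / ||C||_F^2 (the beta = 1 case);
# note ||C||_F^2 = m since C has orthonormal columns.
p = np.sum(C**2, axis=1) / m

# SampleRescale applied to C and v with the same sampled rows.
idx = rng.choice(n, size=s, p=p)
scale = 1.0 / np.sqrt(p[idx] * s)
SC = C[idx] * scale[:, None]
Sv = v[idx] * scale

# The sampled product approximates the exact one in the sense of the theorem.
err = np.linalg.norm(SC.T @ Sv - C.T @ v) / (np.linalg.norm(C, 'fro') * np.linalg.norm(v))
assert err < 0.2
```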
Linear Regression via LS-Sampling (cont.)
Theorem (restated). Let s = Θ(d·log d/(β·ε²)), let S = SampleRescale(n, s, p) for p = LS-Sampling(A, β), and let x̂ = arg min_x ‖SAx − Sb‖₂. Then, with probability 0.99,
  ‖Ax̂ − b‖₂ ≤ (1 + ε)·min_x ‖Ax − b‖₂.
Proof: on the board.
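Putting the pieces together, sketch-and-solve regression with leverage score sampling looks like this (an illustrative sketch of mine, using exact scores, i.e. β = 1):

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, s = 5000, 5, 1000

A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + rng.normal(size=n)   # noisy, so the optimum is nonzero

# Leverage score sampling sketch with exact scores (beta = 1).
U, _, _ = np.linalg.svd(A, full_matrices=False)
p = np.sum(U**2, axis=1) / d
idx = rng.choice(n, size=s, p=p)
w = 1.0 / np.sqrt(p[idx] * s)                     # SampleRescale weights
SA, Sb = A[idx] * w[:, None], b[idx] * w

x_hat = np.linalg.lstsq(SA, Sb, rcond=None)[0]    # solve the small sketched problem
x_opt = np.linalg.lstsq(A, b, rcond=None)[0]      # exact solution, for comparison

def cost(x):
    return np.linalg.norm(A @ x - b)

# The sketched solution is a (1 + eps)-approximation; eps = 0.1 is generous here.
assert cost(x_hat) <= 1.1 * cost(x_opt)
```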
Approximating Leverage Scores
Computing the leverage scores exactly is as hard as solving the regression problem! Can we approximate them instead?
- For β = 1/2, in time O(nd·log n + d³) [DMIMW12]
- Improved to O(nnz(A)·log n + d³) [CW13]
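The rough idea behind these fast algorithms, paraphrased (my sketch, using a plain Gaussian sketch rather than the fast transforms of [DMIMW12]): sketch A once, take a QR factorization of the small sketch, and read off approximate leverage scores as squared row norms of A·R⁻¹:

```python
import numpy as np

rng = np.random.default_rng(7)
n, d, r = 5000, 5, 500

A = rng.normal(size=(n, d))

# 1) Sketch A down to r rows and take a QR factorization of the sketch.
Pi = rng.normal(scale=1.0 / np.sqrt(r), size=(r, n))
_, R = np.linalg.qr(Pi @ A)

# 2) Since (Pi A)^T (Pi A) = R^T R approximates A^T A, the rows of A R^{-1}
#    are nearly orthonormal, and their squared norms approximate the
#    leverage scores up to constant factors.
lev_approx = np.sum(np.linalg.solve(R.T, A.T).T ** 2, axis=1)

# Compare against the exact scores from the thin SVD.
U, _, _ = np.linalg.svd(A, full_matrices=False)
lev_exact = np.sum(U**2, axis=1)
assert np.all(lev_approx >= 0.5 * lev_exact)
assert np.all(lev_approx <= 2.0 * lev_exact)
```

Constant-factor approximations are exactly what LS-Sampling needs, since the guarantee only requires pᵢ ≥ β·ℓᵢ²/d for a constant β.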
Questions?
References
[CW13] Kenneth L. Clarkson and David P. Woodruff. Low rank approximation and regression in input sparsity time. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing (STOC), 2013.
[DKM06] Petros Drineas, Ravi Kannan, and Michael W. Mahoney. Fast Monte Carlo algorithms for matrices I: Approximating matrix multiplication. SIAM Journal on Computing, 36(1), 2006.
[DMIMW12] Petros Drineas, Malik Magdon-Ismail, Michael W. Mahoney, and David P. Woodruff. Fast approximation of matrix coherence and statistical leverage. Journal of Machine Learning Research, 13(1):3475-3506, 2012.
More information