Fast Dimension Reduction


Fast Dimension Reduction
MMDS 2008
Nir Ailon, Google Research NY
Fast Dimension Reduction Using Rademacher Series on Dual BCH Codes (with Edo Liberty)
The Fast Johnson-Lindenstrauss Transform (with Bernard Chazelle)

Original Motivation: Nearest Neighbor Searching
We wanted to improve the algorithm by Indyk and Motwani for approximate NN searching in Euclidean space. Evidence that this was possible came from an improvement on the algorithm by Kushilevitz, Ostrovsky and Rabani for approximate NN searching over GF(2). To do the same for Euclidean space, it was evident that improving the running time of Johnson-Lindenstrauss was key.

Later Motivation
- Provide a more elegant proof, using modern techniques; the improvement was obtained as a bonus.
- Exciting use of Talagrand concentration bounds.
- Error-correcting codes.

Random Dimension Reduction
- Sketching [Woodruff, Jayram, Li]
- (Existential) metric embedding
- Distance preserving: sets of points, subspaces, manifolds [Clarkson]
- Volume preserving [Magen, Zouzias]
- Fast approximate linear algebra: SVD, linear regression (Muthukrishnan, Mahoney, Drineas, Sarlos)
- Computational aspects: time [A, Chazelle, Liberty] + {sketching community} + {fast approximate linear algebra community}; randomness {functional analysis community}

Theoretical Challenge
Find a random projection Φ from R^d to R^k (d big, k small) such that for every x ∈ R^d with ||x||_2 = 1 and every 0 < ε < 1, with probability 1 - exp{-kε^2}:
  ||Φx||_2 = 1 ± O(ε)
[diagram: x ∈ R^d mapped by Φ to Φx ∈ R^k]

Usage
If you have n vectors x_1, ..., x_n ∈ R^d: set k = O(ε^{-2} log n); by a union bound, for all i, j:
  ||Φx_i - Φx_j|| = (1 ± ε) ||x_i - x_j||
A low-distortion metric embedding; this bound on k is tight.

Solution: Johnson-Lindenstrauss (JL)
Φ = a dense random k × d matrix.
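
To make this concrete, here is a minimal numpy sketch of the dense construction (Gaussian entries scaled by 1/√k are one standard choice of distribution; the dimensions and ε below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, eps = 2048, 500, 0.25
k = int(np.ceil(8 * np.log(n) / eps**2))     # k = O(eps^-2 log n)

# Dense JL matrix: i.i.d. Gaussian entries scaled so that E||Phi x||_2^2 = ||x||_2^2.
Phi = rng.standard_normal((k, d)) / np.sqrt(k)

X = rng.standard_normal((n, d))              # n arbitrary points in R^d
Y = X @ Phi.T                                # Theta(nkd) time -- the bottleneck

# Spot-check pairwise distance distortion on random pairs.
i, j = rng.integers(0, n, 300), rng.integers(0, n, 300)
keep = i != j
ratio = (np.linalg.norm(Y[i[keep]] - Y[j[keep]], axis=1)
         / np.linalg.norm(X[i[keep]] - X[j[keep]], axis=1))
print(ratio.min(), ratio.max())              # typically within 1 +/- eps
```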

So what's the problem?
- Running time Ω(kd)
- Number of random bits Ω(kd)
Can we do better?

Fast JL [A, Chazelle 2006]
Φ = Sparse · Hadamard (Fourier) · Diagonal, a k × d transform.
Time = O(k^3 + d log d); beats the JL Ω(kd) bound for log d < k < d^{1/3}.
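
A rough sketch of the Sparse · Hadamard · Diagonal structure, with the Hadamard step done as a fast Walsh-Hadamard transform in O(d log d); the sparsity parameter q below is a placeholder rather than the exact value prescribed in [A, Chazelle 2006]:

```python
import numpy as np

rng = np.random.default_rng(0)

def fwht(x):
    """Fast Walsh-Hadamard transform (Sylvester order), O(d log d); len(x) a power of 2."""
    x = x.astype(float)
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

def fjlt(x, k, q, rng):
    """Sketch of y = P H D x: random signs, fast Hadamard, then a sparse Gaussian projection."""
    d = len(x)
    D = rng.choice([-1.0, 1.0], size=d)              # random-sign diagonal D
    z = fwht(D * x) / np.sqrt(d)                     # normalized Hadamard in O(d log d)
    mask = rng.random((k, d)) < q                    # P: each entry nonzero with probability q
    P = np.where(mask, rng.standard_normal((k, d)), 0.0) / np.sqrt(k * q)
    return P @ z

d, k, q = 1024, 64, 0.05                             # q is a placeholder sparsity level
x = rng.standard_normal(d); x /= np.linalg.norm(x)
print(np.linalg.norm(fjlt(x, k, q, rng)))            # concentrated around ||x||_2 = 1
```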

Improvement on FJLT
O(d log k) time for k < d^{1/2}; beats JL up to k < d^{1/2}; O(d) random bits.

Algorithm (k = d^{1/2}) [A, Liberty 2007]
Φ = B (a k × d error-correcting code, dual BCH) · Diagonal.
[diagram: x ∈ R^d → D → B → Φx ∈ R^k, k = d^{1/2}]

Error Correcting Codes
B = a k × d matrix, k = d^{1/2}: a binary dual-BCH code of designed distance 4, with entries ±k^{-1/2}.
- Columns closed under addition
- Bx computable in time O(d log d) (the row set is a subset of the rows of the d × d Hadamard matrix)
- 4-wise independence
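
The dual-BCH matrix itself is not reproduced here; purely as a structural stand-in (it does not have the 4-wise independence of the real code), one can take B to be k = d^{1/2} rows of the d × d Hadamard matrix, rescaled to entries ±k^{-1/2}:

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(1)
d = 1024
k = int(np.sqrt(d))                        # k = d^{1/2} = 32

# Stand-in for the dual-BCH row set: a random subset of Hadamard rows.
# (The real construction fixes specific code rows and gets 4-wise independence.)
rows = rng.choice(d, size=k, replace=False)
B = hadamard(d)[rows] / np.sqrt(k)         # k x d, entries +/- k^{-1/2}

# In practice one never materializes H: Bx is computed in O(d log d) by a fast
# Walsh-Hadamard transform of x followed by keeping the k chosen coordinates.
x = rng.standard_normal(d); x /= np.linalg.norm(x)
print(np.linalg.norm(B @ x))               # equals 1 in expectation over the row choice
```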

Error Correcting Codes
Fact (easily from properties of dual BCH): ||B^t||_{2→4} = O(1), i.e., for y ∈ R^k with ||y||_2 = 1:
  ||y^t B||_4 = O(1)

Rademacher Series on Error Correcting Codes
Look at the random variable BDx in (R^k, ℓ_2):
  BDx = Σ_{i=1..d} D_ii x_i B_i = Σ_i D_ii M_i,   D_ii ∈_R {±1},  M_i = x_i B_i

Talagrand's Concentration Bound for Rademacher Series
  Z = ||Σ_i D_ii M_i||_p   (in our case p = 2)
  Pr[ |Z - EZ| > ε ] = O(exp{ -ε^2 / (4 ||M||_{2→p}^2) })
Here M is the matrix whose columns are the M_i, and ||M||_{2→p} is its ℓ_2 → ℓ_p operator norm.

Rademacher Series on Error Correcting Codes
Look at the random variable BDx in (R^k, ℓ_2): Z = ||BDx||_2 = ||Σ_i D_ii M_i||_2.
  ||M||_{2→2} ≤ ||x||_4 ||B^t||_{2→4}   (Cauchy-Schwarz)
By the ECC properties: ||M||_{2→2} ≤ ||x||_4 · O(1).
Trivial: EZ = ||x||_2 = 1.

Rademacher Series on Error Correcting Codes
  Z = ||BDx||_2 = ||Σ_i D_ii M_i||_2,  ||M||_{2→2} = O(||x||_4),  EZ = 1
  Pr[ |Z - EZ| > ε ] = O(exp{ -ε^2 / (4 ||M||_{2→2}^2) })
  ⇒ Pr[ |Z - 1| > ε ] = O(exp{ -ε^2 / ||x||_4^2 })
Challenge: deviation of more than ε with probability at most exp{-kε^2}.
So how do we get ||x||_4^2 = O(k^{-1}) = O(d^{-1/2})?
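
A small simulation (reusing the random-Hadamard-rows stand-in for B) makes the role of ||x||_4 visible: for a well-spread x, Z = ||BDx||_2 concentrates tightly around 1, while for a spiky x it does not. All sizes are illustrative:

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(2)
d = 1024
k = int(np.sqrt(d))
B = hadamard(d)[rng.choice(d, size=k, replace=False)] / np.sqrt(k)   # stand-in for dual-BCH

def spread_of_Z(x, trials=2000):
    """Empirical std of Z = ||B D x||_2 over random sign diagonals D."""
    Z = np.array([np.linalg.norm(B @ (rng.choice([-1.0, 1.0], size=d) * x))
                  for _ in range(trials)])
    return Z.std()

x_flat = np.ones(d) / np.sqrt(d)            # ||x||_4^2 = d^{-1/2}: the good case
x_spiky = np.zeros(d); x_spiky[:4] = 0.5    # ||x||_4^2 = 1/2: the bad case
print(spread_of_Z(x_flat), spread_of_Z(x_spiky))   # the spiky x concentrates far worse
```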

Controlling ||x||_4^2
How to get ||x||_4^2 = O(k^{-1}) = O(d^{-1/2})?
If you think about it for a second: a random x has ||x||_4^2 = O(d^{-1/2}), but a random x is also easy to reduce: just output the first k dimensions.
Are we asking for too much? No: a truly random x has a strong bound on ||x||_p for all p > 2.

Controlling ||x||_4^2
How to get ||x||_4^2 = O(k^{-1}) = O(d^{-1/2})?
We can multiply x by an orthogonal matrix: try the matrix HD (HD was used in [AC06] to control ||HDx||_∞).
  Z = ||HDx||_4 = ||Σ_i D_ii x_i H_i||_4 = ||Σ_i D_ii M_i||_4
By Talagrand: Pr[ |Z - EZ| > t ] = O(exp{ -t^2 / (4 ||M||_{2→4}^2) })
  EZ = O(d^{-1/4})   (trivial)
  ||M||_{2→4} ≤ ||H||_{4/3→4} ||x||_4   (Cauchy-Schwarz)

Controlling ||x||_4^2
How to get ||x||_4^2 = O(k^{-1}) = O(d^{-1/2})?
  Z = ||Σ_i D_ii M_i||_4,  M_i = x_i H_i
  Pr[ |Z - EZ| > t ] = O(exp{ -t^2 / (4 ||M||_{2→4}^2) })   (Talagrand)
  EZ = O(d^{-1/4})   (trivial)
  ||M||_{2→4} ≤ ||H||_{4/3→4} ||x||_4   (Cauchy-Schwarz)
  ||H||_{4/3→4} ≤ d^{-1/4}   (Hausdorff-Young)
  ⇒ ||M||_{2→4} ≤ d^{-1/4} ||x||_4
  ⇒ Pr[ ||HDx||_4 > d^{-1/4} + t ] = exp{ -t^2 / (d^{-1/2} ||x||_4^2) }
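
The effect of a single HD round on ||x||_4 is easy to check numerically; below x is deliberately spiky and H is the normalized Hadamard transform (materialized as a full matrix only for brevity):

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(3)
d = 1024
H = hadamard(d) / np.sqrt(d)                 # normalized (orthogonal) Hadamard;
                                             # applicable in O(d log d) via a fast WHT

x = np.zeros(d); x[:2] = 1 / np.sqrt(2)      # spiky unit vector: ||x||_4 ~ 0.84
signs = rng.choice([-1.0, 1.0], size=d)      # random-sign diagonal D
y = H @ (signs * x)                          # one HD round

print(np.linalg.norm(x, 4), np.linalg.norm(y, 4), d ** -0.25)
# ||HDx||_4 lands within a small constant of the d^{-1/4} target
```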

Controlling ||x||_4^2
How to get ||x||_4^2 = O(d^{-1/2})?
  Pr[ ||HDx||_4 > d^{-1/4} + t ] = exp{ -t^2 / (d^{-1/2} ||x||_4^2) }
Need some slack: take k = d^{1/2-δ}. Maximum error probability for the challenge: exp{-k}.
Setting k = t^2 / (d^{-1/2} ||x||_4^2) gives t = k^{1/2} d^{-1/4} ||x||_4 = ||x||_4 d^{-δ/2}.

Controlling ||x||_4^2
How to get ||x||_4^2 = O(d^{-1/2})?
  First round:  ||HDx||_4 < d^{-1/4} + ||x||_4 d^{-δ/2}
  Second round: ||HD'HDx||_4 < d^{-1/4} + d^{-1/4-δ/2} + ||x||_4 d^{-δ}
  ...
  r = O(1/δ)-th round: ||HD^{(r)} ··· HDx||_4 < O(r) d^{-1/4}
with probability 1 - O(exp{-k}) by a union bound.

Algorithm for k = d^{1/2-δ}
  Φ = B D H D^{(r)} ··· H D^{(1)}
[diagram: x ∈ R^d → D^{(1)} → H → ··· → D^{(r)} → H → D → B → Φx ∈ R^k]
Running time O(d log d), randomness O(d).
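
A minimal, unoptimized sketch of the whole pipeline Φ = B D H D^{(r)} ··· H D^{(1)}; the number of rounds r is a placeholder and B is again the random-Hadamard-rows stand-in rather than a true dual-BCH matrix:

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(4)
d, r = 1024, 3                                     # r = O(1/delta) rounds; 3 is arbitrary
k = int(np.sqrt(d))

H = hadamard(d) / np.sqrt(d)                       # normalized Hadamard (fast WHT in practice)
signs = rng.choice([-1.0, 1.0], size=(r + 1, d))   # D^(1), ..., D^(r) and the final D
rows = rng.choice(d, size=k, replace=False)        # stand-in for the dual-BCH row set

def phi(x):
    """Apply x -> D^(1) -> H -> ... -> D^(r) -> H -> D -> B."""
    y = x
    for i in range(r):
        y = H @ (signs[i] * y)                     # one HD round; drives ||y||_4 toward d^{-1/4}
    y = signs[r] * y                               # final random-sign diagonal D
    return (H[rows] @ y) * np.sqrt(d / k)          # B: chosen rows, rescaled to entries +/- k^{-1/2}

x = rng.standard_normal(d); x /= np.linalg.norm(x)
print(np.linalg.norm(phi(x)))                      # concentrated around 1
```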

Open Problems
- Go beyond k = d^{1/2}. Conjecture: can do O(d log d) for k = d^{1-δ}.
- Approximate linear ℓ_2 regression: minimize ||Ax - b||_2 given A, b (overdetermined). State of the art for general inputs: Õ(linear time) for #{variables} < #{equations}^{1/3}. Conjecture: can do Õ(linear time) always.
- What is the best you can do in linear time? [A, Liberty, Singer 08]
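
For the regression item, the generic sketch-and-solve recipe looks as follows (with a plain dense Gaussian sketch S and arbitrary sizes); replacing S by a fast JL transform is exactly what would push this toward Õ(linear) time:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 10000, 50                                   # overdetermined: n >> p
A = rng.standard_normal((n, p))
b = A @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)

m = 800                                            # sketch size, illustrative
S = rng.standard_normal((m, n)) / np.sqrt(m)       # dense Gaussian sketch; a fast transform
                                                   # in its place is the point of the conjecture
x_sketched = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]

# Residual of the sketched solution relative to the optimum (close to 1).
print(np.linalg.norm(A @ x_sketched - b) / np.linalg.norm(A @ x_exact - b))
```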

Open Problem Worthy of Its Own Slide
Prove that JL onto k = d^{1/3} (say) with distortion ε = 1/4 (say) requires Ω(d log d) time. This would establish a similar lower bound for the FFT. A long-standing dormant open problem.