Dimensionality Reduction Notes 3


Jelani Nelson
minilek@seas.harvard.edu

August 13, 2015

1 Gordon's theorem

Let $T$ be a finite subset of some normed vector space with norm $\|\cdot\|_X$. We say that a sequence of subsets $T_0 \subseteq T_1 \subseteq \cdots \subseteq T$ is admissible if $|T_0| = 1$, $|T_r| \le 2^{2^r}$ for all $r \ge 1$, and $T_r = T$ for all $r \ge r_0$ for some $r_0$. We define the $\gamma_2$-functional

$$\gamma_2(T, \|\cdot\|_X) = \inf \sup_{x \in T} \sum_{r=0}^{\infty} 2^{r/2}\, d_X(x, T_r),$$

where the inf is taken over all admissible sequences. We also let $d_X(T)$ denote the diameter of $T$ with respect to the norm $\|\cdot\|_X$. For the remainder of this section we make the definitions $\pi_r(x) = \mathrm{argmin}_{y \in T_r} \|y - x\|_X$ and $\Delta_r(x) = \pi_r(x) - \pi_{r-1}(x)$.

Throughout this section we let $\|\cdot\|$ denote the $\ell_2 \to \ell_2$ operator norm in the case of matrix arguments, and the $\ell_2$ norm in the case of vector arguments.

Krahmer, Mendelson, and Rauhut showed the following theorem [KMR14].

Theorem 1. Let $\mathcal{A} \subset \mathbb{R}^{m \times n}$ be arbitrary. Let $\varepsilon_1, \ldots, \varepsilon_n$ be independent $\pm 1$ random variables. Then

$$\mathbb{E}_{\varepsilon} \sup_{A \in \mathcal{A}} \Big| \|A\varepsilon\|^2 - \mathbb{E}\|A\varepsilon\|^2 \Big| \lesssim \gamma_2^2(\mathcal{A}, \|\cdot\|) + \gamma_2(\mathcal{A}, \|\cdot\|)\, d_F(\mathcal{A}) + d_F(\mathcal{A})\, d_{\ell_2 \to \ell_2}(\mathcal{A}).$$

The KMR theorem was actually more general: the Rademacher variables can be replaced by subgaussian random variables. We present just the proof of the Rademacher case.
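To make the quantity bounded by Theorem 1 concrete, the following Monte Carlo sketch (not part of the notes; the family $\mathcal{A}$, the dimensions, and the trial count are arbitrary illustrative choices) estimates $\mathbb{E}_\varepsilon \sup_{A \in \mathcal{A}} |\|A\varepsilon\|^2 - \mathbb{E}\|A\varepsilon\|^2|$ for a small finite family, using the fact that $\mathbb{E}\|A\varepsilon\|^2 = \|A\|_F^2$ for a Rademacher vector $\varepsilon$.

```python
import numpy as np

# Toy illustration of the chaos process in Theorem 1: estimate
# E sup_{A in A} | ||A eps||^2 - E||A eps||^2 | by Monte Carlo.
# Note E||A eps||^2 = trace(A^T A) = ||A||_F^2 for Rademacher eps.
rng = np.random.default_rng(0)
m, n, num_matrices, trials = 10, 50, 20, 2000

family = [rng.standard_normal((m, n)) / np.sqrt(n) for _ in range(num_matrices)]
frob_sq = [np.linalg.norm(A, "fro") ** 2 for A in family]

total = 0.0
for _ in range(trials):
    eps = rng.choice([-1.0, 1.0], size=n)  # Rademacher vector
    total += max(abs(np.linalg.norm(A @ eps) ** 2 - f)
                 for A, f in zip(family, frob_sq))
estimate = total / trials
print(f"Monte Carlo estimate of E sup | ||A eps||^2 - ||A||_F^2 |: {estimate:.3f}")
```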

Proof. Without loss of generality we can assume $\mathcal{A}$ is finite (else apply the theorem to a sufficiently fine net, i.e. fine in $\ell_2 \to \ell_2$ operator norm). Define

$$E = \mathbb{E}_{\varepsilon} \sup_{A \in \mathcal{A}} \Big| \|A\varepsilon\|^2 - \mathbb{E}\|A\varepsilon\|^2 \Big|$$

and let $A_i$ denote the $i$th column of $A$. Since $\|A\varepsilon\|^2 - \mathbb{E}\|A\varepsilon\|^2 = \sum_{i \ne j} \varepsilon_i \varepsilon_j \langle A_i, A_j \rangle$, by decoupling

$$E = \mathbb{E}_{\varepsilon} \sup_{A \in \mathcal{A}} \Big| \sum_{i \ne j} \varepsilon_i \varepsilon_j \langle A_i, A_j \rangle \Big| \le 4\, \mathbb{E}_{\varepsilon, \varepsilon'} \sup_{A \in \mathcal{A}} \Big| \sum_{i, j} \varepsilon_i \varepsilon'_j \langle A_i, A_j \rangle \Big| = 4\, \mathbb{E}_{\varepsilon, \varepsilon'} \sup_{A \in \mathcal{A}} |\langle A\varepsilon, A\varepsilon' \rangle|.$$

Let $\{T_r\}_{r=0}^{\infty}$ be admissible for $\mathcal{A}$. Direct computation shows

$$\langle A\varepsilon, A\varepsilon' \rangle = \langle \pi_0(A)\varepsilon, \pi_0(A)\varepsilon' \rangle + \sum_{r=1}^{\infty} \Big[ \underbrace{\langle \Delta_r(A)\varepsilon, \pi_{r-1}(A)\varepsilon' \rangle}_{X_r(A)} + \underbrace{\langle \pi_r(A)\varepsilon, \Delta_r(A)\varepsilon' \rangle}_{Y_r(A)} \Big].$$

We have $T_0 = \{A_0\}$ for some $A_0 \in \mathcal{A}$. Thus $\mathbb{E}_{\varepsilon, \varepsilon'} |\langle \pi_0(A)\varepsilon, \pi_0(A)\varepsilon' \rangle|$ equals

$$\mathbb{E}_{\varepsilon, \varepsilon'} |(\varepsilon')^\top A_0^\top A_0 \varepsilon| \le \Big( \mathbb{E}_{\varepsilon, \varepsilon'} \big( (\varepsilon')^\top A_0^\top A_0 \varepsilon \big)^2 \Big)^{1/2} = \|A_0^\top A_0\|_F \le \|A_0\|_F\, \|A_0\| \le d_F(\mathcal{A})\, d_{\ell_2 \to \ell_2}(\mathcal{A}).$$

Thus,

$$\mathbb{E}_{\varepsilon, \varepsilon'} \sup_{A} |\langle A\varepsilon, A\varepsilon' \rangle| \le d_F(\mathcal{A})\, d_{\ell_2 \to \ell_2}(\mathcal{A}) + \mathbb{E}_{\varepsilon, \varepsilon'} \sup_{A} \Big| \sum_r X_r(A) \Big| + \mathbb{E}_{\varepsilon, \varepsilon'} \sup_{A} \Big| \sum_r Y_r(A) \Big|.$$

We focus on the second summand; handling the third summand is similar. Note

$$X_r(A) = \langle \Delta_r(A)\varepsilon, \pi_{r-1}(A)\varepsilon' \rangle = \langle \varepsilon, (\Delta_r(A))^\top \pi_{r-1}(A)\varepsilon' \rangle.$$

Thus, by Khintchine's inequality,

$$\mathbb{P}\Big( |X_r(A)| > t\, 2^{r/2}\, \|(\Delta_r(A))^\top \pi_{r-1}(A)\varepsilon'\| \Big) \lesssim e^{-t^2 2^r / 2}.$$

Let $\mathcal{E}(A)$ be the event that for all $r \ge 1$ simultaneously,

$$|X_r(A)| \le t\, 2^{r/2}\, \|\Delta_r(A)\| \cdot \sup_{A' \in \mathcal{A}} \|A'\varepsilon'\|.$$

Then by a union bound,

$$\mathbb{P}\Big( \exists A \in \mathcal{A} \text{ s.t. } \mathcal{E}(A) \text{ fails} \Big) \lesssim \sum_{r=1}^{\infty} |T_r| \cdot |T_{r-1}| \cdot e^{-t^2 2^r / 2} \le \sum_{r=1}^{\infty} 2^{2^{r+1}} e^{-t^2 2^r / 2},$$

and the right-hand side decays rapidly in $t$ once $t$ exceeds a constant. Therefore, writing

$$\mathbb{E}_{\varepsilon} \sup_{A} \Big| \sum_r X_r(A) \Big| = \int_0^{\infty} \mathbb{P}_{\varepsilon}\Big( \sup_A \Big| \sum_r X_r(A) \Big| > u \Big)\, du$$

and changing variables $u = t \cdot \big( \sup_A \sum_r 2^{r/2} \|\Delta_r(A)\| \big) \cdot \sup_A \|A\varepsilon'\|$, we obtain for fixed $\varepsilon'$

$$\mathbb{E}_{\varepsilon} \sup_{A} \Big| \sum_r X_r(A) \Big| \lesssim \Big( \sup_{A} \sum_r 2^{r/2} \|\Delta_r(A)\| \Big) \cdot \sup_{A} \|A\varepsilon'\|.$$

Since $\|\Delta_r(A)\| \le d_{\ell_2 \to \ell_2}(A, T_{r-1}) + d_{\ell_2 \to \ell_2}(A, T_r)$ via the triangle inequality (here $d_{\ell_2 \to \ell_2}(A, T_r)$ is the operator-norm distance from $A$ to $T_r$), we have $\sum_r 2^{r/2} \|\Delta_r(A)\| \lesssim \sum_r 2^{r/2}\, d_{\ell_2 \to \ell_2}(A, T_r)$. Choosing the admissible sequence $T_0 \subseteq T_1 \subseteq \cdots$ to minimize the above expression, and combining with the $\pi_0$ term and the symmetric bound for the $Y_r$ sum,

$$E \lesssim d_F(\mathcal{A})\, d_{\ell_2 \to \ell_2}(\mathcal{A}) + \gamma_2(\mathcal{A}, \|\cdot\|) \cdot \mathbb{E}_{\varepsilon} \sup_{A \in \mathcal{A}} \|A\varepsilon\|.$$

Now observe

$$\mathbb{E}_{\varepsilon} \sup_A \|A\varepsilon\| \le \Big( \mathbb{E}_{\varepsilon} \sup_A \|A\varepsilon\|^2 \Big)^{1/2} \le \Big( \mathbb{E}_{\varepsilon} \sup_A \big| \|A\varepsilon\|^2 - \mathbb{E}_{\varepsilon}\|A\varepsilon\|^2 \big| + \sup_A \|A\|_F^2 \Big)^{1/2} \le E^{1/2} + d_F(\mathcal{A}).$$

Thus in summary,

$$E \lesssim d_F(\mathcal{A})\, d_{\ell_2 \to \ell_2}(\mathcal{A}) + \gamma_2(\mathcal{A}, \|\cdot\|) \cdot \big( E^{1/2} + d_F(\mathcal{A}) \big).$$

This implies $E$ is at most the square of the larger root of the associated quadratic equation in $E^{1/2}$, which gives the theorem. $\square$
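The tail bound labeled Khintchine in the proof is, up to constants, the Hoeffding bound $\mathbb{P}(|\langle \varepsilon, a \rangle| > t \|a\|) \le 2e^{-t^2/2}$ for a Rademacher vector $\varepsilon$ and a fixed vector $a$. A minimal empirical check (not from the notes; sizes are arbitrary illustrative choices):

```python
import numpy as np

# Empirical check of the subgaussian tail used in the proof:
# for Rademacher eps and fixed a, P(|<eps, a>| > t ||a||) <= 2 exp(-t^2 / 2).
rng = np.random.default_rng(1)
n, trials = 200, 20_000
a = rng.standard_normal(n)
a /= np.linalg.norm(a)  # normalize so ||a|| = 1

samples = rng.choice([-1.0, 1.0], size=(trials, n)) @ a
for t in [1.0, 2.0, 3.0]:
    empirical = np.mean(np.abs(samples) > t)
    print(f"t={t}: empirical {empirical:.2e} <= bound {2 * np.exp(-t**2 / 2):.2e}")
```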

Using the KMR theorem, we can recover Gordon's theorem [Gor88] (see also [KM05, MPTJ07, Dir14]). We again only discuss the Rademacher case. Note that in metric JL, given a set of vectors $X$ we wish that

$$\forall x, y \in X,\quad \Big| \|\Pi(x - y)\|_2^2 - \|x - y\|_2^2 \Big| < \varepsilon \|x - y\|_2^2.$$

If we define

$$T = \Big\{ \frac{x - y}{\|x - y\|_2} : x, y \in X \Big\},$$

then it is equivalent to have

$$\sup_{x \in T} \Big| \|\Pi x\|_2^2 - 1 \Big| < \varepsilon.$$

Since $\Pi$ is random, we will demand that this holds in expectation:

$$\mathbb{E}_{\Pi} \sup_{x \in T} \Big| \|\Pi x\|_2^2 - 1 \Big| < \varepsilon. \qquad (1)$$

Theorem 2. Let $T \subset \mathbb{R}^n$ be a set of vectors each of unit norm, and let $\varepsilon \in (0, 1/2)$ be arbitrary. Let $\Pi \in \mathbb{R}^{m \times n}$ be such that $\Pi_{i,j} = \sigma_{i,j}/\sqrt{m}$ for independent Rademacher $\sigma_{i,j}$, and where $m = \Omega((\gamma_2^2(T, \|\cdot\|_2) + 1)/\varepsilon^2)$. Then $\mathbb{E} \sup_{x \in T} | \|\Pi x\|_2^2 - 1 | < \varepsilon$.

Proof. For $x \in T$ let $A_x$ denote the $m \times mn$ block-diagonal matrix

$$A_x = \frac{1}{\sqrt{m}} \begin{pmatrix} x_1 \cdots x_n & & & \\ & x_1 \cdots x_n & & \\ & & \ddots & \\ & & & x_1 \cdots x_n \end{pmatrix},$$

i.e. with $m$ blocks each equal to the row vector $x^\top$. (A quick numerical check of this construction is sketched below.)
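Before continuing the proof, here is a small numerical verification (not from the notes; sizes arbitrary) that $\|\Pi x\| = \|A_x \sigma\|$ when $\sigma \in \{-1, 1\}^{mn}$ stacks the rows of the sign matrix defining $\Pi$:

```python
import numpy as np
from scipy.linalg import block_diag

# Check that ||Pi x|| = ||A_x sigma|| for the block-diagonal A_x above,
# where sigma concatenates the rows of the Rademacher matrix defining Pi.
rng = np.random.default_rng(2)
m, n = 5, 8
x = rng.standard_normal(n)
x /= np.linalg.norm(x)  # unit-norm x, as in Theorem 2

sigma_matrix = rng.choice([-1.0, 1.0], size=(m, n))  # sign pattern of Pi
Pi = sigma_matrix / np.sqrt(m)
sigma = sigma_matrix.reshape(-1)  # sigma in {+-1}^{mn}, rows concatenated

A_x = block_diag(*[x.reshape(1, -1) for _ in range(m)]) / np.sqrt(m)  # m x mn
assert np.allclose(np.linalg.norm(Pi @ x), np.linalg.norm(A_x @ sigma))
print("||Pi x|| == ||A_x sigma|| verified")
```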

Then $\|\Pi x\|^2 = \|A_x \sigma\|^2$, where $\sigma \in \{-1, 1\}^{mn}$ concatenates the rows of $(\sigma_{i,j})$, so letting $\mathcal{A} = \{A_x : x \in T\}$,

$$\mathbb{E} \sup_{x \in T} \Big| \|\Pi x\|_2^2 - 1 \Big| = \mathbb{E} \sup_{A \in \mathcal{A}} \Big| \|A\sigma\|^2 - \mathbb{E}\|A\sigma\|^2 \Big|.$$

We have $d_F(\mathcal{A}) = 1$. Also $A_x^\top A_x$ is a block-diagonal matrix, with $m$ blocks each equal to $xx^\top/m$, and thus the singular values of $A_x$ are $0$ and $\|x\|_2/\sqrt{m} = 1/\sqrt{m}$, implying $d_{\ell_2 \to \ell_2}(\mathcal{A}) = 1/\sqrt{m}$. Similarly, since $A_x - A_y = A_{x-y}$, for any vectors $x, y$ we have $\|A_x - A_y\| = \|x - y\|_2/\sqrt{m}$, and thus $\gamma_2(\mathcal{A}, \|\cdot\|) = \gamma_2(T, \|\cdot\|_2)/\sqrt{m}$. Thus by the KMR theorem we have

$$\mathbb{E} \sup_{x \in T} \Big| \|\Pi x\|_2^2 - 1 \Big| \lesssim \frac{\gamma_2^2(T, \|\cdot\|_2)}{m} + \frac{\gamma_2(T, \|\cdot\|_2)}{\sqrt{m}} + \frac{1}{\sqrt{m}},$$

which is at most $\varepsilon$ for $m$ as in the theorem statement. $\square$

Gordon's theorem was actually stated differently in [Gor88], in two ways: (1) Gordon actually only analyzed the case of $\Pi$ having i.i.d. gaussian entries, and (2) the $\gamma_2(T, \|\cdot\|_2)$ terms in the theorem statement were written as the gaussian mean width $g(T) = \mathbb{E}_g \sup_{x \in T} \langle g, x \rangle$, where $g \in \mathbb{R}^n$ is a vector of i.i.d. standard normal random variables. For (1), the extension to arbitrary subgaussian random variables was shown first in [KM05]. Note the KMR theorem only bounds an expectation; thus if one wants to argue that the random variable in question is large with probability at most $\delta$, the most obvious way is Markov, which would introduce a poor $1/\delta^2$ dependence in $m$. One could remedy this by doing Markov on the $p$th moment; the tightest known $p$-norm bound is given in [Dir13, Theorem 6.5] (see also [Dir14, Theorem 4.8]). For (2), Gordon actually wrote his paper before $\gamma_2$ was even defined! The definition of $\gamma_2$ given here is due to Talagrand, who also showed that for all sets of vectors $T \subset \mathbb{R}^n$, $g(T) \asymp \gamma_2(T, \|\cdot\|_2)$ [Tal14]; this is known as the majorizing measures theorem. In fact the upper bound $g(T) \lesssim \gamma_2(T, \|\cdot\|_2)$ was shown by Fernique [Fer75] (although $\gamma_2$ was not defined at that point; Talagrand later recast this upper bound in terms of his newly defined $\gamma_2$-functional). We thus state the following corollary of the majorizing measures theorem and Theorem 2.

Corollary 1. Let $T \subset \mathbb{R}^n$ be a set of vectors each of unit norm, and let $\varepsilon \in (0, 1/2)$ be arbitrary. Let $\Pi \in \mathbb{R}^{m \times n}$ be such that $\Pi_{i,j} = \sigma_{i,j}/\sqrt{m}$ for independent Rademacher $\sigma_{i,j}$, and where $m = \Omega((g^2(T) + 1)/\varepsilon^2)$. Then $\mathbb{E} \sup_{x \in T} | \|\Pi x\|_2^2 - 1 | < \varepsilon$.
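As an illustration of Corollary 1 (not from the notes), one can estimate $g(T)$ by Monte Carlo for a finite $T$ and read off a suggested $m$. The constant hidden in the $\Omega(\cdot)$ is not specified by the corollary, so the constant $1$ used below is an arbitrary placeholder:

```python
import numpy as np

# Monte Carlo estimate of the gaussian mean width g(T) = E sup_{x in T} <g, x>
# for a finite set T of unit vectors, and the row count m ~ (g(T)^2 + 1)/eps^2
# suggested by Corollary 1 (hidden constant taken to be 1, arbitrarily).
rng = np.random.default_rng(3)
n, num_points, trials, eps = 100, 500, 2000, 0.25

T = rng.standard_normal((num_points, n))
T /= np.linalg.norm(T, axis=1, keepdims=True)  # unit-norm points

widths = [np.max(T @ rng.standard_normal(n)) for _ in range(trials)]
g_T = float(np.mean(widths))
m = int(np.ceil((g_T**2 + 1) / eps**2))
print(f"g(T) ~= {g_T:.2f}, suggested m ~= {m}")
```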

1.1 Application 1: numerical linear algebra

Consider, for example, the least squares regression problem: given $A \in \mathbb{R}^{n \times d}$ and $b \in \mathbb{R}^n$ with $n \gg d$, the goal is to compute

$$x^* = \mathop{\mathrm{argmin}}_{x \in \mathbb{R}^d} \|Ax - b\|_2. \qquad (2)$$

It is standard that $x^* = (A^\top A)^{-1} A^\top b$ when $A$ has full column rank. Unfortunately, naively computing $A^\top A$ takes time $\Theta(nd^2)$. We would like to speed this up. Given our lectures on dimensionality reduction, one natural question is the following: if instead we compute

$$\tilde{x} = \mathop{\mathrm{argmin}}_{x \in \mathbb{R}^d} \|\Pi A x - \Pi b\|_2$$

for some JL map $\Pi$ with few rows $m$, can we argue that $\tilde{x}$ is a good solution for (2)? The answer is yes; a small numerical sketch follows the proof below.

Theorem 3. Suppose (1) holds for $T$ the set of unit vectors in the subspace spanned by $b$ and the columns of $A$. Then

$$\|A\tilde{x} - b\|_2^2 \le \frac{1 + \varepsilon}{1 - \varepsilon}\, \|Ax^* - b\|_2^2.$$

Proof.

$$(1 - \varepsilon)\|A\tilde{x} - b\|_2^2 \le \|\Pi A \tilde{x} - \Pi b\|_2^2 \le \|\Pi A x^* - \Pi b\|_2^2 \le (1 + \varepsilon)\|Ax^* - b\|_2^2.$$

The first and third inequalities hold since $\Pi$ preserves the norms of $A\tilde{x} - b$ and $Ax^* - b$, both of which lie in the subspace defining $T$. The second inequality holds since $\tilde{x}$ is the optimal solution to the lower-dimensional regression problem. $\square$
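A minimal sketch-and-solve illustration of Theorem 3 (not from the notes; the problem sizes and the choice $m = 4d/\varepsilon^2$ are ad hoc assumptions, since the theorem only requires (1) to hold for the relevant subspace):

```python
import numpy as np

# Sketch-and-solve least squares: solve the sketched problem and compare the
# attained residual with the exact one, against the (1+eps)/(1-eps) bound.
rng = np.random.default_rng(4)
n, d, eps = 5000, 20, 0.5
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

m = int(4 * d / eps**2)  # ad hoc constant; Theorem 3 needs (1) for the subspace
Pi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # Rademacher JL map

x_star = np.linalg.lstsq(A, b, rcond=None)[0]             # exact solution
x_tilde = np.linalg.lstsq(Pi @ A, Pi @ b, rcond=None)[0]  # sketched solution

r_star = np.linalg.norm(A @ x_star - b)
r_tilde = np.linalg.norm(A @ x_tilde - b)
print(f"||A x* - b|| = {r_star:.4f}, ||A x~ - b|| = {r_tilde:.4f}, "
      f"ratio^2 = {(r_tilde / r_star)**2:.3f} (Theorem 3 bound: {(1 + eps) / (1 - eps):.3f})")
```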

Now we may ask ourselves: what is the number of rows $m$ needed to preserve the set $T$ in Theorem 3? We apply Corollary 1. Note $T$ is the set of unit vectors in a subspace of dimension $D \le d + 1$. By rotational symmetry of the gaussian, we can assume this subspace equals $\mathrm{span}\{e_1, \ldots, e_D\}$. Then

$$g(T) = \mathbb{E}_{g \in \mathbb{R}^D} \sup_{x \in T} \langle g, x \rangle = \mathbb{E}_{g \in \mathbb{R}^D} \|g\|_2 \le \Big( \mathbb{E}_{g \in \mathbb{R}^D} \|g\|_2^2 \Big)^{1/2} = \sqrt{D}.$$

Thus it suffices for $\Pi$ to have $m \gtrsim d/\varepsilon^2$ rows.

Unfortunately in the above, although solving the lower-dimensional regression problem is fast (since now $\Pi A$ has $O(d/\varepsilon^2)$ rows compared with the $n$ rows of $A$), multiplying $\Pi A$ using dense random $\Pi$ is actually slower than solving the original regression problem (2). This was remedied by Sarlós in [Sar06] by using a fast JL matrix as in Lecture 2; see [CNW15, Theorem 9] for the tightest analysis of this construction in this context. An alternative is to use a sparse $\Pi$. The first analysis of this approach was in [CW13]; the tightest known analyses are in [MM13, NN13, BDN15]. It is also the case that $\Pi$ can be used more efficiently to solve regression problems than simply requiring (1) for $T$ as above. See for example [CW13, Theorem 7.7] in the full version of that paper for an iterative algorithm based on such $\Pi$ whose running time dependence on $\varepsilon$ is $O(\log(1/\varepsilon))$, instead of the $\mathrm{poly}(1/\varepsilon)$ above. For further results on applying JL to problems in this domain, see the book [Woo14].

1.2 Application 2: compressed sensing

In compressed sensing, the goal is to approximately recover an approximately sparse signal $x \in \mathbb{R}^n$ from a few linear measurements. We will imagine that these $m$ linear measurements are organized as the rows of a matrix $\Pi \in \mathbb{R}^{m \times n}$. Let $T_k$ be the set of all $k$-sparse vectors in $\mathbb{R}^n$ of unit norm (i.e. the union of $\binom{n}{k}$ $k$-dimensional coordinate subspaces, intersected with the unit sphere). One always has

$$\gamma_2(T, \|\cdot\|) \le \inf_{\{T_r\}} \sum_r 2^{r/2} \sup_{x \in T} d_{\ell_2}(x, T_r),$$

i.e. the sup can be moved inside the sum to obtain an upper bound. Minimizing the right-hand side amounts to finding the best nets possible for $T$ of size bounded by $2^{2^r}$ for each $r$. By doing this, which we do not discuss here, one can show that for our $T_k$, $\gamma_2(T_k, \|\cdot\|_2) \lesssim \sqrt{k \log(n/k)}$, so that one can obtain (1) with $m \gtrsim k \log(n/k)/\varepsilon^2$. A more direct net argument can also yield this bound (see [BDDW08], which suffered an extra $\log(1/\varepsilon)$ factor, and the removal of this factor in [FR13, Theorem 9.12]). A Monte Carlo check of the mean width of $T_k$ is sketched below.
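The scaling $\sqrt{k \log(n/k)}$ can be sanity-checked through the gaussian mean width: for gaussian $g$, the supremum of $\langle g, x \rangle$ over unit-norm $k$-sparse $x$ equals the $\ell_2$ norm of the $k$ largest-magnitude coordinates of $g$. A Monte Carlo sketch (not from the notes; sizes illustrative, and agreement is only up to a constant factor):

```python
import numpy as np

# Estimate g(T_k) = E || top-k magnitudes of g ||_2 and compare with
# sqrt(k log(n/k)); the supremum over unit-norm k-sparse x is attained by
# putting all mass on the k largest-magnitude coordinates of g.
rng = np.random.default_rng(5)
n, k, trials = 10_000, 20, 500

vals = []
for _ in range(trials):
    g = rng.standard_normal(n)
    topk = np.sort(np.abs(g))[-k:]  # k largest magnitudes
    vals.append(np.linalg.norm(topk))
print(f"g(T_k) ~= {np.mean(vals):.2f}, "
      f"sqrt(k log(n/k)) = {np.sqrt(k * np.log(n / k)):.2f}")
```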

Now, any matrix $\Pi$ preserving this $T_k$ with distortion $1 + \varepsilon$ is known as having the $(k, \varepsilon)$-restricted isometry property (RIP) [CT06]. We are ready to state a theorem of [CT06, Don06]. One can find a short proof in [Can08]. A sketch of solving the recovery LP follows the theorem statement.

Theorem 4. Suppose $\Pi$ satisfies the $(2k, \sqrt{2} - 1)$-RIP. Then given $y = \Pi x$, if one solves the linear program

$$\min \|z\|_1 \quad \text{s.t.} \quad \Pi z = y,$$

then the optimal solution $\tilde{x}$ will satisfy

$$\|\tilde{x} - x\|_2 = O(1/\sqrt{k}) \cdot \inf_{\substack{w \in \mathbb{R}^n \\ |\mathrm{supp}(w)| \le k}} \|x - w\|_1.$$
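The LP in Theorem 4 (basis pursuit) can be solved with an off-the-shelf solver via the standard split $z = u - v$ with $u, v \ge 0$. A minimal sketch (not from the notes; sizes ad hoc, and a Gaussian $\Pi$ is used as a stand-in since such matrices are known to satisfy RIP with high probability for $m \gtrsim k \log(n/k)$):

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit: min ||z||_1 s.t. Pi z = y, rewritten as the LP
# min sum(u + v) s.t. Pi u - Pi v = y, u >= 0, v >= 0, with z = u - v.
rng = np.random.default_rng(6)
n, m, k = 200, 80, 5
Pi = rng.standard_normal((m, n)) / np.sqrt(m)

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)  # k-sparse signal
y = Pi @ x

c = np.ones(2 * n)           # objective: ||u||_1 + ||v||_1
A_eq = np.hstack([Pi, -Pi])  # Pi u - Pi v = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print(f"recovery error ||x_hat - x||_2 = {np.linalg.norm(x_hat - x):.2e}")
```

Since $x$ here is exactly $k$-sparse, the right-hand side of Theorem 4's bound is zero and recovery should be exact up to solver tolerance.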

References

[BDDW08] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin. A simple proof of the restricted isometry property for random matrices. Constr. Approx., 28(3):253-263, 2008.

[BDN15] Jean Bourgain, Sjoerd Dirksen, and Jelani Nelson. Toward a unified theory of sparse dimensionality reduction in Euclidean space. Geometric and Functional Analysis (GAFA), to appear. Preliminary version in STOC 2015.

[Can08] Emmanuel Candès. The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique, 346(9-10):589-592, 2008.

[CNW15] Michael B. Cohen, Jelani Nelson, and David P. Woodruff. Optimal approximate matrix product in terms of stable rank. CoRR, abs/1507.02268, 2015.

[CT06] Emmanuel J. Candès and Terence Tao. Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inform. Theory, 52(12):5406-5425, 2006.

[CW13] Kenneth L. Clarkson and David P. Woodruff. Low rank approximation and regression in input sparsity time. In Proceedings of the 45th ACM Symposium on Theory of Computing (STOC), pages 81-90, 2013. Full version at arXiv:1207.6365v4.

[Dir13] Sjoerd Dirksen. Tail bounds via generic chaining. CoRR, abs/1309.3522v2, 2013.

[Dir14] Sjoerd Dirksen. Dimensionality reduction with subgaussian matrices: a unified theory. CoRR, abs/1402.3973, 2014.

[Don06] D. Donoho. Compressed sensing. IEEE Trans. Inform. Theory, 52(4):1289-1306, 2006.

[Fer75] Xavier Fernique. Regularité des trajectoires des fonctions aléatoires gaussiennes. Lecture Notes in Math., 480:1-96, 1975.

[FR13] Simon Foucart and Holger Rauhut. A Mathematical Introduction to Compressive Sensing. Birkhäuser, Boston, 2013.

[Gor88] Yehoram Gordon. On Milman's inequality and random subspaces which escape through a mesh in R^n. Geometric Aspects of Functional Analysis, pages 84-106, 1988.

[KM05] Bo'az Klartag and Shahar Mendelson. Empirical processes and random projections. J. Funct. Anal., 225(1):229-245, 2005.

[KMR14] Felix Krahmer, Shahar Mendelson, and Holger Rauhut. Suprema of chaos processes and the restricted isometry property. Comm. Pure Appl. Math., 67(11):1877-1904, 2014.

[MM13] Xiangrui Meng and Michael W. Mahoney. Low-distortion subspace embeddings in input-sparsity time and applications to robust linear regression. In Proceedings of the 45th ACM Symposium on Theory of Computing (STOC), pages 91-100, 2013.

[MPTJ07] Shahar Mendelson, Alain Pajor, and Nicole Tomczak-Jaegermann. Reconstruction and subgaussian operators in asymptotic geometric analysis. Geometric and Functional Analysis, 17(4):1248-1282, 2007.

[NN13] Jelani Nelson and Huy L. Nguyễn. OSNAP: faster numerical linear algebra algorithms via sparser subspace embeddings. In Proceedings of the 54th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 117-126, 2013.

[Sar06] Tamás Sarlós. Improved approximation algorithms for large matrices via random projections. In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 143-152, 2006.

[Tal14] Michel Talagrand. Upper and lower bounds for stochastic processes: modern methods and classical problems. Springer, 2014.

[Woo14] David P. Woodruff. Sketching as a tool for numerical linear algebra. Foundations and Trends in Theoretical Computer Science, 10(1-2):1-157, 2014.
