Sample Optimal Fourier Sampling in Any Constant Dimension


Slide 1: Sample Optimal Fourier Sampling in Any Constant Dimension. Piotr Indyk (MIT), Michael Kapralov (IBM Watson). October 21, 2014.

Slide 2: Fourier Transform and Sparsity. Discrete Fourier Transform: given $x \in \mathbb{C}^n$, compute $\hat{x}_i = \sum_{j \in [n]} x_j \omega^{ij}$, where $\omega$ is the $n$-th root of unity. Assume $n$ is a power of 2. The DFT is a fundamental tool: compression (image, audio, video), signal processing, data analysis, medical imaging (MRI, NMR).
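As a sanity check on notation, here is a minimal numpy sketch of this definition (an illustration, not part of the talk), assuming the sign convention $\omega = e^{-2\pi i/n}$ so that it agrees with `np.fft.fft`:

```python
import numpy as np

def dft(x):
    """Naive O(n^2) DFT: x_hat[i] = sum_j x[j] * omega**(i*j)."""
    n = len(x)
    j = np.arange(n)
    omega = np.exp(-2j * np.pi / n)      # assumed sign convention
    F = omega ** np.outer(j, j)          # DFT matrix, entry (i, j) = omega^(ij)
    return F @ x

x = np.random.randn(8) + 1j * np.random.randn(8)
assert np.allclose(dft(x), np.fft.fft(x))   # the FFT computes the same in O(n log n)
```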

Slides 3-4: Sparse Fourier Transform. The fast algorithm for the DFT is the FFT, which runs in $O(n \log n)$ time; improving on the FFT runtime in full generality is a major open problem. However, most interesting signals are sparse (have few nonzero entries) or approximately sparse in the Fourier domain ($k$-sparse = at most $k$ non-zeros). Hassanieh-Indyk-Katabi-Price'12 compute an approximate sparse FFT in $O(k \log n \log(n/k))$ time.

Slides 5-6: Sample complexity. Sample complexity = the number of samples accessed in the time domain. In some applications it is at least as important as runtime (Shi-Andronesi-Hassanieh-Ghazi-Katabi-Adalsteinsson, ISMRM'13). Problem: given access to $x \in \mathbb{C}^n$, find $\hat{y}$ such that $\|\hat{x} - \hat{y}\|_2 \le C \min_{k\text{-sparse } \hat{z}} \|\hat{x} - \hat{z}\|_2$, using the smallest possible number of samples.

Slides 7-8: Prior work.
Uniform bounds (for all): Candes-Tao'06, Rudelson-Vershynin'08, Candes-Plan'10, Cheraghchi-Guruswami-Velingker'12. Deterministic; $\Omega(n)$ runtime; $O(k \log^3 k \log n)$ samples.
Non-uniform bounds (for each): Goldreich-Levin'89, Kushilevitz-Mansour'91, Mansour'92, Gilbert-Guha-Indyk-Muthukrishnan-Strauss'02, Gilbert-Muthukrishnan-Strauss'05, Hassanieh-Indyk-Katabi-Price'12a, Hassanieh-Indyk-Katabi-Price'12b, Indyk-K.-Price'14. Randomized; $O(k\,\mathrm{poly}(\log n))$ runtime; $O(k \log n (\log\log n)^C)$ samples.
Lower bound: $\Omega(k \log(n/k))$ for non-adaptive algorithms (Do Ba-Indyk-Price-Woodruff'10).
Theorem: There exists an algorithm for $\ell_2/\ell_2$ sparse recovery from Fourier measurements using $O(k \log n)$ samples and $O(n \log^3 n)$ runtime. This is optimal up to constant factors for $k \le n^{1-\delta}$, and it is the first sample-optimal algorithm even if exponential runtime is allowed.

Slide 9: Higher-dimensional Fourier transforms are needed in some applications. Given $x \in \mathbb{C}^{[n]^d}$ with $N = n^d$, compute $\hat{x}_j = \frac{1}{\sqrt{N}} \sum_{i \in [n]^d} \omega^{i^\top j} x_i$ and $x_j = \frac{1}{\sqrt{N}} \sum_{i \in [n]^d} \omega^{-i^\top j} \hat{x}_i$, where $\omega$ is the $n$-th root of unity and $n$ is a power of 2.
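A quick numpy illustration of the $d$-dimensional transform pair (an aside; `norm="ortho"` supplies the $1/\sqrt{N}$ normalization written above):

```python
import numpy as np

d, n = 3, 16                                   # N = n**d = 4096
x = np.random.randn(*([n] * d)) + 1j * np.random.randn(*([n] * d))
x_hat = np.fft.fftn(x, norm="ortho")           # forward d-dimensional DFT
assert np.allclose(np.fft.ifftn(x_hat, norm="ortho"), x)   # inverse recovers x
```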

Slides 10-12: Previous sample complexity bounds in dimension $d$: $O(k \log^d N)$ for sublinear time algorithms (runtime $k \log^{O(d)} N$, for each); $O(k \log^4 N)$ for any $d$ ($\Omega(N)$ time, for all).
Our result. Theorem: There exists an algorithm for $\ell_2/\ell_2$ sparse recovery from Fourier measurements using $O_d(k \log N)$ samples and $O(N \log^3 N)$ runtime. It is sample-optimal up to constant factors for any constant $d$, and is the first such algorithm.
Also: sublinear time recovery, but with a $(\log\log n)^2$ loss in sample complexity (extending Indyk-K.-Price'14 to higher dimensions).

Slide 13: Outline of talk:
1. The $\ell_2/\ell_2$ sparse recovery guarantee
2. Summary of techniques for recovery from Fourier measurements
3. Sample-optimal algorithm in $O(N \log^3 N)$ time for $d = 1$

Slides 14-17: $\ell_2/\ell_2$ sparse recovery guarantee: $\|\hat{x} - \hat{y}\|_2 \le C \cdot \mathrm{Err}_k(\hat{x})$, where, with coefficients sorted by magnitude so that $\hat{x}_1, \ldots, \hat{x}_k$ form the head and $\hat{x}_{k+1}, \hat{x}_{k+2}, \ldots$ the tail, $\mathrm{Err}_k^2(\hat{x}) = \sum_{j=k+1}^n |\hat{x}_j|^2$. In words: the residual error is bounded by the noise (tail) energy $\mathrm{Err}_k(\hat{x})$.
[Figure: sorted spectrum split into head and tail; the threshold $\mu \approx$ tail noise$/\sqrt{k}$.]
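The tail-energy quantity is simple enough to state in code; the following sketch (the helper name `err_k` is mine) sorts coefficients by magnitude and measures the energy outside the top $k$:

```python
import numpy as np

def err_k(x_hat, k):
    """Err_k(x_hat): l2 energy of everything outside the top-k coefficients."""
    tail = np.sort(np.abs(x_hat))[::-1][k:]    # magnitudes below the head
    return np.sqrt(np.sum(tail ** 2))

x_hat = np.zeros(64, dtype=complex)
x_hat[[3, 17, 42]] = [5.0, -2.0j, 1.0 + 1.0j]  # an exactly 3-sparse spectrum
print(err_k(x_hat, 3))   # 0.0: the guarantee then forces exact recovery
```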

Slide 18: Recovery from Fourier measurements: iterative refinement. Many algorithms use the following scheme (Gilbert-Guha-Indyk-Muthukrishnan-Strauss'02, Akavia-Goldwasser-Safra'03, Gilbert-Muthukrishnan-Strauss'05, Iwen'10, Akavia'10, Hassanieh-Indyk-Katabi-Price'12a, Hassanieh-Indyk-Katabi-Price'12b):
Input: $x \in \mathbb{C}^n$
$\hat{y}^{(0)} \leftarrow 0$
For $t = 1$ to $L$:
  $\hat{z} \leftarrow$ REFINEMENT$(x - y^{(t-1)})$   (takes random samples of $x - y$)
  $\hat{y}^{(t)} \leftarrow \hat{y}^{(t-1)} + \hat{z}$
where REFINEMENT$(x)$ returns the dominant Fourier coefficients $\hat{z}$ of $\hat{x}$ (approximately).
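A schematic Python rendering of this loop follows; `refinement` below is an idealized stand-in that reads the exact top-$k$ of the residual's spectrum, whereas the real subroutine approximates it from few time-domain samples:

```python
import numpy as np

def refinement(residual, k):
    """Idealized stand-in: exact top-k coefficients of the residual's spectrum."""
    r_hat = np.fft.fft(residual)
    z_hat = np.zeros_like(r_hat)
    top = np.argsort(np.abs(r_hat))[-k:]
    z_hat[top] = r_hat[top]
    return z_hat

def iterative_refinement(x, k, L=10):
    y_hat = np.zeros(len(x), dtype=complex)     # current estimate of x_hat
    for _ in range(L):
        residual = x - np.fft.ifft(y_hat)       # time-domain residual x - y
        y_hat += refinement(residual, k)        # add back dominant coefficients
    return y_hat
```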

Slide 19: Recovery from Fourier measurements. [Figure: the signal's magnitude in the time domain (left) and in the frequency domain (right).]

Slides 20-21: Task: approximate the top $k$ coefficients of $\hat{x}$ using few samples. Natural idea: look at the value of the signal on the first $O(k)$ points.
[Figure: time-domain samples restricted to a prefix, and the resulting spectrum.]

Slides 22-24: Restricting to the first $O(k)$ points convolves the spectrum with a sinc function: $\widehat{x \cdot G} = \hat{x} * \hat{G}$. Concretely, $\widehat{(G \cdot x)}_f = \sum_{f' \in [n]} \hat{x}_{f'} \hat{G}_{f - f'}$, which separates into the desired coefficient plus leakage: $\widehat{(G \cdot x)}_f = \hat{x}_f + \sum_{f' \in [n],\, f' \ne f} \hat{x}_{f'} \hat{G}_{f - f'}$.
[Figure: boxcar window in time and the resulting sinc-shaped leakage in frequency.]
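The identity $\widehat{x \cdot G} = \hat{x} * \hat{G}$ is just the convolution theorem; here is a small numerical check (sizes illustrative; the $1/n$ factor appears because `np.fft.fft` is unnormalized):

```python
import numpy as np

n, B = 256, 16
x = np.random.randn(n)
G = np.zeros(n)
G[:B] = 1.0                                     # boxcar: keep the first B samples

lhs = np.fft.fft(x * G)                         # spectrum of the windowed signal

# Circular convolution of x_hat with G_hat (G_hat is a Dirichlet/"sinc" kernel):
x_hat, G_hat = np.fft.fft(x), np.fft.fft(G)
idx = (np.arange(n)[:, None] - np.arange(n)[None, :]) % n
rhs = (G_hat[idx] @ x_hat) / n                  # entry f: sum_f' x_hat[f'] G_hat[f-f'] / n

assert np.allclose(lhs, rhs)
```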

Slide 25: REFINEMENT$(x)$ returns the dominant Fourier coefficients of $\hat{x}$ (approximately). Take $M = C \log n$ independent measurements: $y^{(j)} \leftarrow (P_{\sigma_j, a_j, q_j} x) \cdot G$, where $P_{\sigma,a,q}$ is a random permutation/modulation of the signal. Estimate each $f \in [n]$ as $\hat{w}_f \leftarrow \mathrm{median}\{\tilde{y}^{(1)}_f, \ldots, \tilde{y}^{(M)}_f\}$. Sample complexity = (filter support) $\times \log n$.
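A sketch of what one randomized measurement and the median estimator might look like. The exact form of $P_{\sigma,a,q}$ is not spelled out on the slide, so the permutation below (time dilation by an odd $\sigma$ plus a shift $a$, which permutes frequencies $f \mapsto \sigma f \bmod n$ and rotates phases) is an assumption chosen to be easy to invert:

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(x, G):
    """One measurement y = (P_{sigma,a} x) * G; returns its spectrum and keys."""
    n = len(x)
    sigma = 2 * rng.integers(0, n // 2) + 1        # odd => invertible mod n (n a power of 2)
    a = rng.integers(0, n)
    y = x[(sigma * np.arange(n) + a) % n] * G      # permute/shift, then window
    return np.fft.fft(y), sigma, a

def median_estimate(x, G, M):
    """Median over M independent measurements, aligned per frequency f."""
    n, f = len(x), np.arange(len(x))
    est = np.empty((M, n), dtype=complex)
    for m in range(M):
        y_hat, sigma, a = measure(x, G)
        # y_hat[(sigma*f) % n] = x_hat[f] * exp(2*pi*i*a*f/n) (+ filter leakage)
        est[m] = y_hat[(sigma * f) % n] * np.exp(-2j * np.pi * a * f / n)
    return np.median(est.real, axis=0) + 1j * np.median(est.imag, axis=0)
```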

Slide 26: [Figure: permuted spectrum hashed into filter buckets.] This is like hashing heavy hitters into buckets (COUNTSKETCH, COUNTMIN), but the buckets leak.

Slides 27-30: Sample-optimal algorithm. Recall the iterative refinement scheme above, which takes random samples of $x - y$ in each step. In most prior works the sampling complexity is (samples per REFINEMENT step) $\times$ (number of iterations) = (filter support) $\times \log n \times$ (number of iterations). There has been a lot of work on carefully choosing filters and reducing the number of iterations (Hassanieh-Indyk-Katabi-Price'12, Ghazi-Hassanieh-Indyk-Katabi-Price-Shi'13, Indyk-K.-Price'14), but these still lose $\Omega(\log\log n)$ in sample complexity (the number of iterations), and $\Omega((\log n)^{d-1} \log\log n)$ in higher dimensions.

Slides 31-35: Our sampling complexity is just the number of samples per REFINEMENT step, i.e. (filter support) $\times \log n$, with no multiplication by the number of iterations: we do not use fresh randomness in each iteration! Reusing randomness across iterations is challenging in general; only one paper (Bayati-Montanari'11) gives provable guarantees, and only for Gaussian measurements. A further benefit: we can use very simple filters, (essentially) the boxcar filter.

Slides 36-42: Sample-optimal algorithm. [Figure: boxcar filter $G$ hashing the spectrum into buckets $B$; animation frames show the threshold $\mu$ decreasing.]
Let $y^{(m)} \leftarrow (P_m x) \cdot G$ for $m = 0, \ldots, M = C \log n$   (take samples of $x$)
$\hat{z}^{(0)} \leftarrow 0$
For $t = 1, \ldots, T = O(\log n)$:   (loop over thresholds)
  For $f \in [n]$:   (estimate, prune small elements)
    $\hat{w}_f \leftarrow \mathrm{median}\{\tilde{y}^{(1)}_f, \ldots, \tilde{y}^{(M)}_f\}$
    If $|\hat{w}_f| < 2^{T-t}\mu/3$ then $\hat{w}_f \leftarrow 0$
  $\hat{z}^{(t+1)} \leftarrow \hat{z}^{(t)} + \hat{w}$
  $y^{(m)} \leftarrow y^{(m)} - (P_m w) \cdot G$ for $m = 1, \ldots, M$   (update samples)
Main challenge: lack of fresh randomness. Why does the median work?
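A schematic rendering of the loop, with the bucketed median estimates replaced by an exact FFT of the residual for readability; the structure (estimate, prune below $2^{T-t}\mu/3$, subtract, repeat) is the point, and the helper name and defaults are mine:

```python
import numpy as np

def threshold_loop(x, mu, T=None):
    """Estimate, prune below a geometrically decreasing threshold, subtract."""
    n = len(x)
    T = T or int(np.ceil(np.log2(n)))
    z_hat = np.zeros(n, dtype=complex)               # running estimate of x_hat
    y = x.astype(complex)                            # residual "samples"
    for t in range(1, T + 1):
        w_hat = np.fft.fft(y)                        # stand-in for median estimates
        w_hat[np.abs(w_hat) < 2 ** (T - t) * mu / 3] = 0   # prune small elements
        z_hat += w_hat                               # keep surviving coefficients
        y = y - np.fft.ifft(w_hat)                   # update samples: subtract w
    return z_hat
```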

Slides 43-44: Let $S$ denote the set of heavy hitters: $S = \{ i \in [n] : |\hat{x}_i| > \mu \}$. There cannot be too many of them: $|S| = O(k)$. [Figure: head/tail of the sorted spectrum.]
Main invariant: never modify $\hat{x}$ outside of $S$. We prove that the samples taken have error-correcting properties with respect to the set $S$ of head elements: the set of head elements never changes, only their values do.

Slide 45: Experimental evaluation. Problem: recover the support of a random $k$-sparse signal from Fourier measurements. Parameters: $n = 2^{15}$, $k = 10, 20, \ldots, 100$. Filter: boxcar filter with support $k + 1$.

Slide 46: Comparison to $\ell_1$-minimization (SPGL1), which has $O(k \log^3 k \log n)$ sample complexity and requires solving an LP. [Figure: number of measurements vs. sparsity for $n = 32768$: $\ell_1$ minimization vs. our algorithm with $B = k$, random phase, non-monotone.] Our algorithm is within a factor of 2 of $\ell_1$ minimization.

Slide 47: Open questions: $O(k \log n)$ samples in $O(k \log^{O(1)} n)$ time? Thank you!
