An Adaptive Sublinear Time Block Sparse Fourier Transform


Volkan Cevher, Michael Kapralov, Jonathan Scarlett, Amir Zandieh (EPFL). February 8th, 2017.

Given $x \in \mathbb{C}^N$, compute the Discrete Fourier Transform (DFT) of $x$:

$\hat{x}_i = \frac{1}{N} \sum_{j \in [N]} x_j \omega^{ij}$,

where $\omega = e^{2\pi i/N}$ is the $N$-th root of unity. Assume that $N$ is a power of 2.

Applications: compression schemes (JPEG, MPEG), signal processing, data analysis, imaging (MRI, NMR).

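As a sanity check of the definition (our own illustration, not part of the talk), a direct $O(N^2)$ implementation is a few lines of NumPy; with this sign and $1/N$ convention the result coincides with NumPy's inverse FFT:

```python
import numpy as np

# Naive O(N^2) DFT exactly as defined above:
#   x_hat[i] = (1/N) * sum_j x[j] * omega**(i*j),  omega = exp(2j*pi/N).
def dft(x):
    N = len(x)
    omega = np.exp(2j * np.pi / N)
    return np.array([sum(x[j] * omega ** (i * j) for j in range(N)) / N
                     for i in range(N)])

x = np.random.randn(8) + 1j * np.random.randn(8)
# This convention (positive exponent, 1/N factor) matches np.fft.ifft.
assert np.allclose(dft(x), np.fft.ifft(x))
```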
Fast Fourier Transform (FFT): computes the Discrete Fourier Transform (DFT) of a length-$N$ signal in $O(N \log N)$ time (Cooley and Tukey, 1965; the idea goes back to Gauss, 1805).

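For completeness, here is a minimal radix-2 Cooley-Tukey recursion (our sketch, assuming $N$ is a power of 2 as above, and using the same convention as dft):

```python
import numpy as np

# Recursive radix-2 Cooley-Tukey FFT, O(N log N); positive-exponent convention,
# with the 1/N normalization applied once at the top level.
def fft_unnormalized(x):
    N = len(x)
    if N == 1:
        return x.astype(complex)
    even, odd = fft_unnormalized(x[0::2]), fft_unnormalized(x[1::2])
    twiddle = np.exp(2j * np.pi * np.arange(N // 2) / N)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

def fft(x):
    return fft_unnormalized(np.asarray(x)) / len(x)

x = np.random.randn(16) + 1j * np.random.randn(16)
assert np.allclose(fft(x), np.fft.ifft(x))
```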
Sparse FFT. Say that $\hat x$ is $k$-sparse if $\hat x$ has $k$ nonzero entries. Say that $\hat x$ is approximately $k$-sparse if $\hat x$ is close to $k$-sparse in some norm. [Figure: a signal in the time domain and its sparse spectrum in the frequency domain.]

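To make "close to $k$-sparse in some norm" concrete, here is a small helper (ours) measuring the $\ell_2$ distance from $\hat x$ to its best $k$-sparse approximation, obtained by keeping the $k$ largest-magnitude entries:

```python
import numpy as np

# l2 distance from x_hat to the nearest k-sparse vector: zero out all but the
# k largest-magnitude entries (the best k-term approximation in l2).
def tail_norm(x_hat, k):
    keep = np.argsort(np.abs(x_hat))[-k:]
    tail = x_hat.copy()
    tail[keep] = 0
    return np.linalg.norm(tail)

# Exactly k-sparse  <=>  tail_norm(x_hat, k) == 0.
```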
Sparse approximations. Image and video compression schemes (e.g. JPEG, MPEG) are built on sparse approximation. [Figure: a JPEG-compressed image.] Can we compute the top $k$ coefficients of $\hat x$ faster than the FFT?

Sample complexity = the number of samples accessed in the time domain. In medical imaging (MRI, NMR), one measures the Fourier coefficients $\hat x$ of the imaged object $x$ (which is often sparse).

Problem. Given access to the signal $x$ in the time domain, (approximately) compute the top $k$ coefficients of $\hat x$. Minimize runtime and sample complexity.

Formally, we want to find $\hat y$ such that

$\|\hat x - \hat y\|_2 \le (1 + \varepsilon) \min_{k\text{-sparse } \hat z} \|\hat x - \hat z\|_2$

(the $\ell_2/\ell_2$ sparse recovery guarantee).

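Since the minimum over $k$-sparse $\hat z$ is attained by the best $k$-term approximation, the guarantee is easy to verify numerically for any candidate output (our sketch, reusing tail_norm from above):

```python
import numpy as np

# The min over k-sparse z_hat of ||x_hat - z_hat||_2 equals tail_norm(x_hat, k),
# so the l2/l2 guarantee for a candidate output y_hat reads:
def satisfies_l2l2(x_hat, y_hat, k, eps):
    return np.linalg.norm(x_hat - y_hat) <= (1 + eps) * tail_norm(x_hat, k)
```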
Prior work.

Uniform bounds (for all): Candès-Tao '06, Rudelson-Vershynin '08, Cheraghchi-Guruswami-Velingker '12, Bourgain '14, Haviv-Regev '15. Deterministic, $\Omega(N)$ runtime, $O(k \log^2 k \log N)$ samples.

Non-uniform bounds (for each): Goldreich-Levin '89, Kushilevitz-Mansour '91, Mansour '92, Gilbert-Guha-Indyk-Muthukrishnan-Strauss '02, Gilbert-Muthukrishnan-Strauss '05, Hassanieh-Indyk-Katabi-Price '12a, '12b, Indyk-K.-Price '14 ($k \log N (\log\log N)^C$ samples, but only in 1d), Indyk-K. '14, K. '16, K. '??. Randomized, $O(k\,\mathrm{poly}(\log N))$ runtime, $O(k \log N)$ samples.

Lower bound: $\Omega(k \log(N/k))$ samples for non-adaptive algorithms (Do Ba-Indyk-Price-Woodruff '10). (See also Boufounos-Cevher-Gilbert-Li-Strauss '12 and Price-Song '14 on continuous Sparse FFT.)

Nearly optimal results are known under the sparsity assumption alone. Can we exploit structure beyond sparsity?

Sparse FFT beyond the sparsity assumption. Signals arising in applications often have structure beyond sparsity. Can we exploit this further structure to reduce sample complexity and runtime?

Block sparsity in the Fourier domain. [Figure: magnitude vs. frequency; $k_0$ blocks of width $k_1$ each.] This work: suppose that the dominant frequencies are contained in $k_0$ clusters (intervals) of length $k_1$ each. Can we get $O(k_0 \log N + k_0 k_1)$ sample complexity in sublinear time?

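For intuition, here is a toy generator (ours) of a $(k_0, k_1)$-block sparse spectrum: $k_0$ randomly placed aligned blocks of $k_1$ nonzero coefficients each, plus optional noise.

```python
import numpy as np

# Toy (k0, k1)-block sparse spectrum: k0 random aligned blocks of k1 consecutive
# nonzero coefficients, plus i.i.d. Gaussian noise of level sigma.
def block_sparse_spectrum(n, k0, k1, sigma=0.0, rng=np.random.default_rng(0)):
    X_hat = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    starts = rng.choice(n // k1, size=k0, replace=False) * k1
    for s in starts:
        X_hat[s:s + k1] += rng.standard_normal(k1) + 1j * rng.standard_normal(k1)
    return X_hat

x = np.fft.ifft(block_sparse_spectrum(n=1024, k0=3, k1=32))  # time-domain signal
```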
Model-based compressed sensing. A framework for exploiting structured sparsity to reduce sample complexity, introduced by Baraniuk, Cevher, Duarte and Hegde '10. Baraniuk et al. '10: optimal $O(k_0 \log N + k_0 k_1)$ sample complexity using Gaussian measurements, in $\tilde O(k_0 k_1 n)$ time. [Same figure: $k_0$ blocks of width $k_1$.]

Adaptivity in compressed sensing. An algorithm is adaptive if its signal access pattern is guided by the samples taken so far. All Sparse FFT and compressed sensing results mentioned so far are non-adaptive.

Main result.

Theorem. There exists an adaptive $(k_0, k_1)$-block sparse Fourier transform with sample complexity $O\big((k_0 \log(1 + k_0) \log n + k_0 k_1) \log \mathrm{SNR}\big)$ and runtime $O\big((k_0 k_1 \log^3 n) \log \mathrm{SNR}\big)$.

Here SNR = the ratio of total signal energy (the "head", i.e. the dominant blocks) to total noise energy (the "tail"). The sample complexity is optimal up to lower order terms for constant SNR.

We crucially use adaptivity!

Theorem. Any non-adaptive Sparse FFT algorithm must take $\Omega\big(k_0 k_1 \log \frac{n}{k_0 k_1}\big)$ samples.

So no improvement upon vanilla sparsity is possible with non-adaptive Fourier measurements.

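For aligned blocks, the head/tail SNR is easy to compute numerically; the following is our own sketch, where we take the head to be the best approximation by $k_0$ aligned blocks of width $k_1$ (an assumption for illustration):

```python
import numpy as np

# SNR = head energy / tail energy, with the head taken as the k0 aligned
# length-k1 blocks of largest energy (assumes k1 divides n and a nonzero tail).
def block_snr(X_hat, k0, k1):
    block_energy = (np.abs(X_hat.reshape(-1, k1)) ** 2).sum(axis=1)
    head = np.sort(block_energy)[-k0:].sum()
    tail = block_energy.sum() - head
    return head / tail
```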
This is the first result along the following directions:
- the first sublinear-time model-based compressed sensing primitive;
- a separation between adaptive and non-adaptive Sparse FFT;
- a separation between structured (Fourier) and unstructured (Gaussian) measurements in model-based compressed sensing.

Outline: 1. Sparse recovery from Fourier measurements. 2. Main result.

Structure of Sparse FFT algorithms.

Input: access to a signal $x \in \mathbb{C}^N$ in the time domain.
Output: the top $k$ coefficients of $\hat x$, approximately.

Iterative process:
- recover dominant coefficients of the input signal;
- subtract the recovered coefficients (e.g. in the time domain);
- repeat.

Summary of techniques from Gilbert-Guha-Indyk-Muthukrishnan-Strauss '02, Akavia-Goldwasser-Safra '03, Gilbert-Muthukrishnan-Strauss '05, Iwen '10, Akavia '10, Hassanieh-Indyk-Katabi-Price '12a, '12b.

1-sparse recovery from Fourier measurements. [Figure: a pure tone in the time domain and its 1-sparse spectrum.] If $\hat x$ is 1-sparse with frequency $f$, then $x_a = \omega^{af} + \text{noise}$, so the phase advances by $2\pi f/n$ per unit time step. $O(\log n)$ measurements at random locations $a$ suffice.

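A noiseless illustration (ours) of the key identity: for a 1-sparse spectrum at frequency $f$, two consecutive samples determine $f$ from a single phase difference; the robust $O(\log n)$-sample version recovers $f$ bit by bit.

```python
import numpy as np

# Noiseless 1-sparse recovery: x[a] = omega**(a*f) with omega = exp(2j*pi/n),
# so x[a+1]/x[a] = exp(2j*pi*f/n), and f is read off one phase difference.
n, f = 1024, 217
a = np.random.default_rng(0).integers(n - 1)
x = lambda t: np.exp(2j * np.pi * f * t / n)
phase = np.angle(x(a + 1) / x(a))
f_rec = int(round(phase / (2 * np.pi) * n)) % n
assert f_rec == f
```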
Reducing $k$-sparse recovery to 1-sparse recovery. Permute the spectrum with a random linear transformation and phase shift. [Figure: time and frequency domains before and after the pseudorandom permutation.]

Choose a filter $(G, \hat G)$ such that $\hat G$ is approximately a rectangle and $G$ has support $\approx k$. Compute $\hat x * \hat G = \widehat{x \cdot G}$. Sample complexity = $|\mathrm{supp}(G)|$!

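A rough numerical sketch (ours, with an idealized window rather than the sharp filters used in the literature) of the hashing step: multiply $x$ by a compactly supported window $G$, alias the product down to $B$ samples, and take a size-$B$ FFT. Bucket $b$ then holds a windowed sum of the spectrum around frequency $b \cdot n/B$, and the sample complexity is exactly $|\mathrm{supp}(G)|$.

```python
import numpy as np

# Hash the spectrum into B buckets using |supp(G)| = B * width time samples.
# Windowing in time convolves the spectrum with G_hat; aliasing the windowed
# signal to length B evaluates that convolution on B equispaced frequencies.
def hash_spectrum(x, B, width=8):
    L = B * width                          # assumes len(x) >= L
    G = np.hanning(L)                      # idealized window (real algorithms
                                           # use sharper "flat top" filters)
    y = x[:L] * G                          # only |supp(G)| samples of x accessed
    folded = y.reshape(width, B).sum(axis=0)
    return np.fft.fft(folded)              # bucket b ~ (x_hat * G_hat)(b * n/B),
                                           # up to FFT normalization conventions
```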
Outline: 1. Sparse recovery from Fourier measurements. 2. Main result.

Block sparsity in the Fourier domain, revisited. [Figure: $k_0$ blocks of width $k_1$.] The dominant frequencies are contained in $k_0$ clusters (intervals) of length $k_1$ each. Standard techniques destroy this structure (via a random permutation of the spectrum) and need $\Omega(k_0 k_1 \log n)$ samples. Can we get $O(k_0 \log n + k_0 k_1)$ sample complexity in sublinear time?

Recall the main result: an adaptive $(k_0, k_1)$-block sparse Fourier transform taking $O\big((k_0 \log(1 + k_0) \log n + k_0 k_1) \log \mathrm{SNR}\big)$ samples, optimal up to lower order terms for constant SNR, and crucially using adaptivity.

The spectrum is well approximated by $k_0$ blocks of length $k_1$. [Figure: magnitude vs. frequency, $k_0 = 3$ in the example, together with the $k_0$-sparse reduced signal.] Natural idea: can we treat blocks as individual frequencies, and thus reduce to standard $k_0$-sparse recovery? YES, to some extent...

Reduced signals. [Figure: the spectrum with the filter $\hat G$ sliding across it.] For each $j = 1, \dots, n/k_1$, define the reduced signal $Z$ by

$\hat Z_j := (\hat X * \hat G)(j k_1)$

($G$ has small support, and $\hat G$ is the indicator of a block).

Good news: if $\hat X$ is approximately $(k_0, k_1)$-block sparse, then $\hat Z$ is approximately $O(k_0)$-sparse.

Major problem: some blocks of $\hat X$ might have essentially zero contribution to $Z$ due to cancellations!

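With $\hat G$ an exact block indicator, the definition just sums $\hat X$ over each block (up to an index shift), and the cancellation problem is easy to see: a block whose coefficients sum to zero vanishes in $\hat Z$. A toy demonstration (ours):

```python
import numpy as np

# With G_hat the indicator of a length-k1 block, Z_hat[j] = (X_hat * G_hat)(j*k1)
# is the sum of X_hat over block j (up to indexing conventions).
def reduced_signal_hat(X_hat, k1):
    return X_hat.reshape(-1, k1).sum(axis=1)

n, k1 = 64, 8
X_hat = np.zeros(n, dtype=complex)
X_hat[16:24] = [1, -1] * 4            # a block with alternating signs...
print(reduced_signal_hat(X_hat, k1))  # ...contributes zero to Z_hat: cancellation!
```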
Energy concentration of 1-block-sparse signals. In standard Sparse FFT $k_1 = 1$, so the energy of a block is uniformly spread over time. But if $\hat X$ is 1-block sparse with block size $k_1$, its energy could be entirely concentrated on only a $1/k_1$ fraction of the time domain! [Figure: time and frequency domains; the wider the block, the more concentrated the time-domain signal.]

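A quick demo (ours): a single block of $k_1$ in-phase coefficients produces a time-domain signal close to a Dirichlet kernel, whose energy sits on roughly an $n/k_1$-sized portion of the time domain.

```python
import numpy as np

# One block of k1 equal coefficients -> time signal ~ Dirichlet kernel,
# with most energy on about a 1/k1 fraction of the time samples.
n, k1 = 4096, 64
X_hat = np.zeros(n, dtype=complex)
X_hat[:k1] = 1.0
x = np.fft.ifft(X_hat)
energy = np.abs(x) ** 2
frac = np.sort(energy)[::-1][: n // k1].sum() / energy.sum()
print(f"energy on top 1/k1 fraction of samples: {frac:.2%}")  # well over 3/4
```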
Recovery of block-sparse signals. For each $r = 1, \dots, 2k_1$, define the reduced signal $Z^r$ by

$\hat Z^r_j := \big(\widehat{X_{r \frac{n}{2k_1}} \cdot G}\big)(j k_1)$

(shift $X$ first, using $2k_1$ regular shifts; here $X_s$ denotes $X$ shifted by $s$ in time).

If each reduction requires $k_0 \log n$ samples, are we back to the $k_0 k_1 \log n$ bound? An adaptive solution can first probe which reduced signals carry most of the energy, then allocate hashing budgets carefully.

Importance sampling.

ALLOCATEBUDGETS($X, k_0, k_1$):
  For $t = 1, \dots, O(k_0)$:
    Sample $r \propto \|Z^r\|_2^2$, $q \sim \mathrm{Geom}$
    Set $s_t \leftarrow (r, 2^q)$
  End For
  [$O(k_0 k_1)$ samples]

BUDGETEDRECOVERY($X, \{s_t\}$):
  For $t = 1, \dots, O(k_0)$:
    Hash $Z^r$ into $s_t$ buckets, recover the top $s_t$ frequencies
  End For
  [$O(k_0 \log(1 + k_0) \log n)$ samples]

[Figure: time-domain sampling patterns of the two stages.]

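A highly simplified sketch of the first stage (ours; the function name and the Geom parameter are assumptions for illustration, not the paper's exact procedure): given estimates of the energies $\|Z^r\|_2^2$, draw (shift, budget) pairs with shifts proportional to energy and geometric bucket budgets.

```python
import numpy as np

# Stage 1 sketch: given (estimates of) the energies ||Z^r||_2^2 over the 2*k1
# shifts, draw O(k0) (shift, budget) pairs: shift r with probability
# proportional to its energy, budget 2**q with q geometric.
def allocate_budgets(energies, k0, rng=np.random.default_rng(0)):
    p = energies / energies.sum()
    return [(rng.choice(len(energies), p=p), 2 ** rng.geometric(0.5))
            for _ in range(k0)]

# Stage 2 (BUDGETEDRECOVERY) would hash Z^r into s_t buckets for each pair
# (r, s_t) and recover the top s_t frequencies; see the hashing sketch above.
```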
Adaptivity is necessary. The algorithm crucially uses adaptivity: it samples a few points first to find a non-uniform sampling distribution for the second stage.

Theorem. Any non-adaptive Sparse FFT algorithm must take $\Omega\big(k_0 k_1 \log \frac{n}{k_0 k_1}\big)$ samples.

Our results: the first Sparse FFT algorithm that exploits structure beyond sparsity; adaptivity is crucial.

Open questions:
- Other forms of structured sparsity (e.g. tree sparsity, graph sparsity)?
- A new class of Sparse FFT techniques that adapt to the structure of the spectrum?
- Experimental evaluation?

Thank you!
