Sparse Fourier Transform (lecture 4)

Sparse Fourier Transform (lecture 4)
Michael Kapralov (IBM Watson / EPFL)
St. Petersburg CS Club, November 2015

Given x ∈ C^n, compute the Discrete Fourier Transform (DFT) of x:
x̂_i = (1/n) · Σ_{j ∈ [n]} x_j ω^{ij},
where ω = e^{2πi/n} is the n-th root of unity.

Goal: find the top k coefficients of x̂ approximately.

In previous lectures:
- exactly k-sparse: O(k log n) runtime and samples
- approximately k-sparse: O(k log² n · (log log n)) runtime and samples

This lecture, for the approximately k-sparse case:
- k log n · log^{O(1)} log n samples in k log² n · log^{O(1)} log n time;
- O(k log n) samples (optimal).
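For reference, here is a small NumPy sketch of the normalized DFT above and of the "find the top k coefficients" goal. It is only an illustration of the definitions (a full n-point FFT is exactly what the sparse algorithms below avoid), and the helper names are mine.

```python
import numpy as np

def dft_normalized(x):
    """x_hat_i = (1/n) * sum_{j in [n]} x_j * omega^{i*j}, omega = e^{2*pi*i/n}.

    np.fft.ifft computes (1/n) * sum_j x_j * exp(+2*pi*i*j*k/n), which is
    exactly this convention, so we can reuse it directly.
    """
    return np.fft.ifft(x)

def topk_frequencies(x, k):
    """Indices and values of the k largest-magnitude DFT coefficients of x."""
    x_hat = dft_normalized(x)
    idx = np.sort(np.argsort(np.abs(x_hat))[-k:])
    return idx, x_hat[idx]

if __name__ == "__main__":
    n, k = 1024, 5
    rng = np.random.default_rng(0)
    freqs = np.sort(rng.choice(n, size=k, replace=False))
    coeffs = rng.standard_normal(k) + 1j * rng.standard_normal(k)
    j = np.arange(n)
    # Under the convention above, a time-domain tone exp(-2*pi*i*f*j/n) has its
    # DFT concentrated at frequency index f.
    x = sum(c * np.exp(-2j * np.pi * f * j / n) for c, f in zip(coeffs, freqs))
    x += 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    print(topk_frequencies(x, k)[0], freqs)
```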

Improvements?

List ← ∅
For t = 1 to log k:
  B_t ← C·k/4^t
  γ_t ← 1/(C·2^t)
  List ← List + PARTIALRECOVERY(B_t, γ_t, List)
End

Summary: independent invocations of PARTIALRECOVERY:
- use fresh samples in every iteration
- reduce the (approximate) sparsity at a geometric rate
- need sharp filters to reduce the sparsity
- lose Ω(log n) factors in time and sample complexity because of the sharpness.

Why not use simpler filters with smaller support? Let
G_j := 1/(B+1) if j ∈ [−B/2, B/2], and 0 otherwise.
[time-domain / frequency-domain plots omitted]
supp(G) = B ≈ k, as opposed to B ≈ k log n, but the buckets leak:
can only identify and approximate elements of value at least ‖x̂‖₂²/k, and estimate them only up to ‖x̂‖₂²/k additive error, so one needs to repeat Ω(log n) times.
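To see the leakage concretely, the short sketch below (illustrative, not from the slides) builds this boxcar G and inspects Ĝ: its frequency response is a Dirichlet/sinc kernel whose side lobes decay only polynomially, so a strong coefficient contaminates buckets far away from its own.

```python
import numpy as np

n, B = 1024, 16
G = np.zeros(n)
# Boxcar of support B+1 centered at zero (indices taken mod n), height 1/(B+1).
G[np.arange(-(B // 2), B // 2 + 1) % n] = 1.0 / (B + 1)

G_hat = np.abs(np.fft.fft(G))   # frequency response: a Dirichlet (periodic sinc) kernel

# The main lobe has width ~ n/B, but the side lobes decay only like 1/|j|,
# so energy from one bucket bleeds into many others.
print("peak:", round(G_hat[0], 3))
print("leakage a few buckets away:",
      [round(float(G_hat[t * (n // B) + n // (2 * B)]), 3) for t in range(1, 5)])
```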

Sample complexity
Sample complexity = the number of samples accessed in the time domain. In some applications it is at least as important as the runtime (Shi-Andronesi-Hassanieh-Ghazi-Katabi-Adalsteinsson, ISMRM'13).

Given access to x ∈ C^n, find ŷ such that
‖x̂ − ŷ‖₂ ≤ C · min_{k-sparse ẑ} ‖x̂ − ẑ‖₂.
Use the smallest possible number of samples?

Uniform bounds (for all): Candes-Tao'06, Rudelson-Vershynin'08, Cheraghchi-Guruswami-Velingker'12, Bourgain'14, Haviv-Regev'15. Deterministic, Ω(n) runtime, O(k log² k · log n) samples.

Non-uniform bounds (for each): Goldreich-Levin'89, Kushilevitz-Mansour'91, Mansour'92, Gilbert-Guha-Indyk-Muthukrishnan-Strauss'02, Gilbert-Muthukrishnan-Strauss'05, Hassanieh-Indyk-Katabi-Price'12a, Hassanieh-Indyk-Katabi-Price'12b. Randomized, O(k·poly(log n)) runtime, O(k log² n) samples.

Lower bound: Ω(k log(n/k)) for non-adaptive algorithms (Do Ba-Indyk-Price-Woodruff'10).

Theorem. There exists an algorithm for ℓ2/ℓ2 sparse recovery from Fourier measurements using O(k log n · log^{O(1)} log n) samples and O(k log² n · log^{O(1)} log n) runtime. Optimal up to poly(log log n) factors for k ≤ n^{1−δ}.

ℓ2/ℓ2 sparse recovery guarantee:
‖x̂ − ŷ‖₂ ≤ C · min_{k-sparse ẑ} ‖x̂ − ẑ‖₂ = C · Err_k(x̂),
where Err²_k(x̂) = Σ_{j=k+1}^n |x̂_j|² over the coefficients sorted by decreasing magnitude; the residual error is bounded by the noise energy Err_k(x̂).

Signal-to-noise ratio: R = ‖x̂ − ŷ‖₂² / Err²_k(x̂) ≤ C.

Sufficient to ensure that most elements are below the average noise level:
|x̂_i − ŷ_i|² ≤ c · Err²_k(x̂)/k =: µ².

Iterative recovery
Many algorithms use the iterative recovery scheme:

Input: x ∈ C^n
ŷ_0 ← 0
For t = 1 to L:
  ẑ ← PARTIALRECOVERY(x, ŷ_{t−1})      (takes random samples of x − y)
  Update ŷ_t ← ŷ_{t−1} + ẑ

PARTIALRECOVERY(x, ŷ) returns (approximately) the dominant Fourier coefficients ẑ of x − y; dominant coefficients are those with |x̂_i − ŷ_i|² ≥ µ² (above the average noise level).

Main questions:
- How many samples per SNR reduction step?
- How many iterations?

Summary of techniques from Gilbert-Guha-Indyk-Muthukrishnan-Strauss'02, Akavia-Goldwasser-Safra'03, Gilbert-Muthukrishnan-Strauss'05, Iwen'10, Akavia'10, Hassanieh-Indyk-Katabi-Price'12a, Hassanieh-Indyk-Katabi-Price'12b.

1-sparse recovery from Fourier measurements
[time-domain / frequency-domain magnitude plots omitted; a single spike at frequency 2πf/n]
x_a = ω^{af} + noise.
O(log_{SNR} n) measurements for random a.
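The core of 1-sparse recovery is that, for a pure tone, the ratio of two consecutive time samples reveals the frequency through its phase. The sketch below is my illustration (the `estimate_tone` helper is hypothetical, not the lecture's code): it recovers f from two samples in the noiseless case; with noise, one averages O(log_{SNR} n) such comparisons at random shifts a.

```python
import numpy as np

def estimate_tone(x_sample, n, a):
    """Estimate the frequency f of a pure tone x_j = c * omega^{f*j},
    omega = e^{2*pi*i/n}, from the two samples x_a and x_{a+1}.

    x_{a+1} / x_a = omega^f, so f can be read off from the phase of the ratio.
    With noise each comparison determines f only coarsely, which is why the
    algorithm repeats with several random sample locations.
    """
    ratio = x_sample(a + 1) / x_sample(a)
    f = (np.angle(ratio) * n / (2 * np.pi)) % n
    return int(round(f)) % n

if __name__ == "__main__":
    n, f_true, c = 4096, 517, 2.0 - 1.0j
    omega = np.exp(2j * np.pi / n)
    x_sample = lambda j: c * omega ** (f_true * j)   # only two samples are accessed
    print(estimate_tone(x_sample, n, a=123), f_true)
```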

Reducing k-sparse recovery to 1-sparse recovery
Permute with a random linear transformation and phase shift.
[time-domain / frequency-domain magnitude plots omitted]

Reducing k-sparse recovery to 1-sparse recovery
Partition the frequency space into B = k/α buckets, for a constant α ∈ (0,1).
[time-domain / frequency-domain magnitude plots omitted]
Choose a filter (G, Ĝ) such that
- Ĝ approximates the buckets,
- G has small support.
Compute x̂ ∗ Ĝ, i.e. the DFT of x · G.
Sample complexity = supp(G)!
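A minimal NumPy sketch of the bucketing idea follows (my illustration: it uses a plain boxcar for G and, for clarity, computes a full FFT even though the real algorithm only touches the supp(G) nonzero samples of x and then runs a B-point FFT):

```python
import numpy as np

def bucket_measurements(x, B):
    """Hash the spectrum of x into B buckets using a small-support filter G.

    G is a boxcar with supp(G) = B+1, so the real algorithm would only need to
    read those B+1 time samples of x.  The DFT of x*G equals x_hat convolved
    with G_hat (up to the DFT convention), and sampling it at the B bucket
    centers gives one (leaky) measurement per bucket.
    """
    n = len(x)
    G = np.zeros(n)
    G[np.arange(-(B // 2), B // 2 + 1) % n] = 1.0 / (B + 1)
    y = np.fft.fft(x * G)
    return y[:: n // B]            # one sample per bucket center

if __name__ == "__main__":
    n, B = 4096, 64
    rng = np.random.default_rng(1)
    f = rng.integers(n)            # a single tone; it lands in bucket ~ f*B/n
    x = np.exp(2j * np.pi * f * np.arange(n) / n)
    buckets = np.abs(bucket_measurements(x, B))
    print("true bucket:", int(round(f * B / n)) % B,
          "argmax bucket:", int(np.argmax(buckets)))
```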

PARTIALRECOVERY step
PARTIALRECOVERY(x, ŷ):
- Make measurements (independent permutation + filtering)
- Locate and estimate large frequencies (1-sparse recovery)
- Return (approximately) the dominant Fourier coefficients ẑ of x − y

Sample complexity = support of G.

How many measurements do we need? How effective is a refinement step? Both are determined by the signal-to-noise ratio in each bucket, which is a function of the filter choice.

Time domain: support O(k) [GMS'05]. Frequency domain: [magnitude plot omitted]. SNR = O(1); reduce the SNR by an O(1) factor per step; Ω(k log² n) samples.

Time domain: support Θ(k log n) [HIKP'12]. Frequency domain: [magnitude plot omitted]. SNR can be poly(n); reduce the sparsity by an O(1) factor per step; Ω(k log² n) samples.

This paper: interpolate between the two extremes and get all the benefits.

Main idea
A new family of filters that adapt to the current upper bound on the SNR: sharp filters initially, more blurred later.
[three magnitude-vs-frequency plots omitted]

[magnitude-vs-frequency plots omitted]
When the SNR is bounded by R:
- filter support O(k log R) (convolve a boxcar with itself log R times)
- (most) 1-sparse recovery subproblems for dominant frequencies have high SNR (about R), so O(log_R n) measurements suffice!
O(k log R · log_R n) = O(k log n) samples per step!
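The "convolve a boxcar with itself log R times" construction can be sanity-checked numerically. The sketch below (illustrative parameters of my choosing) shows that t-fold self-convolution grows the time support by roughly a factor t while raising the frequency response to the t-th power, so far side lobes fall off like |j|^{-t}.

```python
import numpy as np

def repeated_boxcar(n, width, t):
    """A width-`width` boxcar convolved with itself t times (circularly, length n).

    Time support grows to roughly t*width, while in frequency the response is
    the t-th power of a Dirichlet/sinc kernel, so far side lobes decay like |j|^{-t}.
    """
    g = np.zeros(n)
    g[:width] = 1.0 / width
    out = np.zeros(n)
    out[0] = 1.0                                   # delta function
    for _ in range(t):
        out = np.real(np.fft.ifft(np.fft.fft(out) * np.fft.fft(g)))  # circular convolution
    return out

if __name__ == "__main__":
    n, width = 4096, 32
    probe = 4 * (n // width) + n // (2 * width)    # ~4.5 bucket-widths from the peak
    for t in (1, 2, 4):
        g = repeated_boxcar(n, width, t)
        G_hat = np.abs(np.fft.fft(g))
        print(f"t={t}: time support ~{np.count_nonzero(g > 1e-12)}, "
              f"leakage {G_hat[probe]:.1e} (peak {G_hat[0]:.2f})")
```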

PARTIALRECOVERY(x, ŷ, R)
The SNR upper bound shrinks as R → R^{1/2} → R^{1/4} → ... → C² → C over O(log log n) invocations: the call with parameter R reduces the per-coefficient bound from R·µ² to R^{1/2}·µ², the next call (with parameter R^{1/2}) reduces it to R^{1/4}·µ², and so on down to C²·µ² and then C·µ².
Here µ² = tail noise / B.

Algorithm
Input: x ∈ C^n
ŷ_0 ← 0, R_0 ← poly(n)
For t = 1 to O(log log n):
  ẑ ← PARTIALRECOVERY(x, ŷ_{t−1}, R_{t−1})      (takes samples of x − y)
  Update ŷ_t ← ŷ_{t−1} + ẑ
  R_t ← R_{t−1}^{1/2}

The PARTIALRECOVERY step:
- takes O(k log n) samples, independent of R
- is very effective: it reduces R → R^{1/2}, so O(log log n) iterations suffice (after t steps the bound is R_0^{1/2^t}, and R_0 = poly(n)).
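A skeletal version of this outer loop follows (my sketch; `partial_recovery` is a hypothetical callback standing in for the actual PARTIALRECOVERY routine). It only makes the O(log log n) iteration count explicit: taking square roots repeatedly turns log R into (log R)/2^t.

```python
import math

def sparse_recover(x, n, k, partial_recovery):
    """Outer loop: repeatedly call PARTIALRECOVERY with a shrinking SNR bound R.

    partial_recovery(x, y_hat, R) is assumed to return an update z_hat that
    reduces the residual SNR from R to about sqrt(R), using O(k log n) samples
    of x regardless of R.
    """
    y_hat = {}                      # sparse dictionary of recovered coefficients
    R = float(n) ** 3               # initial SNR bound, poly(n)
    iterations = 0
    while R > 4.0:                  # stop once the bound is a constant
        z_hat = partial_recovery(x, y_hat, R)
        for f, v in z_hat.items():  # y_hat <- y_hat + z_hat
            y_hat[f] = y_hat.get(f, 0) + v
        R = math.sqrt(R)            # R_t <- R_{t-1}^{1/2}
        iterations += 1
    return y_hat, iterations        # iterations = O(log log n)

if __name__ == "__main__":
    # With a dummy partial_recovery, just count the iterations for n = 2**20.
    n = 2 ** 20
    _, iters = sparse_recover(None, n, k=50, partial_recovery=lambda x, y, R: {})
    print(iters, "iterations; log2(log2(n^3)) =", round(math.log2(math.log2(n ** 3)), 2))
```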

Partial recovery analysis
PARTIALRECOVERY(x, ŷ, R) must bring the per-coefficient bound from R·µ² down to R^{1/2}·µ² (µ² = tail noise / B).
- Need to reduce most large frequencies (those at energy about R·µ²).
- "Most" = a 1 − 1/poly(R) fraction.
- Iterative process, O(log log n) steps.

[diagram omitted: per-coefficient bound dropping from R·µ² to R^{1/2}·µ²; µ² = tail noise / B]
- Partition elements into geometric weight classes.
- Write down a recursion that governs the dynamics.
- The top half of the classes are reduced at a doubly exponential rate if we use Ω(log log R) levels.

Sample optimal algorithm (reusing measurements)

Uniform bounds (for all): Candes-Tao'06, Rudelson-Vershynin'08, Cheraghchi-Guruswami-Velingker'12, Bourgain'14, Haviv-Regev'15. Deterministic, Ω(n) runtime, O(k log² k · log n) samples.

Non-uniform bounds (for each): Goldreich-Levin'89, Kushilevitz-Mansour'91, Mansour'92, Gilbert-Guha-Indyk-Muthukrishnan-Strauss'02, Gilbert-Muthukrishnan-Strauss'05, Hassanieh-Indyk-Katabi-Price'12a, Hassanieh-Indyk-Katabi-Price'12b, Indyk-K.-Price'14. Randomized, O(k·poly(log n)) runtime, O(k log n · (log log n)^C) samples.

Lower bound: Ω(k log(n/k)) for non-adaptive algorithms (Do Ba-Indyk-Price-Woodruff'10).

Theorem. There exists an algorithm for ℓ2/ℓ2 sparse recovery from Fourier measurements using O(k log n) samples and O(n log³ n) runtime. Optimal up to constant factors for k ≤ n^{1−δ}.

Higher dimensional Fourier transform is needed in some applications.
Given x ∈ C^{[n]^d}, N = n^d, compute
x̂_j = (1/N) · Σ_{i ∈ [n]^d} x_i ω^{iᵀj}   and   x_j = Σ_{i ∈ [n]^d} x̂_i ω^{−iᵀj},
where ω is the n-th root of unity and n is a power of 2.
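For concreteness, a short NumPy check of the d-dimensional transform under this normalization (my sketch; as before, the sign convention is chosen so that np.fft matches the formula as written above):

```python
import numpy as np

def dft_d(x):
    """x_hat_j = (1/N) * sum_{i in [n]^d} x_i * omega^{i.j}, omega = e^{2*pi*i/n}.

    np.fft.ifftn computes exactly this sum, with the +i exponent and the 1/N
    factor, for a d-dimensional array x of shape (n, ..., n).
    """
    return np.fft.ifftn(x)

def inverse_dft_d(x_hat):
    """Inverse of dft_d: x_i = sum_{j in [n]^d} x_hat_j * omega^{-i.j}."""
    return np.fft.fftn(x_hat)

if __name__ == "__main__":
    n, d = 16, 3
    rng = np.random.default_rng(2)
    x = rng.standard_normal((n,) * d) + 1j * rng.standard_normal((n,) * d)
    x_hat = dft_d(x)
    print("round-trip error:", np.max(np.abs(inverse_dft_d(x_hat) - x)))
```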

Previous sample complexity bounds:
- O(k log^d N), sublinear-time algorithms (runtime k log^{O(d)} N), for each
- O(k log⁴ N) for any d, Ω(N) time, for all

This lecture:
Theorem. There exists an algorithm for ℓ2/ℓ2 sparse recovery from Fourier measurements using O_d(k log N) samples and O(N log³ N) runtime. Sample-optimal up to constant factors for any constant d.

ℓ2/ℓ2 sparse recovery guarantee:
‖x̂ − ŷ‖₂ ≤ C · min_{k-sparse ẑ} ‖x̂ − ẑ‖₂ = C · Err_k(x̂),
where Err²_k(x̂) = Σ_{j=k+1}^n |x̂_j|² over the coefficients sorted by decreasing magnitude; the residual error is bounded by the noise energy Err_k(x̂).
[head/tail diagram omitted; µ ≈ tail noise / √k]

Sufficient to ensure that most elements are below the average noise level:
|x̂_i − ŷ_i|² ≤ c · Err²_k(x̂)/k =: µ².

Will ensure that all elements are below the average noise level:
‖x̂ − ŷ‖_∞² ≤ c · Err²_k(x̂)/k =: µ².

This is an ℓ∞/ℓ2 sparse recovery guarantee:
‖x̂ − ŷ‖_∞² ≤ C · Err²_k(x̂)/k, i.e. ‖x̂ − ŷ‖_∞² ≤ µ².

Iterative recovery
Input: x ∈ C^n
ŷ_0 ← 0
For t = 1 to L:
  ẑ ← PARTIALRECOVERY(x − y_{t−1})      (takes random samples of x − y)
  Update ŷ_t ← ŷ_{t−1} + ẑ

In most prior works the sampling complexity is
(samples per PARTIALRECOVERY step) × (number of iterations).

Lots of work on carefully choosing filters and reducing the number of iterations (Hassanieh-Indyk-Katabi-Price'12, Ghazi-Hassanieh-Indyk-Katabi-Price-Shi'13, Indyk-K.-Price'14), but these still:
- lose Ω(log log n) in sample complexity (number of iterations)
- lose Ω((log n)^{d−1} log log n) in higher dimensions.

Our sampling complexity is just the number of samples per PARTIALRECOVERY step; it is not multiplied by the number of iterations, because measurements are reused across iterations.
Can use very simple filters!

Our filter = a boxcar convolved with itself O(1) times.
- Filter support is O(k) (= samples per measurement)
- O(k log n) samples in a PARTIALRECOVERY step
[time-domain / frequency-domain magnitude plots omitted]
Can choose a rather weak filter, but do not need fresh randomness.

[diagram omitted: the filter Ĝ and the B buckets]

Let y^m ← (P_m x) · G for m = 1, ..., M = C log n        (take samples of x)
ẑ_1 ← 0
For t = 1, ..., T = O(log n):                             (loop over thresholds)
  For f ∈ [n]:
    ŵ_f ← median{ ỹ^1_f, ..., ỹ^M_f }                     (estimate)
    If |ŵ_f| < 2^{T−t} · µ/3 then ŵ_f ← 0                 (prune small elements)
  End
  ẑ_{t+1} ← ẑ_t + ŵ
  y^m ← y^m − (P_m w) · G for m = 1, ..., M               (update samples)
End
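Below is a heavily simplified, self-contained sketch of this loop (my code, not the paper's). To keep it short, the M "measurements" are simulated as independently noise-corrupted copies of the spectrum rather than being produced by the actual permutation-plus-filter bucketing, so it only illustrates the median estimation, the exponentially decreasing threshold 2^{T−t}·µ/3, and the update of the measurements after each round.

```python
import numpy as np

def iterative_threshold_recovery(y_meas, mu, T):
    """Recover a sparse spectrum from M noisy frequency-domain views.

    y_meas: array of shape (M, n); y_meas[m] plays the role of the m-th
    measurement ytilde^m (here simply x_hat + independent noise of scale ~ mu).
    Returns the accumulated estimate z_hat.
    """
    M, n = y_meas.shape
    z_hat = np.zeros(n, dtype=complex)
    for t in range(1, T + 1):
        # Estimate every coefficient by a coordinatewise median over measurements.
        w = (np.median(y_meas.real, axis=0)
             + 1j * np.median(y_meas.imag, axis=0))
        # Prune everything below the current threshold 2^(T-t) * mu / 3.
        w[np.abs(w) < (2.0 ** (T - t)) * mu / 3.0] = 0.0
        z_hat += w                       # z_{t+1} = z_t + w
        y_meas = y_meas - w              # subtract w from every measurement
    return z_hat

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n, k, M = 2048, 20, 31
    x_hat = np.zeros(n, dtype=complex)
    x_hat[rng.choice(n, k, replace=False)] = 10.0 * np.exp(2j * np.pi * rng.random(k))
    mu = 0.1
    noise = mu * (rng.standard_normal((M, n)) + 1j * rng.standard_normal((M, n)))
    z_hat = iterative_threshold_recovery(x_hat + noise, mu, T=int(np.log2(n)))
    print("max coefficient error:", np.max(np.abs(z_hat - x_hat)))
```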

Lecture so far
- Optimal sample complexity by reusing randomness.
- Very simple algorithm, can be implemented.
- Extension to higher dimensions: the algorithm is the same, only the permutations are different. Choose a random invertible linear transformation over Z_n^d.
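For the higher-dimensional permutations one needs a random invertible d×d matrix over Z_n. Since n is a power of 2, a matrix is invertible mod n exactly when its determinant is odd, which gives the simple rejection sampler below (my sketch, not the paper's code).

```python
import numpy as np

def det_mod2(M):
    """Determinant of an integer matrix modulo 2, via Gaussian elimination over GF(2)."""
    A = (M % 2).astype(np.uint8).copy()
    d = A.shape[0]
    for col in range(d):
        pivot = next((r for r in range(col, d) if A[r, col]), None)
        if pivot is None:
            return 0                       # singular mod 2
        A[[col, pivot]] = A[[pivot, col]]  # swap a pivot row into place
        for r in range(col + 1, d):
            if A[r, col]:
                A[r] ^= A[col]             # eliminate below the pivot
    return 1

def random_invertible_matrix(d, n, rng):
    """Uniform random d x d matrix over Z_n (n a power of 2) with odd determinant,
    i.e. invertible mod n, sampled by rejection."""
    while True:
        M = rng.integers(0, n, size=(d, d))
        if det_mod2(M):
            return M

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    M = random_invertible_matrix(d=3, n=2 ** 10, rng=rng)
    print(M)
    print("det mod 2:", det_mod2(M))       # 1 => invertible over Z_{2^10}
```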

Experimental evaluation
Problem: recover the support of a random k-sparse signal from Fourier measurements.
Parameters: n = 2^15, k = 10, 20, ..., 100.
Filter: a boxcar filter with support k + 1.

Comparison to ℓ1-minimization (SPGL1), which has O(k log³ k · log n) sample complexity and requires solving an LP.
[plots omitted: number of measurements vs. sparsity for n = 32768, ℓ1 minimization, and for n = 32768, B = k, random phase, non-monotone]
Within a factor of 2 of ℓ1 minimization.

Open questions:
- O(k log n) samples in O(k log² n) time?
- O(k log n) runtime?
- Remove the dependence on the dimension? Current approaches lose C^d in sample complexity and (log n)^d in runtime.

More on sparse FFT:
