Sparse Fourier Transform (lecture 1)


Michael Kapralov (IBM Watson / EPFL). St. Petersburg CS Club, November 2015.

Given $x \in \mathbb{C}^n$, compute the Discrete Fourier Transform (DFT) of $x$:

$$\hat{x}_i = \frac{1}{n} \sum_{j \in [n]} x_j\, \omega^{-ij},$$

where $\omega = e^{2\pi i/n}$ is the $n$-th root of unity. Assume that $n$ is a power of 2.

DFT has numerous applications:
- compression schemes (JPEG, MPEG)
- signal processing
- data analysis
- imaging (MRI, NMR)
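To make the convention concrete, here is a minimal numerical sketch (mine, not from the lecture; it assumes NumPy) of the DFT with this normalization, checked against np.fft.fft:

```python
import numpy as np

def dft(x):
    """DFT with this lecture's normalization: hat{x}_f = (1/n) sum_j x_j omega^{-fj}."""
    n = len(x)
    f, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return (x * np.exp(-2j * np.pi * f * j / n)).sum(axis=1) / n

x = np.random.randn(8) + 1j * np.random.randn(8)
# np.fft.fft computes sum_j x_j e^{-2 pi i f j / n}; dividing by n matches our normalization
assert np.allclose(dft(x), np.fft.fft(x) / len(x))
```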

Fast Fourier Transform (FFT): computes the Discrete Fourier Transform (DFT) of a length-$n$ signal in $O(n\log n)$ time.

Cooley and Tukey, 1965 (the idea goes back to Gauss, 1805). Code: FFTW (Fastest Fourier Transform in the West).

Sparse FFT. Say that $\hat x$ is $k$-sparse if $\hat x$ has $k$ nonzero entries. Say that $\hat x$ is approximately $k$-sparse if $\hat x$ is close to $k$-sparse in some norm ($\ell_2$ for this lecture).

Sparse approximations. Given $x$, compute $\hat x$, then keep the top $k$ coefficients only, for $k \ll n$. Used in image and video compression schemes (e.g. JPEG, MPEG).

Computing the approximation fast. Basic approach: FFT computes $\hat x$ from $x$ in $O(n\log n)$ time; then compute the top $k$ coefficients in $O(n)$ time.

Sparse FFT: directly computes the $k$ largest coefficients of $\hat x$ (approximately; formal definition later). Running time $O(k\log^2 n)$ or faster. Sublinear time!
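The baseline is easy to state in code. A minimal sketch (the function name topk_via_fft is mine) of the $O(n\log n)$ approach:

```python
import numpy as np

def topk_via_fft(x, k):
    """Baseline: full FFT, then keep only the top-k coefficients of hat{x}."""
    xhat = np.fft.fft(x) / len(x)          # O(n log n)
    keep = np.argsort(np.abs(xhat))[-k:]   # indices of the k largest magnitudes
    yhat = np.zeros_like(xhat)
    yhat[keep] = xhat[keep]                # best k-sparse approximation of hat{x}
    return yhat
```

The point of Sparse FFT is to avoid ever touching all $n$ samples, which is what makes sublinear time possible.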

Sample complexity = the number of samples accessed in the time domain.

Sample complexity. In medical imaging (MRI, NMR), one measures the Fourier coefficients $\hat x$ of the imaged object $x$ (which is often sparse).

Sample complexity. Measure $x \in \mathbb{C}^n$, the Inverse Discrete Fourier Transform (IDFT) of $\hat x$:

$$x_i = \sum_{j \in [n]} \hat x_j\, \omega^{ij}.$$

Given $x \in \mathbb{C}^n$, compute the Discrete Fourier Transform (DFT) of $x$:

$$\hat x_i = \frac{1}{n} \sum_{j \in [n]} x_j\, \omega^{-ij}.$$

Sample complexity = the number of samples accessed in the time domain. It governs the measurement complexity of the imaging process.

Given access to the signal $x$ in the time domain, find the best $k$-sparse approximation to $\hat x$, approximately. Minimize (1) runtime and (2) the number of samples.

Algorithms: randomization, approximation, hashing, sketching, ...

Signal processing: Fourier transform, Hadamard transform, filters, compressive sensing, ...

Lecture 1: summary of techniques from Gilbert-Guha-Indyk-Muthukrishnan-Strauss'02, Akavia-Goldwasser-Safra'03, Gilbert-Muthukrishnan-Strauss'05, Iwen'10, Akavia'10, Hassanieh-Indyk-Katabi-Price'12a, Hassanieh-Indyk-Katabi-Price'12b.

Lecture 2: algorithm with $O(k\log n)$ runtime (noiseless case), Hassanieh-Indyk-Katabi-Price'12b.

Lecture 3: algorithm with $O(k\log^2 n)$ runtime (noisy case), Hassanieh-Indyk-Katabi-Price'12b.

Lecture 4: algorithm with $O(k\log n)$ sample complexity, Indyk-Kapralov-Price'14, Indyk-Kapralov'14.

Outline: 1. Computing the Fourier transform of 1-sparse signals fast. 2. Sparsity $k > 1$: main ideas and challenges.

Sparse Fourier Transform ($k = 1$). Warmup: $\hat x$ is exactly 1-sparse: $\hat x_f = 0$ when $f \ne f^*$, for some $f^*$. Note: the signal is a pure frequency. Given: access to $x$. Need: find $f^*$ and $\hat x_{f^*}$.

Two-point sampling. The input signal $x$ is a pure frequency, so $x_j = a\,\omega^{f^* j}$. Sample $x_0, x_1$. We have

$$x_0 = a, \qquad x_1 = a\,\omega^{f^*},$$

so $x_1/x_0 = \omega^{f^*}$, a point on the unit circle. Can read the frequency from the angle!

Pro: constant-time algorithm. Con: depends heavily on the signal being pure.
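A minimal sketch of the noiseless two-point decoder (my code, assuming NumPy):

```python
import numpy as np

def two_point_recover(x0, x1, n):
    """Read f* off the angle of x_1/x_0 for a pure frequency x_j = a * omega^{f* j}."""
    angle = np.angle(x1 / x0)                      # equals 2*pi*f*/n modulo 2*pi
    return int(round(angle * n / (2 * np.pi))) % n

n, f_star, a = 1024, 173, 2.0 - 1.0j
x0, x1 = a, a * np.exp(2j * np.pi * f_star / n)
assert two_point_recover(x0, x1, n) == f_star
```

With noise, the angle of $x_1/x_0$ gets perturbed, and adjacent frequencies are only $2\pi/n$ apart on the circle, so even small noise corrupts the low-order bits of $f^*$; this is the Con above.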

Two-point sampling with noise. The input signal is a pure frequency plus noise, so $x_j = a\,\omega^{f^* j} + \text{noise}$. Sample $x_0, x_1$. We have

$$x_0 = a + \text{noise}, \qquad x_1 = a\,\omega^{f^*} + \text{noise},$$

so $x_1/x_0 = \omega^{f^*} + \text{noise}$. Can read the frequency from the angle!

Pro: constant-time algorithm. Con: depends heavily on the signal being pure.

Sparse Fourier Transform ($k = 1$), warmup part 2: $\hat x$ is 1-sparse plus noise (a head coefficient plus tail noise in the frequency domain). Note: the signal is a pure frequency plus noise. Given: access to $x$. Need: find $f^*$ and $\hat x_{f^*}$.

Ideally, find the pure frequency $\hat x^*$ that approximates $\hat x$ best:

$$\min_{1\text{-sparse } \hat x^*} \|\hat x^* - \hat x\|_2 = \|\text{tail noise}\|_2.$$

$\ell_2/\ell_2$ sparse recovery. Ideally, we would find the pure frequency $\hat x^*$ that approximates $\hat x$ best. We need to allow approximation: find $\hat y$ such that

$$\|\hat x - \hat y\|_2 \le C\,\|\text{tail noise}\|_2,$$

where $C > 1$ is the approximation factor; we will achieve $C = 3$.

Approximation guarantee. Find $\hat y$ such that $\|\hat x - \hat y\|_2 \le 3\,\|\text{tail noise}\|_2$.

Note: this is only meaningful if $\|\hat x\|_2 > 3\,\|\text{tail noise}\|_2$, or, equivalently,

$$\sum_{f \ne f^*} |\hat x_f|^2 \le \varepsilon\,|a|^2,$$

where $a = \hat x_{f^*}$ is the head coefficient (assume this for the lecture).

A robust algorithm for finding the heavy hitter. Assume that $\sum_{f \ne f^*} |\hat x_f|^2 \le \varepsilon\,|a|^2$. We describe the algorithm for the noiseless case first ($\varepsilon = 0$): suppose that $x_j = a\,\omega^{f^* j}$. We will find $f^*$ bit by bit (binary search).

Bit 0. Suppose that $f^* = 2f' + b$; we want $b$. Compute

$$x_r = a\,\omega^{f^* r}, \qquad x_{n/2+r} = a\,\omega^{f^*(n/2+r)}.$$

Claim. For all $r \in [n]$ we have $x_{n/2+r} = x_r\,(-1)^b$. (Even frequencies are $n/2$-periodic, odd frequencies are $n/2$-antiperiodic.)

Proof. $x_{n/2+r} = a\,\omega^{f^*(n/2+r)} = a\,\omega^{f^* r}\,(-1)^{2f'+b} = x_r\,(-1)^b$.

We will need arbitrary $r$'s for the noisy setting.

Bit 0 test. Set $b_0 \leftarrow 0$ if $|x_{n/2+r} + x_r| > |x_{n/2+r} - x_r|$, and $b_0 \leftarrow 1$ otherwise.

Correctness: if $b = 0$, then $|x_{n/2+r} + x_r| = 2|x_r| = 2|a|$ and $|x_{n/2+r} - x_r| = 0$. If $b = 1$, then $|x_{n/2+r} + x_r| = 0$ and $|x_{n/2+r} - x_r| = 2|x_r| = 2|a|$.

Bit 1. We can pretend that $b_0 = 0$. Why?

Claim (time shift theorem). If $y_j = x_j\,\omega^{-j}$, then $\hat y_f = \hat x_{f+1}$.

Proof.

$$\hat y_f = \frac{1}{n}\sum_{j\in[n]} y_j\,\omega^{-fj} = \frac{1}{n}\sum_{j\in[n]} x_j\,\omega^{-j}\,\omega^{-fj} = \frac{1}{n}\sum_{j\in[n]} x_j\,\omega^{-(f+1)j} = \hat x_{f+1}.$$

If $b_0 = 1$, replace $x$ with $y_j := x_j\,\omega^{-j b_0}$, which moves the heavy frequency from $f^*$ to $f^* - b_0$, an even frequency.
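A quick numerical check of the claim (a sketch; it assumes NumPy and the sign conventions fixed above):

```python
import numpy as np

n = 64
rng = np.random.default_rng(1)
x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = x * np.exp(-2j * np.pi * np.arange(n) / n)     # y_j = x_j * omega^{-j}
xhat, yhat = np.fft.fft(x) / n, np.fft.fft(y) / n
assert np.allclose(yhat, np.roll(xhat, -1))        # hat{y}_f = hat{x}_{(f+1) mod n}
```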

Bit 1 (continued). Assume $b_0 = 0$. Then $f^* = 2f'$, so

$$x_j = a\,\omega^{f^* j} = a\,\omega^{2f' j} = a\,\omega_{n/2}^{f' j},$$

where $\omega_{n/2} = \omega^2$ is the $(n/2)$-th root of unity. Let $\hat z_j := \hat x_{2j}$, i.e., the spectrum of $z$ contains the even components of the spectrum of $x$. Then $(x_0,\dots,x_{n/2-1}) = (z_0,\dots,z_{n/2-1})$ are the time samples of $z$, and $\hat z_{f'} = a$ is the heavy hitter in $z$.

So by the previous derivation, $z_{n/4+r} = z_r\,(-1)^{b_1}$, and hence (working with the shifted signal when $b_0$ may be 1)

$$x_{n/4+r}\,\omega^{-(n/4)b_0} = x_r\,(-1)^{b_1}.$$

Decoding bit by bit. Set $b_0 \leftarrow 0$ if $|x_{n/2+r} + x_r| > |x_{n/2+r} - x_r|$, and $b_0 \leftarrow 1$ otherwise.

Set $b_1 \leftarrow 0$ if $|\omega^{-(n/4)b_0}\,x_{n/4+r} + x_r| > |\omega^{-(n/4)b_0}\,x_{n/4+r} - x_r|$, and $b_1 \leftarrow 1$ otherwise.

Then test $|\omega^{-(n/8)(2b_1+b_0)}\,x_{n/8+r} + x_r| > |\omega^{-(n/8)(2b_1+b_0)}\,x_{n/8+r} - x_r|$, and so on.

Overall: $O(\log n)$ samples to identify $f^*$, and $O(\log n)$ runtime.
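Here is a sketch of the full noiseless decoder (my code; `sample(j)` stands for one time-domain access, and $n$ is a power of 2):

```python
import numpy as np

def decode_frequency(sample, n, r=0):
    """Recover f* bit by bit from a pure frequency x_j = a * omega^{f* j}.

    Uses one fresh sample per bit: O(log n) samples and time in total.
    """
    xr = sample(r)
    f = 0                                        # bits b_0 + 2 b_1 + ... found so far
    for t in range(n.bit_length() - 1):          # log2(n) bits
        s = n >> (t + 1)                         # n/2, n/4, n/8, ...
        u = sample(s + r) * np.exp(-2j * np.pi * s * f / n)   # cancel the known low bits
        b = 0 if abs(u + xr) > abs(u - xr) else 1             # the bit test from above
        f |= b << t
    return f

n, f_star = 1024, 357
x = 1.7 * np.exp(2j * np.pi * f_star * np.arange(n) / n)
assert decode_frequency(lambda j: x[j % n], n) == f_star
```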

Noisy setting (dealing with $\varepsilon$). We now have

$$x_j = a\,\omega^{f^* j} + \sum_{f\ne f^*}\hat x_f\,\omega^{fj} = a\,\omega^{f^* j} + \mu_j,$$

where $\mu_j$ is the noise in the time domain. Can we argue that $\mu_j$ is usually small?

Parseval's equality: the noise energy in the time domain is proportional to the noise energy in the frequency domain:

$$\sum_{j=0}^{n-1} |\mu_j|^2 = n \sum_{f\ne f^*} |\hat x_f|^2.$$

So on average $|\mu_j|^2$ is small:

$$\mathbb{E}_j[|\mu_j|^2] = \sum_{f\ne f^*} |\hat x_f|^2 \le \varepsilon\,|a|^2.$$
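A quick numerical check of this identity (a sketch; with our normalization, $x_j = \sum_f \hat x_f\,\omega^{fj}$ corresponds to $n$ times NumPy's ifft):

```python
import numpy as np

n = 256
rng = np.random.default_rng(0)
tail_hat = 0.01 * (rng.normal(size=n) + 1j * rng.normal(size=n))  # hat{x}_f for f != f*
mu = n * np.fft.ifft(tail_hat)                   # mu_j = sum_f hat{x}_f omega^{fj}
assert np.allclose((np.abs(mu) ** 2).sum(), n * (np.abs(tail_hat) ** 2).sum())
```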

We need to ensure that: (1) $f^*$ is decoded correctly; (2) $a$ is estimated well enough to satisfy the $\ell_2/\ell_2$ guarantee $\|\hat x - \hat y\|_2 \le C\,\|\hat x - \hat x^*\|_2$.

Decoding in the noisy setting. Bit 0: set $b_0 \leftarrow 0$ if $|x_{n/2+r} + x_r| > |x_{n/2+r} - x_r|$, and $b_0 \leftarrow 1$ otherwise. Now $x_r = a\,\omega^{f^* r} + \mu_r$ and $x_{n/2+r} = a\,\omega^{f^*(n/2+r)} + \mu_{n/2+r}$.

Claim. If $|\mu_{n/2+r}| < |a|/2$ and $|\mu_r| < |a|/2$, then the outcome of the bit test is the same as in the noiseless case.

Suppose $b_0 = 1$. Then

$$|x_{n/2+r} + x_r| \le |\mu_{n/2+r}| + |\mu_r| < |a|$$

and

$$|x_{n/2+r} - x_r| \ge 2|a| - |\mu_{n/2+r}| - |\mu_r| > |a|.$$

On average $|\mu_j|^2$ is small: $\mathbb{E}_j[|\mu_j|^2] = \sum_{f\ne f^*}|\hat x_f|^2 \le \varepsilon\,|a|^2$. By Markov's inequality,

$$\Pr_j[|\mu_j|^2 > |a|^2/4] \le \Pr_j\big[|\mu_j|^2 > (1/(4\varepsilon))\,\mathbb{E}_j[|\mu_j|^2]\big] \le 4\varepsilon.$$

By a union bound,

$$\Pr_r\big[|\mu_r| \le |a|/2 \text{ and } |\mu_{n/2+r}| \le |a|/2\big] \ge 1 - 8\varepsilon.$$

Thus a bit test is correct with probability at least $1 - 8\varepsilon$.

Bit 0: set $b_0$ to 0 if $|x_{n/2+r} + x_r| > |x_{n/2+r} - x_r|$, and to 1 otherwise. For $\varepsilon < 1/64$, each test is correct with probability at least $3/4$. Final test: perform $T$ independent tests (with fresh random $r$'s) and use the majority vote. How large should $T$ be? What is the success probability?

For $j = 1,\dots,T$, let $Z_j = 1$ if the $j$-th test is correct and $Z_j = 0$ otherwise. We have $\mathbb{E}[Z_j] \ge 3/4$. By the Chernoff bound,

$$\Pr\Big[\sum_{j=1}^T Z_j < T/2\Big] < e^{-\Omega(T)}.$$

Set $T = O(\log\log n)$. Then the majority is correct with probability at least $1 - 1/(16\log_2 n)$, so all $\log_2 n$ bits are correct with probability at least $15/16$.
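In code, the robust bit test just wraps the noiseless test in $T$ repetitions with fresh random shifts and takes the majority (a sketch; the helper robust_bit and its arguments are mine):

```python
import numpy as np

def robust_bit(sample, n, s, f_low, T, rng):
    """Majority over T independent bit tests at random shifts r.

    Each test is correct w.p. >= 3/4 for small eps, so the majority
    errs with probability e^{-Omega(T)}.
    """
    votes = 0
    for _ in range(T):
        r = int(rng.integers(n))
        xr = sample(r)
        u = sample((s + r) % n) * np.exp(-2j * np.pi * s * f_low / n)
        votes += 0 if abs(u + xr) > abs(u - xr) else 1
    return int(votes > T / 2)
```

With $T = O(\log\log n)$ per bit and $\log_2 n$ bits, the total is $O(\log n \cdot \log\log n)$ samples, matching the bounds below.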

Estimating the value of the heavy hitter. Recall that $x_r = a\,\omega^{f^* r} + \mu_r$ (noise). Our estimate: pick a random $r \in [n]$ and output $\text{est} \leftarrow x_r\,\omega^{-f^* r}$. Expected squared error:

$$\mathbb{E}_r[|\text{est} - a|^2] = \mathbb{E}_r[|x_r\,\omega^{-f^* r} - a|^2] = \mathbb{E}_r[|x_r - a\,\omega^{f^* r}|^2] = \mathbb{E}_r[|\mu_r|^2] \le \varepsilon\,|a|^2.$$

Now by Markov's inequality,

$$\Pr_r[|\text{est} - a|^2 > 4\varepsilon\,|a|^2] < 1/4.$$
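A sketch of the estimator (my code):

```python
import numpy as np

def estimate_coefficient(sample, n, f_star, rng):
    """One-sample estimate of a = hat{x}_{f*}: est = x_r * omega^{-f* r} = a + mu_r * omega^{-f* r}."""
    r = int(rng.integers(n))
    return sample(r) * np.exp(-2j * np.pi * f_star * r / n)
```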

Putting it together: the algorithm for 1-sparse signals. Let $\hat y_f = \text{est}$ if $f = f^*$, and $\hat y_f = 0$ otherwise. By the triangle inequality,

$$\|\hat y - \hat x\|_2 \le |\text{est} - a| + \Big(\sum_{f\ne f^*}|\hat x_f|^2\Big)^{1/2} \le 2\,\|\hat x - \hat x^*\|_2 + \|\hat x - \hat x^*\|_2 = 3\,\|\hat x - \hat x^*\|_2,$$

using that $\mathbb{E}_r[|\text{est}-a|^2] = \sum_{f\ne f^*}|\hat x_f|^2$, so with probability $3/4$ we have $|\text{est}-a| \le 2\,\|\hat x - \hat x^*\|_2$. Thus, with probability $2/3$ our algorithm satisfies the $\ell_2/\ell_2$ guarantee with $C = 3$.
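Combining the decoding and estimation sketches above on a noiseless input (purely illustrative; it reuses decode_frequency and estimate_coefficient defined earlier):

```python
import numpy as np

rng = np.random.default_rng(2)
n, f_star, a = 1024, 611, 3.0 + 0.5j
x = a * np.exp(2j * np.pi * f_star * np.arange(n) / n)   # noiseless, for illustration
sample = lambda j: x[j % n]
f_hat = decode_frequency(sample, n)                      # sketch from above
est = estimate_coefficient(sample, n, f_hat, rng)        # sketch from above
assert f_hat == f_star and np.allclose(est, a)
```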

Runtime: $O(\log n \cdot \log\log n)$. Sample complexity: $O(\log n \cdot \log\log n)$.

Exercise 1: reduce the sample complexity to $O(\log n)$, keeping $O(\mathrm{poly}(\log n))$ runtime. Exercise 2: reduce the sample complexity to $O(\log_{1/\varepsilon} n)$.

What about $k > 1$?

Outline: 1. Sparsity: definitions, motivation. 2. Computing the Fourier transform of 1-sparse signals fast. 3. Sparsity $k > 1$: main ideas and challenges.

Sparsity $k > 1$. Let $\hat x^*$ be the best $k$-sparse approximation of $\hat x$. Our goal: find $\hat y$ such that

$$\|\hat x - \hat y\|_2 \le C\,\|\hat x - \hat x^*\|_2,$$

where $C > 1$ is the approximation factor. (This is the $\ell_2/\ell_2$ guarantee.)

Sparsity $k > 1$. Main idea: implement hashing to reduce to the 1-sparse case: hash the frequencies into $k$ bins and run the 1-sparse algorithm on isolated elements. Assumption: we can randomly permute the frequencies (will be removed in the next lecture). How do we implement the hashing? We need to design a bucketing scheme for the frequency domain.

Partition the frequency domain into $B \approx k$ buckets. For each $j = 0,\dots,B-1$ let

$$\hat u^j_f = \begin{cases} \hat x_f, & \text{if } f \text{ is in the } j\text{-th bucket},\\ 0, & \text{otherwise.}\end{cases}$$

Restricted to a bucket, the signal is likely approximately 1-sparse!

Zero-th bucket signal $u^0$:

$$\hat u^0_f = \begin{cases} \hat x_f, & \text{if } f \in [-\tfrac{n}{2B} : \tfrac{n}{2B}],\\ 0, & \text{otherwise.}\end{cases}$$

We want time-domain access to $u^0$: for any $a = 0,\dots,n-1$, compute

$$u^0_a = \sum_f \hat u^0_f\,\omega^{fa} = \sum_{-\frac{n}{2B} \le f \le \frac{n}{2B}} \hat x_f\,\omega^{fa} = \sum_{-\frac{n}{2B} \le f \le \frac{n}{2B}} \hat y_f,$$

where $y_j = x_{j+a}$ ($y$ is a time shift of $x$, so $\hat y_f = \hat x_f\,\omega^{fa}$ by the time shift theorem).

We want time-domain access to $u^0$: for any $a = 0,\dots,n-1$, compute $u^0_a = \sum_{-\frac{n}{2B} \le f \le \frac{n}{2B}} \hat y_f$, where $y_j = x_{j+a}$ ($y$ is a time shift of $x$). Let

$$\hat G_f = \begin{cases} 1, & \text{if } f \in [-\tfrac{n}{2B} : \tfrac{n}{2B}],\\ 0, & \text{otherwise.}\end{cases}$$

Then

$$u^0_a = \sum_{-\frac{n}{2B} \le f \le \frac{n}{2B}} \hat y_f = \sum_{f\in[n]} \hat y_f\,\hat G_f = (\hat y \star \hat G)(0) = (\widehat{x_{\cdot+a}} \star \hat G)(0).$$

We need to evaluate $(\hat x \star \hat G)\big(j\tfrac{n}{B}\big)$ for $j = 0,\dots,B-1$. But we have access to $x$, not $\hat x$... By the convolution identity,

$$\hat x \star \hat G = \widehat{x\cdot G},$$

so it suffices to compute $\widehat{x\cdot G}_{j\frac{n}{B}}$ for $j = 0,\dots,B-1$.
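Evaluating $\widehat{x\cdot G}$ at just these $B$ frequencies does not need a full $n$-point FFT: subsampling a spectrum at multiples of $n/B$ equals the $B$-point DFT of the signal folded modulo $B$ (aliasing). A sketch (my code, assuming $B$ divides $n$):

```python
import numpy as np

def bucket_values(x, G, B):
    """Return widehat{x * G} at frequencies j*n/B, j = 0..B-1 (lecture normalization 1/n).

    Folding x*G modulo B and taking a B-point FFT yields exactly these B values.
    """
    n = len(x)
    folded = (x * G).reshape(n // B, B).sum(axis=0)  # fold: sum entries with equal index mod B
    return np.fft.fft(folded) / n                    # B-point DFT: O(B log B) after the fold
```

Note that the fold touches every index where $G$ is nonzero, which is exactly the catch discussed next.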

It suffices to compute $\widehat{x\cdot G}_{j\frac{n}{B}}$ for $j = -B/2,\dots,B/2-1$. Sample complexity? Runtime?

Computing $x\cdot G$ takes $\Omega(n)$ time and samples! (The boxcar $\hat G$ corresponds to a sinc-like $G$ that is nonzero everywhere in the time domain.)

Design a filter with $|\mathrm{supp}(G)| \approx k$? Truncate the sinc? Tolerate imprecise hashing? Collisions in buckets?
