Introduction to compressive sampling


1 Introduction to compressive sampling Sparsity and the equation Ax = y Emanuele Grossi DAEIMI, Università degli Studi di Cassino e.grossi@unicas.it Gorini 2010, Pistoia

2 Outline 1 Introduction Traditional data acquisition Compressive data acquisition 2 Compressed sensing Measurement protocol Recovery procedure Recovery conditions Sensing matrices 3 Discussion Connections with other fields Numerical methods Applications Conclusions

3 Outline 1 Introduction Traditional data acquisition Compressive data acquisition 2 Compressed sensing Measurement protocol Recovery procedure Recovery conditions Sensing matrices 3 Discussion Connections with other fields Numerical methods Applications Conclusions

4 Traditional data acquisition Uniformly sample (or sense) data at the Nyquist rate, then compress the data (adaptive, non-linear). Pipeline: data → sample (size n) → compress (size k ≪ n) → transmit/store → receive (size k) → decompress → recovered data (size n)

5 Sparsity/Compressibility Many signals can be well-approximated by a sparse expansion in terms of a suitable basis, i.e., by a few non-zero coefficients. Example (Fourier transform): a signal with n = 512 time samples whose spectrum has only k = 6 ≪ n non-zero frequency components

6 Sparsity/Compressibility Wavelet transform: a 1.5 MB image and its representation in the wavelet domain

7 Sparsity/Compressibility Wavelet transform: the sorted wavelet coefficients of the image decay rapidly; thresholding keeps only k = 7% of the n coefficients

8 Sparsity/Compressibility Wavelet transform: original image vs. the result of compressing and decompressing with the k largest wavelet coefficients

9 Traditional data acquisition Pipeline: data → sample (size n) → compress (size k ≪ n) → transmit/store → receive (size k) → decompress → recovered data (size n). Pro: simple data recovery. Cons: inefficient, since n can be very large even if k is small; n transform coefficients must be computed but only the largest k are stored; the locations of the k largest coefficients must also be encoded (= overhead); and in some applications measurements can be costly, lengthy, or otherwise difficult (e.g., radar, MRI, etc.)

10 Outline 1 Introduction Traditional data acquisition Compressive data acquisition 2 Compressed sensing Measurement protocol Recovery procedure Recovery conditions Sensing matrices 3 Discussion Connections with other fields Numerical methods Applications Conclusions

11 Compressive data acquisition Why spend so much effort to acquire all the data when most of it will be discarded? Wouldn't it be possible to acquire the data in a compressed form, so that nothing needs to be thrown away? Yes: compressed sensing (CS). Compressed sensing, a.k.a. compressive sensing or compressive sampling, is a simple and efficient signal acquisition protocol. E. J. Candès, J. Romberg, and T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory, 2006; D. L. Donoho, Compressed sensing, IEEE Trans. Inform. Theory, 2006.

12 Compressive data acquisition CS samples in a signal-independent fashion at low rate, and later uses computational power to exploit sparsity for reconstruction from what appears to be an incomplete set of measurements. Pipeline: data → compressed sensing (size m = O(k ln n)) → transmit/store → receive (size m) → sparsity-aware reconstruction → recovered data (size n). Benefits: reduced measurement time, reduced sampling rates, reduced ADC resource usage

13 Outline 1 Introduction Traditional data acquisition Compressive data acquisition 2 Compressed sensing Measurement protocol Recovery procedure Recovery conditions Sensing matrices 3 Discussion Connections with other fields Numerical methods Applications Conclusions

14 CS recipe Sparse signal representation Coded measurements (sampling process) Recovery algorithms (non-linear)

15 Sparse signal representation Many types of real-world signals (e.g., sound, images, video) can be viewed as an n-dimensional vector of real numbers, where n is large (e.g., n = 10^6). They may have a concise representation in a suitable basis: s = (s_1, ..., s_n)ᵀ = Σ_{i=1}^n x_i ψ_i = Ψx, where s is the signal to be sensed, the basis vectors (not necessarily orthogonal) are collected into an n × n matrix Ψ = (ψ_1 ⋯ ψ_n) (e.g., spikes, sinusoids, wavelets, etc.), and the signal coefficients are collected into an n-dimensional vector x

16 Sparsity and compressibility ‖x‖_p = (Σ_{i=1}^n |x_i|^p)^{1/p}, the ℓ_p-norm, e.g., ‖x‖_2 = √(Σ_{i=1}^n |x_i|²), the Euclidean norm; ‖x‖_1 = Σ_{i=1}^n |x_i|, which gives the Manhattan distance; ‖x‖_0 = card{ i ∈ {1, ..., n} : x_i ≠ 0 }, the number of non-zero entries (little abuse: it is not a norm)

17 Sparsity and compressibility ‖x‖_p = (Σ_{i=1}^n |x_i|^p)^{1/p}, the ℓ_p-norm. Definitions: x is sparse if ‖x‖_0 ≪ n; x is k-sparse if ‖x‖_0 ≤ k ≪ n; the best k-term approximation of x is x_k = argmin_{w : ‖w‖_0 ≤ k} ‖x − w‖_p, i.e., x with its smallest n − k entries set to 0; x is compressible if ‖x − x_k‖_p ≤ c k^{−r} for some c > 0 and r > 1 (namely, the approximation error decays quickly in k)
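A minimal numpy sketch of the best k-term approximation defined above: keep the k largest-magnitude entries and zero out the rest. The function name and the toy vector are illustrative, not from the slides.

```python
import numpy as np

def best_k_term(x, k):
    """Return x with all but its k largest-magnitude entries set to 0."""
    xk = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest |x_i|
    xk[idx] = x[idx]
    return xk

x = np.array([0.1, -3.0, 0.02, 2.5, -0.2, 0.01])
print(best_k_term(x, 2))               # keeps -3.0 and 2.5, zeroes the rest
```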

18 Traditional compression If x is compressible, then encode x with x_k. Inefficiencies of the protocol: it is adaptive (i.e., x must be known in order to select its largest k entries) and can be non-linear

19 Measurement model Take m linear measurements: y = Φs = ΦΨx = Ax, a linear model, where Φ is the m × n measurement matrix and y is the m-dimensional vector of measurements. Common sensing matrices: if the rows of Φ are Dirac deltas, y contains the samples of s; if the rows of Φ are sinusoids, y contains the Fourier coefficients of s (typical in MRI); many others
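As a concrete, purely illustrative instance of the model y = Φs = ΦΨx = Ax, the sketch below builds a signal that is k-sparse in the DCT basis and senses it with a random Φ; the sizes, the seed, and the choice of the DCT as Ψ are assumptions, not taken from the slides.

```python
import numpy as np
from scipy.fft import idct

n, m, k = 512, 120, 10
rng = np.random.default_rng(0)

Psi = idct(np.eye(n), axis=0, norm='ortho')      # columns = orthonormal DCT basis
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # k-sparse coefficients

s = Psi @ x                                      # signal in the sample domain
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random m x n sensing matrix
A = Phi @ Psi
y = Phi @ s                                      # m compressive measurements
assert np.allclose(y, A @ x)                     # y = Phi s = Phi Psi x = A x
```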

20 Traditional data acquisition (again) Sensor side: s (size n) → measurement process y = Ax (size m ≥ n) → compress to x_k (size k ≪ n). Receiver side: x_k (size k) → decompress → ŝ (size n). Compression is adaptive (or signal-dependent): x must be known. Inversion of Ax = y happens at the sensor side and neglects sparsity: m ≥ n is required for matrix inversion

21 CS intuition If x is k-sparse, then it should have k degrees of freedom, not n: only k measurements or so should be needed. Analogy with the 12-coin problem: of 12 coins, one is counterfeit and weighs either more or less than the others; find the counterfeit coin and say if it's lighter or heavier with 3 weighings on a balance scale. General problem: (3^p − 3)/2 coins and p weighings. The slide shows a possible weighing plan (1st and 2nd weighings) for the 3-coin case

22 CS intuition The slide shows a possible weighing plan (1st, 2nd, and 3rd weighings) for the 12-coin case. Key points: the counterfeit data is sparse; weigh the coins in suitably chosen batches; each measurement picks up a little information about many coins

23 CS protocol Sensor side: s (size n) → measurement process y = Ax → y (size m = O(k ln n)). Receiver side: y (size m) → reconstruct → ŝ (size n). Reconstruction exploits sparsity, and m can be comparable with k. Inversion of Ax = y happens at the receiver side through non-linear processing. Measurements have to be suitably designed; remarkably, random measurement matrices work! Sensing is non-adaptive (i.e., signal-independent): no need to know x

24 CS protocol Sensor side: s (size n) → measurement process y = Ax → y (size m = O(k ln n)). Receiver side: y (size m) → reconstruct → ŝ (size n). What is needed then is: a reconstruction algorithm to invert Ax = y, and a sensing matrix Φ that gives a good A

25 Outline 1 Introduction Traditional data acquisition Compressive data acquisition 2 Compressed sensing Measurement protocol Recovery procedure Recovery conditions Sensing matrices 3 Discussion Connections with other fields Numerical methods Applications Conclusions

26 The equation Ax = y Assume w.l.o.g. rank(A) = min{m, n}. m = n, determined system: solution x = A⁻¹y. m > n, over-determined system; two cases: if y ∈ I(A) (the image of A), the solution is (AᵀA)⁻¹Aᵀy = A†y; if y ∉ I(A) (e.g., noisy measurements), there is no solution, and the least-squares (LS) one is argmin_x ‖Ax − y‖_2 = (AᵀA)⁻¹Aᵀy = A†y. m < n, under-determined system: infinitely many solutions; the LS (minimum-norm) one is argmin_{x : Ax = y} ‖x‖_2 = Aᵀ(AAᵀ)⁻¹y = A†y
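The three cases can be checked numerically with the Moore-Penrose pseudo-inverse A† (np.linalg.pinv); a sketch with arbitrary random full-rank matrices, as assumed on the slide.

```python
import numpy as np

rng = np.random.default_rng(1)

# m > n, y not in I(A): the LS solution equals A†y = (AᵀA)⁻¹Aᵀy
A = rng.standard_normal((20, 5)); y = rng.standard_normal(20)
x_ls = np.linalg.pinv(A) @ y
assert np.allclose(x_ls, np.linalg.solve(A.T @ A, A.T @ y))

# m < n: the minimum-norm solution equals A†y = Aᵀ(AAᵀ)⁻¹y and satisfies Ax = y
A = rng.standard_normal((5, 20)); y = rng.standard_normal(5)
x_mn = np.linalg.pinv(A) @ y
assert np.allclose(x_mn, A.T @ np.linalg.solve(A @ A.T, y))
assert np.allclose(A @ x_mn, y)
```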

27 The under-determined case Recovery of x is possible only if prior information is available. If x is low-energy (i.e., ‖x‖_2 is small), then LS is reasonable. Least-squares: min_x ‖x‖_2, s.t. Ax = y. Unique solution for any A and y; solution in closed form

28 The under-determined case Recovery of x is possible only if prior information is available. If x is sparse (i.e., ‖x‖_0 is small): Problem (P_0): min_x ‖x‖_0, s.t. Ax = y. The solution is not always unique, and the problem is in general NP-hard

29 Uniqueness Proposition. If any 2k ≤ m columns of A are linearly independent, then any k-sparse signal x can be recovered uniquely from Ax. Proof: if not, there would exist k-sparse x_1 ≠ x_2 such that Ax_1 = Ax_2. This implies A(x_1 − x_2) = 0, with x_1 − x_2 2k-sparse, which is not possible. Observation. If the entries (A)_{i,j} are i.i.d. Gaussian (or drawn from another continuous distribution), then the condition is satisfied w.p. 1

30 Computational complexity The slide plots |x|^p for p = 2, 1, 1/3, and 0. ‖x‖_p^p is convex if p ≥ 1 and non-convex otherwise. That's why (P_0) is hard!

31 Computational complexity Possible ways out: look for iterative algorithms (greedy algorithms), or convex relaxation: use the convex norm with the lowest p, i.e., the ℓ_1-norm

32 ℓ_1 regularization Problem (P_1): min_x ‖x‖_1, s.t. Ax = y. (P_1) is a convex optimization problem and admits a solution. It can be recast as min_{t,x} Σ_{i=1}^n t_i, s.t. |x_i| ≤ t_i ∀i, Ax = y: a linear program (LP) in the real case, a second-order cone program (SOCP) in the complex case; fast (polynomial-time), accurate algorithms are available
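The LP recast can be tried directly with an off-the-shelf solver. The sketch below uses the equivalent split x = u − v with u, v ≥ 0 (min Σ(u_i + v_i) s.t. Au − Av = y), solved with scipy.optimize.linprog; sizes, seed, and the helper name are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 s.t. Ax = y, as an LP in (u, v) with x = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                      # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])               # equality constraint A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method='highs')
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(2)
n, m, k = 60, 25, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n); x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = basis_pursuit(A, A @ x)
print(np.max(np.abs(x_hat - x)))            # should be ~0: exact recovery w.h.p.
```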

33 ℓ_1 regularization Problem (P_1): min_x ‖x‖_1, s.t. Ax = y. A heuristic way to obtain sparse solutions. In the two-dimensional example on the slide, the solution is always unique and sparse unless the line {x : Ax = y} has ±45° slope; if A is sampled from an i.i.d. continuous distribution, this happens w.p. 0

34 ℓ_0, ℓ_1, and ℓ_2 together The slide compares the three recoveries geometrically: the affine set {z : Az = y} intersected with the ℓ_0, ℓ_2, and ℓ_1 balls, with x̂ the recovered point. Here x is k-sparse and y = Ax. Example: k = 1, A ∈ R^{2×3}, any 2 columns of A linearly independent

35 ℓ_0, ℓ_1, and ℓ_2 together With x k-sparse and y = Ax: ℓ_0 works if any 2k columns of A are linearly independent; ℓ_2 never works; ℓ_1 works if the condition on A is strengthened

36 Example Reconstruction of a 512-long signal (a superposition of 10 cosines) from 120 random measurements, shown in the time domain and in the frequency domain

37 Example Reconstruction of a 512-long signal from 120 random measurements: ℓ_2 reconstruction vs. ℓ_1 reconstruction in the frequency domain

38 Example Reconstruction of a 256 × 256 image (= 65536 pixels) from 5481 measurements in the Fourier domain: the Shepp-Logan phantom (a toy model for MRI), with a sampling pattern of 22 approximately radial lines in the frequency domain. E. J. Candès, J. Romberg, and T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory, 2006.

39 Example Reconstruction of a 256 × 256 image (= 65536 pixels) from 5481 measurements in the Fourier domain: original, min-energy reconstruction, and min-TV reconstruction. E. J. Candès, J. Romberg, and T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory, 2006.

40 Outline 1 Introduction Traditional data acquisition Compressive data acquisition 2 Compressed sensing Measurement protocol Recovery procedure Recovery conditions Sensing matrices 3 Discussion Connections with other fields Numerical methods Applications Conclusions

41 Sparse recovery & incoherence Theorem. Let (a_1 ⋯ a_n) be the columns of A, normalized so that ‖a_i‖_2 = 1 ∀i, let M = max_{i≠j} |a_iᵀa_j|, and let y = Ax. If ‖x‖_0 < (1 + 1/M)/2, then x is the unique solution of (P_1). M = max_{i≠j} |a_iᵀa_j| is called the mutual coherence. Easy to check, but coarse/pessimistic. D. L. Donoho and M. Elad, Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ_1 minimization, Proc. Nat. Acad. Sci., 2003.
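The mutual coherence is indeed easy to check; a direct computation (sizes and seed are illustrative):

```python
import numpy as np

def mutual_coherence(A):
    """M = max_{i != j} |a_i^T a_j| after normalizing the columns of A."""
    An = A / np.linalg.norm(A, axis=0)      # unit l2-norm columns
    G = np.abs(An.T @ An)                   # absolute Gram matrix
    np.fill_diagonal(G, 0.0)                # ignore the diagonal (i = j)
    return G.max()

rng = np.random.default_rng(3)
A = rng.standard_normal((64, 256))
M = mutual_coherence(A)
print(M, (1 + 1 / M) / 2)   # (P_1) recovery guaranteed below this sparsity level
```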

42 Sparse recovery & RIP For k ∈ {1, 2, ..., n}, let δ_k be the smallest δ such that (1 − δ)‖x‖_2² ≤ ‖Ax‖_2² ≤ (1 + δ)‖x‖_2², ∀x : ‖x‖_0 ≤ k. A satisfies the restricted isometry property (RIP) of order k if δ_k ∈ [0, 1), i.e., any k columns are nearly orthogonal. Theorem. Let y = Ax and let x̂ be the solution of (P_1). If δ_{2k} < √2 − 1, then ‖x − x̂‖_1 ≤ C_0 ‖x − x_k‖_1 and ‖x − x̂‖_2 ≤ C_0 ‖x − x_k‖_1/√k for some constant C_0. In particular, if x is k-sparse, x̂ = x. E. J. Candès, The restricted isometry property and its implications for compressed sensing, Comptes Rendus de l'Académie des Sciences, 2008.
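By contrast, δ_k is hard to compute: the definition ranges over every support of size k. The brute-force sketch below (feasible only for tiny n and k) makes this explicit by taking the extreme eigenvalues of A_SᵀA_S over all supports S; sizes and seed are illustrative.

```python
import numpy as np
from itertools import combinations

def rip_constant(A, k):
    """delta_k = max over |S| = k of the deviation of eig(A_S^T A_S) from 1."""
    delta = 0.0
    for S in combinations(range(A.shape[1]), k):
        w = np.linalg.eigvalsh(A[:, S].T @ A[:, S])   # eigenvalues, ascending
        delta = max(delta, abs(w[0] - 1.0), abs(w[-1] - 1.0))
    return delta

rng = np.random.default_rng(4)
A = rng.standard_normal((15, 20)) / np.sqrt(15)   # columns have unit norm on average
print(rip_constant(A, 2))                         # cost grows as C(n, k)
```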

43 Noisy measurements Any measurement process introduces noise (say n): y = Ax + n. In this case, if ‖n‖_2 ≤ ε: Problem (P_1^ε): min_x ‖x‖_1, s.t. ‖Ax − y‖_2 ≤ ε. (P_1^ε) is a convex optimization problem and can be recast as the SOCP min_{t,x} Σ_{i=1}^n t_i, s.t. |x_i| ≤ t_i ∀i, ‖Ax − y‖_2 ≤ ε

44 Noisy measurements Problem (P_1^ε): min_x ‖x‖_1, s.t. ‖Ax − y‖_2 ≤ ε. The slide shows the geometry: the ℓ_1 ball grown until it touches the tube {x : ‖Ax − y‖_2 ≤ ε}

45 Approximate recovery & RIP Theorem. Let y = Ax + n, with ‖n‖_2 ≤ ε, and let x̂ be the solution of (P_1^ε). If δ_{2k} < √2 − 1, then ‖x − x̂‖_2 ≤ C_0 ‖x − x_k‖_1/√k + C_1 ε for some constant C_1 (C_0 same as before). Stable recovery: the reconstruction error is bounded by 2 terms, one the same as in the noiseless case, one proportional to the noise level. C_0 and C_1 are rather small, e.g., if δ_{2k} = 0.25, then C_0 ≤ 5.5 and C_1 ≤ 6. E. J. Candès, The restricted isometry property and its implications for compressed sensing, Comptes Rendus de l'Académie des Sciences, 2008.

46 Sparse recovery & NSP A has the null space property (NSP) of order k if, for some γ ∈ (0, 1), ‖η_T‖_1 ≤ γ ‖η_{T^c}‖_1, ∀η ∈ ker(A), ∀T ⊂ {1, ..., n} with card(T) ≤ k. The elements in the null space should have no structure (they should look like noise). The NSP is actually equivalent to sparse ℓ_1-recovery, since: Theorem. Let y = Ax. If A has the NSP of order k, then x is the solution of (P_1) for every k-sparse x. Conversely, if x is the solution of (P_1) for every k-sparse x, then A has the NSP of order 2k. A. Cohen, W. Dahmen, and R. DeVore, Compressed sensing and best k-term approximation, J. Amer. Math. Soc., 2009.

47 Recovery conditions Mutual coherence: easy to check but coarse/pessimistic. RIP: maybe almost sharp, works in the noisy case, but hard to compute; not invariant to invertible linear transformations G, i.e., y = Ax ⇔ Gy = GAx, but A satisfying the RIP does not imply that GA satisfies the RIP. NSP: tight but hard to compute (usually the NSP is verified through the RIP); not available in the noisy case. Others: many conditions are present in the literature (e.g., incoherence between Φ and Ψ)

48 How many measurements? If ‖x‖_1 ≤ R, the reconstruction error from m linear measurements of any recovery method is lower bounded by C_2 R √((ln(n/m) + 1)/m), for some constant C_2. If A is such that δ_{2k} ≤ √2 − 1, then ‖x − x̂‖_2 ≤ C_0 ‖x − x_k‖_1/√k ≤ C_0 R/√k. Thus C_0 R/√k ≥ C_2 R √((ln(n/m) + 1)/m), and then, for a constant C, m ≥ C k (ln(n/m) + 1): O(k ln n) measurements are sufficient to recover the signal with an accuracy comparable to that attainable with direct knowledge of the k largest coefficients. B. Kashin, Diameters of some finite-dimensional sets and classes of smooth functions, Izv. Akad. Nauk SSSR, Ser. Mat., 1977; A. Y. Garnaev and E. D. Gluskin, On widths of the Euclidean ball, Sov. Math. Dokl., 1984.

49 Outline 1 Introduction Traditional data acquisition Compressive data acquisition 2 Compressed sensing Measurement protocol Recovery procedure Recovery conditions Sensing matrices 3 Discussion Connections with other fields Numerical methods Applications Conclusions

50 Good sensing matrices Goal: find A which satisfies the RIP. Deterministic constructions of A have been proposed, but m is much larger than the optimal value. Try random matrices instead, and accept a (hopefully small) probability of failure. Key property: concentration inequality. The random matrix A satisfies the concentration inequality if, ∀x and ∀ε ∈ (0, 1), P(|‖Ax‖_2² − ‖x‖_2²| ≥ ε‖x‖_2²) ≤ 2e^{−m c_ε}, where c_ε > 0

51 Good sensing matrices Theorem. Let δ ∈ (0, 1). If A satisfies the concentration inequality, then there exist constants c_1, c_2 > 0 depending only on δ such that the restricted isometry constant of A satisfies δ_k ≤ δ with probability exceeding 1 − 2e^{−c_1 m}, provided that m ≥ c_2 k ln(n/k). Observation: m ≥ c_2 k ln(n/k) implies m ≥ (c_2/(1 + c_2)) k (ln(n/m) + 1), i.e., the requirement matches the optimal order. R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, A Simple Proof of the Restricted Isometry Property for Random Matrices, Constr. Approx., 2009.

52 Random sensing Random matrices: concentration inequality ⇒ RIP ⇒ solution of (P_1)-(P_1^ε). Random matrices allow perfect/approximate recovery of k-sparse/compressible signals with overwhelming probability using O(k ln n) measurements. Examples: two important cases satisfy the concentration inequality: Gaussian, (A)_{i,j} ~ N(0, 1/m) i.i.d.; Bernoulli, (A)_{i,j} ~ B(1/2) i.i.d. with values ±1/√m
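Both ensembles are one line of numpy each; the quick check below draws them and verifies empirically that ‖Ax‖_2² concentrates around ‖x‖_2² (sizes and seed are illustrative).

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 128, 512

A_gauss = rng.standard_normal((m, n)) / np.sqrt(m)          # (A)_ij ~ N(0, 1/m)
A_bern = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # +-1/sqrt(m), prob. 1/2

x = rng.standard_normal(n)
for A in (A_gauss, A_bern):
    print(np.linalg.norm(A @ x)**2 / np.linalg.norm(x)**2)  # near 1, tighter as m grows
```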

53 The sensing matrix Recall that y = Φs = ΦΨx = Ax, where Φ is the sensing matrix and Ψ the sparsifying matrix. If Ψ is orthogonal, Φ = AΨᵀ. Not actually needed: just take Φ Gaussian or Bernoulli. If Φ satisfies the concentration inequality, so does A: P(|‖Ax‖_2² − ‖x‖_2²| ≥ ε‖x‖_2²) = P(|‖Φs‖_2² − ‖Ψᵀs‖_2²| ≥ ε‖Ψᵀs‖_2²) = P(|‖Φs‖_2² − ‖s‖_2²| ≥ ε‖s‖_2²). Random sensing is universal: it does not matter in which basis the signal is sparse (Ψ is not needed at the sensor side)

54 Random partial Fourier matrices Gaussian and Bernoulli matrices provide the minimal number of measurements, but physical constraints on the sensor may preclude Gaussian measurements, and both are unstructured: no fast matrix-vector multiplication algorithm. Possible alternative: select m rows uniformly at random from an n × n Fourier matrix F, with (F)_{h,k} = (1/√n) e^{2πi hk/n}. This is equivalent to observing m random entries of the DFT of the signal. The RIP holds with high probability, but m ≥ C k ln⁴ n
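A random partial Fourier matrix is equally short to build; the sketch below also checks the "random DFT entries" interpretation (numpy's FFT uses the opposite sign convention in the exponent, which is immaterial here; sizes and seed are illustrative).

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 256, 64
F = np.fft.fft(np.eye(n)) / np.sqrt(n)     # unitary n x n DFT matrix
rows = rng.choice(n, size=m, replace=False)
A = F[rows, :]                             # m rows chosen uniformly at random

x = rng.standard_normal(n)
assert np.allclose(A @ x, np.fft.fft(x)[rows] / np.sqrt(n))  # m random DFT entries
```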

55 More random matrices (A)_{i,j} i.i.d. with distribution (1/6)δ_{√(3/m)} + (2/3)δ_0 + (1/6)δ_{−√(3/m)}; in this case m ≥ C k ln(n/k). A formed by selecting its columns uniformly at random from the surface of the unit ℓ_2 sphere in R^m; in this case m ≥ C k ln(n/k). A formed by selecting m rows uniformly at random from a unitary matrix U and re-normalizing the columns to have unit ℓ_2-norm; in this case m ≥ C μ² k ln⁴ n, where μ = √n max_{i,j} |(U)_{i,j}| (in the Fourier matrix, μ = 1)

56 More random matrices If U = ΦΨ, with both Φ and Ψ unitary, then m ≥ C μ² k ln⁴ n, with μ = √n max_{i,j} |⟨φ_i, ψ_j⟩|. μ is a measure of the mutual incoherence between the measurement basis and the sparsity basis. μ ∈ [1, √n], and low coherence is good: e.g., Φ = Fourier and Ψ = I gives μ = 1, i.e., maximal incoherence. The basis vectors of Ψ must be spread out in the basis Φ (e.g., in the Fourier-identity case, a spike is spread over all the complex exponentials). Sparse signal → incoherent measurements

57 Outline 1 Introduction Traditional data acquisition Compressive data acquisition 2 Compressed sensing Measurement protocol Recovery procedure Recovery conditions Sensing matrices 3 Discussion Connections with other fields Numerical methods Applications Conclusions

58 Three equivalent problems There is considerable interest in solving the unconstrained optimization problem (convex but non-differentiable) min_x { ‖Ax − y‖_2² + λ‖x‖_1 } (1). Example: Bayesian estimation. If x is Laplace and n is white Gaussian, the MAP estimate of x from y = Ax + n solves (1), since max_x f(x|y) ∝ max_x f(y|x) f(x) = max_x e^{−‖Ax−y‖_2²/(2σ²)} e^{−γ‖x‖_1}
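A standard first-order method for (1) is iterative soft-thresholding (ISTA), alternating a gradient step on ‖Ax − y‖_2² with the soft-threshold (the proximal operator of λ‖·‖_1). This is a minimal sketch, not one of the methods named later in the talk; step size and iteration count are ad hoc assumptions.

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Minimize ||Ax - y||_2^2 + lam * ||x||_1 by iterative soft-thresholding."""
    L = 2.0 * np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = 2.0 * A.T @ (A @ x - y)          # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x0 = np.zeros(200); x0[rng.choice(200, 5, replace=False)] = 1.0
x_hat = ista(A, A @ x0 + 0.01 * rng.standard_normal(50), lam=0.05)
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # should match the support of x0
```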

59 Three equivalent problems There is considerable interest in solving the unconstrained optimization problem (convex but non-differentiable) min_x { ‖Ax − y‖_2² + λ‖x‖_1 } (1). Problem (1) is closely related to min ‖x‖_1, s.t. ‖Ax − y‖_2 ≤ ε (2a), and min ‖Ax − y‖_2², s.t. ‖x‖_1 ≤ η (2b). The solution of (2a), which is just (P_1^ε), is either x = 0 or a solution of (1) for some λ > 0. The solution of (2b) is also a solution of (1) for some η

60 Geophysics: early references min_x { ‖Ax − y‖_2² + λ‖x‖_1 } (1). Claerbout and Muir wrote in 1973: "In deconvolving any observed seismic trace, it is rather disappointing to discover that there is a nonzero spike at every point in time regardless of the data sampling rate. One might hope to find spikes only where real geologic discontinuities take place. Perhaps the l1 norm can be utilized to give a [sparse] output trace..." Santosa and Symes proved in 1986 that (1) succeeds under mild conditions in recovering spike trains from seismic traces. J. F. Claerbout and F. Muir, Robust modeling of erratic data, Geophysics, 1973. F. Santosa and W. W. Symes, Linear inversion of band-limited reflection seismograms, SIAM J. Sci. Statist. Comput., 1986.

61 Signal processing: basis pursuit The signal to be represented is y = (a_1 ⋯ a_n)(x_1, ..., x_n)ᵀ = Ax, where the columns a_i are the (overcomplete) basis vectors and x holds the (sparse) coefficients. There is a very large number of basis functions (called a dictionary), so that x is likely to be sparse. Goal: find a good fit of the signal as a linear combination of a small number of the basis functions, i.e., basis pursuit (BP). BP finds signal representations in overcomplete dictionaries by solving min_x ‖x‖_1, s.t. Ax = y. S. S. Chen, D. L. Donoho, and M. A. Saunders, Atomic decomposition by basis pursuit, SIAM J. Sci. Comput., 1999.

62 Signal processing: basis pursuit With noisy measurements of the signal, y = Ax + n, BP can still be used (basis pursuit denoising). In this case one solves min_x ‖x‖_1, s.t. ‖Ax − y‖_2² ≤ ε, which is (P_1^ε), or equivalently min_x { ‖Ax − y‖_2² + λ‖x‖_1 }. S. S. Chen, D. L. Donoho, and M. A. Saunders, Atomic decomposition by basis pursuit, SIAM J. Sci. Comput., 1999.

63 Statistics: linear regression A common problem in statistics is linear regression: y = Σ_{i=1}^n a_i x_i + n = Ax + n, where y holds the measurements (response variables), the a_i are the regressors (explanatory variables), x is the parameter vector (regression coefficients), and n is the noise (error term), i.i.d. and zero-mean. In order to mitigate modeling biases, a large number of regressors can be included, so that m < n. Goals: minimize the prediction error ‖y − Ax‖_2 (good data fit) and identify the significant regressors (variable selection)

64 Regularization Penalized regression can be used: min_x { ‖Ax − y‖_2² + λ‖x‖_p }. As the parameter λ varies over (0, ∞), its solution traces out the optimal trade-off curve. The most common is ridge regression, min_x { ‖Ax − y‖_2² + λ‖x‖_2² }. The solution is x̂ = (AᵀA + λI)⁻¹Aᵀy, but it cannot produce model selection. A. E. Hoerl and R. W. Kennard, Ridge regression: applications to nonorthogonal problems, Technometrics, 1970.
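The closed-form ridge solution is a one-liner; a toy check (sizes, seed, and λ are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(10)
A = rng.standard_normal((30, 10)); y = rng.standard_normal(30)
lam = 0.5
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ y)  # (AᵀA + λI)⁻¹Aᵀy
print(x_ridge)   # dense: generically every entry is non-zero, hence no selection
```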

65 Regularization Bridge regression is more general: min_x { ‖Ax − y‖_2² + λ‖x‖_p^p }. If p ≤ 1 and λ is sufficiently large, it combines parameter estimation and model selection. The p = 1 case is related to the least absolute shrinkage and selection operator (lasso), min_x ‖Ax − y‖_2, s.t. ‖x‖_1 ≤ η. The lasso and problem (P_1^ε) are formally identical, and equivalent to min_x { ‖Ax − y‖_2² + λ‖x‖_1 }. I. E. Frank and J. H. Friedman, A statistical view of some chemometrics regression tools, Technometrics, 1993. R. Tibshirani, Regression shrinkage and selection via the lasso, J. Roy. Statist. Soc. B, 1996.

66 Variants of lasso If more prior information (other than sparsity) is available on x, it can be included in the optimization problem through proper penalizing terms. The fused lasso preserves local constancy when the regressors are properly arranged: min_x { ‖Ax − y‖_2² + λ_1 ‖x‖_1 + λ_2 Σ_{i=2}^n |x_i − x_{i−1}| }. In reconstruction/denoising problems this can be used to recover sparse and piece-wise constant signals. If the signal is smooth, the total variation Σ_{i=2}^n |x_i − x_{i−1}| can be substituted with a quadratic smoothing term Σ_{i=2}^n |x_i − x_{i−1}|². R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight, Sparsity and smoothness via the fused lasso, J. Roy. Stat. Soc. B, 2005.
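The total-variation term can be written as ‖Dx‖_1 for a first-order difference matrix D, which is how it is typically passed to a solver; a small illustrative sketch:

```python
import numpy as np

n = 8
D = np.zeros((n - 1, n))                 # (Dx)_i = x_{i+1} - x_i
i = np.arange(n - 1)
D[i, i] = -1.0
D[i, i + 1] = 1.0

x = np.array([0., 0., 0., 3., 3., 3., 1., 1.])   # piece-wise constant signal
print(np.abs(D @ x).sum())               # total variation: |3-0| + |1-3| = 5
print(np.sum((D @ x) ** 2))              # quadratic smoothing variant: 9 + 4 = 13
```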

67 Variants of lasso The group lasso promotes group selection: min_x { ‖Ax − y‖_2² + λ Σ_{i=1}^k ‖x_i‖_2 }, where x = (x_1ᵀ ⋯ x_kᵀ)ᵀ has been partitioned into k groups; effective for recovery of sparse signals whose coefficients appear in groups. The elastic net is a stabilized version of the lasso: min_x { ‖Ax − y‖_2² + λ_1 ‖x‖_1 + λ_2 ‖x‖_2² }. It can select more than m variables even when m < n; it's in between ridge and lasso. M. Yuan and Y. Lin, Model selection and estimation in regression with grouped variables, J. Roy. Stat. Soc. B, 2006. H. Zou and T. Hastie, Regularization and variable selection via the elastic net, J. Roy. Stat. Soc. B, 2005.

68 Outline 1 Introduction Traditional data acquisition Compressive data acquisition 2 Compressed sensing Measurement protocol Recovery procedure Recovery conditions Sensing matrices 3 Discussion Connections with other fields Numerical methods Applications Conclusions

69 Convex programs 1. min_x { ‖Ax − y‖_2² + λ‖x‖_1 } can be recast as a perturbed LP (a quadratic program (QP) with structure similar to an LP). 2. min_x ‖Ax − y‖_2², s.t. ‖x‖_1 ≤ η, is a QP. 3. min_x ‖x‖_1, s.t. Ax = y, can be cast as an LP. 4. min_x ‖x‖_1, s.t. ‖Ax − y‖_2² ≤ ε, can be cast as an SOCP.

70 Algorithms These can all be solved through standard convex optimization methods, e.g., interior point methods (primal-dual, log-barrier, etc.): general-purpose solvers can handle small- to medium-size problems, while optimized algorithms (with fast matrix-vector operations) can scale to large problems. Homotopy methods, e.g., least angle regression (LARS): compute the entire solution path (i.e., for any λ > 0), exploit the piece-wise linear property of the regularization path, and are fast if the solution is very sparse. Greedy algorithms for signal reconstruction, e.g., matching pursuit (MP) and orthogonal MP (OMP): not based on optimization; they iteratively choose the dictionary element with the highest inner product with the current residual; low complexity, but less powerful
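A bare-bones OMP illustrating the greedy idea just described: pick the column most correlated with the residual, then re-fit by least squares on the selected support. Column normalization, sizes, and seed are assumptions of this sketch.

```python
import numpy as np

def omp(A, y, k):
    """Greedy recovery of a k-sparse x from y = Ax (A with unit-norm columns)."""
    support, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))          # most correlated column
        support.append(j)
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # LS on support
        r = y - A[:, support] @ xs                   # residual, orthogonal to span
    x = np.zeros(A.shape[1])
    x[support] = xs
    return x

rng = np.random.default_rng(8)
A = rng.standard_normal((40, 120)); A /= np.linalg.norm(A, axis=0)
x0 = np.zeros(120); x0[rng.choice(120, 3, replace=False)] = rng.standard_normal(3)
print(np.flatnonzero(omp(A, A @ x0, 3)))             # should match supp(x0)
```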

71 Outline 1 Introduction Traditional data acquisition Compressive data acquisition 2 Compressed sensing Measurement protocol Recovery procedure Recovery conditions Sensing matrices 3 Discussion Connections with other fields Numerical methods Applications Conclusions

72 Applications Compressed sensing is advantageous whenever signals are sparse in a known basis, and measurements (or computation at the sensor end) are expensive but computations at the receiver end are cheap. Such situations can arise in: compressive imaging (e.g., the single-pixel camera), medical imaging (e.g., MRI and computed tomography), AD conversion, computational biology (e.g., DNA microarrays), geophysical data analysis (e.g., seismic data recovery), radar, sensor networks, astronomy, and others

73 Compressive imaging: the single-pixel camera

74 Compressive imaging: the single-pixel camera

75 Compressive imaging: the single-pixel camera Original image and CS reconstruction (65536 pixels) from 3300 measurements (5%)

76 Compressive imaging: the single-pixel camera Original image and CS reconstruction (65536 pixels) from 6600 measurements (10%)

77 Medical imaging: MRI 1. MRI scans the patient by collecting coefficients in the frequency domain. 2. These coefficients are very sparse. 3. An inverse Fourier transform produces the medical image

78 Medical imaging: MRI Original image. Rapid acquisition of a beating mouse heart in dynamic MRI. M. E. Davies and T. Blumensath, Faster & greedier: algorithms for sparse reconstruction of large datasets, IEEE ISCCSP 2008.

79 Medical imaging: MRI Reconstruction from 20% of the available measurements (linear and CS). Rapid acquisition of a beating mouse heart in dynamic MRI. M. E. Davies and T. Blumensath, Faster & greedier: algorithms for sparse reconstruction of large datasets, IEEE ISCCSP 2008.

80 Medical imaging: MRI Original image. Angiogram with observations along 80 lines in the Fourier domain. E. J. Candès and J. Romberg, Practical signal recovery from random projections, SPIE Conf. on Wavelet App. in Signal and Image Process. 2008.

81 Medical imaging: MRI Minimum-energy and CS reconstructions. Angiogram with observations along 80 lines in the Fourier domain. E. J. Candès and J. Romberg, Practical signal recovery from random projections, SPIE Conf. on Wavelet App. in Signal and Image Process. 2008.

82 Medical imaging: MRI Detail of the reconstructions. Angiogram with observations along 80 lines in the Fourier domain. E. J. Candès and J. Romberg, Practical signal recovery from random projections, SPIE Conf. on Wavelet App. in Signal and Image Process. 2008.

83 AD conversion: the random demodulator Block diagram: x(t) is multiplied by a high-rate pseudo-noise sequence p(t), low-pass filtered, and sampled at rate R, giving y(n). Here x(t) = Σ_{l∈Λ} a_l e^{2πi f_l t} is a multi-tone signal, with Λ ⊂ {0, ±1, ..., ±(W/2 − 1), W/2}, W/2 ∈ N, and card(Λ) = k ≪ W. Sampling rate: R = O(k ln W), so there is no need for a high-rate ADC. J. A. Tropp, J. N. Laska, M. F. Duarte, J. K. Romberg, and R. G. Baraniuk, Beyond Nyquist: efficient sampling of sparse bandlimited signals, IEEE Trans. Inform. Theory, 2010.

84 AD conversion: the random demodulator The slide shows the spectra X(f), X(f)P(f), and Y(f): the input occupies [−W/2, W/2], while the filter output is confined to [−R/2, R/2]. Each frequency receives a unique signature that can be discerned by examining the filter output. J. A. Tropp, J. N. Laska, M. F. Duarte, J. K. Romberg, and R. G. Baraniuk, Beyond Nyquist: efficient sampling of sparse bandlimited signals, IEEE Trans. Inform. Theory, 2010.
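A discrete model of the random demodulator mirrors the block diagram: chip the W Nyquist-rate samples with a ±1 sequence (D), then integrate-and-dump down to R samples (H), with F mapping the frequency coefficients to time samples. This sketch only follows the structure described on the slides; sizes, normalization, and seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
W, R = 128, 16                                   # Nyquist rate W, sampling rate R | W
eps = rng.choice([-1.0, 1.0], size=W)            # pseudo-random chipping sequence
D = np.diag(eps)
H = np.kron(np.eye(R), np.ones(W // R))          # R x W accumulate-and-dump operator
F = np.exp(2j * np.pi * np.outer(np.arange(W), np.arange(W)) / W) / np.sqrt(W)
A = H @ D @ F                                    # maps frequency coefficients to y

k = 3                                            # k-sparse multi-tone signal
a = np.zeros(W, dtype=complex)
a[rng.choice(W, k, replace=False)] = rng.standard_normal(k)
y = A @ a                                        # R low-rate measurements, R << W
```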

85 AD conversion: the modulated wideband converter Block diagram: x(t) feeds m parallel branches; branch i multiplies by a mixing function p_i(t), low-pass filters to bandwidth B, and samples at rate B, producing y_i(n). The input spectrum X(f) occupies a few bands of width B within [−W/2, W/2], with W in the tens of GHz. A practical sampling stage for sparse wideband analog signals: it enables generating a low-rate sequence corresponding to each of the bands, without going through the high Nyquist rate. M. Mishali and Y. C. Eldar, From theory to practice: sub-Nyquist sampling of sparse wideband analog signals, IEEE Trans. Signal Process., 2010.

86 AD conversion: the modulated wideband converter Same block diagram as before; the slide shows a hardware implementation. M. Mishali and Y. C. Eldar, From theory to practice: sub-Nyquist sampling of sparse wideband analog signals, IEEE Trans. Signal Process., 2010.

87 CDMA synchronization and channel estimation Model: y = Ax + n, with x sparse. Here A is the known code matrix (pilot symbols), whose columns are shifts of each user's signature (users 1, ..., K), x is the unknown channel vector containing the channel response of each user, y is the received multiplex, and n is noise. Standard method: m > n and LS. Sparse recovery allows m < n, hence higher data rates. D. Angelosante, E. Grossi, G. B. Giannakis, and M. Lops, Sparsity-Aware Estimation of CDMA System Parameters, EURASIP J. Adv. Signal. Process., 2010.

88 CDMA synchronization and channel estimation Normalized mean square error (NMSE) vs. SNR in channel estimation for a known number of active users, comparing lasso, LS, and OMP (left: over-determined case, P = 10, N = 15, K = 5, ISR = 0 dB; right: under-determined case, P = 4, N = 15, K = 5, ISR = 0 dB). D. Angelosante, E. Grossi, G. B. Giannakis, and M. Lops, Sparsity-Aware Estimation of CDMA System Parameters, EURASIP J. Adv. Signal. Process., 2010.

89 CDMA synchronization and channel estimation User activity detection for an unknown number of active users, comparing lasso, LS, and OMP: receiver operating characteristics, P_D vs. P_FA (left: P = 10, N = 15, K = 10, S = 5, SNR = 20 dB, ISR = 0 dB) and probability of miss P_MD vs. SNR (right: P = 10, N = 15, K = 10, S = 5, ISR = 0 dB, P_FA = 0.01). D. Angelosante, E. Grossi, G. B. Giannakis, and M. Lops, Sparsity-Aware Estimation of CDMA System Parameters, EURASIP J. Adv. Signal. Process., 2010.

90 Outline 1 Introduction Traditional data acquisition Compressive data acquisition 2 Compressed sensing Measurement protocol Recovery procedure Recovery conditions Sensing matrices 3 Discussion Connections with other fields Numerical methods Applications Conclusions

91 Conclusions Compressed sensing is an efficient signal acquisition protocol that collects data in a compressed form. Linear measurements can be taken at low rate and non-adaptively (signal-independent). Sparsity is exploited for reconstruction. The measurement matrix must be properly chosen, but random matrices work

92 Some on-line resources CS resources and the CS blog (links on the slide). Software: SparseLab, l1-magic, GPSR (mtf/gpsr/), l1_ls (boyd/l1_ls/)


More information

Optimisation Combinatoire et Convexe.

Optimisation Combinatoire et Convexe. Optimisation Combinatoire et Convexe. Low complexity models, l 1 penalties. A. d Aspremont. M1 ENS. 1/36 Today Sparsity, low complexity models. l 1 -recovery results: three approaches. Extensions: matrix

More information

Structured matrix factorizations. Example: Eigenfaces

Structured matrix factorizations. Example: Eigenfaces Structured matrix factorizations Example: Eigenfaces An extremely large variety of interesting and important problems in machine learning can be formulated as: Given a matrix, find a matrix and a matrix

More information

Compressed Sensing and Neural Networks

Compressed Sensing and Neural Networks and Jan Vybíral (Charles University & Czech Technical University Prague, Czech Republic) NOMAD Summer Berlin, September 25-29, 2017 1 / 31 Outline Lasso & Introduction Notation Training the network Applications

More information

Interpolation via weighted l 1 minimization

Interpolation via weighted l 1 minimization Interpolation via weighted l 1 minimization Rachel Ward University of Texas at Austin December 12, 2014 Joint work with Holger Rauhut (Aachen University) Function interpolation Given a function f : D C

More information

COMS 4721: Machine Learning for Data Science Lecture 6, 2/2/2017

COMS 4721: Machine Learning for Data Science Lecture 6, 2/2/2017 COMS 4721: Machine Learning for Data Science Lecture 6, 2/2/2017 Prof. John Paisley Department of Electrical Engineering & Data Science Institute Columbia University UNDERDETERMINED LINEAR EQUATIONS We

More information

RSP-Based Analysis for Sparsest and Least l 1 -Norm Solutions to Underdetermined Linear Systems

RSP-Based Analysis for Sparsest and Least l 1 -Norm Solutions to Underdetermined Linear Systems 1 RSP-Based Analysis for Sparsest and Least l 1 -Norm Solutions to Underdetermined Linear Systems Yun-Bin Zhao IEEE member Abstract Recently, the worse-case analysis, probabilistic analysis and empirical

More information

Analog-to-Information Conversion

Analog-to-Information Conversion Analog-to-Information Conversion Sergiy A. Vorobyov Dept. Signal Processing and Acoustics, Aalto University February 2013 Winter School on Compressed Sensing, Ruka 1/55 Outline 1 Compressed Sampling (CS)

More information

Sensing systems limited by constraints: physical size, time, cost, energy

Sensing systems limited by constraints: physical size, time, cost, energy Rebecca Willett Sensing systems limited by constraints: physical size, time, cost, energy Reduce the number of measurements needed for reconstruction Higher accuracy data subject to constraints Original

More information

Performance Analysis of Compressive Sensing Algorithms for Image Processing

Performance Analysis of Compressive Sensing Algorithms for Image Processing Performance Analysis of Compressive Sensing Algorithms for Image Processing Sonia Gandhi 1, Deepti Khanduja 2 and Neelu Pareek 3 1 Department of Electronics and Communication, Vivekananda Global University,

More information

CS 229r: Algorithms for Big Data Fall Lecture 19 Nov 5

CS 229r: Algorithms for Big Data Fall Lecture 19 Nov 5 CS 229r: Algorithms for Big Data Fall 215 Prof. Jelani Nelson Lecture 19 Nov 5 Scribe: Abdul Wasay 1 Overview In the last lecture, we started discussing the problem of compressed sensing where we are given

More information

Block-sparse Solutions using Kernel Block RIP and its Application to Group Lasso

Block-sparse Solutions using Kernel Block RIP and its Application to Group Lasso Block-sparse Solutions using Kernel Block RIP and its Application to Group Lasso Rahul Garg IBM T.J. Watson research center grahul@us.ibm.com Rohit Khandekar IBM T.J. Watson research center rohitk@us.ibm.com

More information

AN OVERVIEW OF ROBUST COMPRESSIVE SENSING OF SPARSE SIGNALS IN IMPULSIVE NOISE

AN OVERVIEW OF ROBUST COMPRESSIVE SENSING OF SPARSE SIGNALS IN IMPULSIVE NOISE AN OVERVIEW OF ROBUST COMPRESSIVE SENSING OF SPARSE SIGNALS IN IMPULSIVE NOISE Ana B. Ramirez, Rafael E. Carrillo, Gonzalo Arce, Kenneth E. Barner and Brian Sadler Universidad Industrial de Santander,

More information

Towards a Mathematical Theory of Super-resolution

Towards a Mathematical Theory of Super-resolution Towards a Mathematical Theory of Super-resolution Carlos Fernandez-Granda www.stanford.edu/~cfgranda/ Information Theory Forum, Information Systems Laboratory, Stanford 10/18/2013 Acknowledgements This

More information

Sparse analysis Lecture III: Dictionary geometry and greedy algorithms

Sparse analysis Lecture III: Dictionary geometry and greedy algorithms Sparse analysis Lecture III: Dictionary geometry and greedy algorithms Anna C. Gilbert Department of Mathematics University of Michigan Intuition from ONB Key step in algorithm: r, ϕ j = x c i ϕ i, ϕ j

More information

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis Lecture 7: Matrix completion Yuejie Chi The Ohio State University Page 1 Reference Guaranteed Minimum-Rank Solutions of Linear

More information