Sparse Sampling. Pier Luigi Dragotti. December 17, 2012


1 Sparse Sampling. Pier Luigi Dragotti. December 17, 2012. This research is supported by the European Research Council (ERC) starting investigator award Nr. (RecoSamp).

2 Outline
Problem Statement and Motivation
Classical Sampling Formulation
Sampling using expansion-based sparsity: Sparsity in Complete and Over-complete Dictionaries; Compressed Sensing; Applications
Sampling using sparsity in parametric spaces: Signals with Finite Rate of Innovation (FRI); Sampling Kernels: E-splines and B-splines; Sampling FRI Signals: the Basic Set-up and Extensions; Applications
New Domains of Applications of the Sparsity and Sampling Paradigm: Diffusion Fields and Neuroscience
Conclusions and Outlook

3 Problem Statement
You are given a class of functions. You have a sampling device. Given the measurements $y_n = \langle x(t), \varphi(t/T - n)\rangle$, you want to reconstruct $x(t)$.
[Diagram: acquisition device. $x(t)$ is filtered by $h(t) = \varphi(-t/T)$; the output $y(t)$ is sampled with period $T$, giving $y_n = \langle x(t), \varphi(t/T - n)\rangle$.]
Natural questions:
When is there a one-to-one mapping between $x(t)$ and $y_n$?
What signals can be sampled and what kernels $\varphi(t)$ can be used?
What reconstruction algorithm?

4 Problem Statement
[Diagram: observed scene → lens → CCD array → samples (acquisition system).]
The low-quality lens blurs the images. The images are sampled by the CCD array.

5 Outline
Problem Statement and Motivation
Classical Sampling Formulation
Sampling using expansion-based sparsity: Sparsity in Complete and Over-complete Dictionaries; Compressed Sensing; Applications
Sampling using sparsity in parametric spaces: Signals with Finite Rate of Innovation (FRI); Sampling Kernels: E-splines and B-splines; Sampling FRI Signals: the Basic Set-up and Extensions; Applications
New Domains of Applications of the Sparsity and Sampling Paradigm: Diffusion Fields and Neuroscience
Conclusions and Outlook

6 Motivation: Sparsity and Sampling Everywhere
In 2005, the U.S. spent 16% of its GDP on health care; this is projected to reach 20%. Goal: individualized treatments based on low-cost and effective medical devices.
[Diagram: image formation → image processing → end users.]

7 Motivation: Sparsity and Sampling Everywhere
Wide-band communications. [Diagram: pulse streams at the transmitter (TX) and the receiver (RX).]
Current A-to-D converters in UWB communications operate at several gigahertz. This is a sparse parametric estimation problem: only the locations and amplitudes of the pulses need to be estimated.

8 Motivation: Sparsity and Sampling Everywhere
Sensor networks: the source (phenomenon) is distributed in space and time. The phenomenon is sampled in space (finite number of sensors) and in time. When the sources are localized, the problem is sparse.

9 Motivation: Sparsity and Sampling Everywhere
Applications in Neuroscience: neural acquisition and sorting.
[Diagram: neuroprosthesis → ADC → spike sorting → processing unit.]
The energy budget of implanted devices is limited. Implanted prostheses require low-complexity processing and low sampling rates. High neuronal sampling rates impose wired data transmission and limit the quality and diversity of experiments. Spike sorting is based on a sparse description of the action potentials.

10 Motivation: Free Viewpoint Video Multiple cameras are used to record a scene or an event. Users can freely choose an arbitrary viewpoint for 3D viewing. This is a multi-dimensional sampling and interpolation problem.

11 Classical Sampling Formulation
Sampling of $x(t)$ is equivalent to projecting $x(t)$ onto the shift-invariant subspace $V = \operatorname{span}\{\varphi(t/T - n)\}_{n \in \mathbb{Z}}$. If $x(t) \in V$, perfect reconstruction is possible. The reconstruction process is linear: $\hat{x}(t) = \sum_n y_n \varphi(t/T - n)$. For bandlimited signals, $\varphi(t) = \operatorname{sinc}(t)$.
[Diagram: $x(t)$ → $\tilde{\varphi}(t)$ → sampling at period $T$ → $y_n$ → $\varphi(t)$ → $\hat{x}(t)$.]
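As a minimal numerical illustration of this linear reconstruction (a sketch of ours, not from the slides; the toy signal $x(t) = \operatorname{sinc}(0.8t)$ and the truncation to 101 samples are arbitrary choices):

```python
import numpy as np

T = 1.0
n = np.arange(-50, 51)                    # finite window of sample indices
x = lambda t: np.sinc(0.8 * t)            # toy signal, bandlimited below 1/(2T)
y = x(n * T)                              # samples y_n = x(nT)

t = np.linspace(-5, 5, 1001)
# linear reconstruction: x_hat(t) = sum_n y_n sinc(t/T - n)
x_hat = (y[:, None] * np.sinc(t[None, :] / T - n[:, None])).sum(axis=0)
print(np.max(np.abs(x_hat - x(t))))       # small; only truncation error remains
```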

12 Sampling as Projecting into Shift-Invariant Sub-Spaces
[Figure: projection of a signal onto a shift-invariant subspace.]

13 Classical Sampling Formulation
The Shannon sampling theorem provides sufficient but not necessary conditions for perfect reconstruction. Moreover: how many real signals are bandlimited? How many realizable filters are ideal low-pass filters?
By the way, who discovered the sampling theorem? The list is long ;-) Whittaker 1915, 1935; Kotelnikov 1933; Nyquist 1928; Raabe 1938; Gabor 1946; Shannon 1948; Someya 1948.

14 Key Elements in the Novel Sampling Approaches
Classical sampling formulation: the reconstruction process is linear; the innovation is uniform.
New formulation: the reconstruction process can be non-linear; the innovation can be non-uniform.

15 Sparse Representations in a Basis
Wavelets provide sparse representations of images. In matrix/vector form, $\alpha = W^{-1} Y$ is sparse. Here the matrix $W$ has size $N \times N$ and models the discrete-time wavelet transform of finite-dimensional signals.
Figure: Cameraman is reconstructed using only 8% of the wavelet coefficients.
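A small sketch of this idea (ours, assuming the PyWavelets package; the wavelet 'db4', the decomposition depth, and the 8% budget mirror the figure but are otherwise arbitrary):

```python
import numpy as np
import pywt

def wavelet_compress(img, keep=0.08, wavelet="db4", level=4):
    """Keep only the largest `keep` fraction of wavelet coefficients."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1 - keep)   # magnitude threshold
    arr[np.abs(arr) < thresh] = 0.0               # hard-threshold small coefficients
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)
```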

16 Notation
The $\ell_0$ norm of an $N$-dimensional vector $X$ is $\|X\|_0 = \#\{i : x_i \neq 0\}$, the number of non-zero entries.
The $\ell_1$ norm of an $N$-dimensional vector $X$ is $\|X\|_1 = \sum_{i=1}^{N} |x_i|$.
The mutual coherence of a given $N \times M$ matrix $A$ is the largest absolute normalized inner product between different columns of $A$:
$$\mu(A) = \max_{1 \le k, j \le M,\ k \neq j} \frac{|a_k^T a_j|}{\|a_k\|_2 \, \|a_j\|_2}.$$
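The mutual coherence is straightforward to compute; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute normalized inner product between distinct columns of A."""
    A = A / np.linalg.norm(A, axis=0)     # normalize every column
    G = np.abs(A.conj().T @ A)            # Gram matrix of the normalized columns
    np.fill_diagonal(G, 0.0)              # exclude the k = j terms
    return G.max()

# e.g., for the identity/Fourier dictionary used below, mu = 1/sqrt(N):
N = 128
D = np.hstack([np.eye(N), np.fft.fft(np.eye(N)) / np.sqrt(N)])
print(mutual_coherence(D), 1 / np.sqrt(N))
```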

17 Sparsity in Redundant Dictionaries
[Figure: the signal $Y$.]
The above signal, $Y$, is a combination of two spikes and two complex exponentials of different frequency (real part of $Y$ plotted). In matrix/vector form: $Y = [I_N \; F_N]\,\alpha = D\alpha$, where $I_N$ is the $N \times N$ identity matrix and $F_N$ is the $N \times N$ Fourier transform. The matrix $D$ models an over-complete dictionary and has size $N \times M$ with $M > N$; $\alpha$ has only $K$ non-zero coefficients (in the example $K = 4$, $N = 128$, $M = 2N$).

18 Sparsity in Redundant Dictionaries
You are given $Y$ and want to find its sparse representation. Ideally, you want to solve
$$(P_0):\ \min \|\alpha\|_0 \quad \text{s.t.} \quad Y = D\alpha.$$
This is a combinatorial problem which requires on the order of $\binom{N}{K}$ operations. You may instead solve the convex problem
$$(P_1):\ \min \|\alpha\|_1 \quad \text{s.t.} \quad Y = D\alpha.$$
Key result due to Donoho et al.: $(P_0)$ is unique when $K < 1/\mu(D) = \sqrt{N}$. $(P_0)$ and $(P_1)$ are equivalent when $K < (\sqrt{2} - 0.5)/\mu(D) \approx 0.91\sqrt{N}$.
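To make $(P_1)$ concrete, here is a minimal basis-pursuit sketch via linear programming (our own illustration, assuming scipy; it handles real dictionaries only, so a complex Fourier block would first need its real and imaginary parts stacked):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(D, y):
    """(P1): min ||a||_1 s.t. D a = y, as an LP with the split a = u - v, u, v >= 0."""
    n, m = D.shape
    c = np.ones(2 * m)                    # sum(u) + sum(v) equals ||a||_1 at the optimum
    res = linprog(c, A_eq=np.hstack([D, -D]), b_eq=y, bounds=(0, None))
    u, v = res.x[:m], res.x[m:]
    return u - v
```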

19 Sparsity Bounds in Pairs of Bases
[Figure: the $(P_0)$ uniqueness bound $1/\mu$, the tight $(P_1)$ bound, and the simplified $(P_1)$ bound $0.91/\mu$, plotted in the $(K_p, K_q)$ plane.]
Uniqueness of $(P_0)$ and the two $\ell_1$ bounds for the case of two orthogonal bases and $\mu(D) = 0.1$. See [Elad 2010, page 59] for more details.

20 Sparsity in Redundant Dictionaries
Sketch of the proof of uniqueness of $(P_0)$. $(P_0)$ is unique when $K$ is such that, given $Y_1 = D\alpha_1$ and $Y_2 = D\alpha_2$, then $Y_1 \neq Y_2$ for any possible distinct $K$-sparse $\alpha_1, \alpha_2$. Consider $\alpha_n = \alpha_1 - \alpha_2$; this new vector has sparsity $2K$, and uniqueness is lost when $Y = D\alpha_n = 0$.
$\alpha_n$ is in the null space of $D = [I_N \; F_N]$ when $\alpha_n = \binom{\hat{X}}{-X}$, where $\hat{X} = F_N X$. In fact:
$$Y = D\alpha_n = [I_N \; F_N]\binom{\hat{X}}{-X} = \hat{X} - F_N X = 0.$$
$X$ is an $N$-dimensional vector and cannot be simultaneously sparse in both the time and the frequency domain: the Donoho uncertainty principle says that the number of non-zero entries of $\alpha_n$ must satisfy $2K \ge 2/\mu(D) = 2\sqrt{N}$. Thus, $(P_0)$ can be solved when $K < \sqrt{N}$.

21 Sparsity in Redundant Dictionaries (cont'd)
Extensions [Tropp-04, GribonvalN:03, Elad-10]:
For a generic over-complete dictionary $D$, $(P_1)$ is equivalent to $(P_0)$ when
$$K < \frac{1}{2}\left(1 + \frac{1}{\mu}\right).$$
When $D$ is a concatenation of $J$ orthonormal dictionaries, $(P_1)$ is equivalent to $(P_0)$ when
$$K < \left[\sqrt{2} - 1 + \frac{1}{2(J-1)}\right]\frac{1}{\mu}.$$
Proof in Appendix.

22 Compressed Sensing
The fat matrix $D$ now plays the role of the acquisition device, and we denote it by $\Phi$. The entries of $Y = \Phi\alpha$ are the samples. Based on the previous analysis, we want to reconstruct the signal $\alpha$ from the samples $Y$ using $\ell_1$ minimization. We want maximum incoherence among the columns of $\Phi$. We consider large $M, N$.
Key insight: relax the condition of deterministic perfect reconstruction and accept that, with an extremely small probability, there might be an error in the reconstruction.

23 The Power of Randomness
Key theorem due to Candès et al. [Candes:06-08]: if $\Phi$ is a proper random matrix (e.g., a matrix with normalized Gaussian entries), then with overwhelming probability the signal can be reconstructed from the samples $Y$ when $N \ge C\,K\log(M/K)$ for some constant $C$.
Assume that the measured signal $X$ is not sparse but has a sparse representation: $X = D\alpha$. We have that $Y = \Phi X = \Phi D\alpha$. The new matrix $\Phi D$ is essentially as random as the original one; therefore the theorem is still valid. Thus random matrices provide universality. However, very redundant dictionaries imply a larger $M$ and therefore a larger $N$.
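A quick numerical check of this theorem (our sketch; it reuses the basis_pursuit() function above, and the constant C = 2 is just a guess that works at this problem size):

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 256, 5
N = int(np.ceil(2 * K * np.log(M / K)))   # N ~ C K log(M/K); C = 2 is an assumption
Phi = rng.standard_normal((N, M)) / np.sqrt(N)
alpha = np.zeros(M)
alpha[rng.choice(M, size=K, replace=False)] = rng.standard_normal(K)

y = Phi @ alpha
alpha_hat = basis_pursuit(Phi, y)         # basis_pursuit() from the sketch above
print(np.max(np.abs(alpha - alpha_hat)))  # ~0 with overwhelming probability
```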

24 Restricted Isometry Property (RIP)
In order to have perfect reconstruction, $\Phi$ must satisfy the so-called restricted isometry property:
$$(1 - \delta_S)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta_S)\|x\|_2^2$$
for some $0 < \delta_S < 1$ and for any $S$-sparse vector $x$. Candès et al.:
If $x$ is $K$-sparse and $\delta_{2K} + \delta_{3K} < 1$, then the $\ell_1$ minimization finds $x$ exactly.
If $\Phi$ is a random Gaussian matrix, the above condition is satisfied with probability $1 - O(e^{-\gamma N})$ for some $\gamma > 0$, when $N \ge C\,K\log(M/K)$.
If $\Phi$ is obtained by extracting $N$ rows at random from the Fourier matrix, then perfect reconstruction holds with high probability when $N \ge C\,K(\log M)^4$.
NB: when the signal $x$ is not exactly sparse, solve
$$\min_{\hat{x}}\ \|y - \Phi\hat{x}\|_2^2 + \lambda\|\hat{x}\|_1.$$
It is proved that linear programming achieves the best solution up to a constant factor.

25 Compressed Sensing: Simulation Results
Image Boat. (a) Recovered from random projections using compressed sensing, PSNR = 31.8 dB. (b) Optimal 7207-term approximation using the wavelet transform, with the same PSNR as (a). (c) Zoom of (a). (d) Zoom of (b). Images courtesy of Prof. J. Romberg.

26 Application in MRI Image taken from Lustig, Donoho, Santos, Pauly-08.

27 Problem Statement
You are given a class of functions. You have a sampling device. Given the measurements $y_n = \langle x(t), \varphi(t/T - n)\rangle$, you want to retrieve the degrees of freedom of $x(t)$.
[Diagram: acquisition device, as before.]
Natural questions:
When is there a one-to-one mapping between $x(t)$ and $y_n$?
What kernels $\varphi(t)$ can be used?
What reconstruction algorithm?

28 Sparsity in Parametric Spaces
Consider a continuous-time stream of pulses or a piecewise sinusoidal signal. These signals are not bandlimited and are not sparse in a basis or a frame. However, they are completely determined by a finite number of free parameters.

29 Signals with Finite Rate of Innovation
Consider a signal of the form
$$x(t) = \sum_{k \in \mathbb{Z}} \gamma_k\, g(t - t_k). \qquad (1)$$
The rate of innovation of $x(t)$ is then defined as
$$\rho = \lim_{\tau \to \infty} \frac{1}{\tau}\, C_x\!\left(-\frac{\tau}{2}, \frac{\tau}{2}\right), \qquad (2)$$
where $C_x(-\tau/2, \tau/2)$ is a function counting the number of free parameters in the interval $\tau$.
Definition: a signal with a finite rate of innovation is a signal whose parametric representation is given in (1) and with a finite $\rho$ as defined in (2).
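For instance (a standard computation spelled out here, not on the slide): a periodic stream of Diracs with $K$ Diracs per period $\tau_0$ has $2K$ free parameters per period ($K$ locations and $K$ amplitudes), so $C_x(-\tau/2, \tau/2) \approx 2K\,\tau/\tau_0$ and

$$\rho = \lim_{\tau \to \infty} \frac{1}{\tau}\, C_x\!\left(-\frac{\tau}{2}, \frac{\tau}{2}\right) = \frac{2K}{\tau_0},$$

which is finite even though the signal is not bandlimited.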

30 Examples of Signals with Finite Rate of Innovation Filtered Streams of Diracs Piecewise Polynomial Signals Piecewise Sinusoidal Signals Mondrian paintings ;-)

31 The Sampling Kernel
[Diagram: acquisition device, $y_n = \langle x(t), \varphi(t/T - n)\rangle$.]
Given by nature: diffusion equation, Green's function. Ex: sensor networks.
Given by the set-up: designed by somebody else. Ex: Hubble telescope, digital cameras.
Given by design: pick the best kernel. Ex: engineered systems.

32 The Sampling Kernel
[Figure: (a) original image; (b) point spread function, shown as the line spread function across pixels (perpendicular direction).]

33 Sampling Kernels
Any kernel $\varphi(t)$ that can reproduce exponentials:
$$\sum_n c_{m,n}\,\varphi(t - n) = e^{\alpha_m t}, \qquad \alpha_m = \alpha_0 + m\lambda, \quad m = 0, 1, \ldots, L.$$
This includes any composite kernel of the form $\gamma(t) * \beta_{\vec{\alpha}}(t)$, where $\beta_{\vec{\alpha}}(t) = \beta_{\alpha_0}(t) * \beta_{\alpha_1}(t) * \cdots * \beta_{\alpha_L}(t)$ and $\beta_{\alpha_i}(t)$ is an exponential spline (E-spline) of first order [UnserB:05].
[Figure: the first-order E-spline $\beta_\alpha(t)$, a truncated exponential $e^{\alpha t}$.]
Notice:
$$\beta_\alpha(t) \;\longleftrightarrow\; \hat{\beta}_\alpha(\omega) = \frac{1 - e^{\alpha - j\omega}}{j\omega - \alpha}.$$
$\alpha$ can be complex. The E-spline is of compact support. The E-spline reduces to the classical polynomial spline when $\alpha = 0$.
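A small numerical sketch of these kernels (ours; it builds $\beta_\alpha$ on a grid and composes higher orders by discretized convolution):

```python
import numpy as np

def beta_1(alpha, t):
    """First-order E-spline: e^{alpha t} on [0, 1), zero elsewhere."""
    return np.where((t >= 0) & (t < 1), np.exp(alpha * t), 0.0)

def e_spline(alphas, dt=1e-3):
    """E-spline of order len(alphas), supported on [0, len(alphas)]:
    the convolution beta_{alpha_0} * beta_{alpha_1} * ... on a grid of step dt."""
    t = np.arange(0.0, 1.0, dt)
    beta = beta_1(alphas[0], t)
    for a in alphas[1:]:
        beta = np.convolve(beta, beta_1(a, t)) * dt   # Riemann approximation
    return beta

# alphas all zero recovers the classical polynomial B-spline of the same order
b2 = e_spline([0.0, 0.0, 0.0])
```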

34 Kernels Reproducing Exponentials
[Figure: reproduction of the exponentials $e^{0.06t}$ and $e^{0.5t}$ as sums of scaled and shifted E-splines.]
Here the E-spline is of second order and reproduces the exponentials $e^{\alpha_0 t}$ and $e^{\alpha_1 t}$, with $\alpha_0 = 0.06$ and $\alpha_1 = 0.5$.

35 Examples of E-Spline Kernels

36 Examples of Best Kernels

37 The Sampling Kernel
[Figure: (a) original image; (b) point spread function, shown as the line spread function across pixels (perpendicular direction).]

38 Kernels Reproducing Exponentials
Any function with rational Fourier transform
$$\hat{\varphi}(\omega) = \frac{\prod_i (j\omega - b_i)}{\prod_m (j\omega - a_m)}, \qquad m = 0, 1, \ldots, L,$$
is a generalized E-spline. This includes practical devices as common as an RC circuit:
[Circuit diagram: series resistor $R$ and shunt capacitor $C$, input $x(t)$, output $y(t)$.]
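Concretely (a standard circuit fact, added here for illustration), the RC low-pass filter above has

$$\hat{\varphi}(\omega) = \frac{1}{1 + j\omega RC} = \frac{1/(RC)}{j\omega - a_0}, \qquad a_0 = -\frac{1}{RC},$$

a rational Fourier transform with a single real pole, so (up to scaling) it behaves as a first-order generalized E-spline with exponent $a_0 = -1/(RC)$.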

39 Sampling FRI Signals: Basic Set-up
Assume the sampling period $T = 1$. Consider any $x(t)$ with $t \in [0, N)$. Assume the sampling kernel $\varphi(t)$ is any function that can reproduce exponentials of the form
$$\sum_n c_{m,n}\,\varphi(t - n) = e^{\alpha_m t}, \qquad m = 0, 1, \ldots, L.$$
We want to retrieve $x(t)$ from the samples $y_n = \langle x(t), \varphi(t - n)\rangle$, $n = 0, 1, \ldots, N-1$.

40 Sampling FRI Signals: Basic Set-up
We have that
$$s_m = \sum_{n=0}^{N-1} c_{m,n}\, y_n = \Big\langle x(t), \sum_{n=0}^{N-1} c_{m,n}\,\varphi(t - n)\Big\rangle = \int x(t)\, e^{\alpha_m t}\, dt, \qquad m = 0, 1, \ldots, L.$$
$s_m$ is the bilateral Laplace transform of $x(t)$ evaluated at $\alpha_m$. When $\alpha_m = j\omega_m$, then $s_m = \hat{x}(\omega_m)$, where $\hat{x}(\omega)$ is the Fourier transform of $x(t)$. When $\alpha_m = 0$, the $s_m$'s are the polynomial moments of $x(t)$.

41 Sampling Streams of Diracs
Assume $x(t)$ is a stream of $K$ Diracs on an interval of size $N$: $x(t) = \sum_{k=0}^{K-1} x_k\,\delta(t - t_k)$, $t_k \in [0, N)$.
We restrict $\alpha_m = \alpha_0 + m\lambda$, $m = 0, 1, \ldots, L$, with $L \ge 2K - 1$.
We have $N$ samples $y_n = \langle x(t), \varphi(t - n)\rangle$, $n = 0, 1, \ldots, N-1$, and obtain
$$s_m = \sum_{n=0}^{N-1} c_{m,n}\, y_n = \int x(t)\, e^{\alpha_m t}\, dt = \sum_{k=0}^{K-1} x_k\, e^{\alpha_m t_k} = \sum_{k=0}^{K-1} \hat{x}_k\, e^{m\lambda t_k} = \sum_{k=0}^{K-1} \hat{x}_k\, u_k^m, \qquad m = 0, 1, \ldots, L,$$
where $\hat{x}_k = x_k e^{\alpha_0 t_k}$ and $u_k = e^{\lambda t_k}$.

42 The Annihilating Filter Method
The quantity
$$s_m = \sum_{k=0}^{K-1} \hat{x}_k\, u_k^m, \qquad m = 0, 1, \ldots, L,$$
is a sum of exponentials. We can retrieve the locations $u_k$ and the amplitudes $\hat{x}_k$ with the annihilating filter method (also known as Prony's method, since it was discovered by Gaspard de Prony in 1795). Given the pairs $\{u_k, \hat{x}_k\}$, then $t_k = (\ln u_k)/\lambda$ and $x_k = \hat{x}_k / e^{\alpha_0 t_k}$.

43 The Annihilating Filter Method
1. Call $h_m$ the filter with z-transform $H(z) = \sum_{i=0}^{K} h_i z^{-i} = \prod_{k=0}^{K-1}(1 - u_k z^{-1})$. We have that
$$h_m * s_m = \sum_{i=0}^{K} h_i\, s_{m-i} = \sum_{i=0}^{K}\sum_{k=0}^{K-1} \hat{x}_k\, h_i\, u_k^{m-i} = \sum_{k=0}^{K-1} \hat{x}_k\, u_k^m \underbrace{\sum_{i=0}^{K} h_i\, u_k^{-i}}_{0} = 0.$$
This filter is thus called the annihilating filter. In matrix/vector form, we have that $SH = 0$ and, using the fact that $h_0 = 1$, we obtain
$$\begin{pmatrix} s_{K-1} & s_{K-2} & \cdots & s_0 \\ s_K & s_{K-1} & \cdots & s_1 \\ \vdots & \vdots & \ddots & \vdots \\ s_{L-1} & s_{L-2} & \cdots & s_{L-K} \end{pmatrix} \begin{pmatrix} h_1 \\ h_2 \\ \vdots \\ h_K \end{pmatrix} = -\begin{pmatrix} s_K \\ s_{K+1} \\ \vdots \\ s_L \end{pmatrix}.$$
Solve the above system to find the coefficients of the annihilating filter.

44 The Annihilating Filter Method
2. Given the coefficients $\{1, h_1, h_2, \ldots, h_K\}$, we get the locations $u_k$ by finding the roots of $H(z)$.
3. Solve the first $K$ equations in $s_m = \sum_{k=0}^{K-1} \hat{x}_k u_k^m$ to find the amplitudes $\hat{x}_k$. In matrix/vector form:
$$\begin{pmatrix} 1 & 1 & \cdots & 1 \\ u_0 & u_1 & \cdots & u_{K-1} \\ \vdots & \vdots & \ddots & \vdots \\ u_0^{K-1} & u_1^{K-1} & \cdots & u_{K-1}^{K-1} \end{pmatrix}\begin{pmatrix} \hat{x}_0 \\ \hat{x}_1 \\ \vdots \\ \hat{x}_{K-1} \end{pmatrix} = \begin{pmatrix} s_0 \\ s_1 \\ \vdots \\ s_{K-1} \end{pmatrix}. \qquad (3)$$
Classic Vandermonde system: unique solution for distinct $u_k$'s.
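The three steps above fit in a few lines of numpy; a minimal sketch of ours (noiseless moments assumed; lstsq is used so the Toeplitz system may also be overdetermined):

```python
import numpy as np

def prony(s, K, lam, alpha0):
    """Annihilating-filter (Prony) recovery of K Diracs from the moments s_m."""
    s = np.asarray(s)
    # Step 1: Toeplitz system for the filter coefficients h_1..h_K (h_0 = 1)
    A = np.array([[s[K - 1 + i - j] for j in range(K)] for i in range(len(s) - K)])
    h = np.linalg.lstsq(A, -s[K:], rcond=None)[0]
    # Step 2: the roots of H(z) are u_k = e^{lam t_k}
    u = np.roots(np.concatenate(([1.0], h)))
    t_k = np.log(u) / lam
    # Step 3: Vandermonde system for the amplitudes x_hat_k
    V = np.vander(u, N=K, increasing=True).T      # V[m, k] = u_k^m
    x_hat = np.linalg.solve(V, s[:K])
    return t_k, x_hat / np.exp(alpha0 * t_k)      # x_k = x_hat_k / e^{alpha_0 t_k}
```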

45 Sampling Streams of Diracs: Numerical Example
[Figure: (a) original signal; (b) sampling kernel ($\beta_7(t)$); (c) samples; (d) reconstructed signal.]

46 Sampling Streams of Diracs: Sequential Reconstruction
[Figure: (a) original signal; (b) sampling kernel ($\beta_7(t)$); (c) samples; (d) reconstructed signal.]

47 Sampling Streams of Diracs: Sequential Reconstruction
[Figure: (a) noisy samples $y_n$ (noiseless samples plus noise, SNR = 10 dB); (b) stream reconstructed from the peaks of the histogram of the retrieved locations, with original and estimated Diracs over time. The temporal locations are very accurately estimated; the figure shows one realisation of the procedure explained before.]
In this example: N = 50, 10k samples, 1000 Diracs, SNR = 15 dB. Execution time: one minute. Success rate: 100%, with one false positive.

48 Note on the Proof
Linear vs non-linear: the problem is non-linear in the $t_k$, but linear in the $x_k$ given the $t_k$. The key to the solution is the separability of the non-linear from the linear problem using the annihilating filter. The proof is based on a constructive algorithm:
1. Given the $N$ samples $y_n$, compute the moments $s_m$ using the exponential reproduction formula; in matrix/vector form, $S = CY$. Complexity: $O(KN)$.
2. Solve a $K \times K$ Toeplitz system to find $H(z)$. Complexity: $O(K^2)$.
3. Find the roots of $H(z)$. Complexity: $O(K^3)$.
4. Solve a $K \times K$ Vandermonde system to find the $\hat{x}_k$. Complexity: $O(K^2)$.
Thus, the algorithm complexity is polynomial in the signal innovation.

49 Sampling Piecewise Sinusoidal Signals: Numerical Example
[Figure: panels (a)-(c), time axis in seconds.]

50 Sampling 2-D Domains
[Figure: Fig. 2, annihilation results for the noiseless case. (a) Original indicator plane; (b) reconstructed indicator plane; (c) difference image; (d) samples of the indicator plane.]
Algorithm 1: Annihilation of the indicator plane $\chi_\Gamma$.
1. Compute the Fourier transform of the indicator plane $\chi_\Gamma$ evaluated at the uniformly sampled frequency grid $\left(\frac{2\pi k}{M}, \frac{2\pi l}{N}\right)$;
2. Create the annihilating filter according to (6), where the elements are rearranged lexicographically;
3. Solve for the annihilation coefficients $c_{k,l}$ based on (5), under the Hermitian symmetry constraint on the DFT coefficients in (2).
FRI signals investigated previously in the 1-D case are either streams of Diracs [7][2] or piecewise polynomials [1], whose Fourier transforms can be computed exactly. For the annihilable curves considered here, no explicit expression of the Fourier transform $\hat{\chi}_\Gamma$ is available in general, and we cannot use the DFT of $\chi_\Gamma$ directly either: since the indicator plane $\chi_\Gamma$ has infinite bandwidth, results obtained from the DFT suffer from aliasing. However, we can relate the Fourier transform $\hat{\chi}_\Gamma$ with …
Notice that the annihilation algorithm presented here is non-iterative and extremely fast: except for the Fourier transform step, which is $O(n \log n)$ with the FFT implementation, all the other steps run in time linear in the degrees of freedom of the annihilable curve. Thus the algorithm can easily cope with large-scale problems. Several experiments are set up to verify the exact annihilation for noiseless cases.

51 Robust Sampling
[Diagram: acquisition device with additive noise, $y_n = \langle x(t), \varphi(t/T - n)\rangle + \varepsilon_n$.]
The measurements are noisy. The noise is additive and i.i.d. Gaussian.

52 Robust Sampling
In the presence of noise, the annihilation equation $SH = 0$ is only approximately satisfied. Minimize $\|SH\|_2$ under the constraint $\|H\|_2 = 1$. This is achieved by performing an SVD of $S$: $S = U\Lambda V^T$. Then $H$ is the last column of $V$.
Notice: this is similar to Pisarenko's method in spectral estimation.
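In numpy this total-least-squares step is (a sketch of ours; the rectangular annihilation matrix is built from all available moments):

```python
import numpy as np

def annihilating_filter_tls(s, K):
    """Noisy case: h minimizing ||S h||_2 subject to ||h||_2 = 1."""
    s = np.asarray(s)
    S = np.array([[s[m - i] for i in range(K + 1)] for m in range(K, len(s))])
    _, _, Vh = np.linalg.svd(S)
    h = Vh[-1].conj()     # right singular vector of the smallest singular value
    return h / h[0]       # rescale so that h_0 = 1 (the roots are unchanged)
```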

53 Robust Sampling: Cadzow's Algorithm
For small SNR, use Cadzow's method to denoise $S$ before applying TLS. The basic intuition behind this method is that, in the noiseless case, $S$ is rank deficient (rank $K$) and Toeplitz, while in the noisy case $S$ is full rank.
Algorithm: compute the SVD $S = U\Lambda V^T$; keep the $K$ largest diagonal coefficients of $\Lambda$ and set the others to zero; reconstruct $S' = U\Lambda' V^T$; this matrix is no longer Toeplitz, so make it Toeplitz by averaging along the diagonals; iterate.
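A compact sketch of these alternating projections (ours; the fixed iteration count is a simplification, as one would normally stop on convergence):

```python
import numpy as np

def cadzow(S, K, n_iter=20):
    """Cadzow denoising: alternately enforce rank K and Toeplitz structure."""
    S = S.astype(complex).copy()
    for _ in range(n_iter):
        # 1) truncated SVD: best rank-K approximation
        U, sv, Vh = np.linalg.svd(S, full_matrices=False)
        S = (U[:, :K] * sv[:K]) @ Vh[:K]
        # 2) projection onto Toeplitz matrices: average along each diagonal
        for d in range(-S.shape[0] + 1, S.shape[1]):
            mask = np.eye(S.shape[0], S.shape[1], k=d, dtype=bool)
            S[mask] = S[mask].mean()
    return S
```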

54 Robust Sampling
Samples are corrupted by additive noise. This is a parametric estimation problem: unbiased algorithms have a covariance matrix lower bounded by the Cramér-Rao bound (CRB). The proposed algorithm reaches the CRB down to an SNR of 5 dB.

55 Robust Sampling
[Figure: noiseless samples (N = 128), noisy samples (N = 128), and original vs. estimated Diracs at SNR = 5 dB.]

56 Robust Sampling
[Figure: Fig. 8, retrieval of the switching point of a step sine ($\omega_1 = 12.23\pi$ [1/sec] and $t_1 =$ [sec]) from 128 noisy samples. (a) Scatter plot of the estimated location vs. input SNR [dB]. (b) Standard deviation for the location $t_1$ (averaged over 1000 iterations), compared to the Cramér-Rao bound, vs. input SNR [dB].]

57 Robust Sampling
[Figure: Fig. 9, recovery of a truncated sine wave at SNR = 8 dB, N = 128. (a) The observed noisy samples. (b) The reconstructed signal along with the ground-truth signal (dashed).]
A piecewise sinusoidal signal (with $t_1 =$ [sec], $t_2 =$ [sec] and $\omega_1 = 12.23\pi$ [rad/sec]) is recovered from 128 noisy samples at an SNR of 8 dB. Note that despite the small error in the estimation of the …

58 Application: Image Super-Resolution
Super-resolution is a multichannel sampling problem with unknown shifts. Use moments to retrieve the shifts or the geometric transformation between images.
[Figure: (a) original; (b) low-resolution (64×64); (c) super-resolved (PSNR = 24.2 dB).]
Forty low-resolution and shifted versions of the original are used. The disparity between images has a finite rate of innovation and can be retrieved. Accurate registration is achieved by retrieving the continuous moments of the Tiger image from the samples. The registered images are interpolated to achieve super-resolution.

59 Application: Image Super-Resolution
Image super-resolution basic building blocks.
[Block diagram: a set of low-resolution images (LR image 0 … LR image k) → image registration → HR grid estimation → restoration → super-resolved image.]

60 Application: Image Super-Resolution
For each blurred image $I(x, y)$: a pixel $P_{m,n}$ in the blurred image is given by $P_{m,n} = \langle I(x, y), \varphi(x/T - n, y/T - m)\rangle$, where $\varphi$ represents the point spread function of the lens. We assume $\varphi$ is a spline that can reproduce polynomials:
$$\sum_n \sum_m c_{m,n}^{(l,j)}\,\varphi(x - n, y - m) = x^l y^j, \qquad l = 0, 1, \ldots, N;\ j = 0, 1, \ldots, N.$$
We retrieve the exact moments of $I(x, y)$ from the $P_{m,n}$:
$$\tau_{l,j} = \sum_n \sum_m c_{m,n}^{(l,j)}\, P_{m,n} = \iint I(x, y)\, x^l y^j\, dx\, dy.$$
Given the moments from two or more images, we estimate the geometrical transformation and register them. Notice that moments up to order three along the $x$ and $y$ coordinates allow the estimation of an affine transformation.
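As a simplified illustration of moment-based registration (ours; it covers only the pure-translation case, where the centroid, the ratio of first- to zeroth-order moments, shifts exactly with the image):

```python
import numpy as np

def centroid(P):
    """Centroid of an image from its zeroth- and first-order moments."""
    m, n = np.mgrid[:P.shape[0], :P.shape[1]]
    m00 = P.sum()
    return np.array([(m * P).sum(), (n * P).sum()]) / m00

# disparity between two shifted blurred images, read off from the moments:
# shift_estimate = centroid(P1) - centroid(P0)
```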

61 Application: Image Super-Resolution
[Figure: (a) original image; (b) point spread function, shown as the line spread function across pixels (perpendicular direction).]

62 Application: Image Super-Resolution
[Figure: (a) original; (b) super-resolved.]

63 Application: Image Super-Resolution
[Figure: (a) original (48×48); (b) super-resolved.]

64 Application in Neuroscience
Neural acquisition and sorting.
[Diagram: neuroprosthesis → ADC → spike sorting → processing unit.]
The energy budget of implanted devices is limited. High sampling rates impose wired data transmission and limit the quality and diversity of experiments.

65 Application in Neuroscience
Objective. Insight: sample at a lower rate and reconstruct the signal outside the implant.
[Diagram: classical pipeline (neuroprosthesis → ADC → processing unit → spike sorting) vs. sparse pipeline (neuroprosthesis → ADC → sparse reconstruction → processing unit → spike sorting).]

66 Application in Neuroscience
Spike sorting validation.
[Bar chart: classical sampling (C) at $f_s$ = 24 kHz vs. sparse sampling (F) at $f_s$ = 5.8 kHz (×4 undersampling); average classification error, detection error, and correctly sorted percentage for noise levels σ = 0.05, 0.1, 0.15, 0.2.]

67 Calcium Transient Detection Figure 6: Double consistency spike search. (i) and (ii) show the detected locations in red and the locations of the original spikes in green for two different window sizes. In (i) the algorithm runs estimating the number of spikes within the sliding window. In (ii) the algorithm runs assuming a fixed number of spikes equal to one for each position of the sliding window. (iii) shows the joint histogram of the detected locations. (iv) shows the fluorescence signal in blue with the original spikes in green and the detected spikes in red.

68 Application in Sensor Networks: Localizing Point Sources in Diffusion Fields
Localized and instantaneous sources:
$$\frac{\partial f}{\partial t} = \Delta f + \sum_{k=1}^{K} c_k\,\delta(x - x_k)\,\delta(t - t_k).$$
Diffusion field:
$$f(x, t) = \sum_{k=1}^{K} \frac{c_k}{4\pi(t - t_k)}\, e^{-\frac{\|x - x_k\|^2}{4(t - t_k)}}\, U(t - t_k)$$
— finite degrees of freedom.
[Diagram: sources $x_1, x_2$ and sensors $p_1, \ldots, p_5$ in the plane.]
Goal: estimate the unknown parameters $\{c_k\}_k$, $\{t_k\}_k$, $\{x_k\}_k$ from the spatio-temporal samples taken by distributed sensors.
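The forward model is easy to simulate; a minimal sketch of ours that evaluates the field above at a sensor position and time:

```python
import numpy as np

def field(p, t, sources):
    """Diffusion field at position p and time t.

    sources: list of (c_k, t_k, x_k) triplets, with x_k a 2-vector."""
    f = 0.0
    for c, tk, xk in sources:
        if t > tk:  # U(t - t_k): a source contributes only after it appears
            d2 = np.sum((np.asarray(p, float) - np.asarray(xk, float)) ** 2)
            f += c / (4 * np.pi * (t - tk)) * np.exp(-d2 / (4 * (t - tk)))
    return f

# e.g., two sources observed by one sensor:
print(field([0.3, 0.4], 1.0, [(1.0, 0.0, [0.0, 0.0]), (2.0, 0.5, [1.0, 1.0])]))
```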

69 Sampling Signals Using Sparsity Models: Conclusions and Outlook
A new framework that allows the sampling and reconstruction of signals at a rate smaller than the Nyquist rate. It is a non-linear problem. Different possible algorithms exist, with various degrees of efficiency and robustness.
Applications: many actual and potential applications, but you need to fit the right model! Carve the right algorithm for your problem: continuous/discrete, fast/complex, redundant/not-redundant.
Still many open questions, from theory to practice!

70 References
On sparsity in over-complete dictionaries:
D.L. Donoho and P.B. Stark, Uncertainty principles and signal recovery, SIAM J. Appl. Math., 1989.
D.L. Donoho and X. Huo, Uncertainty principles and ideal atomic decomposition, IEEE Trans. on Info. Theory, vol. 47(7), November 2001.
M. Elad, Sparse and Redundant Representations, Springer, 2010.
On compressed sensing and its applications:
E.J. Candès, J. Romberg and T. Tao, Robust Uncertainty Principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. on Info. Theory, vol. 52(2), pp. 489-509, February 2006.
E.J. Candès and M.B. Wakin, An introduction to compressive sampling, IEEE Signal Processing Magazine, vol. 25(2), pp. 21-30, March 2008.
M. Lustig, D.L. Donoho, J.M. Santos and J.M. Pauly, Compressed Sensing MRI, IEEE Signal Processing Magazine, vol. 25(2), pp. 72-82, March 2008.

71 References
On sampling FRI signals:
M. Vetterli, P. Marziliano and T. Blu, Sampling Signals with Finite Rate of Innovation, IEEE Trans. on Signal Processing, vol. 50(6), pp. 1417-1428, June 2002.
T. Blu, P.L. Dragotti, M. Vetterli, P. Marziliano and L. Coulot, Sparse Sampling of Signal Innovations: Theory, Algorithms and Performance Bounds, IEEE Signal Processing Magazine, vol. 25(2), pp. 31-40, March 2008.
P.L. Dragotti, M. Vetterli and T. Blu, Sampling Moments and Reconstructing Signals of Finite Rate of Innovation: Shannon meets Strang-Fix, IEEE Trans. on Signal Processing, vol. 55(5), pp. 1741-1757, May 2007.
J. Berent, P.L. Dragotti and T. Blu, Sampling Piecewise Sinusoidal Signals with Finite Rate of Innovation Methods, IEEE Trans. on Signal Processing, vol. 58(2), pp. 613-625, February 2010.
J.A. Uriguen, P.L. Dragotti and T. Blu, On the Exponential Reproducing Kernels for Sampling Signals with Finite Rate of Innovation, in Proc. of Sampling Theory and Applications Conference (SampTA), Singapore, May 2011.
P.L. Dragotti, M. Vetterli and T. Blu, Exact Sampling Results for Signals with Finite Rate of Innovation Using Strang-Fix Conditions and Local Kernels, in Proc. of IEEE ICASSP, Philadelphia, March 2005.

72 References (cont'd)
On image super-resolution:
L. Baboulaz and P.L. Dragotti, Exact Feature Extraction Using Finite Rate of Innovation Principles with an Application to Image Super-Resolution, IEEE Trans. on Image Processing, vol. 18(2), pp. 281-298, February 2009.
On diffusion fields:
Y. Lu, P.L. Dragotti and M. Vetterli, Localization of diffusive sources using spatio-temporal measurements, in Proc. of 49th Allerton Conference, Allerton, 2011.
D. Malioutov, M. Cetin and A. Willsky, A sparse signal reconstruction perspective for source localization with sensor arrays, IEEE Trans. on Signal Processing, vol. 53(8), pp. 3010-3022, August 2005.
On neuroscience:
J. Onativia, S. Schultz and P.L. Dragotti, A Finite Rate of Innovation algorithm for fast and accurate spike detection from two-photon calcium imaging, submitted to the Journal of Neural Engineering, Nov.
J. Caballero, J.A. Uriguen, S. Schultz and P.L. Dragotti, Spike Sorting at Sub-Nyquist Rates, in Proc. of IEEE ICASSP, Kyoto, Japan, April 2012.

73 Appendix
Orthogonal matching pursuit (OMP) finds the correct sparse representation when
$$K < \frac{1}{2}\left(1 + \frac{1}{\mu}\right). \qquad (4)$$
Sketch of the proof (Elad 2010, pages 65-67): assume the $K$ non-zero entries are at the beginning of the vector, sorted in decreasing order of magnitude, with $y = Dx$. Thus
$$y = \sum_{l=1}^{K} x_l\, D_l. \qquad (5)$$
The first iteration of OMP works properly if $|D_1^T y| > |D_i^T y|$ for any $i > K$. Using (5):
$$\Big|\sum_{l=1}^{K} x_l\, D_1^T D_l\Big| > \Big|\sum_{l=1}^{K} x_l\, D_i^T D_l\Big|.$$

74 Appendix (cont'd)
Sketch of the proof (cont'd): But
$$\Big|\sum_{l=1}^{K} x_l\, D_1^T D_l\Big| \ge |x_1| - \Big|\sum_{l=2}^{K} x_l\, D_1^T D_l\Big| \ge |x_1| - \sum_{l=2}^{K} |x_l|\,\mu \ge |x_1|\big(1 - \mu(K-1)\big).$$
Moreover,
$$\Big|\sum_{l=1}^{K} x_l\, D_i^T D_l\Big| \le \sum_{l=1}^{K} |x_l|\,\mu \le |x_1|\,\mu K.$$
Using these two bounds, we conclude that $|D_1^T y| > |D_i^T y|$ is satisfied when condition (4) is met.
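For reference, a minimal OMP sketch (ours, assuming unit-norm columns of D):

```python
import numpy as np

def omp(D, y, K):
    """Orthogonal matching pursuit: greedily pick K atoms of D to represent y."""
    residual = y.copy().astype(float)
    support = []
    for _ in range(K):
        # atom most correlated with the current residual (unit-norm columns assumed)
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # least-squares fit on the current support, then update the residual
        x_s, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ x_s
    x = np.zeros(D.shape[1])
    x[support] = x_s
    return x
```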


5742 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER /$ IEEE 5742 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER 2009 Uncertainty Relations for Shift-Invariant Analog Signals Yonina C. Eldar, Senior Member, IEEE Abstract The past several years

More information

An Overview of Compressed Sensing

An Overview of Compressed Sensing An Overview of Compressed Sensing Nathan Schneider November 18, 2009 Abstract In a large number of applications, the system will be designed to sample at a rate equal to at least the frequency bandwidth

More information

IMAGE RECONSTRUCTION FROM UNDERSAMPLED FOURIER DATA USING THE POLYNOMIAL ANNIHILATION TRANSFORM

IMAGE RECONSTRUCTION FROM UNDERSAMPLED FOURIER DATA USING THE POLYNOMIAL ANNIHILATION TRANSFORM IMAGE RECONSTRUCTION FROM UNDERSAMPLED FOURIER DATA USING THE POLYNOMIAL ANNIHILATION TRANSFORM Anne Gelb, Rodrigo B. Platte, Rosie Renaut Development and Analysis of Non-Classical Numerical Approximation

More information

Super-Resolution of Point Sources via Convex Programming

Super-Resolution of Point Sources via Convex Programming Super-Resolution of Point Sources via Convex Programming Carlos Fernandez-Granda July 205; Revised December 205 Abstract We consider the problem of recovering a signal consisting of a superposition of

More information

Error Correction via Linear Programming

Error Correction via Linear Programming Error Correction via Linear Programming Emmanuel Candes and Terence Tao Applied and Computational Mathematics, Caltech, Pasadena, CA 91125 Department of Mathematics, University of California, Los Angeles,

More information

CoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles

CoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles CoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles SIAM Student Research Conference Deanna Needell Joint work with Roman Vershynin and Joel Tropp UC Davis, May 2008 CoSaMP: Greedy Signal

More information

No. of dimensions 1. No. of centers

No. of dimensions 1. No. of centers Contents 8.6 Course of dimensionality............................ 15 8.7 Computational aspects of linear estimators.................. 15 8.7.1 Diagonalization of circulant andblock-circulant matrices......

More information

The Pros and Cons of Compressive Sensing

The Pros and Cons of Compressive Sensing The Pros and Cons of Compressive Sensing Mark A. Davenport Stanford University Department of Statistics Compressive Sensing Replace samples with general linear measurements measurements sampled signal

More information

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis

ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis Lecture 3: Sparse signal recovery: A RIPless analysis of l 1 minimization Yuejie Chi The Ohio State University Page 1 Outline

More information

Cheng Soon Ong & Christian Walder. Canberra February June 2018

Cheng Soon Ong & Christian Walder. Canberra February June 2018 Cheng Soon Ong & Christian Walder Research Group and College of Engineering and Computer Science Canberra February June 2018 Outlines Overview Introduction Linear Algebra Probability Linear Regression

More information

Bhaskar Rao Department of Electrical and Computer Engineering University of California, San Diego

Bhaskar Rao Department of Electrical and Computer Engineering University of California, San Diego Bhaskar Rao Department of Electrical and Computer Engineering University of California, San Diego 1 Outline Course Outline Motivation for Course Sparse Signal Recovery Problem Applications Computational

More information

Compressive Sensing (CS)

Compressive Sensing (CS) Compressive Sensing (CS) Luminita Vese & Ming Yan lvese@math.ucla.edu yanm@math.ucla.edu Department of Mathematics University of California, Los Angeles The UCLA Advanced Neuroimaging Summer Program (2014)

More information