Signal Sparsity Models: Theory and Applications

1 Signal Sparsity Models: Theory and Applications. Raja Giryes, Computer Science Department, Technion. Michael Elad, Technion, Haifa, Israel; Sangnam Nam, CMI, Marseille, France; Remi Gribonval, INRIA, Rennes, France; Mike E. Davies, University of Edinburgh, Scotland; Deanna Needell, Claremont McKenna College, California, USA; Alfred Bruckstein, Technion, Haifa, Israel. Dept. of Electrical Engineering-Systems, School of Electrical Engineering. July 8th, 2014

2 The sparsity model for signals and images

3 3 Why Sparsity? An emerging and fast-growing field, with new algorithms and theory and state-of-the-art results in many fields: audio processing, video processing, image processing, radar, medical imaging, etc. Searching ISI Web of Science (October 9th, 2013): Topic=((spars* and (represent* or approx* or solution) and (dictionary or pursuit)) or (compres* and sens* and spars*))

4 4 Our Problem Setup. Given the linear measurements y = Mx + e, with y, e in R^m and x in R^d. Target: recover x from y. M in R^(m×d) is the measurement matrix (m ≤ d).

5 5 Denoising. Given the linear measurements y = x + e, with y, e in R^m and x in R^d. Target: recover x from y with noise reduction. Here M = I is the identity matrix.

6 6 Deblurring/Deconvolution. Given the linear measurements y = Mx + e, with y, e in R^m and x in R^d. Target: recover x from y. M in R^(d×d) is a block-circulant matrix.

7 7 Super-Resolution. Given the linear measurements y = Mx + e, with y, e in R^m and x in R^d. Target: recover x from y. M in R^(m×d) is the downsampling matrix (m < d).

8 8 Compressed Sensing. Given the linear measurements y = Mx + e, with y, e in R^m and x in R^d. Target: recover x from y. M in R^(m×d) is a (random) sensing matrix (m ≪ d).

9 9 Sparsity Prior (Synthesis). x = Dα with ||α||_0 ≤ k ≪ n, where D in R^(d×n) is a dictionary and α in R^n has only a few non-zero entries.

10 10 Classical Problem Setup. We now look at the measurements as y = MDα + e = Aα + e, with A = MD in R^(m×n), α in R^n, x in R^d and D in R^(d×n). The signal is recovered by x̂ = Dα̂. Our target is the representation α.

11 11 Sparsity Minimization Problem. Assume bounded adversarial noise, ||e||_2 ≤ ε. The problem we aim at solving is simply α̂_ℓ0 = argmin_w ||w||_0 s.t. ||y − Aw||_2 ≤ ε, and then x̂_ℓ0 = Dα̂_ℓ0. What can we say about the recovery?

12 12 Restricted Isometry Property (RIP). We say that a matrix A satisfies the RIP [Candès and Tao, 2006] with a constant δ_k if for every k-sparse vector w it holds that (1 − δ_k)||w||_2^2 ≤ ||Aw||_2^2 ≤ (1 + δ_k)||w||_2^2.

13 13 Uniqueness and Stability. If δ_2k < 1 then we have stability in the noisy case [Candès and Tao, 2006]: ||α̂_ℓ0 − α||_2 ≤ 2ε/√(1 − δ_2k), and uniqueness in the noiseless case (ε = 0). A better condition for uniqueness can be achieved via the spark [Donoho and Elad, 2003]. The ℓ0-minimization is NP-hard, so an approximation is needed.

14 14 ℓ1-relaxation. Replacing ℓ0 with ℓ1: α̂_ℓ1 = argmin_w ||w||_1 s.t. ||y − MDw||_2 ≤ ε. If δ_2k ≤ √2 − 1 [Candès and Tao 2005, Mo and Li 2011](*), then ||α̂_ℓ1 − α||_2 ≤ C_ℓ1 ε for a constant C_ℓ1. Perfect recovery in the noiseless case. (*) The RIP condition holds for subgaussian matrices with m = O(k log(n)).
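To make the relaxation concrete, here is a minimal numpy/scipy sketch of the noiseless case (ε = 0), where the ℓ1 problem becomes basis pursuit and can be cast as a linear program by splitting w = u − v with u, v ≥ 0; the toy sizes and the random A and α are assumptions made only for this example.

```python
# Noiseless basis pursuit: min ||w||_1 s.t. Aw = y, solved as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                               # toy sizes (assumed for the example)
A = rng.standard_normal((m, n)) / np.sqrt(m)       # random (subgaussian) sensing matrix
alpha = np.zeros(n)
alpha[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ alpha                                      # noiseless measurements

c = np.ones(2 * n)                                 # objective: sum(u) + sum(v) = ||w||_1
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None), method="highs")
w_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(w_hat - alpha))
```

With on the order of k log(n) random measurements, this typically recovers α exactly, in line with the noiseless statement above.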

15 15 Other Approximation Techniques. Relaxation based: the Dantzig Selector (DS) [Candès and Tao 2007]. Greedy: Thresholding, Orthogonal Matching Pursuit (OMP), Regularized OMP (ROMP) [Needell, Vershynin 2009]. Greedy-like: CoSaMP [Needell and Tropp 2009], Subspace Pursuit (SP) [Dai and Milenkovic 2009], Iterative Hard Thresholding (IHT), Hard Thresholding Pursuit (HTP) [Foucart 2010].

16 16 Iterative Hard Thresholding (IHT). Approximating argmin_w ||y − Aw||_2^2 s.t. ||w||_0 ≤ k. Initialize α̂^0 = 0, t = 0. Gradient step with step size μ: α̃^t = α̂^(t−1) + μA*(y − Aα̂^(t−1)). Thresholding (support selection of the closest k-sparse vector): T^t = supp(α̃^t, k), and α̂^t keeps α̃^t on T^t and zeros the rest, where supp(·, k) selects the support of the k largest-magnitude elements. Output: x̂ = Dα̂^t.
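The iteration on this slide maps almost line-for-line to code. Below is a minimal numpy sketch of IHT; the fixed step size, the iteration count and the default arguments are assumptions for illustration, not part of the slide.

```python
# Iterative Hard Thresholding: approximate argmin_w ||y - A w||_2^2 s.t. ||w||_0 <= k.
import numpy as np

def iht(y, A, k, step=1.0, n_iter=100):
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        w = w + step * (A.T @ (y - A @ w))         # gradient step with step size mu
        keep = np.argsort(np.abs(w))[-k:]          # support of the k largest entries
        mask = np.zeros(w.size, dtype=bool)
        mask[keep] = True
        w[~mask] = 0.0                             # hard thresholding
    return w
```

For the synthesis setup above one would run it on A = MD and output x̂ = D @ iht(y, M @ D, k).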

17 17 Recovery Guarantees. For DS, OMP, CoSaMP, SP, IHT and HTP [Candès and Tao 2007, Zhang 2011, Needell and Tropp 2009, Dai and Milenkovic 2009, Blumensath and Davies 2009, Foucart 2010] we also have that if δ_ak ≤ δ_alg (for some a > 1), then ||α̂_alg − α||_2 ≤ C_alg ε. C_alg is greater than 2, so there are no guarantees for noise reduction.

18 18 Oracle Estimator for Random Noise. Let e be white Gaussian noise with variance σ^2, i.e., e_i ~ N(0, σ^2). Let α̂_Oracle = A_T^† y be an oracle estimator, where T is the original support of α. Then kσ^2/(1 + δ_k) ≤ E||α̂_Oracle − α||_2^2 ≤ kσ^2/(1 − δ_k).

19 19 Near-Oracle Guarantees. If e is random Gaussian and δ_ak ≤ δ_alg, then the solutions of DS and ℓ1-minimization obey ||α̂_alg − α||_2^2 ≤ C log(n) · kσ^2 with very high probability [Candès and Tao 2007, Bickel, Ritov and Tsybakov 2009]. Coherence-based guarantees exist for ℓ1, OMP and thresholding [Ben-Haim, Eldar and Elad 2010] and for DS [Cai, Wang and Xu 2010].

20 20 Agenda: Sparse approximation; denoising; relaxation techniques; compressed sensing; coherence; near-oracle performance; greedy algorithms; RIP; CoSaMP, SP and IHT near-oracle performance; cosparse analysis; synthesis vs. analysis; greedy-like methods; the signal space paradigm; the transform domain technique; sparsity for overparametrized variational problems; Poisson data modeling.

21 21 Analysis Model. Another way to model the sparsity of the signal is the analysis model [Elad, Milanfar, and Rubinstein, 2007; Nam, Davies, Elad, Gribonval 2013]. It looks at the coefficients of Ωx, where Ω in R^(p×d) is a given transform, the analysis dictionary.

22 Analysis Model - Example. Assume Ω in R^(4×4) applied to x: Ωx has 3 non-zeros (||Ωx||_0 = 3) and ℓ = p − ||Ωx||_0 = 1 zeros. What is the dimension of the subspace in which x resides?

23 23 Cosparsity and Corank. We focus on the zeros of Ωx. Cosupport Λ: the locations of the zeros, |Λ| = ℓ. Cosparsity ℓ: Ωx has ℓ zeros. Corank r: the rank of Ω_Λ. ℓ = r if Ω is in general position. In general, x resides in a subspace of dimension d − r.
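These quantities are easy to compute directly. The numpy sketch below does so for a toy analysis operator and signal; the operator, the signal and the numerical zero tolerance are assumptions made only for the example.

```python
# Cosupport, cosparsity and corank of x under an analysis operator Omega.
import numpy as np

def cosupport(Omega, x, tol=1e-10):
    return np.where(np.abs(Omega @ x) < tol)[0]    # indices where (Omega x) is zero

Omega = np.eye(5)[:-1] - np.eye(5)[1:]             # 1D finite differences (4 x 5)
x = np.array([1.0, 1.0, 1.0, 2.0, 2.0])            # piecewise-constant signal, one jump
Lam = cosupport(Omega, x)
ell = Lam.size                                     # cosparsity
r = np.linalg.matrix_rank(Omega[Lam])              # corank
print("cosparsity:", ell, "corank:", r, "subspace dimension:", x.size - r)
```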

24 24 Problem Setup. Given the linear measurements y = Mx + e, with y, e in R^m, x in R^d and M in R^(m×d). Analysis focus: Ωx in R^p, with Ω in R^(p×d). Classical synthesis focus: x = Dα, with D in R^(d×n) and α in R^n.

25 25 Analysis Minimization Problem. We work directly with x and not with its representation. This generates a different family of signals: the signals are characterized by their behavior and not by their building blocks. The problem we aim at solving in analysis is x̂ = argmin_{v in R^d} ||Ωv||_0 s.t. ||y − Mv||_2 ≤ ε.

26 6 Approximation Methods The analysis problem is hard just like the synthesis one and thus approximation techniques are required.

27 27 Analysis Existing Approximations. Analysis ℓ1-minimization: x̂ = argmin_{v in R^d} ||Ωv||_1 s.t. ||y − Mv||_2 ≤ ε. Greedy analysis pursuit (GAP) [Nam, Davies, Elad and Gribonval, 2013]: selects the cosupport iteratively, removing elements from the cosupport in a greedy way in each iteration. Simple thresholding. Orthogonal backward greedy (OBG), an algorithm for the case M = I [Rubinstein, Peleg, Elad, 2013].

28 28 Analysis Dictionary Examples. 1D and 2D finite difference operators (known as 1D-TV and 2D-TV when used with ℓ1), the Laplacian, Gabor frames, wavelets, curvelets, the discrete Fourier transform (DFT), learned dictionaries, and more.

29 29 A General Recipe. We present a general scheme for translating synthesis operations into analysis ones. This provides us with a recipe for converting the algorithms. The recipe is general and easy to use; we apply it to CoSaMP, SP, IHT and HTP. The theoretical study of the analysis techniques is more complicated.

30 30 Synthesis versus Analysis. Synthesis: x = Dα; sparsity k (number of non-zeros); dictionary D in R^(d×n); sparse representation α with (at most) k non-zeros; D_i is the i-th column of D; support T ⊆ {1..n}. Analysis: Ωx; cosparsity ℓ (number of zeros); operator Ω in R^(p×d); cosparse representation Ωx with (at least) ℓ zeros; Ω_i is the i-th row of Ω; cosupport Λ ⊆ {1..p}.

31 31 Orthogonal Projection. Synthesis: given a representation w and a support T, solve min_w̃ ||w − w̃||_2 s.t. w̃_(T^C) = 0; the solution keeps the elements of w supported on T and zeros the rest. Analysis: given a vector z and a cosupport Λ, solve min_v ||z − v||_2 s.t. Ω_Λ v = 0; the solution is Q_Λ z with Q_Λ = I − Ω_Λ^† Ω_Λ, i.e., calculate the projection onto the subspace spanned by Ω_Λ and remove it from the vector.
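A minimal numpy sketch of the two projections; computing Q_Λ through the pseudo-inverse is one straightforward choice for illustration, not necessarily the implementation used in the talk.

```python
import numpy as np

def synthesis_project(w, T):
    """Keep the entries of w on the support T and zero the rest."""
    out = np.zeros_like(w)
    out[T] = w[T]
    return out

def analysis_project(z, Omega, Lam):
    """Apply Q_Lam = I - pinv(Omega_Lam) @ Omega_Lam, i.e. project z onto the
    null space of the rows of Omega indexed by the cosupport Lam."""
    O = Omega[Lam]
    return z - np.linalg.pinv(O) @ (O @ z)
```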

32 32 (Co)Support Selection. Synthesis: given a non-sparse representation w, solve T* = argmin_T ||w − w_T||_2; the solution selects the indices of the k largest elements in w, T* = supp(w, k), and it is optimal. Analysis: given a non-cosparse vector z, solve Λ* = argmin_Λ ||z − Q_Λ z||_2; the proposed selection takes the indices of the ℓ smallest elements in Ωz, Λ = cosupp(Ωz, ℓ). This solution is suboptimal.

33 33 Cosupport Selection. Optimal selection is NP-hard [Gribonval et al. 2014]. In some cases efficient algorithms exist: the unitary case; Ω_1D-DIF [Han et al., 2004]; the fused-Lasso operator Ω_FUS = [Ω_1D-DIF; I] [Giryes et al., 2014]. A cosupport selection scheme Ŝ_ℓ is near optimal with a constant C_ℓ if for any vector v in R^d, ||v − Q_Ŝℓ(v) v||_2^2 ≤ C_ℓ inf_{|Λ| ≥ ℓ} ||v − Q_Λ v||_2^2 [Giryes et al., 2014].

34 34 Iterative Hard Thresholding (IHT). Initialize α̂^0 = 0, t = 0. Gradient step with step size μ: α̃^t = α̂^(t−1) + μA*(y − Aα̂^(t−1)). Thresholding (support selection of the closest k-sparse vector): T^t = supp(α̃^t, k), and α̂^t keeps α̃^t on T^t and zeros the rest, where supp(·, k) selects the support of the k largest-magnitude elements. Output: x̂ = Dα̂^t. [Blumensath and Davies, 2009]

35 35 Analysis IHT. Initialize x̂^0 = 0, t = 0. Gradient step with step size μ: x̃^t = x̂^(t−1) + μM*(y − Mx̂^(t−1)). Projection: x̂^t = Q_Λt x̃^t, where Λ^t = Ŝ_ℓ(x̃^t) is an approximate cosupport selection of the closest ℓ-cosparse vector. Output: x̂^t. [Giryes, Nam, Gribonval and Davies, 2011]
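A minimal numpy sketch of this analysis IHT iteration, using the simple (suboptimal) cosupport selection of the previous slides, i.e. the ℓ smallest-magnitude entries of Ωx; the step size and iteration count are assumptions for the example.

```python
# Analysis IHT: gradient step, approximate cosupport selection, projection with Q_Lam.
import numpy as np

def aiht(y, M, Omega, ell, step=1.0, n_iter=100):
    x = np.zeros(M.shape[1])
    for _ in range(n_iter):
        x = x + step * (M.T @ (y - M @ x))          # gradient step with step size mu
        Lam = np.argsort(np.abs(Omega @ x))[:ell]   # ell smallest entries of Omega x
        O = Omega[Lam]
        x = x - np.linalg.pinv(O) @ (O @ x)         # projection Q_Lam x
    return x
```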

36 36 Analysis Greedy-like Methods. We translate the other synthesis operations in a similar way and get Analysis CoSaMP (ACoSaMP), Analysis SP (ASP) and Analysis HTP (AHTP), as well as relaxed versions for high dimensional problems: Relaxed ACoSaMP (RACoSaMP), Relaxed ASP (RASP) and Relaxed AHTP (RAHTP).

37 37 Recovery from Fourier Samples. Shepp-Logan phantom, sampled along 12 (5%) radial lines.

38 38 Naïve Recovery. Naïve recovery: the ℓ2-minimization result with no prior.

39 39 Shepp-Logan Derivatives. Ω_2D-DIF x_0: the result of applying the 2D finite difference operator.

40 40 Stable Recovery from Noisy Samples. 25 (10%) radial lines, SNR = 20. Naïve recovery: PSNR = 18dB. RASP recovery: PSNR = 36dB.

41 41 Ω-RIP. We say that a matrix M satisfies the Ω-RIP with a constant δ_ℓ if for every ℓ-cosparse vector v it holds that (1 − δ_ℓ)||v||_2^2 ≤ ||Mv||_2^2 ≤ (1 + δ_ℓ)||v||_2^2 [Giryes, Nam, Elad, Gribonval and Davies 2013]. Similar to the D-RIP [Candès, Eldar, Needell, Randall, 2011].

42 42 Reconstruction Guarantees. Theorem: if δ_(2ℓ−p) is smaller than a constant depending on C_ℓ and σ_M, and C_ℓ is close enough to 1, then AIHT satisfies ||x − x̂||_2 ≤ C_IHT ||e||_2, where σ_M is the largest singular value of M and C_IHT is a constant greater than 2. Similar guarantees hold for ACoSaMP, ASP and AHTP. (*) Given an optimal projection, the condition is δ_(2ℓ−p) < 1/3. [Giryes, Nam, Elad, Gribonval and Davies, 2014]

43 43 Linear Dependencies in Ω. For the Ω-RIP, the number of measurements we need is m = O((p − ℓ) log p) [Blumensath, Davies 2009, Giryes et al. 2014]. Linear dependencies are permitted in Ω and even encouraged. Can we get a guarantee in terms of d − r (r is the corank) instead of p − ℓ? No [Giryes, Plan and Vershynin, 2014].

44 44 Related Work. D-RIP based recovery guarantees for ℓ1-minimization [Candès, Eldar, Needell, Randall, 2011], [Liu, Mi, Li, 2012]. RIP based TV recovery guarantees [Needell, Ward, 2013]. ERC-like recovery conditions for GAP [Nam, Davies, Elad and Gribonval, 2013] and ℓ1-minimization [Vaiter, Peyre, Dossal, Fadili, 2013]. Thresholding denoising (M = I) performance for the case of Gaussian noise [Peleg, Elad, 2013].

45 45 Agenda: Sparse approximation; denoising; relaxation techniques; compressed sensing; coherence; near-oracle performance; greedy algorithms; RIP; CoSaMP, SP and IHT near-oracle performance; cosparse analysis; synthesis vs. analysis; greedy-like methods; the signal space paradigm; the transform domain technique; sparsity for overparametrized variational problems; Poisson data modeling.

46 46 Implications for the Synthesis Model Classically, in the synthesis model linear dependencies between small groups of columns are not allowed. However, in the analysis model, as our target is the signal, linear dependencies are even encouraged. Does the same hold for synthesis?

47 47 Problem Setup. Given the linear measurements y = Mx + e, with y, e in R^m, x in R^d and M in R^(m×d). Signal space focus: x itself. Analysis focus: Ωx in R^p, with Ω in R^(p×d). Classical synthesis focus: x = Dα, with D in R^(d×n) and α in R^n.

48 48 Implication for the Synthesis Model. Signal space CoSaMP recovers the signal under the synthesis model even when the dictionary is highly coherent [Davenport, Needell, Wakin, 2013]. Our analysis theoretical guarantees extend easily to synthesis [Giryes, Elad, 2013]. However, no non-trivial dictionary is known to have a near-optimal projection in synthesis. The uniqueness and stability conditions for signal recovery and for representation recovery are not the same [Giryes, Elad, 2013].

49 49 Theoretical Results for Signal Space. Modified SSCoSaMP with theoretical guarantees for synthesis dictionaries [Giryes and Needell, 2014]. Theoretical guarantees for (modified) OMP in the case of highly coherent dictionaries [Giryes and Elad, 2013].

50 50 Agenda: Sparse approximation; denoising; relaxation techniques; compressed sensing; coherence; near-oracle performance; greedy algorithms; RIP; CoSaMP, SP and IHT near-oracle performance; cosparse analysis; synthesis vs. analysis; greedy-like methods; the signal space paradigm; the transform domain technique; sparsity for overparametrized variational problems; Poisson data modeling.

51 51 Problem Setup. Given the linear measurements y = Mx + e, with y, e in R^m, x in R^d and M in R^(m×d). Transform domain focus: Ωx. Signal space and analysis focus: x, with Ω in R^(p×d). Classical synthesis focus: x = Dα, with D in R^(d×n) and α in R^n.

52 52 The Transform Domain Strategy. Transform Domain IHT [Giryes, 2014]: an efficient algorithm with guarantees for frames and near-oracle performance in the analysis framework. 2D-DIF (TV) guarantees [Giryes, 2014].

53 53 Agenda: Sparse approximation; denoising; relaxation techniques; compressed sensing; coherence; near-oracle performance; greedy algorithms; RIP; CoSaMP, SP and IHT near-oracle performance; cosparse analysis; synthesis vs. analysis; greedy-like methods; the signal space paradigm; the transform domain technique; sparsity for overparametrized variational problems; Poisson data modeling.

54 54 The Line Fitting Problem. The problem: recover a function f from its noisy samples g = f + e. The model: an overparametrized representation of a linear function, f = a + Xb = [I X][a; b], with X = diag(1, ..., d) and a, b in R^d.

55 55 Sparsity Based Solution. Notice that Ω_DIF a and Ω_DIF b should be sparse at the same locations. If k, the number of jumps, is given, then solve min_{a,b} ||g − [I X][a; b]||_2^2 s.t. Ω_DIF a and Ω_DIF b are (jointly) k-sparse. If we know ||e||_2 ≤ ε and not k, minimize the (joint) sparsity of Ω_DIF a and Ω_DIF b s.t. ||g − [I X][a; b]||_2 ≤ ε. We are back in the analysis cosparse framework.
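A small numpy illustration of why this works: for a piecewise-linear signal written in the overparametrized form f = a + Xb, the per-sample intercepts a and slopes b are piecewise constant, so Ω_DIF a and Ω_DIF b are non-zero at exactly the same (jump) locations. The specific signal below is an assumption made only for the example.

```python
import numpy as np

d = 10
i = np.arange(1, d + 1)
b = np.where(i <= 5, 0.5, -1.0)            # slope changes once, between samples 5 and 6
a = np.where(i <= 5, 1.0, 10.0)            # intercept chosen so both lines agree at i = 6
f = a + i * b                              # f = [I X] [a; b] with X = diag(1, ..., d)

Omega_dif = np.eye(d)[:-1] - np.eye(d)[1:]            # 1D finite-difference operator
print("jumps in a:", np.nonzero(Omega_dif @ a)[0])    # -> [4]
print("jumps in b:", np.nonzero(Omega_dif @ b)[0])    # -> [4], the same location
```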

56 56 Sparsity Based Solution. If k is given: near-oracle denoising guarantees using SSCoSaMP [Giryes, Needell 2014]. If ||e||_2 ≤ ε is given: use GAPN [Nam, Davies, Elad and Gribonval, 2013] and modify it to support structured (co)sparsity. Add a continuity constraint at the jump points.

57 57 Line Fitting. Results for noise e ~ N(0, 4) and e ~ N(0, 5).

58 58 Line Fitting with Continuity Constraint. Results for noise e ~ N(0, 4) and e ~ N(0, 5).

59 59 From 1D to 2D. For 2D we set f = a + Xb_1 + Yb_2 = [I X Y][a; b_1; b_2]. We consider two options for extending Ω_DIF to 2D: standard (vertical and horizontal derivatives), and standard plus diagonal derivatives.
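One possible way to build the 2D operator from the 1D difference operator is via Kronecker products, as sketched below; this construction and the vectorization convention are assumptions for illustration, not necessarily the exact operators used in the experiments.

```python
import numpy as np

def dif_1d(n):
    return np.eye(n)[:-1] - np.eye(n)[1:]          # (n-1) x n finite differences

n = 8
D1, I = dif_1d(n), np.eye(n)
# With an n x n image stacked into a vector (row-major), one block differences along
# the rows and the other along the columns ("standard" vertical/horizontal derivatives).
Omega_std = np.vstack([np.kron(I, D1), np.kron(D1, I)])
print(Omega_std.shape)                             # (2*n*(n-1), n*n)
```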

60 60 Denoising Example. Original; Ω_DIF standard, PSNR = 39.09dB; ROF, PSNR = 33.1dB [Rudin, Osher and Fatemi, 1992]; noisy, σ = 20; Ω_DIF with diagonals, PSNR = 39.57dB.

61 61 Denoising Example. Original; Ω_DIF standard, PSNR = 39.09dB; noisy, σ = 20; Ω_DIF with diagonals, PSNR = 39.57dB.

62 62 Denoising Example. ROF gets a better SNR since the new method assumes a piecewise linear image and therefore smooths regions in the image. Is this useful? Original; Ω_DIF standard, SNR = 29.6; noisy, σ = 20; ROF, PSNR = 30.8.

63 63 Segmentation. Gradient map of the original image; original; gradient map of the piecewise-constant coefficients a of the recovered image; Ω_DIF standard recovery.

64 64 Segmentation

65 65 Geometrical Inpainting

66 66 Geometrical Inpainting With standard derivatives With diagonal derivatives

67 67 Agenda: Sparse approximation; denoising; relaxation techniques; compressed sensing; coherence; near-oracle performance; greedy algorithms; RIP; CoSaMP, SP and IHT near-oracle performance; cosparse analysis; synthesis vs. analysis; greedy-like methods; the signal space paradigm; the transform domain technique; sparsity for overparametrized variational problems; Poisson data modeling.

68 68 Gaussian Denoising Problem. y = x + e, where e is zero-mean white Gaussian noise with variance σ^2, i.e., e[i] ~ N(0, σ^2). Another way to look at this problem: y[i] ~ N(x[i], σ^2).

69 69 Poisson Denoising Problem. Each element y[i] of y in R^d is independently Poisson distributed with mean x[i] and variance x[i]: large x[i] means large variance, small x[i] means small variance, and x[i] = 0 implies y[i] = 0. The noise power is measured by the peak value max_i x[i]. Poisson noise is not additive, unlike the Gaussian case.

70 70 Poisson Denoising Problem. Noisy image distribution: P(y[i] = m) = e^(−x[i]) x[i]^m / m!, where m ≥ 0 is an integer.
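For intuition, the sketch below simulates this noise model in numpy at a given peak and reports the PSNR of the raw noisy image; the synthetic image and the PSNR convention (peak value as the reference) are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
peak = 1.0
clean = rng.random((64, 64))                       # stand-in image with values in [0, 1]
x0 = clean / clean.max() * peak                    # scale so that max_i x0[i] = peak
y = rng.poisson(x0).astype(float)                  # y[i] ~ Poisson(x0[i]); x0[i]=0 -> y[i]=0

psnr = 10 * np.log10(peak ** 2 / np.mean((y - x0) ** 2))
print("noisy PSNR (dB):", psnr)
```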

71 71 Poisson Denoising Problem. Original x_0 with peak = 100 and its noisy version y.

72 72 Poisson Denoising Problem. Original x_0 with peak = 0.1 and its noisy version y.

73 73 Exponential Sparsity Prior. x = exp(Dα) with ||α||_0 ≤ k, where D in R^(d×n) and the exponential is applied entry-wise [Salmon et al. 2012, Giryes and Elad 2012].

74 74 Poisson Denoising Methods. BM3D with the improved Anscombe transform [Makitalo and Foi, 2011]. Non-local sparse PCA [Salmon et al. 2013]. GMM (Gaussian Mixture Model) based approaches. Binning: down-sample the noisy image to improve the SNR, denoise the down-sampled image, and interpolate the recovered image back to the initial size.
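A rough numpy sketch of the binning idea: summing 2x2 blocks of counts keeps the Poisson statistics while raising the effective peak by a factor of about 4. The denoiser in the middle is a placeholder, and even image dimensions are assumed.

```python
import numpy as np

def bin_2x2(y):
    """Sum counts over 2x2 blocks (assumes even dimensions)."""
    h, w = y.shape
    return y.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

def unbin_2x2(z):
    """Nearest-neighbour interpolation back to the original size, preserving intensity."""
    return np.repeat(np.repeat(z / 4.0, 2, axis=0), 2, axis=1)

# y_small = bin_2x2(y)
# x_small = poisson_denoiser(y_small)    # placeholder for any Poisson denoiser
# x_hat = unbin_2x2(x_small)
```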

75 75 Our Method. Divide the image into patches; one global dictionary for all patches; an OMP-like algorithm for sparse coding; a bootstrapped stopping criterion; online dictionary learning steps. [Giryes and Elad 2013]

76 76 Noisy Image Max y value = 7 Peak = 1

77 77 Poisson Greedy Algorithm. Dictionary learned atoms. Peak = 1. PSNR: Ours 22.45dB, NLSPCA 20.37dB, BM3Dbin 19.41dB. Our result [Giryes and Elad 2013].

78 78 Poisson Greedy Algorithm. Peak = 1. PSNR: Ours 22.45dB, NLSPCA 20.37dB, BM3Dbin 19.41dB. NLSPCA result [Salmon et al. 2013].

79 79 Poisson Greedy Algorithm. Peak = 1. PSNR: Ours 22.45dB, NLSPCA 20.37dB, BM3Dbin 19.41dB. BM3D with binning result [Makitalo and Foi 2011].

80 80 Original Image

81 81 Noisy Image. Max y value = 3, peak = 0.2.

82 82 Poisson Greedy Algorithm. Peak = 0.2. PSNR: Ours 23.9dB, NLSPCA 22.98dB, BM3Dbin 23.16dB. Our result [Giryes and Elad 2013].

83 83 Poisson Greedy Algorithm. Peak = 0.2. PSNR: Ours 23.9dB, NLSPCA 22.98dB, BM3Dbin 23.16dB. NLSPCA result [Salmon et al. 2013].

84 84 Poisson Greedy Algorithm. Peak = 0.2. PSNR: Ours 23.9dB, NLSPCA 22.98dB, BM3Dbin 23.16dB. BM3D with binning result [Makitalo and Foi 2011].

85 85 Original Image

86 86 Noisy Image. Max y value = 8, peak = 2.

87 87 Poisson Greedy Algorithm. Peak = 2. PSNR: Ours 24.87dB, NLSPCA 23.3dB, BM3Dbin 24.3dB. Our result [Giryes and Elad 2013].

88 88 Poisson Greedy Algorithm. Peak = 2. PSNR: Ours 24.87dB, NLSPCA 23.3dB, BM3Dbin 24.3dB. NLSPCA result [Salmon et al. 2013].

89 89 Poisson Greedy Algorithm. Peak = 2. PSNR: Ours 24.87dB, NLSPCA 23.3dB, BM3Dbin 24.3dB. BM3D with binning result [Makitalo and Foi 2011].

90 90 Original Image

91 91 Conclusions The cosparse analysis model Analysis greedy-like algorithms. Signal space recovery New uniqueness and stability conditions. New algorithms with guarantees. Operating in the analysis transform domain. Relation between variational based methods and sparse representations.

92 92

93 93

94 94 RIP Condition. Can we satisfy a condition like δ_ak ≤ δ_alg? The answer is positive: subgaussian matrices with m = O((k/δ_alg^2) log(n/k)) [Candès and Tao 2006]; sampled Fourier matrices with m = O((k/δ_alg^2) log^c(n)) [Rudelson and Vershynin 2006]; other random matrices?

95 95 DS Guarantee. For the white Gaussian noise case, if λ_DS = √(2(1 + a) log N) · σ and δ_2K + θ_K,3K < 1, then the DS solution obeys ||x̂_DS − x||_2^2 ≤ C_DS^2 (1 + a) log N · Kσ^2, where C_DS = 4 / (1 − δ_2K − θ_K,3K), with probability exceeding 1 − (√(π(1 + a) log N) · N^a)^(−1) [Candès and Tao 2007].

96 96 Reconstruction Guarantees. Theorem: apply either ACoSaMP or ASP with a = (2ℓ − p)/ℓ, obtaining x̂^t after t iterations. If C = max(C_ℓ, C_2ℓ−p) and σ_M satisfy a certain condition (*) and δ_4ℓ−3p ≤ ε(C, σ_M), where ε(C, σ_M) is a constant greater than zero whenever (*) is satisfied, then after a finite number of iterations t*, ||x − x̂^(t*)||_2 ≤ c_2 ||e||_2, where c_2 is a function of δ_4ℓ−3p, C_ℓ, C_2ℓ−p and σ_M. [Giryes, Nam, Elad, Gribonval and Davies, 2013]

97 97 Hard Thresholding Pursuit (HTP). Hard Thresholding Pursuit (HTP) [Foucart, 2010] is similar to IHT but differs in the projection step: instead of projecting onto the closest k-sparse vector, it takes the support T^(i+1) of z̃^(i+1) and minimizes the fidelity term, ẑ^(i+1) = argmin_z ||y − MD_T^(i+1) z||_2^2 = (D_T^(i+1)* M* M D_T^(i+1))^(−1) D_T^(i+1)* M* y.

98 98 Changing Step Size. Instead of using a constant step size, one can choose in each iteration the step size μ that minimizes the fidelity term ||y − MDẑ^(i+1)||_2 [Cevher, 2011].

99 99 A-IHT and A-HTP. Gradient step: x̃^(i+1) = x̂^i + μM*(y − Mx̂^i). Cosupport selection: Λ̂^(i+1) = cosupp(Ωx̃^(i+1), ℓ). Projection: for A-IHT, x̂^(i+1) = Q_Λ̂(i+1) x̃^(i+1) (reminder: Q_Λ = I − Ω_Λ^† Ω_Λ); for A-HTP, x̂^(i+1) = argmin_x ||y − Mx||_2^2 s.t. Ω_Λ̂(i+1) x = 0, a constrained least-squares problem with a closed-form solution. [Giryes, Nam, Gribonval and Davies 2011]

100 100 Algorithm Variations. Optimal changing step size instead of a fixed one: μ^(i+1) = argmin_μ ||y − Mx̂^(i+1)(μ)||_2^2. Approximated changing step size. Underestimate the cosparsity and use ℓ̃ ≤ ℓ.

101 101 Reconstruction Guarantees. Theorem: apply AIHT or AHTP with a certain constant step size or the optimal step size, obtaining x̂^t after t iterations. If C_ℓ and σ_M satisfy a certain condition (*) and δ_2ℓ−p ≤ ε_1(C_ℓ, σ_M), where ε_1(C_ℓ, σ_M) is a constant greater than zero whenever (*) is satisfied, then after a finite number of iterations t*, ||x − x̂^(t*)||_2 ≤ c_1 ||e||_2, where c_1 is a function of δ_2ℓ−p, C_ℓ and σ_M. [Giryes, Nam, Elad, Gribonval and Davies, 2013]

102 102 Reconstruction Guarantees. Theorem: apply either ACoSaMP or ASP with a = (2ℓ − p)/ℓ, obtaining x̂^t after t iterations. For an appropriate value of C = max(C_ℓ, C_2ℓ−p) there exists a reference constant ε(C, σ_M) greater than zero such that if δ_4ℓ−3p ≤ ε(C, σ_M), then after a finite number of iterations t*, ||x − x̂^(t*)||_2 ≤ c_2 ||e||_2, where c_2 is a function of δ_4ℓ−3p, C_ℓ, C_2ℓ−p and σ_M. [Giryes, Nam, Elad, Gribonval and Davies, 2013]

103 103 Reconstruction Guarantees. Theorem: apply AIHT or AHTP with a certain constant step size or the optimal step size, obtaining x̂^t after t iterations. If C_ℓ and σ_M satisfy a certain condition (*) and δ_2ℓ−p ≤ ε_1(C_ℓ, σ_M), where ε_1(C_ℓ, σ_M) is a constant greater than zero whenever (*) is satisfied, then after a finite number of iterations t*, ||x − x̂^(t*)||_2 ≤ c_1 ||e||_2, where c_1 is a function of δ_2ℓ−p, C_ℓ and σ_M. [Giryes, Nam, Elad, Gribonval and Davies, 2013]

104 104 Special Cases. Optimal projection: given an optimal projection scheme, the condition of the theorem is simply a condition on δ_4ℓ−3p. When Ω is unitary, the RIP and the Ω-RIP coincide and the condition becomes a condition on δ_4k(MΩ*).

105 105 Reconstruction Guarantees. Theorem: apply either ACoSaMP or ASP with a = (2ℓ − p)/ℓ, obtaining x̂^t after t iterations. For an appropriate value of C = max(C_ℓ, C_2ℓ−p) there exists a reference constant ε(C, σ_M) greater than zero such that if δ_4ℓ−3p ≤ ε(C, σ_M), then after a finite number of iterations t*, ||x − x̂^(t*)||_2 ≤ c_2 ||e||_2, where c_2 is a function of δ_4ℓ−3p, C_ℓ, C_2ℓ−p and σ_M. [Giryes, Nam, Elad, Gribonval and Davies, 2013]

106 106 Special Cases. Optimal projection: given an optimal projection scheme, the condition of the theorem is simply a condition on δ_4ℓ−3p. When Ω is unitary, the RIP and the Ω-RIP coincide and the condition becomes a condition on δ_4k(MΩ*).

107 107 Reconstruction Guarantees. Theorem: apply either AIHT or AHTP with a certain constant step size or the optimal step size, obtaining x̂^t after t iterations. If C_ℓ and σ_M satisfy a certain condition (*) and δ_2ℓ−p ≤ ε_1(C_ℓ, σ_M), where ε_1(C_ℓ, σ_M) is a constant greater than zero whenever (*) is satisfied, then after a finite number of iterations t*, ||x − x̂^(t*)||_2 ≤ c_1 ||e||_2, where c_1 is a function of δ_2ℓ−p, C_ℓ and σ_M. [Giryes, Nam, Elad, Gribonval and Davies, 2013]

108 108 Special Case - Unitary Transform. When Ω is unitary, the condition of the first theorems is simply δ_2k(MΩ*) < 1/3.

109 109 Ω-RIP Matrices. Theorem: for a fixed Ω in R^(p×d), if M in R^(m×d) satisfies P(| ||Mz||_2^2 − ||z||_2^2 | ≥ ε||z||_2^2) ≤ e^(−C_M m ε^2) for any z in R^d and 0 < ε < 1, then for any constant δ_ℓ > 0 the Ω-RIP holds with constant δ_ℓ with probability exceeding 1 − e^(−t) if m ≥ (32 / (C_M δ_ℓ^2)) ((p − ℓ) log(9p / ((p − ℓ) δ_ℓ)) + t).

110 110 (Co)Sparse Vector Addition. Synthesis: given two vectors z_1 and z_2 with supports T_1 and T_2 of sizes k_1 and k_2, find the support of z_1 + z_2. Solution: supp(z_1 + z_2) ⊆ T_1 ∪ T_2, so the maximal size of the support is k_1 + k_2. Analysis: given two vectors x_1 and x_2 with cosupports Λ_1 and Λ_2 of sizes ℓ_1 and ℓ_2, find the cosupport of x_1 + x_2. Solution: cosupp(Ω(x_1 + x_2)) ⊇ Λ_1 ∩ Λ_2, so the minimal size of the cosupport is ℓ_1 + ℓ_2 − p.

111 111 Objective Aware Projection. Synthesis: given a vector z and a support T, perform an objective-aware projection onto the subspace spanned by D_T: solve argmin_z̃ ||y − MDz̃||_2^2 s.t. z̃_(T^C) = 0. This has a simple closed-form solution, (MD_T)^† y. Analysis: given a vector x and a cosupport Λ, perform an objective-aware projection onto the subspace orthogonal to the one spanned by Ω_Λ: solve argmin_x̃ ||y − Mx̃||_2^2 s.t. Ω_Λ x̃ = 0. This too has a closed-form solution.

112 112 Analysis SP (ASP). Initialize r^0 = y, t = 0, Λ^0 = [1..p]. Iterate: find new cosupport elements, Λ_Δ = Ŝ_aℓ(M* r^(t−1)); cosupport update, Λ̃^t = Λ^(t−1) ∩ Λ_Δ; calculate a temporal solution, x_p = argmin_x ||y − Mx||_2^2 s.t. Ω_Λ̃t x = 0; estimate an ℓ-cosparse cosupport, Λ^t = Ŝ_ℓ(x_p); compute a new solution, x̂^t = argmin_x ||y − Mx||_2^2 s.t. Ω_Λt x = 0; update the residual, r^t = y − Mx̂^t. Output: x̂ = x̂^t. [Giryes and Elad 2012]

113 113 Compressive Sampling Matching Pursuit (CoSaMP). Initialize r^0 = y, T^0 = ∅, t = 0. Iterate: find new support elements, T_Δ = supp(D*M* r^(t−1), 2k); support update, T̃^t = T^(t−1) ∪ T_Δ; calculate a temporal representation, z_p = argmin_z ||y − MDz||_2^2 s.t. z_(T̃t)^C = 0; estimate a k-sparse support, T^t = supp(z_p, k); compute a new solution, ẑ^t = z_p restricted to T^t and x̂^t = Dẑ^t; update the residual, r^t = y − Mx̂^t. Output: x̂ = x̂^t. [Needell and Tropp 2009]
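A compact numpy sketch of this iteration for a generic matrix A (A = MD in the synthesis setting); the least-squares step via lstsq and the fixed iteration count are assumptions made for the example.

```python
import numpy as np

def cosamp(y, A, k, n_iter=20):
    n = A.shape[1]
    T = np.array([], dtype=int)
    x, r = np.zeros(n), y.copy()
    for _ in range(n_iter):
        T_new = np.argsort(np.abs(A.T @ r))[-2 * k:]         # 2k new support candidates
        T_merged = np.union1d(T, T_new).astype(int)
        z = np.zeros(n)
        z[T_merged] = np.linalg.lstsq(A[:, T_merged], y, rcond=None)[0]  # LS on merged support
        T = T_merged[np.argsort(np.abs(z[T_merged]))[-k:]]   # keep the k largest entries
        x = np.zeros(n)
        x[T] = z[T]
        r = y - A @ x                                        # residual update
    return x
```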

114 114 Analysis CoSaMP (ACoSaMP). Initialize r^0 = y, t = 0, Λ^0 = [1..p]. Iterate: find new cosupport elements, Λ_Δ = Ŝ_aℓ(M* r^(t−1)); cosupport update, Λ̃^t = Λ^(t−1) ∩ Λ_Δ; calculate a temporal solution, x_p = argmin_x ||y − Mx||_2^2 s.t. Ω_Λ̃t x = 0; estimate an ℓ-cosparse cosupport, Λ^t = Ŝ_ℓ(x_p); compute a new solution, x̂^t = Q_Λt x_p; update the residual, r^t = y − Mx̂^t. Output: x̂ = x̂^t. [Giryes and Elad 2012]
