Robust multichannel sparse recovery


1 Robust multichannel sparse recovery. Esa Ollila, Department of Signal Processing and Acoustics, Aalto University, Finland. SUPELEC, Feb 4th, 2015.

2 Outline
1. Introduction
2. Nonparametric sparse recovery
3. Simulation studies
4. Source localization with Sensor Arrays

3 2/24 Single Measurement Vector Model (SMV)

y = Φx + e, where y is M x 1, Φ is M x N, x is N x 1, and e is M x 1

- Φ = ( φ_1 ... φ_N ) = ( φ^(1) ... φ^(M) )^H, the known M x N measurement (design, system) matrix
- x = (x_1, ..., x_N), unobserved signal vector
- e = (e_1, ..., e_M), random noise vector

Objective: recover a K-sparse or compressible signal x from y, knowing Φ and K.
NOTE: typically M < N (ill-posed model) and K << N.

4 3/24 Single Measurement Vector Model (SMV)

y = Φx + e (y: M x 1, Φ: M x N, x: N x 1, e: M x 1)

If we can determine Γ = supp(x), then the problem reduces to a conventional regression problem with more responses than predictors: y = Φ_Γ x_Γ + e.

5 4/24 Multiple Measurement Vectors Model (MMV)

y_i = Φx_i + e_i, i = 1, ..., Q

In matrix form: Y = ΦX + E, where Y is M x Q, Φ is M x N, X is N x Q, and E is M x Q.

- Y = ( y_1 ... y_Q ), observed measurement matrix
- X = ( x_1 ... x_Q ), unobserved signal matrix
- E = ( e_1 ... e_Q ), noise matrix

Objective: recover the K-sparse (or compressible) signal matrix X knowing Y, Φ, K.

An N x Q matrix X is called K-sparse if ||X||_0 = |supp(X)| <= K, where

Γ = supp(X) = ∪_{i=1}^{Q} supp(x_i) = { j ∈ {1,...,N} : x_jk ≠ 0 for some k }
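The joint (row) support definition above is easy to state in code. A minimal NumPy sketch (function names are illustrative, not from the talk):

```python
import numpy as np

def row_support(X, tol=0.0):
    """Joint support Gamma: indices of the nonzero rows of X."""
    return {i for i in range(X.shape[0]) if np.any(np.abs(X[i]) > tol)}

def is_row_k_sparse(X, K, tol=0.0):
    """True if X has at most K nonzero rows."""
    return len(row_support(X, tol)) <= K

# 4 x 3 example: rows 0 and 2 are nonzero, so X is 2-sparse in the row sense
X = np.array([[1.0, 0.0, 2.0],
              [0.0, 0.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 0.0]])
```

Here `row_support(X)` returns {0, 2}, matching the union-of-supports definition on the slide.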

6 5/24 Multiple Measurement Vectors Model (MMV)

Key idea: the signals are all predominantly supported on a common support set ⇒ joint estimation can lead both to computational advantages and to increased reconstruction accuracy. See [Tropp, 2006, Chen and Huo, 2006, Eldar and Rauhut, 2010, Duarte and Eldar, 2011, Blanchard et al., 2014].

Applications:
- EEG/MEG [Gorodnitsky et al., 1995]
- equalization of sparse communications channels [Cotter and Rao, 2002]
- blind source separation [Gribonval and Zibulevsky, 2010]
- source localization using sensor arrays [Malioutov et al., 2005]

7 6/24 Matrix norms

The mixed l_{p,q} norm of X for p, q ∈ [1, ∞):

||X||_{p,q} = ( Σ_i ( Σ_j |x_ij|^p )^{q/p} )^{1/q} = ( Σ_i ||x_(i)||_p^q )^{1/q}.

If p = q, this is the matrix p-norm ||X||_{p,p} = ||X||_p. The l_2 (Frobenius) norm ||·||_2 is denoted shortly as ||·||.

The mixed l_∞ norms of X ∈ C^{N x Q}:

||X||_{p,∞} = max_{i ∈ {1,...,N}} ||x_(i)||_p
||X||_{∞,q} = ( Σ_i ( max_{j ∈ {1,...,Q}} |x_ij| )^q )^{1/q}.

The row-l_0 quasi-norm of X: ||X||_0 = |supp(X)| = number of nonzero rows of X.
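The mixed norm can be sketched directly from the definition: take rowwise l_p norms, then the l_q norm of the resulting vector (a minimal illustration, not code from the talk):

```python
import numpy as np

def mixed_norm(X, p, q):
    """Mixed l_{p,q} norm: l_q norm of the vector of rowwise l_p norms."""
    row_norms = np.sum(np.abs(X) ** p, axis=1) ** (1.0 / p)
    return np.sum(row_norms ** q) ** (1.0 / q)

X = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [5.0, 12.0]])
# rowwise l_2 norms are 5, 0, 13, so ||X||_{2,1} = 18,
# and ||X||_{2,2} coincides with the Frobenius norm
```

As a sanity check, `mixed_norm(X, 2, 2)` agrees with `np.linalg.norm(X)`.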

8 7/24 Optimization problem

min_X ||Y - ΦX||^2 subject to ||X||_0 <= K

is combinatorial (NP-hard). More feasible approaches:
- Optimization (convex relaxation)
- Greedy pursuit: Orthogonal Matching Pursuit (OMP), Normalized Iterative Hard Thresholding (NIHT), Normalized Hard Thresholding Pursuit (NHTP), Compressive Sampling MP (CoSaMP)

These are guaranteed to perform very well under suitable conditions (restricted isometry property on Φ, non-impulsive noise e). NIHT is simple and scalable, convenient for problems where the support is of primary interest.


10 8/24 IDEA: replace ||X||_0 by an l_{p,1} norm

Convex relaxation. [Tropp, 2006] uses p = ∞:

min_X ||Y - ΦX||^2 + λ||X||_{∞,1}, where ||X||_{∞,1} = Σ_i ||x_(i)||_∞.

[Malioutov et al., 2005] uses p = 2:

min_X ||Y - ΦX||^2 + λ||X||_{2,1}, where ||X||_{2,1} = Σ_i ||x_(i)||.

- good recovery depends on the choice of the regularization parameter λ > 0
- not robust due to the data fidelity term used (LS loss)
- not as scalable as greedy methods

11 Outline
1. Introduction
2. Nonparametric sparse recovery
3. Simulation studies
4. Source localization with Sensor Arrays

12 9/24 Nonparametric approach based on mixed norms

min_X ||Y - ΦX||_{p,q}^q (data fidelity) subject to ||X||_0 <= K (sparsity constraint)   (P_{p,q})

⇒ mixed l_1 norms provide robustness, i.e., they give larger weight to small residuals and less weight to large residuals.

We consider the cases
- (p,q) = (1,1): ||Y - ΦX||_{1,1} = Σ_i Σ_j |y_ij - φ^H_(i) x_j|, a robust norm for errors i.i.d. in space and time
- (p,q) = (2,1): ||Y - ΦX||_{2,1} = Σ_i ||y_(i) - X^H φ_(i)||, a robust norm for errors i.i.d. in space only

and devise a greedy simultaneous NIHT (SNIHT) algorithm.

13 10/24 Normalized iterative hard thresholding (NIHT)

Conventional l_{2,2} approach:

X^{n+1} = H_K( X^n + µ^{n+1} Φ^H (Y - ΦX^n) )

where H_K(·) is the hard thresholding operator, µ^{n+1} is the stepsize, and Φ^H (Y - ΦX^n) is the negative gradient of the LS objective ||Y - ΦX||^2 (up to scale).


15 10/24 Normalized iterative hard thresholding (NIHT)

In our mixed norm l_{p,q} approach:

X^{n+1} = H_K( X^n + µ^{n+1} Φ^H ψ_{p,q}(Y - ΦX^n) )

where Φ^H ψ_{p,q}(Y - ΦX^n) is the negative gradient of ||Y - ΦX||_{p,q}^q (up to scale), and

- ψ_{2,2}(X) = X (⇒ conventional l_{2,2} approach)
- ψ_{1,1}(X) = [sign(x_ij)] (elementwise application of sign(·))
- ψ_{2,1}(X) = [sign(x_(i))] (rowwise application of sign(·))

Spatial sign: sign(x) = x/|x| for x ≠ 0, and sign(0) = 0.
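The three score functions ψ_{p,q} above can be sketched in a few lines of NumPy (an illustrative implementation, guarding against division by zero exactly as the spatial sign definition requires):

```python
import numpy as np

def spatial_sign(x):
    """sign(x) = x/|x| for x != 0, else 0 (works elementwise, incl. complex)."""
    ax = np.abs(x)
    safe = np.where(ax > 0, ax, 1.0)          # avoid 0/0
    return np.where(ax > 0, x / safe, 0.0)

def psi(X, p, q):
    """Score function applied to the residual matrix in place of identity."""
    if (p, q) == (2, 2):
        return X                               # conventional LS
    if (p, q) == (1, 1):
        return spatial_sign(X)                 # elementwise sign
    if (p, q) == (2, 1):
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        safe = np.where(norms > 0, norms, 1.0)
        return np.where(norms > 0, X / safe, 0.0)   # rowwise spatial sign
    raise ValueError("unsupported (p, q)")
```

For example, the row [3, 4] maps to [0.6, 0.8] under ψ_{2,1}, and an all-zero row stays zero.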


17 11/24 SNIHT(p,q) algorithm

input     : Y, Φ, sparsity K, mixed norm indices (p,q)
output    : (X^{n+1}, Γ^{n+1}) as estimates of X and Γ = supp(X)
initialize: X^0 = 0, µ^0 = 0, n = 0
1   Γ^0 = supp( H_K(Φ^H ψ_{p,q}(Y)) )
while halting criterion false do
2     R^n_ψ = ψ_{p,q}(Y - ΦX^n)
3     G^n = Φ^H R^n_ψ
4     µ^{n+1} = CompStepsize(Φ, G^n, Γ^n, µ^n, p, q)
5     X^{n+1} = H_K(X^n + µ^{n+1} G^n)
6     Γ^{n+1} = supp(X^{n+1})
7     n = n + 1
end
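The loop above can be sketched in NumPy. This is a simplified illustration, not the authors' code: H_K keeps the K rows of largest l_2 norm (an assumption), the stepsize uses the simple l_{2,2} rule throughout rather than the full CompStepsize routine, and ψ defaults to the identity (conventional SNIHT):

```python
import numpy as np

def hard_threshold_rows(X, K):
    """H_K: keep the K rows of largest l2 norm, zero the rest."""
    out = np.zeros_like(X)
    keep = np.argsort(np.linalg.norm(X, axis=1))[-K:]
    out[keep] = X[keep]
    return out

def sniht(Y, Phi, K, psi=lambda R: R, n_iter=100):
    """Simultaneous NIHT sketch; psi is the residual score function
    (identity recovers conventional l_{2,2} SNIHT)."""
    N = Phi.shape[1]
    X = np.zeros((N, Y.shape[1]), dtype=Y.dtype)
    Gamma = None
    for _ in range(n_iter):
        G = Phi.conj().T @ psi(Y - Phi @ X)        # negative gradient
        if Gamma is None:                          # step 1: initial support
            Gamma = np.argsort(np.linalg.norm(G, axis=1))[-K:]
        B = Phi[:, Gamma] @ G[Gamma]               # direction restricted to Gamma
        mu = (np.linalg.norm(G[Gamma]) ** 2 /
              max(np.linalg.norm(B) ** 2, 1e-30))  # l_{2,2} stepsize
        X = hard_threshold_rows(X + mu * G, K)     # step 5: gradient + H_K
        Gamma = np.flatnonzero(np.linalg.norm(X, axis=1) > 0)
    return X, set(Gamma.tolist())
```

On an easy noiseless problem (random Gaussian Φ with unit-norm columns, K = 2), this sketch recovers the row support exactly.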

18 12/24 Computation of the step size µ^{n+1}

For convergence, it is important that the stepsize is adaptively controlled at each iteration. Given that the found support Γ^n is correct, one can choose µ^{n+1} as the minimizer of the objective function along the update direction X^n + µG^n_{(Γ^n)}:

L(µ) = ||Y - Φ(X^n + µG^n)_{Γ^n}||_{p,q}^q = ||R^n - µB||_{p,q}^q

where R^n = Y - ΦX^n and B = ΦG^n_{(Γ^n)}.

For p = q: a simple linear regression problem based on the l_p norm, with response r = vec(R^n) and predictor b = vec(B). In particular,

µ^{n+1} = ||G^n_{(Γ^n)}||^2 / ||ΦG^n_{(Γ^n)}||^2 for p = q = 2.

For p = q = 1 and (p,q) = (2,1), the minimizer of L(µ) cannot be found in closed form.

19 13/24 Computation of the step size µ^{n+1}

When p = q = 1, the solution µ verifies a fixed point (FP) equation µ = H(µ), where

H(µ) = ( Σ_{i,j} |r̃_ij|^{-1} |b_ij|^2 )^{-1} Σ_{i,j} |r̃_ij|^{-1} Re(b*_ij r_ij),

the r_ij are the elements of R^n, the b_ij the elements of B = ΦG^n_{(Γ^n)}, and the weights |r̃_ij|^{-1} come from R̃ = R^n - µB (which depends on the unknown µ).

We select µ^{n+1} as the 1-step FP iterate µ^{n+1} = H(µ^n), where µ^n is the value of the stepsize at the previous (nth) iteration. Often this approximation was within 10^{-3} of the minimizer of L(µ). The same approach is used in the case (p,q) = (2,1).
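One FP update is just an iteratively reweighted least-squares step for the scalar regression of R^n on B. A minimal sketch (assuming the weights are evaluated at the previous stepsize, as in the 1-step iterate above; the eps guard is an implementation assumption):

```python
import numpy as np

def fp_step(R, B, mu_prev, eps=1e-12):
    """One fixed-point update mu = H(mu_prev) for the l_{1,1} stepsize.
    R is the residual Y - Phi X^n; B is the restricted direction Phi G."""
    Rt = R - mu_prev * B                       # residual at the previous stepsize
    w = 1.0 / np.maximum(np.abs(Rt), eps)      # weights |r~_ij|^{-1}
    num = np.sum(w * np.real(np.conj(B) * R))  # sum w * Re(b* r)
    den = np.sum(w * np.abs(B) ** 2)           # sum w * |b|^2
    return num / den
```

As a check: if R = 2B exactly, the update returns µ = 2 regardless of the weights.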

20 Outline
1. Introduction
2. Nonparametric sparse recovery
3. Simulation studies
4. Source localization with Sensor Arrays

21 14/24 Simulation study

Y = ΦX + E (Y: M x Q, Φ: M x N, X: N x Q, E: M x Q)

Set-up
- Measurement matrix Φ: φ_ij ~ CN(0,1), with unit-norm columns
- Γ = supp(X): K elements drawn randomly from {1,...,N}
- Signal matrix X: |x_ij| = 10 and Arg(x_ij) ~ Unif(0,2π) for i ∈ Γ
- For l = 1,...,L MC trials (L = 1000), compute X̂^[l] and Γ̂^[l]

Performance measures
- mean squared error: MSE(X̂) = (1/(LQ)) Σ_{l=1}^{L} ||X̂^[l] - X^[l]||^2
- probability of exact recovery: PER = (1/L) Σ_{l=1}^{L} I(Γ̂^[l] = Γ^[l])
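The two performance measures translate directly into code. A minimal sketch (function names are illustrative; MSE is reported in dB as in the plots):

```python
import numpy as np

def mse_db(X_hats, X_trues, Q):
    """Average squared Frobenius error over L trials, normalized by LQ, in dB."""
    L = len(X_hats)
    mse = sum(np.linalg.norm(Xh - Xt) ** 2
              for Xh, Xt in zip(X_hats, X_trues)) / (L * Q)
    return 10.0 * np.log10(mse)

def per(supports_hat, supports_true):
    """Probability of exact recovery: fraction of trials with Gamma_hat == Gamma."""
    hits = sum(set(g) == set(t) for g, t in zip(supports_hat, supports_true))
    return hits / len(supports_hat)
```

For instance, two Q = 2 trials with squared errors 4 and 0 give MSE = 4/(2*2) = 1, i.e. 0 dB.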

22 15/24 I.I.D. Gaussian noise, e_ij ~ CN(0,σ^2)

[Figure: MSE (dB) vs SNR (dB) for Q = 16, and PER vs Q (number of vectors) at SNR = 5 dB; methods: SNIHT(2,2), SNIHT(2,1), SNIHT(1,1), HUB-SNIHT.]

SNIHT(2,2) is best, but SNIHT(2,1) and HUB-SNIHT (threshold c = 1.517) lose only 0.09 dB in MSE (Q = 16).

23 16/24 I.I.D. t-distributed noise, e_ij ~ Ct_ν(0,σ^2), σ^2 = Med_F(|e_i|^2)

[Figure: MSE (dB) at SNR(σ) = 30 dB and PER at SNR(σ) = 10 dB (ν = 3), plotted against ν (degrees of freedom) and Q (number of vectors); methods: SNIHT(2,2), SNIHT(2,1), SNIHT(1,1), HUB-SNIHT.]

Best performer: SNIHT(1,1).

24 17/24 Row-i.i.d. compound Gaussian noise with inverse Gaussian texture, e_(i) ~ CIG_λ(0, σ^2 I_Q)

[Figure: MSE (dB) vs SNR (dB); methods: SNIHT(2,2), SNIHT(2,1), SNIHT(1,1), HUB-SNIHT.]

Best performer: SNIHT(2,1).

25 Outline
1. Introduction
2. Nonparametric sparse recovery
3. Simulation studies
4. Source localization with Sensor Arrays

26 18/24 Sensor array and narrowband, farfield source model

M sensors, K sources (M > K). Model for the sensor measurements at time instant t:

y(t) = A(θ)x(t) + e(t)

- steering matrix A(θ) = ( a(θ_1) ... a(θ_K) )
- unknown (distinct) directions of arrival (DOAs) θ_1, ..., θ_K
- known array manifold a(θ) (e.g., ULA)
- unknown signal waveforms x(t) = (x_1(t), ..., x_K(t))

Objective: estimate the DOAs of the sources given the snapshots y(t_1), ..., y(t_Q) and the number of sources K impinging on the sensor array.

27 19/24 Multichannel sparse recovery approach to source localization

Construct an M x N overcomplete steering matrix A(θ̃) for a grid of all source locations θ̃_1, ..., θ̃_N of interest, and suppose that θ̃ contains the true DOAs to some accuracy. The array model in matrix form then rewrites as

Y = ( y(t_1) ... y(t_Q) ) = A(θ̃)X + E,

where X ∈ C^{N x Q} is K-sparse, with the K nonzero rows containing the source waveforms at time instants t_1, ..., t_Q.

⇒ K-sparse MMV model
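The overcomplete steering matrix for the half-wavelength ULA used later can be sketched as follows. The phase convention a(θ) = exp(-jπ m sin θ), m = 0,...,M-1, is an assumption (sign conventions vary); the 2-degree grid matches the simulation set-up:

```python
import numpy as np

def ula_steering_matrix(M, grid_deg):
    """M x N steering matrix for a half-wavelength ULA over a DOA grid (degrees)."""
    theta = np.deg2rad(np.asarray(grid_deg, dtype=float))
    m = np.arange(M)[:, None]                        # sensor index as a column
    return np.exp(-1j * np.pi * m * np.sin(theta))   # column n is a(theta_n)

grid = np.arange(-90, 91, 2)        # uniform 2-degree grid on [-90, 90]
A = ula_steering_matrix(20, grid)   # M = 20 sensors, N = 91 grid points
```

Each column has unit-modulus entries, and the broadside column (θ = 0) is the all-ones vector.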


29 19/24 Multichannel sparse recovery approach to source localization (cont.)

⇒ Finding the DOAs is equivalent to identifying Γ = supp(X). In the multichannel sparse recovery formulation A(θ̃) is completely known, and we can use the SNIHT methods to identify the support.


31 20/24 Simulation set-up

- ULA of M = 20 sensors with half-wavelength interelement spacing
- K = 2 independent Gaussian sources at θ_1 = 0° and θ_2 = 8°
- Temporally/spatially white Gaussian error terms; low SNR = 0 dB
- Uniform grid θ̃ on [-90°, 90°] with 2° spacing

We compute PER rates and relative frequencies of the found DOA estimates on the grid over 1000 Monte Carlo runs.

32 21/24 Number of snapshots Q = 4

[Figure: relative frequency of DOA estimates on the 2° sampling grid; true DOAs at θ_1 = 0° and θ_2 = 8°; methods: SNIHT(2,2), SNIHT(1,1), SNIHT(2,1).]

PER rates:
1. SNIHT(1,1) = 81%
2. SNIHT(2,1) = 73.1%
3. HUB-SNIHT = 42.6%
4. SNIHT = 37.6%

Best performer: SNIHT(1,1).

33 22/24 Number of snapshots Q = 40

[Figure: relative frequency of DOA estimates on the 2° sampling grid; true DOAs at θ_1 = 0° and θ_2 = 8°; methods: SNIHT(2,2), SNIHT(1,1), SNIHT(2,1).]

PER rates:
1. SNIHT(1,1) = 100%
2. SNIHT(2,1) = 73.1%
3. HUB-SNIHT = 42.6%
4. SNIHT = 37.6%

Best performer: SNIHT(1,1).




37 23/24 Conclusions

- Robust and nonparametric approaches for simultaneous sparse recovery.
- Different methods, SNIHT(1,1), SNIHT(2,1), HUB-SNIHT, can be selected for the problem at hand (i.i.d./correlated noise, high breakdown, etc.).
- Fast and scalable greedy SNIHT algorithm: the increase in robustness does not imply an increase in computation time!
- Applications in source localization, image denoising, etc.

38 24/24 This talk is based on

E. Ollila (2015a). Nonparametric simultaneous sparse recovery: an application to source localization. EUSIPCO'15, submitted. PDF (ArXiv).
E. Ollila (2015b). Robust simultaneous sparse approximation. A Festschrift in Honor of Hannu Oja, Springer, Sept.

39 References

Blanchard, J. D., Cermak, M., Hanle, D., and Jin, Y. (2014). Greedy algorithms for joint sparse recovery. IEEE Trans. Signal Processing, 62(7).
Chen, J. and Huo, X. (2006). Theoretical results on sparse representations of multiple-measurement vectors. IEEE Trans. Signal Processing, 54(12).
Cotter, S. and Rao, B. (2002). Sparse channel estimation via matching pursuit with application to equalization. IEEE Trans. Comm., 50(3).
Duarte, M. F. and Eldar, Y. C. (2011). Structured compressed sensing: From theory to applications. IEEE Trans. Signal Processing, 59(9).
Eldar, Y. C. and Rauhut, H. (2010). Average case analysis of multichannel sparse recovery using convex relaxation. IEEE Trans. Inform. Theory, 56(1).
Gorodnitsky, I., George, J., and Rao, B. (1995). Neuromagnetic source imaging with FOCUSS: a recursive weighted minimum norm algorithm. J. Electroencephalogr. Clin. Neurophysiol., 95(4).

40 24/24 References (cont.)

Gribonval, R. and Zibulevsky, M. (2010). Sparse component analysis. In Handbook of Blind Source Separation. Academic Press, Oxford, UK.
Malioutov, D., Çetin, M., and Willsky, A. S. (2005). A sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Trans. Signal Processing, 53(8).
Tropp, J. A. (2006). Algorithms for simultaneous sparse approximation. Part II: Convex relaxation. Signal Processing, 86.


Exact Signal Recovery from Sparsely Corrupted Measurements through the Pursuit of Justice Exact Signal Recovery from Sparsely Corrupted Measurements through the Pursuit of Justice Jason N. Laska, Mark A. Davenport, Richard G. Baraniuk Department of Electrical and Computer Engineering Rice University

More information

Source Reconstruction for 3D Bioluminescence Tomography with Sparse regularization

Source Reconstruction for 3D Bioluminescence Tomography with Sparse regularization 1/33 Source Reconstruction for 3D Bioluminescence Tomography with Sparse regularization Xiaoqun Zhang xqzhang@sjtu.edu.cn Department of Mathematics/Institute of Natural Sciences, Shanghai Jiao Tong University

More information

New Applications of Sparse Methods in Physics. Ra Inta, Centre for Gravitational Physics, The Australian National University

New Applications of Sparse Methods in Physics. Ra Inta, Centre for Gravitational Physics, The Australian National University New Applications of Sparse Methods in Physics Ra Inta, Centre for Gravitational Physics, The Australian National University 2 Sparse methods A vector is S-sparse if it has at most S non-zero coefficients.

More information

Blind Compressed Sensing

Blind Compressed Sensing 1 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE arxiv:1002.2586v2 [cs.it] 28 Apr 2010 Abstract The fundamental principle underlying compressed sensing is that a signal,

More information

Sensing systems limited by constraints: physical size, time, cost, energy

Sensing systems limited by constraints: physical size, time, cost, energy Rebecca Willett Sensing systems limited by constraints: physical size, time, cost, energy Reduce the number of measurements needed for reconstruction Higher accuracy data subject to constraints Original

More information

c 2011 International Press Vol. 18, No. 1, pp , March DENNIS TREDE

c 2011 International Press Vol. 18, No. 1, pp , March DENNIS TREDE METHODS AND APPLICATIONS OF ANALYSIS. c 2011 International Press Vol. 18, No. 1, pp. 105 110, March 2011 007 EXACT SUPPORT RECOVERY FOR LINEAR INVERSE PROBLEMS WITH SPARSITY CONSTRAINTS DENNIS TREDE Abstract.

More information

SCRIBERS: SOROOSH SHAFIEEZADEH-ABADEH, MICHAËL DEFFERRARD

SCRIBERS: SOROOSH SHAFIEEZADEH-ABADEH, MICHAËL DEFFERRARD EE-731: ADVANCED TOPICS IN DATA SCIENCES LABORATORY FOR INFORMATION AND INFERENCE SYSTEMS SPRING 2016 INSTRUCTOR: VOLKAN CEVHER SCRIBERS: SOROOSH SHAFIEEZADEH-ABADEH, MICHAËL DEFFERRARD STRUCTURED SPARSITY

More information

Machine Learning for Signal Processing Sparse and Overcomplete Representations

Machine Learning for Signal Processing Sparse and Overcomplete Representations Machine Learning for Signal Processing Sparse and Overcomplete Representations Abelino Jimenez (slides from Bhiksha Raj and Sourish Chaudhuri) Oct 1, 217 1 So far Weights Data Basis Data Independent ICA

More information

Sparsity Models. Tong Zhang. Rutgers University. T. Zhang (Rutgers) Sparsity Models 1 / 28

Sparsity Models. Tong Zhang. Rutgers University. T. Zhang (Rutgers) Sparsity Models 1 / 28 Sparsity Models Tong Zhang Rutgers University T. Zhang (Rutgers) Sparsity Models 1 / 28 Topics Standard sparse regression model algorithms: convex relaxation and greedy algorithm sparse recovery analysis:

More information

Compressive Sensing of Temporally Correlated Sources Using Isotropic Multivariate Stable Laws

Compressive Sensing of Temporally Correlated Sources Using Isotropic Multivariate Stable Laws Compressive Sensing of Temporally Correlated Sources Using Isotropic Multivariate Stable Laws George Tzagkarakis EONOS Investment Technologies Paris, France and Institute of Computer Science Foundation

More information

Stopping Condition for Greedy Block Sparse Signal Recovery

Stopping Condition for Greedy Block Sparse Signal Recovery Stopping Condition for Greedy Block Sparse Signal Recovery Yu Luo, Ronggui Xie, Huarui Yin, and Weidong Wang Department of Electronics Engineering and Information Science, University of Science and Technology

More information

Stochastic geometry and random matrix theory in CS

Stochastic geometry and random matrix theory in CS Stochastic geometry and random matrix theory in CS IPAM: numerical methods for continuous optimization University of Edinburgh Joint with Bah, Blanchard, Cartis, and Donoho Encoder Decoder pair - Encoder/Decoder

More information

Quantized Iterative Hard Thresholding:

Quantized Iterative Hard Thresholding: Quantized Iterative Hard Thresholding: ridging 1-bit and High Resolution Quantized Compressed Sensing Laurent Jacques, Kévin Degraux, Christophe De Vleeschouwer Louvain University (UCL), Louvain-la-Neuve,

More information

Sparse linear models

Sparse linear models Sparse linear models Optimization-Based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_spring16 Carlos Fernandez-Granda 2/22/2016 Introduction Linear transforms Frequency representation Short-time

More information

Machine Learning for Signal Processing Sparse and Overcomplete Representations. Bhiksha Raj (slides from Sourish Chaudhuri) Oct 22, 2013

Machine Learning for Signal Processing Sparse and Overcomplete Representations. Bhiksha Raj (slides from Sourish Chaudhuri) Oct 22, 2013 Machine Learning for Signal Processing Sparse and Overcomplete Representations Bhiksha Raj (slides from Sourish Chaudhuri) Oct 22, 2013 1 Key Topics in this Lecture Basics Component-based representations

More information

Distributed Inexact Newton-type Pursuit for Non-convex Sparse Learning

Distributed Inexact Newton-type Pursuit for Non-convex Sparse Learning Distributed Inexact Newton-type Pursuit for Non-convex Sparse Learning Bo Liu Department of Computer Science, Rutgers Univeristy Xiao-Tong Yuan BDAT Lab, Nanjing University of Information Science and Technology

More information

Signal Recovery from Permuted Observations

Signal Recovery from Permuted Observations EE381V Course Project Signal Recovery from Permuted Observations 1 Problem Shanshan Wu (sw33323) May 8th, 2015 We start with the following problem: let s R n be an unknown n-dimensional real-valued signal,

More information

A Compressive Sensing Based Compressed Neural Network for Sound Source Localization

A Compressive Sensing Based Compressed Neural Network for Sound Source Localization A Compressive Sensing Based Compressed Neural Network for Sound Source Localization Mehdi Banitalebi Dehkordi Speech Processing Research Lab Yazd University Yazd, Iran mahdi_banitalebi@stu.yazduni.ac.ir

More information

Data Fusion in Large Arrays of Microsensors 1

Data Fusion in Large Arrays of Microsensors 1 Approved for pulic release; distriution is unlimited. Data Fusion in Large Arrays of Microsensors Alan S. Willsky, Dmitry M. Malioutov, and Müjdat Çetin Room 35-433, M.I.T. Camridge, MA 039 willsky@mit.edu

More information

Linear Convergence of Adaptively Iterative Thresholding Algorithms for Compressed Sensing

Linear Convergence of Adaptively Iterative Thresholding Algorithms for Compressed Sensing Linear Convergence of Adaptively Iterative Thresholding Algorithms for Compressed Sensing Yu Wang, Jinshan Zeng, Zhimin Peng, Xiangyu Chang, and Zongben Xu arxiv:48.689v3 [math.oc] 6 Dec 5 Abstract This

More information

SGN Advanced Signal Processing Project bonus: Sparse model estimation

SGN Advanced Signal Processing Project bonus: Sparse model estimation SGN 21006 Advanced Signal Processing Project bonus: Sparse model estimation Ioan Tabus Department of Signal Processing Tampere University of Technology Finland 1 / 12 Sparse models Initial problem: solve

More information

PHASE TRANSITION OF JOINT-SPARSE RECOVERY FROM MULTIPLE MEASUREMENTS VIA CONVEX OPTIMIZATION

PHASE TRANSITION OF JOINT-SPARSE RECOVERY FROM MULTIPLE MEASUREMENTS VIA CONVEX OPTIMIZATION PHASE TRASITIO OF JOIT-SPARSE RECOVERY FROM MUTIPE MEASUREMETS VIA COVEX OPTIMIZATIO Shih-Wei Hu,, Gang-Xuan in, Sung-Hsien Hsieh, and Chun-Shien u Institute of Information Science, Academia Sinica, Taipei,

More information

Greedy Subspace Pursuit for Joint Sparse. Recovery

Greedy Subspace Pursuit for Joint Sparse. Recovery Greedy Subspace Pursuit for Joint Sparse 1 Recovery Kyung Su Kim, Student Member, IEEE, Sae-Young Chung, Senior Member, IEEE arxiv:1601.07087v1 [cs.it] 6 Jan 016 Abstract In this paper, we address the

More information

WIRELESS sensor networks (WSNs) have attracted

WIRELESS sensor networks (WSNs) have attracted On the Benefit of using ight Frames for Robust Data ransmission and Compressive Data Gathering in Wireless Sensor Networks Wei Chen, Miguel R. D. Rodrigues and Ian J. Wassell Computer Laboratory, University

More information

Sparse DOA estimation with polynomial rooting

Sparse DOA estimation with polynomial rooting Sparse DOA estimation with polynomial rooting Angeliki Xenaki Department of Applied Mathematics and Computer Science Technical University of Denmark 28 Kgs. Lyngby, Denmark Email: anxe@dtu.dk Peter Gerstoft

More information

DOA Estimation of Quasi-Stationary Signals Using a Partly-Calibrated Uniform Linear Array with Fewer Sensors than Sources

DOA Estimation of Quasi-Stationary Signals Using a Partly-Calibrated Uniform Linear Array with Fewer Sensors than Sources Progress In Electromagnetics Research M, Vol. 63, 185 193, 218 DOA Estimation of Quasi-Stationary Signals Using a Partly-Calibrated Uniform Linear Array with Fewer Sensors than Sources Kai-Chieh Hsu and

More information

A Performance Guarantee for Orthogonal Matching Pursuit Using Mutual Coherence

A Performance Guarantee for Orthogonal Matching Pursuit Using Mutual Coherence oname manuscript o. (will be inserted by the editor) A Performance Guarantee for Orthogonal Matching Pursuit Using Mutual Coherence Mohammad Emadi Ehsan Miandji Jonas Unger the date of receipt and acceptance

More information

Robustly Stable Signal Recovery in Compressed Sensing with Structured Matrix Perturbation

Robustly Stable Signal Recovery in Compressed Sensing with Structured Matrix Perturbation Robustly Stable Signal Recovery in Compressed Sensing with Structured Matri Perturbation Zai Yang, Cishen Zhang, and Lihua Xie, Fellow, IEEE arxiv:.7v [cs.it] 4 Mar Abstract The sparse signal recovery

More information

Fast Algorithms for Sparse Recovery with Perturbed Dictionary

Fast Algorithms for Sparse Recovery with Perturbed Dictionary Fast Algorithms for Sparse Recovery with Perturbed Dictionary Xuebing Han, Hao Zhang, Gang Li arxiv:.637v3 cs.it May Abstract In this paper, we account for approaches of sparse recovery from large underdetermined

More information

Large-Scale L1-Related Minimization in Compressive Sensing and Beyond

Large-Scale L1-Related Minimization in Compressive Sensing and Beyond Large-Scale L1-Related Minimization in Compressive Sensing and Beyond Yin Zhang Department of Computational and Applied Mathematics Rice University, Houston, Texas, U.S.A. Arizona State University March

More information

Robust Subspace DOA Estimation for Wireless Communications

Robust Subspace DOA Estimation for Wireless Communications Robust Subspace DOA Estimation for Wireless Communications Samuli Visuri Hannu Oja ¾ Visa Koivunen Laboratory of Signal Processing Computer Technology Helsinki Univ. of Technology P.O. Box 3, FIN-25 HUT

More information

A Review of Sparsity-Based Methods for Analysing Radar Returns from Helicopter Rotor Blades

A Review of Sparsity-Based Methods for Analysing Radar Returns from Helicopter Rotor Blades A Review of Sparsity-Based Methods for Analysing Radar Returns from Helicopter Rotor Blades Ngoc Hung Nguyen 1, Hai-Tan Tran 2, Kutluyıl Doğançay 1 and Rocco Melino 2 1 University of South Australia 2

More information

Solving Underdetermined Linear Equations and Overdetermined Quadratic Equations (using Convex Programming)

Solving Underdetermined Linear Equations and Overdetermined Quadratic Equations (using Convex Programming) Solving Underdetermined Linear Equations and Overdetermined Quadratic Equations (using Convex Programming) Justin Romberg Georgia Tech, ECE Caltech ROM-GR Workshop June 7, 2013 Pasadena, California Linear

More information

L-statistics based Modification of Reconstruction Algorithms for Compressive Sensing in the Presence of Impulse Noise

L-statistics based Modification of Reconstruction Algorithms for Compressive Sensing in the Presence of Impulse Noise L-statistics based Modification of Reconstruction Algorithms for Compressive Sensing in the Presence of Impulse Noise Srdjan Stanković, Irena Orović and Moeness Amin 1 Abstract- A modification of standard

More information

Signal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit

Signal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit Signal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit Deanna Needell and Roman Vershynin Abstract We demonstrate a simple greedy algorithm that can reliably

More information

Stability of Modified-CS over Time for recursive causal sparse reconstruction

Stability of Modified-CS over Time for recursive causal sparse reconstruction Stability of Modified-CS over Time for recursive causal sparse reconstruction Namrata Vaswani Department of Electrical and Computer Engineering Iowa State University http://www.ece.iastate.edu/ namrata

More information

Computable Performance Analysis of Sparsity Recovery with Applications

Computable Performance Analysis of Sparsity Recovery with Applications Computable Performance Analysis of Sparsity Recovery with Applications Arye Nehorai Preston M. Green Department of Electrical & Systems Engineering Washington University in St. Louis, USA European Signal

More information

Optimization for Compressed Sensing

Optimization for Compressed Sensing Optimization for Compressed Sensing Robert J. Vanderbei 2014 March 21 Dept. of Industrial & Systems Engineering University of Florida http://www.princeton.edu/ rvdb Lasso Regression The problem is to solve

More information

Stability and Robustness of Weak Orthogonal Matching Pursuits

Stability and Robustness of Weak Orthogonal Matching Pursuits Stability and Robustness of Weak Orthogonal Matching Pursuits Simon Foucart, Drexel University Abstract A recent result establishing, under restricted isometry conditions, the success of sparse recovery

More information

Sparse Signal Recovery Using Markov Random Fields

Sparse Signal Recovery Using Markov Random Fields Sparse Signal Recovery Using Markov Random Fields Volkan Cevher Rice University volkan@rice.edu Chinmay Hegde Rice University chinmay@rice.edu Marco F. Duarte Rice University duarte@rice.edu Richard G.

More information

Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit

Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit arxiv:0707.4203v2 [math.na] 14 Aug 2007 Deanna Needell Department of Mathematics University of California,

More information

Exact Joint Sparse Frequency Recovery via Optimization Methods

Exact Joint Sparse Frequency Recovery via Optimization Methods IEEE TRANSACTIONS ON SIGNAL PROCESSING Exact Joint Sparse Frequency Recovery via Optimization Methods Zai Yang, Member, IEEE, and Lihua Xie, Fellow, IEEE arxiv:45.6585v [cs.it] 3 May 6 Abstract Frequency

More information

CoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles

CoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles CoSaMP: Greedy Signal Recovery and Uniform Uncertainty Principles SIAM Student Research Conference Deanna Needell Joint work with Roman Vershynin and Joel Tropp UC Davis, May 2008 CoSaMP: Greedy Signal

More information

A ROBUST BEAMFORMER BASED ON WEIGHTED SPARSE CONSTRAINT

A ROBUST BEAMFORMER BASED ON WEIGHTED SPARSE CONSTRAINT Progress In Electromagnetics Research Letters, Vol. 16, 53 60, 2010 A ROBUST BEAMFORMER BASED ON WEIGHTED SPARSE CONSTRAINT Y. P. Liu and Q. Wan School of Electronic Engineering University of Electronic

More information

Block-sparse Solutions using Kernel Block RIP and its Application to Group Lasso

Block-sparse Solutions using Kernel Block RIP and its Application to Group Lasso Block-sparse Solutions using Kernel Block RIP and its Application to Group Lasso Rahul Garg IBM T.J. Watson research center grahul@us.ibm.com Rohit Khandekar IBM T.J. Watson research center rohitk@us.ibm.com

More information

arxiv: v1 [cs.it] 21 Feb 2013

arxiv: v1 [cs.it] 21 Feb 2013 q-ary Compressive Sensing arxiv:30.568v [cs.it] Feb 03 Youssef Mroueh,, Lorenzo Rosasco, CBCL, CSAIL, Massachusetts Institute of Technology LCSL, Istituto Italiano di Tecnologia and IIT@MIT lab, Istituto

More information

Robust Principal Component Analysis

Robust Principal Component Analysis ELE 538B: Mathematics of High-Dimensional Data Robust Principal Component Analysis Yuxin Chen Princeton University, Fall 2018 Disentangling sparse and low-rank matrices Suppose we are given a matrix M

More information