On the Role of the Properties of the Nonzero Entries on Sparse Signal Recovery
Yuzhe Jin and Bhaskar D. Rao
Department of Electrical and Computer Engineering, University of California at San Diego, La Jolla, CA, USA

Abstract—We study the role of the nonzero entries in the performance limits of support recovery of sparse signals. The key to our results is the recently studied connection between sparse signal recovery and multiple-user communication. By leveraging the concept of outage capacity in information theory, we explicitly characterize the impact of the probability distribution imposed on the nonzero entries of the sparse signal on support recovery. When multiple measurement vectors (MMV) are available, we show that the identification of the nonzero rows of the signal is closely connected to decoding the messages from multiple users over a single-input multiple-output channel. Necessary and sufficient conditions for support (indices of nonzero rows) recovery are provided, and the results allow us to understand the role of correlation of the nonzero entries as well as the role of the rank of the matrix formed from the nonzero entries.

I. INTRODUCTION

Suppose the signal of interest is X ∈ R^{m×l}, and X is said to be sparse when only a few of its rows contain nonzero elements whereas the rest consist of zero elements. One wishes to estimate X via the noisy measurements

Y = AX + Z    (1)

where A ∈ R^{n×m} is the measurement matrix and Z ∈ R^{n×l} is the measurement noise. Specifically, when l = 1, this problem is usually termed sparse signal recovery with a single measurement vector (SMV); when l > 1, it is commonly referred to as sparse signal recovery with multiple measurement vectors (MMV) [1].
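The measurement model (1) can be simulated directly. The following sketch (my illustration, not from the paper; the dimension values and the Gaussian nonzero rows are arbitrary choices) generates a row-sparse X and its noisy MMV measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions follow the text: X in R^{m x l} with k nonzero rows,
# A in R^{n x m}, Z in R^{n x l}. The numeric values are arbitrary.
m, n, l, k = 100, 30, 5, 4
sigma_a, sigma_z = 1.0, 0.1          # std. deviations of A-entries and noise

support = rng.choice(m, size=k, replace=False)   # S_1, ..., S_k, without replacement
X = np.zeros((m, l))
X[support, :] = rng.standard_normal((k, l))      # the nonzero rows (the matrix W)

A = sigma_a * rng.standard_normal((n, m))        # i.i.d. N(0, sigma_a^2) entries
Z = sigma_z * rng.standard_normal((n, l))        # i.i.d. N(0, sigma_z^2) noise
Y = A @ X + Z                                    # model (1): Y = AX + Z

print(Y.shape)                                   # one measurement vector per column of Y
```

Each column of Y is one measurement vector; the SMV case is recovered by setting l = 1.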
This problem has received much attention owing to its many applications, such as compressed sensing [2], [3], biomagnetic inverse problems [4], [5], image processing [6], [7], bandlimited extrapolation and spectral estimation [8], robust regression and outlier detection [9], speech processing [10], echo cancellation [11], and wireless communication [12].

A. Brief Background on the SMV Problem

For the problem of sparse signal recovery with SMV, computationally efficient algorithms have been proposed to find or approximate the sparse solution X ∈ R^m in various settings. A partial list includes matching pursuit [13], orthogonal matching pursuit (OMP) [14], lasso [15], basis pursuit [16], FOCUSS [4], iteratively reweighted l1 minimization [17], iteratively reweighted l2 minimization [18], and sparse Bayesian learning (SBL) [19], [20]. Analysis has been developed to shed light on the performance of these practical algorithms. For example, Donoho [2], Donoho, Elad, and Temlyakov [21], Candès and Tao [22], and Candès, Romberg, and Tao [23] presented sufficient conditions for l1-norm minimization algorithms, including basis pursuit and its variant in the noisy setting, to successfully recover the sparse signals with respect to different performance metrics. Tropp [24], Tropp and Gilbert [25], and Donoho, Tsaig, Drori, and Starck [26] studied the performance of greedy sequential selection methods such as matching pursuit and its variants. Wainwright [27] and Zhao and Yu [28] provided sufficient and necessary conditions for lasso to recover the support of the sparse signal, i.e., the set of indices of the nonzero entries.
On the other hand, from an information theoretic perspective, a series of papers, for instance, Wainwright [29], Fletcher, Rangan, and Goyal [30], Wang, Wainwright, and Ramchandran [31], Akçakaya and Tarokh [32], and Jin, Kim, and Rao [33], provided sufficient and necessary conditions that indicate the performance limits of optimal algorithms for support recovery, regardless of computational complexity.

B. Brief Background on the MMV Problem

A fast emerging trend is the capability of collecting multiple measurements in an increasing number of applications, such as magnetoencephalography (MEG) and electroencephalography (EEG) [34], [35], multivariate regression [36], and direction-of-arrival estimation [37]. This gives rise to the problem of sparse signal recovery with multiple measurement vectors. Practical algorithms have been developed to address the new challenges in this scenario. One class of algorithms for solving the MMV problem can be viewed as straightforward extensions of their counterparts for the SMV problem. To sample a few, M-OMP [1], [38], M-FOCUSS [1], the l1/lp minimization method¹ [39], multivariate group lasso [36], and M-SBL [40] can all be viewed as examples of this kind.

¹ This method is sometimes referred to as l2/l1 minimization, due to the naming convention in a specific paper. In this paper, we use l1/lp to indicate the cost of a matrix B defined as Σ_i (Σ_j |b_{i,j}|^p)^{1/p}.

Another class of algorithms additionally makes an explicit effort to exploit the structure underlying the sparse signal X, such as the temporal correlation of the nonzero entries of a row, which would otherwise be unavailable when l = 1, to aim for better recovery performance. For instance, the improved M-FOCUSS algorithm [35] and auto-regressive sparse Bayesian learning (AR-SBL) [41] both have the capability of explicitly taking advantage of the structural properties of X to
improve the recovery performance. Alongside the algorithmic advancement, a series of works has focused on theoretic analysis to support the effectiveness of existing algorithms for the MMV problem. We briefly divide these results into two categories. The first category of theoretic analysis aims at specific practical algorithms for sparse signal recovery with MMV. For example, Chen and Huo [42] discovered sufficient conditions for the l1/lp norm minimization method and orthogonal matching pursuit to exactly recover every sparse signal within a certain sparsity level in the noiseless setting. Eldar and Rauhut [43] also analyzed the performance of sparse recovery using the l1/l2 norm minimization method in the noiseless setting, but the sparse signal was assumed to be randomly distributed according to a certain probability distribution and the performance was averaged over all possible realizations of the sparse signal. Obozinski, Wainwright, and Jordan [36] provided sufficient and necessary conditions for multivariate group lasso to successfully recover the support of the sparse signal in the presence of measurement noise. The second category of performance analysis bears an information theoretic nature: it explores the performance limits that any algorithm, regardless of computational complexity, could possibly achieve. In this regard, Tang and Nehorai [37] employed a hypothesis testing framework with the likelihood ratio test as the optimal decision rule to study how fast the error probability decays. Sufficient and necessary conditions are further identified for successful support recovery in the asymptotic sense.

C. Contributions of this Paper

In this paper, we develop sharp performance tradeoffs involving the signal dimension m, the number of nonzero rows k, the number of measurements per measurement vector n, the number of measurement vectors l, and especially the nonzero entries, for exact support recovery in the noisy setting.
Specifically, we consider two cases. The first case is the support recovery problem with SMV, where the nonzero entries are modeled as random quantities. In this case, due to the randomness of the nonzero entries, we provide an asymptotic lower bound on the probability of successful support recovery. The second case is the support recovery problem with MMV, where the nonzero entries are assumed to be fixed. In this case, we show that n = (log m)/c(X) measurements per measurement vector are sufficient and necessary, and we give a complete characterization of c(X) that explicitly depends on the elements of the nonzero rows of X. Together with the interpretations we provide, we demonstrate the potential performance improvement enabled by having MMV, and hence bolster its usage in practical applications.

Our main results are inspired by the analogy to wireless communication over the additive white Gaussian noise (AWGN) single-input multiple-output (SIMO) multiple access channel (MAC). According to this connection, the columns of the measurement matrix form a common codebook for all senders. Codewords from the senders are individually multiplied by unknown channel gains, which correspond to the nonzero entries of X. Then, noise-corrupted linear combinations of these codewords are observed by multiple receivers, which correspond to the multiple measurement vectors. Thus, the problem of support recovery can be interpreted as multiple receivers jointly decoding the messages sent by multiple senders. With appropriate modifications, the techniques for deriving the capacity of a SIMO MAC and the outage capacity of a slow fading channel can be leveraged to provide performance tradeoffs for support recovery.

D. Notation

R^m denotes the m-dimensional real Euclidean space. [k] denotes the set {1, 2, ..., k}. |S| denotes the cardinality of a set S, and ||x|| denotes the l2-norm of a vector x. For a matrix A, A_T denotes the submatrix formed by the rows of A indexed by the set T.
Let 1 denote a column vector whose elements are all 1s, whose length can be determined from the context.

II. PROBLEM FORMULATION

Let W ∈ R^{k×l}, where w_{i,j} ≠ 0 for all i, j. Note that W can be either fixed or random, where in the latter case every element of W has bounded support. Let S = (S_1, ..., S_k) ∈ [m]^k be such that S_1, ..., S_k are chosen uniformly at random from [m] without replacement. In particular, {S_1, ..., S_k} is uniformly distributed over all size-k subsets of [m]. Then, the signal of interest X = X(W, S) is generated as

X_{s,i} = w_{j,i} if s = S_j, and X_{s,i} = 0 if s ∉ {S_1, ..., S_k}.    (2)

The support of X, denoted by supp(X), is defined as the set of indices corresponding to the nonzero rows of X, i.e., supp(X) = {S_1, ..., S_k}. According to the signal model (2), |supp(X)| = k. Throughout this paper, we assume k is known. We measure X through the linear operation (1). We assume that the elements of A are independently and identically distributed (i.i.d.) according to the Gaussian distribution N(0, σ_a²), and the noise entries Z_{i,j} are i.i.d. according to N(0, σ_z²). Upon observing the noisy measurement Y, the goal is to recover the support of X. A support recovery map is defined as

d : R^{n×l} → {T ⊆ [m] : |T| = k}.    (3)

Given the signal model (2), the measurement model (1), and the support recovery map (3), we define the average probability of error by P{d(Y) ≠ supp(X(W, S))}. Note that the probability is averaged over the locations of the nonzero rows S, the measurement matrix A, the measurement noise Z, and the possible randomness of the nonzero signal matrix W.

III. CONNECTION TO MULTIUSER COMMUNICATION

We introduce an important interpretation of the sparse signal recovery problem as a communication problem over the Gaussian SIMO MAC, which extends our earlier work [33]. This analogy motivates the intuition behind our main results and facilitates the development of the proof techniques.
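As a concrete instance of the formulation, the sketch below (my own toy experiment; the correlation-based decoder is an illustration, not one analyzed in the paper) draws (S, W, A, Z) as in the model and empirically evaluates the error event {d(Y) ≠ supp(X)}:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, l, k = 60, 25, 4, 3
sigma_a, sigma_z = 1.0, 0.05

errors, trials = 0, 200
for _ in range(trials):
    S = rng.choice(m, size=k, replace=False)         # uniform support, per the model
    X = np.zeros((m, l))
    X[S, :] = 1.0 + rng.random((k, l))               # bounded, strictly nonzero rows W
    A = sigma_a * rng.standard_normal((n, m))
    Y = A @ X + sigma_z * rng.standard_normal((n, l))
    # Toy support recovery map d(Y): keep the k columns of A most correlated with Y.
    scores = np.linalg.norm(A.T @ Y, axis=1)
    d_hat = set(np.argsort(scores)[-k:].tolist())
    errors += (d_hat != set(S.tolist()))

print("empirical P{d(Y) != supp(X)}:", errors / trials)
```

Averaging the indicator of the error event over many draws estimates exactly the average probability of error defined above, here for one particular (suboptimal) choice of d.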
A. A Brief Review of the SIMO MAC

Consider the following wireless communication scenario. Suppose k users wish to transmit information to a set of l common receivers. Each sender i has access to a codebook C^(i) = {c^(i)_1, c^(i)_2, ..., c^(i)_{m^(i)}}, where c^(i)_j ∈ R^n is a codeword and m^(i) is the number of codewords in the codebook. The rate of the ith sender is R^(i) = (log m^(i))/n. To transmit information, each sender chooses a codeword from its codebook, and all senders transmit their codewords simultaneously over a SIMO MAC:

Y_{j,i} = h_{j,1} X_{1,i} + h_{j,2} X_{2,i} + ... + h_{j,k} X_{k,i} + Z_{j,i},    (4)

for i = 1, 2, ..., n and j = 1, 2, ..., l, where X_{q,i} denotes the input symbol from the qth sender at the ith use of the channel, h_{j,q} denotes the channel gain between the qth sender and the jth receiver, Z_{j,i} is additive Gaussian noise, i.i.d. according to N(0, 1), and Y_{j,i} is the channel output at the jth receiver at the ith use of the channel. After receiving Y_{j,1}, ..., Y_{j,n} at each receiver j ∈ [l], the receivers work jointly to determine the codewords transmitted by each sender. Since the senders interfere with each other, there is an inherent tradeoff among their operating rates. The notion of the capacity region is introduced to capture this tradeoff by characterizing all possible rate tuples (R^(1), R^(2), ..., R^(k)) at which reliable communication can be achieved with diminishing probability of decoding error. We discuss two different cases based on the assumption on the channel gains. In the first case, we assume the channel gains are fixed and known at the receivers. Assuming each sender obeys the power constraint ||c^(i)_j||²/n ≤ σ_c² for all j ∈ [m^(i)] and all i ∈ [k], the capacity region of the SIMO MAC [44], [45] is

{ (R^(1), ..., R^(k)) : Σ_{i∈T} R^(i) ≤ (1/2) log det( I + σ_c² Σ_{i∈T} h_i h_i^T ), for all T ⊆ [k] },    (5)

where h_i ≜ [h_{1,i}, ..., h_{l,i}]^T for i ∈ [k]. In the second case, we assume the channel gains are random according to a certain distribution.
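The subset constraints in (5) can be enumerated explicitly. A small sketch (my code; the toy gain matrix and σ_c² = 1 are arbitrary, and unit-variance noise is assumed as in (4)):

```python
import numpy as np
from itertools import combinations

def mac_sum_rate_bounds(H, sigma_c2):
    """Sum-rate bounds of (5). H is l x k; its i-th column is the gain vector h_i.
    Returns {T: (1/2) log det(I + sigma_c^2 * sum_{i in T} h_i h_i^T)} over all
    nonempty subsets T of the k senders."""
    l, k = H.shape
    bounds = {}
    for size in range(1, k + 1):
        for T in combinations(range(k), size):
            G = sum(np.outer(H[:, i], H[:, i]) for i in T)   # sum of h_i h_i^T
            bounds[T] = 0.5 * np.log(np.linalg.det(np.eye(l) + sigma_c2 * G))
    return bounds

H = np.array([[1.0, 0.5],
              [0.5, 1.0]])            # l = 2 receivers, k = 2 senders (toy gains)
for T, b in mac_sum_rate_bounds(H, sigma_c2=1.0).items():
    print(T, round(b, 4))
```

For a single sender the bound reduces to (1/2) log(1 + σ_c² ||h_i||²), since det(I + σ_c² h h^T) = 1 + σ_c² ||h||².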
Further, the channel gains are realized once and stay fixed during the entire channel use. As a result, there is a nontrivial possibility that the realized channel gains are too poor to support the target rate. In this case, the channel model (4) is recognized as a slow fading channel, and outage capacity is employed to characterize the performance of this channel [45].

B. Similarities and Differences to Sparse Signal Recovery

Based on the measurement model (1), we can remove the columns of A that correspond to the zero rows of X and obtain the following effective form of the measurement procedure:

Y_j = X_{S_1,j} A_{S_1} + ... + X_{S_k,j} A_{S_k} + Z_j    (6)

for j ∈ [l], where A_i denotes the ith column of A, and Y_j and Z_j denote the jth columns of Y and Z. By contrasting (6) with the SIMO MAC (4), we can draw the following key connections relating the two problems. (i) A nonzero entry as a sender: we can view each nonzero row index S_i as sender i accessing the MAC. (ii) A measurement vector as a receiver: we can view each measurement vector Y_j as receiver j. (iii) X_{S_i,j} as the channel gain: the nonzero entry X_{S_i,j}, i.e., w_{i,j}, plays the role of the channel gain h_{j,i} from the ith sender to the jth receiver. When X_{S_i,j} is assumed to be fixed, it corresponds to a channel with fixed gains. When X_{S_i,j} is assumed to be random, it corresponds to a random channel gain which is realized once and fixed during the entire channel use. In the latter case, it is conceivable that the outage capacity is useful in analyzing the performance limit of sparse signal recovery. (iv) A_i as the codeword: we treat the measurement matrix A as a codebook with each column A_i, i ∈ [m], as a codeword. Each element of A_{S_i} is fed one by one through the channel as input symbols from the ith sender to the l receivers, resulting in n uses of the channel. (v) Similarity of objectives: in the problem of sparse signal recovery, we focus on finding the support {S_1, ..., S_k} of the signal.
In the problem of MAC communication, the receivers need to determine the indices of the codewords, i.e., S_1, ..., S_k, transmitted by the senders. Based on the abovementioned aspects, the two problems share significant similarities, which enables leveraging information theoretic methods for the performance analysis of support recovery of sparse signals. However, there are domain-specific differences, namely the problems of the common codebook and the unknown channel gains [33], between the support recovery problem and the channel coding problem, which should be addressed accordingly to rigorously apply the information theoretic approaches. Based on techniques that are rooted in channel capacity results, but suitably modified to deal with these differences, performance tradeoffs for support recovery of sparse signals can be obtained.

IV. PERFORMANCE TRADEOFFS FOR SUPPORT RECOVERY

We state the main results of the paper. For a fixed W, define the auxiliary constant

c(W) ≜ min_{T ⊆ [k], T ≠ ∅} (1/(2|T|)) log det( I + σ_a² W_T^T W_T ).    (7)

A. SMV with Random Nonzero Entries

We consider the SMV case, i.e., l = 1, where the nonzero entries are randomly drawn according to a certain probability distribution. The following theorem states the performance of support recovery of sparse signals with random signal activities. The proof is presented in [33].

Theorem 1: Suppose W ∈ R^k has bounded support, and lim sup_{m→∞} (log m)/n = r for some constant r > 0. Then, there
exists a sequence of support recovery maps {d^(m)}_{m=k}^∞, d^(m) : R^n → {T ⊆ [m] : |T| = k}, such that

lim sup_{m→∞} P{d^(m)(A^(m) X(W, S) + Z) ≠ supp(X)} ≤ P{c(W) ≤ r}.    (8)

Theorem 1 implies that, in general, rather than having a diminishing error probability, we have to tolerate a certain error probability which is upper bounded by P{c(W) ≤ r}. From the channel coding viewpoint discussed in Section III, we can view r as the rate at which all senders operate, and view c(W) as the channel capacity for a specific realization W. Hence, P{c(W) ≤ r} represents the probability that the channel realization is too poor to support the target rate, which corresponds to the event of channel outage.

B. MMV with Fixed Nonzero Entries

We consider the support recovery of a sequence of sparse signals generated with the same fixed W. The following theorems state the performance tradeoff for this case. The proofs can be found in [46].

Theorem 2: If

lim sup_{m→∞} (log m)/n < c(W),    (9)

then there exists a sequence of support recovery maps {d^(m)}_{m=k}^∞, d^(m) : R^{n×l} → {T ⊆ [m] : |T| = k}, such that

lim_{m→∞} P{d(Y) ≠ supp(X(W, S))} = 0.    (10)

Theorem 3: If

lim sup_{m→∞} (log m)/n > c(W),    (11)

then for any sequence of support recovery maps {d^(m)}_{m=k}^∞, d^(m) : R^{n×l} → {T ⊆ [m] : |T| = k},

lim inf_{m→∞} P{d(Y) ≠ supp(X(W, S))} > 0.    (12)

Theorems 2 and 3 together indicate that n = (log m)/(c(W) ± ϵ) is the sufficient and necessary number of measurements per measurement vector to ensure asymptotically successful support recovery. The constant c(W) explicitly captures the role of the nonzero entries in the performance tradeoff.

C. Performance Improvement via MMV

It has been observed in the existing literature that having MMV can improve the performance of sparse signal recovery [1], [35], [39]. Our analysis provides theoretical support for the improvement enabled by MMV.
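The threshold constant c(W) in (7) is straightforward to evaluate by enumerating subsets. A direct sketch (my implementation, feasible for small k since it scans all 2^k − 1 subsets):

```python
import numpy as np
from itertools import combinations

def c_of_W(W, sigma_a2=1.0):
    """Auxiliary constant (7): the minimum over nonempty T of [k] of
    (1 / (2|T|)) log det(I + sigma_a^2 * W_T^T W_T),
    where W_T is the submatrix of rows of W indexed by T."""
    k, l = W.shape
    best = np.inf
    for size in range(1, k + 1):
        for T in combinations(range(k), size):
            WT = W[list(T), :]
            val = np.log(np.linalg.det(np.eye(l) + sigma_a2 * WT.T @ WT)) / (2 * size)
            best = min(best, val)
    return best

# SMV example: W = 1 in R^4 gives c(W) = log(1 + 4) / 8, attained at T = [k].
print(c_of_W(np.ones((4, 1))))
```

By Theorems 2 and 3, roughly n ≈ (log m)/c(W) measurements per measurement vector are then needed for asymptotically successful support recovery.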
At this point it is useful to note that the SIMO MAC is related to point-to-point MIMO with full spatial multiplexing, i.e., independent data streams from each transmit antenna, and the sum rate is equal to the capacity of the point-to-point MIMO channel, a well studied problem [44], [45]. In particular, the capacity depends on the rank of the MIMO channel and, in the full rank case, scales as the minimum of the numbers of transmit and receive antennas. Relating to the sparse recovery problem, c(W) depends on the rank of W and scales as min(k, l), the minimum of the number of nonzero rows and the number of measurement vectors.

For the purpose of illustration, let us consider three special cases. Let k be an even number. The first case deals with an SMV problem, where W = 1 ∈ R^k. The second case concerns an MMV problem, where W = [1, 1] ∈ R^{k×2}. The third case considers an MMV problem, where W = [1, u] ∈ R^{k×2} and the second column u consists of equal numbers of 1s and −1s. According to Theorem 2, we calculate the upper bound on m such that successful support recovery is attained asymptotically as long as m is within that upper bound. The following table summarizes the results.

    Structure of nonzero matrix          Upper bound on m
    1) SMV:  W = 1 ∈ R^k                 m < (1 + kσ_a²)^{n/(2k)}
    2) MMV:  W = [1, 1] ∈ R^{k×2}        m < (1 + 2kσ_a²)^{n/(2k)}
    3) MMV:  W = [1, u] ∈ R^{k×2}        m < (1 + kσ_a²)^{n/k}

From this table, we have the following observations. First, the cases with MMV enjoy a larger upper bound on m than the SMV case. This means that more positions can be monitored in the sparse signal, which improves the performance of sparse signal recovery. Second, the upper bound in case 3) is larger than that of case 2). By inspecting the nonzero entries, we can see that case 2) has two identical nonzero signal source vectors, which leads to a gain equivalent to doubling the signal-to-noise ratio. On the contrary, the two nonzero source vectors in case 3) are orthogonal.
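The ordering of the three cases can be checked numerically. The sketch below (my code; k = 4 and σ_a² = 1 are arbitrary choices) evaluates the constant (7) for each structure of W:

```python
import numpy as np
from itertools import combinations

def c_of_W(W, sigma_a2=1.0):
    # Constant (7): min over nonempty T of (1/(2|T|)) log det(I + sigma_a^2 W_T^T W_T).
    k, l = W.shape
    return min(
        np.log(np.linalg.det(np.eye(l) + sigma_a2 * W[list(T), :].T @ W[list(T), :]))
        / (2 * len(T))
        for size in range(1, k + 1)
        for T in combinations(range(k), size)
    )

k = 4
W1 = np.ones((k, 1))                                        # case 1: SMV
W2 = np.ones((k, 2))                                        # case 2: identical columns
W3 = np.column_stack([np.ones(k), [1.0, -1.0, 1.0, -1.0]])  # case 3: orthogonal columns
for name, W in [("1) SMV", W1), ("2) identical", W2), ("3) orthogonal", W3)]:
    print(name, c_of_W(W))
```

For k = 4 and σ_a² = 1 this gives c(W) = log(5)/8 ≈ 0.20, log(9)/8 ≈ 0.27, and log(5)/4 ≈ 0.40, respectively: identical columns double the effective SNR inside the logarithm, while orthogonal columns double the exponent, matching the ordering of the upper bounds on m.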
Therefore, the structure of the nonzero signal matrix W, especially its rank, plays an important role in the performance of support recovery. Furthermore, since the performance limit of support recovery is closely related to the nonzero signal matrix W, a practical algorithm should exploit the structure of W in order to achieve better performance. The M-FOCUSS variant with overlapping blocks for locally smooth signals [35] and auto-regressive sparse Bayesian learning (AR-SBL) [41] can be viewed as algorithms of this type. For example, AR-SBL models the nonzero entries as a first-order autoregressive process and attempts to learn the correlation coefficient along with the other parameters. Experimental studies have shown that these algorithms achieve better performance than algorithms that do not explicitly take the structure of W into account.

V. SUMMARY

This paper discussed the performance tradeoffs for support recovery of sparse signals. The key to our results is the connection between sparse signal recovery and SIMO multiuser communication systems. The SIMO MAC capacity and outage capacity proved useful in unveiling the performance limits of
support recovery. Necessary and sufficient conditions were obtained for successful support recovery in the asymptotic sense. Specifically, the roles of the nonzero entries in those conditions were explicitly identified.

ACKNOWLEDGMENT

This research was supported by NSF Grant CCF.

REFERENCES

[1] S. F. Cotter, B. D. Rao, K. Engan, and K. Kreutz-Delgado, "Sparse solutions to linear inverse problems with multiple measurement vectors," IEEE Trans. Sig. Proc., vol. 53, no. 7, 2005.
[2] D. L. Donoho, "Compressed sensing," IEEE Trans. Inform. Theory, vol. 52, no. 4, 2006.
[3] E. J. Candès, "Compressive sampling," Proceedings of the Int. Congress of Mathematicians, 2006.
[4] I. Gorodnitsky and B. Rao, "Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm," IEEE Trans. Sig. Proc., vol. 45, no. 3, 1997.
[5] I. F. Gorodnitsky, J. S. George, and B. D. Rao, "Neuromagnetic source imaging with FOCUSS: A recursive weighted minimum norm algorithm," J. Electroencephalog. Clinical Neurophysiol., vol. 95, 1995.
[6] B. D. Jeffs, "Sparse inverse solution methods for signal and image processing applications," Proc. ICASSP.
[7] M. Duarte, M. Davenport, D. Takhar, J. Laska, T. Sun, K. Kelly, and R. G. Baraniuk, "Single-pixel imaging via compressive sampling," IEEE Signal Processing Magazine, vol. 25, 2008.
[8] S. D. Cabrera and T. W. Parks, "Extrapolation and spectral estimation with iterative weighted norm modification," IEEE Trans. Acoust., Speech, Signal Process.
[9] Y. Jin and B. D. Rao, "Algorithms for robust linear regression by exploiting the connection to sparse signal recovery," ICASSP, 2010.
[10] W. C. Chu, Speech Coding Algorithms. Wiley-Interscience, 2003.
[11] D. L. Duttweiler, "Proportionate normalized least-mean-squares adaptation in echo cancelers," IEEE Trans. Acoust., Speech, Signal Process., vol. 8, 2000.
[12] S. F. Cotter and B. D. Rao, "Sparse channel estimation via matching pursuit with application to equalization," IEEE Trans. on Communications, vol. 50, 2002.
[13] S. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Trans. Sig. Proc., vol. 41, no. 12, 1993.
[14] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, "Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition," 27th Annual Asilomar Conf. Sig. Sys. Comp., 1993.
[15] R. Tibshirani, "Regression shrinkage and selection via the lasso," J. R. Statist. Soc. B, vol. 58, no. 1, 1996.
[16] S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM Review, vol. 43, no. 1, 2001.
[17] E. J. Candès, M. B. Wakin, and S. P. Boyd, "Enhancing sparsity by reweighted l1 minimization," J. Fourier Anal. Appl., vol. 14, 2008.
[18] R. Chartrand and W. Yin, "Iteratively reweighted algorithms for compressive sensing," ICASSP, 2008.
[19] M. E. Tipping, "Sparse Bayesian learning and the relevance vector machine," JMLR, 2001.
[20] D. Wipf and B. D. Rao, "Sparse Bayesian learning for basis selection," IEEE Trans. Sig. Proc., vol. 52, no. 8, August 2004.
[21] D. Donoho, M. Elad, and V. N. Temlyakov, "Stable recovery of sparse overcomplete representations in the presence of noise," IEEE Trans. Inform. Theory, vol. 52, no. 1, pp. 6–18, 2006.
[22] E. J. Candès and T. Tao, "Decoding by linear programming," IEEE Trans. Inform. Theory, vol. 51, no. 12, 2005.
[23] E. J. Candès, J. K. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Comm. Pure Appl. Math., 2006.
[24] J. A. Tropp, "Greedy is good: Algorithmic results for sparse approximation," IEEE Trans. Info. Theory, vol. 50, no. 10, pp. 2231–2242, 2004.
[25] J. Tropp and A. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Info. Theory, 2007.
[26] D. Donoho, Y. Tsaig, I. Drori, and J. Starck, "Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit," preprint, 2006.
[27] M. Wainwright, "Sharp thresholds for high-dimensional and noisy sparsity recovery using l1-constrained quadratic programming (lasso)," IEEE Trans. Info. Theory, vol. 55, no. 5, 2009.
[28] P. Zhao and B. Yu, "On model selection consistency of lasso," Journal of Machine Learning Research, vol. 7, 2006.
[29] M. Wainwright, "Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting," IEEE Trans. Info. Theory, Dec. 2009.
[30] A. K. Fletcher, S. Rangan, and V. K. Goyal, "Necessary and sufficient conditions for sparsity pattern recovery," IEEE Trans. Inform. Theory, vol. 55, no. 12, Dec. 2009.
[31] W. Wang, M. J. Wainwright, and K. Ramchandran, "Information-theoretic limits on sparse signal recovery: Dense versus sparse measurement," Proc. of ISIT, 2008.
[32] M. Akçakaya and V. Tarokh, "Shannon theoretic limits on noisy compressive sampling," IEEE Trans. Inform. Theory, vol. 56, no. 1, 2010.
[33] Y. Jin, Y.-H. Kim, and B. D. Rao, "Support recovery of sparse signals," arXiv preprint, 2010.
[34] D. Wipf and S. Nagarajan, "A unified Bayesian framework for MEG/EEG source imaging," NeuroImage, 2008.
[35] R. Zdunek and A. Cichocki, "Improved M-FOCUSS algorithm with overlapping blocks for locally smooth sparse signals," IEEE Trans. Signal Proc., 2008.
[36] G. Obozinski, M. J. Wainwright, and M. I. Jordan, "Support union recovery in high-dimensional multivariate regression," preprint, 2010.
[37] G. Tang and A. Nehorai, "Performance analysis for sparse support recovery," IEEE Trans. Information Theory, vol. 56, no. 3, 2010.
[38] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Simultaneous sparse approximation via greedy pursuit," ICASSP, 2005.
[39] Y. C. Eldar and M. Mishali, "Robust recovery of signals from a structured union of subspaces," IEEE Trans. Info. Theory, vol. 55, no. 11, 2009.
[40] D. P. Wipf and B. D. Rao, "An empirical Bayesian strategy for solving the simultaneous sparse approximation problem," IEEE Trans. Signal Processing, vol. 55, no. 7, July 2007.
[41] Z. Zhang and B. D. Rao, "Sparse signal recovery in the presence of correlated multiple measurement vectors," ICASSP, 2010.
[42] J. Chen and X. Huo, "Theoretical results on sparse representations of multiple-measurement vectors," IEEE Trans. Signal Processing, vol. 54, 2006.
[43] Y. C. Eldar and H. Rauhut, "Average case analysis of multichannel sparse recovery using convex relaxation," IEEE Trans. Information Theory, vol. 56, no. 1, 2010.
[44] A. Paulraj, R. Nabar, and D. Gore, Introduction to Space-Time Wireless Communications. Cambridge Univ. Press, 2003.
[45] D. Tse and P. Viswanath, Fundamentals of Wireless Communication. Cambridge University Press, 2005.
[46] Y. Jin and B. D. Rao, "Support recovery in presence of multiple measurement vectors," in preparation, 2010.
More informationSIGNALS with sparse representations can be recovered
IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER 2015 1497 Cramér Rao Bound for Sparse Signals Fitting the Low-Rank Model with Small Number of Parameters Mahdi Shaghaghi, Student Member, IEEE,
More informationThresholds for the Recovery of Sparse Solutions via L1 Minimization
Thresholds for the Recovery of Sparse Solutions via L Minimization David L. Donoho Department of Statistics Stanford University 39 Serra Mall, Sequoia Hall Stanford, CA 9435-465 Email: donoho@stanford.edu
More informationGREEDY SIGNAL RECOVERY REVIEW
GREEDY SIGNAL RECOVERY REVIEW DEANNA NEEDELL, JOEL A. TROPP, ROMAN VERSHYNIN Abstract. The two major approaches to sparse recovery are L 1-minimization and greedy methods. Recently, Needell and Vershynin
More informationTractable Upper Bounds on the Restricted Isometry Constant
Tractable Upper Bounds on the Restricted Isometry Constant Alex d Aspremont, Francis Bach, Laurent El Ghaoui Princeton University, École Normale Supérieure, U.C. Berkeley. Support from NSF, DHS and Google.
More informationORTHOGONAL matching pursuit (OMP) is the canonical
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 9, SEPTEMBER 2010 4395 Analysis of Orthogonal Matching Pursuit Using the Restricted Isometry Property Mark A. Davenport, Member, IEEE, and Michael
More informationExact Signal Recovery from Sparsely Corrupted Measurements through the Pursuit of Justice
Exact Signal Recovery from Sparsely Corrupted Measurements through the Pursuit of Justice Jason N. Laska, Mark A. Davenport, Richard G. Baraniuk Department of Electrical and Computer Engineering Rice University
More informationElaine T. Hale, Wotao Yin, Yin Zhang
, Wotao Yin, Yin Zhang Department of Computational and Applied Mathematics Rice University McMaster University, ICCOPT II-MOPTA 2007 August 13, 2007 1 with Noise 2 3 4 1 with Noise 2 3 4 1 with Noise 2
More informationOn Rank Awareness, Thresholding, and MUSIC for Joint Sparse Recovery
On Rank Awareness, Thresholding, and MUSIC for Joint Sparse Recovery Jeffrey D. Blanchard a,1,, Caleb Leedy a,2, Yimin Wu a,2 a Department of Mathematics and Statistics, Grinnell College, Grinnell, IA
More informationAcommon problem in signal processing is to estimate an
5758 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER 2009 Necessary and Sufficient Conditions for Sparsity Pattern Recovery Alyson K. Fletcher, Member, IEEE, Sundeep Rangan, and Vivek
More informationSPARSE signal representations have gained popularity in recent
6958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 10, OCTOBER 2011 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE Abstract The fundamental principle underlying
More informationOrthogonal Matching Pursuit: A Brownian Motion Analysis
1010 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 60, NO. 3, MARCH 2012 Orthogonal Matching Pursuit: A Brownian Motion Analysis Alyson K. Fletcher, Member, IEEE, and Sundeep Rangan, Member, IEEE Abstract
More informationShannon-Theoretic Limits on Noisy Compressive Sampling Mehmet Akçakaya, Student Member, IEEE, and Vahid Tarokh, Fellow, IEEE
492 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 1, JANUARY 2010 Shannon-Theoretic Limits on Noisy Compressive Sampling Mehmet Akçakaya, Student Member, IEEE, Vahid Tarokh, Fellow, IEEE Abstract
More informationSensing systems limited by constraints: physical size, time, cost, energy
Rebecca Willett Sensing systems limited by constraints: physical size, time, cost, energy Reduce the number of measurements needed for reconstruction Higher accuracy data subject to constraints Original
More informationRandomness-in-Structured Ensembles for Compressed Sensing of Images
Randomness-in-Structured Ensembles for Compressed Sensing of Images Abdolreza Abdolhosseini Moghadam Dep. of Electrical and Computer Engineering Michigan State University Email: abdolhos@msu.edu Hayder
More informationEstimation Error Bounds for Frame Denoising
Estimation Error Bounds for Frame Denoising Alyson K. Fletcher and Kannan Ramchandran {alyson,kannanr}@eecs.berkeley.edu Berkeley Audio-Visual Signal Processing and Communication Systems group Department
More informationA simple test to check the optimality of sparse signal approximations
A simple test to check the optimality of sparse signal approximations Rémi Gribonval, Rosa Maria Figueras I Ventura, Pierre Vergheynst To cite this version: Rémi Gribonval, Rosa Maria Figueras I Ventura,
More informationRecovery Guarantees for Rank Aware Pursuits
BLANCHARD AND DAVIES: RECOVERY GUARANTEES FOR RANK AWARE PURSUITS 1 Recovery Guarantees for Rank Aware Pursuits Jeffrey D. Blanchard and Mike E. Davies Abstract This paper considers sufficient conditions
More informationGradient Descent with Sparsification: An iterative algorithm for sparse recovery with restricted isometry property
: An iterative algorithm for sparse recovery with restricted isometry property Rahul Garg grahul@us.ibm.com Rohit Khandekar rohitk@us.ibm.com IBM T. J. Watson Research Center, 0 Kitchawan Road, Route 34,
More informationRanked Sparse Signal Support Detection
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 60, NO 11, NOVEMBER 2012 5919 Ranked Sparse Signal Support Detection Alyson K Fletcher, Member, IEEE, Sundeep Rangan, Member, IEEE, and Vivek K Goyal, Senior
More informationThe Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1
The Sparsest Solution of Underdetermined Linear System by l q minimization for 0 < q 1 Simon Foucart Department of Mathematics Vanderbilt University Nashville, TN 3784. Ming-Jun Lai Department of Mathematics,
More informationOptimisation Combinatoire et Convexe.
Optimisation Combinatoire et Convexe. Low complexity models, l 1 penalties. A. d Aspremont. M1 ENS. 1/36 Today Sparsity, low complexity models. l 1 -recovery results: three approaches. Extensions: matrix
More information1-Bit Compressive Sensing
1-Bit Compressive Sensing Petros T. Boufounos, Richard G. Baraniuk Rice University, Electrical and Computer Engineering 61 Main St. MS 38, Houston, TX 775 Abstract Compressive sensing is a new signal acquisition
More informationINDUSTRIAL MATHEMATICS INSTITUTE. B.S. Kashin and V.N. Temlyakov. IMI Preprint Series. Department of Mathematics University of South Carolina
INDUSTRIAL MATHEMATICS INSTITUTE 2007:08 A remark on compressed sensing B.S. Kashin and V.N. Temlyakov IMI Preprint Series Department of Mathematics University of South Carolina A remark on compressed
More informationStopping Condition for Greedy Block Sparse Signal Recovery
Stopping Condition for Greedy Block Sparse Signal Recovery Yu Luo, Ronggui Xie, Huarui Yin, and Weidong Wang Department of Electronics Engineering and Information Science, University of Science and Technology
More informationSingle-letter Characterization of Signal Estimation from Linear Measurements
Single-letter Characterization of Signal Estimation from Linear Measurements Dongning Guo Dror Baron Shlomo Shamai The work has been supported by the European Commission in the framework of the FP7 Network
More informationRecovery of Compressible Signals in Unions of Subspaces
1 Recovery of Compressible Signals in Unions of Subspaces Marco F. Duarte, Chinmay Hegde, Volkan Cevher, and Richard G. Baraniuk Department of Electrical and Computer Engineering Rice University Abstract
More informationSparse Signal Recovery: Theory, Applications and Algorithms
Sparse Signal Recovery: Theory, Applications and Algorithms Bhaskar Rao Department of Electrical and Computer Engineering University of California, San Diego Collaborators: I. Gorodonitsky, S. Cotter,
More informationApproximate Message Passing with Built-in Parameter Estimation for Sparse Signal Recovery
Approimate Message Passing with Built-in Parameter Estimation for Sparse Signal Recovery arxiv:1606.00901v1 [cs.it] Jun 016 Shuai Huang, Trac D. Tran Department of Electrical and Computer Engineering Johns
More informationEquivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 19, NO. 12, DECEMBER 2008 2009 Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation Yuanqing Li, Member, IEEE, Andrzej Cichocki,
More informationLarge-Scale L1-Related Minimization in Compressive Sensing and Beyond
Large-Scale L1-Related Minimization in Compressive Sensing and Beyond Yin Zhang Department of Computational and Applied Mathematics Rice University, Houston, Texas, U.S.A. Arizona State University March
More informationRecovery of Sparse Signals Using Multiple Orthogonal Least Squares
Recovery of Sparse Signals Using Multiple Orthogonal east Squares Jian Wang, Ping i Department of Statistics and Biostatistics arxiv:40.505v [stat.me] 9 Oct 04 Department of Computer Science Rutgers University
More informationSignal Recovery from Permuted Observations
EE381V Course Project Signal Recovery from Permuted Observations 1 Problem Shanshan Wu (sw33323) May 8th, 2015 We start with the following problem: let s R n be an unknown n-dimensional real-valued signal,
More informationLecture: Introduction to Compressed Sensing Sparse Recovery Guarantees
Lecture: Introduction to Compressed Sensing Sparse Recovery Guarantees http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html Acknowledgement: this slides is based on Prof. Emmanuel Candes and Prof. Wotao Yin
More informationPre-weighted Matching Pursuit Algorithms for Sparse Recovery
Journal of Information & Computational Science 11:9 (214) 2933 2939 June 1, 214 Available at http://www.joics.com Pre-weighted Matching Pursuit Algorithms for Sparse Recovery Jingfei He, Guiling Sun, Jie
More informationIntroduction to Compressed Sensing
Introduction to Compressed Sensing Alejandro Parada, Gonzalo Arce University of Delaware August 25, 2016 Motivation: Classical Sampling 1 Motivation: Classical Sampling Issues Some applications Radar Spectral
More informationA NEW FRAMEWORK FOR DESIGNING INCOHERENT SPARSIFYING DICTIONARIES
A NEW FRAMEWORK FOR DESIGNING INCOERENT SPARSIFYING DICTIONARIES Gang Li, Zhihui Zhu, 2 uang Bai, 3 and Aihua Yu 3 School of Automation & EE, Zhejiang Univ. of Sci. & Tech., angzhou, Zhejiang, P.R. China
More informationA Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases
2558 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 48, NO 9, SEPTEMBER 2002 A Generalized Uncertainty Principle Sparse Representation in Pairs of Bases Michael Elad Alfred M Bruckstein Abstract An elementary
More informationA New Estimate of Restricted Isometry Constants for Sparse Solutions
A New Estimate of Restricted Isometry Constants for Sparse Solutions Ming-Jun Lai and Louis Y. Liu January 12, 211 Abstract We show that as long as the restricted isometry constant δ 2k < 1/2, there exist
More informationStability and Robustness of Weak Orthogonal Matching Pursuits
Stability and Robustness of Weak Orthogonal Matching Pursuits Simon Foucart, Drexel University Abstract A recent result establishing, under restricted isometry conditions, the success of sparse recovery
More informationStable Signal Recovery from Incomplete and Inaccurate Measurements
Stable Signal Recovery from Incomplete and Inaccurate Measurements EMMANUEL J. CANDÈS California Institute of Technology JUSTIN K. ROMBERG California Institute of Technology AND TERENCE TAO University
More informationAnalysis of Denoising by Sparse Approximation with Random Frame Asymptotics
Analysis of Denoising by Sparse Approximation with Random Frame Asymptotics Alyson K Fletcher Univ of California, Berkeley alyson@eecsberkeleyedu Sundeep Rangan Flarion Technologies srangan@flarioncom
More informationA Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization of Analog Transmission
Li and Kang: A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing 1 A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization
More informationNumerical Methods. Rafał Zdunek Underdetermined problems (2h.) Applications) (FOCUSS, M-FOCUSS,
Numerical Methods Rafał Zdunek Underdetermined problems (h.) (FOCUSS, M-FOCUSS, M Applications) Introduction Solutions to underdetermined linear systems, Morphological constraints, FOCUSS algorithm, M-FOCUSS
More informationCompressed Sensing and Related Learning Problems
Compressed Sensing and Related Learning Problems Yingzhen Li Dept. of Mathematics, Sun Yat-sen University Advisor: Prof. Haizhang Zhang Advisor: Prof. Haizhang Zhang 1 / Overview Overview Background Compressed
More informationCompressed sensing. Or: the equation Ax = b, revisited. Terence Tao. Mahler Lecture Series. University of California, Los Angeles
Or: the equation Ax = b, revisited University of California, Los Angeles Mahler Lecture Series Acquiring signals Many types of real-world signals (e.g. sound, images, video) can be viewed as an n-dimensional
More informationFast Angular Synchronization for Phase Retrieval via Incomplete Information
Fast Angular Synchronization for Phase Retrieval via Incomplete Information Aditya Viswanathan a and Mark Iwen b a Department of Mathematics, Michigan State University; b Department of Mathematics & Department
More informationarxiv: v1 [cs.it] 21 Feb 2013
q-ary Compressive Sensing arxiv:30.568v [cs.it] Feb 03 Youssef Mroueh,, Lorenzo Rosasco, CBCL, CSAIL, Massachusetts Institute of Technology LCSL, Istituto Italiano di Tecnologia and IIT@MIT lab, Istituto
More informationError Correction via Linear Programming
Error Correction via Linear Programming Emmanuel Candes and Terence Tao Applied and Computational Mathematics, Caltech, Pasadena, CA 91125 Department of Mathematics, University of California, Los Angeles,
More informationMATCHING PURSUIT WITH STOCHASTIC SELECTION
2th European Signal Processing Conference (EUSIPCO 22) Bucharest, Romania, August 27-3, 22 MATCHING PURSUIT WITH STOCHASTIC SELECTION Thomas Peel, Valentin Emiya, Liva Ralaivola Aix-Marseille Université
More informationSparse Solutions of an Undetermined Linear System
1 Sparse Solutions of an Undetermined Linear System Maddullah Almerdasy New York University Tandon School of Engineering arxiv:1702.07096v1 [math.oc] 23 Feb 2017 Abstract This work proposes a research
More informationEUSIPCO
EUSIPCO 013 1569746769 SUBSET PURSUIT FOR ANALYSIS DICTIONARY LEARNING Ye Zhang 1,, Haolong Wang 1, Tenglong Yu 1, Wenwu Wang 1 Department of Electronic and Information Engineering, Nanchang University,
More informationCOMPRESSED SENSING IN PYTHON
COMPRESSED SENSING IN PYTHON Sercan Yıldız syildiz@samsi.info February 27, 2017 OUTLINE A BRIEF INTRODUCTION TO COMPRESSED SENSING A BRIEF INTRODUCTION TO CVXOPT EXAMPLES A Brief Introduction to Compressed
More informationCompressive Sensing of Temporally Correlated Sources Using Isotropic Multivariate Stable Laws
Compressive Sensing of Temporally Correlated Sources Using Isotropic Multivariate Stable Laws George Tzagkarakis EONOS Investment Technologies Paris, France and Institute of Computer Science Foundation
More informationReconstruction from Anisotropic Random Measurements
Reconstruction from Anisotropic Random Measurements Mark Rudelson and Shuheng Zhou The University of Michigan, Ann Arbor Coding, Complexity, and Sparsity Workshop, 013 Ann Arbor, Michigan August 7, 013
More informationOn Sparsity, Redundancy and Quality of Frame Representations
On Sparsity, Redundancy and Quality of Frame Representations Mehmet Açaaya Division of Engineering and Applied Sciences Harvard University Cambridge, MA Email: acaaya@fasharvardedu Vahid Taroh Division
More informationSigma Delta Quantization for Compressed Sensing
Sigma Delta Quantization for Compressed Sensing C. Sinan Güntürk, 1 Mark Lammers, 2 Alex Powell, 3 Rayan Saab, 4 Özgür Yılmaz 4 1 Courant Institute of Mathematical Sciences, New York University, NY, USA.
More informationInverse problems and sparse models (1/2) Rémi Gribonval INRIA Rennes - Bretagne Atlantique, France
Inverse problems and sparse models (1/2) Rémi Gribonval INRIA Rennes - Bretagne Atlantique, France remi.gribonval@inria.fr Structure of the tutorial Session 1: Introduction to inverse problems & sparse
More informationCompressed Sensing and Robust Recovery of Low Rank Matrices
Compressed Sensing and Robust Recovery of Low Rank Matrices M. Fazel, E. Candès, B. Recht, P. Parrilo Electrical Engineering, University of Washington Applied and Computational Mathematics Dept., Caltech
More informationarxiv: v1 [cs.it] 14 Apr 2009
Department of Computer Science, University of British Columbia Technical Report TR-29-7, April 29 Joint-sparse recovery from multiple measurements Ewout van den Berg Michael P. Friedlander arxiv:94.251v1
More informationDetecting Sparse Structures in Data in Sub-Linear Time: A group testing approach
Detecting Sparse Structures in Data in Sub-Linear Time: A group testing approach Boaz Nadler The Weizmann Institute of Science Israel Joint works with Inbal Horev, Ronen Basri, Meirav Galun and Ery Arias-Castro
More informationSequential Compressed Sensing
Sequential Compressed Sensing Dmitry M. Malioutov, Sujay R. Sanghavi, and Alan S. Willsky, Fellow, IEEE Abstract Compressed sensing allows perfect recovery of sparse signals (or signals sparse in some
More informationDesign of Projection Matrix for Compressive Sensing by Nonsmooth Optimization
Design of Proection Matrix for Compressive Sensing by Nonsmooth Optimization W.-S. Lu T. Hinamoto Dept. of Electrical & Computer Engineering Graduate School of Engineering University of Victoria Hiroshima
More informationIEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER
IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 9, SEPTEMBER 2015 1239 Preconditioning for Underdetermined Linear Systems with Sparse Solutions Evaggelia Tsiligianni, StudentMember,IEEE, Lisimachos P. Kondi,
More informationRobust Support Recovery Using Sparse Compressive Sensing Matrices
Robust Support Recovery Using Sparse Compressive Sensing Matrices Jarvis Haupt and Richard Baraniuk University of Minnesota, Minneapolis MN Rice University, Houston TX Abstract This paper considers the
More informationThe Pros and Cons of Compressive Sensing
The Pros and Cons of Compressive Sensing Mark A. Davenport Stanford University Department of Statistics Compressive Sensing Replace samples with general linear measurements measurements sampled signal
More informationAsymptotic Achievability of the Cramér Rao Bound For Noisy Compressive Sampling
Asymptotic Achievability of the Cramér Rao Bound For Noisy Compressive Sampling The Harvard community h made this article openly available. Plee share how this access benefits you. Your story matters Citation
More informationRSP-Based Analysis for Sparsest and Least l 1 -Norm Solutions to Underdetermined Linear Systems
1 RSP-Based Analysis for Sparsest and Least l 1 -Norm Solutions to Underdetermined Linear Systems Yun-Bin Zhao IEEE member Abstract Recently, the worse-case analysis, probabilistic analysis and empirical
More informationSignal Recovery, Uncertainty Relations, and Minkowski Dimension
Signal Recovery, Uncertainty Relations, and Minkowski Dimension Helmut Bőlcskei ETH Zurich December 2013 Joint work with C. Aubel, P. Kuppinger, G. Pope, E. Riegler, D. Stotz, and C. Studer Aim of this
More informationCooperative Interference Alignment for the Multiple Access Channel
1 Cooperative Interference Alignment for the Multiple Access Channel Theodoros Tsiligkaridis, Member, IEEE Abstract Interference alignment (IA) has emerged as a promising technique for the interference
More informationSecure Degrees of Freedom of the MIMO Multiple Access Wiretap Channel
Secure Degrees of Freedom of the MIMO Multiple Access Wiretap Channel Pritam Mukherjee Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 074 pritamm@umd.edu
More informationIEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 6, JUNE
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 6, JUNE 2010 2967 Information-Theoretic Limits on Sparse Signal Recovery: Dense versus Sparse Measurement Matrices Wei Wang, Member, IEEE, Martin J.
More informationRecovery of Low Rank and Jointly Sparse. Matrices with Two Sampling Matrices
Recovery of Low Rank and Jointly Sparse 1 Matrices with Two Sampling Matrices Sampurna Biswas, Hema K. Achanta, Mathews Jacob, Soura Dasgupta, and Raghuraman Mudumbai Abstract We provide a two-step approach
More informationLIMITATION OF LEARNING RANKINGS FROM PARTIAL INFORMATION. By Srikanth Jagabathula Devavrat Shah
00 AIM Workshop on Ranking LIMITATION OF LEARNING RANKINGS FROM PARTIAL INFORMATION By Srikanth Jagabathula Devavrat Shah Interest is in recovering distribution over the space of permutations over n elements
More informationOn the Relationship Between Compressive Sensing and Random Sensor Arrays
On the Relationship Between Compressive Sensing and Random Sensor Arrays Lawrence Carin Department of Electrical & Computer Engineering Duke University Durham, NC lcarin@ece.duke.edu Abstract Random sensor
More informationFast Hard Thresholding with Nesterov s Gradient Method
Fast Hard Thresholding with Nesterov s Gradient Method Volkan Cevher Idiap Research Institute Ecole Polytechnique Federale de ausanne volkan.cevher@epfl.ch Sina Jafarpour Department of Computer Science
More informationUniqueness Conditions for A Class of l 0 -Minimization Problems
Uniqueness Conditions for A Class of l 0 -Minimization Problems Chunlei Xu and Yun-Bin Zhao October, 03, Revised January 04 Abstract. We consider a class of l 0 -minimization problems, which is to search
More information5742 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER /$ IEEE
5742 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER 2009 Uncertainty Relations for Shift-Invariant Analog Signals Yonina C. Eldar, Senior Member, IEEE Abstract The past several years
More informationThe Sparsity Gap. Joel A. Tropp. Computing & Mathematical Sciences California Institute of Technology
The Sparsity Gap Joel A. Tropp Computing & Mathematical Sciences California Institute of Technology jtropp@acm.caltech.edu Research supported in part by ONR 1 Introduction The Sparsity Gap (Casazza Birthday
More informationDoes Compressed Sensing have applications in Robust Statistics?
Does Compressed Sensing have applications in Robust Statistics? Salvador Flores December 1, 2014 Abstract The connections between robust linear regression and sparse reconstruction are brought to light.
More information