An Overview of Compressed Sensing


Nathan Schneider

November 18, 2009

Abstract

In a large number of applications, the sampling system is designed to sample at a rate at least equal to the frequency bandwidth of the signal class under study. Compressed Sensing (CS), by contrast, uses knowledge of the signal's sparsity, rather than its bandwidth, to determine a sampling rate. For naturally sparse signals, this can allow a sampling rate significantly below the ubiquitous Nyquist rate. This savings in measurement cost is not free, however: there is a corresponding increase in the computational cost of reconstruction. Perfect reconstruction cannot be realized through a linear combination of a reconstruction kernel (such as a sinc function) weighted by the sample values; instead, it is realized through the solution of a convex optimization problem. For this exact reconstruction to be possible, the method of data collection is significant. The collection waveforms, grouped together as rows of the sensing matrix, must satisfy the Restricted Isometry Property (RIP). The RIP is a restriction on the coherence of columns within the sensing matrix and is the key ingredient in essentially every proof within the CS domain. Interestingly, many types of random matrices have been found to satisfy the RIP with high probability. This facet has drawn comparisons to coding theory and the channel coding theorem; in fact, one leader in the field suggests that the concepts of CS may be used as a universal encoding strategy [11, 12]. As in coding theory, the search for deterministic encoding schemes (sensing matrices) has taken hold, because real-time storage and retrieval requirements make random sensing matrices impractical for a large number of applications. Key results in this area have been achieved through a weakening of the RIP [6]: the randomness is removed from the sensing matrix and placed instead on the signal model. Perfect reconstruction of all signals of a given sparsity is no longer guaranteed; however, a signal satisfying the sparsity requirement is reconstructable with high probability. As an extension of CS, research has begun to focus on other a priori knowledge that may be readily available, primarily assumed structure in where the signal coefficients are large (in the basis in which the signal is sparse).

1 Motivations

Compressed sensing represents a paradigm shift that occurred near the middle of the current decade. In the midst of the digital revolution, researchers began to ask a fundamentally different question from that of the Shannon sampling theorem [21]. This question centered on the sparsity of the signal of interest, not necessarily its bandwidth. For example, what if the frequency-domain support of a signal class is not well contained within a connected set but is nevertheless sparse (such as multiple tones being present due to different transmitters)? The Shannon sampling theorem implies that the signal must be sampled at a rate of at least the total width of its support in the frequency domain. The theory of CS, however, implies that significantly lower sampling rates may be used when the frequency support of the signal class is sparse. Reduced sampling rates are extremely useful since, in many applications, sampling at the Nyquist rate may be prohibitively expensive or infeasible with current hardware. Specifically, one of the key constraints on a system designer is the Analog-to-Digital Converter (ADC). Currently, state-of-the-art ADCs sample at approximately 1 GHz with eight to ten bits of resolution [13], which implies that a wideband solution requires a large number of parallel downconversion / sampling chains. This requirement can be prohibitive in applications where space and power constraints are critical.

2 Origins and Key Concepts

Compressed sensing appears to have formed through collaboration among several California-based professors: Emmanuel Candes, Justin Romberg (Candes' student at the time), Terence Tao (Fields Medal recipient, 2006), and David Donoho. The first publications appeared approximately four to five years ago in the IEEE Transactions on Information Theory [9, 15]. The key questions to which these researchers began to provide answers are as follows.

1. If compression is performed after measurement, why take so many measurements?

2. In applications where the Shannon-Nyquist sampling rate cannot be achieved, under what conditions is reconstruction possible?

A simple illustrative example of the first question is found in image formation. One is hard-pressed to find a handheld digital camera without megapixel resolution, yet the images formed by these cameras are typically compressed to kilobytes under the various JPEG standards. Richard Baraniuk applied the concepts of CS to this problem and showed that it is indeed possible to form images from fewer measurements than there are pixels. In [2], he describes a single-pixel camera in which 1,600 measurements are taken of a soccer ball; afterwards, the 64x64 (4096-pixel) image is reconstructed from these measurements.

2.1 Sparsity Assumption

CS begins with the assumption that a signal has a sparse representation in some known orthonormal basis. This requirement is not as restrictive as it may seem, since such knowledge is available (in some cases) from past research in the area known as Computational Harmonic Analysis [12]. For example, it is well known that images may be represented sparsely in a discrete cosine basis as well as in a wavelet basis, a fact exploited in the JPEG standards [4]. As another example, a linear combination of a small number of sinusoids affords a sparse representation in the Fourier basis (representable by the frequency, phase, and magnitude of each tone). More generally, the Fourier basis will likely provide a sparse representation for smoother signals, whereas a wavelet basis will likely provide a sparse representation for piecewise smooth signals with finite discontinuities.
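To make the sparsity assumption concrete, the following minimal Python sketch (the signal length, sparsity level, and choice of a discrete cosine basis are arbitrary illustrative choices, not parameters from the text) synthesizes a signal that is exactly K-sparse in the DCT domain: dense in time, yet fully described by K transform coefficients.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
N, K = 256, 8

# Choose K of the N DCT coefficients at random and give them random values.
coeffs = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
coeffs[support] = rng.standard_normal(K)

# Synthesize the time-domain signal f from its sparse representation x
# (here the orthonormal inverse DCT plays the role of the synthesis basis).
f = idct(coeffs, norm='ortho')

# Dense in one domain, sparse in the other.
print("time samples above 1e-12: ", np.count_nonzero(np.abs(f) > 1e-12))
print("nonzero DCT coefficients: ", np.count_nonzero(coeffs))
```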

Let the basis of sparse representation be denoted by the N x N matrix Ψ, where each column is an N x 1 basis vector. In a very general fashion, it is assumed that the observed phenomenon f may be written as a linear combination of the columns of Ψ^{-1}:

f = \Psi^{-1} x

In order to solve the inverse problem and determine x, a set of linear measurements {y_m}, m = 1, ..., M, is taken under the measurement waveforms {θ_m}, which form the rows of the matrix Θ. In standard inner-product notation,

y_m = \langle \theta_m, f \rangle = \langle \theta_m, \Psi^{-1} x \rangle

It is important to note that the development thus far assumes that all signals are already digitized; for example, Ψ is assumed to be a matrix rather than a set of continuous-time waveforms. Extensions to the case where this assumption is not made appear in [16], where sampling of analog signals is considered. However, in keeping with the original literature on the topic, the same assumption of digitization will be made throughout this overview. Thus, the measurements may be grouped together in matrix form, with the M x N matrix Φ defined as Φ = ΘΨ^{-1}:

y = \Theta \Psi^{-1} x = \Phi x

Thus, without loss of generality, the CS formulation provides conditions on the matrix Φ that allow exact reconstruction of x given that x has no more than K non-zero entries (i.e., x is K-sparse). This is without loss of generality because of the formulation just described: one may solve for the actual sensing matrix Θ = ΦΨ given a design of the matrix Φ and knowledge of the sparsity basis Ψ.

2.2 Significance

Prior to stating the key theorems of CS, it is appropriate to explain the significance of the results. Perhaps the greatest achievement of compressed sensing is that it gives conditions under which an under-determined system of equations may be solved exactly through the solution of a convex optimization problem. This quite general, widely applicable result is not intuitive and deserves some comparison to the more common overdetermined case. Consider the typical linear regression problem: find the vector x̂ that provides the minimum error given a known M x N measurement matrix Φ and observed measurements y:

\hat{x} = \arg\min_x \|\Phi x - y\|_p

In the overdetermined case (M > N), with p = 2, the solution to this problem may be found analytically and is well known as the least squares solution:

\hat{x} = (\Phi^T \Phi)^{-1} \Phi^T y

Now consider the underdetermined case (M < N) that is of interest to the CS community. The measurement matrix Φ is called wide, and Φ^T Φ is N x N but of rank at most M (and thus not invertible). Obviously, the least squares solution no longer holds. In fact, there are now infinitely many x such that Φx = y. This implies that the desired x must be selected according to some a priori criterion given the problem domain. Compressed sensing provides conditions under which the unique, sparse x may be recovered from the measurements y, and this recovery is performed by solving a convex optimization problem.
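The need for an a priori criterion is easy to see numerically. The sketch below (a hypothetical example with arbitrarily chosen dimensions) computes one particular solution of the underdetermined system, the minimum-energy choice x = Φ^T (Φ Φ^T)^{-1} y; it reproduces the measurements exactly, yet is dense rather than sparse, so an l_2 criterion selects the wrong vector.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 40, 128, 5

# K-sparse ground truth and a wide (underdetermined) measurement matrix.
x_true = np.zeros(N)
x_true[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x_true

# Minimum-energy solution among the infinitely many satisfying Phi x = y.
x_l2 = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)

print("residual ||Phi x - y||_2:", np.linalg.norm(Phi @ x_l2 - y))   # ~0
print("nonzeros in x_true:      ", np.count_nonzero(x_true))         # K
print("entries of x_l2 > 1e-6:  ", np.sum(np.abs(x_l2) > 1e-6))      # ~N
```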

2.3 Method of Reconstruction

Ideally, one desires the following solution when x is known to be K-sparse:

\hat{x} = \arg\min_x \|x\|_0 \quad \text{subject to} \quad \Phi x = y \qquad (1)

In words, the minimization is over the number of non-zero coefficients of x. Unfortunately, solving this problem as stated is known to be numerically unstable and requires checking all possible configurations of x with K non-zero entries [2]. In any problem where N is large, this quickly becomes infeasible; for example, if N = 100 and K = 10, the number of support configurations that must be checked is in the trillions. In order to obtain an optimization problem that is solvable in polynomial time, a convex relaxation of equation 1 is given by the following problem statement, where p ∈ [1, ∞):

\hat{x} = \arg\min_x \|x\|_p \quad \text{subject to} \quad \Phi x = y

where

\|x\|_p = \left( \sum_{i=1}^{N} |x_i|^p \right)^{1/p}

Compressed sensing relies on the version of this convex relaxation with p = 1, which has been proven to yield the sparse, correct solution on noise-free data. This guarantee is subject to requirements on the measurement matrix Φ. The method of reconstruction can now be stated:

\hat{x} = \arg\min_x \|x\|_1 \quad \text{subject to} \quad \Phi x = y \qquad (2)

In the case of noisy data, the measurement model includes an additive noise term: y = Φx + η. In this setting, the choice p = 1 is typically associated with robust statistics, owing to the l_1 norm's ability to prevent large outliers from having a significant negative impact on the solution [22]. Hence, the seemingly innocuous convex relaxation not only yields a problem that is solvable in polynomial time, but also provides nearly optimal solutions in many cases; this will be quantified in the discussion below. In the presence of noise, the problem statement changes slightly in the constraint, where the value of ε is chosen in accordance with the noise power:

\hat{x} = \arg\min_x \|x\|_1 \quad \text{subject to} \quad \|\Phi x - y\|_2 \le \epsilon \qquad (3)
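Equation 2 can be solved with any general-purpose linear programming solver by splitting x into non-negative parts, x = u - v with u, v ≥ 0, so that ||x||_1 = Σ(u_i + v_i). The sketch below (a minimal illustration using scipy's linprog, which is this sketch's choice of solver rather than one prescribed by the text) recovers the sparse vector from the previous sketch exactly.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
M, N, K = 40, 128, 5
x_true = np.zeros(N)
x_true[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x_true

# Equation (2) as a linear program: minimize sum(u + v)
# subject to Phi (u - v) = y and u, v >= 0.
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]

print("recovery error ||x_hat - x_true||_2:", np.linalg.norm(x_hat - x_true))
```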

3 Fundamental Theorems and Bounds

To ensure recovery of the true vector x, there are two requirements. The first is that x be K-sparse. The second is on the columns of the sensing matrix Φ and is best expressed in terms of the isometry constants δ_S, S ∈ {1, 2, ..., N}, of Φ. The isometry constant δ_S is defined as the smallest constant such that the following inequalities hold for all S-sparse vectors x [7, 13]:

(1 - \delta_S) \|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta_S) \|x\|_2^2 \qquad (4)

Any matrix Φ that satisfies equation 4 is said to satisfy the Restricted Isometry Property (RIP) with constant δ_S. More succinctly, the matrix is said to be 2S-RIP if the constant δ_2S is small enough to permit reconstruction of an S-sparse signal. The next major section addresses which matrices satisfy this requirement with high probability.

For a K-sparse vector x, the hypotheses of the recovery theorems below are stated in terms of δ_2K: a largest value of δ_2K is given such that recovery is possible to within a given error. Thus, the restriction on Φ is that one may select any 2K of its columns, and these columns must satisfy equation 4 for any x ∈ R^{2K}. For a different viewpoint on this isometry requirement, it may be restated in terms of distance preservation between K-sparse vectors x_1 and x_2 under the linear transformation Φ:

(1 - \delta_{2K}) \|x_1 - x_2\|_2^2 \le \|\Phi x_1 - \Phi x_2\|_2^2 \le (1 + \delta_{2K}) \|x_1 - x_2\|_2^2

Therefore, good measurement matrices preserve distances between any two K-sparse vectors and satisfy equation 4 for large values of S.

3.1 Reconstruction Theorems

Both claims below compare the quality of the solution to that of an oracle which knows the K largest-magnitude entries of x. Let x_K denote the K-sparse approximation of x formed by keeping the K largest-magnitude components of x. The error of the solution found via the convex relaxation is given relative to the error of the approximation x_K. Since there is no assumption or requirement that x be K-sparse, these results are quite general. The following claims are stated and proved in [7, 10, 13]. In both cases it is assumed that δ_2K < √2 - 1 and that the method of reconstruction is given by equation 2 or 3.

1. Noiseless Recovery. If δ_2K < √2 - 1, then the solution x̂ to equation 2 satisfies both

\|\hat{x} - x\|_1 \le C_0 \|x - x_K\|_1 \qquad (5)

and

\|\hat{x} - x\|_2 \le C_0 \frac{\|x - x_K\|_1}{\sqrt{K}}

Note that if x is indeed K-sparse, then recovery is exact. An upper bound for C_0 is given below in terms of δ_2K.

2. Robust Recovery in Noise. If δ_2K < √2 - 1 and the noise is bounded, ||η||_2 ≤ ε, then the solution x̂ to equation 3 satisfies

\|\hat{x} - x\|_2 \le C_0 \frac{\|x - x_K\|_1}{\sqrt{K}} + C_1 \epsilon \qquad (6)

where C_0 is the same as above and an upper bound for C_1 is also given below in terms of δ_2K.

The above statements would be of little value if the constants (C_0, C_1) were large. As it turns out, the upper bounds on these constants exhibit surprisingly small amplification when δ_2K is small; however, as δ_2K approaches √2 - 1, the constants may grow in an unbounded fashion. They are given by the following expressions [7] and are shown in figure 1. Let

\alpha = \frac{2\sqrt{1 + \delta_{2K}}}{1 - \delta_{2K}}, \qquad \rho = \frac{\sqrt{2}\,\delta_{2K}}{1 - \delta_{2K}}

Then

C_0 = \frac{2(1 + \rho)}{1 - \rho}, \qquad C_1 = \frac{2\alpha}{1 - \rho} \qquad (7)

[Figure 1: Variation of the bounding constants (C_0, C_1) with the isometry constant δ_2K]
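The behavior plotted in figure 1 is easy to reproduce directly from equation 7, as in the minimal tabulation below. At δ_2K = 0.2 this gives C_0 ≈ 4.2 and C_1 ≈ 8.5, modest amplification, while both constants blow up as δ_2K approaches √2 - 1 ≈ 0.414.

```python
import numpy as np

def bounding_constants(delta):
    """C0 and C1 from equation (7) as functions of delta_2K."""
    alpha = 2.0 * np.sqrt(1.0 + delta) / (1.0 - delta)
    rho = np.sqrt(2.0) * delta / (1.0 - delta)
    return 2.0 * (1.0 + rho) / (1.0 - rho), 2.0 * alpha / (1.0 - rho)

for d in (0.05, 0.1, 0.2, 0.3, 0.4):
    c0, c1 = bounding_constants(d)
    print(f"delta_2K = {d:.2f}:  C0 = {c0:8.2f},  C1 = {c1:8.2f}")
```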

The results stated in equations 5 and 6 deserve some comments. First, the recovery of a K-sparse signal from an underdetermined system of equations may be performed exactly by solving a convex optimization problem, and this holds for any K-sparse signal given that the sensing matrix Φ is 2K-RIP. One therefore obtains both the locations of the non-zero elements of x (the support) and the values of x at those locations. Significantly, recovery of the support is robust to noise as well: in the noisy version, if a non-zero component of the K-sparse vector x is larger than C_1 ε, then its location (index) in x will be correctly recovered. This is important because, in some applications, the support of x may be more important than the values of x; a radar, for example, desires to know the locations of its targets. Second, the effect of noise on the recovery is directly proportional to the noise power. This is intuitively satisfying as well as practically useful; for example, a system designer may predict the performance of this algorithm in an online, standalone system when the operating SNR is approximately known. Third, the actual sparsity of x need not be known, since the theorems make no assumption of sparsity. In other words, no knowledge of the rate of decay of the signal coefficients is required, because the convex programs solved in equations 2-3 adapt naturally to it [12]. Lastly, none of these results are useful if one cannot find a sensing matrix Φ that satisfies the 2K-RIP. This is the topic of the next section and will mostly be deferred until then; however, a sneak preview is given here.

Obviously, one desires to take as few measurements as possible while ensuring a given reconstruction capability. The number of measurements is the number of rows of Φ, namely M. If N, the number of columns of Φ, is fixed by the length of x, then taking more measurements makes it more likely that the resulting set of N columns is 2K-RIP. In other words, the number of matrices (or, more appropriately, types of matrices) that are 2K-RIP increases with the number of measurements. Thus, the problem becomes that of finding a lower bound on M at which one may generate a matrix that satisfies the 2K-RIP with high probability. For certain random matrices, it is shown that this bound is given by equation 8 for some constant C that varies with the type of random matrix [3, 12]. Significantly, results from n-widths and approximation theory are cited to show that, to within a constant, one cannot do better given this method of reconstruction; in this sense, these matrices are optimal.

M \ge C \cdot 2K \log\left(\frac{N}{2K}\right) \qquad (8)

In many practical systems, it is desired that Φ not be random. For example, in some of the original works the authors were interested in applying CS concepts to Magnetic Resonance Imaging (MRI) [8, 9, 10]. In this case, Φ is a partial Discrete Fourier Transform (DFT) matrix in which M rows are selected from the N x N complete DFT matrix. For this Φ to be 2K-RIP with high probability, equation 9 has been proved; however, the bound is suspected to hold with log(N) in place of (log(N))^4 as well.

M \ge C \cdot 2K (\log N)^4 \qquad (9)
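For intuition about the scaling in equation 8, the implied measurement counts can be tabulated for a few problem sizes. The constant C is not specified by the theory, so C = 1 in the sketch below is a placeholder rather than a value from the literature; only the growth with K and N is meaningful.

```python
import numpy as np

def measurement_bound(N, K, C=1.0):
    """Right-hand side of equation (8): C * 2K * log(N / (2K))."""
    return int(np.ceil(C * 2 * K * np.log(N / (2 * K))))

# When K << N, far fewer than N measurements are needed.
for N, K in [(1024, 10), (1024, 50), (65536, 100)]:
    print(f"N = {N:6d}, K = {K:4d}  ->  M >= {measurement_bound(N, K):5d}")
```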

3.2 Information Theoretic Bounds on the Measurement Requirement

There have been multiple publications on information-theoretic bounds related to compressed sensing and sensing networks [5, 18, 23]. This section is concerned primarily with the approach of [20], where the authors compare the measurement system to an Additive White Gaussian Noise (AWGN) channel in order to find a lower bound on the number of measurements required for perfect reconstruction. In other words, given the rate-distortion function of the source (R), the distortion level achieved by a given decoder (E), and the SNR of the measurement process, how many measurements are required to perfectly reconstruct a K-sparse x? It is shown that reconstruction is possible when equation 10 is satisfied, where the result is asymptotic in the sense that N goes to infinity. The key item to note is that the number of measurements is inversely proportional to the capacity of a Gaussian channel.

\frac{M}{N} \ge \frac{2 R(E)}{\log(1 + \mathrm{SNR})} \qquad (10)

In the same sense as the Channel Coding Theorem, this proof methodology does not provide a method of reconstruction or a practical encoding / decoding scheme (in addition to being asymptotic). For this reason, the authors lean on previous coding-theory results by viewing the measurement matrix Φ as an encoding matrix, and they suggest utilizing Low Density Parity Check (LDPC) codes as an encoding / sensing scheme.

4 Restricted Isometry Property

The definition of the Restricted Isometry Property (RIP) was given in equation 4. The first subsection below is concerned with how to generate sensing matrices that satisfy it with high probability. This facet of the theory is important because verifying that a matrix satisfies the 2K-RIP is NP-hard (and thus impractical in any non-trivial problem). Specifically, random matrices are considered because they are known to be optimal in some fashion (recall equation 8 and the discussion surrounding it). The second subsection is concerned with a weakening of the RIP that allows one to reconstruct most K-sparse signals (instead of all K-sparse signals). This alteration treats the measurement matrix as fixed and allows the signal model to vary within a probabilistic framework.

4.1 Random Matrices

Above, it was stated that certain types of random matrices satisfy the 2K-RIP with high probability. This facet of CS allows one to draw parallels with the channel coding theorem: in this theme, the sensing matrix Φ is viewed as a random encoding scheme. One key difference between the two constructs, however, is that CS is a non-asymptotic result. In other words, the length N of x does not need to tend to infinity in order to prove the results; rather, the goal is to find methods that generate valid sensing matrices for a given length N. This leads to the random matrix construction, which is the only known method to approach the smallest number of measurements required for l_1 reconstruction [3].
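Because certifying the RIP exactly is combinatorial, in practice one can only estimate how far a given matrix is from an isometry. The sketch below (an illustration, not an algorithm from the cited works) samples random size-S column subsets and computes the exact extreme singular values of each submatrix; since the true δ_S is a maximum over all C(N, S) subsets, the sampled quantity is only a lower bound on it.

```python
import numpy as np

def sampled_isometry_constant(Phi, S, trials=500, rng=None):
    """Monte Carlo lower bound on delta_S: for each sampled support,
    the extreme singular values give the exact worst-case deviation of
    ||Phi x||_2^2 from ||x||_2^2 over vectors on that support."""
    rng = rng or np.random.default_rng()
    N = Phi.shape[1]
    worst = 0.0
    for _ in range(trials):
        cols = rng.choice(N, size=S, replace=False)
        sv = np.linalg.svd(Phi[:, cols], compute_uv=False)
        worst = max(worst, sv[0]**2 - 1.0, 1.0 - sv[-1]**2)
    return worst

rng = np.random.default_rng(2)
M, N, S = 128, 512, 10
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # entries ~ N(0, 1/M)
print("sampled delta_S >=", sampled_isometry_constant(Phi, S, rng=rng))
```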

A random matrix is formed by drawing every entry independently from a given probability distribution. Some example distributions that generate 2K-RIP matrices with high probability (when M satisfies equation 8) are listed below [3].

- Each entry φ_ij is drawn independently from a zero-mean Normal distribution with variance 1/M:

\phi_{ij} \sim \mathcal{N}\left(0, \frac{1}{M}\right)

- Each entry is an independent realization of a symmetric Bernoulli random variable:

\phi_{ij} \in \left\{ -\frac{1}{\sqrt{M}}, +\frac{1}{\sqrt{M}} \right\}, \quad \text{each with probability } \frac{1}{2}

- Each entry is an independent realization of a random variable that produces sparse matrices (useful for database applications):

\Pr[\phi_{ij} = +\sqrt{3/M}] = \frac{1}{6}, \quad \Pr[\phi_{ij} = -\sqrt{3/M}] = \frac{1}{6}, \quad \Pr[\phi_{ij} = 0] = \frac{2}{3}

Unfortunately, there is a drawback to utilizing a random matrix in practice: one must store all of its entries, which can be impractical in large applications with limited on-board storage.
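Each of these ensembles is one line of NumPy. The sketch below (with illustrative dimensions) draws all three and checks that ||Φx||_2 ≈ ||x||_2 for a unit-norm test vector, which is the behavior the RIP demands uniformly over all sparse vectors.

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 128, 512

# The three ensembles above, scaled so that E||Phi x||_2^2 = ||x||_2^2.
gaussian  = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
bernoulli = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)
s = np.sqrt(3.0 / M)
ternary   = rng.choice([+s, -s, 0.0], size=(M, N), p=[1/6, 1/6, 2/3])

x = rng.standard_normal(N)
x /= np.linalg.norm(x)                      # unit-norm test vector
for name, Phi in (("gaussian", gaussian), ("bernoulli", bernoulli),
                  ("sparse ternary", ternary)):
    print(f"{name:15s} ||Phi x||_2 = {np.linalg.norm(Phi @ x):.3f}")
```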

4.2 Deterministic Constructions

There are a few practical issues with utilizing random matrices for sensing: on-board storage of the sensing matrix, implementation in sampling circuitry, and verifying that the constructed matrix does indeed satisfy the RIP. Thus, despite the fact that certain random matrices satisfy the RIP with high probability [3, 9], it is still desirable for deterministic sensing matrices to be used in practice, since deterministic waveforms afford formulaic construction at run-time [1, 6]. As such, there have been multiple attempts in the literature to construct deterministic sensing matrices; a few examples are given in [1, 6, 14, 19]. Of these attempts, the approach of Calderbank et al. [1, 6] appears to this author the most promising and useful. In that work, the authors survey multiple attempts from the literature to construct deterministic sensing matrices, including, for each, the number of measurements required, the robustness to noise, and the complexity of reconstruction. Most efforts use concepts from the theory of linear codes, with the primary results centering on a weakening of the RIP to consider average reconstruction capability (the RIP itself is focused on the worst case and ensures that all K-sparse signals may be reconstructed given enough measurements).

There are a few key items provided in the formulation of Calderbank et al. [6]. First, there are simple-to-check conditions that determine whether the sensing matrix can recover all but an exponentially small fraction of K-sparse signals. Second, the entries of the sensing matrix can typically be computed on the fly (chirps are an example of a waveform that is considered). Third, recovery algorithms are available that are less computationally expensive than l_1 minimization. As a result, the authors show that partial Fourier matrices will reconstruct a K-sparse signal (with high probability) if the number of measurements M satisfies equation 9 with (log(N))^4 replaced by log(N). This is a significant reduction in the number of measurements required and explains why DFT matrices work well in practice. Additionally, it confirms the intuition and suspicions of Candes et al. [13].
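A partial-DFT sensing matrix of the kind discussed above is straightforward to construct: keep M of the N rows of the complete DFT matrix. In the minimal sketch below the rows are chosen at random, and the rescaling that makes measurements preserve energy on average is this sketch's normalization choice, not one fixed by the text.

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 64, 512

# Unitary N x N DFT matrix; keep M randomly selected rows.
F = np.fft.fft(np.eye(N)) / np.sqrt(N)
rows = rng.choice(N, size=M, replace=False)
Phi = np.sqrt(N / M) * F[rows, :]        # rescale so E||Phi x||_2^2 = ||x||_2^2

x = rng.standard_normal(N)
x /= np.linalg.norm(x)
print("||Phi x||_2 =", np.linalg.norm(Phi @ x))   # close to 1
```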

5 Extensions of Compressed Sensing and Future Directions

5.1 Model Based Compressed Sensing

Model-based extensions of CS use variations on the fundamental concepts of CS in their proofs while allowing more assumptions on signal structure. Primarily, structural relationships among the large coefficients of x are exploited to decrease the number of measurements required. For example, Eldar and Mishali consider the assumption that the large coefficients of x appear in blocks [17]. This notion is generalized by Baraniuk et al. [4]: no assumption is made that the large coefficients appear in blocks; rather, it is assumed that the large coefficients of x live on a rooted, connected tree structure. In this case, it is shown that recovery is possible with on the order of K measurements through alterations of the CS recovery algorithms. Since images represented in a wavelet basis tend to satisfy this constraint, the assumption has merit.

5.2 Compressed Measurements in Decision Systems

Because the goal of many applications is not reconstruction of the signal (e.g., any detection-based application), the pertinent question becomes how to utilize the compressed measurements without actually performing the reconstruction. The solution of this problem would provide insight into linking the measurement system with the controlling, decision-based system.

References

[1] Lorne Applebaum, Stephen Howard, Stephen Searle, and Robert Calderbank. Chirp sensing codes: Deterministic compressed sensing measurements for fast recovery. Applied and Computational Harmonic Analysis, 26(2), March 2009.

[2] Richard Baraniuk. Compressive sensing. IEEE Signal Processing Magazine, 24(4):118-121, July 2007.

[3] Richard Baraniuk, Mark Davenport, Ronald DeVore, and Michael Wakin. A simple proof of the restricted isometry property for random matrices. Constructive Approximation, 28(3):253-263, December 2008.

[4] Richard G. Baraniuk, Volkan Cevher, Marco F. Duarte, and Chinmay Hegde. Model-based compressive sensing. Preprint, 2008.

[5] Dror Baron, Marco Duarte, Shriram Sarvotham, Michael Wakin, and Richard Baraniuk. An information-theoretic approach to distributed compressed sensing. In Allerton Conference on Communication, Control, and Computing, 2005.

[6] Robert Calderbank, Stephen Howard, and Sina Jafarpour. Construction of a large class of deterministic matrices that satisfy the statistical isometry property. IEEE Transactions on Signal Processing, to appear.

[7] Emmanuel Candes. The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique, 346(9-10):589-592, May 2008.

[8] Emmanuel Candes and Justin Romberg. Sparsity and incoherence in compressive sampling. Inverse Problems, 23(3):969-985, 2007.

[9] Emmanuel Candes, Justin Romberg, and Terence Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489-509, February 2006.

[10] Emmanuel Candes, Justin Romberg, and Terence Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207-1223, August 2006.

[11] Emmanuel Candes and Terence Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):4203-4215, December 2005.

[12] Emmanuel Candes and Terence Tao. Near optimal signal recovery from random projections: Universal encoding strategies? IEEE Transactions on Information Theory, 52(12):5406-5425, December 2006.

[13] Emmanuel Candes and Michael Wakin. An introduction to compressive sampling. IEEE Signal Processing Magazine, 25(2):21-30, March 2008.

[14] Ronald A. DeVore. Deterministic constructions of compressed sensing matrices. Journal of Complexity, 23(4-6):918-925, August 2007.

[15] David Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289-1306, April 2006.

[16] Yonina C. Eldar. Compressed sensing of analog signals. Preprint, 2008.

[17] Yonina C. Eldar and Moshe Mishali. Robust recovery of signals from a structured union of subspaces. IEEE Transactions on Information Theory, to appear.

[18] Alyson Fletcher, Sundeep Rangan, and Vivek K Goyal. Rate-distortion bounds for sparse approximation. In IEEE Statistical Signal Processing Workshop, August 2007.

[19] Shamgar Gurevich, Ronny Hadani, and Nir Sochen. On some deterministic dictionaries supporting sparsity. Journal of Fourier Analysis and Applications, 14, December 2008.

[20] Shriram Sarvotham, Dror Baron, and Richard Baraniuk. Measurements vs. bits: Compressed sensing meets information theory. In Allerton Conference on Communication, Control, and Computing, September 2006.

[21] C. E. Shannon. Communication in the presence of noise. Proc. Institute of Radio Engineers, 37(1):10-21, January 1949.

[22] Joel Tropp. Just relax: Convex programming methods for subset selection and sparse approximation. Technical Report 0404, University of Texas, Austin, TX, February 2004.

[23] Manqi Zhao, Shuchin Aeron, and Venkatesh Saligrama. Sensing capacity and compressed sensing: Bounds and algorithms. In Allerton Conference on Communication, Control, and Computing.


More information

ACCORDING to Shannon s sampling theorem, an analog

ACCORDING to Shannon s sampling theorem, an analog 554 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 59, NO 2, FEBRUARY 2011 Segmented Compressed Sampling for Analog-to-Information Conversion: Method and Performance Analysis Omid Taheri, Student Member,

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee227c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee227c@berkeley.edu

More information

AFRL-RI-RS-TR

AFRL-RI-RS-TR AFRL-RI-RS-TR-200-28 THEORY AND PRACTICE OF COMPRESSED SENSING IN COMMUNICATIONS AND AIRBORNE NETWORKING STATE UNIVERSITY OF NEW YORK AT BUFFALO DECEMBER 200 FINAL TECHNICAL REPORT APPROVED FOR PUBLIC

More information

Shannon-Theoretic Limits on Noisy Compressive Sampling Mehmet Akçakaya, Student Member, IEEE, and Vahid Tarokh, Fellow, IEEE

Shannon-Theoretic Limits on Noisy Compressive Sampling Mehmet Akçakaya, Student Member, IEEE, and Vahid Tarokh, Fellow, IEEE 492 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 1, JANUARY 2010 Shannon-Theoretic Limits on Noisy Compressive Sampling Mehmet Akçakaya, Student Member, IEEE, Vahid Tarokh, Fellow, IEEE Abstract

More information

Compressive Sampling for Energy Efficient Event Detection

Compressive Sampling for Energy Efficient Event Detection Compressive Sampling for Energy Efficient Event Detection Zainul Charbiwala, Younghun Kim, Sadaf Zahedi, Jonathan Friedman, and Mani B. Srivastava Physical Signal Sampling Processing Communication Detection

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER 2011 7255 On the Performance of Sparse Recovery Via `p-minimization (0 p 1) Meng Wang, Student Member, IEEE, Weiyu Xu, and Ao Tang, Senior

More information

INDUSTRIAL MATHEMATICS INSTITUTE. B.S. Kashin and V.N. Temlyakov. IMI Preprint Series. Department of Mathematics University of South Carolina

INDUSTRIAL MATHEMATICS INSTITUTE. B.S. Kashin and V.N. Temlyakov. IMI Preprint Series. Department of Mathematics University of South Carolina INDUSTRIAL MATHEMATICS INSTITUTE 2007:08 A remark on compressed sensing B.S. Kashin and V.N. Temlyakov IMI Preprint Series Department of Mathematics University of South Carolina A remark on compressed

More information

Acommon problem in signal processing is to estimate an

Acommon problem in signal processing is to estimate an 5758 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER 2009 Necessary and Sufficient Conditions for Sparsity Pattern Recovery Alyson K. Fletcher, Member, IEEE, Sundeep Rangan, and Vivek

More information