Randomness-in-Structured Ensembles for Compressed Sensing of Images
Abdolreza Abdolhosseini Moghadam, Dep. of Electrical and Computer Engineering, Michigan State University
Hayder Radha, Dep. of Electrical and Computer Engineering, Michigan State University

Abstract

Leading compressed sensing (CS) methods require m = O(k log(n)) compressive samples to perfectly reconstruct a k-sparse signal x of size n using random projection matrices (e.g., Gaussian or random Fourier matrices). For a given m, perfect reconstruction usually requires high-complexity methods, such as Basis Pursuit (BP), whose complexity is O(n^3). Meanwhile, low-complexity greedy algorithms do not achieve the same level of performance as BP in terms of the quality of the reconstructed signal for the same m. In this paper, we introduce a new CS framework, which we refer to as Randomness-in-Structured Ensemble (RISE) projection. RISE projection matrices enable compressive sampling of image coefficients from random locations within the k-sparse image vector while imposing small structured overlaps. We prove that RISE-based compressed sensing requires only m = ck samples (where c is not a function of n) to perfectly recover a k-sparse image signal. For the case of n = O(k^2), the complexity of our solver is O(nk), which is less than the complexity of the popular greedy algorithm Orthogonal Matching Pursuit (OMP). Moreover, in practice we only need m = 2k samples to reconstruct the signal. We present simulation results that demonstrate the RISE framework's ability to recover the original image with higher than 50 dB PSNR, whereas other leading approaches (such as BP) achieve PSNR values of only around 30 dB.

I. INTRODUCTION

Traditional compressed sensing considers a length-n signal S (e.g., an n-pixel image or an n-pixel image region) which has a sparse decomposition in a known basis Ψ: S = Ψx. By a k-sparse signal, it is usually meant that x is non-zero in only k coordinates: k = ||x||_0 << n.
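The sparsity model above can be illustrated with a small numerical sketch. Here the orthonormal basis Psi is a random stand-in for a transform such as the DCT, and all names and sizes are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 64, 4    # signal length and sparsity; illustrative values

# Orthonormal basis Psi: a random stand-in for a transform such as the DCT.
Psi, _ = np.linalg.qr(rng.standard_normal((n, n)))

# k-sparse coefficient vector x: non-zero in only k coordinates.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# The observed signal S = Psi x has a k-sparse decomposition in Psi,
# with k = ||x||_0 << n.
S = Psi @ x
```

Since Psi is orthonormal, the sparse coefficients are recovered exactly by Psi^T S; the compressed sensing problem arises only once we observe far fewer than n linear measurements of S.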
In some applications, such as CT scanners and MRI, it is either expensive or impossible to sense all n samples of the original signal [1]. However, Tao, Candes and Donoho have shown that S can be recovered from far fewer linear incoherent measurements [1][2]. More specifically, if we project S onto a frame Φ that is incoherent with respect to Ψ, then given the compressive samples y_{m×1} = ΦS = ΦΨx and the projection matrix P_{m×n} = ΦΨ, we can recover x (or equivalently S) by solving:

min ||x||_0 subject to P_{m×n} x_{n×1} = y_{m×1}    (1)

Under state-of-the-art compressed sensing methods, we need m = O(k log(n)) compressive samples to perfectly reconstruct x using random projection matrices [3]-[5]. In this paper we introduce a novel family of Randomness-in-Structured Ensembles (RISE) projection matrices. More importantly, we prove that RISE-based compressed sensing requires only m = ck samples (where c is not a function of n) to recover the signal/image without introducing any error. For the case of n = O(k^2), the complexity of our solver is O(nk), which is less than the complexity of the greedy algorithm OMP [6]. Moreover, in practice we only need m = 2k samples to reconstruct the signal. The organization of this paper is as follows. In Section II we show that if we divide the image transform coefficients x (or coordinates) into roughly 8k random disjoint subsets, then with high probability x is non-zero in fewer than three coordinates of every subset. We then introduce the RISE framework, which can solve an underdetermined system of three linear equations in w unknowns provided those unknowns are non-zero in fewer than three coordinates. Thus, if we sense each of the 8k subsets using RISE, we can solve all sub-problems and thus reconstruct the original image.
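The claim that roughly 8k random subsets leave fewer than three non-zeros per subset can be checked numerically. The sketch below (the value of k and the trial count are illustrative) compares an empirical balls-in-bins simulation against the Poisson-style expression the paper uses later:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
k = 50          # number of non-zeros ("balls"); illustrative value
B = 8 * k       # number of random subsets ("bins"): eight times the sparsity

# Empirical check: throw k balls into B bins over many independent trials,
# recording the fraction of bins that hold at most two balls.
trials = 2000
frac = 0.0
for _ in range(trials):
    counts = np.bincount(rng.integers(0, B, size=k), minlength=B)
    frac += np.mean(counts <= 2)
frac /= trials

# Poisson-style expression for the expected fraction of bins with <= 2 balls,
# matching the balls-in-bins result cited in the paper:
lam = k / B
q_bar = sum(lam**j * math.exp(-lam) / math.factorial(j) for j in range(3))

# With B = 8k, essentially every subset holds fewer than three non-zeros.
```

Both the simulated fraction and the analytical expression come out above 0.99, which is the "with high probability" statement behind the choice of 8k subsets.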
Moreover, we show that if these subsets of indices have small overlaps with each other (for instance, similar to the incidence matrix of a Balanced Incomplete Block Design [7]), then in practice we need only m = 2k samples for exact recovery. Simulation results are presented in Section III.

II. RANDOMNESS-IN-STRUCTURED ENSEMBLES

Following the traditional compressed sensing literature, for simplicity and without loss of generality we refer to the sparse representation x as being the image. Consider an n-pixel, real-valued image x ∈ R^n which is non-zero in only k coordinates: k = ||x||_0 << n. We begin by dividing the image into random (equal-size) blocks/windows of size w. In other words, we select w random pixels from {x_1, x_2, ..., x_n} to form a block/window of size w. Next we take a few samples, m_W < w, from each of these blocks. More precisely, this can be stated as follows. Choose B = n/w equal-size partitions of the original set {1, 2, ..., n}, labeled W_i, such that:

∪_{i=1}^{B=n/w} W_i = {1, 2, ..., n},  ∀i: |W_i| = w    (2)

The subset of x with indices in W_i (denoted x_{W_i}) is the i-th block/window of the signal. We then sense m_W samples from
each block of the signal x_{W_i}, leading to the following equations: y_i = f_i(x_{W_l}), l ∈ [1, B], i ∈ [(l−1)m_W + 1, l·m_W]. This results in a compression ratio of m_W/w. We show that the image can be recovered from the compressive samples y provided each block spans a random collection of coordinates/indices. Further, if we allow these blocks to have a small amount of overlap (similar to the incidence matrix of a Balanced Incomplete Block Design [7], where the rows correspond to the subsets W_i), we can achieve higher compression ratios. In order to prove this claim, we need to employ some results on the classical balls-in-bins problem [8][9].

A. Design of Blocks

It will be useful to model our proposed sampling procedure in order to simplify the analysis. Dividing the length-n image vector x, which is non-zero in k indices, into B blocks is equivalent to randomly throwing k balls into B bins. In [8] it has been proven that, on average, the fraction of bins (blocks) containing no more than r balls (non-zeros) is:

Q̄(r, B, k) = Σ_{j=0}^{r} Q(j, B, k) = Σ_{j=0}^{r} (1/j!) (k/B)^j e^{−k/B}    (3)

So, for instance, setting r to two and the number of blocks B to eight times the number of non-zeros k gives Q̄(r, B, k) ≈ 1, or equivalently, with high probability every block contains fewer than three non-zeros. Assume now the existence of a template projection matrix T of dimension m_W × w (where m_W = 3) which can solve the underdetermined system of equations y_i = T x_{W_i} exactly whenever x_{W_i} has at most two non-zero entries. We can then sense three samples from each block x_{W_i} of the image x using our template T, solve all of the sub-problems y_l = T x_{W_l}, l ∈ [1, n/w], without introducing any error (with very high probability), and consequently recover the original image exactly. We have now reduced the problem to that of finding the appropriate projection template matrix T.

B.
Canonical RISE

As stated before, we begin by looking for a 3 × w template matrix T such that, if we obtain three sensed samples y = Tx from an image-vector x of length n = w according to the template, we are guaranteed to recover x exactly whenever x has fewer than three non-zero entries. (To simplify notation in this section, we replace x_{W_i} by x and y_i by y; hence we use n = w.) Let us consider the matrix T (Fig. 1) with the following properties. The first row consists of w random complex numbers, i.e., ∀i ∈ [1, w]: T_{1,i} = r_{1,i} e^{jφ_i}. In the first half of the second row there is one random complex number and in the second half there is another, i.e., T_{2,i} = r_{2,1} e^{jφ_1} for i ∈ [1, w/2] and T_{2,l} = r_{2,2} e^{jφ_2} for l ∈ [1 + w/2, w]. In the third row there are four unique random magnitudes: the magnitude is constant over each quarter and the phase is constant over each half, i.e., the phases match the second row while the magnitudes are T_{3,p} = r_{3,1}, T_{3,q} = r_{3,2}, T_{3,r} = r_{3,3}, T_{3,s} = r_{3,4} for p ∈ [1, w/4], q ∈ [1 + w/4, w/2], r ∈ [1 + w/2, 3w/4] and s ∈ [1 + 3w/4, w].

It is well known that one efficient way of finding a specific value in a sorted list of w numbers is a binary search tree (BST). Here we utilize a similar method: we begin by evaluating the sum of the signal over each quarter (using the second and third rows of the template), corresponding to a BST of depth two. Following this step, we use the first row to locate the non-zero elements within each sub-block. Additionally, the first sample provides a sanity check (i.e., the template T can detect whether there are more than two non-zeros in the signal), as demonstrated in Lemma 1. It is easy to show that we can remove one of the parameters r_{i,j} and still retain the same properties, but for now we focus on this general template.

Lemma 1.
Assume we sense three samples y_i, i ∈ {1, 2, 3}, according to the canonical RISE template from the length-(n = w) image x which is non-zero in k indices: y = Tx, k = ||x||_0. Then we can recover x exactly if k ≤ 2, with worst-case complexity O(w^2).

Proof: We prove the lemma for each value of k separately.

1) k = 0, i.e., the image is zero in all indices: ∀i: x_i = 0. This case is trivial, because all three compressive samples are zero.

2) k = 1, i.e., x is non-zero in only one index. The phase of the sum of two complex numbers differs from each of the individual phases (except when the two lie along the same line): ∠(a+b) ∉ {∠a, ∠b}. Using the fact that x is real-valued and the entries of the first row of the template have random phases, if we can find the phase of the first sample among the phases of the first-row entries, ∠y_1 = φ_i, we can conclude that the signal is non-zero in only that index:

x_j = { y_1 e^{−jφ_j} / r_{1,j} if j = i; 0 otherwise }    (4)

3) k = 2: Let us define four partial sums S_i such that:

S_i = Σ_{j=1+(i−1)w/4}^{iw/4} x_j,  i ∈ {1, 2, 3, 4}    (5)

We can now express the second and third compressive samples as:

y_2 = (S_1 + S_2) r_{2,1} e^{jφ_1} + (S_3 + S_4) r_{2,2} e^{jφ_2}    (6)
y_3 = (r_{3,1} S_1 + r_{3,2} S_2) e^{jφ_1} + (r_{3,3} S_3 + r_{3,4} S_4) e^{jφ_2}    (7)

Since each complex number can be written uniquely as a linear combination of two given independent complex numbers, and all parameters of the template are random, hence
we can rewrite the second compressive sample as the following full-rank system of equations:

Re{y_2} = (S_1 + S_2) r_{2,1} Re{e^{jφ_1}} + (S_3 + S_4) r_{2,2} Re{e^{jφ_2}}    (8)
Im{y_2} = (S_1 + S_2) r_{2,1} Im{e^{jφ_1}} + (S_3 + S_4) r_{2,2} Im{e^{jφ_2}}    (9)

By solving this system, we obtain S_1 + S_2 and S_3 + S_4. Using the same argument, we can restate the third compressive sample as:

Re{y_3} = (r_{3,1} S_1 + r_{3,2} S_2) Re{e^{jφ_1}} + (r_{3,3} S_3 + r_{3,4} S_4) Re{e^{jφ_2}}    (10)
Im{y_3} = (r_{3,1} S_1 + r_{3,2} S_2) Im{e^{jφ_1}} + (r_{3,3} S_3 + r_{3,4} S_4) Im{e^{jφ_2}}    (11)

This system of equations gives us r_{3,1} S_1 + r_{3,2} S_2 and r_{3,3} S_3 + r_{3,4} S_4. Since we already have S_1 + S_2 and S_3 + S_4, we can then obtain the individual partial sums S_i. Clearly, if a given S_i turns out to be zero, we can conclude with high probability that the image is zero over all the associated indices:

if S_i = 0 then x_j = 0, ∀j ∈ [1 + (i−1)w/4, iw/4]    (12)

Assume S_t and S_l are non-zero, with t ≠ l. Then we conclude that there is at least one non-zero in each of the following ranges:

[1 + (t−1)w/4, tw/4] and [1 + (l−1)w/4, lw/4]    (13)

For now, suppose that there are exactly two non-zeros in x, at the indices c and d, lying in the t-th and l-th sub-blocks respectively. We now perform an exhaustive search to find the indices where the signal is non-zero. More specifically, we evaluate all x̂_a and x̂_b such that:

x̂_a r_{1,a} e^{jφ_a} + x̂_b r_{1,b} e^{jφ_b} = y_1    (14)
a ∈ [1 + (t−1)w/4, tw/4], b ∈ [1 + (l−1)w/4, lw/4]    (15)

For each possible pair (a, b) there is a unique pair (x̂_a, x̂_b) such that the above equation holds true; however, just one of these pairs is consistent with the first sample and the partial sums. For proof, consider the following overdetermined system of equations:

x̂_a r_{1,a} e^{jφ_a} + x̂_b r_{1,b} e^{jφ_b} = y_1,  x̂_a = S_t,  x̂_b = S_l    (16)

The only (x̂_a, x̂_b) that makes these equations consistent is the pair (x_c, x_d). Thus we are performing a full search for a rank-two subspace consistent with the three samples; we can find such a subspace whenever the rank of the samples is indeed two, and this subspace is unique. If t = l, this implies that there are at least two non-zeros in the t-th sub-block; again, if the full search finds a unique solution of the system of equations (16), then there are exactly two non-zeros in that sub-block. Clearly, the complexity of this full search is O(w^2). If there is more than one solution of the system of equations, this implies that the system is underdetermined, i.e., x is non-zero in more than two indices, so we can detect this case as well. In summary, if the image is non-zero in at most two indices, we are able to recover it exactly from just three samples; moreover, we can detect when the image has more than two non-zeros.

Fig. 1: The canonical RISE projection matrix:
T = [ r_{1,1} e^{jφ_1}   r_{1,2} e^{jφ_2}   ...   r_{1,w} e^{jφ_w}
      r_{2,1} e^{jφ_1}   ...   r_{2,1} e^{jφ_1}   r_{2,2} e^{jφ_2}   ...   r_{2,2} e^{jφ_2}
      r_{3,1} e^{jφ_1}  ...  r_{3,2} e^{jφ_1}  ...  r_{3,3} e^{jφ_2}  ...  r_{3,4} e^{jφ_2} ]

C. The RISE Projection Matrix

So far we have proved that, when dividing a sparse image vector into blocks (randomly), if the number of blocks is around eight times the number of non-zeros, then with high probability every block contains fewer than three non-zeros and the RISE template can solve all such sub-problems. According to (3), if the number of blocks equals k, then on average fewer than 8% of the blocks contain more than two non-zeros, and the RISE template cannot identify the signal values in those blocks. However, if we design the blocks W_i such that they have small overlaps with each other, then we do not need every block to be solvable: if we are unable to solve a specific block, there are other blocks which jointly cover its indices, and we know that most of the blocks are solvable. Moreover, by adding overlaps between blocks, the rank of the sub-matrices of the projection matrix increases, which means that even when other blocks cannot help us determine the signal values of an unsolvable block, we might still find those values directly after some simple operations. The next question is: how should the blocks overlap? We propose using objects from combinatorial design theory called incidence matrices of Balanced Incomplete Block Designs (BIBDs) [7].

The incidence matrix E of a BIBD with integer parameters (v, K, λ), v > K ≥ 2, λ > 0, has the following properties: every column of E contains exactly K ones and every row of E contains exactly r ones, where r = λ(v−1)/(K−1); moreover, any two distinct rows of E both contain ones in exactly λ columns. Let M be the matrix formed by randomly permuting the columns of E, and let M_i be the set of columns in which the i-th row of M is one. Then the i-th block W_i is M_i. Therefore each image (transform) coefficient appears in K blocks, and every two blocks overlap in exactly λ coefficients. In summary, the proposed projection matrix has the form:

P_{i,j} = { T_{·,j} if j ∈ M_{⌈i/3⌉}; 0 otherwise },  where T is the RISE template (with the template columns matched to the ordered indices in M_{⌈i/3⌉}).    (17)

D. The RISE Solver

Suppose we divide the image into overlapping blocks as described in the previous sub-section. Thus, it is the case that
most of the time there are fewer than three non-zeros in a block, and only a small fraction of blocks contain more than two non-zeros. For the case n = O(k^2) we have m, w = O(k); thus the complexity of the solver is m·O(w^2) = O(nk), which is less than the complexity of OMP, O(nk^2). Let us denote by C (the Confident Set) the indices whose pixel values we were able to evaluate, and by NC (the Not-Confident Set) the indices whose pixel values we could not determine; clearly C ∩ NC = ∅. We can express our compressive samples as y = Px = P_C x_C + P_NC x_NC. After the first pass of our algorithm, we reduce the dimension of the system of equations by substituting the contribution of the Confident Set into the compressive samples:

y_NC = y − P_C x_C = P_NC x_NC    (18)

We expect the size of NC to be less than the rank of our projection matrix. If this is the case, we can find the values of the remaining unknowns by x_NC = (P_NC)^† y_NC. If the cardinality of NC is larger than the rank of P_NC, we can pass the new underdetermined problem y_NC = P_NC x_NC to an available solver such as Basis Pursuit or StOMP [5]. Since with high probability we have already determined the signal in at least some coordinates, this new problem is sparser than the original problem y = Px; hence BP or OMP can solve it much more reliably than the original problem.

III. SIMULATION RESULTS

We tested the proposed RISE-based scheme with overlapping blocks on a large number of standard compressed sensing signals and images. Here we present results for the cameraman image and compare against two dominant solvers/projection matrices. We performed compressive sampling on each n = 8×8 block of the image. More specifically, the target image is formed by keeping the largest k = 16 DCT coefficients of each 8×8 block and setting the rest of the coefficients to zero.
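The reduction step of equation (18) can be sketched as follows. The confident set, the projection matrix P, and the use of an ordinary least-squares solve are all simplifying stand-ins for the actual RISE projection and first pass; sizes and index values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 40, 20
x_true = np.zeros(n)
x_true[[3, 11, 27]] = [1.5, -2.0, 0.7]        # sparse ground truth

P = rng.standard_normal((m, n))               # stand-in projection matrix
y = P @ x_true                                # compressive samples: y = P x

# Suppose (illustratively) the first pass determined coordinates 0..24,
# i.e. C = {0,...,24} is the Confident Set and NC = {25,...,39}.
conf_idx = np.arange(25)
nc_idx = np.arange(25, n)
x_conf = x_true[conf_idx]                     # values recovered in pass one

# Reduce the system: y_NC = y - P_C x_C = P_NC x_NC
y_nc = y - P[:, conf_idx] @ x_conf

# Here |NC| = 15 <= rank(P_NC), so the reduced system is solvable directly:
x_nc, *_ = np.linalg.lstsq(P[:, nc_idx], y_nc, rcond=None)

x_hat = np.zeros(n)
x_hat[conf_idx] = x_conf
x_hat[nc_idx] = x_nc
```

Because the reduced matrix P_NC has full column rank with probability one, the least-squares solve recovers the remaining unknowns exactly; when |NC| exceeds the rank, the reduced (but sparser) problem would instead be handed to BP or OMP, as described above.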
For RISE, we divide each 8×8 block of the image into 10 random windows with BIBD parameters λ = 1, K = 2, and sense three samples from each window according to the RISE template. Moreover, we sensed two additional random compressive samples in order to increase the rank of the resulting sub-matrices, for m = 32 total samples per block. For comparison, we sensed 32 samples from each block using a Gaussian random matrix and fed these samples to the OMP and BP solvers. The target image formed by keeping the largest 16 DCT coefficients of each 8×8 block, Fig. 2a, is then recovered by the three schemes being compared. As clearly shown in Fig. 2, RISE was able to reconstruct the image (virtually) perfectly from its compressive samples, while BP and OMP failed to achieve perfect reconstruction. We should note that in the cameraman image the DCT coefficients (excluding the DC value) are very small in most of the blocks; therefore, if an algorithm recovers only the largest coefficient (e.g., the DC coefficient), the errors in those blocks are unnoticeable. On the other hand, in blocks at the center of the image all 16 non-zero DCT coefficients are significant, and consequently any error in the recovery of these blocks is obtrusive. As illustrated in Fig. 2, BP and OMP failed to recover these regions of the image, whereas RISE reconstructed them perfectly. To demonstrate the effect of the number of compressive samples m on the quality of recovery, we measured the PSNR of the reconstructed images for a range of m (Fig. 3). Each data point shows the average of 10 iterations; in our simulations, the average performance was very close to the typical performance of a single run. Although OMP and BP improve in terms of overall image quality as the number of compressive samples increases, these methods cannot achieve perfect reconstruction even with m ≈ 3k samples.
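For the BIBD parameters used in this experiment (K = 2, λ = 1), the blocks are simply all pairs of points, i.e. the incidence structure of a complete graph. A minimal sketch (with an illustrative number of points v) that verifies the stated BIBD properties:

```python
import numpy as np
from itertools import combinations

# Incidence matrix of a (v, K=2, lambda=1) design whose blocks are all
# pairs of points (the complete-graph design); v is an illustrative choice.
v = 6
pairs = list(combinations(range(v), 2))
E = np.zeros((v, len(pairs)), dtype=int)   # rows: points, columns: blocks
for j, (a, b) in enumerate(pairs):
    E[a, j] = E[b, j] = 1

K, lam = 2, 1
r = lam * (v - 1) // (K - 1)               # r = lambda (v-1)/(K-1)
assert (E.sum(axis=0) == K).all()          # every column has exactly K ones
assert (E.sum(axis=1) == r).all()          # every row has exactly r ones
# Any two distinct rows share ones in exactly lambda columns:
G = E @ E.T
assert (G[~np.eye(v, dtype=bool)] == lam).all()
```

Randomly permuting the columns of E and reading off the ones in each row then yields overlapping blocks W_i in which every coefficient appears in K = 2 blocks and every two blocks overlap in exactly λ = 1 coefficient.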
Meanwhile, with RISE we achieved (virtually) perfect reconstruction in all of these scenarios except the case of m = 38 samples. Even in that case the quality of the reconstructed images was significantly higher than that of the images recovered by BP and OMP; due to the random nature of defining the windows W_i, there were a few blocks where the cardinality of NC was higher than the rank of the RISE projection matrix. Finally, Fig. 4 illustrates the time required to recover the image from its compressive samples (average of 10 iterations for each data point). In this paper we claimed that the complexity of the RISE solver is less than the complexity of OMP; this figure verifies that assertion.

[Fig. 3: PSNR (dB) of the recovered image as a function of the number of compressive samples m for RISE, OMP and BP (average of 10 iterations for each data point).]

IV. CONCLUSION

In this paper we presented a new approach to the compressed sensing problem which does not involve any optimization algorithm or greedy decisions. In our approach, we divide the image transform coefficients into random subsets, disjoint or with very small overlap, such that with high probability the number of non-zeros in every subset is less than a predefined threshold. We then introduce a template matrix which can solve an underdetermined system of equations when
the number of non-zero unknowns is less than that threshold. Thus, if we sense each of the subsets of the image with the template, all sub-problems are solvable. In practice, the complexity of the solver can be lower than the complexity of OMP, and the quality of the recovery is better than that of BP.

[Fig. 2: Comparison of RISE with Basis Pursuit and Orthogonal Matching Pursuit (n = 64, m = 36, k = 18): (a) target image, (b) RISE recovered, (c) BP (Gaussian matrix) recovered, (d) OMP (Gaussian matrix) recovered.]

[Fig. 4: Time (seconds) required to recover the image from its compressive samples for RISE, BP and OMP, as a function of the number of samples (average of 10 iterations for each data point).]

ACKNOWLEDGMENT

This work was supported in part by NSF Award CNS , NSF Award CCF , NSF Award CCF , and an unrestricted gift from Microsoft Research.

REFERENCES

[1] Emmanuel Candès, Justin Romberg, and Terence Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. on Information Theory, 52(2), February 2006.
[2] David Donoho, "Compressed sensing," IEEE Trans. on Information Theory, 52(4), April 2006.
[3] Emmanuel Candès and Terence Tao, "Near optimal signal recovery from random projections: Universal encoding strategies?," IEEE Trans. on Information Theory, 52(12), December 2006.
[4] Scott Chen, David Donoho, and Michael A. Saunders, "Atomic decomposition by Basis Pursuit," SIAM J. Sci. Comput., Vol. 20, No. 1, 1998.
[5] David L. Donoho, Yaakov Tsaig, Iddo Drori, and Jean-Luc Starck, "Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit," preprint, 2007.
[6] J. Tropp and A. C. Gilbert, "Signal recovery from partial information via Orthogonal Matching Pursuit," submitted for publication, April.
[7] Douglas R. Stinson, Combinatorial Designs: Construction and Analysis, Springer-Verlag, New York, 2004.
[8]
[9] Raab and Steger, "Balls into Bins - A Simple and Tight Analysis," Lecture Notes in Computer Science, Springer.
An Introduction to Sparse Approximation Anna C. Gilbert Department of Mathematics University of Michigan Basic image/signal/data compression: transform coding Approximate signals sparsely Compress images,
More informationCompressed Sensing with Very Sparse Gaussian Random Projections
Compressed Sensing with Very Sparse Gaussian Random Projections arxiv:408.504v stat.me] Aug 04 Ping Li Department of Statistics and Biostatistics Department of Computer Science Rutgers University Piscataway,
More informationIEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER 2011 7255 On the Performance of Sparse Recovery Via `p-minimization (0 p 1) Meng Wang, Student Member, IEEE, Weiyu Xu, and Ao Tang, Senior
More informationIntroduction to Compressed Sensing
Introduction to Compressed Sensing Alejandro Parada, Gonzalo Arce University of Delaware August 25, 2016 Motivation: Classical Sampling 1 Motivation: Classical Sampling Issues Some applications Radar Spectral
More informationOrthogonal Matching Pursuit for Sparse Signal Recovery With Noise
Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. Citation As Published
More informationA Greedy Search Algorithm with Tree Pruning for Sparse Signal Recovery
A Greedy Search Algorithm with Tree Pruning for Sparse Signal Recovery Jaeseok Lee, Suhyuk Kwon, and Byonghyo Shim Information System Laboratory School of Infonnation and Communications, Korea University
More informationSparse Expander-like Real-valued Projection (SERP) matrices for compressed sensing
Sparse Expander-like Real-valued Projection (SERP) matrices for compressed sensing Abdolreza Abdolhosseini Moghadam and Hayder Radha Department of Electrical and Computer Engineering, Michigan State University,
More informationSolving Underdetermined Linear Equations and Overdetermined Quadratic Equations (using Convex Programming)
Solving Underdetermined Linear Equations and Overdetermined Quadratic Equations (using Convex Programming) Justin Romberg Georgia Tech, ECE Caltech ROM-GR Workshop June 7, 2013 Pasadena, California Linear
More informationAFRL-RI-RS-TR
AFRL-RI-RS-TR-200-28 THEORY AND PRACTICE OF COMPRESSED SENSING IN COMMUNICATIONS AND AIRBORNE NETWORKING STATE UNIVERSITY OF NEW YORK AT BUFFALO DECEMBER 200 FINAL TECHNICAL REPORT APPROVED FOR PUBLIC
More informationGreedy Signal Recovery and Uniform Uncertainty Principles
Greedy Signal Recovery and Uniform Uncertainty Principles SPIE - IE 2008 Deanna Needell Joint work with Roman Vershynin UC Davis, January 2008 Greedy Signal Recovery and Uniform Uncertainty Principles
More informationCompressive Sensing with Random Matrices
Compressive Sensing with Random Matrices Lucas Connell University of Georgia 9 November 017 Lucas Connell (University of Georgia) Compressive Sensing with Random Matrices 9 November 017 1 / 18 Overview
More informationModel-Based Compressive Sensing for Signal Ensembles. Marco F. Duarte Volkan Cevher Richard G. Baraniuk
Model-Based Compressive Sensing for Signal Ensembles Marco F. Duarte Volkan Cevher Richard G. Baraniuk Concise Signal Structure Sparse signal: only K out of N coordinates nonzero model: union of K-dimensional
More informationSparse Solutions of Systems of Equations and Sparse Modelling of Signals and Images
Sparse Solutions of Systems of Equations and Sparse Modelling of Signals and Images Alfredo Nava-Tudela ant@umd.edu John J. Benedetto Department of Mathematics jjb@umd.edu Abstract In this project we are
More informationLarge-Scale L1-Related Minimization in Compressive Sensing and Beyond
Large-Scale L1-Related Minimization in Compressive Sensing and Beyond Yin Zhang Department of Computational and Applied Mathematics Rice University, Houston, Texas, U.S.A. Arizona State University March
More information5742 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER /$ IEEE
5742 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER 2009 Uncertainty Relations for Shift-Invariant Analog Signals Yonina C. Eldar, Senior Member, IEEE Abstract The past several years
More informationCompressive Sensing and Beyond
Compressive Sensing and Beyond Sohail Bahmani Gerorgia Tech. Signal Processing Compressed Sensing Signal Models Classics: bandlimited The Sampling Theorem Any signal with bandwidth B can be recovered
More informationConstructing Explicit RIP Matrices and the Square-Root Bottleneck
Constructing Explicit RIP Matrices and the Square-Root Bottleneck Ryan Cinoman July 18, 2018 Ryan Cinoman Constructing Explicit RIP Matrices July 18, 2018 1 / 36 Outline 1 Introduction 2 Restricted Isometry
More informationExact Low-rank Matrix Recovery via Nonconvex M p -Minimization
Exact Low-rank Matrix Recovery via Nonconvex M p -Minimization Lingchen Kong and Naihua Xiu Department of Applied Mathematics, Beijing Jiaotong University, Beijing, 100044, People s Republic of China E-mail:
More informationSolution Recovery via L1 minimization: What are possible and Why?
Solution Recovery via L1 minimization: What are possible and Why? Yin Zhang Department of Computational and Applied Mathematics Rice University, Houston, Texas, U.S.A. Eighth US-Mexico Workshop on Optimization
More informationCOMPRESSED SENSING IN PYTHON
COMPRESSED SENSING IN PYTHON Sercan Yıldız syildiz@samsi.info February 27, 2017 OUTLINE A BRIEF INTRODUCTION TO COMPRESSED SENSING A BRIEF INTRODUCTION TO CVXOPT EXAMPLES A Brief Introduction to Compressed
More informationSimultaneous Sparsity
Simultaneous Sparsity Joel A. Tropp Anna C. Gilbert Martin J. Strauss {jtropp annacg martinjs}@umich.edu Department of Mathematics The University of Michigan 1 Simple Sparse Approximation Work in the d-dimensional,
More informationPackage R1magic. August 29, 2016
Type Package Package R1magic August 29, 2016 Title Compressive Sampling: Sparse Signal Recovery Utilities Version 0.3.2 Date 2015-04-19 Maintainer Depends stats, utils Utilities
More informationA Survey of Compressive Sensing and Applications
A Survey of Compressive Sensing and Applications Justin Romberg Georgia Tech, School of ECE ENS Winter School January 10, 2012 Lyon, France Signal processing trends DSP: sample first, ask questions later
More informationSparse Optimization Lecture: Sparse Recovery Guarantees
Those who complete this lecture will know Sparse Optimization Lecture: Sparse Recovery Guarantees Sparse Optimization Lecture: Sparse Recovery Guarantees Instructor: Wotao Yin Department of Mathematics,
More informationCompressed Sensing and Related Learning Problems
Compressed Sensing and Related Learning Problems Yingzhen Li Dept. of Mathematics, Sun Yat-sen University Advisor: Prof. Haizhang Zhang Advisor: Prof. Haizhang Zhang 1 / Overview Overview Background Compressed
More informationExact Topology Identification of Large-Scale Interconnected Dynamical Systems from Compressive Observations
Exact Topology Identification of arge-scale Interconnected Dynamical Systems from Compressive Observations Borhan M Sanandaji, Tyrone Vincent, and Michael B Wakin Abstract In this paper, we consider the
More informationCompressed Sensing Using Reed- Solomon and Q-Ary LDPC Codes
Compressed Sensing Using Reed- Solomon and Q-Ary LDPC Codes Item Type text; Proceedings Authors Jagiello, Kristin M. Publisher International Foundation for Telemetering Journal International Telemetering
More informationLow-Dimensional Signal Models in Compressive Sensing
University of Colorado, Boulder CU Scholar Electrical, Computer & Energy Engineering Graduate Theses & Dissertations Electrical, Computer & Energy Engineering Spring 4-1-2013 Low-Dimensional Signal Models
More informationNew Applications of Sparse Methods in Physics. Ra Inta, Centre for Gravitational Physics, The Australian National University
New Applications of Sparse Methods in Physics Ra Inta, Centre for Gravitational Physics, The Australian National University 2 Sparse methods A vector is S-sparse if it has at most S non-zero coefficients.
More informationBlind Compressed Sensing
1 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE arxiv:1002.2586v2 [cs.it] 28 Apr 2010 Abstract The fundamental principle underlying compressed sensing is that a signal,
More informationInverse problems and sparse models (1/2) Rémi Gribonval INRIA Rennes - Bretagne Atlantique, France
Inverse problems and sparse models (1/2) Rémi Gribonval INRIA Rennes - Bretagne Atlantique, France remi.gribonval@inria.fr Structure of the tutorial Session 1: Introduction to inverse problems & sparse
More informationReconstruction from Anisotropic Random Measurements
Reconstruction from Anisotropic Random Measurements Mark Rudelson and Shuheng Zhou The University of Michigan, Ann Arbor Coding, Complexity, and Sparsity Workshop, 013 Ann Arbor, Michigan August 7, 013
More informationTutorial: Sparse Signal Processing Part 1: Sparse Signal Representation. Pier Luigi Dragotti Imperial College London
Tutorial: Sparse Signal Processing Part 1: Sparse Signal Representation Pier Luigi Dragotti Imperial College London Outline Part 1: Sparse Signal Representation ~90min Part 2: Sparse Sampling ~90min 2
More informationRobust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information
1 Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information Emmanuel Candès, California Institute of Technology International Conference on Computational Harmonic
More information17 Random Projections and Orthogonal Matching Pursuit
17 Random Projections and Orthogonal Matching Pursuit Again we will consider high-dimensional data P. Now we will consider the uses and effects of randomness. We will use it to simplify P (put it in a
More informationRecent Developments in Compressed Sensing
Recent Developments in Compressed Sensing M. Vidyasagar Distinguished Professor, IIT Hyderabad m.vidyasagar@iith.ac.in, www.iith.ac.in/ m vidyasagar/ ISL Seminar, Stanford University, 19 April 2018 Outline
More informationLecture Notes 9: Constrained Optimization
Optimization-based data analysis Fall 017 Lecture Notes 9: Constrained Optimization 1 Compressed sensing 1.1 Underdetermined linear inverse problems Linear inverse problems model measurements of the form
More informationCompressed Sensing and Sparse Recovery
ELE 538B: Sparsity, Structure and Inference Compressed Sensing and Sparse Recovery Yuxin Chen Princeton University, Spring 217 Outline Restricted isometry property (RIP) A RIPless theory Compressed sensing
More informationStable Signal Recovery from Incomplete and Inaccurate Measurements
Stable Signal Recovery from Incomplete and Inaccurate Measurements EMMANUEL J. CANDÈS California Institute of Technology JUSTIN K. ROMBERG California Institute of Technology AND TERENCE TAO University
More informationTree-Structured Compressive Sensing with Variational Bayesian Analysis
1 Tree-Structured Compressive Sensing with Variational Bayesian Analysis Lihan He, Haojun Chen and Lawrence Carin Department of Electrical and Computer Engineering Duke University Durham, NC 27708-0291
More informationGradient Descent with Sparsification: An iterative algorithm for sparse recovery with restricted isometry property
: An iterative algorithm for sparse recovery with restricted isometry property Rahul Garg grahul@us.ibm.com Rohit Khandekar rohitk@us.ibm.com IBM T. J. Watson Research Center, 0 Kitchawan Road, Route 34,
More informationSparse analysis Lecture V: From Sparse Approximation to Sparse Signal Recovery
Sparse analysis Lecture V: From Sparse Approximation to Sparse Signal Recovery Anna C. Gilbert Department of Mathematics University of Michigan Connection between... Sparse Approximation and Compressed
More informationAn Overview of Compressed Sensing
An Overview of Compressed Sensing Nathan Schneider November 18, 2009 Abstract In a large number of applications, the system will be designed to sample at a rate equal to at least the frequency bandwidth
More informationRecovery Guarantees for Rank Aware Pursuits
BLANCHARD AND DAVIES: RECOVERY GUARANTEES FOR RANK AWARE PURSUITS 1 Recovery Guarantees for Rank Aware Pursuits Jeffrey D. Blanchard and Mike E. Davies Abstract This paper considers sufficient conditions
More informationExploiting Structure in Wavelet-Based Bayesian Compressive Sensing
Exploiting Structure in Wavelet-Based Bayesian Compressive Sensing 1 Lihan He and Lawrence Carin Department of Electrical and Computer Engineering Duke University, Durham, NC 2778-291 USA {lihan,lcarin}@ece.duke.edu
More informationRobust Principal Component Analysis
ELE 538B: Mathematics of High-Dimensional Data Robust Principal Component Analysis Yuxin Chen Princeton University, Fall 2018 Disentangling sparse and low-rank matrices Suppose we are given a matrix M
More informationStability and Robustness of Weak Orthogonal Matching Pursuits
Stability and Robustness of Weak Orthogonal Matching Pursuits Simon Foucart, Drexel University Abstract A recent result establishing, under restricted isometry conditions, the success of sparse recovery
More informationCompressive Sensing Theory and L1-Related Optimization Algorithms
Compressive Sensing Theory and L1-Related Optimization Algorithms Yin Zhang Department of Computational and Applied Mathematics Rice University, Houston, Texas, USA CAAM Colloquium January 26, 2009 Outline:
More informationCoSaMP. Iterative signal recovery from incomplete and inaccurate samples. Joel A. Tropp
CoSaMP Iterative signal recovery from incomplete and inaccurate samples Joel A. Tropp Applied & Computational Mathematics California Institute of Technology jtropp@acm.caltech.edu Joint with D. Needell
More informationEquivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 19, NO. 12, DECEMBER 2008 2009 Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation Yuanqing Li, Member, IEEE, Andrzej Cichocki,
More informationRecovery of Sparse Signals Using Multiple Orthogonal Least Squares
Recovery of Sparse Signals Using Multiple Orthogonal east Squares Jian Wang and Ping i Department of Statistics and Biostatistics, Department of Computer Science Rutgers, The State University of New Jersey
More informationA Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization of Analog Transmission
Li and Kang: A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing 1 A Structured Construction of Optimal Measurement Matrix for Noiseless Compressed Sensing via Polarization
More informationL-statistics based Modification of Reconstruction Algorithms for Compressive Sensing in the Presence of Impulse Noise
L-statistics based Modification of Reconstruction Algorithms for Compressive Sensing in the Presence of Impulse Noise Srdjan Stanković, Irena Orović and Moeness Amin 1 Abstract- A modification of standard
More informationRecovering overcomplete sparse representations from structured sensing
Recovering overcomplete sparse representations from structured sensing Deanna Needell Claremont McKenna College Feb. 2015 Support: Alfred P. Sloan Foundation and NSF CAREER #1348721. Joint work with Felix
More informationECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis
ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis Lecture 7: Matrix completion Yuejie Chi The Ohio State University Page 1 Reference Guaranteed Minimum-Rank Solutions of Linear
More informationSparse representation classification and positive L1 minimization
Sparse representation classification and positive L1 minimization Cencheng Shen Joint Work with Li Chen, Carey E. Priebe Applied Mathematics and Statistics Johns Hopkins University, August 5, 2014 Cencheng
More informationFast Hard Thresholding with Nesterov s Gradient Method
Fast Hard Thresholding with Nesterov s Gradient Method Volkan Cevher Idiap Research Institute Ecole Polytechnique Federale de ausanne volkan.cevher@epfl.ch Sina Jafarpour Department of Computer Science
More informationCOMPARATIVE ANALYSIS OF ORTHOGONAL MATCHING PURSUIT AND LEAST ANGLE REGRESSION
COMPARATIVE ANALYSIS OF ORTHOGONAL MATCHING PURSUIT AND LEAST ANGLE REGRESSION By Mazin Abdulrasool Hameed A THESIS Submitted to Michigan State University in partial fulfillment of the requirements for
More informationORTHOGONAL matching pursuit (OMP) is the canonical
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 9, SEPTEMBER 2010 4395 Analysis of Orthogonal Matching Pursuit Using the Restricted Isometry Property Mark A. Davenport, Member, IEEE, and Michael
More informationRecovery of Low Rank and Jointly Sparse. Matrices with Two Sampling Matrices
Recovery of Low Rank and Jointly Sparse 1 Matrices with Two Sampling Matrices Sampurna Biswas, Hema K. Achanta, Mathews Jacob, Soura Dasgupta, and Raghuraman Mudumbai Abstract We provide a two-step approach
More informationStochastic geometry and random matrix theory in CS
Stochastic geometry and random matrix theory in CS IPAM: numerical methods for continuous optimization University of Edinburgh Joint with Bah, Blanchard, Cartis, and Donoho Encoder Decoder pair - Encoder/Decoder
More information