Analog-to-Information Conversion
1 Analog-to-Information Conversion
Sergiy A. Vorobyov, Dept. of Signal Processing and Acoustics, Aalto University
February 2013 Winter School on Compressed Sensing, Ruka
2 Outline
1 Compressed Sampling (CS): motivations and basics
2 AIC approaches and Xampling
3 Example of frequency-sparse signal reconstruction
4 Analog-domain CS models
5 The segmented CS method
6 Analytical results
7 Conclusion
3 CS motivation
- Nyquist-rate sampling. Can the signal be recovered from fewer samples?
- CS: sensing and compressing simultaneously.
- Sparsity / compressibility.
(Picture from Compressive sensing, by Richard Baraniuk)
4 The sensing problem
- Interested in undersampled situations!
- Is accurate recovery possible? How should the sensing waveforms be designed?
- The recovery problem is ill-posed: y = Φx ∈ R^K with K < N.
(Picture from Tutorial on compressive sensing, by Richard Baraniuk, Justin Romberg and Michael Wakin)
5 The sensing problem
- But our signal of interest is sparse.
6 The sensing problem
- We can think of Φ as being K × S.
- But we do not know the locations of the nonzero coefficients.
7 The sensing problem
- Design Φ so that all of its K × 2S submatrices are full rank.
- Then any two S-sparse signals x_1 and x_2 are mapped to different measurements.
8 Restricted Isometry Property (RIP)
- The S-restricted isometry constant δ_S is the smallest number satisfying
  (K/N)(1 − δ_S) ||c||²_ℓ2 ≤ ||Φ_T c||²_ℓ2 ≤ (K/N)(1 + δ_S) ||c||²_ℓ2
  for every index set T of cardinality S.
- To be able to recover S-sparse signals uniquely, δ_2S < 1 is required.
- Form the entries of Φ by taking i.i.d. samples of a zero-mean normal distribution with variance 1/N.
- The RIP then holds with high probability if K ≥ C S log(N/S).
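The Gaussian construction above can be probed numerically. The sketch below (an illustration, not from the slides) draws Φ with i.i.d. N(0, 1/N) entries and measures how far ||Φc||² strays from (K/N)||c||² over random S-sparse vectors c. Note this is only a loose empirical probe: the true RIP constant is a worst case over all supports, not an average over random draws.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, S = 512, 128, 10

# Entries of Phi: i.i.d. zero-mean Gaussian with variance 1/N,
# matching the construction on the slide.
Phi = rng.normal(0.0, np.sqrt(1.0 / N), size=(K, N))

# For random S-sparse vectors c, ||Phi c||^2 should concentrate
# around (K/N) * ||c||^2.
ratios = []
for _ in range(1000):
    c = np.zeros(N)
    support = rng.choice(N, size=S, replace=False)
    c[support] = rng.normal(size=S)
    ratios.append(np.sum((Phi @ c) ** 2) / ((K / N) * np.sum(c ** 2)))

# delta_hat is a (loose) empirical stand-in for the RIP constant delta_S.
delta_hat = max(1.0 - min(ratios), max(ratios) - 1.0)
```

For these dimensions the empirical deviation stays well below 1, consistent with K ≥ C S log(N/S).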
9 The discrete CS problem
- Sampling procedure: y = Φf + w = ΦΨ^T x + w.
- f: the N × 1 discrete-time sparse signal.
- Ψ: the N × N sparsity basis.
- x: the sparse representation of f in Ψ.
- Φ: the K × N measurement matrix.
- w: the K × 1 noise vector.
- y: the K × 1 vector of compressed samples.
- Gaussian random measurement matrices Φ are universal.
10 Recovery
- The noiseless case with Ψ = I.
- Sampling: y_k = ⟨f, φ_k⟩, k = 1, 2, ..., K.
- Select the f with the sparsest representation x that leads to the same y_k:
  ℓ0-norm minimization: min ||x||_ℓ0 subject to ⟨φ_k, Ψ^T x⟩ = y_k for all k.
- The problem is NP-complete.
11 Recovery...
- Recovery by ℓ2-norm minimization: min ||x||_ℓ2 subject to ⟨φ_k, Ψ^T x⟩ = y_k for all k.
12 Least-squares recovery...
- The ℓ2 solution does not result in a sparse solution.
13 Recovery...
- Recovery by ℓ1-norm minimization: min ||x||_ℓ1 subject to ⟨φ_k, Ψ^T x⟩ = y_k for all k.
- A convex optimization problem.
- Can be solved by a linear program with complexity O(N³).
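The linear program behind ℓ1 recovery is short to write out. The sketch below (an illustration under assumed dimensions, using `scipy.optimize.linprog` with the standard split of |x| into auxiliary variables t) recovers a 3-sparse signal from 30 Gaussian measurements with Ψ = I.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, K, S = 60, 30, 3

# S-sparse ground truth and Gaussian measurements (Psi = I here).
x_true = np.zeros(N)
x_true[rng.choice(N, S, replace=False)] = rng.normal(size=S) + 2.0
Phi = rng.normal(size=(K, N)) / np.sqrt(K)
y = Phi @ x_true

# Recast  min ||x||_1  s.t.  Phi x = y  as an LP over (x, t):
#   minimize sum(t)  subject to  -t <= x <= t,  Phi x = y.
c = np.concatenate([np.zeros(N), np.ones(N)])
A_ub = np.block([[np.eye(N), -np.eye(N)],
                 [-np.eye(N), -np.eye(N)]])
b_ub = np.zeros(2 * N)
A_eq = np.hstack([Phi, np.zeros((K, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)
x_hat = res.x[:N]  # near-exact recovery for these dimensions
```

With K = 30 measurements for a 3-sparse signal of length 60, the LP recovers x_true to solver precision.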
14 Recovery...
- The noisy case: min ||x||_ℓ1 subject to ||Φ′x − y||_ℓ2 ≤ γ, where Φ′ = ΦΨ^T and γ is a bound on the noise vector's energy.
- The solution x* to the above problem obeys
  ||x* − x||_ℓ2 ≤ C_{1,S} γ + C_{2,S} ||x − x_S||_ℓ1 / √S,
  where x_S keeps the S largest entries of x.
- Other recovery algorithms:
  - Convex optimization: Dantzig selector, ...
  - Iterative greedy algorithms: matching pursuit (MP), OMP, ...
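OMP, mentioned above as a greedy alternative, admits a compact implementation. The sketch below is an illustration (not the slides' code): at each step it picks the column most correlated with the residual, then re-fits all selected columns by least squares.

```python
import numpy as np

def omp(Phi, y, S):
    """Orthogonal matching pursuit: greedily select the column most
    correlated with the current residual, then re-fit the selected
    columns to y by least squares."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(S):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat
```

For well-conditioned Gaussian Φ and sufficiently strong coefficients, OMP typically recovers the support exactly in S steps.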
15 Recovery...
- Empirical risk minimization based recovery:
  ˆf_K = arg min_{ˆf ∈ F(B)} { ˆr(ˆf) + c(ˆf) log 2 / (εK) }.   (1)
- F(B) ≡ {f : ||f||² ≤ N B²}, and ε = 1 / (50 (B + σ)²).
- c(ˆf): a nonnegative number assigned to a candidate signal ˆf.
- ˆr(ˆf) ≡ (1/K) Σ_{j=1}^K (y_j − φ_j ˆf)²: the empirical risk.
16 Recovery...
- ˆf_K in (1) satisfies
  E{ ||ˆf_K − f||² / N } ≤ C_1 min_{ˆf ∈ F(B)} { ||ˆf − f||² / N + (c(ˆf) log 2 + 4) / (εK) }.
- For a sparse signal f ∈ F_s(B, S) we have
  sup_{f ∈ F_s(B,S)} E{ ||ˆf_K − f||² / N } ≤ C_1 C_2 ( K / (S log N) )^{−1}.   (2)
- F_s(B, S) ≡ {f : ||f||² ≤ N B², ||f||_ℓ0 ≤ S}.
17 Applications...
- Compressive radar.
- Spectrum sensing in cognitive radios.
- Sparse channel estimation for communication systems.
- Analog-to-information converters to replace ADCs.
18 AIC
- In most of the CS literature: the signal is first sampled at the Nyquist rate, and then the measurement matrix is applied.
- We can avoid high-speed sampling.
- Xampling: an extension of the Shannon-Nyquist theorem to any subspace model. If the signal lives in a certain subspace, it can be sampled and recovered using a simple filtering operation and a digital correction.
- Non-uniform sampler (NUS).
- Random Modulation Pre-Integration (RMPI): a system of parallel branches of mixers and integrators (BMIs); sampling with a random sequence of ±1. It is easier to change polarity at a high rate than to digitize at a high rate.
- The RMPI is less complex.
19 Xampling Motivation
- There is more structure in some signals than just living in a subspace.
- Multi-band spectrum, with unknown carrier frequencies.
(Picture from Xampling web-site, Eldar and Michaeli)
20 Xampling Motivation
- Multi-path: a small number of pulses from different targets, with unknown TOAs.
- Estimate distances to the targets and Dopplers (velocities).
- Multi-path in radar or multi-path fading in communications.
21 Xampling Motivation
- One would have to sample at the Nyquist rate of the pulse.
- However, the pulse shape is known; the unknowns are the TOAs, i.e., just a few unknowns.
- The model for such signals is a Union of Subspaces (US).
- Normally we look at the sum of subspaces.
- Known: the signal lives in a low-dimensional subspace.
- Unknown: which subspace, out of the union of many possible subspaces.
22 US
- Subspace versus union of subspaces.
- In the multi-band case we could sample at a low rate, but the carriers are unknown, so normally Nyquist-rate sampling is used.
(Picture from Xampling web-site, Eldar and Michaeli)
23 Xampling Idea
- If the pulses are sparse and we sample at a low rate, the probability of sampling a zero is high.
- Solution: alias the data before sampling, i.e., pre-processing in the analog domain.
- In the simplest case use a low-pass filter; optimally, multiply by sinusoids and then low-pass.
- Then each sample contains information about the signal.
- Pulse locations can then be found from the samples of the mixed data in the digital domain by recovery algorithms.
24 Xampler Block Scheme
Xampler block scheme. (Picture from Xampling web-site, Eldar and Michaeli)
- In the analog domain: sample so that the data contains combinations from all of the possible subspaces; alias with sinusoids, or with just one periodic signal, which already contains many sinusoids.
- In the digital domain:
  1) Detection, to identify the subspaces involved: use classical array processing methods or compressed sensing methods.
  2) Reconstruction, which is a solved problem as soon as the subspace is known.
25 Model
- Data model: x_n = Σ_{k=1}^K s_k e^{jω_k n}.
- s_k and ω_k (1 ≤ k ≤ K) are the unknown amplitude and frequency of the k-th sinusoid, respectively.
- In matrix-vector form: x = As.
- x = [x_0, x_1, ..., x_{N−1}]^T ∈ C^{N×1} is a linear combination of K sinusoids, K ≪ N.
- s = [s_1, s_2, ..., s_K]^T ∈ C^{K×1}.
- A = [a(ω_1), a(ω_2), ..., a(ω_K)] ∈ C^{N×K}.
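The model above can be set up in a few lines. This is a sketch with assumed example frequencies and amplitudes (not values from the slides); a(ω) is the usual complex-exponential steering vector.

```python
import numpy as np

def steering(omega, N):
    # a(omega) = [1, e^{j omega}, ..., e^{j omega (N-1)}]^T
    return np.exp(1j * omega * np.arange(N))

rng = np.random.default_rng(3)
N, Ks = 256, 3                      # Ks sinusoids, Ks << N
omegas = np.array([0.3, 1.1, 2.4])  # example frequencies (rad/sample)
s = rng.normal(size=Ks) + 1j * rng.normal(size=Ks)
A = np.stack([steering(w, N) for w in omegas], axis=1)  # N x Ks matrix
x = A @ s                           # x_n = sum_k s_k e^{j omega_k n}
```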
26 Vector of measurements
- Measurements: y = Φx + w.
- Φ ∈ R^{M×N} is the measurement matrix.
- w ∈ C^{M×1} is the measurement noise with circularly symmetric complex normal distribution N_C(0, σ²I).
- Elements of Φ are drawn independently from the Gaussian distribution N(0, 1/M).
- Idea: 1) minimize the estimation error, i.e., x̂ = arg min_x ||y − Φx||²_2; 2) match the estimated signal to the sparsity model. Iterate.
27 Nested Least Squares
Algorithm 1 (from ICASSP'11 paper, Shaghaghi and Vorobyov):
  Initialize: x̂_0 = 0, i = 1
  repeat
    x_e ← x̂_{i−1} + λ Φ^T (y − Φ x̂_{i−1})
    Ω̂ ← root-music(x_e, K)
    Â ← [a(ω̂_1) a(ω̂_2) ... a(ω̂_K)]
    B̂ ← Φ Â
    ŝ_i ← (B̂^H B̂)^{−1} B̂^H y
    x̂_i ← Â ŝ_i
    i ← i + 1
  until halting criterion true
- An alternative is Spectral Iterative Hard Thresholding (SIHT) by Duarte and Baraniuk.
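The loop above can be sketched in Python. This is an illustration, not the authors' code: root-MUSIC is replaced by a crude stand-in (peak picking on an oversampled FFT), which only works for well-separated frequencies; λ and the oversampling factor are assumed values.

```python
import numpy as np

def estimate_freqs(x, Ks, oversample=8):
    """Stand-in for root-MUSIC: pick Ks distinct peaks of an
    oversampled FFT magnitude, nulling a neighborhood around each
    detected peak so the next pick is a different sinusoid."""
    L = oversample * len(x)
    spec = np.abs(np.fft.fft(x, L))
    freqs = []
    for _ in range(Ks):
        i = int(np.argmax(spec))
        freqs.append(2 * np.pi * i / L)
        spec[max(0, i - 2 * oversample):i + 2 * oversample] = 0.0
    return np.sort(np.array(freqs))

def nested_ls(y, Phi, Ks, lam=1.0, iters=10):
    """Transcription of Algorithm 1, with estimate_freqs in place
    of root-MUSIC."""
    M, N = Phi.shape
    x_hat = np.zeros(N, dtype=complex)
    for _ in range(iters):
        # Gradient step toward the data ...
        x_e = x_hat + lam * Phi.conj().T @ (y - Phi @ x_hat)
        # ... then project onto the Ks-sinusoid model:
        # estimate frequencies, rebuild A, least-squares fit s.
        omegas = estimate_freqs(x_e, Ks)
        A = np.exp(1j * np.outer(np.arange(N), omegas))
        B = Phi @ A
        s = np.linalg.lstsq(B, y, rcond=None)[0]
        x_hat = A @ s
    return x_hat
```

The key design point survives the substitution: the outer least squares is on the K estimated sinusoids, not on all N coefficients, which is what enforces the sparsity model at every iteration.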
28 Simulation results
Figure: NMSE vs. iteration number for λ = 0.4, 0.5, ..., 1.0, compared with SIHT via root-MUSIC and the NCRB.
- K = 20, N = 1024, M = 300, and the noise standard deviation is 2.
- NMSE = 10 log( E{||x − x̂||²_2} / E{||x||²_2} ), and NCRB = 10 log( CRB / E{||x||²_2} ).
(From ICASSP'11 paper, Shaghaghi and Vorobyov)
29 Analog signal model
- Analog signal representation: f(t) = Σ_{n=1}^N x_n ψ_n(t) = x^T Ψ(t).
- Ψ(t) ≡ (ψ_1(t), ..., ψ_N(t))^T: the sparsity basis, with basis functions {ψ_n(t)}_{n=1}^N defined over the period t ∈ [0, T].
- Sample collection: y_k = ∫_0^T f(t) φ_k(t) dt, k = 1, ..., K.
- Φ(t) ≡ (φ_1(t), ..., φ_K(t))^T: the measurement operator.
- φ_k(t): random ±1 chip sequence with chip period T_c = T/N_c, N_c ≥ N.
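This sample collection is easy to simulate. The sketch below (an illustration with an assumed example signal f and a fine time grid approximating the integrals) builds K random ±1 chip waveforms and computes y_k ≈ ∫ f(t) φ_k(t) dt.

```python
import numpy as np

rng = np.random.default_rng(4)
K, N_c, T = 16, 256, 1.0
grid = 4096                               # points discretizing [0, T]
t = (np.arange(grid) + 0.5) * T / grid
dt = T / grid

# Random +/-1 chip sequences, piecewise constant with period T_c = T/N_c.
chips = rng.choice([-1.0, 1.0], size=(K, N_c))
chip_of_t = np.minimum((t * N_c / T).astype(int), N_c - 1)
phi = chips[:, chip_of_t]                 # K x grid: phi_k(t) on the grid

f = np.sin(2 * np.pi * 3 * t / T)         # an example signal on [0, T]
y = (phi * f).sum(axis=1) * dt            # y_k ~ integral of f(t) phi_k(t) dt
```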
30 Representing as a discrete model
- Discrete CS counterpart: y = Φ′x + w.
- Φ′: the K × N matrix with its (k, n)-th entry given as [Φ′]_{k,n} = ∫_0^T ψ_n(t) φ_k(t) dt.
- The entries of the K × N_c measurement matrix Φ: [Φ]_{k,n} = ∫_{(n−1)T_c}^{nT_c} φ_k(t) dt.
- The entries of the N × N_c sparsity basis Ψ: [Ψ]_{m,n} = ∫_{(n−1)T_c}^{nT_c} ψ_m(t) dt.
31 RMPI structure for AIC
Figure: the RMPI structure. The input f(t) is multiplied by each chip sequence φ_k(t) and integrated over [0, T], producing the samples y_1, ..., y_K.
The AIC with parallel branches of mixers and integrators.
32 Segmented CS
- Measurement matrix Φ with K rows.
- Split the integration period into M sub-periods: M sub-samples instead of one sample per branch.
- A K × M matrix of sub-samples:
  Y = [ y_{1,1} y_{1,2} ... y_{1,M}
        y_{2,1} y_{2,2} ... y_{2,M}
        ...
        y_{K,1} y_{K,2} ... y_{K,M} ].
- Adding up the rows results in the original K samples.
- Adding the M sub-samples of the main diagonal gives the (K+1)-st sample; the sub-samples of the second diagonal give the (K+2)-nd sample; and so on.
- We wind up with K_e = K + K_a samples.
(Taheri and Vorobyov, Segmented CS for AIC, IEEE TSP, Feb. 2011)
33 Segmented CS...
Figure: sub-sample selection diagram (K_a = K). The samples y_{K+1}, ..., y_{2K} are formed by summing the wrapped diagonals of the K × M matrix of sub-samples y_{k,m}.
34 The extended measurement matrix
- The sampling procedure can be explained in terms of the extended measurement matrix Φ_e of size K_e × N:
  Φ_e = [ Φ ; Φ_1 ],
  where Φ has rows (φ_{k,1}, φ_{k,2}, ..., φ_{k,M}), k = 1, ..., K, and Φ_1 has rows (φ_{s,1}, φ_{π_2(s),2}, ..., φ_{π_M(s),M}), s = 1, ..., K_a.
- φ_{k,j}, j = 1, ..., M, are vectors whose concatenation results in φ_k = (φ_{k,1}, ..., φ_{k,M}).
- π_s(k) = ((s + k − 2) mod K) + 1, s, k = 1, ..., K, are the permutations applied to the different columns of Φ.
- Is this a valid CS matrix?
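The diagonal-summing rule can be written down directly from the π_s formula above. A sketch (index convention is my reading of the slide: the s-th extra sample takes sub-sample π_j(s) = ((j + s − 2) mod K) + 1 from column j):

```python
import numpy as np

def extended_samples(Y, Ka):
    """Given the K x M sub-sample matrix Y, return the K_e = K + Ka
    samples of the segmented scheme: row sums give the original K
    samples, and the s-th extra sample sums a wrapped diagonal of Y."""
    K, M = Y.shape
    originals = Y.sum(axis=1)
    extras = np.array([
        sum(Y[(j + s - 2) % K, j - 1] for j in range(1, M + 1))
        for s in range(1, Ka + 1)
    ])
    return np.concatenate([originals, extras])
```

For s = 1 this picks the main diagonal (wrapping around after row K), for s = 2 the next diagonal, and so on, matching the selection diagram on the previous slide.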
35 Sketch of the proof
Lemma: For T ⊂ {1, 2, ..., N} of cardinality S, (Φ_e)_T satisfies the RIP with overwhelming probability if
  min{K, K_a + M − 1} ≥ (K + K_a)/2.
- We partition the rows of (Φ_e)_T into two independent subsets. Then, we use existing results.
36 Sketch of the proof...
- Φ_1 uses the first K_a + M − 1 rows of Φ, or all of it.
- If min{K, K_a + M − 1} ≥ (K + K_a)/2, then (Φ_e)_T can be partitioned into two sets of size ⌈(K + K_a)/2⌉ and ⌊(K + K_a)/2⌋.
- These subsets have independent entries.
- If both subsets satisfy the RIP, so does (Φ_e)_T, with probability at least 1 − 4(12/δ_S)^S e^{−C_0 (K + K_a)/2}.
37 Sketch of the proof...
- Union bounding over all subsets (Φ_e)_T, we get the main result:
  Pr{Φ_e satisfies RIP} ≥ 1 − 4 e^{−C_4 (K + K_a)/2}  for  S ≤ C_3 (K + K_a) / (2 log(N/S)).
38 Analytical results for empirical risk minimization
Theorem: Let ε be chosen as ε = 1/(60 (B + σ)²). Then the signal reconstruction ˆf_{K_e} given by
  ˆf_{K_e} = arg min_{ˆf ∈ F(B)} { ˆr(ˆf) + c(ˆf) log 2 / (εK_e) }   (3)
satisfies the following inequality:
  E{ ||ˆf_{K_e} − f||² / N } ≤ C_{1e} min_{ˆf ∈ F(B)} { ||ˆf − f||² / N + (c(ˆf) log 2 + 4) / (εK_e) }.
39 Analytical results...
Theorem: For a sparse signal f ∈ F_s(B, S) and the corresponding reconstructed signal ˆf_{K_e} obtained according to (3), there exists a constant C_{2e} = C_{2e}(B, σ) > 0 such that
  sup_{f ∈ F_s(B,S)} E{ ||ˆf_{K_e} − f||² / N } ≤ C_{1e} C_{2e} ( K_e / (S log N) )^{−1}.   (4)
40 Example
- Two sets of samples are collected:
  (i) with a K × N measurement matrix with all i.i.d. (Bernoulli) elements;
  (ii) with a K_e × N extended measurement matrix.
- Parameters are chosen as K_a = K and M = 8, so K_e = 2K.
- Assuming the same ε in (2) and (4), C_{2e} = C_2.
- The performance comparison then comes down to comparing C_{1e} with 2C_1.
41 Simulation results
- Three sampling schemes are compared:
  - the original K × N measurement matrix,
  - the extended K_e × N measurement matrix,
  - an enlarged K_e × N matrix with i.i.d. entries.
- Bernoulli and Gaussian measurement matrices are considered, with different sparsity bases.
- Empirical risk minimization and ℓ1-norm minimization are used for recovery.
- Results are shown in terms of MSE vs. K_a/K; the MSE is averaged over 1000 simulation runs.
42 Scenario 1
- Time-sparse signal with ℓ1-norm minimization.
- N = 128 basis functions:
  ψ_n(t) = √(N/T) for t ∈ [(n−1)T/N, nT/N], and 0 otherwise, n = 1, ..., N.
- Sparsity level: 3.
- Original measurements: K = 16.
- Additional measurements: K_a = 0 to 5K.
- Number of segments: M = 8.
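With the √(N/T) scaling these box functions form an orthonormal basis on [0, T], which a quick numerical check confirms (a sketch on a time grid chosen as a multiple of N so grid cells align with the intervals):

```python
import numpy as np

N, T = 128, 1.0
grid = 4096                                # multiple of N: cells align
t = (np.arange(grid) + 0.5) * T / grid
dt = T / grid

def psi(n, t):
    # sqrt(N/T) on [(n-1)T/N, nT/N), zero elsewhere
    return np.sqrt(N / T) * ((t >= (n - 1) * T / N) & (t < n * T / N))

# Numerical Gram matrix of a few basis functions: should be the identity.
idx = [1, 2, 64, 128]
G = np.array([[np.sum(psi(a, t) * psi(b, t)) * dt for b in idx] for a in idx])
```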
43 Scenario 1 (Gaussian measurement matrix)
Figure: MSE vs. K_a/K at SNR = 5, 15, and 25 dB for the original Gaussian matrix, the extended matrix, and the enlarged matrix.
44 Scenario 1 (Bernoulli measurement matrix)
Figure: MSE vs. K_a/K at SNR = 5, 15, and 25 dB for the original Bernoulli matrix, the extended matrix, and the enlarged matrix.
45 Scenario 2
- The same time-sparse signal.
- The number of original samples: K = 24.
- Empirical risk minimization based recovery.
46 Scenario 2 (Gaussian measurement matrix)
Figure: MSE vs. K_a/K at SNR = 5, 15, and 25 dB for the original Gaussian matrix, the extended matrix, and the enlarged matrix.
47 Scenario 2 (Bernoulli measurement matrix)
Figure: MSE vs. K_a/K at SNR = 5, 15, and 25 dB for the original Bernoulli matrix, the extended matrix, and the enlarged matrix.
48 Scenario 3
- OFDM signal with ℓ1-norm minimization, QPSK modulation.
- N = 128 basis functions:
  ψ_n(t) = cos( (n−1) (2π/T) t ), t ∈ [0, T], n = 1, ..., N.
- Sparsity level: 3.
- Number of chips per duration: N_c = 256.
- Original measurements: K = 16.
- Additional measurements: K_a = 0 to 5K.
- Number of segments: M = 8.
49 Scenario 3 (Gaussian measurement matrix)
Figure: MSE vs. K_a/K at SNR = 5, 15, and 25 dB for the original Gaussian matrix, the extended matrix, and the enlarged matrix.
50 Scenario 3 (Bernoulli measurement matrix)
Figure: MSE vs. K_a/K at SNR = 5, 15, and 25 dB for the original Bernoulli matrix, the extended matrix, and the enlarged matrix.
51 Scenario 4
- A case where the number of original BMIs is insufficient.
- Time-sparse signals as in Scenario 1.
- N = 128, S = 5, K = 9, and M = 8.
- Performance merits: the percentage of successful support recovery, and the average MSE for the cases of successful support recovery.
52 Scenario 4...
Table: percentage that the positions of the nonzero signal values are correctly identified, and the corresponding average MSE, in the noiseless case and at SNR = 25 dB, comparing K_e i.i.d. BMIs with the segmented scheme as K_a grows.
53 Conclusion
- Three basic approaches to AIC:
  - Xampling, also known as the modulated wideband converter (MWC): may be energy consuming, since it needs to multiply the signal by a number of sinusoids in the analog domain.
  - NUS (Wakin, Romberg, Candes, et al.): not covered here, but non-uniform sampling otherwise has a very long history.
  - RMPI, also known as the random demodulator (RD): some implementation issues are more involved than for the MWC.
- Extensions of the RD: polyphase RD (Laska, Slavinsky, Baraniuk).
54 Conclusion
- The segmented CS method reuses the samples already obtained.
- It results in an improvement in recovered signal quality without added complexity in the sampler.
- Extensions/improvements: partial segmented CS, random circulant orthogonal matrix based analog compressed sensing.
Multidimensional Sparse Fourier Transform Based on the Fourier Projection-Slice Theorem Shaogang Wang, Student Member, IEEE, Vishal M. Patel, Senior Member, IEEE, and Athina Petropulu, Fellow, IEEE 2 3
More informationSolving Underdetermined Linear Equations and Overdetermined Quadratic Equations (using Convex Programming)
Solving Underdetermined Linear Equations and Overdetermined Quadratic Equations (using Convex Programming) Justin Romberg Georgia Tech, ECE Caltech ROM-GR Workshop June 7, 2013 Pasadena, California Linear
More informationRandom projections. 1 Introduction. 2 Dimensionality reduction. Lecture notes 5 February 29, 2016
Lecture notes 5 February 9, 016 1 Introduction Random projections Random projections are a useful tool in the analysis and processing of high-dimensional data. We will analyze two applications that use
More informationExact Signal Recovery from Sparsely Corrupted Measurements through the Pursuit of Justice
Exact Signal Recovery from Sparsely Corrupted Measurements through the Pursuit of Justice Jason N. Laska, Mark A. Davenport, Richard G. Baraniuk Department of Electrical and Computer Engineering Rice University
More informationAn Introduction to Sparse Approximation
An Introduction to Sparse Approximation Anna C. Gilbert Department of Mathematics University of Michigan Basic image/signal/data compression: transform coding Approximate signals sparsely Compress images,
More informationSparse Optimization Lecture: Sparse Recovery Guarantees
Those who complete this lecture will know Sparse Optimization Lecture: Sparse Recovery Guarantees Sparse Optimization Lecture: Sparse Recovery Guarantees Instructor: Wotao Yin Department of Mathematics,
More informationSparse Legendre expansions via l 1 minimization
Sparse Legendre expansions via l 1 minimization Rachel Ward, Courant Institute, NYU Joint work with Holger Rauhut, Hausdorff Center for Mathematics, Bonn, Germany. June 8, 2010 Outline Sparse recovery
More informationPerformance Analysis for Sparse Support Recovery
Performance Analysis for Sparse Support Recovery Gongguo Tang and Arye Nehorai ESE, Washington University April 21st 2009 Gongguo Tang and Arye Nehorai (Institute) Performance Analysis for Sparse Support
More informationStochastic geometry and random matrix theory in CS
Stochastic geometry and random matrix theory in CS IPAM: numerical methods for continuous optimization University of Edinburgh Joint with Bah, Blanchard, Cartis, and Donoho Encoder Decoder pair - Encoder/Decoder
More informationCompressive Wideband Power Spectrum Estimation
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 60, NO. 9, SEPTEMBER 2012 4775 Compressive Wideband Power Spectrum Estimation Dyonisius Dony Ariananda, Student Member, IEEE, andgeertleus, Fellow, IEEE Abstract
More informationCompressed Sensing and Affine Rank Minimization Under Restricted Isometry
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 61, NO. 13, JULY 1, 2013 3279 Compressed Sensing Affine Rank Minimization Under Restricted Isometry T. Tony Cai Anru Zhang Abstract This paper establishes new
More informationIntroduction to compressive sampling
Introduction to compressive sampling Sparsity and the equation Ax = y Emanuele Grossi DAEIMI, Università degli Studi di Cassino e-mail: e.grossi@unicas.it Gorini 2010, Pistoia Outline 1 Introduction Traditional
More informationGreedy Signal Recovery and Uniform Uncertainty Principles
Greedy Signal Recovery and Uniform Uncertainty Principles SPIE - IE 2008 Deanna Needell Joint work with Roman Vershynin UC Davis, January 2008 Greedy Signal Recovery and Uniform Uncertainty Principles
More informationLASSO BASED PERFORMANCE EVALUATION FOR SPARSE ONE-DIMENSIONAL RADAR PROBLEM UN- DER RANDOM SUB-SAMPLING AND GAUSSIAN NOISE
Progress In Electromagnetics Research, Vol. 14, 559 578, 013 LASSO BASED PERFORMANCE EVALUATION FOR SPARSE ONE-DIMENSIONAL RADAR PROBLEM UN- DER RANDOM SUB-SAMPLING AND GAUSSIAN NOISE Yin Xiang *, Bingchen
More informationStability of LS-CS-residual and modified-cs for sparse signal sequence reconstruction
Stability of LS-CS-residual and modified-cs for sparse signal sequence reconstruction Namrata Vaswani ECE Dept., Iowa State University, Ames, IA 50011, USA, Email: namrata@iastate.edu Abstract In this
More informationA Review of Sparsity-Based Methods for Analysing Radar Returns from Helicopter Rotor Blades
A Review of Sparsity-Based Methods for Analysing Radar Returns from Helicopter Rotor Blades Ngoc Hung Nguyen 1, Hai-Tan Tran 2, Kutluyıl Doğançay 1 and Rocco Melino 2 1 University of South Australia 2
More informationGreedy Subspace Pursuit for Joint Sparse. Recovery
Greedy Subspace Pursuit for Joint Sparse 1 Recovery Kyung Su Kim, Student Member, IEEE, Sae-Young Chung, Senior Member, IEEE arxiv:1601.07087v1 [cs.it] 6 Jan 016 Abstract In this paper, we address the
More informationTHEORY OF COMPRESSIVE SENSING VIA l 1 -MINIMIZATION: A NON-RIP ANALYSIS AND EXTENSIONS
THEORY OF COMPRESSIVE SENSING VIA l 1 -MINIMIZATION: A NON-RIP ANALYSIS AND EXTENSIONS YIN ZHANG Abstract. Compressive sensing (CS) is an emerging methodology in computational signal processing that has
More informationBayesian Methods for Sparse Signal Recovery
Bayesian Methods for Sparse Signal Recovery Bhaskar D Rao 1 University of California, San Diego 1 Thanks to David Wipf, Jason Palmer, Zhilin Zhang and Ritwik Giri Motivation Motivation Sparse Signal Recovery
More informationStructured Compressed Sensing: From Theory to Applications
Structured Compressed Sensing: From Theory to Applications 1 Marco F. Duarte Member, IEEE, and Yonina C. Eldar, Senior Member, IEEE Abstract arxiv:1106.6224v2 [cs.it] 28 Jul 2011 Compressed sensing (CS)
More informationFast Hard Thresholding with Nesterov s Gradient Method
Fast Hard Thresholding with Nesterov s Gradient Method Volkan Cevher Idiap Research Institute Ecole Polytechnique Federale de ausanne volkan.cevher@epfl.ch Sina Jafarpour Department of Computer Science
More informationUniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit
Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit arxiv:0707.4203v2 [math.na] 14 Aug 2007 Deanna Needell Department of Mathematics University of California,
More informationStopping Condition for Greedy Block Sparse Signal Recovery
Stopping Condition for Greedy Block Sparse Signal Recovery Yu Luo, Ronggui Xie, Huarui Yin, and Weidong Wang Department of Electronics Engineering and Information Science, University of Science and Technology
More informationExponential decay of reconstruction error from binary measurements of sparse signals
Exponential decay of reconstruction error from binary measurements of sparse signals Deanna Needell Joint work with R. Baraniuk, S. Foucart, Y. Plan, and M. Wootters Outline Introduction Mathematical Formulation
More informationEUSIPCO
EUSIPCO 013 1569746769 SUBSET PURSUIT FOR ANALYSIS DICTIONARY LEARNING Ye Zhang 1,, Haolong Wang 1, Tenglong Yu 1, Wenwu Wang 1 Department of Electronic and Information Engineering, Nanchang University,
More informationToeplitz Compressed Sensing Matrices with. Applications to Sparse Channel Estimation
Toeplitz Compressed Sensing Matrices with 1 Applications to Sparse Channel Estimation Jarvis Haupt, Waheed U. Bajwa, Gil Raz, and Robert Nowak Abstract Compressed sensing CS has recently emerged as a powerful
More informationOn Rank Awareness, Thresholding, and MUSIC for Joint Sparse Recovery
On Rank Awareness, Thresholding, and MUSIC for Joint Sparse Recovery Jeffrey D. Blanchard a,1,, Caleb Leedy a,2, Yimin Wu a,2 a Department of Mathematics and Statistics, Grinnell College, Grinnell, IA
More informationMIT 9.520/6.860, Fall 2017 Statistical Learning Theory and Applications. Class 19: Data Representation by Design
MIT 9.520/6.860, Fall 2017 Statistical Learning Theory and Applications Class 19: Data Representation by Design What is data representation? Let X be a data-space X M (M) F (M) X A data representation
More informationCompressed Sensing and Sparse Recovery
ELE 538B: Sparsity, Structure and Inference Compressed Sensing and Sparse Recovery Yuxin Chen Princeton University, Spring 217 Outline Restricted isometry property (RIP) A RIPless theory Compressed sensing
More informationA Generalized Restricted Isometry Property
1 A Generalized Restricted Isometry Property Jarvis Haupt and Robert Nowak Department of Electrical and Computer Engineering, University of Wisconsin Madison University of Wisconsin Technical Report ECE-07-1
More informationRecovery of Sparse Signals Using Multiple Orthogonal Least Squares
Recovery of Sparse Signals Using Multiple Orthogonal east Squares Jian Wang and Ping i Department of Statistics and Biostatistics, Department of Computer Science Rutgers, The State University of New Jersey
More informationIEOR 265 Lecture 3 Sparse Linear Regression
IOR 65 Lecture 3 Sparse Linear Regression 1 M Bound Recall from last lecture that the reason we are interested in complexity measures of sets is because of the following result, which is known as the M
More informationRecovering overcomplete sparse representations from structured sensing
Recovering overcomplete sparse representations from structured sensing Deanna Needell Claremont McKenna College Feb. 2015 Support: Alfred P. Sloan Foundation and NSF CAREER #1348721. Joint work with Felix
More information