Quantized Iterative Hard Thresholding: Bridging 1-bit and High-Resolution Quantized Compressed Sensing
1 Quantized Iterative Hard Thresholding: Bridging 1-bit and High-Resolution Quantized Compressed Sensing. Laurent Jacques, Kévin Degraux, Christophe De Vleeschouwer, Louvain University (UCL), Louvain-la-Neuve, Belgium. SampTA 2013, Jacobs University Bremen.
2 Compressive Sampling and Quantization. Compressed sensing theory says: linearly sample a signal at a rate that is a function of its intrinsic dimensionality. Information theory and sensor designers say: okay, but I need to quantize/digitize my measurements (e.g., in an ADC)!
3 (this talk) Focus on scalar quantization: turning measurements into bits, q_i = Q[(Φx)_i] = Q[⟨φ_i, x⟩] ∈ Ω, i.e., q = Q[Φx] ∈ Ω^M, with levels Ω = {q_i ∈ R : 1 ≤ i ≤ 2^B} and thresholds T = {τ_i : 1 ≤ i ≤ 2^B + 1, τ_i ≤ τ_{i+1}} (e.g., Lloyd-Max). Important conventions: the definition of Φ is independent of B and M (e.g., Φ_ij iid N(0,1) preserves the measurement dynamic); B bits per measurement; total bit budget R = MB; no further encoding (e.g., entropic).
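The scalar quantizer above can be sketched in a few lines. This is a minimal sketch, not the talk's code: the uniform midrise design, the dynamic range `dyn`, and the unbounded outer bins are choices of this illustration (the slides also allow a Lloyd-Max design).

```python
import numpy as np

def uniform_quantizer(B, dyn=4.0):
    """Uniform midrise scalar quantizer with 2**B levels on [-dyn, dyn].

    Returns thresholds T (2**B + 1 values, outer ones infinite),
    levels Omega (bin midpoints), and Q mapping reals to levels.
    """
    alpha = 2 * dyn / 2**B                          # bin width
    T = -dyn + alpha * np.arange(2**B + 1)          # thresholds tau_1..tau_{2^B+1}
    T[0], T[-1] = -np.inf, np.inf                   # unbounded outer bins
    Omega = -dyn + alpha * (np.arange(2**B) + 0.5)  # levels = bin midpoints

    def Q(lam):
        # find the bin [tau_i, tau_{i+1}) containing lam, return its level
        idx = np.clip(np.searchsorted(T, lam, side="right") - 1, 0, 2**B - 1)
        return Omega[idx]

    return T, Omega, Q
```

Quantizing the measurement vector then reads `q = Q(Phi @ x)`, one level per measurement, i.e., B bits each and R = MB bits in total.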
4 1. First extreme: CS and high-resolution scalar quantization
5 Former solutions. Quantization acts like a noise: q = Q[Φx] = Φx + n, where n is the quantization distortion.
6 Former solutions. Quantization acts like a noise: q = Q[Φx] = Φx + n, and CS is robust to noise, e.g., with... Basis Pursuit DeNoise (BPDN): x̂ = argmin_{u ∈ R^N} ‖u‖_1 s.t. ‖Φu − q‖ ≤ ε. Iterative Hard Thresholding (IHT): x̂ = argmin_{u ∈ R^N} ‖Φu − y‖_2 s.t. ‖u‖_0 ≤ K, greedily solved by iterating a thresholded gradient descent: x^(n+1) = H_K(x^(n) + μ Φ^T (y − Φx^(n))).
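The IHT iteration above is short enough to sketch directly. A minimal sketch, assuming a Gaussian Φ_ij ~ N(0,1): the helper name `hard_threshold`, the default step `mu = 1/M`, and the fixed iteration count are choices of this illustration, not prescribed by the slide.

```python
import numpy as np

def hard_threshold(x, K):
    """H_K: keep the K largest-magnitude entries of x, zero the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-K:]
    out[keep] = x[keep]
    return out

def iht(y, Phi, K, mu=None, n_iter=200):
    """IHT: iterate x <- H_K(x + mu * Phi^T (y - Phi x)) from x = 0."""
    M, N = Phi.shape
    if mu is None:
        mu = 1.0 / M          # common step when Phi_ij ~ N(0,1) (unnormalized Phi)
    x = np.zeros(N)
    for _ in range(n_iter):
        x = hard_threshold(x + mu * Phi.T @ (y - Phi @ x), K)
    return x
```

With noiseless measurements y = Φx and enough rows, the iteration typically recovers the K-sparse x exactly; with quantized measurements q in place of y, it only recovers x up to the quantization distortion, which is the starting point of this talk.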
7 Former solutions. Quantization acts like a noise: q = Q[Φx] = Φx + n, and CS is robust (e.g., with BPDN or IHT). If ‖n‖ ≤ ε and (1/√M)Φ satisfies the RIP(δ, aK) with δ ≤ δ_0, then ‖x̂ − x‖ ≤ C ε/√M + D e(K), for some C, D > 0 and e(K) = deviation to K-sparsity, e.g., e(K) = ‖x − x_K‖_1/√K or ‖x − x_K‖_2 + ‖x − x_K‖_1/√K. [Candès, 08] (a = 2, δ_0 < √2 − 1); [Blumensath, Davies, 09] (a = 3, δ_0 < 1/√32).
8 Former solutions. Same robustness bound as above: if ‖n‖ ≤ ε and (1/√M)Φ satisfies the RIP(δ, aK) with δ ≤ δ_0, then ‖x̂ − x‖ ≤ C ε/√M + D e(K). Question: finding ε for QCS? E.g., via high-resolution bounds: ε/√M = O(α) for a uniform quantizer of bin width α, i.e., O(2^{−B}) for a B-bit quantizer.
9 2. Second extreme: CS and 1-bit scalar quantization
10 1-bit Compressed Sensing: q = sign(Φx), with q ∈ {−1, +1}^M and Φ ∈ R^{M×N} [Boufounos, Baraniuk, 08]. Oversampling in M, yet only M bits in total! But which information is inside q?
11 1-bit Compressed Sensing: computational bits matter! (Same setting: q = sign(Φx), M bits in total; which information is inside q?)
12 1-bit Compressed Sensing: q = sign(Φx), oversampling in M. Warning 1: the signal amplitude is lost! The amplitude is arbitrarily fixed, e.g., ‖x‖ = 1 or ‖Φx‖_1 = 1. Warning 2: some sensing/signal pairs are forbidden (e.g., Bernoulli sensing + canonical sparsity) [Plan, Vershynin, 11].
13 Why does it work? Take x on the unit sphere and M vectors {φ_i : 1 ≤ i ≤ M} iid Gaussian. Each 1-bit measurement records on which side of a random hyperplane x lies: ⟨φ_1, x⟩ > 0, ⟨φ_2, x⟩ > 0, ⟨φ_3, x⟩ ≤ 0, ⟨φ_4, x⟩ > 0, ⟨φ_5, x⟩ > 0, ... The consistency cell Σ_K ∩ {u : sign(Φu) = sign(Φx)} gets smaller and smaller as M increases. Lower bound on its width?
14 1-bit CS bounds. Let A(·) := sign(Φ·) with Φ ∈ R^{M×N}, Φ_ij iid N(0,1). If M = O(ε^{−1} K log N), then, w.h.p., for any two unit K-sparse vectors x and s whose codes A(x) and A(s) differ in only b bits, ‖x − s‖ ≲ ε (K + b)/K. [LJ, Laska, Boufounos, Baraniuk, 13] (b = 0); [LJ, Degraux, De Vleeschouwer, 13] (b ≥ 0). If M = O(ε^{−2} K log N), one gets a Binary Stable Embedding (BεSE): d_ang(x, s) − ε ≤ d_H(A(x), A(s)) ≤ d_ang(x, s) + ε, a RIP-like property. [LJ, Laska, Boufounos, Baraniuk, 13]; [Plan, Vershynin, 11, 12]. Lesson: recovery should be as consistent as possible.
15 Easiest 1-bit reconstruction. Fact: if M = O(ε^{−2} K log(N/K)), then (for fixed x ∈ Σ_K and all s ∈ Σ_K) |√(π/2) (1/M) ⟨sign(Φx), Φs⟩ − ⟨x, s⟩| ≤ ε w.h.p. Let x ∈ Σ_K ∩ S^{N−1} and q = sign(Φx). Compute x̂ = √(π/2) (1/M) H_K(Φ^T q) [Plan, Vershynin, 12]. Then, if the previous property holds, ‖x − x̂‖ ≤ 2ε [LJ, Degraux, De Vleeschouwer, 13]. This is related to the program argmax_{u ∈ R^N} q^T Φu s.t. ‖u‖_0 ≤ K, ‖u‖ = 1, the "PV-ℓ0" problem [Bahmani, Boufounos, Raj, 13].
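This one-step "projected back-projection" is a sketch-friendly estimator. A minimal sketch under an assumption worth flagging: the scale √(π/2)/M used here makes the estimate asymptotically unbiased for unit-norm x and Gaussian Φ (since E[(1/M) Φ^T sign(Φx)] = √(2/π) x), which is how this illustration reads the slide's constant.

```python
import numpy as np

def one_step_1bit(q, Phi, K):
    """x_hat = sqrt(pi/2)/M * H_K(Phi^T q): one-step 1-bit CS estimate."""
    M = Phi.shape[0]
    z = Phi.T @ q                      # back-projection of the sign measurements
    out = np.zeros_like(z)
    keep = np.argsort(np.abs(z))[-K:]  # H_K: keep K largest-magnitude entries
    out[keep] = z[keep]
    return np.sqrt(np.pi / 2) / M * out
```

No iteration is needed: a single correlation with the measurement matrix, followed by hard thresholding and rescaling, already recovers the direction of x up to the O(ε) error stated above.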
16 Binary Iterative Hard Thresholding (BIHT). Idea: greedily forcing consistency while forcing sparsity. Inconsistency objective: J(x) = Σ_{j=1}^M [q_j ⟨φ_j, x⟩]_−, with [λ]_− := (|λ| − λ)/2 the magnitude of the negative part, or equivalently J(x) = ‖[diag(q) Φx]_−‖_1: each measurement whose sign disagrees with q_j contributes a positive penalty, and J(x) = 0 iff x is consistent. (Figure: hyperplanes φ_j, φ_k separating the consistency region J(x) = 0 from J(x) ≠ 0.)
17 Binary Iterative Hard Thresholding (BIHT). Objective: argmin_u J(u) s.t. ‖u‖_0 ≤ K, with J(x) = ‖[diag(q) Φx]_−‖_1 as above. Given q = A(x) and K, set l = 0, x^0 = 0, and iterate: a^{l+1} = x^l + (τ/2) Φ^T (q − A(x^l)) (gradient step towards consistency; τ > 0 controls the descent), then x^{l+1} = H_K(a^{l+1}), l ← l + 1 (projection onto the K-sparse signal set).
18 3. Bridging 1-bit & B-bit CS?
19 Bridging 1-bit & B-bit CS? A B-bit quantizer is defined by its thresholds: q_i = Q[λ] iff λ ∈ R_i = [τ_i, τ_{i+1}), i.e., iff sign(λ − τ_i) = +1 and sign(λ − τ_{i+1}) = −1. Can we combine multiple thresholds in 1-bit CS?
20 Bridging 1-bit & B-bit CS? Given T = {τ_j} and Ω = {q_j} (|T| = 2^B + 1 = |Ω| + 1), let's define J(ν, λ) = Σ_{j=2}^{2^B} w_j [sign(λ − τ_j)(ν − τ_j)]_−, with w_j = q_j − q_{j−1} and [t]_− := (|t| − t)/2 the magnitude of the negative part. Remark: for a symmetric Q, Q(λ) = ½ Σ_{j=2}^{2^B} w_j sign(λ − τ_j).
21 Bridging 1-bit & B-bit CS? Illustration (w_j = 1): for λ ∈ [τ_{j−1}, τ_j) and ν ∈ [τ_j, τ_{j+1}), only the threshold τ_j separates them, and J(ν, λ) = [sign(λ − τ_j)(ν − τ_j)]_− = (ν − τ_j): a delocalized BIHT one-sided norm.
22 Bridging 1-bit & B-bit CS? Illustration (w_j = 1): for λ ∈ [τ_{j−1}, τ_j) and ν ∈ [τ_{j+1}, τ_{j+2}), two thresholds separate them and J(ν, λ) = (ν − τ_j) + (ν − τ_{j+1}).
23 Bridging 1-bit & B-bit CS? (Figure: for w_j = 1, the profile ν ↦ J(ν, λ) is piecewise linear, gaining one slope unit at each threshold crossed beyond the bin of λ: (ν − τ_j), then (ν − τ_j) + (ν − τ_{j+1}), etc.)
24 Bridging 1-bit & B-bit CS? Illustration: with more and more bins, J(ν, λ) approaches the quadratic ½(λ − ν)².
25 Bridging 1-bit & B-bit CS? For u, v ∈ R^M, extend J component-wise: J(u, v) := Σ_{k=1}^M J(u_k, v_k). Remarks: J is convex in its first argument. For B = 1 (j = 2 only): J(u, v) ∝ ‖[sign(v) ⊙ u]_−‖_1, the one-sided ℓ1 1-bit energy. For B ≫ 1: J(ν, λ) ≃ ½(ν − λ)², so J(u, v) ≃ ½‖u − v‖², the quadratic energy.
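The two limiting regimes of J claimed above can be checked numerically. A minimal sketch of the multi-level energy from slide 20, with [t]_− implemented as max(−t, 0) and the thresholds passed as the finite inner values τ_2, ..., τ_{2^B}; the function name `make_J` is an assumption of this illustration.

```python
import numpy as np

def make_J(T_inner, Omega):
    """Multi-level one-sided energy J(nu, lam).

    T_inner: finite inner thresholds tau_2..tau_{2^B};
    Omega: levels q_1..q_{2^B}.
    J(nu, lam) = sum_j w_j * [sign(lam - tau_j) * (nu - tau_j)]_-,
    with w_j = q_j - q_{j-1} and [t]_- := max(-t, 0).
    """
    w = np.diff(Omega)                    # w_j = q_j - q_{j-1}
    def J(nu, lam):
        s = np.sign(lam - T_inner)        # one sign per inner threshold
        return np.sum(w * np.maximum(-s * (nu - T_inner), 0.0))
    return J
```

For B = 1 (single threshold at 0, levels ±1) this reduces to the one-sided sign penalty; for many uniform bins it closely tracks ½(ν − λ)², matching the two remarks above.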
26 Bridging 1-bit & B-bit CS? Define an inconsistency energy E(u) := J(Φu, q), with q = Q[Φx], so that E(x) = 0. Idea: minimize it over Σ_K (as for Iterative Hard Thresholding): min_{u ∈ R^N} E(u) s.t. ‖u‖_0 ≤ K. [Blumensath, Davies, 08]
27 Bridging 1-bit & B-bit CS? The program min_{u} E(u) s.t. ‖u‖_0 ≤ K is combinatorial, but it admits a greedy solution (as for IHT) [Blumensath, Davies, 08]: x^(n+1) = H_K[x^(n) − μ ∇E(x^(n))], x^(0) = 0, with (sub)gradient ∇E(u) = Φ^T (Q(Φu) − q) and μ > 0 (see after). This is the Quantized IHT (QIHT). For B = 1, ∇E(u) ∝ Φ^T (sign(Φu) − sign(Φx)): BIHT! For B ≫ 1, ∇E(u) ≃ Φ^T (Φu − q): IHT! All that... for this!? ;-)
28 Bridging 1-bit & B-bit CS? So, QIHT reads x^(n+1) = H_K[x^(n) + μ Φ^T (q − Q(Φx^(n)))], x^(0) = 0. Setting μ: for B = 1, μ has no impact (the iteration is scale-invariant); for high B, the IHT analysis applies [Blumensath, Davies, 08]: if Φ/√M is RIP, take μ < 1/((1 + δ_{2K}) M); for any B, heuristic: μ = (1/M)(1 − cK/M) for some c > 0, validated by extensive simulations (c = 3). (Figure: SNR curves vs. total bit budget R = MB.)
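The QIHT recursion above, with the heuristic step, can be sketched as follows. A minimal sketch, not the authors' code: the quantizer `Q` is passed in as a function, the iteration count is fixed, and `c = 3.0` follows the heuristic quoted on this slide.

```python
import numpy as np

def qiht(q, Phi, K, Q, c=3.0, n_iter=200):
    """Quantized IHT: x <- H_K(x + mu * Phi^T (q - Q(Phi x))), x0 = 0.

    Q maps real measurements to quantizer levels;
    mu = (1/M)(1 - c*K/M) is the slide's heuristic step.
    """
    M, N = Phi.shape
    mu = (1.0 / M) * (1.0 - c * K / M)
    x = np.zeros(N)
    for _ in range(n_iter):
        x = x + mu * Phi.T @ (q - Q(Phi @ x))   # (sub)gradient step on E
        out = np.zeros(N)                       # H_K projection
        keep = np.argsort(np.abs(x))[-K:]
        out[keep] = x[keep]
        x = out
    return x
```

For B = 1 with Q = sign this reduces to (a rescaled) BIHT, and for a fine quantizer it behaves like IHT on the quantized measurements, as slide 27 points out.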
29 QIHT simulations: N = 1024, K = 16, R = MB ∈ {64, 128, ..., 1280}, 100 trials, with a Lloyd-Max Gaussian quantizer. Note: the entropy of q could be computed instead of B (e.g., for further efficient coding). (Figure: SNR vs. R = MB for several B, against an oracle; almost 6 dB gain per bit.)
30 QIHT simulations (continued). QIHT stops when ‖x^(n+1) − x^(n)‖/‖x^(n)‖ < 10^{−4} or n = 500. Observations: boosting at low B; one of the easiest algorithms; almost 6 dB gain per bit.
31 QIHT simulations (continued). (Figures: running time and consistency vs. R = MB.)
32 Conclusions. The IHT framework can integrate quantization. Lots of theoretical work to do/come: IHT/QIHT convergence/optimality guarantees? New embedding results, considering Q(λ) = ½ Σ_{j=2}^{2^B} w_j sign(λ − τ_j) as a sum of 1-bit costs + thresholds? A blend of RIP and BεSE, as in (1 − δ)‖x − x′‖ − ε ≤ dist(Q(Φx), Q(Φx′)) ≤ (1 + δ)‖x − x′‖ + ε for sparse x, x′, with ε_{B,M} → 0 and δ_M → 0? Coming: application to B-bit compressive sensors (RMPI).
33 Further Reading
- (this work) L. Jacques, K. Degraux, C. De Vleeschouwer, "Quantized Iterative Hard Thresholding: Bridging 1-bit and High-Resolution Quantized Compressed Sensing", SampTA 2013.
- T. Blumensath, M. E. Davies, "Iterative thresholding for sparse approximations", Journal of Fourier Analysis and Applications, 14(5-6), 2008.
- P. T. Boufounos, R. G. Baraniuk, "1-Bit compressive sensing", Proc. Conf. Information Sciences and Systems (CISS), Princeton, NJ, March 19-21, 2008.
- P. T. Boufounos, "Greedy sparse signal reconstruction from sign measurements", Proc. Asilomar Conference on Signals, Systems and Computers, November 2009.
- Y. Plan, R. Vershynin, "Dimension reduction by random hyperplane tessellations", arXiv preprint.
- Y. Plan, R. Vershynin, "Robust 1-bit compressed sensing and sparse logistic regression: a convex programming approach", IEEE Trans. Info. Theory (also on arXiv).
- J. N. Laska, R. G. Baraniuk, "Regime change: Bit-depth versus measurement-rate in compressive sensing", IEEE Trans. Signal Processing, 60(7), 2012.
- L. Jacques, J. N. Laska, P. T. Boufounos, R. G. Baraniuk, "Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors", IEEE Trans. Info. Theory, 59(4), 2013.
- S. Bahmani, P. T. Boufounos, B. Raj, "Robust 1-bit Compressive Sensing via Gradient Support Pursuit".
More informationAnalysis of Greedy Algorithms
Analysis of Greedy Algorithms Jiahui Shen Florida State University Oct.26th Outline Introduction Regularity condition Analysis on orthogonal matching pursuit Analysis on forward-backward greedy algorithm
More informationAn iterative hard thresholding estimator for low rank matrix recovery
An iterative hard thresholding estimator for low rank matrix recovery Alexandra Carpentier - based on a joint work with Arlene K.Y. Kim Statistical Laboratory, Department of Pure Mathematics and Mathematical
More informationMachine Learning for Signal Processing Sparse and Overcomplete Representations
Machine Learning for Signal Processing Sparse and Overcomplete Representations Abelino Jimenez (slides from Bhiksha Raj and Sourish Chaudhuri) Oct 1, 217 1 So far Weights Data Basis Data Independent ICA
More informationCombining geometry and combinatorics
Combining geometry and combinatorics A unified approach to sparse signal recovery Anna C. Gilbert University of Michigan joint work with R. Berinde (MIT), P. Indyk (MIT), H. Karloff (AT&T), M. Strauss
More informationInverse problems, Dictionary based Signal Models and Compressed Sensing
Inverse problems, Dictionary based Signal Models and Compressed Sensing Rémi Gribonval METISS project-team (audio signal processing, speech recognition, source separation) INRIA, Rennes, France Ecole d
More informationCompressed Sensing - Near Optimal Recovery of Signals from Highly Incomplete Measurements
Compressed Sensing - Near Optimal Recovery of Signals from Highly Incomplete Measurements Wolfgang Dahmen Institut für Geometrie und Praktische Mathematik RWTH Aachen and IMI, University of Columbia, SC
More informationMIT 9.520/6.860, Fall 2018 Statistical Learning Theory and Applications. Class 08: Sparsity Based Regularization. Lorenzo Rosasco
MIT 9.520/6.860, Fall 2018 Statistical Learning Theory and Applications Class 08: Sparsity Based Regularization Lorenzo Rosasco Learning algorithms so far ERM + explicit l 2 penalty 1 min w R d n n l(y
More informationEUSIPCO
EUSIPCO 013 1569746769 SUBSET PURSUIT FOR ANALYSIS DICTIONARY LEARNING Ye Zhang 1,, Haolong Wang 1, Tenglong Yu 1, Wenwu Wang 1 Department of Electronic and Information Engineering, Nanchang University,
More informationMachine Learning for Signal Processing Sparse and Overcomplete Representations. Bhiksha Raj (slides from Sourish Chaudhuri) Oct 22, 2013
Machine Learning for Signal Processing Sparse and Overcomplete Representations Bhiksha Raj (slides from Sourish Chaudhuri) Oct 22, 2013 1 Key Topics in this Lecture Basics Component-based representations
More informationRecent Developments in Compressed Sensing
Recent Developments in Compressed Sensing M. Vidyasagar Distinguished Professor, IIT Hyderabad m.vidyasagar@iith.ac.in, www.iith.ac.in/ m vidyasagar/ ISL Seminar, Stanford University, 19 April 2018 Outline
More informationA new method on deterministic construction of the measurement matrix in compressed sensing
A new method on deterministic construction of the measurement matrix in compressed sensing Qun Mo 1 arxiv:1503.01250v1 [cs.it] 4 Mar 2015 Abstract Construction on the measurement matrix A is a central
More informationCompressed Sensing. 1 Introduction. 2 Design of Measurement Matrices
Compressed Sensing Yonina C. Eldar Electrical Engineering Department, Technion-Israel Institute of Technology, Haifa, Israel, 32000 1 Introduction Compressed sensing (CS) is an exciting, rapidly growing
More informationPHASE RETRIEVAL OF SPARSE SIGNALS FROM MAGNITUDE INFORMATION. A Thesis MELTEM APAYDIN
PHASE RETRIEVAL OF SPARSE SIGNALS FROM MAGNITUDE INFORMATION A Thesis by MELTEM APAYDIN Submitted to the Office of Graduate and Professional Studies of Texas A&M University in partial fulfillment of the
More information1 minimization without amplitude information. Petros Boufounos
1 minimization without amplitude information Petros Boufounos petrosb@rice.edu The Big 1 Picture Classical 1 reconstruction problems: min 1 s.t. f() = 0 min 1 s.t. f() 0 min 1 + λf() The Big 1 Picture
More informationSparse analysis Lecture VII: Combining geometry and combinatorics, sparse matrices for sparse signal recovery
Sparse analysis Lecture VII: Combining geometry and combinatorics, sparse matrices for sparse signal recovery Anna C. Gilbert Department of Mathematics University of Michigan Sparse signal recovery measurements:
More informationRobust Sparse Recovery via Non-Convex Optimization
Robust Sparse Recovery via Non-Convex Optimization Laming Chen and Yuantao Gu Department of Electronic Engineering, Tsinghua University Homepage: http://gu.ee.tsinghua.edu.cn/ Email: gyt@tsinghua.edu.cn
More informationSensing systems limited by constraints: physical size, time, cost, energy
Rebecca Willett Sensing systems limited by constraints: physical size, time, cost, energy Reduce the number of measurements needed for reconstruction Higher accuracy data subject to constraints Original
More informationsparse and low-rank tensor recovery Cubic-Sketching
Sparse and Low-Ran Tensor Recovery via Cubic-Setching Guang Cheng Department of Statistics Purdue University www.science.purdue.edu/bigdata CCAM@Purdue Math Oct. 27, 2017 Joint wor with Botao Hao and Anru
More informationTHEORY OF COMPRESSIVE SENSING VIA l 1 -MINIMIZATION: A NON-RIP ANALYSIS AND EXTENSIONS
THEORY OF COMPRESSIVE SENSING VIA l 1 -MINIMIZATION: A NON-RIP ANALYSIS AND EXTENSIONS YIN ZHANG Abstract. Compressive sensing (CS) is an emerging methodology in computational signal processing that has
More information