The Dantzig Selector

1 The Dantzig Selector. Emmanuel Candès, California Institute of Technology. Statistics Colloquium, Stanford University, November 2006. Collaborator: Terence Tao (UCLA).

2 Statistical linear model: y = Xβ + z
- y ∈ R^n: vector of observations (available to the statistician)
- X ∈ R^{n×p}: data matrix (known)
- β ∈ R^p: object of interest (vector of parameters)
- z ∈ R^n: stochastic error; here z_i i.i.d. N(0, σ²)
Goal: estimate β from y (not Xβ).

3 Classical setup
- Number of parameters p fixed
- Number of observations n very large
- Asymptotics as n → ∞

4 Modern setup
- About as many parameters as observations: n ≈ p
- Many more parameters than observations: n < p or n ≪ p
Very important subject in statistics today: high-dimensional data (conferences, 100s of articles).

5 Example of interest: MRI angiography (sketch)
- β: N × N image of interest ⇒ p = N²
- y: noisy Fourier coefficients of the image along 22 radial lines
- 4% coverage for a 512 × 512 image: n = .04p
- 2% coverage for a 1024 × 1024 image: n = .02p

6 Examples of modern setup
- Biomedical imaging: fewer measurements than pixels
- Analog-to-digital conversion (ADCs)
- Inverse problems
- Genomics: low number of observations (in the tens), while the total number of genes assayed (and considered as possible regressors) is easily in the thousands
- Curve/surface estimation: recovery of a continuous-time curve from a finite number of noisy samples

7 Underdetermined systems
Fundamental difficulty: fewer equations than unknowns. Noiseless setup with p > n: y = Xβ. Seems highly problematic.

8 Sparsity as a way out?
What if β is sparse or nearly sparse? Then perhaps there are more observations than (effective) unknowns.
Warning: take e.g. p = 2n = 20 with X picking out the first 10 coordinates, so that y_1 = β_1, y_2 = β_2, ..., y_10 = β_10. Is it just smoke? What about β_11, β_12, ..., β_20?

9 Encouraging example, I
Reconstruct by solving min_b ||b||_ℓ1 := Σ_i |b_i| s.t. Xb = y = Xβ.
β has 15 nonzeros; n = 30 Fourier samples ⇒ perfect recovery.

10 Encouraging example, II
Reconstruct β with min_{b ∈ R^p} ||b||_TV s.t. Xb = y = Xβ, where ||b||_TV := Σ_{i1,i2} |∇b(i1, i2)|.
Figure: original Logan-Shepp phantom; naive reconstruction (filtered backprojection); reconstruction by min TV + nonnegativity constraint ⇒ perfect recovery.

11 A theory behind these results
Consider the underdetermined system y = Xβ where β is S-sparse (only S of the coordinates of β are nonzero).
Theorem: let β̂ = argmin_{b ∈ R^p} ||b||_ℓ1 subject to Xb = y. Then perfect reconstruction β̂ = β holds, provided X obeys a uniform uncertainty principle. (Tomorrow's talk.)

12 Statistical estimation
In real applications, data are corrupted. Sensing model: y = Xβ + z. Hopeless? (Most of the singular values of X are zero.)

13 The Dantzig selector
Assume the columns of X are unit-normed. Model: y = Xβ + z, z_j i.i.d. N(0, σ²).
Dantzig selector (DS): β̂ = argmin_b ||b||_ℓ1 s.t. |(X^T r)_i| ≤ λ_p σ for all i,
where r is the vector of residuals, r = y − Xb, and λ_p = sqrt(2 log p) (makes the true β feasible with large probability).
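To see why λ_p = sqrt(2 log p) makes the true β feasible with large probability, one can simulate the constraint directly. Below is a minimal Python sketch; the Gaussian design, sizes, and seed are illustrative assumptions, not part of the talk.

```python
import numpy as np

# Minimal sketch: check that the true beta is feasible for the DS constraint,
# i.e. max_i |X_i^T z| <= sqrt(2 log p) * sigma with large probability.
rng = np.random.default_rng(0)
n, p, sigma = 72, 256, 0.11          # illustrative sizes (assumption)

X = rng.standard_normal((n, p)) / np.sqrt(n)
X /= np.linalg.norm(X, axis=0)       # unit-normed columns, as assumed on the slide

lam = np.sqrt(2 * np.log(p))
trials, hits = 1000, 0
for _ in range(trials):
    z = sigma * rng.standard_normal(n)
    if np.max(np.abs(X.T @ z)) <= lam * sigma:
        hits += 1
print(f"P(true beta feasible) ~ {hits / trials:.3f}")
```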

14 Why ℓ1? Theory of the noiseless case; precedents in statistics.

15 Why X^T r small (and not r)?
Constraint: ||X^T r||_ℓ∞ ≤ λσ.
Invariance: imagine rotating the data y = Xβ + z with an orthogonal U: Uy = UXβ + Uz, i.e. ỹ = X̃β + z̃. The estimator should not change, and the DS is invariant.
Correlations: X^T r measures the correlation of the residuals with the predictors. Don't leave the jth predictor out when ⟨r, X_j⟩ is large! It is not the size of r that matters but that of X^T r; e.g. y = X_j and σ > ||X_j||_ℓ∞.

16 The Dantzig selector as a linear program
min_b Σ_i |b_i| s.t. |(X^T(y − Xb))_j| ≤ λ_p σ for all j can be recast as a linear program (LP):
min_{u, b ∈ R^p} Σ_i u_i s.t. −u ≤ b ≤ u, −λ_p σ ≤ X^T(y − Xb) ≤ λ_p σ
(a ≤ b means a_i ≤ b_i for all i).
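The recast program can be handed to an off-the-shelf LP solver. Below is a minimal Python sketch of that formulation; the function name, defaults, and the use of scipy.optimize.linprog are my choices, and a serious implementation would exploit the structure of X rather than forming X^T X densely.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, sigma, lam=None):
    """Sketch of the DS as an LP: min sum(u) s.t. -u <= b <= u and
    |X^T (y - Xb)| <= lam * sigma elementwise (variables stacked as [u; b])."""
    n, p = X.shape
    if lam is None:
        lam = np.sqrt(2 * np.log(p))
    G, Xty = X.T @ X, X.T @ y
    I, Z = np.eye(p), np.zeros((p, p))
    A_ub = np.block([[-I,  I],      #  b - u <= 0
                     [-I, -I],      # -b - u <= 0
                     [ Z,  G],      #  X^T X b <= lam*sigma + X^T y
                     [ Z, -G]])     # -X^T X b <= lam*sigma - X^T y
    b_ub = np.concatenate([np.zeros(2 * p),
                           lam * sigma + Xty,
                           lam * sigma - Xty])
    c = np.concatenate([np.ones(p), np.zeros(p)])   # minimize sum of u
    bounds = [(0, None)] * p + [(None, None)] * p   # u >= 0, b free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[p:]                                 # the estimate beta_hat
```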

17 Related approaches
Lasso (Tibshirani, 96): min ||y − Xb||²_ℓ2 s.t. ||b||_ℓ1 ≤ t.
Basis pursuit denoising (Chen, Donoho, Saunders, 96): min (1/2)||y − Xb||²_ℓ2 + λ_p σ ||b||_ℓ1.
The Dantzig selector is different.

18 The Dantzig selector works!
Assume β is S-sparse and X satisfies the condition δ_2S + θ_S,2S < 1. Then with large probability
||β̂ − β||²_ℓ2 ≤ O(2 log p) · S · σ².

19 The Dantzig selector works!
Assume β is S-sparse and X satisfies the condition δ_2S + θ_S,2S < 1. Then with large probability
||β̂ − β||²_ℓ2 ≤ O(2 log p) · S · σ².

20 The Dantzig selector works!
Assume β is S-sparse and X satisfies the condition δ_2S + θ_S,2S < 1. Then with large probability
||β̂ − β||²_ℓ2 ≤ O(2 log p) · S · σ².
No blow up!
- σ²: nicely degrades as the noise level increases

21 The Dantzig selector works!
Assume β is S-sparse and X satisfies the condition δ_2S + θ_S,2S < 1. Then with large probability
||β̂ − β||²_ℓ2 ≤ O(2 log p) · S · σ².
No blow up!
- σ²: nicely degrades as the noise level increases
- S: MSE is proportional to the true (unknown) number of parameters (information content) ⇒ adaptivity

22 The Dantzig selector works!
Assume β is S-sparse and X satisfies the condition δ_2S + θ_S,2S < 1. Then with large probability
||β̂ − β||²_ℓ2 ≤ O(2 log p) · S · σ².
No blow up!
- σ²: nicely degrades as the noise level increases
- S: MSE is proportional to the true (unknown) number of parameters (information content) ⇒ adaptivity
- 2 log p: price to pay for not knowing where the significant coordinates are

23 Understanding the condition: restricted isometries
X_M ∈ R^{n×|M|}: columns of X with indices in M.
Restricted isometry constant δ_S: for each S, δ_S is the smallest number such that
(1 − δ_S) Id ≤ X_M^T X_M ≤ (1 + δ_S) Id for all M with |M| ≤ S.
Sparse subsets of column vectors are approximately orthonormal.
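For very small p and S the constant δ_S can be computed by brute force over all supports, which helps build intuition for the definition. A Python sketch (the function name and toy sizes are mine; the enumeration is exponential in S and only meant for tiny examples):

```python
import numpy as np
from itertools import combinations

def restricted_isometry_constant(X, S):
    """Brute-force delta_S for tiny p, S: the smallest delta such that all
    eigenvalues of X_M^T X_M lie in [1 - delta, 1 + delta] for every |M| <= S."""
    p = X.shape[1]
    delta = 0.0
    for size in range(1, S + 1):
        for M in combinations(range(p), size):
            eig = np.linalg.eigvalsh(X[:, M].T @ X[:, M])
            delta = max(delta, abs(eig[0] - 1), abs(eig[-1] - 1))
    return delta

rng = np.random.default_rng(0)
n, p = 20, 10                                   # illustrative sizes (assumption)
X = rng.standard_normal((n, p)) / np.sqrt(n)
X /= np.linalg.norm(X, axis=0)                  # unit-normed columns
print("delta_2 ~", restricted_isometry_constant(X, S=2))
```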

24 Understanding the condition: the number θ
θ_{S,S'} is the smallest quantity for which
|⟨Xb, Xb'⟩| ≤ θ_{S,S'} ||b||_ℓ2 ||b'||_ℓ2
for all b, b' supported on disjoint subsets M, M' ⊂ {1, ..., p} with |M| ≤ S, |M'| ≤ S'.
(Figure: θ = cos φ, the angle between the two spans.)

25 The condition is almost necessary (uniform uncertainty principle)
To estimate an S-sparse vector β we need X s.t. δ_2S + θ_S,2S < 1.
With δ_2S = 1, one can have a 2S-sparse h ∈ R^p with Xh = 0. Decompose h = β − β', each S-sparse, with X(β − β') = 0, i.e. Xβ = Xβ' ⇒ the model is not identifiable!

26 Comparing with an oracle
Oracle informs us of the support M of the unknown β. Ideal LS estimate:
β^I_LS = argmin_b ||y − X_M b||²_ℓ2, i.e. β^I_LS = (X_M^T X_M)^{-1} X_M^T y (ideal because unachievable).
MSE: E ||β^I_LS − β||²_ℓ2 = Tr((X_M^T X_M)^{-1}) σ² ≈ S σ².
With large probability, the DS obeys ||β̂ − β||²_ℓ2 = O(log p) · E ||β − β^I_LS||²_ℓ2.

27 Is this good enough? A more powerful oracle
Oracle lets us observe all the coordinates of the unknown vector β directly, y_i = β_i + z_i, z_i i.i.d. N(0, σ²), and tells us which coordinates are above the noise level.
Ideal shrinkage estimator (Donoho and Johnstone, 94):
β̂^I_i = 0 if |β_i| < σ; β̂^I_i = y_i if |β_i| ≥ σ.
MSE of the ideal estimator: E ||β̂^I − β||²_ℓ2 = Σ_i E(β̂^I_i − β_i)² = Σ_i min(β_i², σ²).
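The ideal "keep or kill" risk Σ_i min(β_i², σ²) is easy to verify numerically. A minimal Python sketch; the Cauchy-distributed nonzeros and all sizes are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of the Donoho-Johnstone ideal estimator: keep y_i when
# |beta_i| >= sigma, kill otherwise, and compare its MSE with sum_i min(beta_i^2, sigma^2).
rng = np.random.default_rng(0)
p, sigma = 1000, 0.5
beta = np.zeros(p)
beta[:20] = rng.standard_cauchy(20)          # a few large coefficients (assumption)

keep = np.abs(beta) >= sigma                 # oracle knowledge of the big coordinates
trials, mse = 2000, 0.0
for _ in range(trials):
    y = beta + sigma * rng.standard_normal(p)
    beta_hat = np.where(keep, y, 0.0)
    mse += np.sum((beta_hat - beta) ** 2)
print("empirical MSE     :", mse / trials)
print("sum min(b^2, s^2) :", np.sum(np.minimum(beta ** 2, sigma ** 2)))
```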

28 Main claim: oracle inequality
Same assumptions as before. Then with large probability
||β̂ − β||²_ℓ2 ≤ O(2 log p) · (σ² + Σ_i min(β_i², σ²)).
The DS does not have any oracle information: fewer observations than variables, and direct observations of β are not available. Yet it nearly achieves the ideal MSE! An unexpected feat.
This is optimal: no estimator can essentially do better. Extensions to compressible signals. Other work: Haupt and Nowak (05).

29 Another oracle-style benchmark: ideal model selection
Subset M ⊂ {1, ..., p}. Span of M: V_M := {b ∈ R^p : b_i = 0 for all i ∉ M}. Least squares estimate: β̂_M = argmin_{b ∈ V_M} ||y − Xb||²_ℓ2.
Oracle selects the ideal least-squares estimator β*: β* = β̂_{M*}, where M* = argmin_{M ⊂ {1,...,p}} E ||β − β̂_M||²_ℓ2 (ideal because unachievable).
The risk of the ideal estimator obeys E ||β − β*||²_ℓ2 ≥ (1/2) Σ_i min(β_i², σ²).

30 DS vs. ideal model selection
With large probability, the Dantzig selector obeys ||β̂ − β||²_ℓ2 = O(log p) · E ||β − β*||²_ℓ2.
The DS does not have any oracle information, and yet it nearly selects the best model and nearly achieves the ideal MSE!

31 Extensions to compressible signals
Popular model: power-law decay |β|_(1) ≥ |β|_(2) ≥ ... ≥ |β|_(p), with |β|_(i) ≤ R · i^{-s}.
Ideal risk: Σ_i min(β_i², σ²) = min_{0 ≤ m ≤ p} (m σ² + Σ_{i>m} β²_(i)) ≈ min_{0 ≤ m ≤ p} (m σ² + R² m^{-2s+1}).
The optimal m* is the number of parameters we need to estimate: m* = |{i : |β_i| > σ}| ⇒ could perhaps mimic the ideal risk if the condition on X is valid with m*.

32 Extensions to compressible signals
Popular model: power-law decay |β|_(1) ≥ |β|_(2) ≥ ... ≥ |β|_(p), with |β|_(i) ≤ R · i^{-s}.
Ideal risk: Σ_i min(β_i², σ²) = min_{0 ≤ m ≤ p} (m σ² + Σ_{i>m} β²_(i)) ≈ min_{0 ≤ m ≤ p} (m σ² + R² m^{-2s+1}).
The optimal m* is the number of parameters we need to estimate: m* = |{i : |β_i| > σ}| ⇒ could perhaps mimic the ideal risk if the condition on X is valid with m*.
Theorem: for S such that δ_2S + θ_S,2S < 1,
E ||β̂ − β||²_ℓ2 ≤ min_{1 ≤ m ≤ S} O(log p) (m σ² + R² m^{-2s+1}).
Can recover the minimax rate over weak-ℓp balls.
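The bias-variance tradeoff behind the ideal risk for a power-law signal can be traced numerically. A small Python sketch in which all parameter values are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of the compressible-signal tradeoff: for |beta|_(i) <= R * i**(-s),
# the ideal risk is min_m ( m*sigma^2 + sum_{i>m} beta_(i)^2 ).
p, R, s, sigma = 10_000, 1.0, 1.0, 0.01        # illustrative values (assumption)
i = np.arange(1, p + 1)
beta_sorted_sq = (R * i ** (-s)) ** 2

# tail[m] = sum_{i > m} beta_(i)^2, for m = 0, ..., p
tail = np.concatenate([np.cumsum(beta_sorted_sq[::-1])[::-1], [0.0]])
m = np.arange(0, p + 1)
risk = m * sigma ** 2 + tail
m_star = int(np.argmin(risk))
print(f"optimal m = {m_star}, ideal risk ~ {risk[m_star]:.3e}")
print("#{i : |beta_i| > sigma} =", int(np.sum(R * i ** (-s) > sigma)))
```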

33 Connections with model selection
Canonical selection procedure: min_b ||y − Xb||²_ℓ2 + Λ σ² |{i : b_i ≠ 0}|.
- Λ = 2: C_p (Mallows 73), AIC (Akaike 74)
- Λ = log n: BIC (Schwarz 78)
- Λ = 2 log p: RIC (Foster and George 94)

34 Connections with model selection
Canonical selection procedure: min_b ||y − Xb||²_ℓ2 + Λ σ² |{i : b_i ≠ 0}|.
- Λ = 2: C_p (Mallows 73), AIC (Akaike 74)
- Λ = log n: BIC (Schwarz 78)
- Λ = 2 log p: RIC (Foster and George 94)
NP-hard: practically impossible for p > 50 or 60. Enormous and beautiful theoretical literature, focused on estimating Xβ: Barron, Cover; Foster, George; Donoho, Johnstone; Birgé, Massart; Barron, Birgé, Massart.
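For tiny p the canonical ℓ0-penalized procedure can be solved by enumerating all subsets, which also makes plain why it becomes impractical beyond a few dozen variables (the search is over 2^p supports). A brute-force Python sketch; the function name and interface are mine:

```python
import numpy as np
from itertools import combinations

def best_subset(X, y, sigma, Lambda):
    """Brute-force canonical selection: minimize over supports M
    ||y - X_M b_M||^2 + Lambda * sigma^2 * |M|.  Only feasible for tiny p."""
    n, p = X.shape
    best_val, best_beta = np.sum(y ** 2), np.zeros(p)       # empty model as baseline
    for size in range(1, p + 1):
        for M in combinations(range(p), size):
            cols = list(M)
            bM = np.linalg.lstsq(X[:, cols], y, rcond=None)[0]
            rss = np.sum((y - X[:, cols] @ bM) ** 2)
            val = rss + Lambda * sigma ** 2 * size
            if val < best_val:
                best_val = val
                best_beta = np.zeros(p)
                best_beta[cols] = bM
    return best_beta
```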

35 In contrast...
- The Dantzig selector is practical (computationally feasible)
- The Dantzig selector does nearly as well as if one had perfect information about the object of interest
- The setup is fairly general

36 Geometric intuition
Constraint: ||X^T(y − Xβ̂)||_ℓ∞ ≤ λσ.
- β is feasible ⇒ β̂ lies inside the diamond {b : ||b||_ℓ1 ≤ ||β||_ℓ1}, i.e. ||β̂||_ℓ1 ≤ ||β||_ℓ1
- β is feasible ⇒ β̂ lies inside the slab ||X^T X(β − β̂)||_ℓ∞ ≤ 2λσ
The true vector is feasible if |⟨z, X_j⟩| ≤ λσ for all columns X_j of X, since then
||X^T(Xβ − Xβ̂)||_ℓ∞ ≤ ||X^T(Xβ − y)||_ℓ∞ + ||X^T(y − Xβ̂)||_ℓ∞ ≤ 2λσ.

37 Practical performance, I
p = 256, n = 72. X is a Gaussian matrix, X_ij i.i.d. N(0, 1/n). σ = .11 and λ = 3.5, so that the threshold is at δ = λσ = .385.
Figure: true values vs. estimates for the Dantzig selector (left) and the Gauss-Dantzig selector (right).

38 Connections with soft-thresholding
Suppose X is an orthogonal matrix. The Dantzig selector minimizes Σ_i |b_i| s.t. |(X^T y)_i − b_i| ≤ λ_p σ.
Implication: DS = soft-thresholding of ỹ = X^T y at level λ_p σ:
β̂_i = ỹ_i − λ_p σ if ỹ_i > λ_p σ; β̂_i = 0 if |ỹ_i| ≤ λ_p σ; β̂_i = ỹ_i + λ_p σ if ỹ_i < −λ_p σ.
Explains the soft-thresholding-like behavior.
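In the orthogonal case the closed form is one line of code. A minimal Python sketch; the toy sizes and the QR construction of an orthogonal design are my choices:

```python
import numpy as np

def soft_threshold(t, thresh):
    """Soft-thresholding: shrink toward zero by `thresh`, kill anything smaller."""
    return np.sign(t) * np.maximum(np.abs(t) - thresh, 0.0)

# For an orthogonal design X, the DS reduces to soft-thresholding of X^T y.
rng = np.random.default_rng(0)
p, sigma = 64, 0.1
X, _ = np.linalg.qr(rng.standard_normal((p, p)))   # orthogonal matrix (assumption)
beta = np.zeros(p)
beta[:5] = 3.0
y = X @ beta + sigma * rng.standard_normal(p)

lam = np.sqrt(2 * np.log(p))
beta_hat = soft_threshold(X.T @ y, lam * sigma)
print("recovered support:", np.nonzero(beta_hat)[0])
```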

39 The Gauss-Dantzig selector
Two-stage procedure for bias correction.
Stage 1: estimate M = {i : β_i ≠ 0} with M̂ = {i : |β̂_i| > t·σ}, t ≥ 0.
Stage 2: construct the least squares estimator β̂ = argmin_{b ∈ V_M̂} ||y − Xb||²_ℓ2.
That is, use the Dantzig selector to estimate the model M, then construct a new estimator by regressing the data y onto the model M̂.
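A sketch of the two-stage refit, taking any first-stage Dantzig-selector estimate as input; the function name, interface, and default threshold multiplier are mine:

```python
import numpy as np

def gauss_dantzig(X, y, beta_ds, sigma, t=1.0):
    """Sketch of the two-stage Gauss-Dantzig selector: threshold a first-stage
    DS estimate `beta_ds` to pick a model, then refit by least squares on it.
    The threshold multiplier t is a free parameter (value here is illustrative)."""
    p = X.shape[1]
    support = np.flatnonzero(np.abs(beta_ds) > t * sigma)     # Stage 1: model M_hat
    beta_hat = np.zeros(p)
    if support.size:
        coef = np.linalg.lstsq(X[:, support], y, rcond=None)[0]  # Stage 2: LS refit
        beta_hat[support] = coef
    return beta_hat
```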

40 Practical performance, II
p = 5,000, n = 1,000. X is binary, X_ij = ±1/√n w.p. 1/2. The nonzero entries of β are i.i.d. Cauchy. σ = .5 and λσ = 2.09.
Table: ratio Σ_i (β̂_i − β_i)² / Σ_i min(β_i², σ²) as a function of S for the Gauss-Dantzig selector. The constants are quite small!

41 Figure: estimates over all coordinates (left) and over the first 500 coordinates (right).

42 Dantzig selector in image processing, I

43 Figure: original image and reconstruction from n = .043p measurements.

44 Significance for experimental design
Want to sense a high-dimensional parameter vector in which only a few coordinates are significant. How many measurements do you need?

45 What should we measure?
Want the conditions for estimation to hold for large values of S.
- X_ij i.i.d. N(0, 1): S up to ~ n / log(p/n) (Szarek)
- X_ij = ±1 w.p. 1/2: S up to ~ n / log(p/n) (Pajor et al.)
- Randomized incomplete Fourier matrices
- Many others (tomorrow's talk)
Random designs are good sensing matrices.

46 Applications: signal acquisition
- Analog-to-digital converters, receivers, ...
- Space: cameras, medical imaging devices, ...
- Nyquist/Shannon foundation: must sample at twice the highest frequency
- Signals are wider and wider band
- Hardware brick wall: extremely fast/high-resolution samplers are decades away

47 Subsampling
Signal model: s(t) = (1/√n) Σ_{k=1}^p β_k e^{i2πω_k t}.
Observe the signal at n subsampled locations: y_j = s(t_j) + σ z_j, 1 ≤ j ≤ n.
Model: y = Xβ + σz, with X_{j,k} = (1/√n) e^{i2πω_k t_j}.
X obeys the conditions of the theorem for large S; practically, S ≈ n/...
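A sketch of this sensing model, building the partial Fourier matrix and a noisy set of time samples; the sizes follow the next slide, while the frequency grid, sample locations, and seed are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of the subsampling model above:
# s(t) = (1/sqrt(n)) * sum_k beta_k exp(i 2 pi omega_k t), observed at n random times.
rng = np.random.default_rng(0)
p, n, S, sigma = 1000, 110, 20, 0.05             # illustrative sizes (assumption)

omega = np.arange(p)                             # candidate frequencies (assumption)
t = np.sort(rng.uniform(0, 1, size=n))           # n non-uniformly spaced sample times
X = np.exp(2j * np.pi * np.outer(t, omega)) / np.sqrt(n)   # X[j, k] = e^{i 2 pi w_k t_j} / sqrt(n)

beta = np.zeros(p)
beta[rng.choice(p, S, replace=False)] = rng.standard_normal(S)   # S active Fourier modes
y = X @ beta + sigma * rng.standard_normal(n)    # noisy time samples
# y now feeds a (complex-valued) Dantzig selector / l1 recovery step.
print("y shape:", y.shape, "| active modes:", np.count_nonzero(beta))
```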

48 Sampling a spectrally sparse signal
Length p = 1000 vector, with S = 20 active Fourier modes (figure: s(t) and β(ω)).
Measurements: sample at n (non-uniformly spaced) locations in time, y_j = s(t_j) + σ z_j, z ~ N(0, I_n).
Recover using the Gauss-Dantzig selector.

49 Sampling a spectrally sparse signal
Ideal recovery error: E ||s − ŝ||²_ℓ2 = E ||β_ideal − β||²_ℓ2 ≈ S σ².
For the DS, we observe (across a broad range of σ and n) E ||β̂_DS − β||²_ℓ2 ≈ 1.15 · S σ²: the price of unknown support is about 15% in additional error.
(Example for n = 110: table comparing σ², Sσ², and E ||β̂_DS − β||²_ℓ2.)
This has implications for the so-called Walden curve for Shannon-based high-precision ADCs.

50 Testbed: detection of a small tone from ℓ1 projection
Small-tone detection is limited by clumping of error energy in sparse bins.
Figure: measured vs. recovered signal spectrum (normalized amplitude in dB vs. frequency in MHz); SNR = 67.63 dB, SFDR = 79.47 dB.
ITAR/EAR-Controlled Information. Do not export or release to the public without USG approval.

51 Summary
- Accurate estimation from few observations is possible
- Need to solve an LP
- Need an incoherent design
Opportunities: biomedical imagery, new A/D devices.
Part of a larger body of work with many applications: sparse decompositions in overcomplete dictionaries, error correction (decoding by LP), information theory (universal encoders).

52 Is this magic? Download code at

53 New paradigm for analog to digital
- Direct sampling: reproduction of the light field; analog/digital photography, mid-19th century
- Indirect sampling: data acquisition in a transformed domain, second half of the 20th century; e.g. CT, MRI
- Compressive sampling: data acquisition in an incoherent domain
Take a comparably small number of general measurements rather than the usual pixels.
Impact: design incoherent analog sensors, e.g. optical. Pay-off: far fewer sensors than what is usually considered necessary.

54 Rice compressed sensing camera Richard Baraniuk, Kevin Kelly, Yehia Massoud, Don Johnson dsp.rice.edu/cs Other works: Coifman et al., Brady.

55 Dantzig selector in image processing, II
N by N image: p = N². Data y = Xβ + z, where (Xβ)_k = Σ_{t1,t2} β(t1, t2) cos(2π(k1 t1 + k2 t2)/N), k = (k1, k2).
Dantzig selection with residual vector r = y − Xb: min ||b||_TV subject to |(X^T r)_i| ≤ λσ for all i.

56 NIST: imaging fuel cells
Look inside fuel cells as they are operating via neutron imaging. Accelerate the process by limiting the number of projections; each projection = samples along radial lines in the Fourier domain.
Given measurements y, apply the Dantzig selector: min ||b||_TV subject to ||X^T(y − Xb)||_ℓ∞ ≤ λσ, where X = P_Ω = partial pseudo-polar FFT (see Averbuch et al. (2001)).

57 Imaging fuel cells
Reconstruction from 20 projections (figure: original, backprojection, Dantzig selector).

58 Imaging fuel cells
From actual measurements with constant SNR: backprojection from 720 slices every 0.25 degrees vs. Dantzig from 60 slices every 3 degrees. Results are very preliminary; could improve with more sophisticated models.
