Compressive Sensing under Matrix Uncertainties: An Approximate Message Passing Approach


1 Compressive Sensing under Matrix Uncertainties: An Approximate Message Passing Approach

Asilomar 2011
Jason T. Parker (AFRL/RYAP), Philip Schniter (OSU), Volkan Cevher (EPFL)
Distribution A, Approved for Public Release as 88 ABW

2 Problem Statement

Traditional compressive sensing (CS) addresses the underdetermined linear regression

    y = Ax + w,    y, w ∈ C^M; A ∈ C^{M×N}; x ∈ C^N; M < N.

More generally, consider an unknown matrix perturbation E ∈ C^{M×N}:

    y = (Â + E)x + w,

where the true matrix A = Â + E is unknown. We characterize A with entry-wise means and variances

    Â_mn = E{A_mn},    ν^A_mn = var{A_mn}.
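To make the measurement model concrete, here is a minimal NumPy sketch that draws data from y = (Â + E)x + w. It is real-valued for simplicity (the deck works in C), and the sizes, sparsity, and variances are illustrative choices mirroring the experiment on slide 8:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 103, 256, 20        # measurements, signal length, sparsity
nu_A = 0.05                   # entry-wise perturbation variance

A_hat = rng.standard_normal((M, N))              # known estimate, i.i.d. N(0,1)
E = np.sqrt(nu_A) * rng.standard_normal((M, N))  # unknown perturbation
A = A_hat + E                                    # true (unknown) matrix

x = np.zeros(N)                                  # K-sparse Bernoulli-Rademacher signal
support = rng.choice(N, K, replace=False)
x[support] = rng.choice([-1.0, 1.0], K)

snr_db = 20                                      # additive-noise SNR, as on slide 8
nu_w = np.mean(np.abs(A @ x) ** 2) / 10 ** (snr_db / 10)
w = np.sqrt(nu_w) * rng.standard_normal(M)
y = A @ x + w                                    # measurements under matrix uncertainty
```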

3 Previous Work

Notice that

    y = (Â + E)x + w = Âx + (Ex + w),

so the perturbation acts as signal-dependent noise.

- Standard CS performance analysis for bounded E [1; 2]
- LASSO-style: Sparsity-Cognizant Total Least Squares (S-TLS) [3]

    {x̂_S-TLS, Ê_S-TLS} = arg min_{x,E} ‖(Â + E)x − y‖₂² + λ_E ‖E‖_F² + λ‖x‖₁

- Dantzig-Selector-style: Matrix Uncertain (MU) Selector [4]

    x̂_MU-Selector = arg min_x ‖x‖₁ subject to ‖Â^H(y − Âx)‖_∞ ≤ λ‖x‖₁ + ε
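For reference, a small sketch of the S-TLS cost from [3] as written above (the objective only, not a solver; the regularization weights lam_E and lam are inputs the user must choose):

```python
import numpy as np

def stls_objective(x, E, A_hat, y, lam_E, lam):
    # ||(A_hat + E) x - y||_2^2 + lam_E * ||E||_F^2 + lam * ||x||_1
    r = (A_hat + E) @ x - y
    return (np.vdot(r, r).real
            + lam_E * np.sum(np.abs(E) ** 2)
            + lam * np.sum(np.abs(x)))
```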

4 Generalized Approximate Message Passing

Approximate message passing (AMP) [5] is derived from (approximate) belief propagation:

    x̂^{k+1} = η_s(x̂^k + A^H z^k; β^k θ^k)
    z^k = y − Ax̂^k + b^k z^{k−1}

- With η_s chosen as soft thresholding: near-minimax performance (robust)
- With η_s distribution-specific: approximate MMSE inference

Generalized AMP (GAMP) [6; 7]:
- Computes MMSE or MAP estimates of x ∈ C^N under a separable prior p(x) = ∏_n p_X(x_n)
- Supports an arbitrary separable output channel from the noiseless measurements z = Ax ∈ C^M to y, i.e., p(y|z) = ∏_m p_{Y|Z}(y_m | z_m)
- Handles variable A_mn
- Provides approximate posteriors
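A minimal sketch of the soft-thresholding AMP recursion above, assuming a real-valued A; the fixed threshold theta and the iteration count are assumptions, and b is the usual Onsager coefficient (the average derivative of the thresholder) from [5]:

```python
import numpy as np

def soft(u, t):
    """Soft-thresholding denoiser eta_s."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(y, A, theta=0.1, iters=30):
    M, N = A.shape
    x_hat, z = np.zeros(N), y.copy()
    for _ in range(iters):
        x_new = soft(x_hat + A.T @ z, theta)     # denoise the pseudo-data
        b = np.count_nonzero(x_new) / M          # Onsager correction b^k
        z = y - A @ x_new + b * z                # residual with memory term
        x_hat = x_new
    return x_hat
```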

5 Matrix Uncertain GAMP

Recall that the noise-free measurements are z = Ax. For large N, the central limit theorem motivates treating z_m | x_n as Gaussian. Using the zero-mean quantities Ã_mn ≜ A_mn − Â_mn and x̃_n ≜ x_n − x̂_n, we can write

    z_m = (Â_mn + Ã_mn)x_n + Σ_{r≠n} (Â_mr x̂_r + Â_mr x̃_r + Ã_mr x̂_r + Ã_mr x̃_r),

from which we conclude

    E{z_m | x_n} = Â_mn x_n + Σ_{r≠n} Â_mr x̂_r
    var{z_m | x_n} = ν^A_mn |x_n|² + Σ_{r≠n} (|Â_mr|² ν^x_r + ν^A_mr |x̂_r|² + ν^A_mr ν^x_r)

The ν^A terms (highlighted in red on the original slide) modify the original GAMP variance calculation.
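The conditional mean and variance above are easy to sanity-check by Monte Carlo. A sketch for a single row m follows; all sizes and variance values here are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, trials = 64, 0, 50_000
A_hat = rng.standard_normal(N)        # row m of the known matrix estimate
nu_A = 0.05 * np.ones(N)              # entry-wise matrix-error variances
x_hat = rng.standard_normal(N)        # means of the x estimates
nu_x = 0.5 * np.ones(N)               # variances of the x estimates
x_n = 1.7                             # the value of x_n we condition on

A = A_hat + np.sqrt(nu_A) * rng.standard_normal((trials, N))
x = x_hat + np.sqrt(nu_x) * rng.standard_normal((trials, N))
x[:, n] = x_n                         # condition on x_n
z = np.sum(A * x, axis=1)             # z_m = sum_r A_mr x_r, per trial

mean_pred = A_hat[n] * x_n + np.sum(np.delete(A_hat * x_hat, n))
var_pred = (nu_A[n] * x_n**2
            + np.sum(np.delete(A_hat**2 * nu_x + nu_A * x_hat**2 + nu_A * nu_x, n)))
print(z.mean(), mean_pred)            # each pair should nearly agree
print(z.var(), var_pred)
```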

6 MU-GAMP Algorithm Summary

for t = 1, 2, 3, ...
    ∀m: ẑ_m(t)    = Σ_{n=1}^N Â_mn x̂_n(t)                              (R1)
    ∀m: ν^z_m(t)  = Σ_{n=1}^N |Â_mn|² ν^x_n(t)                          (R2a)
    ∀m: ν^p_m(t)  = ν^z_m(t) + Σ_{n=1}^N ν^A_mn (ν^x_n(t) + |x̂_n(t)|²)  (R2b)
    ∀m: p̂_m(t)    = ẑ_m(t) − ν^z_m(t) û_m(t−1)                          (R3)
    ∀m: û_m(t)    = g_out(y_m, p̂_m(t), ν^p_m(t))                        (R4)
    ∀m: ν^u_m(t)  = −(∂/∂p̂) g_out(y_m, p̂_m(t), ν^p_m(t))               (R5)
    ∀n: ν^r_n(t)  = ( Σ_{m=1}^M |Â_mn|² ν^u_m(t) )^{−1}                 (R6)
    ∀n: r̂_n(t)    = x̂_n(t) + ν^r_n(t) Σ_{m=1}^M Â*_mn û_m(t)            (R7)
    ∀n: ν^x_n(t+1) = ν^r_n(t) g'_in(r̂_n(t), ν^r_n(t))                   (R8)
    ∀n: x̂_n(t+1)  = g_in(r̂_n(t), ν^r_n(t))                              (R9)
end

The ν^A_mn term in (R2b) is where the matrix uncertainty from slide 5 enters; the remaining steps match standard GAMP.
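Below is a minimal NumPy rendering of (R1)-(R9). It is a sketch under two assumptions made so that g_out and g_in have closed forms: an AWGN output channel p(y|z) = N(z, ν^w) and a zero-mean Gaussian prior N(0, ν₀) on x (the deck's experiments use a Bernoulli-Rademacher or Bernoulli-Gaussian prior, which would change only g_in). Here nu_A is an (M, N) array of entry-wise variances; pass nu_A * np.ones((M, N)) for the i.i.d. case:

```python
import numpy as np

def mu_gamp(y, A_hat, nu_A, nu_w=1e-2, nu_0=1.0, iters=25):
    M, N = A_hat.shape
    x_hat, nu_x = np.zeros(N), nu_0 * np.ones(N)
    u_hat = np.zeros(M)
    A2 = np.abs(A_hat) ** 2
    for _ in range(iters):
        z_hat = A_hat @ x_hat                               # (R1)
        nu_z = A2 @ nu_x                                    # (R2a)
        nu_p = nu_z + nu_A @ (nu_x + np.abs(x_hat) ** 2)    # (R2b) uncertainty term
        p_hat = z_hat - nu_z * u_hat                        # (R3)
        u_hat = (y - p_hat) / (nu_p + nu_w)                 # (R4): AWGN g_out
        nu_u = 1.0 / (nu_p + nu_w)                          # (R5): -d g_out / d p
        nu_r = 1.0 / (A2.T @ nu_u)                          # (R6)
        r_hat = x_hat + nu_r * (A_hat.conj().T @ u_hat)     # (R7)
        nu_x = nu_r * nu_0 / (nu_0 + nu_r)                  # (R8): Gaussian g_in'
        x_hat = nu_0 * r_hat / (nu_0 + nu_r)                # (R9): Gaussian g_in
    return x_hat, nu_x
```

With the data from the slide-2 sketch, mu_gamp(y, A_hat, nu_A * np.ones((M, N)), nu_w=nu_w) gives a rough recovery of x; a prior-matched g_in would track the deck's experiments more closely.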

7 Independent Identically Distributed Matrix Errors

Consider i.i.d. matrix errors with ν^A_mn = ν^A. For additive noise, a CLT argument suggests that, for large N, we can well approximate

    p(y | x) ≈ N(Âx, (ν^A ‖x‖₂² + ν^w) I).

By the law of large numbers, ‖x‖₂² is essentially constant for large N.

Conclusion: i.i.d. matrix uncertainty can be addressed by tuning standard algorithms for large N.
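In code, the genie-aided noise tuning used for standard GAMP on the next slide reduces to one line (a sketch reusing names from the slide-2 snippet; in practice ‖x‖₂² would be replaced by an estimate rather than the true signal):

```python
# Effective noise variance under i.i.d. matrix error: for large N, the term Ex
# behaves like extra additive Gaussian noise of variance nu_A * ||x||_2^2.
nu_eff = nu_A * np.sum(np.abs(x) ** 2) + nu_w
```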

8 Phase Transition: i.i.d. Matrix Errors

[Figure: phase-transition curves in the (M/N, K/M) plane for GAMP, MU-GAMP, S-TLS, MU-Selector, and SpaRSA]

- N = 256; Â is i.i.d. N(0, 1); ν^A = 0.05
- x is Bernoulli-Rademacher (±1 non-zero entries)
- Gaussian additive noise at 20 dB SNR; the effective SNR is about 12 dB
- LASSO (via SpaRSA), S-TLS, and MU-Selector parameters use genie-aided tuning
- GAMP uses a genie-aided computation of the effective noise variance
- Curves show −15 dB NMSE contours based on the median over 100 trials

9 NMSE vs M/N for Sparse Matrix Errors

[Figure: NMSE (dB) versus M/N for GAMP, MU-GAMP, S-TLS, MU-Selector, and SpaRSA]

- Same setup, except that the entries of E are now Bernoulli-Rademacher with 99% zeros and ν^A = 5 for the non-zeros.
- MU-GAMP is given the true entries ν^A_mn, while GAMP is given only the true effective noise variance.
- The solid lines are linear estimates given the true support of x, using Â (blue) and Â + E (black).
- Naive versions of S-TLS and MU-Selector are used with genie-aided tuning; the parametric S-TLS or compensated MU-Selector would likely show improved performance.

10 Parametric MU-GAMP

Parametric model: let Θ ∈ C^P be an unknown parameter vector in

    y = A(Θ)x + w.

We employ a first-order Taylor series expansion, similar to parametric S-TLS [3]:

    y ≈ ( A(Θ̂) + Σ_{p=1}^P (Θ_p − Θ̂_p) E_p(Θ̂) ) x + w,    E_p(Θ̂) ≜ ∂A(α)/∂α_p |_{α = Θ̂}.

This suggests alternating between two estimation steps:
- given (Θ̂, ν̂^Θ), estimate x using the dictionary A(Θ̂), producing (x̂, ν̂^x);
- given (x̂, ν̂^x), estimate Θ using the dictionaries E_p(Θ̂)x̂, p = 1, ..., P.

11 Parametric MU-GAMP: Compute x̂

Data model:

    y ≈ ( A(Θ̂) + Σ_p (Θ_p − Θ̂_p) E_p(Θ̂) ) x + w.

First, assume we have a parameter estimate (Θ̂, ν^Θ). We can immediately write

    y = Cx + w,    C ≜ A(Θ̂) + Σ_p (Θ_p − Θ̂_p) E_p(Θ̂),
    ĉ ≜ E{C} = A(Θ̂),    ν^c ≜ var{C} = Σ_p ν^Θ_p |E_p|²,

where squares on matrix terms are understood as element-wise squared magnitudes, and the mean and variance of a matrix are likewise interpreted element-wise. We can use MU-GAMP to compute an estimate (x̂, ν^x) from this model.

12 Parametric MU-GAMP: Compute Θ̂

Alternate form:

    ( A(Θ̂) + Σ_p (Θ_p − Θ̂_p) E_p(Θ̂) ) x
        = Σ_p Θ_p E_p(Θ̂) x
          + ( A(Θ̂) − Σ_p Θ̂_p E_p(Θ̂) ) x̂        [known constant]
          + ( A(Θ̂) − Σ_p Θ̂_p E_p(Θ̂) ) x̃        [zero-mean]

We can leverage this expression to obtain a linear model for Θ with a known dictionary B,

    u = BΘ + n,

and estimate Θ from this model using MU-GAMP! Here

    u ≜ y − ( A(Θ̂) − Σ_p Θ̂_p E_p(Θ̂) ) x̂,    n ≜ w + ( A(Θ̂) − Σ_p Θ̂_p E_p(Θ̂) ) x̃,
    E{n} = 0,    var{n} = ν^w + |A(Θ̂) − Σ_p Θ̂_p E_p(Θ̂)|² ν^x,
    B ≜ [ E_1(Θ̂)x   E_2(Θ̂)x   ···   E_P(Θ̂)x ],
    b̂ ≜ E{B} = [ E_1(Θ̂)x̂   E_2(Θ̂)x̂   ···   E_P(Θ̂)x̂ ],
    ν^b ≜ var{B} = [ |E_1(Θ̂)|² ν^x   |E_2(Θ̂)|² ν^x   ···   |E_P(Θ̂)|² ν^x ].
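Pulling slides 10-12 together, here is a schematic of the alternation in code. It is a sketch only: it assumes the mu_gamp routine from the slide-6 sketch is in scope, and A_of(theta) plus the derivative matrices E_list[p] are placeholders the user must supply for their parametrization. The E_p are held fixed across iterations, which is exact when A is affine in Θ (as in Examples 1 and 2 below):

```python
import numpy as np

def parametric_mu_gamp(y, A_of, E_list, theta0, nu_theta0, nu_w, outer_iters=10):
    theta_hat, nu_theta = theta0.copy(), nu_theta0.copy()
    for _ in range(outer_iters):
        # Estimate x (slide 11): y = Cx + w with E{C} = A(theta_hat) and
        # entry-wise var{C} = sum_p nu_theta[p] * |E_p|^2.
        C_hat = A_of(theta_hat)
        nu_C = sum(nu_theta[p] * np.abs(E_list[p]) ** 2 for p in range(len(E_list)))
        x_hat, nu_x = mu_gamp(y, C_hat, nu_C, nu_w=nu_w)

        # Estimate theta (slide 12): u = B*theta + n after subtracting the
        # known constant, again handled by MU-GAMP with an uncertain dictionary.
        D = C_hat - sum(theta_hat[p] * E_list[p] for p in range(len(E_list)))
        u = y - D @ x_hat
        nu_n = nu_w + np.abs(D) ** 2 @ nu_x          # var{n}, element-wise
        B_hat = np.stack([E @ x_hat for E in E_list], axis=1)
        nu_B = np.stack([np.abs(E) ** 2 @ nu_x for E in E_list], axis=1)
        theta_hat, nu_theta = mu_gamp(u, B_hat, nu_B, nu_w=nu_n)
    return x_hat, theta_hat
```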

13 Example 1: NMSE vs Parameter Dimension

In this toy example (N = 256, M = 103),

    A = A_0 + Σ_{p=1}^P θ_p E_p.

[Figure: NMSE (dB) versus P for MU-GAMP and parametric MU-GAMP]

- A_0 and all the E_p have entries drawn i.i.d. Gaussian.
- As we vary P, the entries of the E_p matrices are scaled to keep E{ν^A_mn} = ν^A for all m, n.
- MU-GAMP is given the true element-wise variances but is unaware of the underlying Θ structure.
- Parametric MU-GAMP leverages this underlying structure to improve performance for small P.

14 Example 2: Joint Calibration and Recovery

[Figure: NMSE (dB) versus iteration for x estimation (parametric MU-GAMP, MU-GAMP, GENIE, GAMP with the true A) and for Θ estimation (parametric MU-GAMP, GENIE)]

- N = 256; M = 103; K = 20; P = 10
- Each E_p contains ones for M/P of the rows and zeros elsewhere; this model is a surrogate for channel calibration errors in a measurement system.
- The unknown x is Bernoulli-Gaussian, while Θ is Gaussian. In this example, all signals are complex-valued.
- The GENIE result for x assumes perfect knowledge of Θ, and vice versa. The GENIE also knows the signal support when applicable.

15 Example 3: Blind Deconvolution

[Figure: NMSE (dB) versus iteration for x and Θ estimation, as in Example 2]

We consider here the model

    Y = Ψ A(Θ) X,

where A(Θ) ∈ C^{N×N} is circulant and Θ ∈ C^N represents perturbations to its first column; Y ∈ C^{M×S}, X ∈ C^{N×S}, and Ψ ∈ C^{M×N} is a mixing matrix.

- N = 256; M = 103; K = 20; S = 8
- Each E_p represents the change in the system response for a given coefficient of the impulse response.
- This model is a surrogate for learning a system impulse response from S snapshots, where each signal is K-sparse.
- The unknown x is complex Bernoulli-Gaussian, while Θ is complex Gaussian.
- GENIE estimators are the same as in the previous example.
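For concreteness, a small sketch of how data for this example might be generated; scipy.linalg.circulant builds A(Θ) from its first column, and the distributions below are assumptions mirroring the bullets above:

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(2)
N, M, S, K = 256, 103, 8, 20

theta = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
A_theta = circulant(theta)                  # circulant A(theta), first column theta

X = np.zeros((N, S), dtype=complex)         # K-sparse Bernoulli-Gaussian snapshots
for s in range(S):
    supp = rng.choice(N, K, replace=False)
    X[supp, s] = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)

Psi = rng.standard_normal((M, N)) / np.sqrt(M)   # mixing matrix
Y = Psi @ A_theta @ X                            # M x S measurements
```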

16 Conclusions and Future Work

We have developed a matrix-uncertain (MU) version of GAMP that
- adaptively adjusts the noise power for i.i.d. matrix errors,
- incorporates element-wise variances for independent, non-identically distributed errors, and
- offers an iterative approach for parametric matrix uncertainty.

Future work:
- MU-GAMP for spectral estimation
- Parametric MU-GAMP for dictionary learning and matrix completion
- Extension of the rigorous GAMP performance analysis to the MU-GAMP case
- Incorporation of MU-GAMP into an EM tuning approach to learn hyperparameters from the data

17 Bibliography

[1] Y. Chi, A. Pezeshki, L. Scharf, and R. Calderbank, "Sensitivity to basis mismatch in compressed sensing," in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), 2010.
[2] M. A. Herman and T. Strohmer, "General deviants: An analysis of perturbations in compressed sensing," IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, 2010.
[3] H. Zhu, G. Leus, and G. B. Giannakis, "Sparsity-cognizant total least-squares for perturbed compressive sampling," IEEE Transactions on Signal Processing, vol. 59, May 2011.
[4] M. Rosenbaum and A. B. Tsybakov, "Sparse recovery under matrix uncertainty," The Annals of Statistics, vol. 38, no. 5, 2010.
[5] D. L. Donoho, A. Maleki, and A. Montanari, "Message-passing algorithms for compressed sensing," Proceedings of the National Academy of Sciences, vol. 106, no. 45, 2009.
[6] S. Rangan, "Generalized approximate message passing for estimation with random linear mixing," arXiv preprint arXiv:1010.5141, 2010.
[7] P. Schniter, "Turbo reconstruction of structured sparse signals," in Proc. ICASSP.

18 Questions?
