Single-letter Characterization of Signal Estimation from Linear Measurements

Single-letter Characterization of Signal Estimation from Linear Measurements
Dongning Guo, Dror Baron, Shlomo Shamai
The work has been supported by the European Commission in the framework of the FP7 Network of Excellence in Wireless Communications NEWCOM++, by the Israel Science Foundation, and by the National Science Foundation.

Linear Measurement Systems
- 1809: Theoria motus corporum coelestium. Gauss introduced the application of least squares (regression) to solve noisy linear systems, motivated by astronomy/navigation
- Goal: estimate the input x to explain the measurements y
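Gauss's least-squares approach can be sketched in a few lines; the dimensions and noise level below are illustrative, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overdetermined noisy linear system: y = Phi @ x + z (more measurements than unknowns)
N, M = 5, 50                       # input dimension, number of measurements
x = rng.standard_normal(N)         # unknown input
Phi = rng.standard_normal((M, N))  # measurement matrix
y = Phi @ x + 0.1 * rng.standard_normal(M)  # noisy measurements

# Least-squares (regression) estimate of x from y, as in Gauss's formulation
x_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

With many more measurements than unknowns, `x_hat` lands very close to the true `x`.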

Non-linear Signal Estimation
- Linear signal estimation (least squares) is sub-optimal; example: hard decisions used to estimate binary data
- Difficult problem with noisy observations; even over-determined problems can be challenging
- Need an information-theoretic framework for non-linear signal estimation in linear measurement systems
[figure: underdetermined vs. overdetermined systems]

Linear Measurement Application Areas
- Compressed sensing
- Multiuser communication (CDMA)
- Medical imaging (tomography)
- Financial prediction
- Electromagnetic scattering
- Seismic imaging (oil industry)

Problem Definition

Setting
- Replace samples by more general measurements based on a few linear projections (inner products)
[figure: measurements y, sparse signal x, # non-zeros]

Signal Model
- Signal entry X_n = B_n U_n
- B_n iid ~ Bernoulli(ε) (sparse)
- U_n iid ~ P_U
[figure: P_X as the product of Bernoulli(ε) and multiplier P_U]
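The signal model X_n = B_n U_n can be sampled directly. The slides leave P_U general; taking P_U to be standard Gaussian here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

N, eps = 100_000, 0.1
B = rng.random(N) < eps       # B_n ~ Bernoulli(eps): support indicator
U = rng.standard_normal(N)    # U_n ~ P_U (standard Gaussian chosen for illustration)
X = B * U                     # X_n = B_n * U_n: sparse signal, ~eps fraction nonzero
```

The empirical fraction of nonzeros in `X` concentrates around ε for large N.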

Non-Sparse Input
- Can use ε = 1: X_n = U_n ~ P_U

Measurement Noise
- The measurement process is typically analog
- Analog systems add noise, non-linearities, etc.
- Assume Gaussian noise for ease of analysis
- Can be generalized to non-Gaussian noise [Guo & Wang 2007; Rangan 2010]

Noise Model
- Noiseless measurements denoted y0
- Noisy measurements: y = y0 + noise
- Unit-norm columns; noiseless SNR = γ
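A minimal sketch of this measurement model. The slides do not spell out the SNR normalization; one common convention, assumed here, is i.i.d. Gaussian noise of variance 1/γ added to measurements taken with unit-norm columns.

```python
import numpy as np

rng = np.random.default_rng(2)

N, M, gamma = 200, 100, 10.0           # dimensions and SNR are illustrative
Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, axis=0)     # unit-norm columns, as on the slide
x = rng.standard_normal(N) * (rng.random(N) < 0.1)  # sparse input

y0 = Phi @ x                           # noiseless measurements
z = rng.standard_normal(M) / np.sqrt(gamma)  # noise variance 1/gamma (assumed convention)
y = y0 + z                             # noisy measurements
```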

Allerton 2006 [Sarvotham, Baron, & Baraniuk]
- Model the process as a measurement channel
[diagram: source encoder → channel encoder → channel → channel decoder → source decoder, with CS measurement as encoding and CS decoding as decoding]
- Measurements provide information!
- Preliminary single-letter bound for compressed sensing and linear measurement systems

Related Results
- Numerous single-letter bounds: [Aeron, Zhao, & Saligrama] [Akcakaya & Tarokh] [Rangan, Fletcher, & Goyal] [Gastpar & Reeves] [Wang, Wainwright, & Ramchandran] [Tune, Bhaskaran, & Hanly]
- BP multiuser detection: [Tanaka & Takeda] [Guo & Wang] [Montanari & Tse]
- Arbitrary noise: [Rangan] [Guo & Wang]

Goal: Precise Single-letter Characterization of Optimal CS [Guo, Baron, & Shamai 2009]

What Single-letter Characterization?
[figure: measurement channel and posterior]
- Ultimately, what can one say about X_n given Y? ((Y, Φ) is a sufficient statistic)
- Very complicated; want a simple characterization of its quality
- Large-system limit:

Main Result: Single-letter Characterization
- Result 1: Conditioned on X_n = x_n, the observations (Y, Φ) are statistically equivalent to a scalar channel with degradation η (easy to compute)
- Estimation quality from (Y, Φ) is just as good as from a noisier scalar observation

Details
- η ∈ (0,1) is a fixed point of [equation on slide]
- Take-home point: degraded scalar channel
- Non-rigorous, owing to the replica method with symmetry assumption used in CDMA detection [Tanaka 2002; Guo & Verdú 2005]
- Related analysis [Rangan, Fletcher, & Goyal 2009]: MMSE estimate (not posterior) using [Guo & Verdú 2005], extended to several CS algorithms, particularly LASSO
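The "easy to compute" scalar-channel estimate can be made concrete. For the slides' Bernoulli-Gaussian signal X = B·U with B ~ Bernoulli(ε), and assuming P_U = N(0,1) for illustration, the posterior mean under a scalar Gaussian channel V = √s·X + N(0,1) has a closed form (a two-component Gaussian mixture; the symbol s for the effective scalar SNR is my notation, not the slides').

```python
import numpy as np

def scalar_mmse(v, s, eps):
    """Posterior mean E[X | V=v] for V = sqrt(s)*X + N(0,1),
    X = B*U with B ~ Bernoulli(eps), U ~ N(0,1)."""
    # Component likelihoods of v (up to shared constants):
    # active entry:   V ~ N(0, s + 1);  zero entry: V ~ N(0, 1)
    like_on = eps * np.exp(-v**2 / (2 * (s + 1))) / np.sqrt(s + 1)
    like_off = (1 - eps) * np.exp(-v**2 / 2)
    p_on = like_on / (like_on + like_off)   # posterior P(B = 1 | v)
    return p_on * np.sqrt(s) * v / (s + 1)  # Wiener gain on the active component
```

The estimator shrinks weak observations toward zero (likely a zero entry) and applies a linear Wiener correction to strong ones.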

Decoupling [Guo, Baron, & Shamai 2009]

Decoupling Result
- Result 2: In the large-system limit, any arbitrary (constant) number L of input elements decouple
- Take-home point: interference from each individual signal entry vanishes

Sparse Measurement Matrices [Sarvotham, Baron, & Baraniuk 2006] [Guo, Baron, & Shamai 2009] [Baron, Sarvotham, & Baraniuk 2010]

Sparse Measurement Matrices
- LDPC-like measurement matrix (sparse): mostly zeros in Φ; nonzeros ~ P_Φ
- Each row contains ≈ Nq randomly placed nonzeros
- Fast matrix-vector multiplication → fast encoding / decoding
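A minimal sketch of such a sparse matrix and its fast matrix-vector product, storing only the nonzero positions and values per row. Taking P_Φ as ±1 is an illustrative choice; the slides leave P_Φ general.

```python
import numpy as np

rng = np.random.default_rng(3)

N, M, q = 1000, 400, 0.02
k = int(N * q)                                   # ~Nq nonzeros per row
cols = np.stack([rng.choice(N, size=k, replace=False) for _ in range(M)])
vals = rng.choice([-1.0, 1.0], size=(M, k))      # nonzeros drawn from P_Phi (+-1 here)

def matvec(x):
    # y_m = sum_j vals[m, j] * x[cols[m, j]] -- cost O(M k) instead of O(M N)
    return (vals * x[cols]).sum(axis=1)

x = rng.standard_normal(N)
y = matvec(x)
```

With q = 0.02, each row touches only 20 of the 1000 signal entries, so encoding is 50x cheaper than a dense multiply.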

CS Decoding Using BP [Baron, Sarvotham, & Baraniuk 2006]
- Measurement matrix represented by a graph
- Estimate the input iteratively
- Implemented via nonparametric BP [Bickson, Sommer, …]
[figure: bipartite graph between signal x and measurements y]
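The nonparametric BP decoder itself is involved; as a much simpler stand-in with the same estimate-and-refine structure, here is iterative soft thresholding, a basic iterative CS decoder. This is not the CS-BP algorithm of the slides, and all parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

N, M, eps = 400, 200, 0.05
Phi = rng.standard_normal((M, N)) / np.sqrt(M)       # columns roughly unit-norm
x = rng.standard_normal(N) * (rng.random(N) < eps)   # sparse input
y = Phi @ x                                          # noiseless measurements, for simplicity

# Iterative soft thresholding: gradient step on the residual, then shrink toward zero
x_hat = np.zeros(N)
step, lam = 0.1, 0.05
for _ in range(2000):
    r = x_hat + step * Phi.T @ (y - Phi @ x_hat)             # refine using the residual
    x_hat = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # soft threshold
```

Each iteration pushes the estimate toward consistency with the measurements, then exploits sparsity by zeroing small entries, loosely mirroring the iterative message-passing updates on the measurement graph.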

Identical Single-letter Characterization w/ BP [Montanari & Tse 2006; Guo & Wang 2008]
- Result 3: Conditioned on X_n = x_n, the observations (Y, Φ) are statistically equivalent to the same degraded scalar channel
- Rigorous result; identical degradation
- Sparse matrices are just as good; BP is asymptotically optimal!

CS-BP vs Other CS Methods (N=1000, ε=0.1, q=0.02)
[plot: MMSE vs. number of measurements M]

Conclusion
- Single-letter characterization of CS
- Decoupling
- Sparse matrices just as good
- Asymptotically optimal CS-BP algorithm

THE END