Sparse Superposition Codes for the Gaussian Channel


Slide 1: Sparse Superposition Codes for the Gaussian Channel. Florent Krzakala (LPS, Ecole Normale Supérieure, France), joint work with J. Barbier (ENS). arXiv preprint, presented at ISIT 14. Long version in preparation.

Slide 2: Communication through the Gaussian channel. Message X (K bits) → coding: codeword Ỹ = F X of size M, with F an M × N matrix → noise ξ ~ N(0, σ²): noisy output Y = Ỹ + ξ → decoding: X̂ from (F, Y). Characteristics: power constraint ⟨Ỹ²⟩ = 1, signal-to-noise ratio SNR = 1/σ², Shannon capacity C = (1/2) log₂(1 + SNR). References: Forney & Ungerboeck 98 (review of modulation, shaping and coding), Richardson & Urbanke 08 (LDPC and turbo codes), Arikan 09, Abbe & Barron 11 (polar codes).
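
To make the setup concrete, here is a minimal Python sketch (not part of the original talk) that simulates this power-constrained Gaussian channel and evaluates the capacity formula; the SNR value is only an example.

```python
# Minimal sketch of the AWGN channel above: unit-power codeword, noise of
# variance sigma^2 = 1/SNR, and Shannon capacity C = 1/2 log2(1 + SNR).
import numpy as np

def awgn_channel(codeword, snr, rng=np.random.default_rng(0)):
    """Return Y = Y~ + xi with xi ~ N(0, 1/snr) componentwise."""
    return codeword + rng.normal(0.0, np.sqrt(1.0 / snr), size=codeword.shape)

def capacity(snr):
    """Shannon capacity of the Gaussian channel in bits per channel use."""
    return 0.5 * np.log2(1.0 + snr)

print(capacity(15.0))   # 2.0 bits per channel use, the value used later in the talk
```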

Slide 3: Sparse superposition codes (Joseph & Barron, ISIT 10). Same channel: coding Ỹ = F X, noisy output Y = Ỹ + ξ, decoding X̂ from (F, Y). F is an M × N matrix with i.i.d. random elements, and the codeword has size M. The message is a vector X made of L sections of B components each, with only one non-zero component per section. Dimension N = L·B, carrying K = L log₂(B) bits of information. Rate: R = K/M = N log₂(B)/(B·M).
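
A short sketch of this code construction may help; the parameter values below are illustrative and not taken from the talk.

```python
# Minimal sketch: build a sparse-superposition message with L sections of
# size B (one '1' per section) and check its dimension and rate.
import numpy as np

def random_message(L, B, rng=np.random.default_rng(0)):
    """X has L sections of B components, exactly one '1' per section."""
    X = np.zeros(L * B)
    X[np.arange(L) * B + rng.integers(0, B, size=L)] = 1.0
    return X

L, B, M = 1024, 64, 4096                 # M = codeword length (illustrative)
X = random_message(L, B)
N = L * B                                # signal dimension
K = L * np.log2(B)                       # information content in bits
R = K / M                                # rate, equivalently N*log2(B)/(B*M)
print(N, K, R)                           # 65536, 6144.0, 1.5
```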

Slide 4: Sparse superposition codes (Joseph & Barron, ISIT 10). For large B, the maximum-likelihood solution is capacity achieving: minimize ‖Y − F x̂‖₂ over x̂ such that x̂ has a single 1 per section. This is a hard computational problem.
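
To see why the exact problem is hard, here is a brute-force sketch (toy sizes only, not the authors' decoder): the search space contains B^L candidate supports, so exhaustive minimization is only feasible for tiny L and B.

```python
# Brute-force ML decoding: enumerate all B**L one-per-section supports and
# keep the one minimizing the residual norm. Exponential cost in L.
import numpy as np
from itertools import product

def ml_decode(y, F, L, B):
    """Exhaustive minimization of ||y - F x||_2 over one-per-section supports."""
    best, best_cost = None, np.inf
    for picks in product(range(B), repeat=L):        # B**L candidates
        x = np.zeros(L * B)
        x[np.arange(L) * B + np.array(picks)] = 1.0
        cost = np.linalg.norm(y - F @ x)
        if cost < best_cost:
            best, best_cost = picks, cost
    return np.array(best)

rng = np.random.default_rng(0)
L, B, M = 4, 4, 12                                   # only 256 candidates here
F = rng.normal(0.0, 1.0 / np.sqrt(L), (M, L * B))    # unit-power codeword
picks = rng.integers(0, B, L)
x = np.zeros(L * B); x[np.arange(L) * B + picks] = 1.0
y = F @ x + rng.normal(0.0, 0.25, M)                 # moderate noise, toy values
print(ml_decode(y, F, L, B), picks)                  # decoded vs true section indices
```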

Slide 5: Sparse superposition codes (Joseph & Barron, ISIT 10). For large B, the maximum-likelihood solution is capacity achieving. With proper power allocation, the adaptive successive decoder of Joseph and Barron achieves capacity as B → ∞.

Slide 6: Sparse superposition codes (Joseph & Barron, ISIT 10). With power allocation, the non-zero entry of section i carries an amplitude c_i (vector X with sections c₁, c₂, c₃, ...). Power constraint: ⟨c_i²⟩ = 1, with the exponentially decaying allocation c_i² ∝ e^(−2Ci/L).

Slide 7: Same power-allocated construction: power constraint ⟨c_i²⟩ = 1 with c_i² ∝ e^(−2Ci/L). In practice, however, results are FAR from capacity when B is finite.
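
A small sketch of this power allocation, assuming the constraint ⟨c_i²⟩ = 1 is enforced numerically; the values of L and C are illustrative.

```python
# Exponentially decaying power allocation c_i^2 ∝ exp(-2*C*i/L), normalized
# so that the average section power equals one.
import numpy as np

def power_allocation(L, C):
    """Per-section amplitudes c_1..c_L with c_i^2 ∝ exp(-2*C*i/L) and <c_i^2> = 1."""
    c2 = np.exp(-2.0 * C * np.arange(1, L + 1) / L)
    c2 *= L / c2.sum()                  # enforce the power constraint <c_i^2> = 1
    return np.sqrt(c2)

c = power_allocation(L=1024, C=2.0)
print(c[0], c[-1], np.mean(c**2))       # decaying amplitudes, mean power 1
```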

Slide 8: Sparse superposition codes (Joseph & Barron, ISIT 10). [Performance plots at SNR = 15 and SNR = 7.]

Slide 9: Part 1: The approximate message-passing algorithm.

Slide 10: Sparse superposition codes seen as a multi-dimensional estimation problem: the noisy output Y (M components) equals the coding matrix F (random M × N matrix) times the sparse signal X (L B-dimensional sections x₁, x₂, ...) plus noise ξ (M components).

Slide 11: Graphical model. Compute the marginals of P(X | F, Y) ∝ P(X) P(Y | F, X). The prior P(x_l) enforces «only one non-zero component» per section; the likelihood is P(y_μ | F, X) ∝ exp(−(snr/2)(y_μ − Σ_{i=1}^N F_{μi} x_i)²).

Slide 12: Graphical model. Compute the marginals of P(X | F, Y) ∝ P(X) P(Y | F, X). Belief propagation? Not really an option: i) densely connected model; ii) O(L²) messages; iii) combinatorial factor O(B^L); iv) way too slow! The BP messages read m_{i→μ}(x_i) ∝ P(x_i) ∏_{ν≠μ} m_{ν→i}(x_i).

Slide 13: From belief propagation to AMP. Compute the marginals of P(X | F, Y) ∝ P(X) P(Y | F, X). (i) A Gaussian relaxation (aka Gaussian-BP): m_{μ→i}(x_i) ∝ exp(−(x_i²/2) A_{μ→i} + B_{μ→i} x_i), so that m_{i→μ}(x_i) ∝ P(x_i) exp(−(x_i²/2) Σ_{ν≠μ} A_{ν→i} + Σ_{ν≠μ} B_{ν→i} x_i). (ii) Write the recursion in terms of single-site marginals (equivalent to TAP in statistical physics).

Slide 14: From belief propagation to AMP. (ii) continued: m_{i→μ}(x_i) ≈ m_i(x_i) + correction. Closing the equations on two moments for each variable gives 2N variables (AMP) instead of N² messages (BP).

Slide 15: From belief propagation to AMP. Compute the marginals of P(X | F, Y) ∝ P(X) P(Y | F, X). Matlab implementation + demo:

Slide 16: From belief propagation to AMP. A (perhaps) more familiar form from Monday's talk (Donoho, Maleki, Montanari 09): a^{t+1} = f_a^t(F^T z^t + a^t), z^t = y − F a^t + (1/δ) ⟨f_a^{t−1}′(F^T z^{t−1} + a^{t−1})⟩ z^{t−1}. Complexity is O(N²) per iteration for an i.i.d. matrix, O(N log N) for Hadamard/Fourier operators. Matlab implementation + demo:
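
Below is a minimal NumPy sketch of this AMP iteration together with the section-wise denoiser f_a (the posterior mean under the "one non-zero per section" prior). It is not the authors' Matlab demo: it assumes the standard Donoho-Maleki-Montanari normalization (i.i.d. N(0, 1/M) matrix entries, non-zero amplitude √(M/L) so the codeword keeps unit power) and the common empirical estimate τ = ‖z‖²/M for the effective noise variance; the toy rate and sizes are illustrative.

```python
import numpy as np

def section_denoiser(r, tau, L, B, c):
    """Posterior mean and variance of x under the 'one entry equal to c per
    section of size B' prior, given r = x + N(0, tau) componentwise."""
    logits = (c * r / tau).reshape(L, B)
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)                    # section-wise posterior weights
    return (c * w).ravel(), (c**2 * (w - w**2)).ravel()

def amp_decode(y, F, L, B, n_iter=30):
    """AMP iteration of the slide: a^{t+1} = f_a(F^T z^t + a^t) with an
    Onsager-corrected residual z^t."""
    M, N = F.shape
    c = np.sqrt(M / L)                                   # non-zero amplitude (unit codeword power)
    a = np.full(N, c / B)                                # initialize at the prior mean
    z = np.zeros(M)
    v_sum, tau = 0.0, 1.0
    for _ in range(n_iter):
        z = y - F @ a + (v_sum / (M * tau)) * z          # Onsager term uses the previous z
        tau = z @ z / M                                  # empirical effective noise variance
        a, v = section_denoiser(F.T @ z + a, tau, L, B, c)
        v_sum = v.sum()
    return a.reshape(L, B).argmax(axis=1)                # decoded section indices

# Toy run at B = 4, SNR = 15, R = 1.2 (below the R_BP = 1.55 quoted later for B = 4).
rng = np.random.default_rng(1)
L, B, snr, R = 512, 4, 15.0, 1.2
M, N = int(L * np.log2(B) / R), L * B
F = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
sections = rng.integers(0, B, size=L)
x = np.zeros(N)
x[np.arange(L) * B + sections] = np.sqrt(M / L)
y = F @ x + rng.normal(0.0, np.sqrt(1.0 / snr), size=M)
print("section error rate:", np.mean(amp_decode(y, F, L, B) != sections))
```

The cost per iteration is dominated by the two matrix-vector products, which is where the Hadamard/Fourier speed-up mentioned on the slide comes in.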

Slide 17: Part 2: Asymptotic analysis.

Slide 18: A heuristic computation of the optimal SER. Bayes-optimal estimate: marginalize with respect to P(X | Y, F) = (P(X)/Z) ∏_{μ=1}^{M} exp(−(SNR/2)(y_μ − Σ_{i=1}^N F_{μi} x_i)²). The replica method gives asymptotic estimates as N → ∞ (statistical physics, information theory, optimization, etc.). Used in CDMA: Tanaka 02, Guo & Verdú 06; in compressed sensing: Rangan et al. 09, Kabashima 09, Guo, Baron & Shamai 09, Krzakala et al. 11.

Slide 19: A heuristic computation of the optimal SER. 1) Assume that log Z is concentrated (NON RIGOROUS). 2) Use the replica identity ⟨log Z⟩ = lim_{n→0} (⟨Z^n⟩ − 1)/n (NON RIGOROUS). 3) After (a bit) of work, one obtains a representation ∫ dE e^{N Φ(E)}, and the MMSE corresponds to the maximum of the potential Φ(E). 4) From the MSE E, one can compute the average SER.

Slide 20: Single-letter characterization of the SER. The potential Φ(E) is expressed as a B-dimensional integral, and the SER is then computed as a function of the MSE E.

Slide 21: Single-letter characterization of the SER. [Plot of the potential Φ(E) for B = 2, SNR = 15, C = 2, at R = 1.5 (below the BP threshold), R = 1.68 (the BP threshold), R = 1.75, and R = 1.775 (the optimal threshold); the SER follows from the maxima of Φ.]

Slide 22: The gap to capacity closes polynomially with B.

Slide 23: The error floor decays polynomially with B.

Slide 24: State evolution analysis of AMP (as in Bayati & Montanari 10 for the scalar case). The MSE obeys a simple recursion across iterations, and the section error rate can be deduced from the MSE at all times.
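
Since the recursion itself is not reproduced in the transcription, here is a hedged Monte-Carlo sketch of one state-evolution step under the same normalization as the AMP sketch above, with the standard i.i.d.-matrix assumption that the effective noise variance is Σ²(E) = σ² + (N/M) E; the parameter values are illustrative.

```python
import numpy as np

def se_step(E, L, B, M, sigma2, n_samples=20000, rng=np.random.default_rng(0)):
    """One state-evolution step: map the per-component MSE E to the next MSE and
    to the predicted section error rate, by sampling sections observed through
    an effective AWGN of variance tau = sigma2 + (N/M) * E."""
    N, c = L * B, np.sqrt(M / L)
    tau = sigma2 + (N / M) * E
    r = rng.normal(0.0, np.sqrt(tau), size=(n_samples, B))
    r[:, 0] += c                                    # put the true non-zero at index 0
    logits = c * r / tau
    logits -= logits.max(axis=1, keepdims=True)
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)               # section-wise posterior weights
    a = c * w                                       # posterior mean of the section
    truth = np.zeros(B); truth[0] = c
    mse = np.mean(np.sum((a - truth) ** 2, axis=1)) / B
    ser = np.mean(w.argmax(axis=1) != 0)
    return mse, ser

L, B, R, snr = 512, 4, 1.2, 15.0                    # same toy parameters as the AMP sketch
M, sigma2 = int(L * np.log2(B) / R), 1.0 / snr
E = (M / L) / B                                     # MSE of the all-prior-mean estimate, ~ c^2/B
for _ in range(30):
    E, ser = se_step(E, L, B, M, sigma2)
print("predicted section error rate:", ser)
```

The fixed point of this recursion is the quantity compared with finite-size AMP runs on the next slide.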

Slide 25: State evolution vs AMP at finite sizes: B = 4, SNR = 15, N = 32768, R_BP = 1.55.

Slide 26: State evolution analysis and the replica potential. The fixed points of the state evolution correspond to the extrema of the replica potential Φ(E); the potential coincides with the Bethe free energy.

Slide 27: Phase diagram at SNR = 15 (C = 2), rate R versus section size B. [The plot shows the BP threshold and the optimal threshold separating the high-error and «low»-error regions; the optimal threshold R tends to C as B → ∞. Experimental points: AMP with L = 100; spatially coupled AMP + Hadamard with L = 100; iterative successive decoder with exponential power distribution, L = 100; spatially coupled AMP + Hadamard with L = 2^13; AMP + Hadamard with L = 2^13.]

Slide 28: Same phase diagram, now highlighting the AMP (BP) threshold: above it AMP is in the high-error phase, below it AMP + Hadamard (L = 2^13) reaches «low» error. As B → ∞ the AMP threshold tends to R_AMP = 1/[2 log(2)(1 + 1/SNR)]. AMP gets worse as B increases :-(

Slide 29: Same phase diagram once more: AMP falls short of the optimal threshold, but is still way better than the alternative :-)

Slide 30: Part 3: Two strategies to reach capacity.

Slide 31: Two ways to make AMP capacity achieving: spatial coupling and power allocation.

Slide 32: Two ways to make AMP capacity achieving: spatial coupling and power allocation. With power allocation, the non-zero entry of section i carries an amplitude c_i with c_i² ∝ 2^(−2Ci/L) and power constraint ⟨c_i²⟩ = 1. A generalization of the state evolution to structured matrices shows that AMP can then reach capacity as B → ∞. In practice, however, this turns out to be not so efficient.

Slide 33: Two ways to make AMP capacity achieving: spatial coupling. Use a structured, spatially coupled coding matrix F built from blocks: a seed block, unit coupling on the diagonal, small couplings J within an interaction range w, and null elements (no coupling) elsewhere; decoding then propagates from the seed. Similar constructions appear in compressed sensing: Pfister & Kudekar 09, Krzakala et al. 11, Donoho et al. 11.
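
A sketch of how such a seeded, spatially coupled operator can be generated; the block sizes, J, w and seed strength below are illustrative choices, not the values used in the talk.

```python
import numpy as np

def coupling_profile(n_blocks, J, w, J_seed):
    """Block-wise variance profile: 1 on the diagonal, J within a window of
    width w around it, a stronger seed in the first block, 0 elsewhere."""
    var = np.zeros((n_blocks, n_blocks))
    for r in range(n_blocks):
        for c in range(n_blocks):
            if r == c:
                var[r, c] = 1.0
            elif abs(r - c) <= w:
                var[r, c] = J
    var[0, 0] = J_seed                                # seed block that nucleates the decoding wave
    return var

def coupled_operator(var, rows_per_block, cols_per_block, rng=np.random.default_rng(0)):
    """Sample F block by block with the prescribed variances, scaled so that
    the entry variances in each block-row sum to one."""
    n = var.shape[0]
    blocks = [[rng.normal(0.0, 1.0, (rows_per_block, cols_per_block))
               * np.sqrt(var[r, c] / (cols_per_block * var[r].sum()))
               for c in range(n)] for r in range(n)]
    return np.block(blocks)

profile = coupling_profile(n_blocks=8, J=0.2, w=2, J_seed=1.5)
F = coupled_operator(profile, rows_per_block=128, cols_per_block=256)
print(F.shape)                                        # (1024, 2048)
```

In the talk, operators of this type (built from Hadamard blocks rather than dense Gaussian ones) are what the "SC AMP + Hadamard" curves in the phase diagrams refer to.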

Slide 34: Spatial coupling on the Lena image: L = 256², B = 256 grey levels. Matlab implementation + demo:

Slide 35: Part 4: Practical tests!

Slide 36: The phase diagram at SNR = 15 (C = 2) again (same figure as slide 29): AMP falls short of the optimal threshold, but is still way better than the alternative :-)

Slide 37: Comparing everything: snr = 15, B = 512, L = 1024, blocklength ~. [Plot of the average SER versus rate R, with the BP threshold and capacity marked, comparing: optimal power allocation + random operator + off-line AMP; optimal power allocation + Hadamard operator + on-line AMP; fixed spatial coupling + Hadamard operator + on-line AMP; full Hadamard operator + on-line AMP; fixed spatial coupling + optimal power allocation + Hadamard operator + on-line AMP.]

Slide 38: Conclusion. Simple coding scheme, capacity achieving with spatial coupling or power allocation. Similar phenomenology to LDPC codes. Efficient when used with structured operators such as Hadamard. Can be analyzed by state evolution and the replica method. Interesting finite-size performance?
