Communication by Regression: Achieving Shannon Capacity

1 Communication by Regression: Practical Achievement of Shannon Capacity
Andrew Barron, Department of Statistics, Yale University
Workshop: Infusing Statistics and Engineering, Harvard University, June 5-6, 2011

2 Practical Communication by Regression: Achieving Shannon Capacity
Gaussian Channel with Power Constraints
History of Methods
Communication by Regression: Sparse Superposition Coding
Adaptive Successive Decoding
Distributional Analysis
Rate, Reliability, and Computational Complexity

3 Shannon Formulation
Input bits: U = (U_1, U_2, ..., U_K), indep Bern(1/2)
Encoded: x = (x_1, x_2, ..., x_n) = x(U)
Channel: p(y|x)
Received: Y = (Y_1, Y_2, ..., Y_n)
Decoded: Û = (Û_1, Û_2, ..., Û_K)
Rate: R = K/n
Capacity: C = max_{P_X} I(X; Y)
Reliability: want small Prob{Û ≠ U} and small Prob{fraction of mistakes ≥ α}

4 Gaussian Noise Channel
Input bits: U = (U_1, U_2, ..., U_K)
Encoded: x = (x_1, x_2, ..., x_n), with power constraint (1/n) Σ_{i=1}^n x_i^2 ≤ P
Channel: p(y|x), with Y = x(U) + ε, ε ~ N(0, σ^2 I), snr = P/σ^2
Received: Y = (Y_1, Y_2, ..., Y_n)
Decoded: Û = (Û_1, Û_2, ..., Û_K)
Rate: R = K/n
Capacity: C = (1/2) log(1 + snr)
Reliability: want small Prob{Û ≠ U} and small Prob{fraction of mistakes ≥ α}

5 Gaussian Noise Channel
Input bits: U = (U_1, U_2, ..., U_K)
Encoded: x = (x_1, x_2, ..., x_n), with average power constraint E_U[(1/n) Σ_{i=1}^n x_i^2] ≤ P
Channel: p(y|x), with Y = x(U) + ε, ε ~ N(0, σ^2 I), snr = P/σ^2
Received: Y = (Y_1, Y_2, ..., Y_n)
Decoded: Û = (Û_1, Û_2, ..., Û_K)
Rate: R = K/n
Capacity: C = (1/2) log(1 + snr)
Reliability: want small Prob{Û ≠ U} and small Prob{fraction of mistakes ≥ α}
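
As a quick numerical check of the capacity formula, here is a minimal sketch (the function name is ours; snr = 1 and snr = 15 are the two values used in the figures later in the talk):

```python
import math

def awgn_capacity(snr):
    """Shannon capacity C = (1/2) log2(1 + snr), in bits per channel use."""
    return 0.5 * math.log2(1 + snr)

print(awgn_capacity(1))    # 0.5 bits, the snr = 1 example used later
print(awgn_capacity(15))   # 2.0 bits, the snr = 15 example used later
```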

6 Shannon Theory
Channel Capacity: the supremum of rates R such that reliable communication is possible, with arbitrarily small error probability
Information Capacity: C = max_{P_X} I(X; Y), where I(X; Y) is the Shannon information, also known as the Kullback divergence between P_{X,Y} and P_X × P_Y
Shannon Channel Capacity Theorem: the supremum of achievable communication rates R equals the information capacity C
Books: Shannon (49), Gallager (68), Cover & Thomas (06)

7 The Gaussian Noise Model
The Gaussian noise channel is the basic model for
wireless communication: radio, cell phones, television, satellite, space
wired communication: internet, telephone, cable

8 Shannon Theory Meets Coding Practice
Forney and Ungerboeck 1998 review modulation, shaping, and coding for the Gaussian channel.
Richardson & Urbanke 2008 cover the coding state of the art: empirically good LDPC and turbo codes with fast encoding and decoding based on Bayesian belief networks. Some tools for theoretical analysis exist, yet obstacles remain for a proof of rates up to capacity for the Gaussian channel.
Arikan 2009, Arikan and Telatar 2009: polar codes, adapted to the Gaussian channel (Abbe and Barron 2011).
Tropp 2006, 2008: codes from compressed sensing and related sparse signal recovery work (Wainwright; Fletcher, Rangan, Goyal; Zhang; others). l_1-constrained least squares is practical and has positive rate, but is not capacity achieving.
The method here is different from those above. Prior knowledge of them is not necessary to follow what we present.

9 Sparse Superposition Code
Input bits: U = (U_1, ..., U_K)
Coefficients: β = (β_1, ..., β_N)^T
Sparsity: L entries non-zero out of N
Matrix: X, n by N, all entries indep Normal(0, 1)
Codeword: Xβ, a superposition of a subset of columns
Receive: Y = Xβ + ε, a statistical linear model
Decode: β̂ and Û from X, Y

10 Sparse Superposition Code
Input bits: U = (U_1, ..., U_K)
Coefficients: β = (β_1, ..., β_N)^T
Sparsity: L entries non-zero out of N
Matrix: X, n by N, all entries indep Normal(0, 1)
Codeword: Xβ, a superposition of a subset of columns
Receive: Y = Xβ + ε
Decode: β̂ and Û from X, Y
Rate: R = K/n from K = log (N choose L), near L log(Ne/L)
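
A quick sketch checking the slide's approximation log (N choose L) ≈ L log(Ne/L) numerically (the sizes here are illustrative, not from the slides):

```python
import math

def exact_bits(N, L):
    """K = log2(N choose L): bits carried by the choice of L columns."""
    return math.log2(math.comb(N, L))

def approx_bits(N, L):
    """The slide's approximation L * log2(N e / L)."""
    return L * math.log2(N * math.e / L)

N, L = 4096, 64   # illustrative sizes
print(exact_bits(N, L), approx_bits(N, L))   # close, with the exact value slightly smaller
```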

11 Sparse Superposition Code
Input bits: U = (U_1, ..., U_K)
Coefficients: β = (β_1, ..., β_N)^T
Sparsity: L entries non-zero out of N
Matrix: X, n by N, all entries indep Normal(0, 1)
Codeword: Xβ, a superposition of a subset of columns
Receive: Y = Xβ + ε
Decode: β̂ and Û from X, Y
Rate: R = K/n from K = log (N choose L)
Reliability: small Prob{fraction of β̂ mistakes ≥ α}, small α

12 Sparse Superposition Code
Input bits: U = (U_1, ..., U_K)
Coefficients: β = (β_1, ..., β_N)^T
Sparsity: L entries non-zero out of N
Matrix: X, n by N, all entries indep Normal(0, 1)
Codeword: Xβ, a superposition of a subset of columns
Receive: Y = Xβ + ε
Decode: β̂ and Û from X, Y
Rate: R = K/n from K = log (N choose L)
Reliability: small Prob{fraction of β̂ mistakes ≥ α}, small α
Is it reliable with rate up to capacity?

13 Partitioned Superposition Code
Input bits: U = (U_1, ..., U_K)
Coefficients: β = (β_1, ..., β_N)^T
Sparsity: L sections, each of size B = N/L, a power of 2; 1 non-zero entry in each section
Indices of nonzeros: (j_1, j_2, ..., j_L), specified by segments of U
Matrix: X, n by N, splits into L sections
Codeword: Xβ, a superposition of columns, one from each section
Receive: Y = Xβ + ε
Decode: β̂ and Û
Rate: R = K/n from K = L log(N/L) = L log B
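
A minimal encoding sketch under this partitioned structure (the function name and the small sizes are illustrative; the nonzero values are set to 1 here, while the power-allocation slide below uses √P_l):

```python
import numpy as np

def encode(bits, X, L, B, coeffs):
    """Sketch of the partitioned encoder: each log2(B)-bit segment of the
    input selects the nonzero position within its section of beta;
    coeffs[l] is the value placed there."""
    b = int(np.log2(B))
    beta = np.zeros(L * B)
    for l in range(L):
        seg = bits[l * b:(l + 1) * b]
        j = int("".join(str(s) for s in seg), 2)  # index within section l
        beta[l * B + j] = coeffs[l]
    return X @ beta, beta

rng = np.random.default_rng(0)
L, B, n = 8, 16, 32                      # illustrative small sizes
X = rng.standard_normal((n, L * B))      # dictionary, entries indep N(0, 1)
bits = rng.integers(0, 2, L * 4)         # K = L log2(B) = 32 input bits
codeword, beta = encode(bits, X, L, B, coeffs=np.ones(L))
```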

14 Partitioned Superposition Code
Input bits: U = (U_1, ..., U_K)
Coefficients: β = (β_1, ..., β_N)^T
Sparsity: L sections, each of size B = N/L, a power of 2; 1 non-zero entry in each section
Indices of nonzeros: (j_1, j_2, ..., j_L), specified by segments of U
Matrix: X, n by N, splits into L sections
Codeword: Xβ, a superposition of columns, one from each section
Receive: Y = Xβ + ε
Decode: β̂ and Û
Rate: R = K/n from K = L log(N/L) = L log B
Ultra-sparse case: impractical B = 2^{nR/L} with L constant (reliable at all rates R < C: Cover 1972, 1980)
Moderately sparse case: practical B = n with L = nR/log n
Is it reliable with rate up to capacity?

15 Partitioned Superposition Code
Input bits: U = (U_1, ..., U_K)
Coefficients: β = (β_1, ..., β_N)^T
Sparsity: L sections, each of size B = N/L, a power of 2; 1 non-zero entry in each section
Indices of nonzeros: (j_1, j_2, ..., j_L), specified by segments of U
Matrix: X, n by N, splits into L sections
Codeword: Xβ, a superposition of columns, one from each section
Receive: Y = Xβ + ε
Decode: β̂ and Û
Rate: R = K/n from K = L log(N/L) = L log B
Reliability: small Prob{fraction of mistakes ≥ α}, small α
Outer RS code: rate 1 − α, corrects remaining mistakes
Overall rate: R_tot = (1 − α) R
Overall rate up to capacity?

16 Power Allocation
Coefficients: β = (β_1, ..., β_N)^T
Indices of nonzeros: sent = (j_1, j_2, ..., j_L)
Coefficient values: β_{j_l} = √P_l, for l = 1, 2, ..., L
Power control: Σ_{l=1}^L P_l = P
Codewords: Xβ, have average power P
Power Allocations
Constant power: P_l = P/L
Variable power: P_l proportional to e^{−2Cl/L}
Variable with leveling: P_l proportional to max{e^{−2Cl/L}, cut}
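
A sketch of the three allocation schemes on the slide (the explicit normalization, and taking C in nats so that e^{−2C} = 1/(1 + snr), are our reading of the slide's proportionality statements):

```python
import numpy as np

def allocate_power(P, L, snr, scheme="variable", cut=None):
    """Return P_1, ..., P_L under the constant, variable, or leveled
    allocation, normalized so that sum(P_l) = P."""
    C = 0.5 * np.log(1 + snr)                   # capacity in nats (assumption)
    l = np.arange(1, L + 1)
    if scheme == "constant":
        w = np.ones(L)                          # P_l = P/L
    elif scheme == "variable":
        w = np.exp(-2 * C * l / L)              # P_l proportional to e^{-2Cl/L}
    else:                                       # variable with leveling
        w = np.maximum(np.exp(-2 * C * l / L), cut)
    return P * w / w.sum()

P_l = allocate_power(P=15.0, L=512, snr=15.0)   # snr = 15 example
```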

17 Power Allocation
[Figure: power allocation P_l versus section index l, with total power 7.]

18 Adaptive Successive Decoder
Decoding Steps
Start [Step 1]: compute the inner product of Y with each column of X; see which are above a threshold; form the initial fit as a weighted sum of the columns above threshold.
Iterate [Step k ≥ 2]: compute the inner product of the residuals Y − Fit_{k−1} with each remaining column of X; see which are above threshold; add these columns to the fit.
Stop: at step k = log B, or if there are no inner products above threshold.

19 Contrast Two Decoders
Decoders using the received Y = Xβ + ε
Optimal: l_0-constrained least squares decoder using equal power, β̂ = argmin_β ||Y − Xβ||^2
maximizes the posterior probability of the codeword given the data
minimizes the probability of error with a uniform input distribution
reliable for all R < C, with the best form of error exponent
Practical: Adaptive Successive Decoder
fast decoder
reliable using variable power allocation for all R < C
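
To make the contrast concrete, here is what the optimal decoder amounts to for the partitioned code, as a hedged sketch (exhaustive search; all names are illustrative):

```python
import itertools
import numpy as np

def ml_decode(Y, X, L, B, coeffs):
    """Brute-force least-squares decoder: search all B^L codewords for
    argmin ||Y - X beta||^2.  Feasible only for tiny L and B."""
    best_err, best_idx = np.inf, None
    for idx in itertools.product(range(B), repeat=L):
        beta = np.zeros(L * B)
        for l, j in enumerate(idx):
            beta[l * B + j] = coeffs[l]
        err = np.linalg.norm(Y - X @ beta) ** 2
        if err < best_err:
            best_err, best_idx = err, idx
    return best_idx   # one selected column index per section
```

With L sections of size B there are B^L candidate codewords (already 16^8, about 4 × 10^9, at the small sizes of the encoding sketch above), which is why a fast practical decoder is needed.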

20 Rate and Reliability
Optimal Decoder: l_0-constrained least squares
Prob error exponentially small in n for all R < C:
Prob{Error} ≤ e^{−n (C − R)^2 / (2V)}
In agreement with the Shannon-Gallager exponent of the optimal code, though with a suboptimal constant V depending on the snr

21 Rate and Reliability
Practical: Adaptive Successive Decoder, with outer RS code
Prob error exponentially small in n/(log B)^{1/2} for R < C
Value C_B approaching capacity: C_B = C/(1 + c_1/log B)
Probability of error exponentially small in L for R < C_B:
Prob{Error} ≤ e^{−c_2 L (C_B − R)^2}
Improves to e^{−c_3 L (C_B − R)^2 (log B)^{0.5}} using a Bernstein bound; nearly optimal when C_B − R is at least C − C_B
Our c_1 is near (…/snr) log log B + 4C

22 Ingredients in the Practical Decoder
Adaptive Successive Decoder Ingredients
Initialization: res_1 = Y and J_1 = {1, 2, ..., N}, with N = LB
Loop:
Residual: res_k = Y − Fit_{k−1}
Test stat: Z^comb_{k,j} = X_j^T res_k / ||res_k||
Threshold: τ = √(2 log B) + a
Detections: 1_{H_{k,j}} = 1{Z^comb_{k,j} ≥ τ}
Fit increment: F_k = Σ_{j ∈ J_k} √P_j X_j 1_{H_{k,j}}
Fit update: Fit_k = Fit_{k−1} + F_k
Remaining: J_{k+1} = {j ∈ J_k : Z^comb_{k,j} < τ}
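
A runnable sketch of this loop (assumptions: we use the per-step residual statistic exactly as listed on this slide, rather than the across-step combination of the analysis slides, and a = 0.5 is an illustrative choice):

```python
import numpy as np

def adaptive_successive_decode(Y, X, P_col, B, a=0.5):
    """Sketch of the adaptive successive decoder.  P_col[j] is the power
    allocated to the section containing column j, so the fit increment
    uses sqrt(P_col[j])."""
    n, N = X.shape
    tau = np.sqrt(2 * np.log(B)) + a           # threshold
    remaining = np.ones(N, dtype=bool)         # J_k
    fit = np.zeros(n)                          # Fit_{k-1}
    decoded = []
    for _ in range(int(np.ceil(np.log(B)))):   # about log B steps
        res = Y - fit                          # residual
        Z = X.T @ res / np.linalg.norm(res)    # test statistics
        hits = remaining & (Z >= tau)          # detections above threshold
        if not hits.any():
            break
        fit = fit + X[:, hits] @ np.sqrt(P_col[hits])   # fit update
        decoded.extend(np.flatnonzero(hits).tolist())
        remaining &= ~hits                     # J_{k+1}
    return sorted(decoded)
```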

23 Tracking Progress
Message sent = (j_1, j_2, ..., j_L)
False Alarms
Increment: f̂_k = Σ_{j ∈ J_k, not sent} π_j 1_{H_{k,j}}
Total: f̂_{1,k} = f̂_1 + f̂_2 + ... + f̂_k
Correct Detections
Increment: q̂_k = Σ_{j ∈ J_k, sent} π_j 1_{H_{k,j}}
Total: q̂_{1,k} = q̂_1 + q̂_2 + ... + q̂_k
Weights: π_j = P_j/P, where P_j is the power allocated to the section containing j
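
These totals are simple to compute from a decoded index set; a minimal sketch (names are illustrative):

```python
def progress(decoded, sent, pi):
    """Weighted correct-detection and false-alarm totals,
    with pi[j] = P_j / P as on the slide."""
    q_hat = sum(pi[j] for j in decoded if j in sent)
    f_hat = sum(pi[j] for j in decoded if j not in sent)
    return q_hat, f_hat

# e.g. progress(decoded, set(sent_indices), pi) after running the decoder sketch
```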

24 Decoding Progression
[Figure: likely progression of the weighted fraction of correct detections q̂_{1,k} (plot of g_L(x) versus x), for B = 2^16, L = B, snr = 1, C = 0.5 bits, R = 0.31 bits (0.62 C).]

25 Distributional Analysis
Test statistic ingredients: Z_{k,j} = X_j^T G_k / ||G_k||
The Y, F_1, ..., F_{k−1} have orthogonalized components G_1, G_2, ..., G_k
Approximate distribution: Z_{k,j} is a shift of indep N(0, 1) r.v.s,
√(w_k u_j (C/R) 2 log B) 1{j sent} + Z_{k,j}, with u_{j_l} = u(l/L) = e^{−2Cl/L},
where w_k has sum s_k = w_1 + w_2 + ... + w_k equaling s(x) = 1/(1 − νx), ν = snr/(1 + snr),
evaluated at x = x_{k−1}, based on the weighted fraction of previous detections;
x_k is at least the adjusted fraction q^adj_{1,k} ≥ q_{1,k} − f_{1,k};
initialized with w_1 = 1 and x_0 = 0.

26 Distributional Analysis
Combined test statistic: Z^comb_{k,j} = Σ_{k'=1}^k √λ_{k'} Z_{k',j}
Weights of combination: λ_{k'} sum to 1
Best weights: λ_{k'} = w_{k'}/s_k, proportional to w_{k'}
Approximate distribution: Z^comb_{k,j} is the maximized shift of indep N(0, 1),
√((u_j C/R) 2 log B / (1 − νx)) 1{j sent} + Z^comb_{k,j},
evaluated at x = x_{k−1}, with u_{j_l} = u(l/L) = e^{−2C(l−1)/L}
Expected weighted fraction of terms sent above threshold:
g(x) = Σ_{j sent} π_j Φ( √((u_j C/R) 2 log B / (1 − νx)) − τ ),
called the performance update function
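
A sketch of iterating the performance update function, with g(x) written out from the shift formula above (our reading of the garbled formulas; C in nats, a = 0.5, and the parameter choices mirror the snr = 15 figure):

```python
import numpy as np
from scipy.stats import norm

def g(x, L, B, snr, R, a=0.5):
    """Performance update function: section weights pi_l = P_l / P under
    the variable allocation, shift sqrt(u_l (C/R) 2 log B / (1 - nu x)),
    threshold tau = sqrt(2 log B) + a."""
    C = 0.5 * np.log(1 + snr)
    nu = snr / (1 + snr)
    tau = np.sqrt(2 * np.log(B)) + a
    l = np.arange(1, L + 1)
    u = np.exp(-2 * C * l / L)
    pi = u / u.sum()                      # pi_l = P_l / P
    shift = np.sqrt(u * (C / R) * 2 * np.log(B) / (1 - nu * x))
    return float(np.sum(pi * norm.cdf(shift - tau)))

# Iterate x_k = g(x_{k-1}) from x_0 = 0, as in the progression plots.
L = B = 2 ** 16
snr = 15.0
R = 0.52 * 0.5 * np.log(1 + snr)          # R = 0.52 C, as in the figure
x = 0.0
for _ in range(int(np.ceil(np.log(B)))):  # order log B steps
    x = g(x, L, B, snr, R)
print(x)   # likely lower bound on the weighted fraction of correct detections
```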

27 Lower Bounding Correct Detections
Correct Detections
Increment: q̂_k = Σ_{j ∈ J_k, sent} π_j 1_{H_{k,j}}
Total: q̂_{1,k} = q̂_1 + q̂_2 + ... + q̂_k
Equivalent: Σ_{j sent} π_j 1_{H_{1,j} ∪ H_{2,j} ∪ ... ∪ H_{k,j}}
Lower bound: Σ_{j sent} π_j 1_{H_{k,j}}
LB expectation: q*_{1,k}
Reliability: q̂_{1,k} > q_{1,k} with high prob for q_{1,k} < q*_{1,k}

28 Lower Bounding Correct Detections
Correct Detections
Increment: q̂_k = Σ_{j ∈ J_k, sent} π_j 1_{H_{k,j}}
Total: q̂_{1,k} = q̂_1 + q̂_2 + ... + q̂_k
Equivalent: Σ_{j sent} π_j 1_{H_{1,j} ∪ H_{2,j} ∪ ... ∪ H_{k,j}}
Lower bound: Σ_{j sent} π_j 1_{H_{k,j}}
Expectation: q*_{1,k}
Reliability: q̂_{1,k} > q_{1,k} with high prob for q_{1,k} = q*_{1,k} − η
Recursive evaluation: q*_{1,k} = g(q^adj_{1,k−1})
g(x) shown to exceed x by at least const/log B for R ≤ R_B
g(x) evaluated at x_{k−1} = q^adj_{1,k−1} yields x_k, a likely lower bound on correct detections
reaches x_k ≥ 1 − const/log B in order log B steps

29 Upper Bounding False Alarms
False Alarms
Increment: f̂_k = Σ_{j ∈ J_k, not sent} π_j 1_{H_{k,j}}
Upper bound: Σ_{j not sent} π_j 1_{H_{k,j}}
Expectation: f*
Target level: f* less than const/(log B)^2
Total: f̂_{1,k} = f̂_1 + f̂_2 + ... + f̂_k
UB expectation: k f*
Reliability: f̂_{1,k} less than k f with high prob for f > f*
Total bound: const/log B with number of steps k of order log B

30 Decoding Progression
[Figure: likely progression of the weighted fraction of correct detections q̂_{1,k} (plot of g_L(x) versus x), for B = 2^16, L = B, snr = 15, C = 2 bits, R = 1.04 bits (0.52 C).]

31 Decoding Progression, Example Bounds
[Figure: plot of g(x) and the sequence x_k for snr = 15, with variable power allocation; B = 2^16, L = B, C = 2 bits, R = 1.04 bits (0.52 C). The caption reports the threshold level a, the final false alarm and failed detection rates, and the associated bound on the probability of that fraction of mistakes.]

32 Decoding Progression
[Figure: plot of g(x) and the sequence x_k for snr = 1, with constant power allocation; B = 2^16, L = B, C = 0.5 bits, R = 0.31 bits (0.62 C). The caption reports the threshold level a, the final false alarm and failed detection rates, and the associated probability bound.]

33 Summary
Sparse superposition coding is fast and reliable at rates up to channel capacity
The formulation and analysis blend modern statistical regression and information theory

34 Rate versus Section Size
[Figure: rate R (bits/ch. use) as a function of section size B, for snr = 15 and a fixed error probability; curves shown: detailed envelope, curve when L = B, curve using simulation runs, and capacity.]

35 Rate versus Section Size
[Figure: rate R (bits/ch. use) as a function of section size B, for snr = 1 and a fixed error probability; curves shown: detailed envelope, curve when L = B, curve using simulation runs, and capacity.]
