Optimum Rate Communication by Fast Sparse Superposition Codes

1 Optimum Rate Communication by Fast Sparse Superposition Codes
Andrew Barron, Department of Statistics, Yale University
Joint work with Antony Joseph and Sanghee Cho
Algebra, Codes and Networks Conference, Université de Bordeaux, June 18, 2014

2 Quest for Provably Practical and Reliable High Rate Communication
The Channel Communication Problem
Gaussian Channel
History of Methods
Communication by Regression
Sparse Superposition Coding
Adaptive Successive Decoding
Rate, Reliability, and Computational Complexity
Distributional Analysis
Simulations

3 Shannon Formulation
Input bits: $U = (U_1, U_2, \ldots, U_K)$, independent Bern(1/2)
Encoded: $x = (x_1, x_2, \ldots, x_n)$
Channel: $p(y|x)$
Received: $Y = (Y_1, Y_2, \ldots, Y_n)$
Decoded: $\hat U = (\hat U_1, \hat U_2, \ldots, \hat U_K)$
Rate: $R = K/n$
Capacity: $C = \max_{P_X} I(X; Y)$
Reliability: want small $\mathrm{Prob}\{\hat U \neq U\}$ or a reliably small fraction of errors

4 Gaussian Noise Channel
Input bits: $U = (U_1, U_2, \ldots, U_K)$
Encoded: $c = (c_1, c_2, \ldots, c_n)$ with $\frac{1}{n}\sum_{i=1}^n c_i^2 = P$
Channel: $p(y|c)$ with $Y = c + \varepsilon$, $\varepsilon \sim N(0, \sigma^2 I)$, $\mathrm{snr} = P/\sigma^2$
Received: $Y = (Y_1, Y_2, \ldots, Y_n)$
Decoded: $\hat U = (\hat U_1, \hat U_2, \ldots, \hat U_K)$
Rate: $R = K/n$
Capacity: $C = \frac{1}{2}\log(1 + \mathrm{snr})$
Reliability: want small $\mathrm{Prob}\{\hat U \neq U\}$ or a reliably small fraction of errors
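
For concreteness, a minimal simulation of this setup, a sketch in Python/NumPy; the parameter values are illustrative, chosen to match the slides' running example of snr = 7 and C = 1.5 bits:

    import numpy as np

    # Illustrative parameters (not prescribed by the slides).
    n, P, sigma2 = 1000, 7.0, 1.0
    snr = P / sigma2
    C = 0.5 * np.log2(1 + snr)                # capacity in bits per channel use

    rng = np.random.default_rng(0)
    c = rng.normal(size=n)
    c *= np.sqrt(P / np.mean(c**2))           # enforce the power constraint (1/n) sum c_i^2 = P
    Y = c + rng.normal(scale=np.sqrt(sigma2), size=n)   # Y = c + eps, eps ~ N(0, sigma^2 I)
    print(f"snr = {snr:.1f}, C = {C:.3f} bits per channel use")   # C = 1.500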

5 The Gaussian Noise Model
The Gaussian noise channel is the basic model for
wireless communication: radio, cell phones, television, satellite, space
wired communication: internet, telephone, cable

6 Shannon Theory meets Coding Practice
Forney and Ungerboeck 1998 review: modulation, shaping and coding for the Gaussian channel
Richardson and Urbanke 2008, state of the art: empirically good LDPC and turbo codes with fast encoding and decoding based on Bayesian belief networks
New spatial coupling techniques, Urbanke 2013: proof of rates up to capacity in some cases
Arikan 2009, Arikan and Telatar 2009, polar codes: adapted to the Gaussian channel (Abbe and Barron 2011)
Tropp 2006, 2008, codes from compressed sensing and related sparse signal recovery work: Wainwright; Fletcher, Rangan, Goyal; Zhang; others
Donoho, Montanari, et al. 2012: role of spatial coupling
$\ell_1$-constrained least squares: practical, has positive rate, but not capacity achieving
Knowledge of the above is not necessary to follow the presentation

7 Sparse Superposition Code
Input bits: $U = (U_1, \ldots, U_K)$
Coefficients: $\beta$, with sparsity structure as follows
Sparsity: $L$ sections, each of size $M$ (a power of 2), with one nonzero entry in each section
Indices of nonzeros: $(j_1, j_2, \ldots, j_L)$, specified by segments of $U$
Matrix: $X$, an $n$ by $ML$ dictionary with Normal(0, 1) entries
Codeword: $X\beta$, a superposition of the selected columns

8 Sparse Superposition Code
Input bits: $U = (U_1, \ldots, U_K)$
Coefficients: $\beta$, with $L$ sections, each of size $M$ (a power of 2), and one nonzero entry in each section
Indices of nonzeros: $(j_1, j_2, \ldots, j_L)$, specified by segments of $U$
Matrix: $X$, an $n$ by $ML$ dictionary with Normal(0, 1) entries
Codeword: $X\beta$, a superposition of the selected columns
Receive: $Y = X\beta + \varepsilon$
Decode: $\hat\beta$ and $\hat U$
Rate: $R = K/n$ with $K = L \log M$
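
A minimal encoder sketch in Python/NumPy; the names and the constant power split $P_l = P/L$ used here are our illustrative choices, not the slides':

    import numpy as np

    def encode(bits, L, M, X, P_l):
        """Map each block of log2(M) input bits to the index of the nonzero
        column in its section; the codeword is the superposition X @ beta."""
        b = int(np.log2(M))
        beta = np.zeros(L * M)
        for l in range(L):
            seg = bits[l * b:(l + 1) * b]                # segment of log2(M) bits
            j = int("".join(str(u) for u in seg), 2)     # index within section l
            beta[l * M + j] = np.sqrt(P_l[l])            # nonzero value sqrt(P_l)
        return X @ beta, beta

    # Illustrative sizes: L sections of size M, rate R = 1 bit per channel use.
    L, M, R, P = 32, 16, 1.0, 7.0
    K = L * int(np.log2(M)); n = int(K / R)
    rng = np.random.default_rng(1)
    X = rng.normal(size=(n, L * M))                      # n-by-ML dictionary, N(0,1) entries
    bits = rng.integers(0, 2, size=K)
    codeword, beta = encode(bits, L, M, X, np.full(L, P / L))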

9 Sparse Superposition Code
Input bits: $U = (U_1, \ldots, U_K)$
Coefficients: $\beta$, with $L$ sections, each of size $M$ (a power of 2), and one nonzero entry in each section
Indices of nonzeros: $(j_1, j_2, \ldots, j_L)$, specified by segments of $U$
Matrix: $X$, an $n$ by $ML$ dictionary with Normal(0, 1) entries
Codeword: $X\beta$, a superposition of the selected columns
Receive: $Y = X\beta + \varepsilon$
Decode: $\hat\beta$ and $\hat U$
Rate: $R = K/n$ with $K = L \log M$
Reliability: small $\mathrm{Prob}\{\text{fraction of } \hat\beta \text{ mistakes} \geq \delta\}$ for small $\delta$
Is it reliable with rate up to capacity?

10 Sparse Superposition Code
Input bits: $U = (U_1, \ldots, U_K)$
Coefficients: $\beta$, with $L$ sections, each of size $M$ (a power of 2), and one nonzero entry in each section
Indices of nonzeros: $(j_1, j_2, \ldots, j_L)$, specified by segments of $U$
Matrix: $X$, an $n$ by $ML$ dictionary with Normal(0, 1) entries
Codeword: $X\beta$, a superposition of the selected columns
Receive: $Y = X\beta + \varepsilon$
Decode: $\hat\beta$ and $\hat U$
Rate: $R = K/n$ with $K = L \log M$
Ultra-sparse case: impractical $M = 2^{nR/L}$ with $L$ constant (reliable at all rates $R < C$: Cover 1972, 1980)
Moderately-sparse case: practical $M = n$ with $L = nR/\log n$ (still reliable at all $R < C$)
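
A quick back-of-the-envelope comparison of the two regimes, a Python sketch with illustrative numbers:

    import numpy as np

    n, R = 4608, 1.0                        # illustrative blocklength and rate (bits)

    # Ultra-sparse (Cover): L constant, so each section is astronomically large.
    L_ultra = 4
    M_ultra = 2 ** int(n * R / L_ultra)     # 2^1152 columns per section: impractical

    # Moderately sparse: M = n, L = nR / log2(n), dictionary polynomial in n.
    M_mod = n
    L_mod = n * R / np.log2(n)              # about 379 sections
    print(L_mod, M_mod * L_mod)             # dictionary has ML ~ 1.7 million columns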

11 Sparse Superposition Code
Input bits: $U = (U_1, \ldots, U_K)$
Coefficients: $\beta$, with $L$ sections, each of size $M$ (a power of 2), and one nonzero entry in each section
Indices of nonzeros: $(j_1, j_2, \ldots, j_L)$, specified by segments of $U$
Matrix: $X$, an $n$ by $ML$ dictionary with Normal(0, 1) entries
Codeword: $X\beta$, a superposition of the selected columns
Receive: $Y = X\beta + \varepsilon$
Decode: $\hat\beta$ and $\hat U$
Rate: $R = K/n$ with $K = L \log M$
Reliability: small $\mathrm{Prob}\{\text{fraction of mistakes} \geq \delta\}$ for small $\delta$
Outer RS code: rate $1 - 2\delta$, corrects the remaining mistakes
Explanation: the outer code selects $(j_1, j_2, \ldots, j_L)$ where any two differ in more than a fraction $2\delta$ of the sections
Overall rate: $R_{tot} = (1 - 2\delta)R$ with small $\delta$
Overall rate: up to capacity

12 Power Allocation
Coefficients: $\beta$, with indices of nonzeros sent $= (j_1, j_2, \ldots, j_L)$
Coefficient values: $\beta_{j_l} = \sqrt{P_l}$
Power control: $\sum_{l=1}^L P_l = P$, so codewords $X\beta$ have average power $P$
Power allocations for $l = 1, 2, \ldots, L$:
Constant power: $P_l = P/L$
Variable power: $P_l$ proportional to $e^{-2Cl/L}$
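
Both allocations are one-liners; a sketch in Python/NumPy with illustrative parameter values:

    import numpy as np

    def power_allocation(L, P, snr, variable=True):
        """Constant allocation P/L, or variable allocation with P_l
        proportional to exp(-2*C*l/L), C = (1/2) ln(1+snr) in nats."""
        if not variable:
            return np.full(L, P / L)
        C = 0.5 * np.log(1 + snr)
        l = np.arange(1, L + 1)
        w = np.exp(-2 * C * l / L)
        return P * w / w.sum()               # normalize so sum_l P_l = P

    P_l = power_allocation(L=512, P=7.0, snr=7.0)
    assert np.isclose(P_l.sum(), 7.0)        # power control holds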

13 Variable Power Allocation
Power control: $\sum_{l=1}^L P_l = P$, i.e. $\|\beta\|^2 = P$
Variable power: $P_l$ proportional to $e^{-2Cl/L}$ for $l = 1, \ldots, L$
Figure: power allocation $P_l$ versus section index, with snr = 7 and L = 512.

14 Variable Power Allocation
Power control: $\sum_{l=1}^L P_l = P$, i.e. $\|\beta\|^2 = P$
Variable power: $P_l$ proportional to $e^{-2Cl/L}$ for $l = 1, \ldots, L$
Successive decoding motivation. Incremental capacity
$\frac{1}{2}\log\Big(1 + \frac{P_l}{\sigma^2 + P_{l+1} + \cdots + P_L}\Big) = \frac{C}{L}$
matching the section rate $\frac{R}{L} = \frac{\log M}{n}$
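
This identity is easy to verify numerically; the following Python/NumPy check (parameters illustrative) confirms that the exponential allocation gives every section the same incremental capacity $C/L$:

    import numpy as np

    sigma2, snr, L = 1.0, 7.0, 512
    C = 0.5 * np.log(1 + snr)                      # capacity in nats
    l = np.arange(1, L + 1)
    w = np.exp(-2 * C * l / L)
    P_l = snr * sigma2 * w / w.sum()               # P = snr * sigma^2

    # tail[l] = sigma^2 + P_{l+1} + ... + P_L (later sections treated as noise)
    tail = sigma2 + np.concatenate([np.cumsum(P_l[::-1])[::-1][1:], [0.0]])
    inc = 0.5 * np.log(1 + P_l / tail)             # incremental capacity of section l
    print(np.allclose(inc, C / L))                 # True: each section carries C/L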

15 Adaptive Successive Decoder [Joseph & B ISIT, 2013 IT]
Decoding steps (with thresholding)
Start [Step 1]: compute the inner product of $Y$ with each column of $X$; see which are above a threshold; form the initial fit as a weighted sum of the columns above threshold
Iterate [Step $k \geq 2$]: compute the inner product of the residuals $Y - \mathrm{Fit}_{k-1}$ with each remaining column of $X$; standardize by dividing by $\|Y - \mathrm{Fit}_{k-1}\|$; see which are above the threshold $\sqrt{2\log M} + a$; add these columns to the fit
Stop: at step $k = 1 + \mathrm{snr}\,\log M$, or when there are no additional inner products above threshold
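
A compact sketch of this loop in Python/NumPy; the function signature and implementation details are our assumptions, not the authors' implementation:

    import numpy as np

    def adaptive_successive_decode(Y, X, L, M, P_alloc, a=0.5, snr=7.0):
        """Thresholding decoder sketch: correlate the residual with each
        remaining column; keep columns whose standardized inner product
        clears sqrt(2 log M) + a; add them to the fit with weight sqrt(P_l)."""
        n, N = X.shape
        thresh = np.sqrt(2 * np.log(M)) + a
        weights = np.repeat(np.sqrt(P_alloc), M)   # weight sqrt(P_l) for section l's columns
        decoded = np.zeros(N, dtype=bool)
        fit = np.zeros(n)
        for k in range(int(1 + snr * np.log(M))):  # stop rule from the slide
            resid = Y - fit
            stats = X.T @ resid / np.linalg.norm(resid)   # standardized inner products
            stats[decoded] = -np.inf                      # skip already-decoded columns
            hits = stats > thresh
            if not hits.any():
                break
            decoded |= hits
            fit = X[:, decoded] @ weights[decoded]        # weighted sum of selected columns
        return np.flatnonzero(decoded)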

16 Complexity of Adaptive Successive Decoder
Complexity in a parallel, pipelined implementation
Space (using $k = \mathrm{snr}\,\log M$ copies of the $n$ by $N$ dictionary): $knN = \mathrm{snr}\;C\;M n^2$ memory positions; $kn$ multiplier/accumulators and comparators
Time: $O(1)$ per received $Y$ symbol

17 Adaptive Successive Decoder: soft decision [Cho & B 2012 ISIT, 2013 WITMSE]
Decoding steps (with iteratively optimal statistics)
Start [Step 1]: compute the inner product of $Y$ with each column of $X$; form the initial fit
Iterate [Step $k \geq 2$]: compute the inner product of the residuals $Y - \mathrm{Fit}_{k-1}$ with each $X_j$. Adjusted form: $\mathrm{stat}_{k,j} = (Y - X\hat\beta_{k-1,-j})^T X_j$, standardized by dividing by $\|Y - X\hat\beta_{k-1}\|$. Form the new fit
$\hat\beta_{k,j} = \sqrt{P_l}\,\hat w_j(\alpha) = \sqrt{P_l}\, \frac{e^{\alpha\,\mathrm{stat}_{k,j}}}{\sum_{j' \in \mathrm{sec}_l} e^{\alpha\,\mathrm{stat}_{k,j'}}}$
Stop: when the estimated success stops increasing, at step $k = O(\log M)$
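
A sketch of one soft-decision iteration in Python/NumPy; for brevity it uses the plain residual $Y - X\hat\beta_{k-1}$ in place of the slide's leave-one-out residual, and the per-section shifts alpha are taken as given:

    import numpy as np

    def soft_decision_update(Y, X, beta_hat, P_alloc, L, M, alpha):
        """One soft-decision step: standardized inner products of the
        residual with each column, then softmax weights sqrt(P_l) * w_j
        within each section (names are ours)."""
        resid = Y - X @ beta_hat
        stats = X.T @ resid / np.linalg.norm(resid)
        beta_new = np.empty(L * M)
        for l in range(L):
            s = stats[l * M:(l + 1) * M]
            e = np.exp(alpha[l] * (s - s.max()))       # stabilized softmax over section l
            beta_new[l * M:(l + 1) * M] = np.sqrt(P_alloc[l]) * e / e.sum()
        return beta_new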

18 Iteratively Bayes optimal $\hat\beta_k$
With the prior $j_l$ Unif on $\mathrm{sec}_l$, the Bayes estimate based on $\mathrm{stat}_k$, $\hat\beta_k = E[\beta \mid \mathrm{stat}_k]$, has the representation $\hat\beta_{k,j} = \sqrt{P_l}\,\hat w_{k,j}$ with $\hat w_{k,j} = \mathrm{Prob}\{j_l = j \mid \mathrm{stat}_k\}$.

19 Iteratively Bayes optimal $\hat\beta_k$
With the prior $j_l$ Unif on $\mathrm{sec}_l$, the Bayes estimate based on $\mathrm{stat}_k$, $\hat\beta_k = E[\beta \mid \mathrm{stat}_k]$, has the representation $\hat\beta_{k,j} = \sqrt{P_l}\,\hat w_{k,j}$ with $\hat w_{k,j} = \mathrm{Prob}\{j_l = j \mid \mathrm{stat}_k\}$.
Here, when the $\mathrm{stat}_{k,j}$ are independent $N(\alpha_{l,k}\,1_{\{j = j_l\}},\,1)$, we have the logit representation $\hat w_{k,j} = \hat w_j(\alpha_{l,k})$ where
$\hat w_j(\alpha) = \frac{e^{\alpha\,\mathrm{stat}_{k,j}}}{\sum_{j' \in \mathrm{sec}_l} e^{\alpha\,\mathrm{stat}_{k,j'}}}$.

20 Iteratively Bayes optimal $\hat\beta_k$
With the prior $j_l$ Unif on $\mathrm{sec}_l$, the Bayes estimate based on $\mathrm{stat}_k$, $\hat\beta_k = E[\beta \mid \mathrm{stat}_k]$, has the representation $\hat\beta_{k,j} = \sqrt{P_l}\,\hat w_{k,j}$ with $\hat w_{k,j} = \mathrm{Prob}\{j_l = j \mid \mathrm{stat}_k\}$.
Here, when the $\mathrm{stat}_{k,j}$ are independent $N(\alpha_{l,k}\,1_{\{j = j_l\}},\,1)$, we have the logit representation $\hat w_{k,j} = \hat w_j(\alpha_{l,k})$ where
$\hat w_j(\alpha) = \frac{e^{\alpha\,\mathrm{stat}_{k,j}}}{\sum_{j' \in \mathrm{sec}_l} e^{\alpha\,\mathrm{stat}_{k,j'}}}$.
Recall $\mathrm{stat}_{k,j}$ is the standardized inner product of the residuals with $X_j$:
$\mathrm{stat}_{k,j} = \frac{(Y - X\hat\beta_{k-1,-j})^T X_j}{\|Y - X\hat\beta_{k-1}\|}$

21 Distributional Analysis
Approximate distribution of these statistics: independent standard normal, shifted for the terms sent:
$\mathrm{stat}_{k,j} = \alpha_{l,x_{k-1}}\,1_{\{j\ \mathrm{sent}\}} + Z_{k,j}$
where
$\alpha_{l,x} = \sqrt{\frac{n P_l}{\sigma^2 + P(1 - x)}}$
Here $E\|\hat\beta_{k-1} - \beta\|^2 = P(1 - x_{k-1})$.
Update rule: $x_k = g(x_{k-1})$ where $g(x) = \sum_{l=1}^L (P_l/P)\,E[w_{j_l}(\alpha_{l,x})]$.

23 Success Rate Update Rule
Update rule: $x_k = g(x_{k-1})$ where
$g(x) = \sum_{l=1}^L (P_l/P)\,E[w_{j_l}(\alpha_{l,x})]$
This is the success-rate update function, expressed as a weighted sum of the posterior probabilities of the terms sent.

24 Success Rate Update Rule
Update rule: $x_k = g(x_{k-1})$ where
$g(x) = \sum_{l=1}^L (P_l/P)\,E[w_{j_l}(\alpha_{l,x})]$
This is the success-rate update function, expressed as a weighted sum of the posterior probabilities of the terms sent.
Empirical success rate: $\tilde x_k = \sum_{l=1}^L (P_l/P)\,w_{j_l}(\alpha_{l,x_k})$

25 Success Rate Update Rule
Update rule: $x_k = g(x_{k-1})$ where
$g(x) = \sum_{l=1}^L (P_l/P)\,E[w_{j_l}(\alpha_{l,x})]$
This is the success-rate update function, expressed as a weighted sum of the posterior probabilities of the terms sent.
Empirical success rate: $\tilde x_k = \sum_{l=1}^L (P_l/P)\,w_{j_l}(\alpha_{l,x_k})$
Empirical estimated success rate: $\hat x_k = \sum_{l=1}^L (P_l/P) \sum_{j \in \mathrm{sec}_l} [w_j(\alpha_{l,x_k})]^2$
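
Since $g$ is an expectation under the shifted-normal model, it can be evaluated by Monte Carlo; a sketch in Python/NumPy, with function and argument names our own:

    import numpy as np

    def g_of_x(x, P_alloc, sigma2, M, n, n_mc=1000, rng=None):
        """Monte Carlo evaluation of g(x) = sum_l (P_l/P) E[w_{j_l}(alpha_{l,x})],
        simulating stat_j = alpha * 1{j sent} + Z_j within each section."""
        rng = rng or np.random.default_rng()
        P = P_alloc.sum()
        g = 0.0
        for Pl in P_alloc:
            alpha = np.sqrt(n * Pl / (sigma2 + P * (1 - x)))
            Z = rng.normal(size=(n_mc, M))
            Z[:, 0] += alpha                         # say the sent term is j_l = 0
            W = np.exp(alpha * (Z - Z.max(axis=1, keepdims=True)))   # stabilized
            w_sent = W[:, 0] / W.sum(axis=1)         # posterior weight of the sent term
            g += (Pl / P) * w_sent.mean()
        return g

Iterating $x_k = g(x_{k-1})$ from $x_0 = 0$ traces the decoding progression plotted on the next slide.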

26 Decoding progression
Figure: plot of $g(x)$ and the sequence $x_k$; $M = 2^9$, $L = M$, snr = 7, $C = 1.5$ bits, $R = 1.2$ bits ($0.8C$).

27 Update functions
Figure: comparison of update functions; $M = 2^9$, $L = M$, snr = 7, $C = 1.5$ bits, $R = 1.2$ bits ($0.8C$). Blue and light blue lines indicate the $\{0, 1\}$ decision using the threshold $\tau = \sqrt{2\log M} + a$ for the indicated values $a = 0$ and $a = 0.5$; a lower bound is also shown.

28 Success Progression plots
Figure: progression plots of the expected weight of the terms sent, for the soft decision and the hard decision with $a = 1/2$, at success rates $x = 0, 0.2, 0.4, 0.6, 0.8$; $M = 2^9$, $L = M$, $C = 1.5$ bits, $R = 0.8C$, computed by Monte Carlo simulation. The horizontal axis depicts $u(l) = 1 - e^{-2Cl/L}$, an increasing function of $l$. The area under the curve equals $g(x)$.

29 Rate and Reliability Result for the Optimal ML Decoder
[Joseph and B IT], with outer RS decoder, and with equal power in the sections:
Prob error exponentially small in $n$ for all $R < C$:
$\mathrm{Prob}\{\mathrm{Error}\} \leq e^{-n(C - R)^2/(2V)}$
In agreement with the Shannon-Gallager exponent of the optimal code, though with a suboptimal constant $V$ depending on the snr.

30 Rate and Reliability of the Fast Superposition Decoder
Practical: adaptive successive decoder, with outer RS code.
Prob error exponentially small in $n/(\log M)$ for $R < C$
[Joseph and B ISIT, 2013 IT] for thresholding; [Cho and B ISIT, 2013 WITMSE] for soft decision
$C_M$ is identified, approaching capacity at rate $1/(\log M)^{0.5}$, or, with a variant of the power allocation, at rate $1/\log M$.
Probability of error exponentially small in $L$ for $R < C_M$:
$\mathrm{Prob}\{\mathrm{Error}\} \leq e^{-c_1 L (C_M - R)^2}$
Improves to $e^{-c_2 L (C_M - R)^2 (\log M)^{0.5}}$ using a Bernstein bound.

31 Simulation
Figure: $L = 512$, $M = 64$, snr = 15, $C = 2$ bits, $R = 1$ bit. Ran 20 trials for each residual-based method. Green lines for hard thresholding; blue and red for soft decision.

32 Framework: Improved Iterative Statistics, for $k \geq 1$
Codeword fits: $F_k = X\hat\beta_k$
Statistic vector $\mathrm{stat}_k$ constructed from inner products of $(Y, F_1, \ldots, F_k)$ with the columns of $X$
Want $\mathrm{stat}_{k,j}$ better than $X_j^T(Y - F_k)$
The update estimate $\hat\beta_{k+1}$ is a function of $\mathrm{stat}_k$, as before
Thresholding (adaptive successive decoder): $\hat\beta_{k+1,j} = \sqrt{P_l}\,1_{\{\mathrm{stat}_{k,j} > \mathrm{thres}\}}$
Soft decision: $\hat\beta_{k+1,j} = E[\beta_j \mid \mathrm{stat}_k] = \sqrt{P_l}\,\hat w_{k,j}$, with thresholding on the last step

33 Orthogonal Components
Codeword fits: $F_k = X\hat\beta_k$
Orthogonalization: let $G_0 = Y$ and, for $k \geq 1$, let $G_k$ = the part of $F_k$ orthogonal to $G_0, G_1, \ldots, G_{k-1}$
Components of the statistics: $Z_{k,j} = \frac{X_j^T G_k}{\|G_k\|}$
Statistics such as the $\mathrm{stat}_{k,j}$ built from $X_j^T(Y - F_{k,-j})$ are linear combinations of these $Z_{k,j}$
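
The construction is plain Gram-Schmidt on $(Y, F_1, F_2, \ldots)$; a sketch in Python/NumPy, names ours:

    import numpy as np

    def orthogonal_components(Y, fits):
        """G_0 = Y, and G_k is the part of the fit F_k orthogonal to
        G_0, ..., G_{k-1} (classical Gram-Schmidt)."""
        G = [Y]
        for F in fits:                       # fits = [F_1, F_2, ...], F_k = X @ beta_hat_k
            g = F.astype(float).copy()
            for Gi in G:
                g -= (Gi @ g) / (Gi @ Gi) * Gi
            G.append(g)
        return G

    def Z_components(X, G):
        """Z_{k,j} = X_j^T G_k / ||G_k|| for every column j and every k."""
        return [X.T @ Gk / np.linalg.norm(Gk) for Gk in G]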

34 Distribution Evolution
Lemma 1 (shifted normal conditional distribution): given $\mathcal{F}_{k-1} = (G_0, \ldots, G_{k-1}, Z_0, Z_1, \ldots, Z_{k-1})$, the $Z_k$ has the distributional representation
$Z_k = \frac{\|G_k\|}{\sigma_k}\,b_k + \tilde Z_k$, with $\|G_k\|^2/\sigma_k^2 \sim$ Chi-square$(n - k)$,
where $b_0, b_1, \ldots, b_k$ are the successive orthonormal components of $[\beta], [\hat\beta_1], \ldots, [\hat\beta_k]$,
$\tilde Z_k \sim N(0, \Sigma_k)$ independent of $\|G_k\|$, with $\Sigma_k = I - b_0 b_0^T - b_1 b_1^T - \cdots - b_k b_k^T$ = the projection onto the space orthogonal to $(b_0, \ldots, b_k)$,
and $\sigma_k^2 = \hat\beta_k^T \Sigma_{k-1} \hat\beta_k$.

35 Combining Components
Class of statistics $\mathrm{stat}_k$ formed by combining $Z_0, \ldots, Z_k$:
$\mathrm{stat}_{k,j} = Z^{comb}_{k,j} + \sqrt{\frac{n}{c_k}}\,\hat\beta_{k,j}$
with $Z^{comb}_k = \lambda_{0,k} Z_0 + \lambda_{1,k} Z_1 + \cdots + \lambda_{k,k} Z_k$ and $\sum_{k'} \lambda^2_{k',k} = 1$

36 Combining Components
Class of statistics $\mathrm{stat}_k$ formed by combining $Z_0, \ldots, Z_k$:
$\mathrm{stat}_{k,j} = Z^{comb}_{k,j} + \sqrt{\frac{n}{c_k}}\,\hat\beta_{k,j}$
with $Z^{comb}_k = \lambda_{0,k} Z_0 + \lambda_{1,k} Z_1 + \cdots + \lambda_{k,k} Z_k$ and $\sum_{k'} \lambda^2_{k',k} = 1$
Ideal distribution of the combined statistics:
$\mathrm{stat}^{ideal}_k = \sqrt{\frac{n}{c_k}}\,\beta + Z^{comb}_k$, with $c_k = \sigma^2 + (1 - x_k)P$ and $Z^{comb}_k \sim \mathrm{Normal}(0, I)$.
For the terms sent, the shift $\alpha_{l,k}$ has an effective-snr interpretation:
$\alpha_{l,k} = \sqrt{\frac{n P_l}{c_k}} = \sqrt{\frac{n P_l}{\sigma^2 + P_{remaining,k}}}$

37 Oracle statistics
Weights of combination: $\lambda_k$ proportional to
$\big((\sigma_Y - b_0^T\hat\beta_k),\ (-b_1^T\hat\beta_k),\ \ldots,\ (-b_k^T\hat\beta_k)\big)$
Combining the $Z_{k'}$ with these weights, and replacing $\chi_{n-k}$ with $\sqrt{n}$, produces the desired distributional representation
$\mathrm{stat}_k = \sqrt{\frac{n}{\sigma^2 + \|\beta - \hat\beta_k\|^2}}\,\beta + Z^{comb}_k$
with $Z^{comb}_k \sim N(0, I)$ and $\sigma_Y^2 = \sigma^2 + P$.

38 Oracle statistics
Weights of combination: $\lambda_k$ proportional to
$\big((\sigma_Y - b_0^T\hat\beta_k),\ (-b_1^T\hat\beta_k),\ \ldots,\ (-b_k^T\hat\beta_k)\big)$
Combining the $Z_{k'}$ with these weights, and replacing $\chi_{n-k}$ with $\sqrt{n}$, produces the desired distributional representation
$\mathrm{stat}_k = \sqrt{\frac{n}{\sigma^2 + \|\beta - \hat\beta_k\|^2}}\,\beta + Z^{comb}_k$
with $Z^{comb}_k \sim N(0, I)$ and $\sigma_Y^2 = \sigma^2 + P$.
Can't calculate the weights without knowing $\beta$, but $\|\beta - \hat\beta_k\|^2$ is close to its known expectation, and e.g. $b_0$ is proportional to $\beta$.
Provides the desired distribution of the $\mathrm{stat}_{k,j}$.

39 Statistics based on estimated weights of combination
Oracle weights of combination: $\lambda_k$ proportional to
$\big((\sigma_Y - b_0^T\hat\beta_k),\ (-b_1^T\hat\beta_k),\ \ldots,\ (-b_k^T\hat\beta_k)\big)$
Estimated weights of combination: $\lambda_k$ proportional to
$\big((\|Y\| - Z_0^T\hat\beta_k),\ (-Z_1^T\hat\beta_k),\ \ldots,\ (-Z_k^T\hat\beta_k)\big)$
These estimated weights produce the residual-based statistics previously discussed.

40 Orthogonalization Interpretation of Weights
Estimation of $\big((\sigma_Y - b_0^T\hat\beta_k),\ (-b_1^T\hat\beta_k),\ \ldots,\ (-b_k^T\hat\beta_k)\big)$:
these $b_{k'}^T\hat\beta_k$ arise in the QR decomposition of $B = [\beta, \hat\beta_1, \ldots, \hat\beta_k]$.

41 Cholesky Decomposition for $B^T B$
The $b_{k'}^T\hat\beta_k$ arise in the Cholesky decomposition $B^T B = R^T R$.

42 Cholesky Decomposition for $B^T B$
Replace the entries with deterministic estimates based on $x_k P$, with $c_k = \sigma^2 + (1 - x_k)P$, with $\omega_k = 1/c_k - 1/c_{k-1}$ for $k \geq 1$, and $\omega_0 = 1/c_0$.

43 Cholesky Decomposition for $B^T B$
Replace the entries with deterministic estimates based on $x_k P$, with $c_k = \sigma^2 + (1 - x_k)P$, with $\omega_k = 1/c_k - 1/c_{k-1}$ for $k \geq 1$, and $\omega_0 = 1/c_0$.
The rightmost column yields suitable deterministic weights of combination.

44 Deterministic weights of combination
For given $\hat\beta_1, \ldots, \hat\beta_k$ and $c_0 > \cdots > c_k$, with $c_k = \sigma^2 + (1 - x_k)P$:
combine $Z_{k'} = \sqrt{n}\,b_{k'} + \tilde Z_{k'}$ with $\lambda_k = \sqrt{c_k}\,\big(\sqrt{\omega_0}, \sqrt{\omega_1}, \ldots, \sqrt{\omega_k}\big)$,
yielding the approximately optimal statistics
$\mathrm{stat}_k = \sum_{k'=0}^{k} \lambda_{k',k} Z_{k'} + \sqrt{\frac{n}{c_k}}\,\hat\beta_k$
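
A sketch of this weight computation in Python/NumPy; the square-root placement follows the normalization $\sum_{k'} \lambda^2_{k',k} = 1$, since the $\omega$'s telescope to $1/c_k$:

    import numpy as np

    def deterministic_weights(x, sigma2, P):
        """Deterministic combining weights: c_k = sigma^2 + (1 - x_k) P,
        omega_0 = 1/c_0, omega_k = 1/c_k - 1/c_{k-1}, and
        lambda = sqrt(c_k * omega), so the squared weights sum to 1."""
        x = np.asarray(x)                     # success rates x_0 < x_1 < ... < x_k
        c = sigma2 + (1 - x) * P              # c_0 > c_1 > ... > c_k
        omega = np.diff(1.0 / c, prepend=0.0) # omega_0 = 1/c_0, then the differences
        lam = np.sqrt(c[-1] * omega)
        assert np.isclose((lam ** 2).sum(), 1.0)   # telescoping: c_k * (1/c_k) = 1
        return lam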

45 Update Plots for Deterministic and Oracle Weights
Figure: $L = 512$, $M = 64$, snr = 15, $C = 2$ bits, $R = 0.7C$. Ran 10 experiments for each method. We see that they follow the expected update function.

46 Improving the End Game
Variable power: $P_l$ proportional to $e^{-2Cl/L}$ for $l = 1, \ldots, L$
We use an alternative power allocation: leveling the power allocation to a constant over the last portion of the sections.
Figure: power allocations with and without leveling; $L = 512$, snr = 7, $C = 1.5$ bits.

47 Progression Plot using the Alternative Power Allocation
Figure: progression plot of the final step, showing the weights for the true term in section $l$ with and without leveling; $L = 512$, $M = 64$, snr = 7 (snr.ref = 7), $C = 1.5$ bits, $R = 1.05$ bits ($0.7C$), blocklength $n = 2926$. While the area under the curve is about the same, the expected weights for the last sections are higher when we level the power at the end.

48 Bit Error Rate
Figure: bit error rate; $L = 512$, $M = 64$, $R = 1.05$ bits, blocklength $n = 2926$, snr.ref = 7, snr $\in \{4, 5, 7, 10, 15\}$. Ran 10,000 trials; average of the error count out of 512 sections.

49 Block Error Rate
Figure: block error rate; $L = 512$, $M = 64$, $R = 1.05$ bits, blocklength $n = 2926$, snr.ref = 7, snr $\in \{4, 5, 7, 10, 15\}$. Ran 10,000 trials.

50 Block Error Rate
Large-$L$ regime, $M$ not yet large. Identify the rate limits: there is a tradeoff in $R_{inner}$ and in the fixed point $x$ where $g(x) = x$. Let $R^*(\mathrm{snr})$ be the maximized value of the total rate $x \cdot R_{inner}$.
Figure: log probability of error versus snr (dB), for $M = 64$ (0.72) and $M = 128$ (0.7), together with the Shannon limit; $R_{target} = 1.05$ bits. Vertical bars at the snr at which $R_{target}$ equals capacity and at the snr at which $R^*(\mathrm{snr})$ matches the target.

51 Summary
Sparse superposition codes with adaptive successive decoding. Simplicity of the code permits:
distributional analysis of the decoding progression
a low-complexity decoder
exponentially small error probability for any fixed $R < C$

52 Improving the Adaptive Successive Decoder
Introduce successively decoded chapters of the dictionary
Update coefficient estimates across all sections
Use a Reed-Solomon code for each chapter to clean away section errors once the estimates are high
The reduced interference yields an improved success progression

53 Chaptering
Rule to compute the number of chapters and the sections in each (a sketch follows below):
Set the total number of sections $L$ and the rate $R = 0.8C$.
Precompute the progressions $g_l(x)$ at the current success rate $x_k$.
Identify the section index $l_k$ at which the area to the left is $x_k$; for the specified rate, this is equivalent to the $l$ where $E w_{j_l} \approx 0.78$.
Use these indices as the chapter boundaries.
A Reed-Solomon code decodes the most recent chapter.
Remove the associated interference when updating $g_l(x)$, where $g_l(x) = E w_{j_l}$.
Chapter creation ends when $x_k$ can no longer be increased.
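
A minimal sketch of the boundary selection, in Python/NumPy; the 0.78 threshold is the slide's, while the helper itself and the profile below are our assumptions:

    import numpy as np

    def chapter_boundary(Ew, threshold=0.78):
        """Pick a chapter boundary from the expected weights Ew[l] of the
        true terms: since the power allocation makes Ew decreasing in l,
        the chapter ends at the last section clearing the threshold."""
        above = np.flatnonzero(np.asarray(Ew) >= threshold)
        return int(above[-1]) + 1 if above.size else 0   # sections before the boundary form the chapter

    # Illustrative decreasing profile over L = 512 sections.
    Ew = np.linspace(0.95, 0.3, 512)
    print(chapter_boundary(Ew))                          # first section left to a later chapter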

54 Example use of Chapters
Parameters: $L = 512$, $M = 64$, $n = 2926$, $R = 1.05$ bits $= 0.7C$
Number of sections in each chapter: $L_1 = 120$, $L_2 = 117$, $L_3 = 109$, $L_4 = 88$, $L_5 = 78$

55 Progression of Success with Chaptering
Figure: expected weight of the terms sent, without chaptering and with chaptering, at steps 1 through 9, plotted against $u(l)$; $L = 512$, $M = 64$, $n = 2926$, snr = 7, rate = 1.05 bits ($0.7C$).

56 Update Function
Figure: update function $x_{k+1}$ versus $x_k$; $L = 512$, $M = 64$, $n = 2926$, snr = 7, rate = 1.05 bits ($0.7C$). The total success rate is improved by chaptering.
