Decomposition Methods for Large Scale LP Decoding

1 Decomposition Methods for Large Scale LP Decoding Siddharth Barman Joint work with Xishuo Liu, Stark Draper, and Ben Recht

2 Outline: Background and Problem Setup (LP Decoding Formulation); Optimization Framework (ADMM); Technical Core (Projecting onto the Parity Polytope); Numerical Results (Graphs)

3 Efficient and Reliable Digital Transmission Error correcting codes: constructs that enable reliable delivery of digital data over unreliable (noisy) communication channels. [Photos: Claude Shannon, Richard Hamming.]

4 Model for Communication Channel [Diagram: Alice sends x through a noisy channel; Bob receives x̃.] We will focus on transmission of bit strings (x ∈ {0,1}^n) and the Binary Symmetric Channel (BSC): each bit is flipped with probability p and delivered unchanged with probability 1 − p. [Figure 1: BSC with crossover probability p.]

5 Probabilistic Implications of the BSC Say x is the transmitted bit string and x̃ is the received bit string. The BSC is memoryless, hence Pr(x̃ | x) = ∏_i Pr(x̃_i | x_i). Example: Pr(x̃ = 001 | x = 000) = (1 − p)(1 − p)p.
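To make the memoryless factorization concrete, here is a small Python sketch (the function name and interface are ours, not from the talk):

```python
def bsc_likelihood(received, sent, p):
    """Pr(received | sent) for a memoryless BSC with crossover probability p:
    a product of per-bit terms, p for a flipped bit and 1 - p otherwise."""
    assert len(received) == len(sent)
    prob = 1.0
    for r, s in zip(received, sent):
        prob *= p if r != s else 1.0 - p
    return prob
```

For the slide's example, bsc_likelihood((0, 0, 1), (0, 0, 0), p) evaluates to (1 − p)²p.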

6 Executive Summary of Error Correcting Codes [Diagram: Alice sends x ∈ Dictionary through the noisy channel; Bob receives x̃ ∉ Dictionary.] Codewords C: a dictionary (a structured set). Error detection: spell checking. Error correction: spell correction.

7 Maximum Likelihood (ML) Decoding With codebook C and received bit string x̃, ML decoding picks a codeword x ∈ C that maximizes the probability that x̃ was received given that x was sent, Pr(received x̃ | sent x): maximize Pr(x̃ | x) subject to x ∈ C, equivalently maximize ∏_i Pr(x̃_i | x_i) subject to x ∈ C.

8 Maximum Likelihood (ML) Decoding With codebook C and received bit string x̃, ML decoding picks a codeword x ∈ C that maximizes the probability that x̃ was received given that x was sent, Pr(received x̃ | sent x): maximize Pr(x̃ | x) subject to x ∈ C, equivalently maximize ∏_i Pr(x̃_i | x_i) subject to x ∈ C, equivalently maximize Σ_i log Pr(x̃_i | x_i) subject to x ∈ C.

9 Maximum Likelihood (ML) Decoding ML decoding: maximize Σ_i log Pr(x̃_i | x_i) s.t. x ∈ C. Negative log-likelihood ratios for all i: γ_i = log(p/(1 − p)) if x̃_i = 1, and γ_i = log((1 − p)/p) if x̃_i = 0. ML decoding: minimize Σ_i γ_i x_i s.t. x ∈ C.
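As a sanity check on the sign conventions, a short Python sketch computing the γ_i (our naming):

```python
import math

def neg_log_likelihood_ratios(received, p):
    """gamma_i = log(p/(1-p)) if the received bit is 1, log((1-p)/p) if it
    is 0, so that minimizing sum_i gamma_i * x_i over codewords x matches
    maximizing the log-likelihood up to an additive constant."""
    return [math.log(p / (1 - p)) if r else math.log((1 - p) / p)
            for r in received]
```

With p < 1/2, a received 0 makes γ_i positive (pushing x_i toward 0) and a received 1 makes it negative, as expected.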

10 Low Density Parity Check (LDPC) Codes LDPC: x ∈ C iff all the parity checks are satisfied. Parity checks: (x1, x2, x3), (x1, x3, x4), (x2, x5, x6), (x4, x5, x6). Codeword bits: x1, x2, x3, x4, x5, x6. Example: x = (1 0 1 0 1 1). Parity check 1: (1 0 1). Parity check 2: (1 1 0). Parity check 3: (0 1 1). Parity check 4: (0 1 1).

11 Decoding Low Density Parity Check (LDPC) Codes Parity checks: (x1, x2, x3), (x1, x3, x4), (x2, x5, x6), (x4, x5, x6). Codeword bits: x1, ..., x6. Let P_j x be the sub-vector participating in the jth parity check: P_1 x = (x1 x2 x3), P_4 x = (x4 x5 x6), and d = 3. P_d = {all even-parity bit-vectors of length d}. LDPC: x ∈ C if and only if P_j x ∈ P_d for all j.
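The membership test x ∈ C is just four parity sums; a minimal sketch for the running example (0-indexed bit positions, our naming):

```python
# Parity checks from the slide, written as 0-indexed bit positions.
CHECKS = [(0, 1, 2), (0, 2, 3), (1, 4, 5), (3, 4, 5)]

def is_codeword(x, checks=CHECKS):
    """x is a codeword iff every sub-vector P_j x has even parity."""
    return all(sum(x[i] for i in check) % 2 == 0 for check in checks)
```

is_codeword([1, 0, 1, 0, 1, 1]) is True, matching the slide; flipping any single bit breaks at least one check.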

12 ML Decoding for LDPC Codes γ: negative log-likelihood ratios. P_j x: the sub-vector participating in the jth parity check. P_d = {all even-parity bit-vectors of length d}. ML decoding: minimize Σ_i γ_i x_i subject to x ∈ C. ML decoding for LDPC codes: minimize Σ_i γ_i x_i subject to P_j x ∈ P_d for all j.
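For a code this small, ML decoding can be done by brute force, which makes the objective concrete (a sketch with our naming; this enumeration is exactly what does not scale, which is what motivates the LP relaxation below):

```python
from itertools import product

CHECKS = [(0, 1, 2), (0, 2, 3), (1, 4, 5), (3, 4, 5)]  # 0-indexed

def ml_decode_bruteforce(gamma):
    """Minimize sum_i gamma_i * x_i over all codewords of the 6-bit toy code
    by enumerating every binary string and keeping the feasible minimizer."""
    best, best_cost = None, float("inf")
    for x in product((0, 1), repeat=6):
        if any(sum(x[i] for i in c) % 2 for c in CHECKS):
            continue  # some parity check fails: not a codeword
        cost = sum(g * xi for g, xi in zip(gamma, x))
        if cost < best_cost:
            best, best_cost = x, cost
    return best
```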

13 Decoding by Belief Propagation (BP) [Tanner graph: parity-check nodes (x1, x2, x3) and (x1, x3, x4) connected to codeword bits x1, x2, x3, x4.] BP has been empirically successful in decoding LDPC codes, but there are no convergence guarantees. It is inherently distributed and takes full advantage of locality (the low density of parity checks). A distributed decoding algorithm is desirable, as it directly implies scalability.

14 Decoding Linear Program minimize Σ_i γ_i x_i subject to P_j x ∈ P_d for all j, and x ∈ {0,1}^n. The parity polytope, PP_d, is the convex hull of all even-parity bit-vectors of length d: PP_d = conv(P_d). Feldman et al. (2005) proposed the following relaxation: minimize Σ_i γ_i x_i subject to P_j x ∈ PP_d for all j, and x ∈ [0,1]^n.

15 Decoding LP and the Parity Polytope P_d = {all even-parity bit-vectors of length d}, PP_d = conv(P_d). minimize Σ_i γ_i x_i subject to P_j x ∈ PP_d for all j, and x ∈ [0,1]^n.

16 Outline Background and Problem Setup [6 mins.] LP Decoding Formulation

17 Outline Background and Problem Setup [6 mins.] LP Decoding Formulation Optimization Framework [4 mins.] Alternating Direction Method of Multipliers (ADMM)

18 Decoding LP with parity polytope PP_d: minimize Σ_i γ_i x_i subject to P_j x ∈ PP_d for all j, x ∈ [0,1]^n. Add replicas z_j: minimize Σ_i γ_i x_i subject to z_j = P_j x for all j, z_j ∈ PP_d for all j, x ∈ [0,1]^n.

19 Augmented Lagrangian minimize Σ_i γ_i x_i subject to z_j = P_j x for all j, z_j ∈ PP_d for all j, x ∈ [0,1]^n. Augmented Lagrangian with Lagrange multipliers λ_j and penalty parameter µ: L_µ(x, z, λ) := γᵀx + Σ_j λ_jᵀ(P_j x − z_j)

20 Augmented Lagrangian minimize Σ_i γ_i x_i subject to z_j = P_j x for all j, z_j ∈ PP_d for all j, x ∈ [0,1]^n. Augmented Lagrangian with Lagrange multipliers λ_j and penalty parameter µ: L_µ(x, z, λ) := γᵀx + Σ_j λ_jᵀ(P_j x − z_j) + (µ/2) Σ_j ‖P_j x − z_j‖₂²

21 Alternating Direction Method of Multipliers L_µ(x, z, λ) := γᵀx + Σ_j λ_jᵀ(P_j x − z_j) + (µ/2) Σ_j ‖P_j x − z_j‖₂². ADMM update steps: lather, rinse, repeat. x^{k+1} := argmin_{x ∈ X} L_µ(x, z^k, λ^k); z^{k+1} := argmin_{z ∈ Z} L_µ(x^{k+1}, z, λ^k); λ_j^{k+1} := λ_j^k + µ(P_j x^{k+1} − z_j^{k+1}).

22 ADMM x-update Decoding LP: minimize Σ_i γ_i x_i subject to z_j = P_j x for all j, z_j ∈ PP_d for all j, x ∈ [0,1]^n. Augmented Lagrangian: L_µ(x, z, λ) := γᵀx + Σ_j λ_jᵀ(P_j x − z_j) + (µ/2) Σ_j ‖P_j x − z_j‖₂²

23 ADMM x-update Decoding LP: minimize Σ_i γ_i x_i subject to z_j = P_j x for all j, z_j ∈ PP_d for all j, x ∈ [0,1]^n. Augmented Lagrangian: L_µ(x, z, λ) := γᵀx + Σ_j λ_jᵀ(P_j x − z_j) + (µ/2) Σ_j ‖P_j x − z_j‖₂². With z and λ fixed, the x-update is simple: minimize L_µ(x, z^k, λ^k) subject to x ∈ [0,1]^n.

24 ADMM x-update In the x-update step the replicas (z) and dual variables (λ) are fixed. Setting ∇_x L_µ(x, z^k, λ^k) = 0 and clipping gives the component-wise update: x_i = Π_{[0,1]} [ (1/|N_v(i)|) ( Σ_{j ∈ N_v(i)} ( z_j^{(i)} − λ_j^{(i)}/µ ) − γ_i/µ ) ]. N_v(i): the set of parity checks containing component i. z_j^{(i)}: the component of the jth replica associated with x_i.
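The component-wise update can be sketched in a few lines of Python (names and interface ours: replica_terms carries the (z_j^{(i)}, λ_j^{(i)}) pairs for the checks touching bit i):

```python
def x_update(gamma_i, replica_terms, mu):
    """One component-wise ADMM x-update: average the adjusted replica
    values over the checks touching bit i, subtract the scaled channel
    cost, and clip the result to [0, 1]."""
    n = len(replica_terms)  # |N_v(i)|
    val = (sum(z - lam / mu for z, lam in replica_terms) - gamma_i / mu) / n
    return min(1.0, max(0.0, val))
```

The clip Π_{[0,1]} is what makes the box constraint x ∈ [0,1]^n exact rather than approximate.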

25 ADMM z-update Augmented Lagrangian: L_µ(x, z, λ) := γᵀx + Σ_j λ_jᵀ(P_j x − z_j) + (µ/2) Σ_j ‖P_j x − z_j‖₂². z-update: minimize Σ_j λ_jᵀ(P_j x − z_j) + (µ/2) Σ_j ‖P_j x − z_j‖₂² subject to z_j ∈ PP_d for all j. The minimization is completely separable in j; hence for each z_j we need to solve: minimize λ_jᵀ(P_j x − z_j) + (µ/2) ‖P_j x − z_j‖₂² subject to z_j ∈ PP_d.

26 z-updates: minimize λ_jᵀ(P_j x − z_j) + (µ/2) ‖P_j x − z_j‖₂² subject to z_j ∈ PP_d. With v = P_j x + λ_j/µ (completing the square) the problem is equivalent to: minimize ‖v − z‖₂² subject to z ∈ PP_d. The primary challenge in ADMM: the z-update requires projecting onto the parity polytope.
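The equivalence rests on a completing-the-square identity worth writing out (our derivation, not shown on the slide):

```latex
\lambda_j^\top (P_j x - z_j) + \frac{\mu}{2}\,\|P_j x - z_j\|_2^2
  = \frac{\mu}{2}\,\Bigl\| z_j - \bigl(\underbrace{P_j x + \lambda_j/\mu}_{v}\bigr) \Bigr\|_2^2
    - \frac{1}{2\mu}\,\|\lambda_j\|_2^2 .
```

The last term is constant in z_j, so minimizing over z_j ∈ PP_d is exactly the Euclidean projection of v onto PP_d.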

27 Outline Background and Problem Setup [6 mins.] LP Decoding Formulation Optimization Framework [4 mins.] Alternating Direction Method of Multipliers (ADMM)

28 Outline Background and Problem Setup [6 mins.] LP Decoding Formulation Optimization Framework [4 mins.] Alternating Direction Method of Multipliers (ADMM) Technical Core [ 5 + ɛ mins.] Projecting onto the Parity Polytope

29 minimize ‖v − z‖₂² subject to z ∈ PP_d. Recall that the parity polytope, PP_d, is the convex hull of all binary vectors of length d with even Hamming weight.

30 Parity Polytope y ∈ PP_d iff y = Σ_i α_i e_i, where the e_i are even-Hamming-weight binary vectors of dimension d, Σ_i α_i = 1, and α_i ≥ 0. [Example figure (d = 6): a convex combination such as y = (1/2) e_1 + (1/4) e_2 + (1/4) e_3.]

32 Characterizing the Parity Polytope Two-Slice Lemma: for any y ∈ PP_d there exists a representation y = Σ_i α_i e_i such that the Hamming weight of every e_i is either r or r + 2, for some even integer r. [Example figure with d = 6 and r = 2.]

33 Characterizing the Parity Polytope Lemma: for any y ∈ PP_d there exists a representation y = Σ_i α_i e_i such that the Hamming weight of every e_i is either r or r + 2, for some even integer r. [Figure: the slices PP_d^r and PP_d^{r+2} for d = 5, r = 2, with y lying between a point s ∈ PP_d^r and a point t ∈ PP_d^{r+2}.]

34 Characterizing the Parity Polytope Let PP_d^r be the convex hull of all binary vectors of Hamming weight r. Two-Slice Lemma: y ∈ PP_d iff y = αs + (1 − α)t, where s ∈ PP_d^r, t ∈ PP_d^{r+2}, and α ∈ [0,1]. [Figure: the slices PP_d^r and PP_d^{r+2} for d = 5, r = 2.]

35 Structure of the Parity Polytope PP_d^r = (hyperplane containing the vectors of weight r) ∩ hypercube. [Figure: PP_5^4 with interior points (2/5, 2/5, 2/5, 2/5, 2/5) and (4/5, 4/5, 4/5, 4/5, 4/5).] Any u ∈ PP_d is sandwiched between the slices of weight r and r + 2, where r ≤ Σ_i u_i ≤ r + 2.

36 Projecting onto the parity polytope Two-Slice Lemma: y ∈ PP_d iff y = αs + (1 − α)t, where s ∈ PP_d^r and t ∈ PP_d^{r+2}. Polytope projection: min ‖v − y‖₂² s.t. y ∈ PP_d. From the two-slice lemma: min ‖v − (αs + (1 − α)t)‖₂² s.t. s ∈ PP_d^r, t ∈ PP_d^{r+2}, α ∈ [0,1].

37 Majorization Let u and w be d-vectors sorted in decreasing order. The vector w is said to majorize u if 1ᵀw = 1ᵀu and Σ_{k=1}^q u_k ≤ Σ_{k=1}^q w_k for all q. Example: the vector (1, 1, 1, 1, 0) majorizes (4/5, 4/5, 4/5, 4/5, 4/5). [Figure: PP_5^4.]
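A direct transcription of this definition into Python (names ours), handling unsorted inputs by sorting first:

```python
def majorizes(w, u, tol=1e-9):
    """True if w majorizes u: equal totals, and every prefix sum of w
    (sorted in decreasing order) dominates the matching prefix sum of u."""
    ws = sorted(w, reverse=True)
    us = sorted(u, reverse=True)
    if len(ws) != len(us) or abs(sum(ws) - sum(us)) > tol:
        return False
    pw = pu = 0.0
    for a, b in zip(ws, us):
        pw += a
        pu += b
        if pu > pw + tol:
            return False
    return True
```

On the slide's example, majorizes((1, 1, 1, 1, 0), (0.8,) * 5) holds, while the reverse direction fails.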

38 Majorization Theorem: u is in the convex hull of all permutations of w if and only if w majorizes u. Example: the vector (1, 1, 1, 1, 0) majorizes (4/5, 4/5, 4/5, 4/5, 4/5). [Figure: PP_5^4.]

39 Projecting onto the parity polytope Polytope projection: min ‖v − y‖₂² s.t. y ∈ PP_d. From the two-slice lemma: min ‖v − (αs + (1 − α)t)‖₂² s.t. s ∈ PP_d^r, t ∈ PP_d^{r+2}, α ∈ [0,1]. Using majorization: min ‖v − y‖₂² s.t. Σ_i y_i = αr + (1 − α)(r + 2), Σ_{k=1}^{r+1} y_(k) ≤ r + (1 − α), 0 ≤ α ≤ 1, where y_(k) denotes the kth largest entry of y.

40 Quadratic program for the projection problem: min ‖v − y‖₂² s.t. Σ_i y_i = αr + (1 − α)(r + 2), Σ_{k=1}^{r+1} y_(k) ≤ r + (1 − α), 0 ≤ α ≤ 1. For this quadratic program the Karush-Kuhn-Tucker (KKT) conditions are necessary and sufficient. We develop a water-filling type algorithm which determines a solution satisfying the KKT conditions. Overall result: we can project onto the parity polytope in O(d log d) time.

41 Final Projection Algorithm The KKT conditions imply that either the projection of v onto the hypercube [0,1]^d is in the parity polytope, or there exists a β ∈ ℝ₊ such that the projection y satisfies y = v − βw, where w = (1, ..., 1, −1, ..., −1)ᵀ with r + 1 entries equal to 1 followed by d − r − 1 entries equal to −1. Using this characterization we develop a water-filling type algorithm that determines y, the projection onto the parity polytope. Overall result: we can project onto the parity polytope in O(d log d) time.
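A runnable sketch of this characterization (our reading of it, with assumptions: we pick the shift direction w from the bits at or above 1/2, re-clip v − βw to the hypercube, and find β by plain bisection instead of the talk's O(d log d) water-filling):

```python
def project_parity_polytope(v, iters=60):
    """Euclidean projection of v onto PP_d: clip to the cube; if the
    result violates the most-violated odd-set facet, shift along the
    +/-1 direction w by a scalar beta chosen so the facet holds with
    equality, locating beta by bisection."""
    d = len(v)
    clipped = [min(1.0, max(0.0, t)) for t in v]
    # Most-violated odd-set: bits at >= 1/2; if that set has even size,
    # toggle the element closest to 1/2.
    in_set = [t >= 0.5 for t in clipped]
    if sum(in_set) % 2 == 0:
        i = min(range(d), key=lambda k: abs(2 * clipped[k] - 1))
        in_set[i] = not in_set[i]
    r1 = sum(in_set)  # |S| = r + 1 in the slide's notation
    f = sum(t if s else -t for t, s in zip(clipped, in_set)) - (r1 - 1)
    if f <= 0:
        return clipped  # cube projection already lies in PP_d
    w = [1.0 if s else -1.0 for s in in_set]
    lo, hi = 0.0, max(abs(t) for t in v) + 1.0
    for _ in range(iters):  # bisection: w^T y(beta) is nonincreasing
        beta = (lo + hi) / 2
        y = [min(1.0, max(0.0, t - beta * wi)) for t, wi in zip(v, w)]
        if sum(yi * wi for yi, wi in zip(y, w)) > r1 - 1:
            lo = beta
        else:
            hi = beta
    return [min(1.0, max(0.0, t - hi * wi)) for t, wi in zip(v, w)]
```

For example, project_parity_polytope([0.9, 0.9, 0.9, 0.1]) moves the point onto the facet y1 + y2 + y3 − y4 = 2, yielding approximately (0.75, 0.75, 0.75, 0.25), while a point already in PP_d is returned unchanged.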

42 Outline Background and Problem Setup [6 mins.] LP Decoding Formulation Optimization Framework [4 mins.] ADMM Technical Core [ 5 + ɛ mins.] Projecting onto the Parity Polytope

43 Outline Background and Problem Setup [6 mins.] LP Decoding Formulation Optimization Framework [4 mins.] ADMM Technical Core [ 5 + ɛ mins.] Projecting onto the Parity Polytope Numerical results [5 mins.] Graphs

44 Implementation of ADMM decoder Recap of the ADMM update steps: x^{k+1} := argmin_{x ∈ X} L_µ(x, z^k, λ^k); z^{k+1} := argmin_{z ∈ Z} L_µ(x^{k+1}, z, λ^k); λ_j^{k+1} := λ_j^k + µ(P_j x^{k+1} − z_j^{k+1}).

45 Implementation of ADMM decoder Recap of the ADMM update steps: x^{k+1} := argmin_{x ∈ X} L_µ(x, z^k, λ^k); z^{k+1} := argmin_{z ∈ Z} L_µ(x^{k+1}, z, λ^k); λ_j^{k+1} := λ_j^k + µ(P_j x^{k+1} − z_j^{k+1}). Question: how do we choose the penalty parameter µ?

46 Implementation of ADMM decoder Recap of the ADMM update steps: x^{k+1} := argmin_{x ∈ X} L_µ(x, z^k, λ^k); z^{k+1} := argmin_{z ∈ Z} L_µ(x^{k+1}, z, λ^k); λ_j^{k+1} := λ_j^k + µ(P_j x^{k+1} − z_j^{k+1}). Question: how do we choose the penalty parameter µ? Another question: when do we terminate the iteration?

47 Implementation of ADMM decoder Recap of the ADMM update steps: x^{k+1} := argmin_{x ∈ X} L_µ(x, z^k, λ^k); z^{k+1} := argmin_{z ∈ Z} L_µ(x^{k+1}, z, λ^k); λ_j^{k+1} := λ_j^k + µ(P_j x^{k+1} − z_j^{k+1}). Question: how do we choose the penalty parameter µ? Another question: when do we terminate the iteration? We need to determine µ, the maximum number of iterations T_max, and the error tolerance ɛ.
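One common concrete choice (our sketch, not necessarily the talk's exact criterion) is to stop once every replica agrees with its sub-vector to within ɛ, or after T_max iterations:

```python
import math

def converged(Px_list, z_list, eps=1e-4):
    """Stop when the largest primal residual max_j ||P_j x - z_j||_2
    falls below the tolerance eps."""
    return max(math.dist(Px, z) for Px, z in zip(Px_list, z_list)) < eps
```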

48 Simulation for the (1057,244) LDPC code. Fix T_max = 300, ɛ = 10⁻⁴. [Plot: word error rate (WER) vs. µ, for SNR = 5 dB, 5.25 dB, and 5.5 dB.]

49 Simulation for the (1057,244) LDPC code. Fix T_max = 300, ɛ = 10⁻⁴. [Plot: number of iterations per decoding vs. µ, for SNR = 5 dB, 5.25 dB, and 5.5 dB.]

50 Simulation for the (1057,244) LDPC code. ɛ = 10⁻⁴, µ = 2. [Plot: word error rate vs. Eb/N0 (dB) for ADMM decoding with T_max = 50, T_max = 100, and T_max = 300.]

51 Simulation for the (1057,244) LDPC code. ɛ = 10⁻⁴, µ = 2. [Plot: word error rate vs. Eb/N0 (dB) for ADMM decoding with T_max = 50, T_max = 100, and T_max = 300.] Wait! We have seen that larger µ gives better WER performance.

52 Simulation for the (1057,244) LDPC code. ɛ = 10⁻⁴. [Plot: word error rate vs. Eb/N0 (dB) for ADMM decoding with T_max = 50, µ = 2; T_max = 100, µ = 2; T_max = 300, µ = 2; and T_max = 300, µ = 10.]

53 Simulation for the (1057,244) LDPC code. ɛ = 10⁻⁴. [Plot: word error rate vs. Eb/N0 (dB) for ADMM decoding with T_max = 50, µ = 2; T_max = 100, µ = 2; T_max = 300, µ = 2; and T_max = 300, µ = 10.] Is this performance good enough?

54 Simulation for the (1057,244) LDPC code. ɛ = 10⁻⁴. [Plot: word error rate vs. Eb/N0 (dB) for ADMM decoding with T_max = 50, 100, and 300 at µ = 2; T_max = 300 at µ = 10; and LP decoding.] Same error performance as LP decoding using the simplex method.


More information

Probabilistic and Bayesian Machine Learning

Probabilistic and Bayesian Machine Learning Probabilistic and Bayesian Machine Learning Day 4: Expectation and Belief Propagation Yee Whye Teh ywteh@gatsby.ucl.ac.uk Gatsby Computational Neuroscience Unit University College London http://www.gatsby.ucl.ac.uk/

More information

Construction and Performance Evaluation of QC-LDPC Codes over Finite Fields

Construction and Performance Evaluation of QC-LDPC Codes over Finite Fields MEE10:83 Construction and Performance Evaluation of QC-LDPC Codes over Finite Fields Ihsan Ullah Sohail Noor This thesis is presented as part of the Degree of Master of Sciences in Electrical Engineering

More information

Making Error Correcting Codes Work for Flash Memory

Making Error Correcting Codes Work for Flash Memory Making Error Correcting Codes Work for Flash Memory Part I: Primer on ECC, basics of BCH and LDPC codes Lara Dolecek Laboratory for Robust Information Systems (LORIS) Center on Development of Emerging

More information

Ma/CS 6b Class 25: Error Correcting Codes 2

Ma/CS 6b Class 25: Error Correcting Codes 2 Ma/CS 6b Class 25: Error Correcting Codes 2 By Adam Sheffer Recall: Codes V n the set of binary sequences of length n. For example, V 3 = 000,001,010,011,100,101,110,111. Codes of length n are subsets

More information

LOW-density parity-check (LDPC) codes were invented

LOW-density parity-check (LDPC) codes were invented IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 54, NO 1, JANUARY 2008 51 Extremal Problems of Information Combining Yibo Jiang, Alexei Ashikhmin, Member, IEEE, Ralf Koetter, Senior Member, IEEE, and Andrew

More information

Maximum Likelihood Decoding of Codes on the Asymmetric Z-channel

Maximum Likelihood Decoding of Codes on the Asymmetric Z-channel Maximum Likelihood Decoding of Codes on the Asymmetric Z-channel Pål Ellingsen paale@ii.uib.no Susanna Spinsante s.spinsante@univpm.it Angela Barbero angbar@wmatem.eis.uva.es May 31, 2005 Øyvind Ytrehus

More information

Notes 3: Stochastic channels and noisy coding theorem bound. 1 Model of information communication and noisy channel

Notes 3: Stochastic channels and noisy coding theorem bound. 1 Model of information communication and noisy channel Introduction to Coding Theory CMU: Spring 2010 Notes 3: Stochastic channels and noisy coding theorem bound January 2010 Lecturer: Venkatesan Guruswami Scribe: Venkatesan Guruswami We now turn to the basic

More information

Lecture 9: Large Margin Classifiers. Linear Support Vector Machines

Lecture 9: Large Margin Classifiers. Linear Support Vector Machines Lecture 9: Large Margin Classifiers. Linear Support Vector Machines Perceptrons Definition Perceptron learning rule Convergence Margin & max margin classifiers (Linear) support vector machines Formulation

More information

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Michael Patriksson 0-0 The Relaxation Theorem 1 Problem: find f := infimum f(x), x subject to x S, (1a) (1b) where f : R n R

More information

Performance Analysis and Code Optimization of Low Density Parity-Check Codes on Rayleigh Fading Channels

Performance Analysis and Code Optimization of Low Density Parity-Check Codes on Rayleigh Fading Channels Performance Analysis and Code Optimization of Low Density Parity-Check Codes on Rayleigh Fading Channels Jilei Hou, Paul H. Siegel and Laurence B. Milstein Department of Electrical and Computer Engineering

More information

14 : Theory of Variational Inference: Inner and Outer Approximation

14 : Theory of Variational Inference: Inner and Outer Approximation 10-708: Probabilistic Graphical Models 10-708, Spring 2014 14 : Theory of Variational Inference: Inner and Outer Approximation Lecturer: Eric P. Xing Scribes: Yu-Hsin Kuo, Amos Ng 1 Introduction Last lecture

More information

Hamming codes and simplex codes ( )

Hamming codes and simplex codes ( ) Chapter 6 Hamming codes and simplex codes (2018-03-17) Synopsis. Hamming codes are essentially the first non-trivial family of codes that we shall meet. We start by proving the Distance Theorem for linear

More information

Low Density Parity Check (LDPC) Codes and the Need for Stronger ECC. August 2011 Ravi Motwani, Zion Kwok, Scott Nelson

Low Density Parity Check (LDPC) Codes and the Need for Stronger ECC. August 2011 Ravi Motwani, Zion Kwok, Scott Nelson Low Density Parity Check (LDPC) Codes and the Need for Stronger ECC August 2011 Ravi Motwani, Zion Kwok, Scott Nelson Agenda NAND ECC History Soft Information What is soft information How do we obtain

More information

Statistical Machine Learning from Data

Statistical Machine Learning from Data Samy Bengio Statistical Machine Learning from Data 1 Statistical Machine Learning from Data Support Vector Machines Samy Bengio IDIAP Research Institute, Martigny, Switzerland, and Ecole Polytechnique

More information

Analysis of a Randomized Local Search Algorithm for LDPCC Decoding Problem

Analysis of a Randomized Local Search Algorithm for LDPCC Decoding Problem Analysis of a Randomized Local Search Algorithm for LDPCC Decoding Problem Osamu Watanabe, Takeshi Sawai, and Hayato Takahashi Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology

More information

Homework 3. Convex Optimization /36-725

Homework 3. Convex Optimization /36-725 Homework 3 Convex Optimization 10-725/36-725 Due Friday October 14 at 5:30pm submitted to Christoph Dann in Gates 8013 (Remember to a submit separate writeup for each problem, with your name at the top)

More information

Graph-based Codes and Iterative Decoding

Graph-based Codes and Iterative Decoding Graph-based Codes and Iterative Decoding Thesis by Aamod Khandekar In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy California Institute of Technology Pasadena, California

More information

Lecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity

Lecture 8: Channel and source-channel coding theorems; BEC & linear codes. 1 Intuitive justification for upper bound on channel capacity 5-859: Information Theory and Applications in TCS CMU: Spring 23 Lecture 8: Channel and source-channel coding theorems; BEC & linear codes February 7, 23 Lecturer: Venkatesan Guruswami Scribe: Dan Stahlke

More information

Distance Properties of Short LDPC Codes and Their Impact on the BP, ML and Near-ML Decoding Performance

Distance Properties of Short LDPC Codes and Their Impact on the BP, ML and Near-ML Decoding Performance Distance Properties of Short LDPC Codes and Their Impact on the BP, ML and Near-ML Decoding Performance Irina E. Bocharova 1,2, Boris D. Kudryashov 1, Vitaly Skachek 2, Yauhen Yakimenka 2 1 St. Petersburg

More information

Chapter 3 Linear Block Codes

Chapter 3 Linear Block Codes Wireless Information Transmission System Lab. Chapter 3 Linear Block Codes Institute of Communications Engineering National Sun Yat-sen University Outlines Introduction to linear block codes Syndrome and

More information

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University Chapter 4 Data Transmission and Channel Capacity Po-Ning Chen, Professor Department of Communications Engineering National Chiao Tung University Hsin Chu, Taiwan 30050, R.O.C. Principle of Data Transmission

More information

Convex relaxation. In example below, we have N = 6, and the cut we are considering

Convex relaxation. In example below, we have N = 6, and the cut we are considering Convex relaxation The art and science of convex relaxation revolves around taking a non-convex problem that you want to solve, and replacing it with a convex problem which you can actually solve the solution

More information

Support Vector Machine (SVM) & Kernel CE-717: Machine Learning Sharif University of Technology. M. Soleymani Fall 2012

Support Vector Machine (SVM) & Kernel CE-717: Machine Learning Sharif University of Technology. M. Soleymani Fall 2012 Support Vector Machine (SVM) & Kernel CE-717: Machine Learning Sharif University of Technology M. Soleymani Fall 2012 Linear classifier Which classifier? x 2 x 1 2 Linear classifier Margin concept x 2

More information

Message Passing Algorithms and Improved LP Decoding

Message Passing Algorithms and Improved LP Decoding Message Passing Algorithms and Improved LP Decoding Sanjeev Arora 1 CS, Princeton Universty and Constantinos Daskalakis 2 EECS and CSAIL, MIT and David Steurer CS, Cornell University 3 abstract Linear

More information

On Bit Error Rate Performance of Polar Codes in Finite Regime

On Bit Error Rate Performance of Polar Codes in Finite Regime On Bit Error Rate Performance of Polar Codes in Finite Regime A. Eslami and H. Pishro-Nik Abstract Polar codes have been recently proposed as the first low complexity class of codes that can provably achieve

More information

Enhancing Binary Images of Non-Binary LDPC Codes

Enhancing Binary Images of Non-Binary LDPC Codes Enhancing Binary Images of Non-Binary LDPC Codes Aman Bhatia, Aravind R Iyengar, and Paul H Siegel University of California, San Diego, La Jolla, CA 92093 0401, USA Email: {a1bhatia, aravind, psiegel}@ucsdedu

More information

Joint Decoding of LDPC Codes and Finite-State Channels via Linear-Programming

Joint Decoding of LDPC Codes and Finite-State Channels via Linear-Programming Joint Decoding of LDPC Codes and Finite-State Channels via Linear-Programming Byung-Hak Kim*, Student Member, IEEE and Henry D. Pfister, Senior Member, IEEE arxiv:02.480v2 [cs.it] 27 Jul 20 Abstract This

More information

MATH3302 Coding Theory Problem Set The following ISBN was received with a smudge. What is the missing digit? x9139 9

MATH3302 Coding Theory Problem Set The following ISBN was received with a smudge. What is the missing digit? x9139 9 Problem Set 1 These questions are based on the material in Section 1: Introduction to coding theory. You do not need to submit your answers to any of these questions. 1. The following ISBN was received

More information

Codes on graphs and iterative decoding

Codes on graphs and iterative decoding Codes on graphs and iterative decoding Bane Vasić Error Correction Coding Laboratory University of Arizona Funded by: National Science Foundation (NSF) Seagate Technology Defense Advanced Research Projects

More information

National University of Singapore Department of Electrical & Computer Engineering. Examination for

National University of Singapore Department of Electrical & Computer Engineering. Examination for National University of Singapore Department of Electrical & Computer Engineering Examination for EE5139R Information Theory for Communication Systems (Semester I, 2014/15) November/December 2014 Time Allowed:

More information

Chapter 3 Source Coding. 3.1 An Introduction to Source Coding 3.2 Optimal Source Codes 3.3 Shannon-Fano Code 3.4 Huffman Code

Chapter 3 Source Coding. 3.1 An Introduction to Source Coding 3.2 Optimal Source Codes 3.3 Shannon-Fano Code 3.4 Huffman Code Chapter 3 Source Coding 3. An Introduction to Source Coding 3.2 Optimal Source Codes 3.3 Shannon-Fano Code 3.4 Huffman Code 3. An Introduction to Source Coding Entropy (in bits per symbol) implies in average

More information

Convex Optimization Boyd & Vandenberghe. 5. Duality

Convex Optimization Boyd & Vandenberghe. 5. Duality 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

Introduction to Machine Learning Prof. Sudeshna Sarkar Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur

Introduction to Machine Learning Prof. Sudeshna Sarkar Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur Introduction to Machine Learning Prof. Sudeshna Sarkar Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur Module - 5 Lecture - 22 SVM: The Dual Formulation Good morning.

More information

Factor Graphs and Message Passing Algorithms Part 1: Introduction

Factor Graphs and Message Passing Algorithms Part 1: Introduction Factor Graphs and Message Passing Algorithms Part 1: Introduction Hans-Andrea Loeliger December 2007 1 The Two Basic Problems 1. Marginalization: Compute f k (x k ) f(x 1,..., x n ) x 1,..., x n except

More information

Codes on Graphs. Telecommunications Laboratory. Alex Balatsoukas-Stimming. Technical University of Crete. November 27th, 2008

Codes on Graphs. Telecommunications Laboratory. Alex Balatsoukas-Stimming. Technical University of Crete. November 27th, 2008 Codes on Graphs Telecommunications Laboratory Alex Balatsoukas-Stimming Technical University of Crete November 27th, 2008 Telecommunications Laboratory (TUC) Codes on Graphs November 27th, 2008 1 / 31

More information

Lecture 6: Expander Codes

Lecture 6: Expander Codes CS369E: Expanders May 2 & 9, 2005 Lecturer: Prahladh Harsha Lecture 6: Expander Codes Scribe: Hovav Shacham In today s lecture, we will discuss the application of expander graphs to error-correcting codes.

More information

9 Forward-backward algorithm, sum-product on factor graphs

9 Forward-backward algorithm, sum-product on factor graphs Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.438 Algorithms For Inference Fall 2014 9 Forward-backward algorithm, sum-product on factor graphs The previous

More information

Lecture 14: Hamming and Hadamard Codes

Lecture 14: Hamming and Hadamard Codes CSCI-B69: A Theorist s Toolkit, Fall 6 Oct 6 Lecture 4: Hamming and Hadamard Codes Lecturer: Yuan Zhou Scribe: Kaiyuan Zhu Recap Recall from the last lecture that error-correcting codes are in fact injective

More information