
Turbo Codes are Low Density Parity Check Codes

David J. C. MacKay
July 5, 00. Draft 0., not for distribution! (First draft written July 5, 1998)

Abstract

Turbo codes and Gallager codes (also known as low density parity check codes) are at present neck and neck in the race towards capacity. In this paper we note that the parity check matrix of a Turbo code can be written as a low density parity check matrix.

Turbo codes and Gallager codes are both greatly superior to the error correcting codes found in textbooks. Some similarities between these codes have been noted. For example, both families of codes can be defined in terms of sparse graphs which define the constraints satisfied by codewords (cite Tanner, Wiberg, Loeliger, Frey, MacKay). Both families of codes are decoded using a local probability propagation algorithm known as the sum product algorithm or belief propagation (cite Wiberg, McEliece and MacKay, Frey).

There are also differences between the two code families. In the original form studied by Gallager and by MacKay and Neal, Gallager codes are quick to decode but have an encoding time that scales as N^2, whereas Turbo codes are usually defined in terms of linear time encoders. (Fast-encodeable Gallager codes have recently been investigated, MacKay.) In Turbo codes there is a sharp distinction between the bits viewed as source bits and the bits viewed as parity check bits; they play different roles in the decoding algorithm, and posterior probabilities over the states of parity check bits are usually not computed. In Gallager codes, there is a symmetry between all bits. Turbo codes as originally defined tend to suffer from low weight codewords, which cause the asymptotic performance at large E_b/N_0 to show an error floor. Gallager codes, in contrast, show no such error floor, and it has been proved that they have asymptotically good distance properties. Gallager codes are simple to modify in order to create codes with higher or lower rates.
In contrast, increasing the rate of a Turbo code can be tricky because simple puncturing of the parity bits might weaken the code by introducing low weight codewords. Since these two families of codes are both so good in performance, it seems a good idea to try to relate them so as to enhance technology transfer and hybridisation between the two methodologies. However, to our knowledge, only a few researchers have tried to connect these fields together and design new codes (cite Frey and MacKay). This paper makes a simple observation about Turbo codes: treating a Turbo code as a block code, the parity check matrix of that code is actually a low density parity check matrix. This observation is probably extremely obvious to anyone who is familiar with convolutional codes, but for the benefit of readers like myself who are not, I will spell this out in a little more detail.
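To preview the observation numerically, here is a minimal sketch. It assumes, for illustration only, a toy systematic nonrecursive constituent encoder with taps 1 + D + D^2 and a random interleaver (real Turbo codes use recursive systematic constituent encoders, and all sizes and names here are hypothetical); the point is simply that the literal parity check matrix of the resulting block code is sparse.

```python
import numpy as np

rng = np.random.default_rng(2)
K = 8                                    # number of source bits (toy size)
g = np.array([1, 1, 1], dtype=np.uint8)  # illustrative taps 1 + D + D^2

def conv_band(K, g):
    """Banded Toeplitz matrix T with (T @ u) % 2 == np.convolve(u, g) % 2."""
    n_out = K + len(g) - 1
    T = np.zeros((n_out, K), dtype=np.uint8)
    for i in range(n_out):
        for j in range(max(0, i - len(g) + 1), min(K, i + 1)):
            T[i, j] = g[i - j]
    return T

T = conv_band(K, g)
P = np.eye(K, dtype=np.uint8)[rng.permutation(K)]  # random interleaver
M = K + len(g) - 1                                 # parity bits per stream
I, Z = np.eye(M, dtype=np.uint8), np.zeros((M, M), dtype=np.uint8)

# Codeword layout (u, p1, p2) with p1 = T u and p2 = T P u (mod 2),
# so the block-code parity check matrix is [T | I | 0; T P | 0 | I]:
H = np.block([[T,           I, Z],
              [(T @ P) % 2, Z, I]])

u = rng.integers(0, 2, K)
c = np.concatenate([u, T @ u % 2, T @ (P @ u) % 2])
assert not (H @ c % 2).any()        # c is a codeword of H
print(H.sum(axis=1).max())          # 4: every check involves few bits
```

The maximum row weight is wt(g) + 1 = 4 however large K is made, which is exactly the low density property.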

Definitions

A low density parity check code is a block code which has a parity check matrix, H, every row and column of which is sparse. A regular Gallager code is a low density parity check code in which every column of H has the same weight t and every row has the same weight t_r; regular Gallager codes are constructed at random subject to these constraints.

An (N, K) Turbo code is defined by a number of constituent encoders (often, two) and an equal number of interleavers, which are K x K permutation matrices. Without loss of generality, we take the first interleaver to be the identity matrix. The constituent encoders are often convolutional codes. A string of K source bits is encoded by feeding them into each constituent encoder in the order defined by the associated interleaver, and transmitting the bits that come out of each constituent encoder. For simplicity, let us concentrate on Turbo codes with two constituent codes that are both convolutional codes. Often the first constituent encoder is chosen to be a systematic encoder, and the second is a non systematic one which emits parity bits only. The transmitted codeword then consists of the K source bits followed by M1 parity bits generated by the first convolutional code and M2 parity bits from the second. For the purposes of this paper we will not need to discuss the decoder for either of these codes.

One unifying viewpoint for these two code families is in terms of trellis constrained codes. A trellis constrained code is a code whose codewords satisfy a set of constraints, each constraint being compactly described by a trellis in which two or more of the codeword bits participate. Viewed as trellis constrained codes, the two families appear rather different. The M x N parity check matrix of a regular Gallager code defines M trellises. Each trellis constrains the parity of t_r of the bits to be even, and each of the N bits participates in t trellises.
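The random construction of a regular Gallager code defined above can be sketched as follows. This is a minimal socket-matching sketch (an assumption about the construction, not Gallager's exact procedure): each bit contributes t edge "sockets" and each check t_r sockets, and a random matching joins them; collisions between repeated edges are simply collapsed rather than repaired, so a column weight can occasionally come out below t.

```python
import numpy as np

def regular_gallager_H(N, t, t_r, rng):
    """Random M x N parity check matrix with column weight t and row
    weight t_r, so M = N t / t_r, built by randomly matching edge
    sockets on the bit side to sockets on the check side."""
    assert (N * t) % t_r == 0
    M = N * t // t_r
    col_sockets = np.repeat(np.arange(N), t)    # each bit appears t times
    row_sockets = np.repeat(np.arange(M), t_r)  # each check appears t_r times
    rng.shuffle(col_sockets)                    # random matching
    H = np.zeros((M, N), dtype=np.uint8)
    H[row_sockets, col_sockets] = 1             # duplicate edges collapse
    return H

rng = np.random.default_rng(0)
H = regular_gallager_H(N=12, t=3, t_r=4, rng=rng)
print(H.shape)   # (9, 12)
```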
We can think of a Turbo code as a trellis constrained code in which there are two trellises (figure 1); the K source bits and the first M1 parity bits participate in the first trellis, and the K source bits and the last M2 parity bits participate in the second trellis. Each codeword bit participates in either one or two trellises, depending on whether it is a parity bit or a source bit. See figure 1. However, we will now see that from the point of view of their parity check matrices, Turbo codes are actually very similar to Gallager codes.

2.1 The parity check matrix of a single convolutional code

Note that 'parity check matrix' here has a different meaning from its use in the convolutional code literature: we are talking about the literal parity check matrix of the code viewed as a linear block code. A systematic recursive convolutional code, as used in Turbo codes, is equivalent (in the sense that its codewords are the same) to a nonsystematic nonrecursive convolutional code (figure 3). Now, what parity constraints are satisfied by the latter code? Well, if we pass stream b through the convolutional filter that generated stream a, and vice versa, then the two resulting streams are identical. So the parity check matrix of a single convolutional code may be written as a low density parity check matrix, as shown in figure 4.

One issue neglected here is termination. Termination simply adds an extra k constraints, where k is the constraint length. Not a big deal.
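The cross-filtering argument above can be checked numerically: writing the two output streams of a nonrecursive convolutional encoder as a = g_a * u and b = g_b * u (binary convolutions), filtering b with g_a gives the same stream as filtering a with g_b, because convolution commutes. A sketch with illustrative taps (the particular generators are assumptions, not the codes of figure 3):

```python
import numpy as np

def conv2(x, g):
    """Filter the bit stream x through taps g, mod 2."""
    return np.convolve(x, g) % 2

rng = np.random.default_rng(1)
u = rng.integers(0, 2, 20)        # source bits
g_a = np.array([1, 0, 1])         # illustrative generator 1 + D^2
g_b = np.array([1, 1, 1])         # illustrative generator 1 + D + D^2

a, b = conv2(u, g_a), conv2(u, g_b)   # the two transmitted streams

# Passing b through the filter that generated a, and vice versa,
# yields identical streams -- these identities are the parity checks.
assert np.array_equal(conv2(a, g_b), conv2(b, g_a))
```

Each such identity is a banded row of the parity check matrix, involving at most wt(g_a) + wt(g_b) bits regardless of the block length, which is why the matrix is low density.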

Figure 1: (a,b) A rate 1/3 and a rate 1/2 Turbo code represented as trellis constrained codes. The circles represent the codeword bits. The two rectangles represent trellises of rate 1/2 convolutional codes, with the systematic bits occupying the left half of the rectangle and the parity bits occupying the right half. The puncturing of these constituent codes in the rate 1/2 Turbo code is represented by the lack of connections to half of the parity bits in each trellis. (c) The contents of the rectangles. Each trellis is generated by the recursive filter shown in figure 3, which has a state space of 4 bits. In each pair of adjacent bits, one is a systematic bit and one is a parity bit.

[Figure 2 plot: axes labelled 'No. of bits per trellis', 'No. of trellises', 'Trellis complexity', and 'Mean no. of trellises per bit', with points for Turbo codes, binary Gallager codes, and Gallager codes over GF(q).]

Figure 2: One view of Gallager codes and Turbo codes in code space, using rate 1/3 codes as an example. N denotes the block length and M is the number of parity constraints, M = N - K, satisfied by a codeword. Points are shown for ordinary binary Gallager codes and for Gallager codes over GF(q) also (q = 8 and q = 16 have been found useful, cite Davey and MacKay). From this perspective, Gallager codes and Turbo codes seem quite different.

[Figure 3 diagram: linear feedback shift registers with taps on delay stages z^4, z^3, z^2, z^1, z^0 acting on the source stream s; the filter is labelled (21/37)_8.]

Figure 3: Linear feedback shift registers for generating convolutional codes. K = 4.

2.2 The parity check matrix of a Turbo code

Note that for the standard constraint length 4 convolutional codes, the profile of the Turbo code's parity check matrix is roughly K columns of weight about 4, with the remaining columns of weight about 5; the row weight is about 7 for all rows. Puncturing is ignored here. How to handle it: if the punctured bit participates in only one check, remove that check. If it participates in more than one check, use row manipulations to create new, higher weight checks that don't involve that bit. Note that classic Turbo codes are punctured down to rate 1/2.

Consequences

Some of the theoretical results proved for Gallager codes carry over to Turbo codes. The positive ones (for example, that Gallager codes are 'very good') don't carry over, since they rely on creating the whole matrix at random. But the negative ones do. One negative result is that Turbo codes definitely can't get to the Shannon limit unless the constraint length of the constituent codes grows, just as Gallager codes are only 'very good' as the weight per column t increases.

3 Discussion

For those of us who thought that there was a considerable distance in code space between Turbo codes and Gallager codes, this observation forces a shift in viewpoint. It also allows us to roll out a few simple theorems about the maximum likelihood performance of Turbo codes. So, given that they are such similar codes, what are the differences? Why are regular Gallager codes worse than Turbo codes? We have caught up with Turbo codes only by (1)

Figure 4: Schematic pictures of the parity check matrices of (a1) a regular Gallager code, rate 1/2, (a2) an almost regular Gallager code, rate 1/3, (b) a convolutional code, rate 1/2, and (c) a Turbo code, rate 1/3. Notation: A diagonal line represents an identity matrix. A band of diagonal lines represents a band of diagonal 1s. A circle inside a square represents a random permutation of all the columns in that square. A number inside a square represents the number of random permutation matrices superposed in that square. Horizontal and vertical lines indicate the boundaries of the blocks within the matrix. (d) shows another code with roughly the same profile as a Turbo code.

making them over GF(16); (2) making them irregular. But notice these Turbo codes are binary and their parity check matrices are pretty near regular! I have some ideas about why Turbo codes are better. Notice that the standard way of decoding a Gallager code is not how Turbo codes are decoded. The message passing is different, and would be more inaccurate in the Gallager style. Turbo decoding sends messages all the way along the trellises, so the within-trellis messages are correct.

3.1 Wasted checks

Consider a Gallager code with t_r = 4. Imagine that of the four bits that participate in one check, three of them happen to be well determined given the output of the channel. This check allows us instantly to correct the fourth bit, but then it plays no useful role. This is not the way a good code works! In contrast, in a Turbo code, if some bits are well determined, the sole effect is to prune the trellis. Whatever bits are well determined, the remaining bits participate in a trellis that looks (locally) just the same.
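The 'wasted check' can be made concrete with a toy example (the bit values below are hypothetical):

```python
# One parity check over four bits: x1 + x2 + x3 + x4 = 0 (mod 2).
known = [1, 0, 1]        # x1, x2, x3 assumed well determined by the channel
x4 = sum(known) % 2      # the check instantly corrects the fourth bit

# After that single correction the check is satisfied with certainty,
# so it conveys no further information about any other bit.
residual = (sum(known) + x4) % 2
assert residual == 0     # the check is now "used up"
print(x4)                # 0
```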