Convolutional Codes, Houshou Chen. May 28, 2012


Convolutional Codes. Houshou Chen, Department of Electrical Engineering, National Chung Hsing University, Taichung 402, Taiwan. May 28, 2012.

Outline:
1. Representation I and II: shift register representation; scalar G matrix representation.
2. Representation III and IV: impulse response representation; polynomial G(D) matrix representation; examples.
3. Trellis of convolutional codes: state diagram; tree diagram; regular trellis diagram.
4. Viterbi decoding: the algorithm; Viterbi for the BSC; Viterbi for the AWGN channel.
5. Turbo codes: properties; performance.

Figure: general convolutional encoder with k information bits per block, (m+1)k register stages, and matrices G_0, ..., G_m combining their contents into the n-bit encoded block v_i sent to the modulator.
For an [n, k, m] convolutional code, the encoder output v_i depends not only on the current input u_i but also on some number of previous inputs: v_i = f(u_i, u_{i-1}, ..., u_{i-m}), where v_i ∈ F_2^n and u_i ∈ F_2^k. A rate k/n convolutional encoder with memory order m can be realized as a k-input, n-output linear sequential circuit with input memory m.

Figure: encoder of an [n = 2, k = 1, m = 2] convolutional code.
v_i^{(1)} = u_i + u_{i-1} + u_{i-2}, v_i^{(2)} = u_i + u_{i-2}, v_i = (v_i^{(1)}, v_i^{(2)}).

Convolutional codes were first introduced by Elias in 1955. There are several methods to describe a convolutional code:
1. sequential circuit: shift register representation
2. algebraic description: scalar G matrix
3. LTI system: impulse responses g(l) in the time domain
4. LTI system: polynomial G(D) matrix in the D domain
5. combinatorial description: state diagram, tree, and trellis

Figure: a convolutional encoder G maps the message sequence (the current k-bit message block u_i together with the past message blocks u_{i-1}, u_{i-2}, ...) to an n-bit codeword block v_i.
u_i = (u_i^{(1)}, ..., u_i^{(k)}), v_i = (v_i^{(1)}, ..., v_i^{(n)}) (time domain)
u(D) = \sum_i u_i D^i, v(D) = \sum_i v_i D^i, v(D) = u(D) G(D) (frequency domain)
There are two types of codes in general:
Block codes: G(D) = G, so that v_i = u_i G.
Convolutional codes: G(D) = G_0 + G_1 D + ... + G_m D^m, so that v_i = u_i G_0 + u_{i-1} G_1 + ... + u_{i-m} G_m.

Shift register representation.
Figure: encoder of a [7, 4, 3] Hamming code.
Block codes are implemented with combinational logic: an information block u_i of length k at time i is mapped to a codeword v_i of length n at time i by a k × n generator matrix for each i, i.e., there is no memory: v_i = u_i G. Usually, n and k are large.

Figure: encoder of a convolutional code.
Convolutional codes are implemented with sequential logic. We need m + 1 matrices G_0, G_1, ..., G_m, each of size k × n:
v_i = u_i G_0 + u_{i-1} G_1 + u_{i-2} G_2 + ... + u_{i-m} G_m.
Usually, n and k are small.

Scalar G matrix of linear block codes. Since v_i = u_i G, we have
[u_0, u_1, u_2, ...] diag(G, G, G, ...) = [v_0, v_1, v_2, ...],
i.e., the scalar generator matrix is block diagonal with G repeated along the diagonal.
Figure: a time-varying trellis of a block code.

Scalar G matrix of convolutional codes. Since v_i = u_i G_0 + u_{i-1} G_1 + u_{i-2} G_2 + ... + u_{i-m} G_m, we have
[u_0, u_1, u_2, ...] [ G_0 G_1 ... G_m
                           G_0 G_1 ... G_m
                               G_0 G_1 ... G_m
                                   ...          ] = [v_0, v_1, ..., v_m, v_{m+1}, ...],
i.e., the scalar generator matrix is semi-infinite and block Toeplitz (banded).
Figure: a time-invariant trellis of the convolutional code with states a = (00) = 0, b = (10) = 1, c = (01) = 2, d = (11) = 3; each branch is labeled with its input/output pair (0/00, 1/11, 0/10, 1/01, 0/11, 1/00, 0/01, 1/10).

Impulse response representation.
We assume that the initial values in the m memory elements are all set to zero (i.e., the conventional trellis starts from the zero state). The output v_i then depends on the current input u_i and the state (u_{i-1}, ..., u_{i-m}). For example, the scalar matrix shows that
v_0 = u_0 G_0 + u_{-1} G_1 + ... + u_{-m} G_m = u_0 G_0, since u_{-1} = u_{-2} = ... = u_{-m} = 0, and
v_1 = u_1 G_0 + u_0 G_1 + u_{-1} G_2 + ... + u_{-m+1} G_m = u_1 G_0 + u_0 G_1.
From the scalar G matrix representation, we have
v_i = u_i G_0 + u_{i-1} G_1 + u_{i-2} G_2 + ... + u_{i-m} G_m = \sum_{l=0}^{m} u_{i-l} G_l.
As an LTI system with k inputs and n outputs, a convolutional code can be described by k·n impulse responses g_i^{(j)} = (g_i^{(j)}(l), 0 ≤ l ≤ m), 1 ≤ i ≤ k and 1 ≤ j ≤ n, which can be obtained from the m + 1 matrices {G_0, ..., G_m}.

Arrange the input sequence u = (u_0, u_1, ..., u_i, ...) as an array whose columns are the blocks u_i = (u_i^{(1)}, ..., u_i^{(k)}). Row j of this array is the jth input stream, 1 ≤ j ≤ k:
u^{(j)} = (u_0^{(j)}, u_1^{(j)}, u_2^{(j)}, u_3^{(j)}, ...).

Similarly, arrange the output sequence v = (v_0, v_1, ..., v_i, ...) as an array whose columns are the blocks v_i = (v_i^{(1)}, ..., v_i^{(n)}). Row j of this array is the jth output stream, 1 ≤ j ≤ n:
v^{(j)} = (v_0^{(j)}, v_1^{(j)}, v_2^{(j)}, v_3^{(j)}, ...).

An [n, k, m] convolutional code can be represented as a MIMO LTI system with k input streams (u^{(1)}, u^{(2)}, ..., u^{(k)}), n output streams (v^{(1)}, v^{(2)}, ..., v^{(n)}), and a k × n impulse response matrix g(l) = {g_i^{(j)}(l)}:
v^{(j)} = u^{(1)} * g_1^{(j)} + u^{(2)} * g_2^{(j)} + ... + u^{(k)} * g_k^{(j)} = \sum_{i=1}^{k} u^{(i)} * g_i^{(j)},
where * denotes convolution. This is the origin of the name convolutional code.
The impulse response g_i^{(j)} from the ith input to the jth output is found by stimulating the encoder with the discrete impulse (1 0 0 0 ...) at the ith input and observing the jth output while all other inputs are fed the all-zero sequence (0 0 0 0 ...).
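
This impulse measurement is easy to mimic in a few lines. The sketch below (an illustrative Python sketch, not taken from the slides) implements the single-input [2, 1, 2] encoder of the earlier example directly from its defining equations and reads off g^{(1)} and g^{(2)} by feeding the impulse (1 0 0 0 ...).

```python
# Minimal sketch: obtain the impulse responses of the [2, 1, 2] encoder
# v_i^(1) = u_i + u_{i-1} + u_{i-2}, v_i^(2) = u_i + u_{i-2} (over GF(2))
# by feeding the discrete impulse (1 0 0 0 ...) at its single input.

def encode(u, m=2):
    u = list(u) + [0] * m                      # append m zeros so the tail is observed
    pad = [0] * m + u                          # u_{-1} = u_{-2} = 0 (zero initial state)
    v1 = [pad[i + 2] ^ pad[i + 1] ^ pad[i] for i in range(len(u))]
    v2 = [pad[i + 2] ^ pad[i] for i in range(len(u))]
    return v1, v2

g1, g2 = encode([1, 0, 0, 0])                  # impulse at the input
print(g1[:3], g2[:3])                          # [1, 1, 1] [1, 0, 1]  -> (7, 5) in octal
```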

Polynomial matrix representation.
The k × n matrix g(l) of impulse responses g_i^{(j)}(l) and the k × n matrix G(D) with entries G_i^{(j)}(D) form a transform pair. Equivalently, we can form the k × n matrix G(D) by
G(D) = [G_i^{(j)}(D)] = G_0 + G_1 D + G_2 D^2 + ... + G_m D^m.
This is the polynomial matrix of a convolutional code. Let U(D) = (U_1(D), U_2(D), ..., U_k(D)) be the D-transform of (u^{(1)}, u^{(2)}, ..., u^{(k)}), and let V(D) = (V_1(D), V_2(D), ..., V_n(D)) be the D-transform of (v^{(1)}, v^{(2)}, ..., v^{(n)}). Then, in the D domain,
V(D) = U(D) G(D).

Example 1 (shift register representation).
Figure: encoder of the (2, 1, 2) convolutional code.
Scalar G matrices: G_0 = (1, 1), G_1 = (1, 0), G_2 = (1, 1).
Impulse responses: g(l) = (111, 101) = (7, 5) in octal.
Polynomial matrix: G(D) = G_0 + G_1 D + G_2 D^2 = (1 + D + D^2, 1 + D^2).

Figure: (2, 1, 2) convolutional code encoder.
Input: u = (1, 1, 1, 0, 1). In the time domain, v can be obtained from the scalar G matrix: multiplying (1, 1, 1, 0, 1) by the banded scalar G matrix gives v = (11, 01, 10, 01, 00, 10, 11).

Figure: (2, 1, 2) convolutional code encoder.
v can also be obtained in the D domain:
U(D) = 1 + D + D^2 + D^4
V(D) = U(D) (1 + D + D^2, 1 + D^2)
V_1(D) = 1 + D^2 + D^5 + D^6
V_2(D) = 1 + D + D^3 + D^6
v^{(1)} = (1, 0, 1, 0, 0, 1, 1), v^{(2)} = (1, 1, 0, 1, 0, 0, 1)
In the time domain: v = (11, 01, 10, 01, 00, 10, 11).
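
The D-domain computation above can be reproduced with carry-less (GF(2)) polynomial multiplication. The following is a minimal sketch (not from the slides) using the convention that bit l of an integer holds the coefficient of D^l.

```python
# Minimal sketch: D-domain encoding of the [2, 1, 2] code by GF(2) polynomial
# multiplication; bit l of an integer is the coefficient of D^l.

def polymul_gf2(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

U  = 0b10111             # U(D) = 1 + D + D^2 + D^4  (u = 1, 1, 1, 0, 1)
G1 = 0b111               # 1 + D + D^2
G2 = 0b101               # 1 + D^2
V1 = polymul_gf2(U, G1)  # 0b1100101 = 1 + D^2 + D^5 + D^6
V2 = polymul_gf2(U, G2)  # 0b1001011 = 1 + D + D^3 + D^6
print(bin(V1), bin(V2))
```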

Example 2 (shift register representation).
Figure: (3, 2, 3) convolutional code encoder.
Scalar G matrices (and impulse responses g(l) = G_l):
G_0 = [1 0 1; 0 1 1], G_1 = [1 1 0; 0 0 1], G_2 = [0 0 0; 1 0 1].
Polynomial matrix:
G(D) = G_0 + G_1 D + G_2 D^2 = [1 + D, D, 1; D^2, 1, 1 + D + D^2].

Let the input be u = (u_0, u_1) with u_0 = (1, 1) and u_1 = (0, 1), i.e., u^{(1)} = (1, 0) and u^{(2)} = (1, 1):
U_1(D) = 1, U_2(D) = 1 + D
(V_1(D), V_2(D), V_3(D)) = (U_1(D), U_2(D)) [1 + D, D, 1; D^2, 1, 1 + D + D^2] = (1 + D + D^2 + D^3, 1, D^3)
v^{(1)} = (1, 1, 1, 1), v^{(2)} = (1, 0, 0, 0), v^{(3)} = (0, 0, 0, 1)
v_0 = (1, 1, 0), v_1 = (1, 0, 0), v_2 = (1, 0, 0), v_3 = (1, 0, 1).
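
As a cross-check of the blockwise form v_i = \sum_l u_{i-l} G_l for this example, here is a minimal sketch (assuming the matrices G_0, G_1, G_2 given above):

```python
# Minimal sketch: blockwise encoding v_i = sum_l u_{i-l} G_l over GF(2)
# for the 2-input, 3-output example above (zero initial state, zero tail).

G = [  # G[l] is the k x n matrix G_l
    [[1, 0, 1], [0, 1, 1]],   # G_0
    [[1, 1, 0], [0, 0, 1]],   # G_1
    [[0, 0, 0], [1, 0, 1]],   # G_2
]

def encode_blocks(u_blocks, G):
    k, n, m = len(G[0]), len(G[0][0]), len(G) - 1
    u = [[0] * k] * m + list(u_blocks) + [[0] * k] * m   # zero state and zero tail
    out = []
    for i in range(m, len(u)):
        v = [0] * n
        for l in range(m + 1):
            for a in range(k):
                for b in range(n):
                    v[b] ^= u[i - l][a] & G[l][a][b]
        out.append(v)
    return out

print(encode_blocks([[1, 1], [0, 1]], G))
# [[1, 1, 0], [1, 0, 0], [1, 0, 0], [1, 0, 1]]  (v_0, v_1, v_2, v_3 as above)
```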

State diagram from the encoder:
G_0 = (1, 1), G_1 = (1, 0), G_2 = (1, 1)
g(l) = (111, 101) = (7, 5)
G(D) = [1 + D + D^2, 1 + D^2] = [1, 1] + [1, 0] D + [1, 1] D^2.

In the state diagram, we use u_i/v_i to indicate the input/output pair during the transition from state s_i to state s_{i+1}.

Figures: state diagram of the (2, 1, 2) encoder.

State diagram from a scalar G matrix.
Current input and current state: u_i and s_i = (u_{i-1}, u_{i-2}). The next state s_{i+1} and the output v_i are functions of the current input u_i and the current state s_i:
next state: s_{i+1} = (u_i, u_{i-1}) = f(s_i, u_i)
output: v_i = u_i (1, 1) + u_{i-1} (1, 0) + u_{i-2} (1, 1) = g(s_i, u_i)
The encoder of a convolutional code can therefore be regarded as a time-invariant finite state machine.
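
Treating the encoder as a finite state machine, the next-state and output functions of the (2, 1, 2) code can be written down directly; a minimal sketch (not from the slides):

```python
# Minimal sketch: the (2, 1, 2) encoder as a time-invariant finite state machine.
# State s_i = (u_{i-1}, u_{i-2}); output v_i = (u_i + u_{i-1} + u_{i-2}, u_i + u_{i-2}).

def step(state, u):
    u1, u2 = state                      # u_{i-1}, u_{i-2}
    v = (u ^ u1 ^ u2, u ^ u2)           # output block v_i
    return (u, u1), v                   # next state s_{i+1} = (u_i, u_{i-1})

state = (0, 0)                          # start from the zero state
for u in [1, 0, 1, 1, 0, 0]:            # info bits followed by m = 2 zero-tail bits
    state, v = step(state, u)
    print(u, state, v)
```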

Tree diagram.
Any codeword of a convolutional code can be represented as a path in a code tree. Assume h input blocks u_1, u_2, ..., u_h followed by u_{h+1} = u_{h+2} = ... = u_{h+m} = 0 (zero-tail termination). The code tree then has h + m + 1 levels, with the leftmost level 0 and the rightmost level h + m. In the first h levels, each node has 2^k branches; after level h, each node has exactly one branch. Each path from the root node to a terminal node forms a codeword of the convolutional code.

Trellis of a convolutional code.
Figure: time-invariant trellis with states a = (00) = 0, b = (10) = 1, c = (01) = 2, d = (11) = 3 and branches labeled with their input/output pairs (0/00, 1/11, 0/10, 1/01, ...).
We can expand the state diagram of a convolutional code to obtain a trellis of infinite length. Since a convolutional code has a regular G matrix, its trellis is time-invariant. In practice, we have to truncate a convolutional code into a block code. There are in general three methods: zero-tail termination, truncation, and tail-biting.

Zero-tail termination: h information blocks followed by m zero blocks. An [n, k] convolutional code becomes an [n(h + m), kh] block code. Here h = 4 and m = 2, so the generator matrix has four block rows, each containing G_0 G_1 G_2 shifted one block column to the right of the previous row. The trellis runs from the zero state at time 0 to the zero state at time h + m, and the rate is R = (k/n) · h/(h + m).
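
The [n(h + m), kh] scalar generator matrix of the zero-tail terminated code can be assembled directly from G_0, ..., G_m; a minimal sketch for the (2, 1, 2) code with h = 4 (illustrative only, not from the slides):

```python
# Minimal sketch: build the kh x n(h+m) generator matrix of the zero-tail
# terminated (2, 1, 2) code (k = 1, n = 2, m = 2, h = 4): block row i holds
# G_0, G_1, G_2 starting at block column i.

G_blocks = [[[1, 1]], [[1, 0]], [[1, 1]]]    # G_0, G_1, G_2 (each k x n)
k, n, m, h = 1, 2, 2, 4

Gm = [[0] * (n * (h + m)) for _ in range(k * h)]
for i in range(h):                           # block row i
    for l, Gl in enumerate(G_blocks):        # place G_l at block column i + l
        for a in range(k):
            for b in range(n):
                Gm[i * k + a][(i + l) * n + b] = Gl[a][b]

for row in Gm:
    print(row)
# kh = 4 rows, n(h+m) = 12 columns, so R = 4/12 = (k/n) * h/(h+m)
```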

Truncation: h information blocks only. An [n, k] convolutional code becomes an [nh, kh] block code. Here h = 4 and m = 2, and the generator matrix is the zero-tail matrix with the last m block columns removed, so the final block rows are truncated. The trellis runs from the zero state at time 0 to any state at time h, and the rate is R = k/n.

Tail-biting: h information blocks. An [n, k] convolutional code becomes an [nh, kh] block code. Here h = 4 and m = 2, and the blocks G_1, G_2 that would extend beyond block column h wrap around to the first block columns, giving a block-circulant generator matrix. The trellis runs from any state at time 0 to the same state at time h, and the rate is R = k/n.

Trellis-based decoding. Three decoding algorithms:
1. Viterbi decoding: 1967, by Viterbi
2. SOVA decoding: 1989, by Hagenauer
3. BCJR decoding: 1974, by Bahl et al.
These algorithms operate on the trellis of the code, and their complexity depends on the number of states in the trellis. We can apply them to both block codes and convolutional codes; the former have irregular trellises and the latter have regular trellises. Viterbi and SOVA decoding minimize the codeword error probability, while BCJR minimizes the information bit error probability.

The Viterbi algorithm.
We use the (zero-tail terminated) trellis of a convolutional code for illustration. Assume that an information sequence u = (u_0, u_1, ..., u_{h-1}) of length K = kh is encoded. Then a codeword v = (v_0, v_1, ..., v_{h-1}, v_h, ..., v_{h+m-1}) of length N = n(h + m) is generated by the convolutional encoder. Thus, with a zero tail of mk zero bits, we have an [n(h + m), kh] linear block code. After the discrete memoryless channel (DMC), a sequence r = (r_0, r_1, ..., r_{h+m-1}) is received. We focus on BSC and AWGN channels.

A maximum likelihood (ML) decoder for a DMC chooses v̂ as the codeword v that maximizes the log-likelihood function log p(r|v). Since the channel is memoryless, we have
p(r|v) = \prod_{l=0}^{h+m-1} p(r_l|v_l) = \prod_{l=0}^{N-1} p(r_l|v_l)
M(r|v) = log p(r|v) = \sum_{l=0}^{h+m-1} log p(r_l|v_l) = \sum_{l=0}^{N-1} log p(r_l|v_l),
where M(r|v) is the path metric, M(r_l|v_l) = log p(r_l|v_l) is the branch metric (block index l), and M(r_l|v_l) is the bit metric (bit index l).
Similarly, a partial path metric for the first t branches of a path can be written as
M([r|v]_t) = \sum_{l=0}^{t-1} M(r_l|v_l) = \sum_{l=0}^{nt-1} M(r_l|v_l).
The bit metric M(r_l|v_l) = log p(r_l|v_l) can be replaced by c_1 log p(r_l|v_l) + c_2, where c_1 > 0.

Summary of the Viterbi algorithm.
The basic operations of the Viterbi algorithm are addition, comparison, and selection (ACS). At time t + 1, for each state s_{t+1} and each of the 2^k previous states s_t connected to s_{t+1}, we compute a candidate partial metric for s_{t+1} by adding the branch metric of the branch between s_t and s_{t+1} to the partial metric of s_t, and we select the largest. For each state s_t at time t we store the optimal path (survivor) together with its partial metric. The final survivor v̂ of the Viterbi algorithm is the maximum likelihood path; that is,
M(r|v̂) ≥ M(r|v) for all v ≠ v̂.

Figure: assume the Viterbi algorithm has finished at time k (the survivor at each state is shown).

Viterbi decoding for a binary-input, quaternary-output DMC.
A (3, 1, 2) convolutional code with h = 5 and G(D) = [1 + D, 1 + D^2, 1 + D + D^2].
The bit metrics M(r_l|v_l) for the four channel output symbols are given by a table, and the quaternary received sequence r consists of seven 3-symbol blocks (shown on the slide).
The final survivor is v̂ = (111, 010, 110, 011, 000, 000, 000), and the decoded information is û = (1, 1, 0, 0, 0).

Viterbi decoding for the BSC.
In a BSC with transition probability p < 1/2, the received sequence r is binary and the log-likelihood function becomes
log p(r|v) = log[ p^{d(r,v)} (1 - p)^{N - d(r,v)} ] = d(r, v) log(p/(1 - p)) + N log(1 - p).
Because log(p/(1 - p)) < 0 and N log(1 - p) is a constant for all v, an ML decoder for a BSC chooses v̂ as the codeword that minimizes the Hamming distance
d(r, v) = \sum_{l=0}^{h+m-1} d(r_l, v_l) = \sum_{l=0}^{N-1} d(r_l, v_l).
Hence the Hamming distance d(r_l, v_l) can be used as a branch metric, and the algorithm finds the path through the trellis with the smallest Hamming distance to r.

A (2, 1, 2) convolutional code with h = 4 and G(D) = [1 + D + D^2, 1 + D^2].
The received sequence is r = (0, 0, 00, 00, 0, ).
The final survivor is v̂ = (11, 10, 00, 01, 01, 11), and the decoded information sequence is û = (1, 0, 1, 1). That final survivor has a Hamming distance of 2 from r.

Figure: the first steps of the Viterbi algorithm on the trellis for r = (0, 0, 00, 00, 0, ), showing the partial metric of the survivor at each state (discarded metrics in parentheses).

Figure: later steps of the Viterbi algorithm for r = (0, 0, 00, 00, 0, ), showing the survivor metric at each state (discarded metrics in parentheses).

Figure: the complete trellis with the final survivor highlighted.
The decoded codeword is v̂ = (11, 10, 00, 01, 01, 11) and the decoded information is û = (1, 0, 1, 1).
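
A hard-decision Viterbi decoder for this terminated (2, 1, 2) code fits in a short sketch. The received sequence below is an assumed example containing two bit errors in the codeword of u = (1, 0, 1, 1); it is illustrative and not necessarily the slides' exact r.

```python
# Minimal sketch: hard-decision Viterbi decoding (Hamming branch metric) for the
# zero-tail terminated (2, 1, 2) code with G(D) = [1 + D + D^2, 1 + D^2].

def branch(state, u):
    u1, u2 = state
    return (u, u1), (u ^ u1 ^ u2, u ^ u2)           # next state, output block

def viterbi(r_blocks, h, m=2):
    survivors = {(0, 0): (0, [])}                   # state -> (metric, input bits)
    for t, r in enumerate(r_blocks):
        new = {}
        inputs = (0, 1) if t < h else (0,)          # only zeros in the m-block tail
        for state, (metric, path) in survivors.items():
            for u in inputs:
                nxt, v = branch(state, u)
                d = metric + (v[0] ^ r[0]) + (v[1] ^ r[1])
                if nxt not in new or d < new[nxt][0]:
                    new[nxt] = (d, path + [u])      # compare and select
        survivors = new
    metric, path = survivors[(0, 0)]                # terminated: end in the zero state
    return path[:h], metric

r = [(1, 0), (1, 0), (0, 0), (0, 0), (0, 1), (1, 1)]   # assumed received sequence
print(viterbi(r, h=4))                                 # ([1, 0, 1, 1], 2)
```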

Viterbi decoding for AWGN.
Consider the AWGN channel with binary antipodal input, i.e., v = (v_0, v_1, ..., v_{N-1}) with v_l ∈ {+1, -1}. The path metric can be simplified as
M(r|v) = log p(r|v) = \sum_{l=0}^{N-1} log( (1/\sqrt{\pi N_0}) e^{-(r_l - v_l)^2 / N_0} ) = -(1/N_0) \sum_{l=0}^{N-1} (r_l - v_l)^2 + (N/2) log(1/(\pi N_0)).
A codeword v that minimizes the Euclidean distance \sum_{l=0}^{N-1} (r_l - v_l)^2 also maximizes the log-likelihood function log p(r|v). The Viterbi algorithm finds the optimal path v with minimum Euclidean distance from r.

The path metric can also be simplified as
M(r|v) = log p(r|v) = -(1/N_0) \sum_{l=0}^{N-1} (r_l^2 - 2 r_l v_l + v_l^2) + (N/2) log(1/(\pi N_0)) = (2/N_0) \sum_{l=0}^{N-1} r_l v_l - (1/N_0)(||r||^2 + N) + (N/2) log(1/(\pi N_0)).
A codeword v that maximizes the correlation r · v = \sum_{l=0}^{N-1} r_l v_l also maximizes the log-likelihood function log p(r|v).
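
The equivalence of the two forms is easy to check numerically: for antipodal (+1/-1) codewords of fixed length, the codeword that maximizes the correlation with r is also the one that minimizes the squared Euclidean distance (a small sketch with arbitrary illustrative numbers):

```python
# Minimal sketch: for antipodal codewords, maximizing correlation with r is
# equivalent to minimizing squared Euclidean distance, since
# ||r - v||^2 = ||r||^2 - 2 <r, v> + N.

r = [0.8, -1.2, 0.3, -0.7, 1.1, -0.2]
candidates = [
    [+1, -1, +1, -1, +1, -1],
    [+1, -1, -1, -1, +1, +1],
    [-1, +1, +1, +1, -1, -1],
]

def corr(r, v):
    return sum(ri * vi for ri, vi in zip(r, v))

def sqdist(r, v):
    return sum((ri - vi) ** 2 for ri, vi in zip(r, v))

best_corr = max(candidates, key=lambda v: corr(r, v))
best_dist = min(candidates, key=lambda v: sqdist(r, v))
print(best_corr == best_dist)   # True
```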

What are turbo codes?
Turbo codes, introduced by Berrou and Glavieux at ICC '93, are Shannon-capacity-approaching codes. Turbo codes can come closer to approaching Shannon's limit than any other class of error-correcting codes, and they achieve this remarkable performance with relatively low-complexity encoding and decoding algorithms.

We have:
1. parallel concatenated convolutional codes (PCCC, turbo codes)
2. parallel concatenated block codes (PCBC, block turbo codes)
3. serial concatenated convolutional codes
4. serial concatenated block codes

One decoder (D) for every encoder (C). Iterative decoding: D1, D2, D1, D2, D1, ... Improved bit-error-rate performance.
Figure: turbo encoder (the information U is fed to encoder C_1 directly and to encoder C_2 through the interleaver Π) and turbo decoder (decoders D_1 and D_2 exchange information through Π and Π^{-1} and produce Û).
Encoder: produces a code with random-like properties (interleaver).
Decoder: uses soft-in soft-out iterative decoding (the turbo principle).
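
The parallel concatenation above can be sketched with two recursive systematic convolutional (RSC) component encoders and an interleaver. The sketch below assumes the commonly used RSC with feedback 1 + D + D^2 and feedforward 1 + D^2 and a toy fixed interleaver; it is illustrative only and not necessarily the slides' component code.

```python
# Minimal sketch of a parallel concatenated (turbo) encoder: the information bits
# are sent systematically, encoder 1 produces parity from u, and encoder 2
# produces parity from the interleaved sequence. RSC component: feedback
# 1 + D + D^2, feedforward 1 + D^2 (an assumed choice); trellis termination omitted.

def rsc_parity(u):
    s1 = s2 = 0
    parity = []
    for bit in u:
        a = bit ^ s1 ^ s2          # feedback bit (taps of 1 + D + D^2)
        parity.append(a ^ s2)      # feedforward taps 1 and D^2
        s1, s2 = a, s1
    return parity

def turbo_encode(u, interleaver):
    u_pi = [u[i] for i in interleaver]          # permuted information bits
    return u, rsc_parity(u), rsc_parity(u_pi)   # systematic, parity 1, parity 2

u = [1, 0, 1, 1, 0, 0, 1, 0]
interleaver = [3, 7, 0, 5, 2, 6, 1, 4]          # a fixed toy permutation
print(turbo_encode(u, interleaver))
```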

Turbo codes get their name because the decoder uses feedback, like a turbo engine. The term turbo in turbo coding is related to decoding, not encoding: the feedback of extrinsic information between the SISO decoders in the iterative decoding mimics the feedback of exhaust gases in a turbo engine.

Power efficiency of existing standards.
Figure: spectral efficiency (code rate R) versus Eb/N0 (in dB) for standard binary channel codes at an arbitrarily low BER (P_b = 10^{-5}): uncoded BPSK; Mariner 1969; Odenwalder convolutional codes 1976; Pioneer and Voyager 1977; IS-95; Iridium 1998; Galileo LGA 1996; Galileo BVD 1992; the turbo code of 1993; and an LDPC code of 2001 (Chung, Forney, Richardson, Urbanke); together with the BPSK capacity bound and the Shannon capacity bound.

Figure: performance of the (37, 21, 65536) turbo code with G(D) = [1, (1 + D^4)/(1 + D + D^2 + D^3 + D^4)].

References:
S. Lin and D. J. Costello, Jr., Error Control Coding, 2nd ed., Prentice Hall.
Todd K. Moon, Error Correction Coding, John Wiley, 2005.
Ezio Biglieri, Coding for Wireless Channels, Springer, 2005.
J. G. Proakis, Digital Communication, 4th ed., McGraw-Hill, Chapter 8.
