ELEC 405/511 Error Control Coding Binary Convolutional Codes
Peter Elias (1923-2001), Coding for Noisy Channels, 1955
With block codes, the input data is divided into blocks of length k and the codewords have blocklength n. With convolutional codes, the input data is encoded in a continuous manner: the output is a single codeword, encoding is done using only shift registers and XOR gates (binary arithmetic), and the encoder output is a function of the current input bits and the shift register contents.
Rate 1/2 Linear Convolutional Encoder (Figure 11.1)
Encoding
Input data stream x = (x_0, x_1, x_2, ...)
Output data streams y^(0) = (y_0^(0), y_1^(0), y_2^(0), ...) and y^(1) = (y_0^(1), y_1^(1), y_2^(1), ...)
Multiplex into a single data stream y = (y_0^(0), y_0^(1), y_1^(0), y_1^(1), y_2^(0), y_2^(1), ...)
ETSI TS 136 212 V8.4.0 (2008-11) Technical Specification: LTE; Evolved Universal Terrestrial Radio Access (E-UTRA); Multiplexing and channel coding. Rate 1/3 Convolutional Encoder
Rate 2/3 Linear Convolutional Encoder
Rate 1/2 Linear Convolutional Encoder
Example 11-1
Input data stream x = (10110), first input bit on the left
Output data streams y^(0) = (10001010), y^(1) = (11111110)
Multiplex into a single data stream y = (11,01,01,01,11,01,11,00)
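The encoding in Example 11-1 can be sketched directly as a shift register with XOR taps. This is a minimal illustration, not code from the course; the function name `conv_encode` is my own, and the generator sequences g^(0) = (1011), g^(1) = (1101) are the ones used in this example.

```python
# Sketch of a rate-1/2 binary convolutional encoder: a shift register
# plus XOR taps given by the generator sequences.

def conv_encode(x, g0, g1):
    """Encode bit list x; return the multiplexed output stream."""
    m = len(g0) - 1           # memory order (3 here)
    x = list(x) + [0] * m     # append m zeros to flush the registers
    sr = [0] * m              # shift register contents (the encoder state)
    out = []
    for bit in x:
        window = [bit] + sr   # current input followed by past inputs
        y0 = sum(b * g for b, g in zip(window, g0)) % 2
        y1 = sum(b * g for b, g in zip(window, g1)) % 2
        out += [y0, y1]       # multiplex the two output streams
        sr = [bit] + sr[:-1]  # shift the new bit in
    return out

y = conv_encode([1, 0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1])
print(y)  # [1,1, 0,1, 0,1, 0,1, 1,1, 0,1, 1,1, 0,0], i.e. 11,01,01,01,11,01,11,00
```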
Fractional Rate Loss
Message length L
Codeword length is L(n/k) + nm
Effective code rate is R_effective = L / (L(n/k) + nm)
Fractional rate loss is γ = (R − R_effective)/R = km/(L + km)
Example 11-2
From Example 11-1, the message length is L = 5
Codeword length is L(n/k) + nm = 5(2/1) + 2·3 = 16
Effective code rate is R_effective = L/(L(n/k) + nm) = 5/(10 + 6) = 5/16
Fractional rate loss is γ = (R − R_effective)/R = km/(L + km) = 3/8
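The arithmetic of Example 11-2 can be checked in a few lines (variable names are illustrative, not from the text):

```python
# Example 11-2: effective rate and fractional rate loss for L = 5,
# a rate 1/2 code (k = 1, n = 2) with memory order m = 3.
L, k, n, m = 5, 1, 2, 3
codeword_len = L * n // k + n * m         # 5*2 + 2*3 = 16 bits
R_effective = L / codeword_len            # 5/16
gamma = k * m / (L + k * m)               # fractional rate loss km/(L+km)
print(codeword_len, R_effective, gamma)   # 16 0.3125 0.375
```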
Fractional Rate Loss
Typically k = 1 and L is large: L ≫ m
In this case R_effective = L/((L + m)n) → 1/n as L → ∞
Fractional rate loss becomes γ = m/(L + m) → 0
Memory order m: length of the longest shift register, m = max m_i
Constraint length: K = 1 + max m_i
Total memory (overall constraint length): sum of all memory elements, M = Σ m_i
Number of input streams: k
Number of output streams: n
Code rate: k/n
State: contents of the shift registers
Initial state: 00...0
End state: 00...0 (input zeros at the end)
Punctured Convolutional Codes
Complexity is a function of the total memory or overall constraint length, so codes with k > 1 are rarely used. Puncturing allows for higher code rates from a rate 1/2 code.
For rate 2/3, delete one of every 4 output bits:
input x_1 x_2 → output y_1 y_2 y_3 y_4, remove y_3
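The puncturing rule above can be sketched as a simple filter on the rate-1/2 output stream (a sketch under the stated pattern; the helper name `puncture` is my own). Two input bits produce four coded bits, and keeping three of them yields rate 2/3.

```python
# Delete y3 of every group of 4 rate-1/2 output bits to get rate 2/3.

def puncture(stream):
    out = []
    for i, bit in enumerate(stream):
        if i % 4 != 2:        # index 2 is y3 within each (y1,y2,y3,y4) group
            out.append(bit)
    return out

print(puncture([1, 1, 0, 1, 0, 1, 0, 1]))  # [1, 1, 1, 0, 1, 1]
```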
Rate 1/2 Linear Convolutional Encoder
Rate 2/3 Linear Convolutional Encoder
Generator Sequences
g_j^(i) represents the ith output of an encoder when the jth input is a single 1 followed by a string of zeros: x^(j) = δ = (1000...)
Rate 1/2 example (K = 4): g^(0) = (1011), g^(1) = (1101)
Rate 2/3 example: g_0^(0) = (1001), g_1^(0) = (0110), g_0^(1) = (0111), g_1^(1) = (1010), g_0^(2) = (1100), g_1^(2) = (0100)
Why "Convolutional" Code?
For the rate 1/2 (2,1,4) code with impulse responses g^(0) = (1011) and g^(1) = (1101), the output of the encoder is the convolution
y^(0) = x * g^(0)
y^(1) = x * g^(1)
If x = (10110), the output is
y^(0) = (10110)*(1011) = (10001010)
y^(1) = (10110)*(1101) = (11111110)
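The claim that each output stream is a discrete convolution over GF(2) can be checked numerically (a sketch; `gf2_conv` is my own helper name):

```python
# Discrete convolution with all arithmetic reduced mod 2 (XOR).

def gf2_conv(x, g):
    out = [0] * (len(x) + len(g) - 1)
    for i, xi in enumerate(x):
        for j, gj in enumerate(g):
            out[i + j] ^= xi & gj
    return out

x = [1, 0, 1, 1, 0]
print(gf2_conv(x, [1, 0, 1, 1]))  # [1,0,0,0,1,0,1,0] = 10001010
print(gf2_conv(x, [1, 1, 0, 1]))  # [1,1,1,1,1,1,1,0] = 11111110
```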
Rate 1/2 Linear Convolutional Encoder
(2,1,3) code: g^(0) = (111), g^(1) = (101)
A convolutional code is in fact a linear block code for finite length L. For a rate 1/2 code, the generator matrix can be formed by interleaving g^(0) and g^(1):
G = [ g_0^(0)g_0^(1)  g_1^(0)g_1^(1)  ...  g_m^(0)g_m^(1)
         g_0^(0)g_0^(1)  g_1^(0)g_1^(1)  ...  g_m^(0)g_m^(1)
            ...                                             ]
Each row is the previous row shifted right by n = 2 positions (blank entries are 0); G has dimension L × 2(m + L).
For the rate 1/2 (2,1,4) code, each row is (11 01 10 11) shifted right by 2:
G = [ 11 01 10 11
         11 01 10 11
            ...       ]
For the rate 2/3 (3,2,4) code, each pair of rows is shifted right by 3:
G = [ 101 011 010 110
      010 101 110 000
          101 011 010 110
          010 101 110 000
              ...            ]
Example 11-3
(2,1,4) code, input x = (1011), L = 4, m = 3
G has dimension 4 × 14:
G = [ 11 01 10 11 00 00 00
      00 11 01 10 11 00 00
      00 00 11 01 10 11 00
      00 00 00 11 01 10 11 ]
y = xG = (11,01,01,01,11,01,11)
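Example 11-3 can be reproduced as matrix algebra over GF(2): build G by shifting the interleaved generator row, then compute y = xG mod 2. A minimal sketch with no libraries; the variable names are mine.

```python
# Build the 4 x 14 generator matrix of Example 11-3 and encode x = (1011).
row = [1, 1, 0, 1, 1, 0, 1, 1]           # g0/g1 interleaved: 11 01 10 11
L, n, m = 4, 2, 3
width = n * (L + m)                       # 14 columns
G = [[0] * (n * i) + row + [0] * (width - n * i - len(row)) for i in range(L)]

x = [1, 0, 1, 1]
y = [sum(xi * G[i][j] for i, xi in enumerate(x)) % 2 for j in range(width)]
print(y)  # [1,1, 0,1, 0,1, 0,1, 1,1, 0,1, 1,1] = (11,01,01,01,11,01,11)
```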
Polynomial Representation
Introduce the delay operator D:
x = (x_0, x_1, x_2, ...)  ↔  X(D) = x_0 + x_1·D + x_2·D^2 + ...
y^(0) = (y_0^(0), y_1^(0), y_2^(0), ...)  ↔  Y^(0)(D) = y_0^(0) + y_1^(0)·D + y_2^(0)·D^2 + ...
y^(1) = (y_0^(1), y_1^(1), y_2^(1), ...)  ↔  Y^(1)(D) = y_0^(1) + y_1^(1)·D + y_2^(1)·D^2 + ...
g^(i)  ↔  G^(i)(D)
(2,1,4) Convolutional Encoder
Polynomial Representation
For the (2,1,4) code example:
G^(0)(D) = 1 + D^2 + D^3
G^(1)(D) = 1 + D + D^3
X(D) = 1 + D^2 + D^3
Use polynomial multiplication to obtain the output:
Y^(0)(D) = X(D)G^(0)(D) = 1 + D^4 + D^6
Y^(1)(D) = X(D)G^(1)(D) = 1 + D + D^2 + D^3 + D^4 + D^5 + D^6
Y(D) = Y^(0)(D^2) + D·Y^(1)(D^2) = 1 + D + D^3 + D^5 + D^7 + D^8 + D^9 + D^11 + D^12 + D^13
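The D-transform computation above can be sketched with GF(2) polynomial products, storing polynomials as coefficient lists indexed by the power of D (helper name `polymul_gf2` is mine):

```python
# (2,1,4) example: multiply X(D) by each generator polynomial over GF(2),
# then interleave via Y(D) = Y0(D^2) + D*Y1(D^2).

def polymul_gf2(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

X  = [1, 0, 1, 1]          # X(D)    = 1 + D^2 + D^3
G0 = [1, 0, 1, 1]          # G^(0)(D) = 1 + D^2 + D^3
G1 = [1, 1, 0, 1]          # G^(1)(D) = 1 + D + D^3
Y0 = polymul_gf2(X, G0)    # 1 + D^4 + D^6
Y1 = polymul_gf2(X, G1)    # 1 + D + D^2 + D^3 + D^4 + D^5 + D^6

Y = [0] * (2 * len(Y0))    # interleave: coefficient of D^(2i) from Y0, D^(2i+1) from Y1
for i, (a, b) in enumerate(zip(Y0, Y1)):
    Y[2*i], Y[2*i + 1] = a, b
print([p for p, c in enumerate(Y) if c])  # [0, 1, 3, 5, 7, 8, 9, 11, 12, 13]
```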
Rate 1/2 (2,1,3) Code
G^(0)(D) = 1 + D + D^2
G^(1)(D) = 1 + D^2
X(D) = 1 + D^3 + D^4
Y^(0)(D) = X(D)G^(0)(D) = 1 + D + D^2 + D^3 + D^6
Y^(1)(D) = X(D)G^(1)(D) = 1 + D^2 + D^3 + D^4 + D^5 + D^6
Y(D) = Y^(0)(D^2) + D·Y^(1)(D^2) = 1 + D + D^2 + D^4 + D^5 + D^6 + D^7 + D^9 + D^11 + D^12 + D^13
For 3 outputs: Y(D) = Y^(0)(D^3) + D·Y^(1)(D^3) + D^2·Y^(2)(D^3)
For a multiple-input, multiple-output system:
Y^(i)(D) = Σ_{j=0}^{k−1} X^(j)(D)·G_j^(i)(D)
         = [X^(0)(D)  X^(1)(D)  ...  X^(k−1)(D)] · [G_0^(i)(D), G_1^(i)(D), ..., G_{k−1}^(i)(D)]^T
Y(D) = (Y^(0)(D), Y^(1)(D), ..., Y^(n−1)(D)) = X(D)G(D)
     = [X^(0)(D)  X^(1)(D)  ...  X^(k−1)(D)] · [ G_0^(0)(D)      G_0^(1)(D)      ...  G_0^(n−1)(D)
                                                 G_1^(0)(D)      G_1^(1)(D)      ...  G_1^(n−1)(D)
                                                 ...
                                                 G_{k−1}^(0)(D)  G_{k−1}^(1)(D)  ...  G_{k−1}^(n−1)(D) ]
Y(D) = Σ_{i=0}^{n−1} D^i·Y^(i)(D^n)
G(D) is called the transfer-function matrix for the encoder.
For the (2,1,4) code
G(D) = [1 + D^2 + D^3    1 + D + D^3]
For the (3,2,4) code
G_0^(0) = 1 + D^3,        G_1^(0) = D + D^2
G_0^(1) = D + D^2 + D^3,  G_1^(1) = 1 + D^2
G_0^(2) = 1 + D,          G_1^(2) = D
G(D) = [ 1 + D^3    D + D^2 + D^3    1 + D
         D + D^2    1 + D^2          D     ]
Example 11-4: Rate 2/3 Code
Let x = (11,10,11): X^(0)(D) = 1 + D + D^2, X^(1)(D) = 1 + D^2
Y(D) = X(D)G(D)
     = [1 + D + D^2    1 + D^2] · [ 1 + D^3    D + D^2 + D^3    1 + D
                                    D + D^2    1 + D^2          D     ]
     = [1 + D^5    1 + D + D^3 + D^4 + D^5    1 + D]
To convert from matrix form to the output string:
Y(D) = Σ_{i=0}^{2} D^i·Y^(i)(D^3)
     = (1 + D^15) + D(1 + D^3 + D^9 + D^12 + D^15) + D^2(1 + D^3)
     = 1 + D + D^2 + D^4 + D^5 + D^10 + D^13 + D^15 + D^16
The output codeword is then y = (111,011,000,010,010,110)
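Example 11-4 can be cross-checked by multiplying the input row vector by G(D) over GF(2), with polynomials as coefficient lists (a sketch; `pmul`/`padd` are my own helpers):

```python
# Rate 2/3 example: Y^(i)(D) = X^(0) G_0^(i) + X^(1) G_1^(i) over GF(2).

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

def padd(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [x ^ y for x, y in zip(a, b)]

X = [[1, 1, 1], [1, 0, 1]]                # X^(0) = 1+D+D^2, X^(1) = 1+D^2
G = [[[1, 0, 0, 1], [0, 1, 1, 1], [1, 1]],   # G_0^(0), G_0^(1), G_0^(2)
     [[0, 1, 1],    [1, 0, 1],    [0, 1]]]   # G_1^(0), G_1^(1), G_1^(2)
Y = [padd(pmul(X[0], G[0][i]), pmul(X[1], G[1][i])) for i in range(3)]
for i, y in enumerate(Y):
    print(i, [p for p, c in enumerate(y) if c])
# 0 [0, 5]           -> 1 + D^5
# 1 [0, 1, 3, 4, 5]  -> 1 + D + D^3 + D^4 + D^5
# 2 [0, 1]           -> 1 + D
```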
Systematic Convolutional Codes
Systematic Convolutional Codes
The input bits are reproduced unaltered in the codeword.
G = [ I P_0   0 P_1   0 P_2   ...   0 P_m
              I P_0   0 P_1   ...   0 P_{m−1}   0 P_m
                      ...                              ]
P_l = [ g_{0,l}^(k)      g_{0,l}^(k+1)      ...  g_{0,l}^(n−1)
        g_{1,l}^(k)      g_{1,l}^(k+1)      ...  g_{1,l}^(n−1)
        ...
        g_{k−1,l}^(k)    g_{k−1,l}^(k+1)    ...  g_{k−1,l}^(n−1) ]
I: k × k identity matrix, 0: k × k all-zero matrix, P_l: k × (n−k) matrix
Systematic Encoder Example
(2,1,4) systematic encoder: g^(0) = (1000), g^(1) = (1101)
Systematic Encoder
The generator matrix is
G = [ 11 01 00 01
         11 01 00 01
            ...       ]
G(D) = [1    1 + D + D^3]
For X(D) = 1 + D^2 + D^3:
Y^(0)(D) = X(D)G^(0)(D) = X(D) = 1 + D^2 + D^3
Y^(1)(D) = X(D)G^(1)(D) = 1 + D + D^2 + D^3 + D^4 + D^5 + D^6
Y(D) = Y^(0)(D^2) + D·Y^(1)(D^2)
     = 1 + D^4 + D^6 + D(1 + D^2 + D^4 + D^6 + D^8 + D^10 + D^12)
     = 1 + D + D^3 + D^4 + D^5 + D^6 + D^7 + D^9 + D^11 + D^13
The output codeword is then y = (11,01,11,11,01,01,01)
Equivalently, y = xG with x = (1011) and
G = [ 11 01 00 01 00 00 00
      00 11 01 00 01 00 00
      00 00 11 01 00 01 00
      00 00 00 11 01 00 01 ]
LTE Turbo Code
G(D) = [1    g_1(D)/g_0(D)]
g_0(D) = 1 + D^2 + D^3
g_1(D) = 1 + D + D^3
State Diagrams
The state of an encoder is defined as the shift register contents. For an (n,1,K) code there are 2^(K−1) = 2^m states.
Each branch in the state diagram has a label of the form X/Y, where X is the input bit that causes the state transition and Y is the corresponding output bits.
Path: a sequence of nodes connected by branches
Circuit: a path that starts and stops at the same node
Loop: a circuit that does not enter any node more than once
Rate 1/2 Linear Convolutional Encoder
State Diagram for the (2,1,4) Encoder
Example 11-5: Consider the input x = (1011). Begin at state S_0.
input  output  state
  -      -      S_0
  1     11      S_1
  0     01      S_2
  1     01      S_5
  1     01      S_3
  0     11      S_6
  0     01      S_4
  0     11      S_0
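Example 11-5 can be traced as a finite-state machine. This is a sketch: the `trace` helper is my own, and the S_i numbering (newest register bit as the least significant bit) is my reading of the table, which it reproduces.

```python
# Walk the (2,1,4) encoder state diagram for x = (1011) plus flush zeros.

def trace(x, g0=(1, 0, 1, 1), g1=(1, 1, 0, 1)):
    sr = [0, 0, 0]                       # state: past inputs, newest first
    rows = []
    for bit in x + [0, 0, 0]:            # flush back to S_0 with zeros
        w = [bit] + sr
        y = (sum(a * b for a, b in zip(w, g0)) % 2,
             sum(a * b for a, b in zip(w, g1)) % 2)
        sr = [bit] + sr[:-1]
        state = sum(b << j for j, b in enumerate(sr))  # newest bit is LSB
        rows.append((bit, y, f"S{state}"))
    return rows

for row in trace([1, 0, 1, 1]):
    print(row)
# (1, (1, 1), 'S1'), (0, (0, 1), 'S2'), (1, (0, 1), 'S5'), (1, (0, 1), 'S3'),
# (0, (1, 1), 'S6'), (0, (0, 1), 'S4'), (0, (1, 1), 'S0')
```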
Catastrophic Codes
Catastrophic code: a code for which an infinite-weight input causes a finite-weight output.
Result: a finite number of channel errors can cause an infinite number of decoding errors.
In terms of the state diagram: a loop corresponding to a nonzero input for which all the output bits are zero denotes a catastrophic code.
Catastrophic Encoder
g^(0) = (1111), g^(1) = (1111)
x = (111111...) gives y = (11,00,11,00,00,00,...)
Now suppose r = (10,00,00,00,00,00,...)
A maximum likelihood decoder will choose c' = (00,00,00,00,00,00,...), which corresponds to x' = (000000...) and an infinite number of errors.
State Diagram for a Catastrophic Code 46
Another Catastrophic Encoder
g^(0) = (0111), g^(1) = (1110)
x = (1,1,0,1,1,0,1,1,...) gives y = (01,10,00,00,00,00,...)
The Corresponding State Diagram 48
Theorem 11-1 (Catastrophic Codes)
Let C be a rate 1/n convolutional code with transfer-function matrix G(D) whose generator sequences have the transforms {G^(0)(D), G^(1)(D), ..., G^(n−1)(D)}. C is not catastrophic if and only if GCD(G^(0)(D), G^(1)(D), ..., G^(n−1)(D)) = D^l for some l ≥ 0.
Catastrophic Code Examples
Example 11-6 (previous code): g^(0) = (0111), g^(1) = (1110)
G^(0)(D) = D + D^2 + D^3, G^(1)(D) = 1 + D + D^2
GCD(G^(0)(D), G^(1)(D)) = 1 + D + D^2 ≠ D^l
For the other code: g^(0) = (1111), g^(1) = (1111)
GCD(G^(0)(D), G^(1)(D)) = 1 + D + D^2 + D^3 ≠ D^l
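Theorem 11-1 can be applied mechanically with a GF(2) polynomial GCD (Euclid's algorithm), storing each polynomial as an integer bit mask with bit i holding the coefficient of D^i. A sketch; the function names are my own.

```python
from functools import reduce

# GF(2) polynomial GCD: repeatedly reduce a mod b by XOR-ing shifted b.
def gf2_gcd(a, b):
    while b:
        while a and a.bit_length() >= b.bit_length():
            a ^= b << (a.bit_length() - b.bit_length())
        a, b = b, a
    return a

def is_catastrophic(*gens):
    g = reduce(gf2_gcd, gens)
    return g & (g - 1) != 0          # more than one term set -> GCD is not D^l

# Example 11-6: G^(0)(D) = D+D^2+D^3 (0b1110), G^(1)(D) = 1+D+D^2 (0b0111)
print(is_catastrophic(0b1110, 0b0111))   # True (catastrophic)
# The (2,1,4) code: G^(0)(D) = 1+D^2+D^3 (0b1101), G^(1)(D) = 1+D+D^3 (0b1011)
print(is_catastrophic(0b1101, 0b1011))   # False
```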
The Weight Enumerator
Label each branch in the state diagram with X^i·Y^j, where i is the weight of the output sequence and j is the weight of the input sequence. The transfer function
T(X,Y) = Σ_{i,j} a_{i,j}·X^i·Y^j
can be obtained using Mason's formula.
The Weight Enumerator
Each branch can also be labeled with X^i·L^k, where i is the weight of the output sequence and k is the branch length. The transfer function
T(X,L) = Σ_{i,k} b_{i,k}·X^i·L^k
can be obtained using Mason's formula.
(2,1,3) Convolutional Encoder
State Diagram for the (2,1,3) Encoder
Solid lines represent input 0, dashed lines represent input 1.
state  SR contents
  a       00
  b       10
  c       01
  d       11
Example: Consider the input sequence x = (10011). Begin at state a.
input:      1   0   0   1   1
output:    11  10  11  11  01
new state:  b   c   a   b   d
To return to the all-zero state:
input:      0   0
output:    01  11
new state:  c   a
Modified State Diagram for the (2,1,3) Code (figure): branches labeled X^i·Y^j, where X tracks the output weight and Y the input weight.
Modified State Diagram for the (2,1,3) Code (figure): branches labeled X^i·L, where X tracks the output weight and L the branch length.
Mason's Formula
T(X,Y) = Σ_i F_i·Δ_i / Δ
F_i: gain of the ith forward path
Δ: graph determinant,
Δ = 1 − Σ_l C_l + Σ_{(l,m)} C_l·C_m − Σ_{(l,m,n)} C_l·C_m·C_n + ...
where the first sum is over all loops {L_l}, the second over all pairs of non-touching loops {L_l, L_m}, the third over all triples of non-touching loops {L_l, L_m, L_n}, and so on.
Δ_i: cofactor of path K_i, computed like Δ but using only the loops that do not touch K_i:
Δ_i = 1 − Σ C_l + Σ C_l·C_m − Σ C_l·C_m·C_n + ...  (loops not touching K_i)
(2,1,3) Code
Two forward paths:
a_0 b c a_1:   F_1 = X^2Y · X · X^2 = X^5Y
a_0 b d c a_1: F_2 = X^2Y · XY · X · X^2 = X^6Y^2
Three loops:
d d:     C_1 = XY
b d c b: C_2 = XY · X · Y = X^2Y^2
b c b:   C_3 = X · Y = XY
Δ = 1 − (XY + X^2Y^2 + XY) + XY·XY = 1 − 2XY
Δ_1 = 1 − XY, Δ_2 = 1
(2,1,3) Code
T(X,Y) = Σ_i F_i·Δ_i / Δ = [X^5Y(1 − XY) + X^6Y^2] / (1 − 2XY) = X^5Y / (1 − 2XY)
T(X) = T(X,Y)|_{Y=1} = X^5 / (1 − 2X) = X^5 + 2X^6 + 4X^7 + ...
(2,1,3) Code
Two forward paths:
a_0 b c a_1:   F_1 = X^2L · XL · X^2L = X^5L^3
a_0 b d c a_1: F_2 = X^2L · XL · XL · X^2L = X^6L^4
Three loops:
d d:     C_1 = XL
b d c b: C_2 = XL · XL · L = X^2L^3
b c b:   C_3 = XL · L = XL^2
Δ = 1 − (XL + X^2L^3 + XL^2) + XL·XL^2 = 1 − XL − XL^2
Δ_1 = 1 − XL, Δ_2 = 1
(2,1,3) Code
T(X,L) = Σ_i F_i·Δ_i / Δ = [X^5L^3(1 − XL) + X^6L^4] / (1 − XL(1 + L)) = X^5L^3 / (1 − XL(1 + L))
T(X) = T(X,L)|_{L=1} = X^5 / (1 − 2X) = X^5 + 2X^6 + 4X^7 + ...
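The series T(X) = X^5 + 2X^6 + 4X^7 + ... can be cross-checked numerically by enumerating the first-return paths a → ... → a in the (2,1,3) state diagram with a depth-first search, counting them by output weight. A sketch under the state convention (s1, s2) with s1 the newest register bit; the helper names are mine. The search terminates because every loop in the diagram adds positive output weight.

```python
# Count first-return codewords of the (2,1,3) code by Hamming weight.

def transitions(state, bit, g0=(1, 1, 1), g1=(1, 0, 1)):
    w = (bit,) + state
    y = (sum(a * b for a, b in zip(w, g0)) % 2 +
         sum(a * b for a, b in zip(w, g1)) % 2)      # output weight of branch
    return (bit, state[0]), y                        # next state, weight

def weight_counts(max_w):
    counts = {}
    def dfs(state, w):
        if w > max_w:
            return
        if state == (0, 0):
            counts[w] = counts.get(w, 0) + 1         # first return to state a
            return
        for bit in (0, 1):
            nxt, dw = transitions(state, bit)
            dfs(nxt, w + dw)
    dfs((1, 0), 2)       # leave a with input 1: output 11, weight 2, state b
    return counts

print(weight_counts(7))  # {5: 1, 6: 2, 7: 4}, matching X^5 + 2X^6 + 4X^7
```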
(2,1,4) Code
T(X,Y) = X^6(Y + Y^3) + X^8(3Y^2 + 5Y^4 + 2Y^6) + ...
T(X) = 2X^6 + 10X^8 + 49X^10 + ...
Minimum Free Distance
For the (2,1,4) code, T(X) = 2X^6 + 10X^8 + 49X^10 + ...
The minimum free distance d_free is the minimum Hamming distance between all pairs of convolutional codewords. It is the minimum weight of an output sequence starting from the all-zero state and ending in the all-zero state:
d_free = min{d(y', y'') : y' ≠ y''} = min{w(y) : y ≠ 0}
For the (2,1,4) code, there are two codewords of weight 6, thus d_free = 6.
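Since d_free is the minimum weight of a nonzero path leaving and returning to the all-zero state, it can be computed with a shortest-path search over the state diagram, using branch output weights as edge costs. A sketch for rate 1/2 codes; the function name `d_free` is my own.

```python
import heapq

# Dijkstra over the encoder state diagram: the cheapest nonzero departure
# from and return to the all-zero state has total weight d_free.
def d_free(g0, g1):
    m = len(g0) - 1
    def step(state, bit):                # state: tuple of past inputs
        w = (bit,) + state
        y = (sum(a * b for a, b in zip(w, g0)) % 2 +
             sum(a * b for a, b in zip(w, g1)) % 2)
        return (bit,) + state[:-1], y
    zero = (0,) * m
    start, w0 = step(zero, 1)            # force a nonzero departure
    heap, best = [(w0, start)], {start: w0}
    while heap:
        w, s = heapq.heappop(heap)
        if s == zero:
            return w                     # first return to the all-zero state
        if w > best.get(s, w):
            continue                     # stale heap entry
        for bit in (0, 1):
            ns, dw = step(s, bit)
            if w + dw < best.get(ns, float("inf")):
                best[ns] = w + dw
                heapq.heappush(heap, (w + dw, ns))

print(d_free((1, 0, 1, 1), (1, 1, 0, 1)))  # 6 for the (2,1,4) code
print(d_free((1, 1, 1), (1, 0, 1)))        # 5 for the (2,1,3) code
```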
Maximum Free Distance Codes
Nonsystematic convolutional codes typically provide a higher minimum free distance than systematic codes with the same constraint length and rate.
Best Known Convolutional Codes