An Optimum Symbol-by-Symbol Decoding Rule for Linear Codes


Syracuse University SURFACE
Electrical Engineering and Computer Science Technical Reports
College of Engineering and Computer Science, 9-1975

An Optimum Symbol-by-Symbol Decoding Rule for Linear Codes
Carlos R. P. Hartmann, Syracuse University, chartman@syr.edu
Luther D. Rudolph, Syracuse University

Recommended Citation: Hartmann, Carlos R. P. and Rudolph, Luther D., "An Optimum Symbol-by-Symbol Decoding Rule for Linear Codes" (1975). Electrical Engineering and Computer Science Technical Reports, 8. https://surface.syr.edu/eecs_techreports/8

3-75

AN OPTIMUM SYMBOL-BY-SYMBOL DECODING RULE FOR LINEAR CODES

C. R. P. Hartmann
L. D. Rudolph

September 1975

This work was supported by National Science Foundation Grant ENG 75-07709 and Rome Air Development Center Contract F30602-72-C-0360.

Systems and Information Science
Syracuse University
Syracuse, New York 13210
(315) 423-2368

ABSTRACT

A decoding rule is presented which minimizes the probability of symbol error over a time-discrete memoryless channel for any linear error-correcting code when the code words are equiprobable. The complexity of this rule varies inversely with code rate, making the technique particularly attractive for high-rate codes. Examples are given for both block and convolutional codes.

ACKNOWLEDGEMENT

The authors wish to acknowledge the contribution to this work of Aileen McLoughlin, who provided a key to the discovery of the main result of this paper, and to thank Professors Kishan Mehrotra and Harry Schwarzlander for a number of helpful discussions on probability theory.

I. INTRODUCTION

In recent years there has been a growing interest in "soft decision" decoding schemes for error-correcting codes. The intent is to avoid, in part or in whole, the degradation of communication system performance which results when symbol-by-symbol "hard decision" quantization precedes decoding. The two best known techniques, which are optimum in the sense that they minimize the probability of word error for any time-discrete memoryless channel when the code words are equiprobable, are correlation decoding of block codes and Viterbi decoding of trellis codes [1]. Although in practice correlation and Viterbi decoding are usually used in conjunction with linear codes, neither technique makes any essential use of the linear property. Both techniques are exhaustive in that the received word is compared with every word in the code. For this reason, these techniques may be used only with codes having a small number of code words, i.e., low rate codes or middle-to-high rate codes with short block or constraint lengths.

In this paper we present a new decoding rule which is, in a way, the dual of correlation/Viterbi decoding in the case of linear codes. This rule is also optimum, but in the sense that it minimizes the probability of symbol error for any time-discrete memoryless channel when the code words are equiprobable, and it makes essential use of the linear property.

It is also exhaustive, but in the sense that every word in the dual code is used in the decoding process. This means that in practice this decoding rule can be used only with codes whose dual code has a small number of code words, i.e., high rate codes or low-to-middle rate codes with short block or constraint lengths.

In Section II, we present the decoding rule and prove that it is optimum. Although perhaps not immediately obvious from the concise treatment given there, the decoding rule is a form of threshold decoding [2]. This is easily seen from the examples in Section III, where the actual form of the decoding rule in the binary case is illustrated for both block and convolutional codes. Section IV contains a discussion of results.

II. THE DECODING RULE

For convenience, we present the decoding rule for linear block codes. The extension to convolutional codes is immediate and will be obvious from the examples in Section III.

Let $c = (c_0, c_1, \ldots, c_{n-1})$ denote any code word of an $(n,k)$ linear block code $C$ over $\mathrm{GF}(p)$ and $c'_j = (c'_{j0}, c'_{j1}, \ldots, c'_{j,n-1})$ the $j$th code word of the $(n, n-k)$ dual code $C'$. A code word $c$ is transmitted over a time-discrete memoryless channel with output alphabet $B$. The received word is denoted by $r = (r_0, r_1, \ldots, r_{n-1})$, $r_j \in B$. The decoding problem is: given $r$, compute an estimate $\hat{c}_m$ of the transmitted code symbol $c_m$ in such a way that the probability that $\hat{c}_m = c_m$ is maximized. Other notation: $\omega = e^{2\pi i/p}$ (primitive complex $p$th root of unity); $\delta_{ij} = 1$ if $i = j$ and $0$ otherwise; $\Pr(x)$ is the probability of $x$ and $\Pr(x \mid y)$ is the probability of $x$ given $y$. Unless otherwise stated, the elements of $\mathrm{GF}(p)$ are taken to be the integers $0, 1, \ldots, p-1$ and all arithmetic operations are performed in the field of complex numbers.

DECODING RULE: Set $\hat{c}_m = s$, where $s \in \mathrm{GF}(p)$ maximizes the expression

$$A_m(s) = \sum_{t=0}^{p-1} \omega^{-st} \sum_{j=1}^{p^{n-k}} \left[\, \prod_{\ell=0}^{n-1} \sum_{i=0}^{p-1} \omega^{i(c'_{j\ell} + t\delta_{m\ell})} \Pr(r_\ell \mid i) \right] \qquad (1)$$
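Before turning to the proof, it may help to see (1) operationally. The sketch below evaluates $A_m(s)$ by brute force for a small code; it assumes the dual code is supplied as an explicit list of words over $\mathrm{GF}(p)$ and the channel is described by a likelihood table. The function and variable names (`decode_symbol`, `pr`) are illustrative, not from the paper.

```python
# A minimal brute-force sketch of Decoding Rule (1).  Assumptions (not from
# the paper): dual_code lists all p^(n-k) words of the dual code as tuples of
# integers in {0, ..., p-1}, and pr[l][i] holds the channel likelihood
# Pr(r_l | i) for the received word actually observed.
import cmath

def decode_symbol(m, dual_code, pr, p):
    """Return the s in GF(p) that maximizes A_m(s) of equation (1)."""
    n = len(dual_code[0])
    w = cmath.exp(2j * cmath.pi / p)      # primitive complex p-th root of unity
    def A(s):
        total = 0j
        for t in range(p):
            for cj in dual_code:          # every word of the dual code
                term = w ** (-s * t)
                for l in range(n):
                    d = 1 if l == m else 0
                    term *= sum(w ** (i * (cj[l] + t * d)) * pr[l][i]
                                for i in range(p))
                total += term
        return total.real                 # A_m(s) is real up to roundoff
    return max(range(p), key=A)
```

For $p = 2$ the root of unity is $\omega = -1$, and the rule collapses to the binary form derived later in this section.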

Theorem: Decoding Rule (1) maximizes the probability that $\hat{c}_m = c_m$.

(Proof) We must show that choosing $s$ to maximize $A_m(s)$ is equivalent to maximizing the probability that $c_m = s$ given the received word $r$. We do this directly by showing that $A_m(s) = A \Pr(c_m = s \mid r)$, where $A$ is a positive constant which is independent of $s$. We first note that the expression in the brackets on the RHS of (1), which is in product-of-sums form, can be rewritten in the sum-of-products form

$$\sum_{v_0=0}^{p-1} \cdots \sum_{v_{n-1}=0}^{p-1} \prod_{\ell=0}^{n-1} \omega^{v_\ell (c'_{j\ell} + t\delta_{m\ell})} \Pr(r_\ell \mid v_\ell) \qquad (2)$$

where $v = (v_0, v_1, \ldots, v_{n-1})$ is any element of $V_n$, the vector space of all $n$-tuples over $\mathrm{GF}(p)$. Expressing (2) in vector notation and substituting in (1) yields

$$A_m(s) = \sum_{t=0}^{p-1} \omega^{-st} \sum_{j=1}^{p^{n-k}} \sum_{v \in V_n} \omega^{v \cdot c'_j}\, \omega^{t(v \cdot e_m)} \Pr(r \mid v) = \sum_{t=0}^{p-1} \omega^{-st} \sum_{v \in V_n} \omega^{t(v \cdot e_m)} \Pr(r \mid v) \sum_{j=1}^{p^{n-k}} \omega^{v \cdot c'_j}, \qquad (3)$$

where $e_m = (\delta_{m0}, \delta_{m1}, \ldots, \delta_{m,n-1})$ is the vector with 1 in the $m$th position and 0 elsewhere. By the orthogonality properties of group characters [3,4] we know that

$$\sum_{j=1}^{p^{n-k}} \omega^{v \cdot c'_j} = \begin{cases} p^{n-k} & \text{if } v \in C, \\ 0 & \text{otherwise.} \end{cases} \qquad (4)$$

Applying (4) to (3) gives

$$A_m(s) = p^{n-k} \sum_{t=0}^{p-1} \omega^{-st} \sum_{c \in C} \omega^{t(c \cdot e_m)} \Pr(r \mid c) = p^{n-k} \sum_{c \in C} \Pr(r \mid c) \sum_{t=0}^{p-1} \omega^{t(c \cdot e_m - s)}. \qquad (5)$$

But the sum on the far RHS of (5) vanishes unless $c \cdot e_m - s = 0$. Hence

$$A_m(s) = p^{n-k+1} \sum_{\substack{c \in C \\ c_m = s}} \Pr(r \mid c) = p^{n-k+1} \sum_{\substack{c \in C \\ c_m = s}} \Pr(c \mid r)\, \Pr(r)/\Pr(c). \qquad (6)$$

Finally, since the code words of $C$ are equiprobable, $\Pr(c) = p^{-k}$ and (6) becomes

$$A_m(s) = p^{n+1} \Pr(r) \sum_{\substack{c \in C \\ c_m = s}} \Pr(c \mid r). \qquad \text{Q.E.D.}$$

As one might expect, the decoding rule takes a simple form in the binary case: set $\hat{c}_m = 0$ if $A_m(0) > A_m(1)$ and $\hat{c}_m = 1$ otherwise. It is more convenient, however, to state the rule in terms of the likelihood ratio $\phi_m = \Pr(r_m \mid 1)/\Pr(r_m \mid 0)$. Substituting the RHS of (1) into the inequality $A_m(0) > A_m(1)$ yields

$$\sum_{t=0}^{1} \sum_{j=1}^{2^{n-k}} \prod_{\ell=0}^{n-1} \sum_{i=0}^{1} (-1)^{i(c'_{j\ell} + t\delta_{m\ell})} \Pr(r_\ell \mid i) \;>\; \sum_{t=0}^{1} (-1)^t \sum_{j=1}^{2^{n-k}} \prod_{\ell=0}^{n-1} \sum_{i=0}^{1} (-1)^{i(c'_{j\ell} + t\delta_{m\ell})} \Pr(r_\ell \mid i),$$

or

$$\sum_{j=1}^{2^{n-k}} \prod_{\ell=0}^{n-1} \left[ \Pr(r_\ell \mid 0) + (-1)^{c'_{j\ell} + \delta_{m\ell}} \Pr(r_\ell \mid 1) \right] > 0. \qquad (7)$$

Dividing both sides of (7) by $\prod_{\ell=0}^{n-1} \Pr(r_\ell \mid 0)$ and using the definition of the likelihood ratio, we have

$$\sum_{j=1}^{2^{n-k}} \prod_{\ell=0}^{n-1} \left[ 1 + \phi_\ell (-1)^{c'_{j\ell} + \delta_{m\ell}} \right] > 0. \qquad (8)$$

Then dividing both sides of (8) by the positive quantity $\prod_{\ell=0}^{n-1} (1 + \phi_\ell)$,

$$\sum_{j=1}^{2^{n-k}} \prod_{\ell=0}^{n-1} \frac{1 + \phi_\ell (-1)^{c'_{j\ell} + \delta_{m\ell}}}{1 + \phi_\ell} > 0.$$

Finally, using the identity

$$\frac{1 + \phi_\ell (-1)^{c'_{j\ell} + \delta_{m\ell}}}{1 + \phi_\ell} = \left( \frac{1 - \phi_\ell}{1 + \phi_\ell} \right)^{c'_{j\ell} \oplus \delta_{m\ell}},$$

where $\oplus$ denotes modulo 2 addition, we obtain the

BINARY DECODING RULE: Set $\hat{c}_m = 0$ if

$$\sum_{j=1}^{2^{n-k}} \prod_{\ell=0}^{n-1} \left( \frac{1 - \phi_\ell}{1 + \phi_\ell} \right)^{c'_{j\ell} \oplus \delta_{m\ell}} > 0 \qquad (9)$$

and $\hat{c}_m = 1$ otherwise.

We remark that up to this point we have ignored the question of how one retrieves the decoded information symbols from the code word estimate $\hat{c}$. This could be a problem because, when a symbol-by-symbol decoding rule is used, $\hat{c}$ is not in general a code word. In the case of block codes, we could insist that the code be systematic without loss of generality, but there might be some objection to this restriction in the case of convolutional codes. As it turns out, this is not a problem since our decoding rule is easily modified to produce estimates of the information symbols directly if need be. Simply note that every information symbol $a_m$ can be expressed as a linear combination, over $\mathrm{GF}(p)$, of code word symbols $c_t$, i.e., $a_m = \sum_t b_{mt} c_t$, $b_{mt} \in \mathrm{GF}(p)$, and that the proof of the theorem goes through intact if we substitute $\sum_t b_{mt} c_t$ for $c_m$ and $b_{mt}$ for $\delta_{mt}$ in (1).
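Rule (9) transcribes almost directly into software. The sketch below follows the same illustrative conventions as before: `phi[l]` is the likelihood ratio $\phi_\ell$ for each received symbol and `dual_code` lists every word of the dual code; the names are assumptions, not from the paper.

```python
# A sketch of Binary Decoding Rule (9).  phi[l] = Pr(r_l | 1)/Pr(r_l | 0) is
# the likelihood ratio of received symbol r_l; dual_code lists every word of
# the (n, n-k) dual code as 0/1 tuples.  Names are illustrative.
def decode_bit(m, dual_code, phi):
    rho = [(1 - f) / (1 + f) for f in phi]   # soft-decision value per position
    total = 0.0
    for cj in dual_code:                     # all 2^(n-k) dual code words
        term = 1.0
        for l, c in enumerate(cj):
            if c ^ (1 if l == m else 0):     # exponent c'_jl XOR delta_ml
                term *= rho[l]
        total += term
    return 0 if total > 0 else 1
```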

III. EXAMPLES

(a) (7,4) Hamming code

We will illustrate the decoding rule for the received symbol $r_0$. Since the (7,4) code is cyclic, $r_1, \ldots, r_6$ are decoded simply by cyclically permuting the received word $r$ in the buffer store. Binary Decoding Rule (9) in this case becomes

$$\hat{c}_0 = 0 \quad \text{iff} \quad \sum_{j=1}^{8} \prod_{\ell=0}^{6} \left( \frac{1 - \phi_\ell}{1 + \phi_\ell} \right)^{c'_{j\ell} \oplus \delta_{0\ell}} > 0. \qquad (10)$$

The parity check matrix $H$ of the (7,4) code and its row space $C'$ are shown below.

        | 1 1 1 0 1 0 0 |   (a)
    H = | 0 1 1 1 0 1 0 |   (b)
        | 0 0 1 1 1 0 1 |   (c)

          c_0 c_1 c_2 c_3 c_4 c_5 c_6
           0   0   0   0   0   0   0
           1   1   1   0   1   0   0    (a)
    C':    0   1   1   1   0   1   0    (b)
           1   0   0   1   1   1   0    (a⊕b)
           0   0   1   1   1   0   1    (c)
           1   1   0   1   0   0   1    (a⊕c)
           0   1   0   0   1   1   1    (b⊕c)
           1   0   1   0   0   1   1    (a⊕b⊕c)     (11)

Let $\rho_\ell = (1 - \phi_\ell)/(1 + \phi_\ell)$. Then substituting (11) into (10) gives

$$\hat{c}_0 = 0 \quad \text{iff} \quad \rho_0 + \rho_1\rho_2\rho_4 + \rho_2\rho_5\rho_6 + \rho_1\rho_3\rho_6 + \rho_3\rho_4\rho_5 + \rho_0\rho_1\rho_2\rho_3\rho_5 + \rho_0\rho_2\rho_3\rho_4\rho_6 + \rho_0\rho_1\rho_4\rho_5\rho_6 > 0. \qquad (12)$$
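As a numerical check of (12), suppose the all-zero code word is sent over a binary symmetric channel with crossover probability $q$ (an illustrative channel choice, not one used in the paper) and $r_0$ is received in error. On a BSC, $\rho_\ell = 1 - 2q$ when $r_\ell = 0$ and $-(1 - 2q)$ when $r_\ell = 1$:

```python
# Worked check of (12): (7,4) Hamming code, all-zero word sent over a BSC
# with crossover probability q (illustrative choice), error at position 0.
import math

DUAL = [(0,0,0,0,0,0,0), (1,1,1,0,1,0,0), (0,1,1,1,0,1,0), (1,0,0,1,1,1,0),
        (0,0,1,1,1,0,1), (1,1,0,1,0,0,1), (0,1,0,0,1,1,1), (1,0,1,0,0,1,1)]

q = 0.1
r = (1, 0, 0, 0, 0, 0, 0)                 # received word: r_0 flipped
rho = [1 - 2*q if bit == 0 else -(1 - 2*q) for bit in r]

total = sum(math.prod(rho[l] for l in range(7) if cj[l] ^ (l == 0))
            for cj in DUAL)
print(0 if total > 0 else 1)              # prints 0: the error at r_0 is corrected
```

With $q = 0.1$ the sum in (12) works out to $-0.8 + 4(0.512) - 3(0.32768) \approx 0.265 > 0$, so $\hat{c}_0 = 0$ even though the hard-decision value of $r_0$ is 1.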

The decoder configuration corresponding to (12) is shown in Figure 1. The knowledgeable reader will immediately recognize the similarity between the decoder of Figure 1 and a one-step majority decoder using nonorthogonal parity checks [5]. And in fact, if the "soft decision" function $(1 - \phi(x))/(1 + \phi(x))$ were replaced by the "hard decision" function $f(x) = -1$ if $x > 1/2$ and $+1$ otherwise, and the last three parity checks in the decoder were deleted, the resulting circuit would be mathematically equivalent to a conventional one-step majority decoder. Parity checks in the circuit of Figure 1 would be computed by taking products of $+1$'s and $-1$'s, rather than by taking modulo 2 sums of $0$'s and $1$'s as would be the case in a conventional digital decoding circuit.

(b) (4,3,3) convolutional code

We now illustrate the decoding rule for the received symbol $r_0$ using an $(n_0, k_0, m) = (4,3,3)$ convolutional code (from Peterson and Weldon [6], page 395). Binary Decoding Rule (9) in this case becomes

$$\hat{c}_0 = 0 \quad \text{iff} \quad \sum_{j=1}^{\infty} \prod_{\ell=0}^{\infty} \left( \frac{1 - \phi_\ell}{1 + \phi_\ell} \right)^{c'_{j\ell} \oplus \delta_{0\ell}} > 0. \qquad (13)$$

Of course, there are only a finite number of nonzero terms in (13), the number depending upon the length of the transmitted code sequence. The initial portions of the parity check matrix $H$ of the (4,3,3) code and its row space $C'$ are shown below.

        | 1 1 1 1 0 0 0 0 0 0 0 0 ... |   (a)
    H = | 1 0 1 0 1 1 1 1 0 0 0 0 ... |   (b)
        | 1 1 0 0 1 0 1 0 1 1 1 1 ... |   (c)
        | ...                         |

           0 0 0 0 0 0 0 0 0 0 0 0 ...
           1 1 1 1 0 0 0 0 0 0 0 0 ...   (a)
           1 0 1 0 1 1 1 1 0 0 0 0 ...   (b)
    C':    0 1 0 1 1 1 1 1 0 0 0 0 ...   (a⊕b)
           1 1 0 0 1 0 1 0 1 1 1 1 ...   (c)
           0 0 1 1 1 0 1 0 1 1 1 1 ...   (a⊕c)
           0 1 1 0 0 1 0 1 1 1 1 1 ...   (b⊕c)
           1 0 0 1 0 1 0 1 1 1 1 1 ...   (a⊕b⊕c)     (14)

As before, let $\rho_\ell = (1 - \phi_\ell)/(1 + \phi_\ell)$. Then substituting (14) into (13) gives

$$\hat{c}_0 = 0 \quad \text{iff} \quad \rho_0 + \rho_1\rho_2\rho_3 + \rho_2\rho_4\rho_5\rho_6\rho_7 + \rho_0\rho_1\rho_3\rho_4\rho_5\rho_6\rho_7 + \cdots > 0. \qquad (15)$$

The decoding diagram corresponding to (15) is shown in Figure 2. This takes the form of a trellis diagram for the (4,1,3) dual code $C'$ with the $c'_{j0}$ positions in the branch labels complemented. (In general, to decode $r_m$ the $c'_{jm}$ positions would be complemented.) Note that the all-zero state acts as the accumulator for the terms of (15).
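A software analogue of (15) can be sketched under a truncation assumption: only the dual code words listed in (14), cut off after twelve positions, contribute, which is exact only for a correspondingly short transmitted sequence. A longer sequence would bring in further shifted words of $C'$. The names and the truncation are illustrative, not from the paper.

```python
# Truncated evaluation of (15) using only the initial dual code words of (14).
# Assumption (not from the paper): contributions beyond twelve positions are
# ignored, which is exact only for a correspondingly short code sequence.
import math

DUAL_INIT = [(0,0,0,0,0,0,0,0,0,0,0,0),
             (1,1,1,1,0,0,0,0,0,0,0,0),   # (a)
             (1,0,1,0,1,1,1,1,0,0,0,0),   # (b)
             (0,1,0,1,1,1,1,1,0,0,0,0),   # (a+b)
             (1,1,0,0,1,0,1,0,1,1,1,1),   # (c)
             (0,0,1,1,1,0,1,0,1,1,1,1),   # (a+c)
             (0,1,1,0,0,1,0,1,1,1,1,1),   # (b+c)
             (1,0,0,1,0,1,0,1,1,1,1,1)]   # (a+b+c)

def decode_r0(rho):
    """rho[l] = (1 - phi_l)/(1 + phi_l) for the first twelve received symbols."""
    total = sum(math.prod(rho[l] for l in range(12) if cj[l] ^ (l == 0))
                for cj in DUAL_INIT)
    return 0 if total > 0 else 1
```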

Since a different storage unit must be used for each symbol to be decoded, the amount of storage for this type of decoder grows linearly with the length of the transmitted code sequence. This is also true of a Viterbi decoder, which must keep track of its path-elimination decisions. Of course, a Viterbi decoder for the (4,3,3) code would be considerably more complex, since the trellis would have 64 states instead of the 4 states of the decoder in Figure 2.

IV. DISCUSSION

We have presented a symbol-by-symbol decoding rule for linear codes which is optimum in that it minimizes the probability of symbol error on a time-discrete memoryless channel when the code words are equiprobable. A comment or two on the relationship of this technique to correlation/Viterbi decoding, which is optimum in that it minimizes the probability of word error on the same channels, would seem to be in order.

First, although the performance of correlation/Viterbi decoding is inferior to the performance of the decoding rule presented here on a symbol-error basis, and vice versa on a word-error basis, some preliminary simulation results for the Gaussian channel suggest that the two approaches are very close in performance on either basis. Symbol error rate is generally considered to be a better measure of performance than word error rate, especially in the case of convolutional codes, and this would seem to give a slight edge to our decoding rule. On the other hand, correlation/Viterbi decoding is applicable to nonlinear as well as linear codes, which might be of some advantage. Our present feeling is that for all practical purposes the two approaches give essentially the same performance.

When we turn to the question of complexity, there is of course a radical difference between the two decoding techniques. Correlation/Viterbi decoding is only practical for low rate or short codes, whereas our decoding rule is only practical for

high rate or short codes. We are fairly well convinced, and the reader may be able to convince himself by studying the examples in Section III, that the complexity of our decoding rule for an $(n,k)$ linear code is comparable to the complexity of a correlation/Viterbi decoder for the $(n, n-k)$ dual code. This is fairly easy to see in the case of linear block codes and not so obvious in the case of convolutional codes, since there are so many options and programming tricks to be considered. The authors, however, are firm believers in the coding-complexity Folk Theorem: "The complexity of any operation involving a linear code is comparable to the complexity of essentially that same operation involving the dual code." (In fact, it was the unsatisfying lack of a decoding method for high rate linear codes that was "dual" to correlation/Viterbi decoding that motivated the research reported here.) If our intuition is correct, then our scheme and correlation/Viterbi decoding should be of about the same complexity for rate 1/2 codes.

Finally, we remark that the decoding rule presented here, where all words of the dual code are used in the decoding process and the "soft decision" function is the finite Fourier transform $\sum_{i=0}^{p-1} \omega^{ij} \Pr(r_\ell \mid i)$, is a very important but very special limiting case of a general approach to soft decision decoding which we discuss in a companion paper [7].

REFERENCES

1. A. J. Viterbi, "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm," IEEE Trans. Inform. Theory, vol. IT-13, pp. 260-269, Apr. 1967.

2. J. L. Massey, Threshold Decoding. Cambridge, Mass.: M.I.T. Press, 1963.

3. W. W. Peterson, Error-Correcting Codes, pp. 207-212. Cambridge, Mass.: M.I.T. Press, 1961.

4. L. H. Loomis, An Introduction to Abstract Harmonic Analysis. New York: Van Nostrand, 1953.

5. L. D. Rudolph, "A class of majority logic decodable codes," IEEE Trans. Inform. Theory (Corresp.), vol. IT-13, pp. 305-307, Apr. 1967.

6. W. W. Peterson and E. J. Weldon, Jr., Error-Correcting Codes, 2nd ed. Cambridge, Mass.: M.I.T. Press, 1972.

7. L. D. Rudolph and C. R. P. Hartmann, "Algebraic analog decoding," to be submitted to IEEE Trans. Inform. Theory.

[Figure 1. Decoder for the (7,4) code.]

[Figure 2. Decoder for the (4,3,3) code.]