IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 45, NO. 7, NOVEMBER 1999

Sort-and-Match Algorithm for Soft-Decision Decoding

Ilya Dumer, Member, IEEE

(Manuscript received August 10, 1998; revised May 24, 1999. This work was supported by the NSF under Grant NCR-9703844. The material in this paper was presented in part at the 34th Annual Allerton Conference on Communication, Control, and Computing. The author is with the College of Engineering, University of California, Riverside, CA 92521 USA. Communicated by A. M. Barg, Associate Editor for Coding Theory. Publisher Item Identifier S 0018-9448(99)08279-6.)

Abstract: Let a $q$-ary linear $(n,k)$-code $C$ be used over a memoryless channel. We design a decoding algorithm $\Psi_N$ that splits the received block into two halves in $n$ different ways. First, about $\sqrt N$ error patterns are found on either half. Then the left- and right-hand lists are sorted out and matched to form codewords. Finally, the most probable codeword is chosen among at most $n\sqrt N$ codewords obtained in all $n$ trials. The algorithm can be applied to any linear code $C$ and has complexity order of $n^3\sqrt N$. For any $N \ge q^{n-k}$, the decoding error probability $P_N$ exceeds at most $1 + q^{n-k}/N$ times the probability $P_\Psi(C)$ of maximum-likelihood decoding. For code rates $R \ge 1/2$, the complexity order $q^{(n-k)/2}$ grows as the square root of the general trellis complexity $q^{\min\{n-k,\,k\}}$. When used on quantized additive white Gaussian noise (AWGN) channels, algorithm $\Psi_N$ can provide maximum-likelihood decoding for most binary linear codes even when $N$ has exponential order of $q^{n-k}$.

Index Terms: Complexity, maximum-likelihood decoding, sorting, splitting, syndromes, trellis.

I. INTRODUCTION

THIS paper is aimed at designing efficient decoding algorithms that reduce the complexity of maximum-likelihood (ML) decoding for general linear codes used on memoryless channels. Let $X$ be a $q$-ary input alphabet and $Y$ be an output alphabet, which can be infinite. We consider a memoryless channel
$$\pi(y \mid x) = \prod_{i=1}^{n} \pi(y_i \mid x_i), \qquad x \in X^n,\ y \in Y^n.$$
Here $\pi(y_i \mid x_i)$ is the transition probability (density) of receiving an output $y_i \in Y$ for a given $x_i \in X$. We then use a $q$-ary linear code $C$ that consists of $q^k$ codewords of length $n$, and suppose that all messages are equiprobable. For any output $y$, ML decoding retrieves a codeword with the maximum a posteriori probability among all codewords, and provides the minimum decoding error probability among all decoding algorithms. We denote this decoder by $\Psi$ and use its error probability $P_\Psi(C)$ below as a benchmark for all other decoding algorithms applied to a code $C$.

In [2] and [11], the upper bound $q^{\min\{n-k,\,k\}}$ on the complexity of ML decoding has been obtained by trellis design. In this paper, we wish to reduce the complexity of searching through a codebook or its trellis without deterioration in decoding performance. We shall obtain a better exponential complexity $q^{(n-k)/2}$ that grows as the square root of trellis complexity for any code rate $R \ge 1/2$. For these rates, our current bound also surpasses other bounds known to date. In particular, it falls below the complexity obtained in [4] and below the lower bounds on trellis complexity [10], [12], provided that no long binary codes exceed the asymptotic Gilbert-Varshamov bound.

Given any output $y$, we first consider the list $T_N(y)$ of $N$ input vectors that have the highest a posteriori probabilities among all $q^n$ inputs. Following [7], we then study a decoding algorithm $\Psi_N$ that finds the most probable codeword whenever this list includes at least one codeword. Otherwise, $\Psi_N$ may fail to find it. Due to this possible failure, the decoding error probability $P_N(C)$ of the algorithm $\Psi_N$ can exceed the probability $P_\Psi(C)$ of ML decoding.
However, it is proved in [4] that for $N \ge q^{n-k}$ the decoding performance can deteriorate only by a negligible margin. Namely, the inequality [4]
$$P_N(C) \le \left(1 + q^{n-k}/N\right) P_\Psi(C) \qquad (1)$$
holds for any (linear or nonlinear) code used over the so-called mapping channels. In particular, these include discrete additive channels and an additive white Gaussian noise (AWGN) channel with $q$-PSK modulation. More generally, mapping channels arise when the output alphabet $Y$ can be split into disjoint subsets of size $q$ such that any $q$-ary subchannel is a conventional discrete symmetric channel ([8, p. 92]). A slightly weaker inequality holds for a bigger class of continuous symmetric channels [4], for which the finite subsets can have varying size. Another statement from [5] shows that algorithm $\Psi_N$ can provide ML decoding even if $N$ has exponential order of $q^{n-k}$ (we say that $N(n)$ has exponential order $q^{\mu n}$, and write $N \overset{\mathrm{exp}}{=} q^{\mu n}$, if $(\log_q N)/n \to \mu$ as $n \to \infty$). In particular, this happens for most binary linear codes used on AWGN channels whose output is quantized to any number of fixed levels; below we call these channels quantized.

Algorithm $\Psi_N$ allows us to consider possible error patterns in the same combinatorial setting. Even for discrete additive channels we can turn from the coset leaders used in ML decoding to the $N$ vectors closest to the received output $y$. These do not depend on a specific code $C$ and form a sphere of size $N$ about $y$ (including the boundary points).
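For concreteness, the following minimal sketch spells out the ML benchmark $\Psi$ on a toy binary symmetric channel; it is an illustration, not the paper's algorithm, and the names `ml_decode` and `log_p` are assumptions of this sketch. Its cost is linear in the codebook size $q^k$, which is exactly the search that $\Psi_N$ is designed to shortcut.

```python
import math

def ml_decode(codebook, y, log_p):
    """Brute-force ML decoding over a memoryless channel: return the
    codeword c maximizing sum_i log pi(y_i | c_i)."""
    best, best_metric = None, -math.inf
    for c in codebook:
        metric = sum(log_p[b][a] for a, b in zip(c, y))
        if metric > best_metric:
            best, best_metric = c, metric
    return best

# Toy example: the binary (3,1) repetition code on a BSC with crossover 0.1.
p = 0.1
log_p = {0: {0: math.log(1 - p), 1: math.log(p)},    # log pi(y=0 | a)
         1: {0: math.log(p), 1: math.log(1 - p)}}    # log pi(y=1 | a)
codebook = [(0, 0, 0), (1, 1, 1)]
print(ml_decode(codebook, (0, 1, 0), log_p))         # -> (0, 0, 0)
```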

However, a straightforward search for codewords needs to test all $N$ closest inputs and has a complexity order of $N$, where $N \ge q^{n-k}$. In this case, we cannot reduce the general trellis complexity $q^{\min\{n-k,\,k\}}$. Therefore, in this paper we wish to speed up our search by leaving out as many inputs as possible. Our current algorithm can be applied to any linear code. We split the received vector into two halves in $n$ different ways. Given $N$, we find only about $\sqrt N$ vectors on either half. Then we sort the two sets of recorded vectors and match their syndromes. This allows us to reveal those left and right halves that jointly constitute a codeword. This sort-and-match procedure has the same complexity order as the size of the recorded set. Finally, we choose the most probable codeword among at most $n\sqrt N$ candidates. The decoding performance is summarized in the following theorems.

Theorem 1: Any linear $q$-ary $(n,k)$-code used on a mapping channel can be decoded with decoding error probability $P_N \le (1 + q^{n-k}/N)\,P_\Psi(C)$ and complexity order of $n^3\sqrt N$, for any $N \ge q^{n-k}$.

Theorem 2: ML decoding of most linear binary $(n,k)$-codes of rate $R \ge 1/2$ and growing length $n$ used on a quantized AWGN channel can be performed with exponential complexity order of $2^{(n-k)/2}$.

The material of this paper is organized as follows. In Section II we revise the hard-decision algorithm of [6] that gives complexity order $q^{(n-k)/2}$. This revised version is more suited for the general soft-decision decoding that is discussed in the following sections. In Section III, we introduce a combinatorial setting for the algorithms used on a general memoryless channel. Then in Section IV we describe a general soft-decision algorithm and study its complexity. In particular, we show how the lists of most probable vectors can be determined with complexity order of $n^2\sqrt N$. We also give an example of this sort-and-match procedure for a binary BCH code. Finally, in Section V we study the decoding performance and show that we obtained an algorithm $\Psi_N$ indeed.

II. BACKGROUND: MINIMUM-DISTANCE DECODING

Let the a posteriori probability $\Pr\{x \mid y\}$ depend only on the Hamming distance $d(x, y)$. Given a code $C$, minimum-distance (MD) decoding finds the codeword closest to the received vector $y$. Also, the $N$ most probable vectors belong to the Hamming sphere $B_r(y)$ with center $y$ and radius $r$ (note that $r$ is the Gilbert distance of a given code $C_q(n,k)$), where
$$N = \sum_{j=0}^{r} \binom{n}{j} (q-1)^j. \qquad (2)$$
Now consider an algorithm $\Psi_N$ that finds the closest codeword (if any) in the sphere $B_r(y)$. According to (1), taking $N = q^{n-k}$, $\Psi_N$ has at most twice the decoding error probability of MD decoding. This important fact was first proved by Evseev [7]. Another important result of Blinovskii [3] states that virtually all long linear codes have covering radius that does not exceed the radius $r$ taken for $N = q^{n-k}$. Then MD decoding is achieved if we apply the algorithm $\Psi_N$ with $N \overset{\mathrm{exp}}{=} q^{n-k}$. Therefore, MD decoding needs to test only the order of $q^{n-k}$ inputs closest to the received vector $y$.

We first consider the algorithm $\Psi_N$ in this hard-decision setting. Define a sliding cyclic window $s(i) = \{i, i+1, \dots, i+n/2-1\}$ as a subset of $n/2$ cyclically consecutive positions beginning with any position $i = 1, \dots, n$. Let the received vector $y$ be corrupted by $t$ errors. We start with the following trivial lemma.

Lemma 1: There exists an $i$ such that the sliding window $s(i)$ is corrupted by $\lceil t/2 \rceil$ errors.

Proof: Given $i$, let $t_i$ be the number of errors on a block $s(i)$. Obviously, the mean number of errors is $\frac{1}{n}\sum_i t_i = t/2$. Also, $|t_{i+1} - t_i| \le 1$. Therefore, for some $i$, our stepwise function $t_i$ takes the values $\lceil t/2 \rceil$ and $\lfloor t/2 \rfloor$, which are the two closest ones to the mean $t/2$.
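Lemma 1 can be checked exhaustively for small lengths. The sketch below is a minimal verification, with the assumed helper name `half_window_errors`: for every binary error pattern of length $n = 8$, some cyclic window of $n/2$ positions carries exactly $\lceil t/2 \rceil$ of the $t$ errors.

```python
import itertools

def half_window_errors(err, n):
    """Error counts t_i over all n cyclic windows s(i) of length n/2."""
    return [sum(err[(i + j) % n] for j in range(n // 2)) for i in range(n)]

# Exhaustive check of Lemma 1 for n = 8.
n = 8
for err in itertools.product((0, 1), repeat=n):
    t = sum(err)
    counts = half_window_errors(err, n)
    assert -(-t // 2) in counts   # ceil(t/2) is attained by some window
```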
Let $n$ and $t$ be even numbers. Consider any linear code $C$ with a parity-check matrix $H$. We take $s = s(i)$ and use a sliding window of length $n/2$ for any $i$. The remaining $n/2$ positions form a subset $\bar s$. Any partition $(s, \bar s)$ splits the received vector $y$ and the matrix $H$ into two parts. These are called $y_s$, $y_{\bar s}$, $H_s$, and $H_{\bar s}$. According to Lemma 1, there exists a partition in which $y_s$ is corrupted by $t/2$ errors, and $y_{\bar s}$ is corrupted by $t/2$ errors. Given at most $t/2$ errors, we consider all subvectors of length $n/2$ located within a distance $t/2$ from the subvector $y_s$. These form the sphere $B_s$ in $\mathbb F_q^{n/2}$. Similarly, we consider all the subvectors that belong to the sphere $B_{\bar s}$ about $y_{\bar s}$. Then we test the subvectors as follows.

Algorithm $\Psi_N$: Split in halves and match the syndromes.

1. Splitting. Take any $i = 1, \dots, n$ and split the positions into two halves $s = s(i)$ and $\bar s$.

2. Recording the most probable subvectors. For each $u \in B_s$ and $v \in B_{\bar s}$, calculate their syndromes $H_s u^{\mathsf T}$ and $H_{\bar s} v^{\mathsf T}$. Then form the records $\hat u = (u, 0, H_s u^{\mathsf T})$ and $\hat v = (v, 1, H_{\bar s} v^{\mathsf T})$ of length $n/2 + 1 + (n-k)$. Here the rightmost $n-k$ digits form the syndromes and are regarded as the most significant digits.

The next symbol is called the subset indicator. The leftmost $n/2$ digits represent the subvectors.

3. Sorting the records and matching the syndromes. Consider the joint set $\hat B = \{\hat u\} \cup \{\hat v\}$. Sort the elements of $\hat B$ as natural numbers by their rightmost $n-k+1$ digits. Find the matching pairs $(\hat u, \hat v)$ that agree in the $n-k$ rightmost digits and disagree in the indicator symbol. Each matching pair gives two equal syndromes $H_s u^{\mathsf T} = H_{\bar s} v^{\mathsf T}$ and yields the codeword $c = (u, v)$.

4. Choosing the closest codeword. Run through the matching pairs in $\hat B$ and choose the codeword closest to $y$. Then repeat Steps 1-3 and choose the closest codeword by running all $n$ trials $i = 1, \dots, n$.

Remark: The above procedure is readily modified for odd $n$ and for $q > 2$. For odd $n$, we add one dummy position with symbol $0$ to all vectors and then decode the extended vectors of even length $n+1$. For $q > 2$, we change only the records $\hat v$: here we store $\hat v = (v, 1, -H_{\bar s} v^{\mathsf T})$, where the negation is done in the field $\mathbb F_q$. As a result, any matching pair satisfies the condition $H_s u^{\mathsf T} = -H_{\bar s} v^{\mathsf T}$ and gives the codeword $c = (u, v)$, since $H c^{\mathsf T} = H_s u^{\mathsf T} + H_{\bar s} v^{\mathsf T} = 0$.

Lemma 2: The above algorithm finds the closest codeword in the sphere $B_t(y)$.

Proof: Let the closest codeword $c$ be at distance $t' \le t$ from $y$. In other words, $y$ is corrupted by $t'$ errors with respect to $c$. Then there exists an $i$ that splits the received block into two subvectors $y_s$ and $y_{\bar s}$, corrupted by $\lceil t'/2 \rceil$ and $\lfloor t'/2 \rfloor$ errors, respectively. Then $c_s$ is included in the records of $B_s$, $c_{\bar s}$ is included in the records of $B_{\bar s}$, and $c$ is retrieved as the closest codeword.

We now turn to decoding complexity. The size of the sets $B_s$ and $B_{\bar s}$ is upper-bounded by $\sum_{j=0}^{t/2} \binom{n/2}{j}(q-1)^j$. This is readily verified to give an order of $\sqrt N$ due to (2). Then we calculate the syndromes and form the sets of records with complexity $O(n^2\sqrt N)$. In the next step, a sorting procedure is performed on the set $\hat B$, whose size has order $\sqrt N$. It is well known [1] that $M$ natural numbers can be sorted out by a parallel circuit ("network") of size $O(M \log M)$ and depth $O(\log M)$. We can also obtain the same order of algorithmic complexity on a Turing machine with two tapes. In sequential implementation [9] (say, in software design), sorting procedures have time complexity $O(M \log M)$ and memory complexity $O(M)$.

In the fourth step, we run through $\hat B$ and find the codeword closest to $y$. First, we find each matching syndrome, that is, one that coincides on at least two records. Note that all subvectors $v$ with indicator $1$ follow the subvectors $u$ with indicator $0$, due to the higher value of the indicator. Given a matching syndrome, we then find the subvector $u$ closest to $y_s$ and the subvector $v$ closest to $y_{\bar s}$. These two give the codeword closest to $y$ for the given syndrome. One codeword is left after all $n$ trials. The overall complexity obtained in $n$ trials is $O(n^3\sqrt N)$.

Note that we get a similar order whenever we correct the $N$ lightest error patterns. In this case, however, we achieve a decoding error probability that tends to that of MD decoding if $N$ grows slightly faster than $q^{n-k}$. By using $N \overset{\mathrm{exp}}{=} q^{n-k}$, we can decode within the covering radius. Then for most long linear codes, we achieve MD decoding with complexity order $n^3 q^{(n-k)/2}$. These results are summarized in the following theorem.

Theorem 3:
1. MD decoding can be performed for most linear $q$-ary $(n,k)$-codes with a complexity order of $n^3 q^{(n-k)/2}$.
2. Given $N \ge q^{n-k}$, near-MD decoding can be performed for any linear $(n,k)$-code with a complexity order of $n^3\sqrt N$ and decoding error probability $P_N \le (1 + q^{n-k}/N)\,P_\Psi(C)$.
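The following single-trial sketch illustrates the sort-and-match step for a binary code. It keys the two half-lists by syndrome with a hash table instead of the sorting network, uses one fixed split rather than all $n$ cyclic splits, and omits the record packing of Steps 2 and 3; the helper names and the choice of the $(8,4)$ extended Hamming code are assumptions of this sketch, not taken from the paper.

```python
import itertools
import numpy as np

def sort_and_match(H, y, t):
    """One trial for a binary code: enumerate error patterns of weight
    <= t/2 on each half of y, key both half-lists by syndrome, and match
    equal syndromes to form codewords near y."""
    n = H.shape[1]
    half = n // 2

    def half_list(cols):
        table = {}                       # syndrome -> candidate subvectors
        for w in range(t // 2 + 1):
            for flips in itertools.combinations(cols, w):
                v = y.copy()
                v[list(flips)] ^= 1      # flip w positions on this half
                s = tuple(H[:, cols] @ v[cols] % 2)
                table.setdefault(s, []).append(v[cols])
        return table

    L = half_list(list(range(half)))
    R = half_list(list(range(half, n)))
    best, best_dist = None, n + 1
    for s, lefts in L.items():
        for u in lefts:
            for v in R.get(s, []):       # H_s u = H_sbar v, so H (u,v) = 0
                c = np.concatenate([u, v])
                d = int(np.sum(c != y))
                if d < best_dist:
                    best, best_dist = c, d
    return best

H = np.array([[0, 0, 0, 1, 1, 1, 1, 0],  # a parity-check matrix of the
              [0, 1, 1, 0, 0, 1, 1, 0],  # (8,4) extended Hamming code
              [1, 0, 1, 0, 1, 0, 1, 0],
              [1, 1, 1, 1, 1, 1, 1, 1]])
y = np.zeros(8, dtype=int)
y[2] = 1                                 # all-zero codeword plus one error
print(sort_and_match(H, y, t=2))         # -> [0 0 0 0 0 0 0 0]
```

Sorting the joint record set, as in Step 3, achieves the same matching with the $O(M \log M)$ guarantees discussed above; the hash table is only a shortcut to keep the sketch short.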
III. LIGHTEST SUBSETS FOR GENERAL CHANNELS

Now consider a general memoryless channel. Given an output $y$, define the soft-decision weight of any input $q$-ary symbol $a$ in any position $i$:
$$w_i(a) = \log \frac{\max_{b \in X} \pi(y_i \mid b)}{\pi(y_i \mid a)}. \qquad (3)$$
We then consider the $q \times n$ matrix $W = (w_i(a))$ defined over all input symbols (rows) and all positions (columns). Given a vector $x = (x_1, \dots, x_n)$, define its soft-decision weight
$$w(x) = \sum_{i=1}^{n} w_i(x_i). \qquad (4)$$
So, for any output $y$, the a posteriori probabilities are fully defined by $W$: the most probable vectors are the lightest ones. We can also consider all $q^n$ vectors ordered according to increasing weights
$$w(x^{(1)}) \le w(x^{(2)}) \le \cdots \le w(x^{(q^n)}). \qquad (5)$$
To obtain a unique representation (5), any two entries $x^{(m)}$ and $x^{(m+1)}$ with the same weight are ordered lexicographically as $n$-digital $q$-ary numbers. Then $T_N(y) = \{x^{(1)}, \dots, x^{(N)}\}$ is the list of the $N$ lightest vectors. Note also that in MD decoding, with a suitable choice of the logarithm base,
$$w_i(a) = \begin{cases} 0, & \text{if } a = y_i \\ 1, & \text{if } a \ne y_i. \end{cases}$$
Then $w(x)$ is the Hamming distance between $x$ and $y$.

Our problem is to find the codeword $c$ that has the minimum weight $w(c)$. As above, we shall seek this codeword among the $N$ lightest inputs. However, now this set $T_N(y)$ depends on a specific matrix $W$, whereas in MD decoding we use the sphere $B_r(y)$ for any $y$. In general, $T_N(y)$ can include vectors whose Hamming weights substantially exceed $r$. Yet, the algorithm presented below shows that the complexity depends on the number of plausible candidates rather than on their Hamming weights.
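As a concrete illustration of (3) and (4), the sketch below computes the weight matrix $W$ for binary antipodal signaling over an AWGN channel. The normalization, giving zero weight to the most probable symbol in each position, matches the convention above; the names `weight_matrix` and `weight` and the modulation choice are assumptions of this sketch.

```python
import numpy as np

def weight_matrix(y, sigma):
    """Soft-decision weights w_i(a) of (3) for BPSK (bit 0 -> +1, bit 1 -> -1)
    over AWGN with noise deviation sigma. Column minima are zero."""
    s = np.array([+1.0, -1.0])                       # modulated symbols
    loglik = -(y[None, :] - s[:, None]) ** 2 / (2 * sigma**2)
    return loglik.max(axis=0) - loglik               # q x n matrix W

def weight(x, W):
    """Soft-decision weight w(x) = sum_i w_i(x_i) of (4)."""
    return W[x, np.arange(len(x))].sum()

y = np.array([0.9, -0.2, 1.4, -1.1])                 # received channel outputs
W = weight_matrix(y, sigma=0.8)
print(weight(np.array([0, 1, 0, 1]), W))             # lightest vector: 0.0
```

With this $W$, ordering all $q^n$ vectors by `weight` (ties broken lexicographically) reproduces the list (5), and $T_N(y)$ is its prefix of length $N$.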

Our soft-decision decoding algorithm is similar to the algorithm above; we also use about $\sqrt N$ inputs on either half instead of browsing the $N$ most probable inputs of length $n$. However, we change our left- and right-hand subsets as follows. Given an output $y$, we consider any cyclic window $s = s(i)$ and order all $q^{n/2}$ subvectors $x_s$ according to their weights:
$$w(x_s^{(1)}) \le w(x_s^{(2)}) \le \cdots \le w(x_s^{(q^{n/2})}). \qquad (6)$$
Similarly to (5), we use lexicographic ordering for any two subvectors with the same weights. Then any subvector $x_s$ has a unique position $N_s(x_s)$ in the ordering (6). We also define the set
$$T_s(M) = \{x_s^{(1)}, \dots, x_s^{(M)}\} \qquad (7)$$
of the $M$ lightest subvectors on $s$. Now consider the window $s'$ obtained by puncturing the last position of $s$. We take the set $T_{s'}(\sqrt N)$ and append any symbol $a \in \mathbb F_q$ to each of its $\sqrt N$ subvectors defined on $s'$. The result is the set
$$S_L = \{(x_{s'}, a) : x_{s'} \in T_{s'}(\sqrt N),\ a \in \mathbb F_q\}. \qquad (8)$$
For any splitting $(s, \bar s)$, we take below
$$S_R = T_{\bar s}(\sqrt N) \qquad (9)$$
and use the two subsets $S_L$ and $S_R$. These two subsets use subvectors of the same length $n/2$ and have the same exponential size as the former spherical subsets $B_s$ and $B_{\bar s}$. Also, $S_R$ is our conventional subset of the lightest vectors on the right half. Note, however, that the redefined subset $S_L$ includes $q$ times more vectors than $S_R$. Also, $S_L$ may include less probable vectors that do not necessarily fall within the list of $q\sqrt N$ most probable subvectors. This essential difference between soft-decision decoding and its hard-decision counterpart will be discussed in Section V in more detail.

IV. SOFT-DECISION DECODING

General Description: Given an output $y$, we first calculate the matrix $W$ according to (3). To design an algorithm $\Psi_N$, we take the sets (8) and (9) defined above.

Algorithm $\Psi_N$:
Step 1. Execute Step 1 of the algorithm of Section II.
Step 2. Design the sets $S_L$ and $S_R$ defined in (8) and (9). Form the records $\hat u$ and $\hat v$ defined in the algorithm of Section II.
Step 3. Execute Step 3 of the algorithm of Section II.
Step 4. Choose the lightest codeword, that is, the codeword with the minimum weight $w(c)$. Repeat Steps 1-3 for all $n$ splittings, and choose the lightest codeword obtained in all $n$ trials.

In this design, we need to address two important issues arising in Step 2. What is the complexity of designing the sets $S_L$ and $S_R$? Why can we use only the $\sqrt N$ lightest subvectors to get the full algorithm $\Psi_N$?

Complexity: Below we first show how the sets $S_L$ and $S_R$ are designed with exponential complexity of order $\sqrt N$. This is done by using conventional trellis diagrams. We also calculate the corresponding syndromes while proceeding with this design. For the set $S_R$, we start building the trellis from the first position of $\bar s$ and finish our procedure in its last position. We keep building the trellis diagram until the overall number of paths exceeds $\sqrt N$ in some position $j$. Then we leave only the list of the $\sqrt N$ shortest paths and proceed with position $j+1$. By adding a symbol to any path we obtain at most $q\sqrt N$ paths in step $j+1$. Again, we sort these paths according to their weights and leave the $\sqrt N$ shortest paths. The procedure is then repeated until the set $S_R$ is obtained in $n/2$ steps.

Remark: Here we need to justify that any path excluded in step $j$ can also be excluded from all further steps. Let $w_j^\ast$ be the lightest value among the $q$ values $w_j(a)$ of a symbol in position $j$. Note that all paths obtained from the retained set by appending a symbol of weight $w_j^\ast$ are shorter than any path obtained from an excluded path. It can also be shown (see [4]) that our trellis procedure needs to sort fewer paths in any step than the $q\sqrt N$ paths sorted above.

For the set $S_L$, we use the identical procedure in the first $n/2 - 1$ steps. The result is the set $T_{s'}(\sqrt N)$ of the lightest subvectors on the punctured window $s'$. The only difference arises in the last step, where we use any vertex and take all $q$ paths. A similar procedure is also considered in [4]. Note that the syndrome calculations require $O(n^2\sqrt N)$ operations. The sorting procedures have the same complexity order whenever $N$ is exponential in $n$. Now the complexity estimate follows.

Lemma 3 ([4, Lemma 8]): The lists $S_L$ and $S_R$ can be constructed with complexity $O(n^2\sqrt N)$.

Note that for $N \overset{\mathrm{exp}}{=} q^{n-k}$, Steps 2 and 3 give the required exponential complexity $q^{(n-k)/2}$. Now the overall complexity bound follows from the fact that Step 4 runs $n$ trials.

Lemma 4: Algorithm $\Psi_N$ has complexity $O(n^3\sqrt N)$ for any $q$-ary linear $(n,k)$-code used on any memoryless channel.

Example: Soft-Decision Decoding of a Binary BCH Code of Length 63. First, we calculate the matrix $W$ for the received output $y$. We also add one more position to all vectors, where we necessarily place the zero symbol (in other words, we define $w_{63}(0) = 0$ and $w_{63}(1) = \infty$). Then we perform the following steps.

1. Take two halves $s(i)$ and $\bar s(i)$ of 32 positions each, for any $i$.
2. Design a conventional trellis diagram on the left half, beginning with position $i$. On each step leave at most $\sqrt N$ shortest paths. Then, on the last step, leave all $2\sqrt N$ paths. These form the set $S_L$. Perform the same procedure for all 32 positions on the right half, and find the set $S_R$. Form the sets of records $\hat u$ and $\hat v$ that include the designed paths along with their syndromes.
3. Sort out the records of the joint set $\hat B$. Find the matching pairs and the corresponding codewords.
4. Choose the lightest codeword, that is, the one with the minimum weight, and repeat the procedure for all $i$.

Note that conventional trellis decoding of a general $(n,k)$-code needs up to $2^{\min\{n-k,\,k\}}$ trellis states. The algorithm of [4] uses more states, whereas our current algorithm requires at most the order of $\sqrt N$ states in each trial. Computer simulations have also shown that the decoding error probability is left almost unchanged on AWGN channels when the list size is further reduced. Then we use fewer states on the right half and on the left one. For AWGN channels, the simulation measured the bit error probability at signal-to-noise ratios of 2, 3, 4, and 5 dB.

V. DECODING PERFORMANCE

In order to justify Theorems 1 and 2, we now address the second question and show that by using the $\sqrt N$ lightest subpaths we obtain the algorithm $\Psi_N$ indeed. We need to prove that any vector $x \in T_N(y)$ can be split into two halves $x_s$ and $x_{\bar s}$ such that
$$N_{s'}(x_{s'}) \le \sqrt N, \qquad N_{\bar s}(x_{\bar s}) \le \sqrt N. \qquad (10)$$
Then both halves belong to the required subsets: $x_s \in S_L$ and $x_{\bar s} \in S_R$. Let $N(x)$ be the number of $x$ in the ordered set (5). We start with the following lemma that is similar to [4, Lemma 6].

Lemma 5: Any vector $x$ satisfies the inequality
$$N_s(x_s)\, N_{\bar s}(x_{\bar s}) \le N(x). \qquad (11)$$

Proof: Given $x$, let $B$ be the subset of vectors $x'$ such that $N_s(x'_s) \le N_s(x_s)$ and $N_{\bar s}(x'_{\bar s}) \le N_{\bar s}(x_{\bar s})$. First, note that $|B| = N_s(x_s)\, N_{\bar s}(x_{\bar s})$. Second, $x$ is the last vector in $B$, since any other vector in $B$ has either a lower weight, or the same weight and a smaller number, due to the lexicographic ordering of subvectors with equal weights. Therefore, the ordering (5) includes at least $|B|$ vectors placed no later than $x$.

Obviously, the same arguments can be used for any subset of positions. In particular, we can split any $x_s$ into a subvector on a subwindow $s' \subset s$ and its complement within $s$, and obtain an obvious inequality
$$N_{s'}(x_{s'}) \le N_s(x_s), \qquad s' \subset s. \qquad (12)$$

Given $x$ with $N(x) \le N$, consider now a sliding window $s(i)$ as a function of $i$. Then the number $N_{s(i)}(x_{s(i)})$ is a stepwise integer function of $i$. Note that $\bar s(i) = s(i + n/2)$ and that $N_{s(i)}(x_{s(i)})\, N_{\bar s(i)}(x_{\bar s(i)}) \le N$ for any $i$, according to (11). Therefore, at least one of the two ranks $N_{s(i)}(x_{s(i)})$ and $N_{\bar s(i)}(x_{\bar s(i)})$ does not exceed $\sqrt N$ for any $i$.

Lemma 6: For any vector $x$ with $N(x) \le N$ there exists a splitting $(s, \bar s)$ such that $N_{s'}(x_{s'}) \le \sqrt N$ and $N_{\bar s}(x_{\bar s}) \le \sqrt N$.

Proof: First, suppose that both inequalities $N_{s(i)}(x_{s(i)}) \le \sqrt N$ and $N_{\bar s(i)}(x_{\bar s(i)}) \le \sqrt N$ hold for some $i$. Then $N_{s'(i)}(x_{s'(i)}) \le \sqrt N$ by (12), and both inequalities (10) hold. Otherwise, suppose that one of the two ranks exceeds $\sqrt N$ for every $i$. The latter implies that there exists a threshold $i$ such that $N_{s(i)}(x_{s(i)}) \le \sqrt N$ and $N_{s(i+1)}(x_{s(i+1)}) > \sqrt N$ (such a threshold exists since the function $N_{s(i)}(x_{s(i)})$ takes values on both sides of $\sqrt N$ as $i$ runs over the cycle). We then start our splitting from position $i+1$ and use the sets $s = s(i+1)$ and $\bar s = \bar s(i+1)$. Since $N_s(x_s) > \sqrt N$, inequality (11) gives the upper bound $N_{\bar s}(x_{\bar s}) \le N/\sqrt N = \sqrt N$. On the other hand, the punctured window $s'$ is a subset of $s(i)$, so $N_{s'}(x_{s'}) \le N_{s(i)}(x_{s(i)}) \le \sqrt N$ according to (12). Therefore, both inequalities (10) hold for the above partition.

This lemma ensures that in some trial we find the required partition. Then any codeword $c \in T_N(y)$ is necessarily retrieved through Steps 1-4, and we obtain an algorithm $\Psi_N$ indeed. Now Theorem 1 follows according to (1). Also, Theorem 2 holds for most binary $(n,k)$-codes used on quantized channels, since $\Psi_N$ provides full ML decoding for sufficiently large $N$.
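Lemma 5 is easy to probe numerically. The sketch below, with the assumed helper name `ranks`, checks the product inequality (11) exhaustively for $q = 2$, $n = 4$, a random weight matrix, and the split $s = \{0, 1\}$, $\bar s = \{2, 3\}$.

```python
import itertools
import numpy as np

def ranks(W, cols):
    """Rank N_s(x_s) of every subvector on `cols` under the ordering of
    (5)-(6): increasing weight, ties broken lexicographically."""
    vecs = sorted(itertools.product(range(W.shape[0]), repeat=len(cols)),
                  key=lambda v: (sum(W[a, j] for a, j in zip(v, cols)), v))
    return {v: m + 1 for m, v in enumerate(vecs)}

rng = np.random.default_rng(0)
W = rng.random((2, 4))
full = ranks(W, (0, 1, 2, 3))
left, right = ranks(W, (0, 1)), ranks(W, (2, 3))
for x in itertools.product((0, 1), repeat=4):
    assert left[x[:2]] * right[x[2:]] <= full[x]    # inequality (11)
```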
Discussion: Recall that the general algorithm $\Psi_N$ uses $q\sqrt N$ vectors on the left half. Our question is whether the symmetric inequalities
$$N_s(x_s) \le \sqrt N, \qquad N_{\bar s}(x_{\bar s}) \le \sqrt N \qquad (13)$$
can be used instead of (10). This would give us the $\sqrt N$ lightest vectors on both halves, similarly to hard-decision decoding. However, it is unclear how to prove that inequalities (13) hold for each $x \in T_N(y)$. In this regard, we compare the proofs of Lemmas 1 and 6. In Lemma 1, we used the fact that the function $t_i$ makes only marginal changes on each step $i$. By contrast, the function $N_{s(i)}(x_{s(i)})$ used in Lemma 6 can change many times over on any single step $i$. In particular, consider all binary paths obtained on the punctured half $s'$, and let $x^\ast$ be the shortest one. Also, let the two symbols $0$ and $1$ in the last position $j$ have weights so different that $w_j(1)$ exceeds $w_j(0)$ plus the weight of any path on $s'$. Then our choice of the $j$th symbol dominates the total weight on length $n/2$. For example, the path $(x^\ast, 1)$ obtained from the shortest path $x^\ast$ follows all $2^{n/2-1}$ paths $(x_{s'}, 0)$ in the ordering (6). Summarizing, our function $N_{s(i)}(x_{s(i)})$ can undergo huge changes on each step $i$. It is for this reason that we were unable to use the $\sqrt N$ lightest vectors on both halves.

VI. CONCLUDING REMARKS

In this paper, we study an algorithm $\Psi_N$ that seeks the most probable codeword among the $N$ most probable inputs taken from the whole space $\mathbb F_q^n$. By using $N \ge q^{n-k}$, we obtain the decoding error probability $P_N \le (1 + q^{n-k}/N)\,P_\Psi(C)$ for any linear code $C$. This probability tends to the error probability of ML decoding if $N$ grows faster than $q^{n-k}$ as $n \to \infty$. For most binary linear codes used on quantized channels, $\Psi_N$ provides ML decoding for sufficiently large $N$.

On the other hand, we show that $\Psi_N$ has complexity $O(n^3\sqrt N)$ for any linear $(n,k)$-code. This is done by matching two conventional trellis diagrams of exponential size $\sqrt N$ designed on the length $n/2$. For the rates $R \ge 1/2$, the complexity order of $\Psi_N$ grows as a square root of the general trellis complexity $q^{\min\{n-k,\,k\}}$.

Finally, note that our soft-decision complexity is higher than its hard-decision counterpart presented in Section II. This is due to the fact that hard-decision decoding proceeds without calculating the lightest error patterns. This procedure is included in soft-decision decoding and comprises the bulk of the trellis design. We note without proof that the soft-decision complexity can also be reduced to the same polynomial order. However, in this case the design is different. For any splitting, we first calculate the maximum weights attained in the lists $S_L$ and $S_R$ of lightest vectors. Each subpath is terminated once its weight exceeds the corresponding maximum. As a result, we can eliminate the sorting procedures and design our lists more efficiently. An important observation is that these maximum weights can be calculated with low complexity prior to decoding. The proof is beyond the scope of this paper.

ACKNOWLEDGMENT

The author wishes to thank P. Farrell for helpful discussions.

REFERENCES

[1] M. Ajtai, J. Komlós, and E. Szemerédi, "An O(n log n) sorting network," in Proc. 15th Annu. ACM Symp. Theory of Computing, 1983, pp. 1-9.
[2] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. Inform. Theory, vol. IT-20, pp. 284-287, 1974.
[3] V. M. Blinovskii, "Lower asymptotic bound on the number of linear code words in a sphere of given radius in F_q^n," Probl. Pered. Inform., vol. 23, no. 2, pp. 50-53, 1987.
[4] I. Dumer, "Suboptimal decoding of linear codes: Partition technique," IEEE Trans. Inform. Theory, vol. 42, pp. 1971-1986, 1996.
[5] I. Dumer, "Ellipsoidal lists and maximum likelihood decoding," IEEE Trans. Inform. Theory, submitted for publication.
[6] I. Dumer, "Two algorithms for decoding linear codes," Probl. Pered. Inform., vol. 25, no. 1, pp. 24-32, 1989.
[7] G. S. Evseev, "On the complexity of decoding linear codes," Probl. Pered. Inform., vol. 19, no. 1, pp. 3-8, 1983.
[8] R. G. Gallager, Information Theory and Reliable Communication. New York: Wiley, 1968.
[9] D. E. Knuth, The Art of Computer Programming, vol. 2: Seminumerical Algorithms. Reading, MA: Addison-Wesley, 1969.
[10] A. Lafourcade and A. Vardy, "Lower bounds on trellis complexity of block codes," IEEE Trans. Inform. Theory, vol. 41, pp. 1938-1952, 1995.
[11] J. K. Wolf, "Efficient maximum likelihood decoding of linear codes using a trellis," IEEE Trans. Inform. Theory, vol. IT-24, pp. 76-80, 1978.
[12] V. V. Zyablov and V. R. Sidorenko, "Bounds on complexity of trellis decoding of linear codes," Probl. Pered. Inform., vol. 29, no. 3, pp. 3-9, 1993.