IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 45, NO. 7, NOVEMBER 1999

Sort-and-Match Algorithm for Soft-Decision Decoding

Ilya Dumer, Member, IEEE

Abstract: Let a $q$-ary linear $(n,k)$-code $C$ be used over a memoryless channel. We design a decoding algorithm $\Psi_N$ that splits the received block into two halves in $n$ different ways. First, about $\sqrt{N}$ error patterns are found on either half. Then the left- and right-hand lists are sorted out and matched to form codewords. Finally, the most probable codeword is chosen among at most $n\sqrt{N}$ codewords obtained in all $n$ trials. The algorithm can be applied to any linear code $C$ and has complexity order of $n^3\sqrt{N}$. For any $N \le q^{n-k}$, the decoding error probability $P_N$ exceeds at most $1 + q^{n-k}/N$ times the probability $P_\Psi(C)$ of maximum-likelihood decoding. For code rates $R \ge 1/2$, the complexity order $q^{(n-k)/2}$ grows as the square root of the general trellis complexity $q^{\min\{n-k,\,k\}}$. When used on quantized additive white Gaussian noise (AWGN) channels, algorithm $\Psi_N$ can provide maximum-likelihood decoding for most binary linear codes even when $N$ has exponential order of $q^{n-k}$.

Index Terms: Complexity, maximum-likelihood decoding, sorting, splitting, syndromes, trellis.

I. INTRODUCTION

THIS paper is aimed at designing efficient decoding algorithms that reduce the complexity of maximum-likelihood (ML) decoding for general linear codes used on memoryless channels. Let $A$ be a $q$-ary input alphabet and $B$ be an output alphabet, which can be infinite. We consider a memoryless channel

    $\Pr(y \mid c) = \prod_{i=1}^{n} \Pr(y_i \mid c_i)$.

Here $\Pr(y \mid c)$ is the transition probability (density) of receiving an output $y \in B^n$ for a given $c \in A^n$. We then use a $q$-ary linear code $C$ that consists of $q^k$ codewords of length $n$, and suppose that all messages are equiprobable. For any output $y$, ML decoding retrieves a codeword $c$ with the maximum a posteriori probability $\Pr(c \mid y)$ among all codewords, and provides the minimum decoding error probability $P_\Psi(C)$ among all decoding algorithms. We use below this probability as a benchmark for all other decoding algorithms applied to a code $C$. In [2] and [11], the upper bound $q^{\min\{n-k,\,k\}}$ on the complexity of ML decoding has been obtained by trellis design. In this paper, we wish to reduce the complexity of searching through a codebook or its trellis without deterioration in decoding performance. We shall obtain a better exponential complexity $q^{(n-k)/2}$ that grows as the square root of trellis complexity for any code rate $R \ge 1/2$. For these rates, our current bound also surpasses other bounds known to date. In particular, it falls below the complexity obtained in [4] and below the lower bounds on trellis complexity [10], [12], provided that no long binary codes exceed the asymptotic Gilbert-Varshamov bound.

Given any output $y$, we first consider the list of $N$ input vectors that have the highest a posteriori probabilities among all $q^n$ inputs. Following [7], we then study a decoding algorithm $\Psi_N$ that finds the most probable codeword whenever this list includes at least one codeword. Otherwise, $\Psi_N$ may fail to find it. Due to this possible failure, the decoding error probability $P_N$ of the algorithm $\Psi_N$ can exceed the probability $P_\Psi(C)$ of ML decoding.

(Manuscript received August 10, 1998; revised May 24, 1999. This work was supported by the NSF under Grant NCR. The material in this paper was presented in part at the 34th Annual Allerton Conference on Communication, Control, and Computing. The author is with the College of Engineering, University of California, Riverside, CA, USA. Communicated by A. M. Barg, Associate Editor for Coding Theory.)
However, it is proved in [4] that decoding performance can deteriorate only by a negligible margin. Namely, the inequality [4]

    $P_N \le \left(1 + q^{n-k}/N\right) P_\Psi(C)$                    (1)

holds for any (linear or nonlinear) code used over the so-called mapping channels. In particular, these include discrete additive channels and an additive white Gaussian noise (AWGN) channel with $q$-PSK modulation. More generally, mapping channels arise when the output alphabet $B$ can be split into disjoint subsets of size $q$ such that any $q$-ary subchannel is a conventional discrete symmetric channel ([8, p. 92]). A slightly weaker inequality holds for a bigger class of continuous symmetric channels [4], for which the finite subsets can have varying size. Another statement from [5] shows that algorithm $\Psi_N$ can provide ML decoding even if $N \stackrel{\exp}{=} q^{n-k}$.¹ In particular, this happens for most binary linear codes used on AWGN channels whose output is quantized to any number of fixed levels.²

¹ We say that $N(n)$ has exponential order $q^{\alpha n}$ and write $N \stackrel{\exp}{=} q^{\alpha n}$ if $(\log_q N)/n \to \alpha$ as $n \to \infty$.
² Below we call these channels quantized.

Algorithm $\Psi_N$ allows us to consider possible error patterns in the same combinatorial setting. Even for discrete additive channels we can turn from coset leaders used in ML decoding to the $N$ vectors closest to the received output $y$. These do not depend on a specific code $C$ and form a sphere of size $N$ about $y$ (including the boundary points).
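For a concrete sense of the factor $1 + q^{n-k}/N$ in inequality (1), here is a small numeric sketch; the parameters are hypothetical and chosen only for illustration.

```python
# Numeric illustration of inequality (1): P_N <= (1 + q^(n-k)/N) * P_ML.
# Hypothetical parameters: a binary code (q = 2) with n - k = 32 parity checks.
q, r = 2, 32          # r = n - k
for log2_N in (28, 30, 32, 34, 36):
    N = 2 ** log2_N
    factor = 1 + q**r / N
    print(f"N = 2^{log2_N}:  P_N / P_ML <= {factor:.3f}")
# N = 2^32 gives Evseev's classical factor of 2; the excess over ML decoding
# vanishes as soon as N grows slightly faster than q^(n-k).
```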

However, a straightforward search for codewords needs to test all $N$ closest inputs and has a complexity order of $N$. In this case, we cannot reduce the general trellis complexity $q^{\min\{n-k,\,k\}}$. Therefore, in this paper we wish to speed up our search by leaving out as many inputs as possible.

Our current algorithm can be applied to any linear code. We split the received vector into two halves in $n$ different ways. Given $N$, we find only about $\sqrt{N}$ vectors on either half. Then we sort the two sets of recorded vectors and match their syndromes. This allows us to reveal those left and right halves that jointly constitute a codeword. This sort-and-match procedure has the same complexity order as the size $\sqrt{N}$ of the recorded set. Finally, we choose the most probable codeword among at most $n\sqrt{N}$ of them. The decoding performance is summarized in the following theorems.

Theorem 1: Any linear $q$-ary $(n,k)$-code used on a mapping channel can be decoded with decoding error probability $P_N \le (1 + q^{n-k}/N)\,P_\Psi(C)$ and complexity order of $n^3\sqrt{N}$ for any $N \le q^{n-k}$.

Theorem 2: ML decoding of most linear binary $(n,k)$-codes of rate $R$ and length $n$ used on a quantized AWGN channel can be performed with exponential complexity order of $2^{(n-k)/2}$.

The material of this paper is organized as follows. In Section II we revise the hard-decision algorithm of [6] that gives complexity order $q^{(n-k)/2}$. This revised version is more suited for general soft-decision decoding that is discussed in the following sections. In Section III, we introduce a combinatorial setting for the algorithm $\Psi_N$ used on a general memoryless channel. Then in Section IV we describe a general soft-decision algorithm and study its complexity. In particular, we show how the lists of most probable vectors can be determined with complexity order of $n^2\sqrt{N}$. We also give an example of this sort-and-match procedure for a binary BCH code of length 63. Finally, in Section V we study the decoding performance and show that we obtained an algorithm $\Psi_N$ indeed.

II. BACKGROUND: MINIMUM-DISTANCE DECODING

Let the a posteriori probability $\Pr(c \mid y)$ depend only on the Hamming distance $d(c, y)$. Given a code $C$, minimum-distance (MD) decoding finds the codeword closest to the received vector $y$. Also, the $N$ most probable vectors belong to the Hamming sphere $S$ with center $y$ and radius³ $\rho$, where $\rho$ is the smallest integer such that

    $\sum_{i=0}^{\rho} \binom{n}{i} (q-1)^i \ge N$.                    (2)

³ Note that for $N = q^{n-k}$, $\rho$ is the Gilbert distance of a given code $C_q(n,k)$.

Now consider an algorithm that finds the closest codeword (if any) in the sphere $S$. According to (1), taking $N = q^{n-k}$ gives at most twice the decoding error probability of MD decoding. This important fact was first proved by Evseev [7]. Another important result of Blinovskii [3] states that virtually all long linear codes have covering radius that does not exceed the radius $\rho$ obtained for $N \stackrel{\exp}{=} q^{n-k}$. Then MD decoding is achieved if we apply the algorithm with $N \stackrel{\exp}{=} q^{n-k}$. Therefore, MD decoding needs to test only the order of $q^{n-k}$ inputs closest to the received vector $y$.

We first consider the hard-decision algorithm $\Psi$. Define a sliding cyclic window $I_i$ as a subset of $n/2$ cyclically consecutive positions beginning with any position $i = 1, \ldots, n$. Let the received vector $y$ be corrupted by $t$ errors. We start with the following trivial lemma.

Lemma 1: There exists $i$ such that the sliding window $I_i$ is corrupted by $\lfloor t/2 \rfloor$ errors.

Proof: Given $i$, let $t_i$ be the number of errors on a block $I_i$. Obviously, the mean number of errors is $t/2$. Also, $|t_{i+1} - t_i| \le 1$. Therefore, for some $i$, our stepwise function $t_i$ takes the values $\lfloor t/2 \rfloor$ and $\lceil t/2 \rceil$, the two closest ones to the mean $t/2$.

Let $n$ and $t$ be even numbers.
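The sliding-window argument of Lemma 1 is easy to state in code. The following is a minimal sketch (function and variable names are our own): as the cyclic window of $n/2$ positions slides by one, the error count changes by at most 1, so some window holds exactly $\lfloor t/2 \rfloor$ errors.

```python
def balanced_window(errors):
    """Return a start i whose cyclic n/2-window contains t//2 errors."""
    n = len(errors)              # n is assumed even
    half, t = n // 2, sum(errors)
    count = sum(errors[:half])   # errors inside the window starting at i = 0
    for i in range(n):
        if count == t // 2:
            return i
        # shift the window by one position cyclically:
        # drop position i, pick up position i + half
        count += errors[(i + half) % n] - errors[i]
    raise AssertionError("unreachable: counts step by 1 around the mean t/2")

errors = [1, 0, 0, 1, 1, 0, 1, 0]   # t = 4 errors on n = 8 positions
i = balanced_window(errors)
print(i, sum(errors[(i + j) % 8] for j in range(4)))  # window holds t//2 = 2
```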
Consider any linear code $C$ with a parity-check matrix $H$. We use a sliding window $I_i$ for any $i = 1, \ldots, n$. The remaining $n/2$ positions form a subset $\bar{I}_i$. Any partition splits the received vector $y$ and the matrix $H$ into two parts. These are called $y'$, $y''$ and $H'$, $H''$. According to Lemma 1, there exists a partition in which $y'$ is corrupted by $t/2$ errors, and $y''$ is corrupted by $t/2$ errors. Given at most $t/2$ errors, we consider all subvectors $c'$ of length $n/2$ located within a distance $t/2$ from the subvector $y'$. These form the sphere $S'$ about $y'$. Similarly, we consider all the subvectors $c''$ that belong to the sphere $S''$ about $y''$. Then we test the subvectors as follows.

Algorithm $\Psi$: Split in halves and match the syndromes.

1. Splitting. Take any $i$ and split the $n$ positions into two halves $I_i$ and $\bar{I}_i$.

2. Recording the most probable subvectors. For each $c' \in S'$ and $c'' \in S''$, calculate their syndromes $h' = H'c'^T$ and $h'' = H''c''^T$. Then form the records $R'(c') = (c', 0, h')$ and $R''(c'') = (c'', 1, h'')$ of length $n/2 + 1 + (n-k)$. Here the rightmost $n-k$ digits form the syndromes and are regarded as the most significant digits.

The next symbol is called the subset indicator. The leftmost $n/2$ digits represent the subvectors.

3. Sorting the records and matching the syndromes. Consider the joint set $R = R' \cup R''$. Sort the elements as natural numbers by their rightmost digits. Find the matching pairs that agree in the rightmost $n-k$ digits and disagree in the indicator symbol. Each matching pair gives two equal syndromes $h' = h''$ and yields the codeword $c = (c', c'')$.

4. Choosing the closest codeword. Run through the matching pairs in $R$ and choose the codeword closest to $y$. Then repeat Steps 1-3 and choose the closest codeword by running all $n$ trials $i = 1, \ldots, n$.

Remark: The above procedure is readily modified for odd $n$ and for nonbinary $q$. For odd $n$, we add one dummy position with symbol $0$ for all vectors, and then proceed on length $n+1$ as above. For nonbinary $q$, we change only the records $R''(c'')$. Here we store the negated syndrome $-h''$, where the negation is done in the field $GF(q)$. As a result, any matching pair satisfies the condition $h' = -h''$ and gives the codeword $c = (c', c'')$, since $Hc^T = H'c'^T + H''c''^T = 0$.

Lemma 2: The above algorithm $\Psi$ finds the closest codeword in the sphere of radius $t$ about $y$.

Proof: Let the closest codeword $c$ be at distance $s \le t$ from $y$. In other words, $y$ is corrupted by $s$ errors with respect to $c$. Then there exists an $i$ that splits the received block into two subvectors $y'$ and $y''$, corrupted by $\lfloor s/2 \rfloor$ and $\lceil s/2 \rceil$ errors, respectively. Then $c$ is included in the records, and is retrieved as the closest codeword.

We now turn to decoding complexity. The size of the sets $S'$ and $S''$ is upper-bounded by $\sum_{i \le t/2} \binom{n/2}{i} (q-1)^i$. This is readily verified to give an order of $\sqrt{N}$ due to (2). Then we calculate the syndromes and form the sets $R'$ and $R''$ with complexity $n^2\sqrt{N}$. In the next step, a sorting procedure is performed on the set of size $2\sqrt{N}$. It is well known [1] that $m$ natural numbers can be sorted out by a parallel circuit ("network") of size $m \log m$ and depth $\log m$. We can also obtain the same order of algorithmic complexity on a Turing machine with two tapes. In sequential implementation [9] (say, in software design), sorting procedures have time complexity $m \log m$ and memory complexity $m$.

In the fourth step, we run through $R$ and find the codeword closest to $y$. First, we find each matching syndrome, that is, each syndrome that coincides on at least two subvectors. Note that all subvectors $c''$ with indicator $1$ follow the subvectors $c'$ with indicator $0$, due to the higher value of the indicator. Given a matching syndrome, we then find the subvector $c'$ closest to $y'$, and the subvector $c''$ closest to $y''$. These two give the codeword closest to $y$ for a given syndrome. One codeword is left after all $n$ trials. The overall complexity obtained in $n$ trials is $n^3\sqrt{N}$.

Note that we get a similar order whenever we correct the $N$ lightest error patterns. In this case, however, we achieve a decoding error probability $P_N$ that tends to $P_\Psi(C)$ if $N$ grows slightly faster than $q^{n-k}$. By using $N \stackrel{\exp}{=} q^{n-k}$, we can decode within the covering radius. Then for most long linear codes, we achieve MD decoding with complexity order $n^3 q^{(n-k)/2}$. These results are summarized in the following theorem.

Theorem 3:
1. MD decoding can be performed for most linear $q$-ary $(n,k)$-codes with a complexity order of $n^3 q^{(n-k)/2}$.
2. Given $N$, near-MD decoding can be performed for any linear $q$-ary $(n,k)$-code with a complexity order of $n^3\sqrt{N}$ and decoding error probability $P_N \le (1 + q^{n-k}/N)\,P_\Psi(C)$.
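To make Steps 2-4 of one trial concrete, here is a toy sketch in Python. It is our own illustration, not the paper's implementation: the function name, the dictionary-free sort-and-scan structure, and the small $(4,2)$ code are all ours, and the binary case is assumed so that syndrome negation is unnecessary ($h' = h''$).

```python
import numpy as np
from itertools import groupby

def sort_and_match(H1, H2, L1, L2, y1, y2):
    """One trial of Steps 2-4: the codeword (c1|c2) closest to (y1|y2)."""
    # Step 2: record each half-vector with its syndrome and subset indicator.
    records = [(tuple(H1 @ np.array(c) % 2), 0, tuple(c)) for c in L1] \
            + [(tuple(H2 @ np.array(c) % 2), 1, tuple(c)) for c in L2]
    # Step 3: one sort by syndrome brings matching halves next to each other;
    # the indicator puts left-half records ahead of right-half records.
    records.sort(key=lambda r: (r[0], r[1]))
    best, best_dist = None, None
    for syndrome, group in groupby(records, key=lambda r: r[0]):
        group = list(group)
        lefts  = [np.array(c) for _, ind, c in group if ind == 0]
        rights = [np.array(c) for _, ind, c in group if ind == 1]
        if not lefts or not rights:
            continue                      # no matching pair for this syndrome
        # Step 4: for a matched syndrome, the closest halves give the closest
        # codeword, so one pass over each sub-list suffices.
        c1 = min(lefts,  key=lambda c: int(np.sum(c != y1)))
        c2 = min(rights, key=lambda c: int(np.sum(c != y2)))
        d = int(np.sum(c1 != y1) + np.sum(c2 != y2))
        if best_dist is None or d < best_dist:
            best, best_dist = np.concatenate([c1, c2]), d
    return best, best_dist                # (None, None) if nothing matched

# Toy usage: a binary (4,2) code with parity checks H = [H1 | H2].
H1 = np.array([[1, 0], [0, 1]])
H2 = np.array([[1, 1], [0, 1]])
halves = [(0, 0), (0, 1), (1, 0), (1, 1)]   # exhaustive lists for this toy
c, d = sort_and_match(H1, H2, halves, halves, np.array([1, 0]), np.array([1, 1]))
print(c, d)   # -> [1 0 1 0] 1
```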
III. LIGHTEST SUBSETS FOR GENERAL CHANNELS

Now consider a general memoryless channel. Given an output $y$, define the soft-decision weight of any input $q$-ary symbol $a$ in any position $i$:

    $w_i(a) = -\log \Pr(y_i \mid a)$.                    (3)

We then consider the matrix $W = (w_i(a))$ defined over all $q$ input symbols (rows) and all $n$ positions (columns). Given a vector $c = (c_1, \ldots, c_n)$, define its soft-decision weight

    $w(c) = \sum_{i=1}^{n} w_i(c_i)$.                    (4)

So, for any output $y$, the a posteriori probabilities are fully defined by $W$. We can also consider all $q^n$ vectors ordered according to increasing weights

    $w(c^{(1)}) \le w(c^{(2)}) \le \cdots \le w(c^{(q^n)})$.                    (5)

To obtain a unique representation (5), any two entries with the same weight are ordered lexicographically as $n$-digital $q$-ary numbers. Then $T_N = \{c^{(1)}, \ldots, c^{(N)}\}$ is the list of the $N$ lightest vectors. Note also that in MD decoding $w_i(a) = 0$ if $a = y_i$ and $w_i(a) = 1$ if $a \ne y_i$. Then $w(c)$ is the Hamming distance between $c$ and $y$.

Our problem is to find the codeword $c \in C$ that has the minimum weight $w(c)$. As above, we shall seek this codeword among the $N$ closest inputs. However, now this set $T_N$ depends on a specific matrix $W$, whereas in MD decoding, we use the sphere of radius $\rho$ for any output $y$. In general, $T_N$ can include vectors whose Hamming weights substantially exceed $\rho$. Yet, the algorithm presented below shows that the complexity depends on the number of plausible candidates rather than on their Hamming weights.

Our soft-decision decoding algorithm is similar to the above algorithm $\Psi$. We also use about $\sqrt{N}$ inputs on either half instead of browsing the $N$ most probable inputs of length $n$.
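Definitions (3)-(4) are straightforward to compute. The sketch below assumes, for concreteness, a binary-input AWGN channel with BPSK mapping $0 \to +1$, $1 \to -1$; this channel choice and the function names are ours, while the paper's setting is any memoryless channel. Constants common to all rows are dropped, since they do not affect the ordering (5).

```python
import numpy as np

def weight_matrix(y, sigma):
    """2 x n matrix of soft-decision weights w_i(a) = -log Pr(y_i | a),
    up to an additive constant that is the same in every column."""
    signals = np.array([+1.0, -1.0])                   # rows: symbols 0 and 1
    return (y[None, :] - signals[:, None]) ** 2 / (2 * sigma**2)

def weight(W, c):
    """w(c) = sum_i w_i(c_i): the lighter the vector, the more probable."""
    return float(W[c, np.arange(len(c))].sum())

y = np.array([0.9, -1.1, 0.2, -0.3])
W = weight_matrix(y, sigma=1.0)
print(weight(W, np.array([0, 1, 0, 1])))   # the lightest hard-decision pattern
```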

However, we change our left- and right-hand subsets as follows. Given an output $y$, we consider any cyclic window $I_i$ and order all $q^{n/2}$ subvectors $c'$ defined on $I_i$ according to their weights

    $w(c') = \sum_{j \in I_i} w_j(c'_j)$.                    (6)

Similarly to (5), we use lexicographic ordering for any two subvectors with the same weights. Then any subvector has a unique position in the ordering (6). We also define the set $T_M$ of the $M$ lightest subvectors on $I_i$. Now consider the window obtained by puncturing the last position in $I_i$. We take the set of the $M$ lightest subvectors defined on this punctured window and append any symbol $a$ to each of these $M$ subvectors. The result is the set

    $T' = \{(\hat{c}', a)\}$                    (7)

of $qM$ subvectors of length $n/2$. For any splitting $i$, we take below

    $M = \lceil \sqrt{N}\, \rceil$                    (8)

and use the two subsets

    $T'$ on the left half $I_i$ and $T'' = T_M$ on the right half $\bar{I}_i$.                    (9)

These two subsets use subvectors of the same length $n/2$ and have the same exponential size as the former spherical subsets $S'$ and $S''$. Also, $T''$ is our conventional subset of the $M$ lightest vectors on the right half. Note, however, that the redefined subset $T'$ includes $q$ times more vectors than $T''$. Also, $T'$ may include less probable vectors that do not necessarily fall within the list of the $M$ most probable subvectors. This essential difference between soft-decision decoding and its hard-decision counterpart will be discussed in Section V in more detail.

IV. SOFT-DECISION DECODING

General Description: Given an output $y$, we first calculate the matrix $W$ according to (3). To design an algorithm $\Psi_N$, we take $M = \lceil \sqrt{N}\, \rceil$ as in (8).

Algorithm $\Psi_N$:
Step 1. Execute Step 1 of the algorithm $\Psi$.
Step 2. Design the sets $T'$ and $T''$ defined in (9). Form the records $R'$ and $R''$ defined in the algorithm $\Psi$.
Step 3. Execute Step 3 of the algorithm $\Psi$.
Step 4. Choose the lightest codeword $c$ with the minimum weight $w(c)$. Repeat Steps 1-3 $n$ times, and choose the lightest codeword in all $n$ trials.

In this design, we need to address two important issues arising in Step 2. What is the complexity of designing the sets $T'$ and $T''$? Why can we use only the $\sqrt{N}$ lightest subvectors to get the full algorithm $\Psi_N$?

Complexity: Below we first show how the sets $T'$ and $T''$ are designed with exponential complexity $\sqrt{N}$. This is done by using conventional trellis diagrams. We also calculate the corresponding syndromes while proceeding with this design.

For the set $T''$, we start building the trellis from the first position of the window $\bar{I}_i$ and finish our procedure in its last position. We keep building the trellis diagram until the overall number of paths exceeds $M$ in some position $j$. Then we leave only the list of the $M$ shortest paths and proceed with position $j+1$. By adding a symbol to any path we obtain at most $qM$ paths in step $j+1$. Again, we sort these paths according to their weights and leave the $M$ shortest paths. The procedure is then repeated until the set $T''$ is obtained in $n/2$ steps.

Remark: Here we need to justify that any discarded path can be excluded from step $j$ and all further steps. Let $w_j^*$ be the lightest value among the $q$ symbol weights $w_j(a)$ in position $j$. Note that all $M$ retained paths extended by a symbol of weight $w_j^*$ are shorter than any extension of a discarded path, so the $M$ shortest paths of step $j+1$ stem from the retained paths only. It can also be shown (see [4]) that our trellis procedure needs to sort at most $2M$ paths in any step instead of the $qM$ paths sorted above.

For the set $T'$, we use an identical procedure in the first $n/2 - 1$ steps. The result is the set of the $M$ lightest subvectors on the punctured window. The only difference arises in the last step, where we use any vertex and take all $q$ paths. A similar procedure is also considered in [4]. Note that syndrome calculations require $n^2\sqrt{N}$ operations. Sorting procedures have the same complexity order if $\sqrt{N}$ grows as an exponent in $n$. Now the complexity estimate follows.

Lemma 3 ([4, Lemma 8]): The lists $T'$ and $T''$ can be constructed with complexity $n^2\sqrt{N}$.

Note that for $N \le q^{n-k}$, Steps 2 and 3 give the required exponential complexity $q^{(n-k)/2}$. Now the overall complexity bound follows from the fact that Step 4 runs $n$ trials.
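The list construction above is essentially a beam search over the trellis: extend every retained path by all $q$ symbols, then keep the $M$ lightest. The following simplified sketch (our own, with syndrome bookkeeping and the $2M$-path refinement of [4] omitted) includes the lexicographic tie-breaking of (6); since the half-vectors carry no code constraint, pruning to the $M$ lightest at each step loses no member of the final list.

```python
import numpy as np

def lightest_subvectors(W, positions, M, q=2):
    """Return the M lightest subvectors on the given positions (beam search)."""
    paths = [((), 0.0)]                       # (prefix, accumulated weight)
    for i in positions:
        # extend every retained path by all q symbols of position i
        extended = [(p + (a,), w + W[a, i]) for p, w in paths for a in range(q)]
        # order by weight, breaking ties lexicographically as in (6)
        extended.sort(key=lambda t: (t[1], t[0]))
        paths = extended[:M]                  # prune to the M shortest paths
    return paths

# Example: the 4 lightest half-vectors on positions 0..3 of a toy matrix W.
W = np.array([[0.1, 0.9, 0.4, 0.6],
              [0.8, 0.2, 0.5, 0.3]])
for c, w in lightest_subvectors(W, range(4), M=4):
    print(c, round(w, 2))
```

The pruning is exact here because any prefix outside the current top $M$ can be completed only into vectors heavier than the same completions of the $M$ lighter prefixes, which is the justification given in the Remark above.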
Lemma 4: Algorithm $\Psi_N$ has complexity of order $n^3\sqrt{N}$ for any $q$-ary linear $(n,k)$-code used on any memoryless channel.

Example. Soft-Decision Decoding for a Binary BCH Code of Length 63: First, we calculate the matrix $W$ for the received output $y$. We also add one more position to all vectors, where we necessarily place the zero symbol.⁴ Then we perform the following steps.

⁴ In other words, we define $w_{63}(0) = 0$ and $w_{63}(1) = \infty$.

1. Take two halves $I_i$ and $\bar{I}_i$ of 32 positions each, for any $i$.

2. Design a conventional trellis diagram on $I_i$ beginning with position $i$. On each step leave at most $M$ shortest paths. Then on the last step leave all paths. These form the set $T'$.

We then perform the same procedure for all 32 positions on the right half, and find the set $T''$. Form the sets of records $R'$ and $R''$ that include the designed paths along with their syndromes.

3. Sort out the records of the joint set $R = R' \cup R''$. Find the matching pairs and the corresponding codewords.

4. Choose the lightest codeword with the minimum weight $w(c)$ and repeat the procedure for all $i$.

Note that conventional trellis decoding of a general $(n,k)$-code needs up to $2^{\min\{n-k,\,k\}}$ trellis states. The algorithm of [4] uses a larger number of states, whereas our current algorithm requires at most $\sqrt{N}$ states in each trial. Computer simulations have also shown that the decoding error probability is left almost unchanged when $\sqrt{N}$ is reduced to ... on AWGN channels. Then we use only ... states on the right half and ... states on the left one. For AWGN channels, this simulation gave the following bit error probability versus signal-to-noise ratio: ... at 2 dB, ... at 3 dB, ... at 4 dB, and ... at 5 dB.

V. DECODING PERFORMANCE

In order to justify Theorems 1 and 2, we now address the second question and show that by using the $\sqrt{N}$ lightest subpaths we obtain the algorithm $\Psi_N$ indeed. We need to prove that any vector $c \in T_N$ can be split into two halves $c'$ and $c''$ such that

    $\nu(\hat{c}') \le M$ and $\nu(c'') \le M$,                    (10)

where $\hat{c}'$ denotes the left half $c'$ punctured in its last position. Then both halves belong to the required subsets $T'$ and $T''$. Let $\nu(c)$ be the number of $c$ in the ordered set (5), and similarly for subvectors in the orderings (6). We start with the following lemma that is similar to [4, Lemma 6].

Lemma 5: Any vector $c = (c', c'')$ satisfies the inequality

    $\nu(c')\,\nu(c'') \le \nu(c)$.                    (11)

Proof: Given $c = (c', c'')$, let $T$ be the subset of vectors $v = (v', v'')$ such that $\nu(v') \le \nu(c')$ and $\nu(v'') \le \nu(c'')$. First, note that $|T| = \nu(c')\,\nu(c'')$. Second, $c$ is the last vector in $T$, since any other vector in $T$ has either a lower weight, or the same weight and a smaller number, due to the lexicographic ordering of subvectors with equal weights. Therefore, the ordering (5) includes at least $\nu(c')\,\nu(c'') - 1$ vectors placed ahead of $c$.

Obviously, the same arguments can be used for any subset of positions, in which case the product of the corresponding numbers again does not exceed $\nu(c)$. In particular, we can split any $c$ as $(\hat{c}, c_n)$, where $\hat{c}$ punctures the last position, and obtain an obvious inequality

    $\nu(\hat{c}) \le \nu(c)$.                    (12)

Given $c$ with $\nu(c) \le N$, consider now a sliding window $I_i$ as a function of $i$. Then the number $\nu(c'_i)$ of the left half $c'_i$ is a stepwise integer function of $i$. Note that $\nu(c'_i) \le \nu(c) \le N$ for any $i$, according to (11).

Lemma 6: For any vector $c \in T_N$ there exists a splitting $(c', c'')$ such that both inequalities (10) hold.

Proof: First, suppose that both inequalities $\nu(c'_i) \le M$ and $\nu(c''_i) \le M$ hold for some $i$. Then $\nu(\hat{c}'_i) \le \nu(c'_i) \le M$ by (12), and both inequalities (10) hold. Otherwise, suppose that $\nu(c'_i) > M$ or $\nu(c''_i) > M$ for every $i$. Note that both numbers cannot exceed $M$ simultaneously, since their product is at most $\nu(c) \le N = M^2$ according to (11); note also that $c''_i$ coincides with $c'_{i+n/2}$. The latter implies that there exists a threshold $i$ such that $\nu(c'_i) \le M$ and $\nu(c'_{i+1}) > M$. We then start our splitting from position $i+1$ and use the sets $T'$ and $T''$ defined on the halves $I_{i+1}$ and $\bar{I}_{i+1}$. Since $\nu(c'_{i+1}) > M$, inequality (11) gives an upper bound $\nu(c''_{i+1}) \le \nu(c)/\nu(c'_{i+1}) < M$. On the other hand, the punctured left half $\hat{c}'_{i+1}$ is a subvector of $c'_i$, so $\nu(\hat{c}'_{i+1}) \le M$ according to (12). Therefore, both inequalities (10) hold for the above partition.

This lemma ensures that in Step 4 we find the required partition. Then any codeword $c \in T_N$ is necessarily retrieved through Steps 1-4 and we obtain an algorithm $\Psi_N$ indeed. Now Theorem 1 follows according to (1). Also, Theorem 2 holds for most binary $(n,k)$-codes used on quantized channels since $\Psi_N$ provides full ML decoding for sufficiently large $N$.

Discussion: Recall that the general algorithm $\Psi_N$ uses $qM$ vectors on the left half. Our question is whether the symmetric inequalities

    $\nu(c') \le M$ and $\nu(c'') \le M$                    (13)

can be used instead of (10). This would give us the $M$ lightest vectors on both halves similarly to hard-decision decoding. However, it is unclear how to prove that inequalities (13) hold for each $c \in T_N$. In this regard, we compare the proofs of Lemmas 1 and 6. In Lemma 1, we used that the function $t_i$ makes marginal changes on each step $i$. We note that the function $\nu(c'_i)$ used in Lemma 6 can change as much as $q^{n/2-1}$ times on any step. In particular, consider all $2^{n/2-1}$ binary paths obtained on the punctured half. Let $\tilde{c}$ be the shortest one.
Also, let the two symbols $0$ and $1$ in the last position have weights so different that $w(1) > w(0) + w(c')$ for any path $c'$ on the punctured half. Then our choice of the last symbol dominates the total weight on the whole length $n/2$. For example, the path $(\tilde{c}, 1)$ obtained from the shortest path $\tilde{c}$ takes the number $2^{n/2-1} + 1$ and follows all $2^{n/2-1}$ paths $(c', 0)$. Summarizing, our function $\nu(c'_i)$ can undergo huge changes on each step $i$. It is for this reason that we were unable to use the $M$ lightest vectors on both halves.

VI. CONCLUDING REMARKS

In this paper, we study an algorithm $\Psi_N$ that seeks the most probable codeword among the $N$ most probable inputs taken from the whole space of $q^n$ vectors. By using $\Psi_N$, we obtain the decoding error probability $P_N \le (1 + q^{n-k}/N)\,P_\Psi(C)$ for any linear code $C$. This probability tends to the error probability of ML decoding if $N$ grows faster than $q^{n-k}$ as $n \to \infty$. For most binary linear codes used on quantized channels, $\Psi_N$ provides ML decoding for sufficiently large $N$.

On the other hand, we show that $\Psi_N$ has complexity order $n^3\sqrt{N}$ for any linear $(n,k)$-code. This is done by matching two conventional trellis diagrams of exponential size $\sqrt{N}$ designed on the two halves of length $n/2$. For the rates $R \ge 1/2$, the complexity order $q^{(n-k)/2}$ of $\Psi_N$ grows as a square root of the general trellis complexity $q^{\min\{n-k,\,k\}}$.

Finally, note that our soft-decision complexity is higher than its hard-decision counterpart presented in Section II. This is due to the fact that hard-decision decoding proceeds without calculating the lightest error patterns. This procedure is included in soft-decision decoding and comprises the bulk of trellis design. We note without proof that soft-decision complexity can also be reduced to the same polynomial order. However, in this case the design is different. For any splitting $i$ we first calculate the maximum weights attained in the lists $T'$ and $T''$ of lightest vectors. Each subpath is terminated once its weight exceeds the above maximum. As a result, we can eliminate sorting procedures and design our lists more efficiently. An important observation is that these maximum weights can be calculated with low complexity prior to decoding. The proof is beyond the scope of this paper.

ACKNOWLEDGMENT

The author wishes to thank P. Farrell for helpful discussions.

REFERENCES

[1] M. Ajtai, J. Komlós, and E. Szemerédi, "An O(n log n) sorting network," in Proc. 15th Annu. ACM Symp. Theory of Computing, 1983, pp. 1-9.
[2] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. Inform. Theory, vol. IT-20, pp. 284-287, Mar. 1974.
[3] V. M. Blinovskii, "Lower asymptotic bound on the number of linear code words in a sphere of given radius in F_q^n," Probl. Pered. Inform., vol. 23, no. 2, 1987.
[4] I. Dumer, "Suboptimal decoding of linear codes: Partition technique," IEEE Trans. Inform. Theory, vol. 42, pp. 1971-1986, Nov. 1996.
[5] I. Dumer, "Ellipsoidal lists and maximum likelihood decoding," IEEE Trans. Inform. Theory, submitted for publication.
[6] I. Dumer, "Two algorithms for decoding linear codes," Probl. Pered. Inform., vol. 25, no. 1, 1989.
[7] G. S. Evseev, "On the complexity of decoding linear codes," Probl. Pered. Inform., vol. 19, no. 1, pp. 3-8, 1983.
[8] R. G. Gallager, Information Theory and Reliable Communication. New York: Wiley, 1968.
[9] D. E. Knuth, The Art of Computer Programming, vol. 3: Sorting and Searching. Reading, MA: Addison-Wesley, 1973.
[10] A. Lafourcade and A. Vardy, "Lower bounds on trellis complexity of block codes," IEEE Trans. Inform. Theory, vol. 41, pp. 1938-1954, Nov. 1995.
[11] J. K. Wolf, "Efficient maximum likelihood decoding of linear codes using a trellis," IEEE Trans. Inform. Theory, vol. IT-24, pp. 76-80, Jan. 1978.
[12] V. V. Zyablov and V. R. Sidorenko, "Bounds on complexity of trellis decoding of linear codes," Probl. Pered. Inform., vol. 29, no. 3, pp. 3-9, 1993.
