4488 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 10, OCTOBER 2008
List Decoding of Biorthogonal Codes and the Hadamard Transform With Linear Complexity

Ilya Dumer, Fellow, IEEE, Grigory Kabatiansky, and Cédric Tavernier

Abstract: Let a biorthogonal Reed-Muller code RM(1, m) of length n = 2^m be used on a memoryless channel with an input alphabet ±1 and a real-valued output. Given any nonzero received vector y in the Euclidean space R^n and some parameter ε ∈ (0, 1), our goal is to perform list decoding of the code RM(1, m) and retrieve all codewords located within the angle arccos ε from y. For an arbitrarily small ε, we design an algorithm that outputs this list of codewords with a complexity of linear order n ln²(1/ε) in bit operations. Without loss of generality, let the vector y also be scaled to the Euclidean length √n of the transmitted vectors. Then an equivalent task is to retrieve all coefficients of the Hadamard transform of y whose absolute values exceed εn. Thus, this decoding algorithm retrieves all εn-significant coefficients of the Hadamard transform with the linear complexity n ln²(1/ε) instead of the complexity n log₂ n of the full Hadamard transform.

Index Terms: Biorthogonal codes, Hadamard transform, soft-decision list decoding.

I. INTRODUCTION

Biorthogonal (first-order) Reed-Muller codes RM(1, m) have been extensively used in communications and addressed in many papers since the 1960s. These codes have optimal parameters and achieve the maximum possible distance for the given length and dimension. One renowned decoding algorithm, designed by Green [1], performs maximum-likelihood decoding of RM(1, m) codes and finds the distances from the received vector to all codewords with a complexity of order n log₂ n bit operations. Another algorithm, designed by Litsyn and Shekhovtsov [2], performs bounded-distance decoding and corrects errors up to half the code distance with linear complexity. In the area of probabilistic decoding, a major breakthrough has been achieved by Goldreich and Levin [3].
Their algorithm takes any received vector and outputs the list of codewords of RM(1, m) within a decoding radius n(1 − ε)/2, performing this task with high probability and a low poly-logarithmic complexity for any fixed ε > 0.

Manuscript received July 2; current version published September 17, 2008. The work of I. Dumer was supported in part by the National Science Foundation. The work of G. Kabatiansky was supported in part by the Russian Foundation for Fundamental Research. The material in this paper was presented in part at the IEEE International Symposium on Information Theory, Nice, France, June 2007. I. Dumer is with the Department of Electrical Engineering, University of California, Riverside, CA, USA (e-mail: dumer@ee.ucr.edu). G. Kabatiansky is with the Institute for Information Transmission Problems, Moscow, Russia, and with INRIA, Rocquencourt, France (e-mail: kaba@iitp.ru). C. Tavernier is with Communications and Systems (CS), Le Plessis Robinson, France (e-mail: tavernier.cedric@gmail.com). Communicated by T. Etzion, Associate Editor for Coding Theory.

Recently, list decoding of RM(1, m) codes has been extended to deterministic algorithms. In particular, the algorithm of [4] performs error-free list decoding within the radius n(1 − ε)/2 with complexity linear in n for any received vector. This paper advances the results of [4] in two different directions. First, we extend list decoding of RM(1, m) codes to an arbitrary memoryless semi-continuous channel. Second, the former complexity of [4] will be reduced to the linear order of n ln²(1/ε).

In doing so, we use the following setup. Let a binary vector b = (b_1, ..., b_n) be mapped onto the Euclidean vector c(b) = ((−1)^{b_1}, ..., (−1)^{b_n}) with symbols ±1. Given two binary vectors b and b', consider the Hamming distance d(b, b'), the Euclidean distance ||c(b) − c(b')||, and the inner product (c(b), c(b')) of their maps. Then

||c(b) − c(b')||² = 4 d(b, b'),   (c(b), c(b')) = n − 2 d(b, b').

Now any binary code of length n is mapped into the cube {±1}^n, which in turn belongs to the Euclidean sphere of radius √n in the Euclidean space R^n. Thus, any binary code of Hamming distance d becomes a spherical code, where two different codewords have inner product at most n − 2d and angle at least arccos(1 − 2d/n).
Below, we consider a memoryless channel with an input alphabet {±1} and some larger output alphabet (usually, the real line). We use a code on this channel and replace an output r_i in any position i with its log-likelihood ratio

y_i = ln [ Pr{r_i | 1} / Pr{r_i | −1} ].

We then call y = (y_1, ..., y_n) a received vector. Note that any codeword has a higher posterior probability than another codeword if and only if it also has a larger inner product with y. Note also that all codewords become equiprobable for y = 0; therefore, we will assume that y ≠ 0. Without loss of generality, we can multiply y by the scalar √(n/E), where E = (y, y) is the squared Euclidean length of vector y. Then all received vectors belong to the same sphere (y, y) = n as the codewords. We now proceed with the biorthogonal codes. Let

f(x) = a_1x_1 + ... + a_mx_m + a_0   (1)

be any affine Boolean function defined on all points x = (x_1, ..., x_m) of the binary cube F_2^m.
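As a small illustration of this setup, the sketch below computes the log-likelihood ratios for a binary-input AWGN channel (one concrete memoryless channel admitted by the model; the channel choice, the function name, and the noise parameter are our assumptions, not part of the paper) and rescales the result to the sphere of radius √n:

```python
import numpy as np

def llr_received_vector(channel_output, sigma):
    # For a binary-input AWGN channel with inputs +1/-1 and noise variance
    # sigma^2, the log-likelihood ratio ln(Pr{r | +1} / Pr{r | -1}) is 2r/sigma^2.
    y = 2.0 * np.asarray(channel_output, dtype=float) / sigma**2
    n = y.size
    norm = np.linalg.norm(y)
    assert norm > 0, "the all-zero vector y carries no information"
    # Rescale y to the Euclidean length sqrt(n) of the transmitted vectors;
    # multiplying y by a positive scalar does not change the decoding list.
    return y * np.sqrt(n) / norm
```

Any such monotone recoding of the outputs preserves the ordering of the inner products (y, c), which is all the decoder uses.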
As above, we map the outputs f(x) onto the vector c_f with symbols (−1)^{f(x)}. Then the codevectors c_f of all affine functions f form the biorthogonal code RM(1, m). Given any received vector y and any parameter ε ∈ (0, 1), our main goal is to retrieve all codewords c_f such that (c_f, y) ≥ εn. In equivalent terms, given any y, we seek the codewords within the angle arccos ε from y. To define the list L_ε(y) of such codewords, we will construct the corresponding list of affine functions. Here each function f is recorded as the vector (a_0, a_1, ..., a_m) of its coefficients. Now let RM(1, m) be decomposed into the orthogonal Hadamard code, whose codevectors c_f are generated by the linear functions f with a_0 = 0, and its coset of complementary vectors −c_f. Recall also that the codewords of the Hadamard code, considered as rows, form an n × n Hadamard matrix H, which satisfies the equality H Hᵀ = nI, where I is the identity matrix. Then the vector yHᵀ represents the Hadamard transform of y, whereas the complementary coset gives the opposite values. Here positions in both vectors are marked as binary m-tuples. Now we see that the list L_ε(y) gives all positions in which the coefficients of the Hadamard transform have absolute values at least εn. Our main result is as follows.

Theorem 1: Let the biorthogonal code RM(1, m) of length n = 2^m be used on a general memoryless channel. For any ε ∈ (0, 1) and any received vector y, the list of affine functions L_ε(y) can be retrieved error-free with a complexity that has linear order n ln²(1/ε) for any fixed ε as n → ∞.

Finally, we reformulate Theorem 1 as follows.

Corollary 2: For any constant ε ∈ (0, 1), the code RM(1, m) requires complexity of linear order n ln²(1/ε) to output the list of codewords located within the angle arccos ε from any received vector (soft-decision decoding), or within the Hamming distance n(1 − ε)/2 from any received vector (hard-decision decoding).

Note that linear decoding complexity is achieved in Corollary 2 even if the decoding radius is within an arbitrarily small ε-margin of the code distance d = n/2. In particular, the new algorithm removes the performance-complexity gap between the maximum-likelihood decoding of the Green machine [1] and the bounded-distance decoding of the Litsyn-Shekhovtsov algorithm [2].
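The Hadamard-matrix view above is easy to check directly. The sketch below (the function name is ours) builds the Sylvester-type Hadamard matrix whose rows are the codevectors of the linear functions, and verifies H Hᵀ = nI; the biorthogonal code then consists of these rows and their negatives:

```python
import numpy as np

def sylvester_hadamard(m):
    # Rows are the 2^m codevectors c_f of the linear Boolean functions
    # f(x) = a_1 x_1 + ... + a_m x_m, written over the alphabet +1/-1.
    H = np.array([[1]])
    for _ in range(m):
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester_hadamard(4)
n = H.shape[0]  # n = 2^m = 16
# Orthogonality of the Hadamard code: H H^T = n I.
assert (H @ H.T == n * np.eye(n)).all()
# The Hadamard transform of y is y H^T; for a codeword row it peaks at n.
assert (H[5] @ H.T)[5] == n
```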
Finally, note that in a high-noise case, with ε of a vanishing order, the output list can include as many as 1/ε² codewords. Each of these is defined by m + 1 = log₂ n + 1 information bits. Thus, in this high-noise case, the newly presented algorithm has a complexity that closely approaches the bit size of its output.

In Section II, we consider the Johnson bound for real-valued outputs and upper-bound the maximum list size for any code. This also yields a tight bound for any biorthogonal code. Then, in Sections III and IV, we proceed with a new soft-decision list decoding algorithm and prove Theorem 1.

II. JOHNSON BOUND FOR CODES IN EUCLIDEAN SPACE

Given a binary code C of length n and any received vector y, decoding within the threshold εn produces the list L_ε(y) of all codewords c with (c, y) ≥ εn; for binary outputs, this is decoding within the Hamming radius n(1 − ε)/2. Then the classic Johnson bound reads as follows.

The Johnson Bound: Let C be a code of minimum Hamming distance d. Then for any y ∈ {±1}^n and any positive ε such that ε² > 1 − 2d/n, the list L_ε(y) has size

|L_ε(y)| ≤ (2d/n) / (ε² − 1 + 2d/n).   (2)

The following lemma shows that the Johnson bound (2) can be applied to any soft-decision output without any alterations. A similar lemma is given in [5] with a different proof.

Lemma 3: The Johnson bound (2) holds for any code C ⊆ {±1}^n, any output y with (y, y) = n, and any positive ε such that ε² > 1 − 2d/n.

Proof: Consider any list L = L_ε(y) of size A and let z = Σ_{c ∈ L} c. Then we have the inequality

(z, z) ≤ An + A(A − 1)(n − 2d),

since any two different codewords have inner product at most n − 2d. We can also use the Cauchy-Schwarz inequality

(y, z)² ≤ (y, y)(z, z) = n(z, z).

By construction of our list, (y, z) ≥ Aεn. Since ε² > 1 − 2d/n, we combine all three inequalities as follows:

A²ε²n² ≤ (y, z)² ≤ n(z, z) ≤ An² + A(A − 1)n(n − 2d),

which gives the required bound (2).

For the code RM(1, m), Lemma 3 gives the following corollary.

Corollary 4: Let y be a received vector with (y, y) = n. Then for any ε > 0, the list of affine functions L_ε(y) has size |L_ε(y)| ≤ 1/ε².

Remark: For RM(1, m) the term 2d/n of (2) equals 1, and the bound becomes 1/ε². Here we also use the fact that the code is linear and contains the all-one codeword, so that its codewords split into complementary pairs c, −c; since (y, c) ≥ εn implies (y, −c) ≤ −εn, at most half the code belongs to L_ε(y) for any y.

III. LIST DECODING FOR RM(1, m) CODES

In this section, we design the Sums-Facet algorithm Ψ that performs soft-decision list decoding of an RM(1, m) code within a threshold εn. Given any i = 1, ..., m, we represent any m-variate affine Boolean function (1) in the form

f(x) = a_1x_1 + ... + a_ix_i + a_{i+1}x_{i+1} + ... + a_mx_m + a_0.
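Before turning to prefixes, note that the list-size bound of Corollary 4 admits a quick numerical sanity check: by Parseval's identity the squared Hadamard coefficients of any y on the sphere sum to n(y, y) = n², so at most 1/ε² of them can reach εn in absolute value. A sketch (the function name is ours):

```python
import numpy as np

def count_significant(y, eps):
    # Count the Hadamard coefficients of y with absolute value >= eps * n.
    n = y.size
    H = np.array([[1.0]])
    while H.shape[0] < n:                 # Sylvester recursion
        H = np.block([[H, H], [H, -H]])
    return int(np.count_nonzero(np.abs(H @ y) >= eps * n))

rng = np.random.default_rng(1)
y = rng.standard_normal(64)
y *= np.sqrt(64) / np.linalg.norm(y)      # place y on the sphere of radius sqrt(n)
# Parseval: the squared coefficients sum to n^2, so at most 1/eps^2 survive.
assert count_significant(y, 0.3) <= int(1 / 0.3**2)
```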
Then we define its i-prefix as the i-variate linear Boolean function

f^(i)(x) = a_1x_1 + ... + a_ix_i.

Given a channel output y, the algorithm performs the following steps. In each step i, the algorithm receives some list L_{i−1} of (i − 1)-prefixes that pass a threshold test and derives the subsequent list L_i of i-prefixes that begin with the same coefficients. Also, consider the list

L_i^* = { f^(i) : f ∈ L_ε(y) }   (3)

that includes all i-prefixes of required functions. For the given y, we will sometimes shorten our notation and call the above lists L_i and L_i^*, respectively. In the sequel, we show that L_i ⊇ L_i^* for all i.

First, consider the m-dimensional Boolean cube F_2^m and, for any suffix v = (v_{i+1}, ..., v_m), its i-dimensional facet

G_v = { x ∈ F_2^m : x_{i+1} = v_{i+1}, ..., x_m = v_m }.

Here the prefixes (x_1, ..., x_i) run through F_2^i while the suffixes v are fixed. Let y_v and (c_f)_v denote the restrictions of the vectors y and c_f to a given facet G_v, and let (y_v, (c_f)_v) be the inner product of these restrictions. Let c(f^(i)) and c(f^(i) + 1) denote the two vectors in {±1}^{2^i} that correspond to the functions f^(i) and f^(i) + 1. Note that all facets give the same pair of vectors, as f^(i) does not depend on the suffix. Now we can use the equality

(c_f)_v = ± c(f^(i)) on each facet G_v,   (4)

where the sign (−1)^{a_{i+1}v_{i+1} + ... + a_mv_m + a_0} is the same multiplicative constant for all positions of the facet. Now we use the inner products (y_v, c(f^(i))) and define the Sums-Facet function

S(f^(i)) = Σ_v |(y_v, c(f^(i)))|.   (5)

The main idea of the algorithm is to calculate S(f^(i)) for each candidate prefix and employ this function as an upper bound for the unknown product (y, c_f). Namely, (4) and (5) show that

(y, c_f) = Σ_v (y_v, (c_f)_v) ≤ S(f^(i))   (6)

for any function f that begins with the prefix f^(i). Therefore any prefix of a required function f ∈ L_ε(y) passes the threshold test, and step i derives the subsequent list

L_i = { f^(i) : S(f^(i)) ≥ εn }.   (7)

Finally, for each facet G_v, consider the two (i − 1)-dimensional subfacets defined as

G_{0v} = { x ∈ G_v : x_i = 0 },   G_{1v} = { x ∈ G_v : x_i = 1 }.

Then the inner products used in (5) can be recalculated recursively for each prefix f^(i) = f^(i−1) + a_ix_i as follows:

(y_v, c(f^(i))) = (y_{0v}, c(f^(i−1))) + (−1)^{a_i} (y_{1v}, c(f^(i−1))).   (8)

In summary, Ψ performs three subroutines in each step i: A calculates the inner products (8), B calculates the sums (5), and C performs the threshold test (7). This is done as follows:

Algorithm Ψ.   (9)
Input: the numbers m, ε and the vector y.
Set L_0 = {0}.
Step i. Input: the list L_{i−1} of prefixes f^(i−1) and the numbers (y_v, c(f^(i−1))) for all facets.
  For each f^(i−1) in L_{i−1} and each a_i in {0, 1}, take the prefix f^(i) = f^(i−1) + a_ix_i and calculate the numbers (y_v, c(f^(i))) by (8).
  For each f^(i), find S(f^(i)) by (5).
  If S(f^(i)) ≥ εn, pass f^(i) into the list L_i and pass the numbers (y_v, c(f^(i))) into step i + 1. Otherwise, discard f^(i).

Now consider the Sums-Facet function (5) in more detail.
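The three subroutines amount to a fast Hadamard transform that is pruned after every butterfly level. Below is a minimal floating-point sketch of this idea (our own compact restatement with hypothetical names, not the paper's bit-level procedure; the sign of each output coefficient plays the role of the free coefficient a_0):

```python
import numpy as np

def sums_facet_list(y, eps):
    # Pruned Hadamard transform: return every a in F_2^m whose Hadamard
    # coefficient sum_x y[x] * (-1)^{a.x} has absolute value >= eps * n.
    # Position index x carries x_1 in its least significant bit.
    y = np.asarray(y, dtype=float)
    n = y.size
    m = int(np.log2(n))
    assert 2 ** m == n, "length must be a power of two"
    thr = eps * n
    # Each candidate prefix (a_1, ..., a_i) keeps its 2^(m-i) facet sums.
    candidates = {(): y}
    for _ in range(m):
        survivors = {}
        for prefix, t in candidates.items():
            t0, t1 = t[0::2], t[1::2]      # subfacets x_i = 0 and x_i = 1
            for a_i, s in ((0, t0 + t1), (1, t0 - t1)):
                # Sums-Facet test: the sum of |facet sums| upper-bounds the
                # final coefficient of every completion of this prefix.
                if np.abs(s).sum() >= thr:
                    survivors[prefix + (a_i,)] = s
        candidates = survivors
    # After m steps each surviving sum vector has length 1 and equals the
    # Hadamard coefficient itself, so the last test is exact.
    return {a: float(t[0]) for a, t in candidates.items()}
```

On a codeword input the list collapses to the single true coefficient; lowering eps lets more prefixes survive, and eps near 0 degenerates to the full transform.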
Given any vector c(f^(i)), define its Facet Span as the subset of vectors obtained by flipping the sign of c(f^(i)) on different facets G_v. In contrast to the linear extensions of f^(i), most vectors of the span are obtained by nonlinear transformations. Then our function S(f^(i)) can be considered as the maximum inner product with y taken over the entire span. This setting is illustrated as follows.

Example: In Fig. 1, we consider the code RM(1, 5) with a decoding threshold εn. As an example, we analyze three candidate prefixes in step i = 3. These vectors, along with y, are shown in the first four lines of Fig. 1. Here the symbols +1 are marked by "+" and the symbols −1 are marked by "−". In step 3, all vectors form four facets of length 8. Here we also give the values of the inner products (y_v, c(f^(3))) obtained on each facet. The last three lines of Fig. 1 indicate which facets must be flipped within each span to obtain
Fig. 1. Decoding of the code RM(1, 5).

optimal extensions that give the function S(f^(3)). We see that only two of the three candidates pass the test. Similarly, we can consider the following steps and use recursion (8) for the two remaining candidates. Then it is easy to verify from (8) that one candidate obtained in step 4 gives an inner product that passes the test, whereas all other extensions fail.

Finally, we compare the Sums-Facet algorithm (9) with the classic Green machine. Similarly to algorithm Ψ, the Green machine calculates all inner products (y_v, c(f^(i))) using all facets. This calculation is similar to subroutine A of our algorithm. However, the Green machine skips both subroutines B and C. Instead, each step i outputs all 2^i possible prefixes and their inner products. More generally, the Green machine performs the complete fast Hadamard transform (FHT) of the vector y using recursion (8). By contrast, the algorithm Ψ represents an expurgated version of the FHT, which eventually outputs only those coefficients of the transformed vector whose absolute values exceed the given threshold εn, if such coefficients exist.

IV. LIST SIZE AND COMPLEXITY OF THE SUMS-FACET ALGORITHM

Lemma 5: For any received vector y, any ε > 0, and any step i, the list L_i includes the prefixes (3) of all required functions f ∈ L_ε(y).

Proof: By definition of L_ε(y), (y, c_f) ≥ εn. Thus, S(f^(i)) ≥ εn according to (6). Then f^(i) ∈ L_i by (7).

In the following, we show that all incorrect candidates are filtered out after m steps.

Lemma 6: The list L_m obtained after all m steps equals the required list L_ε(y).

Proof: Indeed, any prefix left in the final step is a full function f defined on the single facet F_2^m. Also, S(f) = |(y, c_f)| in this case, so the final threshold test is exact, and the sign of (y, c_f) identifies the free coefficient a_0. Therefore the proof is completed.

Remark: From (8), we also deduce that S is a monotonic function on two consecutive prefixes, S(f^(i)) ≤ S(f^(i−1)), and that strict inequality holds for at least one extension. This implies that, in general, consecutive steps become more restrictive.
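The contrast with the Green machine can be made concrete. A complete FHT runs the same butterfly recursion for all prefixes without any threshold test; below is a standard iterative sketch (the function name is ours):

```python
import numpy as np

def fast_hadamard_transform(y):
    # Complete Green-machine transform: m rounds of butterflies,
    # n * log2(n) additions in total, returning all n Hadamard
    # coefficients of y (index bit j-1 corresponds to variable x_j).
    t = np.asarray(y, dtype=float).copy()
    h = 1
    while h < t.size:
        for start in range(0, t.size, 2 * h):
            a = t[start:start + h].copy()
            b = t[start + h:start + 2 * h].copy()
            t[start:start + h] = a + b
            t[start + h:start + 2 * h] = a - b
        h *= 2
    return t
```

The expurgated algorithm performs only the butterflies of surviving prefixes, which is what replaces the n log₂ n total with the linear order n ln²(1/ε).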
The next lemma shows that the function S is fairly restrictive in the sense that each list L_i does not accept too many incorrect candidates.

Lemma 7: For any received vector y, any ε > 0, and any step i, the list L_i has size

|L_i| ≤ min{ 2^i, 1/ε² }.

Proof: The bound 2^i is obvious, as there exist only 2^i prefixes of length i. Note that on each facet G_v, the vectors c(f^(i)) corresponding to different prefixes form an orthogonal code of length 2^i. For each prefix f^(i) and each facet G_v, let z_v(f^(i)) be the vector ±c(f^(i)) such that

(y_v, z_v(f^(i))) = |(y_v, c(f^(i)))|.   (10)

Let z(f^(i)) be the corresponding vector of length n that equals z_v(f^(i)) on each facet G_v. For each v, different vectors c(f^(i)) are orthogonal, and so are the vectors z_v(f^(i)). Then their extensions z(f^(i)) to the full length n are also orthogonal. Next, observe from (5) and (10) that for any prefix

(y, z(f^(i))) = Σ_v (y_v, z_v(f^(i))) = S(f^(i)).

Finally, recall that any prefix f^(i) ∈ L_i satisfies the Sums-Facet criterion S(f^(i)) ≥ εn. Therefore, (y, z(f^(i))) ≥ εn. Now we apply the generalized Johnson bound of Lemma 3 to the orthogonal code formed by the vectors z(f^(i)) and to its sublist corresponding to L_i. Similarly to Corollary 4, this gives the estimate 1/ε² for the latter list.

In combinatorial terms, the above proof shows that any two vectors taken from different facet spans are pairwise orthogonal. Fig. 1 also illustrates the same fact. Here the three original prefixes are orthogonal on each facet, and so are all flipped versions of these prefixes. Now we introduce an important parameter
namely, s = ⌈log₂(1/ε²)⌉, the smallest integer such that 2^s ≥ 1/ε².

Corollary 8: Each step i leaves

|L_i| ≤ min{ 2^i, 2^s }   (11)

prefixes in the list.

Proof of Theorem 1: Given two positive integers k and t, below we use a procedure A(k, t) that adds 2^k t-digit binary numbers. We estimate the complexity of A(k, t) in bit operations, with one operation counted for each addition, inversion, or comparison of bits. To perform A(k, t), we first couple the numbers into 2^{k−1} pairs of t-digit numbers and find 2^{k−1} (t + 1)-digit sums. Then we proceed in the same manner using pairwise additions. Thus, A(k, t) requires k rounds and has complexity

|A(k, t)| ≤ Σ_{j=1}^{k} 2^{k−j}(t + j) ≤ 2^k (t + 2).

Now we use A(k, t) to estimate the complexity of step i of our algorithm (9), which outputs the list L_i using the three subroutines A, B, and C. For each candidate prefix, the first subroutine A calculates 2^{m−i} real numbers (8). Any such calculation uses one addition and, possibly, one inversion of two real numbers in (8). To count these real-valued operations in binary bits, we assume that each symbol of the received vector y is formed by h bits, where h is a fixed parameter for any n. Thus, step 1 uses h-digit inputs and gives (h + 1)-digit outputs. Consequently, step i inputs (h + i − 1)-digit numbers and outputs (h + i)-digit numbers. This calculation requires order of 2^{m−i}(h + i) binary operations for each prefix. The second subroutine B calculates the function S of (5) using the 2^{m−i} numbers obtained in (8). Here we need at most 2^{m−i} bitwise inversions, while the procedure A(m − i, h + i) performs the additions; it gives an (h + m)-digit number S(f^(i)) and has complexity at most 2^{m−i}(h + i + 2). The third subroutine C retrieves the list (7) and requires one comparison of (h + m)-digit numbers per candidate. Finally, the last step requires only one comparison and at most n inversions per prefix. Given at most min{2^i, 2^s} candidates processed in step i, and using bound (11) for i > s, we complete the proof by calculating the entire complexity of Ψ as follows:

Σ_{i=1}^{m} min{2^i, 2^s} 2^{m−i}(h + i + 2) ≤ n Σ_{i=1}^{s} (h + i + 2) + n Σ_{i>s} 2^{s−i}(h + i + 2),

which has order n(h + s)² = O(n ln²(1/ε)) for a fixed h as n → ∞.

Remark: Alternatively, we can assume that any two real numbers are added or compared in one operation. However, the above analysis is more conservative. In particular, it accounts for the fact that all steps employ real numbers of growing bit length h + i, even if the original channel symbols have a fixed length h. Note also that we obtain a similar bound if we replace bit operations with byte operations but still use an increasing number of fixed-size bytes as n → ∞.

Above, our outputs were taken from the entire space R^n or, equivalently, from the sphere of radius √n after normalization. Obviously, the same results also hold for any quantized channel, with the outputs taken from a discrete space of a fixed size.
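The pairwise-addition procedure used in the proof (adding 2^k t-digit numbers by repeatedly coupling them into pairs) can be modeled directly. The sketch below (our own cost model with hypothetical names, counting about one bit operation per digit of each pairwise sum) reproduces the 2^k (t + 2) estimate:

```python
def tree_sum_cost(k, t):
    # Add 2^k t-digit binary numbers by pairwise (tree) addition:
    # round j performs 2^(k-j) additions of (t+j-1)-digit numbers,
    # each costing about t+j bit operations.
    total = 0
    count, digits = 2 ** k, t
    while count > 1:
        count //= 2        # half as many numbers after each round
        digits += 1        # each sum may grow by one digit
        total += count * digits
    return total
```

The total is Σ_j 2^{k−j}(t + j) ≤ 2^k (t + 2), linear in the input size, which is what keeps every step of the decoder linear.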
Finally, note that Corollary 2 is obtained if the inner product criterion is replaced with the Euclidean or the Hamming distance. Note also that the last case, decoding in the Hamming metric, has recently been considered in [6] using a slightly different hard-decision algorithm. It is also proven in [6] that for RM(1, m) codes, the classic Johnson bound is tight up to a universal constant, given any Hamming radius n(1 − ε)/2 with ε > 0. Namely, there exist outputs that are surrounded by order of 1/ε² or more codewords in a sphere of this radius.

V. CONCLUDING REMARKS

In this paper, a new decoding algorithm has been designed for biorthogonal codes that decodes any output y into the list of codewords located within the angle arccos ε from y. For any fixed ε, the algorithm requires complexity of linear order n ln²(1/ε) instead of the order n log₂ n of the full Hadamard transform. In the decoding process, the algorithm employs an efficient Sums-Facet function and removes distant codewords by applying a threshold test to their intermediate prefixes instead of the codewords themselves. In this way, the algorithm extends the classic Green machine. In equivalent terms, the algorithm performs a linear order of n ln²(1/ε) bit operations to retrieve all εn-significant Hadamard coefficients, whose absolute values exceed the threshold εn.

REFERENCES

[1] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes. Amsterdam, The Netherlands: North-Holland, 1977.
[2] S. Litsyn and O. Shekhovtsov, "Fast decoding algorithm for first order Reed-Muller codes," Probl. Inf. Transm., vol. 19.
[3] O. Goldreich and L. A. Levin, "A hard-core predicate for all one-way functions," in Proc. 21st ACM Symp. Theory of Computing, Seattle, WA, May 1989.
[4] G. Kabatiansky and C. Tavernier, "List decoding of Reed-Muller codes of the first order," in Proc. 9th Int. Workshop on Algebraic and Combinatorial Coding Theory, Kranevo, Bulgaria, Jun. 2004.
[5] V. I. Levenshtein, "Universal bounds for codes and designs," in Handbook of Coding Theory, V. S. Pless and W. C. Huffman, Eds. Amsterdam, The Netherlands: Elsevier, 1998, ch. 6.
[6] I. Dumer, G. Kabatiansky, and C. Tavernier, "List decoding of the first-order binary Reed-Muller codes," Probl. Inf. Transm., vol. 43, 2007.
More informationOn Linear Subspace Codes Closed under Intersection
On Linear Subspace Codes Closed under Intersection Pranab Basu Navin Kashyap Abstract Subspace codes are subsets of the projective space P q(n), which is the set of all subspaces of the vector space F
More informationError control codes for parallel asymmetric channels
Error control codes for parallel asymmetric channels R. Ahlswede and H. Aydinian Department of Mathematics University of Bielefeld POB 100131 D-33501 Bielefeld, Germany E-mail addresses: ahlswede@mathematik.uni-bielefeld.de
More informationLDPC codes based on Steiner quadruple systems and permutation matrices
Fourteenth International Workshop on Algebraic and Combinatorial Coding Theory September 7 13, 2014, Svetlogorsk (Kaliningrad region), Russia pp. 175 180 LDPC codes based on Steiner quadruple systems and
More informationLecture 2 Linear Codes
Lecture 2 Linear Codes 2.1. Linear Codes From now on we want to identify the alphabet Σ with a finite field F q. For general codes, introduced in the last section, the description is hard. For a code of
More informationLecture 12. Block Diagram
Lecture 12 Goals Be able to encode using a linear block code Be able to decode a linear block code received over a binary symmetric channel or an additive white Gaussian channel XII-1 Block Diagram Data
More informationQuantum Error Detection I: Statement of the Problem
778 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 46, NO 3, MAY 2000 Quantum Error Detection I: Statement of the Problem Alexei E Ashikhmin, Alexander M Barg, Emanuel Knill, and Simon N Litsyn, Member,
More informationNonlinear Codes Outperform the Best Linear Codes on the Binary Erasure Channel
Nonear odes Outperform the Best Linear odes on the Binary Erasure hannel Po-Ning hen, Hsuan-Yin Lin Department of Electrical and omputer Engineering National hiao-tung University NTU Hsinchu, Taiwan poning@faculty.nctu.edu.tw,.hsuanyin@ieee.org
More informationBounds for binary codes with narrow distance distributions
Bounds for binary codes with narrow distance distributions Ron M Roth, Gadiel Seroussi Advanced Studies HP Laboratories Palo Alto HPL-006-136 September 9, 006* constant-weight codes, distance distribution,
More informationOn the minimum distance of LDPC codes based on repetition codes and permutation matrices 1
Fifteenth International Workshop on Algebraic and Combinatorial Coding Theory June 18-24, 216, Albena, Bulgaria pp. 168 173 On the minimum distance of LDPC codes based on repetition codes and permutation
More information(each row defines a probability distribution). Given n-strings x X n, y Y n we can use the absence of memory in the channel to compute
ENEE 739C: Advanced Topics in Signal Processing: Coding Theory Instructor: Alexander Barg Lecture 6 (draft; 9/6/03. Error exponents for Discrete Memoryless Channels http://www.enee.umd.edu/ abarg/enee739c/course.html
More informationCovering a sphere with caps: Rogers bound revisited
Covering a sphere with caps: Rogers bound revisited Ilya Dumer Abstract We consider coverings of a sphere S n r of radius r with the balls of radius one in an n-dimensional Euclidean space R n. Our goal
More informationError control of line codes generated by finite Coxeter groups
Error control of line codes generated by finite Coxeter groups Ezio Biglieri Universitat Pompeu Fabra, Barcelona, Spain Email: e.biglieri@ieee.org Emanuele Viterbo Monash University, Melbourne, Australia
More information2012 IEEE International Symposium on Information Theory Proceedings
Decoding of Cyclic Codes over Symbol-Pair Read Channels Eitan Yaakobi, Jehoshua Bruck, and Paul H Siegel Electrical Engineering Department, California Institute of Technology, Pasadena, CA 9115, USA Electrical
More informationarxiv:cs/ v1 [cs.it] 15 Sep 2005
On Hats and other Covers (Extended Summary) arxiv:cs/0509045v1 [cs.it] 15 Sep 2005 Hendrik W. Lenstra Gadiel Seroussi Abstract. We study a game puzzle that has enjoyed recent popularity among mathematicians,
More informationChapter 3 Linear Block Codes
Wireless Information Transmission System Lab. Chapter 3 Linear Block Codes Institute of Communications Engineering National Sun Yat-sen University Outlines Introduction to linear block codes Syndrome and
More information6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011
6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 On the Structure of Real-Time Encoding and Decoding Functions in a Multiterminal Communication System Ashutosh Nayyar, Student
More informationMoments of orthogonal arrays
Thirteenth International Workshop on Algebraic and Combinatorial Coding Theory Moments of orthogonal arrays Peter Boyvalenkov, Hristina Kulina Pomorie, BULGARIA, June 15-21, 2012 Orthogonal arrays H(n,
More information6054 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 9, SEPTEMBER 2012
6054 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 58, NO 9, SEPTEMBER 2012 A Class of Binomial Bent Functions Over the Finite Fields of Odd Characteristic Wenjie Jia, Xiangyong Zeng, Tor Helleseth, Fellow,
More informationOrthogonal Arrays & Codes
Orthogonal Arrays & Codes Orthogonal Arrays - Redux An orthogonal array of strength t, a t-(v,k,λ)-oa, is a λv t x k array of v symbols, such that in any t columns of the array every one of the possible
More informationHamming codes and simplex codes ( )
Chapter 6 Hamming codes and simplex codes (2018-03-17) Synopsis. Hamming codes are essentially the first non-trivial family of codes that we shall meet. We start by proving the Distance Theorem for linear
More informationSmart Hill Climbing Finds Better Boolean Functions
Smart Hill Climbing Finds Better Boolean Functions William Millan, Andrew Clark and Ed Dawson Information Security Research Centre Queensland University of Technology GPO Box 2434, Brisbane, Queensland,
More informationAsymmetric binary covering codes
Asymmetric binary covering codes Joshua N Cooper and Robert B Ellis Department of Mathematics, University of California at San Diego, La Jolla, California E-mail: cooper@mathucsdedu, rellis@mathucsdedu
More informationCoding for Memory with Stuck-at Defects
Coding for Memory with Stuck-at Defects Yongjune Kim B. V. K. Vijaya Kumar Electrical Computer Engineering, Data Storage Systems Center (DSSC) Carnegie Mellon University Pittsburgh, USA yongjunekim@cmu.edu,
More informationCodes and Rings: Theory and Practice
Codes and Rings: Theory and Practice Patrick Solé CNRS/LAGA Paris, France, January 2017 Geometry of codes : the music of spheres R = a finite ring with identity. A linear code of length n over a ring R
More informationChapter 9 Fundamental Limits in Information Theory
Chapter 9 Fundamental Limits in Information Theory Information Theory is the fundamental theory behind information manipulation, including data compression and data transmission. 9.1 Introduction o For
More informationOn the properness of some optimal binary linear codes and their dual codes
Eleventh International Workshop on Algebraic and Combinatorial Coding Theory June 16-22, 2008, Pamporovo, Bulgaria pp. 76-81 On the properness of some optimal binary linear codes and their dual codes Rossitza
More informationTail-Biting Trellis Realizations and Local Reductions
Noname manuscript No. (will be inserted by the editor) Tail-Biting Trellis Realizations and Local Reductions Heide Gluesing-Luerssen Received: date / Accepted: date Abstract This paper investigates tail-biting
More informationCoding Theory and Applications. Linear Codes. Enes Pasalic University of Primorska Koper, 2013
Coding Theory and Applications Linear Codes Enes Pasalic University of Primorska Koper, 2013 2 Contents 1 Preface 5 2 Shannon theory and coding 7 3 Coding theory 31 4 Decoding of linear codes and MacWilliams
More informationPerformance of small signal sets
42 Chapter 5 Performance of small signal sets In this chapter, we show how to estimate the performance of small-to-moderate-sized signal constellations on the discrete-time AWGN channel. With equiprobable
More informationOptimal Block-Type-Decodable Encoders for Constrained Systems
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 49, NO. 5, MAY 2003 1231 Optimal Block-Type-Decodable Encoders for Constrained Systems Panu Chaichanavong, Student Member, IEEE, Brian H. Marcus, Fellow, IEEE
More informationELEC 519A Selected Topics in Digital Communications: Information Theory. Hamming Codes and Bounds on Codes
ELEC 519A Selected Topics in Digital Communications: Information Theory Hamming Codes and Bounds on Codes Single Error Correcting Codes 2 Hamming Codes (7,4,3) Hamming code 1 0 0 0 0 1 1 0 1 0 0 1 0 1
More informationPartial permutation decoding for binary linear Hadamard codes
Partial permutation decoding for binary linear Hadamard codes R. D. Barrolleta 1 and M. Villanueva 2 Departament d Enginyeria de la Informació i de les Comunicacions Universitat Autònoma de Barcelona Cerdanyola
More informationONE of the main applications of wireless sensor networks
2658 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006 Coverage by Romly Deployed Wireless Sensor Networks Peng-Jun Wan, Member, IEEE, Chih-Wei Yi, Member, IEEE Abstract One of the main
More informationTilings of Binary Spaces
Tilings of Binary Spaces Gérard Cohen Département Informatique ENST, 46 rue Barrault 75634 Paris, France Simon Litsyn Department of Electrical Engineering Tel-Aviv University Ramat-Aviv 69978, Israel Alexander
More informationLinear Block Codes. Saravanan Vijayakumaran Department of Electrical Engineering Indian Institute of Technology Bombay
1 / 26 Linear Block Codes Saravanan Vijayakumaran sarva@ee.iitb.ac.in Department of Electrical Engineering Indian Institute of Technology Bombay July 28, 2014 Binary Block Codes 3 / 26 Let F 2 be the set
More informationList Decoding of Reed Solomon Codes
List Decoding of Reed Solomon Codes p. 1/30 List Decoding of Reed Solomon Codes Madhu Sudan MIT CSAIL Background: Reliable Transmission of Information List Decoding of Reed Solomon Codes p. 2/30 List Decoding
More informationAn introduction to basic information theory. Hampus Wessman
An introduction to basic information theory Hampus Wessman Abstract We give a short and simple introduction to basic information theory, by stripping away all the non-essentials. Theoretical bounds on
More informationMATH Examination for the Module MATH-3152 (May 2009) Coding Theory. Time allowed: 2 hours. S = q
MATH-315201 This question paper consists of 6 printed pages, each of which is identified by the reference MATH-3152 Only approved basic scientific calculators may be used. c UNIVERSITY OF LEEDS Examination
More informationSome Nonregular Designs From the Nordstrom and Robinson Code and Their Statistical Properties
Some Nonregular Designs From the Nordstrom and Robinson Code and Their Statistical Properties HONGQUAN XU Department of Statistics, University of California, Los Angeles, CA 90095-1554, U.S.A. (hqxu@stat.ucla.edu)
More informationCyclic Linear Binary Locally Repairable Codes
Cyclic Linear Binary Locally Repairable Codes Pengfei Huang, Eitan Yaakobi, Hironori Uchikawa, and Paul H. Siegel Electrical and Computer Engineering Dept., University of California, San Diego, La Jolla,
More informationVector spaces. EE 387, Notes 8, Handout #12
Vector spaces EE 387, Notes 8, Handout #12 A vector space V of vectors over a field F of scalars is a set with a binary operator + on V and a scalar-vector product satisfying these axioms: 1. (V, +) is
More information1 Introduction A one-dimensional burst error of length t is a set of errors that are conned to t consecutive locations [14]. In this paper, we general
Interleaving Schemes for Multidimensional Cluster Errors Mario Blaum IBM Research Division 650 Harry Road San Jose, CA 9510, USA blaum@almaden.ibm.com Jehoshua Bruck California Institute of Technology
More informationIN this paper, we will introduce a new class of codes,
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 44, NO 5, SEPTEMBER 1998 1861 Subspace Subcodes of Reed Solomon Codes Masayuki Hattori, Member, IEEE, Robert J McEliece, Fellow, IEEE, and Gustave Solomon,
More information3. Coding theory 3.1. Basic concepts
3. CODING THEORY 1 3. Coding theory 3.1. Basic concepts In this chapter we will discuss briefly some aspects of error correcting codes. The main problem is that if information is sent via a noisy channel,
More informationOn the Cross-Correlation of a p-ary m-sequence of Period p 2m 1 and Its Decimated
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 58, NO 3, MARCH 01 1873 On the Cross-Correlation of a p-ary m-sequence of Period p m 1 Its Decimated Sequences by (p m +1) =(p +1) Sung-Tai Choi, Taehyung Lim,
More information: Coding Theory. Notes by Assoc. Prof. Dr. Patanee Udomkavanich October 30, upattane
2301532 : Coding Theory Notes by Assoc. Prof. Dr. Patanee Udomkavanich October 30, 2006 http://pioneer.chula.ac.th/ upattane Chapter 1 Error detection, correction and decoding 1.1 Basic definitions and
More informationLecture 19 : Reed-Muller, Concatenation Codes & Decoding problem
IITM-CS6845: Theory Toolkit February 08, 2012 Lecture 19 : Reed-Muller, Concatenation Codes & Decoding problem Lecturer: Jayalal Sarma Scribe: Dinesh K Theme: Error correcting codes In the previous lecture,
More information