Soft-Decision Decoding Using Punctured Codes

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 47, NO. 1, JANUARY 2001

Ilya Dumer, Member, IEEE

Abstract: Let a $q$-ary linear $(n,k)$-code be used over a memoryless channel. We design a soft-decision decoding algorithm that tries to locate a few most probable error patterns on a shorter length $s \in [k, n]$. First, we take $s$ cyclically consecutive positions starting from any initial point. Then we cut the subinterval of length $s$ into two parts and examine the $T$ most plausible error patterns on either part. To obtain codewords of a punctured $(s,k)$-code, we try to match the syndromes of both parts. Finally, the designed codewords of the $(s,k)$-code are re-encoded to find the most probable codeword on the full length $n$. For any long linear code, the decoding error probability of this algorithm can be made arbitrarily close to the probability of its maximum-likelihood (ML) decoding given sufficiently large $T$. By optimizing $s$, we prove that this near-ML decoding can be achieved by using only $T \approx q^{k(n-k)/(n+k)}$ error patterns. For most long linear codes, this optimization also gives about $T$ re-encoded codewords. As a result, we obtain the lowest complexity order of $q^{k(n-k)/(n+k)}$ known to date for near-ML decoding. For codes of rate $1/2$, the new bound grows as a cubic root of the general trellis complexity $q^{\min(k,\,n-k)}$. For short blocks of length 63, the algorithm reduces the complexity of trellis design by a few decimal orders.

Index Terms: Complexity, maximum-likelihood (ML) decoding, sorting, splitting, syndromes, trellis.

Manuscript received January 20, 2000. This work was supported by the NSF under Grant NCR. The material in this paper was presented in part at the 34th Annual Allerton Conference on Communication, Control and Computing, Monticello, IL, October 2-4, 1996. The author is with the College of Engineering, University of California, Riverside, CA, USA (e-mail: dumer@ee.ucr.edu). Communicated by A. Barg, Associate Editor for Coding Theory.

I. INTRODUCTION

Let $X$ be a $q$-ary input alphabet and $Y$ be an output alphabet, which can be infinite. We then consider the input vectors $x \in X^n$ and output vectors $y \in Y^n$. We use a $q$-ary linear code $C$ of length $n$ to encode $q^k$ equiprobable messages. The codewords are transmitted over a memoryless channel with transition probability (density) $f(y \mid x)$. Given an output $y$, maximum-likelihood (ML) decoding retrieves the codeword with the maximum posterior probability. This gives the minimum block error probability $P_{ML}$ among all decoding algorithms. Our main goal is to reduce ML-decoding complexity while allowing only a negligible increase in decoding error probability. Therefore, we use $P_{ML}$ as a benchmark for all other decoding algorithms applied to code $C$. Similarly, the decoding complexity of our design will be compared with the upper bound $q^{\min(k,\,n-k)}$ on ML-decoding complexity, which was obtained in [2] and [15] by trellis design.

Given any output $y$, we first wish to restrict the list of inputs among which the most probable codeword is being sought. More specifically, we consider the list $\Phi(y, T)$ of the $T$ input vectors with maximum posterior probabilities among all $q^n$ inputs. In contrast to ML decoding, our decoding algorithm $\Psi_T$ fails whenever this list does not include codewords; otherwise, $\Psi_T$ finds the most probable codeword in the list. Since $\Psi_T$ can fail to decode, its error probability $P(\Psi_T)$ can exceed the probability $P_{ML}$ of ML decoding. However, for $T \gg q^k$ the decoding performance deteriorates only by a negligible margin. Namely, the inequality

$$P(\Psi_T) \le P_{ML}\,(1 + q^k/T) \qquad (1)$$

holds [6] for any (linear or nonlinear) code used over the so-called mapping channels. These channels arise when the output alphabet $Y$ can be split into disjoint subsets of size $q$ in such a way that a conventional $q$-ary symmetric channel is obtained on every subset. In particular, an additive white Gaussian noise (AWGN) channel with $q$-PSK modulation is a mapping channel. In a more general setting [6], we can also assume that the output size varies for different symmetric subchannels. Even in this case, the algorithm $\Psi_T$ can give near-ML decoding for any sequence of long $(n,k)$-codes of rate $R = k/n$. Here we use the following definition.

Definition 1: We say that a decoding algorithm $\Psi$ gives near-ML decoding for an infinite sequence of codes if the decoding error probability converges to the probability of ML decoding: $P(\Psi)/P_{ML} \to 1$ as $n \to \infty$.

To obtain near-ML decoding for long $(n,k)$-codes of rate $R$, we use inequality (1) and take $T = \varepsilon_n q^k$, where $\varepsilon_n \to \infty$. Note, however, that a brute-force technique inspects all $T$ most probable inputs and gives complexity of order $T$. (We say that a function $f(n)$ has exponential order $q^{cn}$, and write $f \doteq q^{cn}$, if $(\log_q f)/n \to c$ as $n \to \infty$; similarly, $f \lesssim q^{cn}$ if $\limsup\,(\log_q f)/n \le c$.) In this case, we cannot reduce ML-decoding complexity. Therefore, we wish to speed up our search by eliminating most noncodewords from $\Phi(y, T)$. Given a sequence of codes of rate $R$, we consider the upper bounds on the complexity of the algorithm $\Psi_T$. Our main result holds for most long linear codes of any rate $R$, with the exception of a vanishing fraction of codes as $n \to \infty$.
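To make the asymptotic claims above concrete, the short script below compares the normalized complexity exponent $R(1-R)/(1+R)$, which is the rate form of the order $q^{k(n-k)/(n+k)}$ claimed in the Abstract, against the trellis exponent $\min(R, 1-R)$. This is a minimal illustration; the function names and sampled rates are ours, not the paper's.

```python
# Compare normalized complexity exponents (log_q of complexity, divided by n)
# for near-ML decoding of rate-R codes.  R(1-R)/(1+R) is the normalized form
# of q^{k(n-k)/(n+k)}; min(R, 1-R) is the general trellis bound.

def exponent_punctured(R: float) -> float:
    """k(n-k)/(n+k) in normalized form: R(1-R)/(1+R)."""
    return R * (1 - R) / (1 + R)

def exponent_trellis(R: float) -> float:
    return min(R, 1 - R)

for R in (0.25, 0.5, 0.75):
    e, t = exponent_punctured(R), exponent_trellis(R)
    print(f"R={R}: punctured={e:.4f}, trellis={t:.4f}, ratio={t / e:.2f}")

# At R = 1/2 the ratio is 3: complexity q^(n/6) versus q^(n/2), i.e., the
# new bound grows as a cubic root of the general trellis complexity.
```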

This is summarized in the following theorem.

Theorem 2: For most long linear codes of rate $R$, the near-ML decoding algorithm $\Psi_T$ has decoding complexity of exponential order $q^{nE}$, where $E = R(1-R)/(1+R)$.

By taking $T$ slightly exceeding $q^k$, we can reduce the general complexity of ML decoding to the order of $q^{k(n-k)/(n+k)}$ while increasing the output error probability only by an arbitrarily small fraction. In this way, we arrive at the following corollary.

Corollary 3: Near-ML decoding can be executed for most long linear $(n,k)$-codes of rate $R$ with complexity order of $q^{k(n-k)/(n+k)}$.

In particular, for long codes of rate $1/2$, the new bound grows as a cubic root of trellis complexity. For all rates $R$, it surpasses the two upper bounds known for the complexity of near-ML decoding (see [6] and [7], respectively). For binary codes, it also falls below the lower bounds on trellis complexity from [13] and [16], provided that no long binary codes exceed the asymptotic Gilbert-Varshamov bound.

Central to our proof of Theorem 2 will be the following statement.

Theorem 4: Most long linear $(n,k)$-codes have at most $N_T$ codewords in each list $\Phi(y,T)$ of the $T$ most probable inputs, where $N_T$ satisfies inequality (2). This inequality holds simultaneously over all outputs $y$, with the exception of a vanishing fraction of codes as $n \to \infty$.

To prove Theorem 2, we consider a class of algorithms $\Psi_T(s)$ that use a parameter $s \in [k, n]$. In doing so, we relegate decoding on a block of length $n$ to a few punctured $(s,k)$-codes. Each punctured decoding will be similar to trellis design. In particular, we will seek the shortest subpaths on trellis diagrams of length $s$. Also, we label each trellis state by the syndrome associated with the arriving subpaths. However, for each $(s,k)$-code, we use one more parameter $T$ and choose only the $T$ best subpaths. Finally, we re-encode these subpaths to the full length $n$ and choose the shortest path.

In this regard, we compare the complexity of $\Psi_T(s)$ with conventional trellis design. For such a design, it is customary to consider the state complexity, that is, the binary logarithm of the maximum number of trellis states passed at any time. We will see that $\Psi_T(s)$ substantially reduces state complexity even on short blocks. To unveil the advantages and shortcomings of the algorithm, we consider two different implementations for a binary $(63, 30)$-code. It turns out that both implementations require only about three thousand states, in contrast to conventional trellis design whose complexity is upper-bounded by $2^{30}$ states. Similar reductions can also be obtained for other short codes. The results are summarized in Table I, while the details are relegated to Section VI and Appendix B.

TABLE I. STATE COMPLEXITY OF THE ALGORITHM $\Psi$ FOR BINARY CODES OF LENGTH 63

Here we list the parameters of Bose-Chaudhuri-Hocquenghem (BCH) codes of length 63. Given $(n, k)$, we optimize $s$ and $T$ and find the state complexity of the algorithm. We first compare this state complexity with the general upper bound $\min(k, n-k)$ valid for any $(n,k)$-code. Much better bounds were derived for BCH codes (these bounds and further references are given in [10], [13], and [14]); for these codes, we use the upper and lower bounds from [13]. We see that for the medium rates, the algorithm reduces the general state complexity by three to six decimal orders. We also reduce state complexity by two to three orders even if the trellis structure is optimized for specific BCH codes.

In our decoding design, we will combine two versions of near-ML decoding. These algorithms, which we call $\Psi_1$ and $\Psi_2$, were developed in [6] and [7], respectively. The first algorithm can be applied to most linear codes and all cyclic codes; the second algorithm works for all linear codes. Now we achieve the lower complexity order of Theorem 2 by combining $\Psi_1$ and $\Psi_2$. A similar technique was first employed in [8] for minimum-distance (MD) decoding, which finds the closest codeword in the Hamming metric. Even for MD decoding, the combined algorithm requires more sophisticated arguments than those used for its underpinnings $\Psi_1$ and $\Psi_2$. In particular, it uses the weight spectra of codes and their cosets, as opposed to $\Psi_1$ and $\Psi_2$, both of which do not rely on good code spectra. To develop general soft-decision decoding, we first unveil how the algorithms $\Psi_1$, $\Psi_2$, and their combination perform in the simplest case of MD decoding. This will allow us to set a detailed framework for the general design and illuminate the problems that need to be addressed.

II. BASIC TECHNIQUES FOR MD DECODING

A. MD Decoding

Let $B_r(y)$ be the ball of radius $r$ centered at $y$, and let $V_r$ denote its size (for brevity, the dependence on $n$ and $q$ is suppressed, since both are kept unchanged throughout the paper). Given $T$, the algorithm seeks the codewords among the $T$ vectors closest to the output $y$. These vectors belong to the Hamming ball of radius

$$r = \min\{\rho : V_\rho \ge T\}. \qquad (3)$$
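A short calculation makes the radius in (3) concrete. The sketch below computes the ball size $V_r = \sum_{i \le r} \binom{n}{i}(q-1)^i$ and the smallest radius whose ball holds $T$ vectors; the sample values of $n$ and $T$ are illustrative, not taken from the paper.

```python
from math import comb

def ball_size(n: int, r: int, q: int = 2) -> int:
    """Size V_r of a Hamming ball of radius r in the space of q-ary n-vectors."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(r + 1))

def decoding_radius(n: int, T: int, q: int = 2) -> int:
    """Smallest r with V_r >= T: the radius of the ball that contains
    the T vectors closest to the received word, as in (3)."""
    r = 0
    while ball_size(n, r, q) < T:
        r += 1
    return r

# Illustrative numbers: radius needed to cover T = 2**30 patterns on n = 63.
print(decoding_radius(63, 2 ** 30))
```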

Thus, the algorithm needs to correct at most $r$ errors. According to (1), such a decoding at most doubles the decoding error probability of MD decoding. This important fact, proved by Evseev, also led to a simple partition algorithm [9], whose modification is described below. Note also that we obtain near-MD performance if our decoding radius $r$ is increased by any integer growing with $n$. Thus, correcting slightly more than $r$ errors enables near-MD decoding for all long linear codes. Also, exact MD decoding can be achieved for most long linear codes by correcting a slightly larger number of errors [5].

To describe the algorithms $\Psi_1$ and $\Psi_2$, we use the following notation. Let $H$ be a parity-check matrix of a linear code $C$. Given any subset $I$ of positions, we also consider the subvector $y_I$ and the submatrix $H_I$. We say that $I$ is an information subset if any two different codewords disagree on this subset. Define a sliding cyclic window $I(i, s)$ as the subset of $s$ cyclically consecutive positions beginning with position $i$. Below, we execute decoding within the ball $B_r(y)$ of size $V_r$.

B. Splitting and Re-Encoding: Algorithm $\Psi_1$

Consider a cyclic code. Then each sliding window $I(i, k)$ forms an information subset. Since at most $r$ errors occurred on the length $n$, at least one window of length $k$ is corrupted by $\lfloor rk/n \rfloor$ or fewer errors. In the first step of our decoding, we try to find such a window by choosing every starting point $i$ in turn. Given an information subset $I$, in the second step we take each subvector $x_I$ within distance $\lfloor rk/n \rfloor$ of $y_I$. Then we re-encode $x_I$ as an information subblock. Every newly re-encoded codeword replaces the one chosen before if it is closer to $y$. Finally, the closest codeword is chosen after all trials. Obviously, the decoding is successful within the ball $B_r(y)$. It is readily verified from (3) that the number of error patterns tested has exponential order $q^{k\,h(r/n)}$, where $h$ is the $q$-ary entropy function. In a more general setting (see [9] and [6]), one can consider most long linear codes instead of the cyclic codes considered above. Also, to achieve exact MD decoding, we can still examine the same exponential order of error patterns. So MD decoding also has complexity bounded by this order.

C. Matching the Halves: Algorithm $\Psi_2$

For simplicity, let $n$, $k$, and $r$ be even numbers. In the first step, we take any starting position $i$ and split our block of length $n$ into two sliding halves $I_1$ and $I_2$. By shifting the starting point from $i$ to $i+1$, we change the number of errors on $I_1$ and $I_2$ at most by one. Also, $I_1$ and $I_2$ replace each other when $i$ is increased by $n/2$. Therefore, there exists a partition for which both halves $I_1$ and $I_2$ are corrupted by $r/2$ or fewer errors.

In the second step, we test all possible subvectors $x_1$ located within the ball $B_{r/2}(y_{I_1})$ and all subvectors $x_2$ that belong to the ball $B_{r/2}(y_{I_2})$. In doing so, we calculate the syndromes $h_1 = H_{I_1}x_1^T$ and $h_2 = -H_{I_2}x_2^T$. Then we form the records $(x_1, 0, h_1)$ and $(x_2, 1, h_2)$. Here the rightmost $n-k$ digits form the syndromes and are regarded as the most significant digits. The next symbol is called the subset indicator; it is used to separate the records of the first kind from those of the second. The leftmost digits represent the original subvectors.

In the third step, we consider the joint set of records and sort its elements as natural numbers. Then we run through the sorted set and examine each subset whose records have the same syndrome in the rightmost digits. We say that two records with the same syndrome form a matching pair if they disagree in the indicator symbol. Each matching pair gives a codeword, since $H_{I_1}x_1^T + H_{I_2}x_2^T = 0$. While examining the records with a given syndrome, we also choose the subvector $x_1$ closest to $y_{I_1}$ and the subvector $x_2$ closest to $y_{I_2}$. Obviously, this pair gives the closest codeword (if any exists) for a given syndrome. Finally, we choose the codeword closest to $y$ over all syndromes and all trials. As a result, we find the closest codeword in $B_r(y)$.

Remark: The above procedure is readily modified for odd $n$, $k$, or $r$ (see [7]). We can also include the distances $d(x_1, y_{I_1})$ and $d(x_2, y_{I_2})$ in our records and use them in the sorting procedures. For each syndrome, the first vectors $x_1$ and $x_2$ then give the closest pair.

We now turn to decoding complexity. The sets of subvectors $x_1$ and $x_2$ have sizes of exponential order given by the ball size on the half-length, according to (3). Forming the sets of records is done with the same complexity order. In the next step, we need to sort these records as natural numbers. It is well known [1] that a set of $N$ numbers can be sorted on a parallel circuit ("sorting network") of size $O(N \log N)$ and depth $O(\log N)$. The overall complexity obtained in all trials therefore has the same exponential order as the record sets themselves. In a more general setting, we consider the $T$ lightest error patterns and get a similar order.

D. Puncturing and Matching the Halves: Algorithm $\Psi_3$

Again, consider a cyclic code. To decode within a ball $B_r(y)$, we now use an integer parameter $s \in [k, n]$. We first wish to find a subblock of length $s$ corrupted by $\lfloor rs/n \rfloor$ or fewer errors. To find such a block, we run through all possible starting points in the first decoding step, similarly to the algorithm $\Psi_1$. For simplicity, consider the length $s$ and the error weight $rs/n$, both being even numbers. In the second step, we presumably have at most $rs/n$ errors on our subvector. Now we use a punctured code of length $s$ and try to match the halves similarly to the algorithm $\Psi_2$. As above, we use trials to split the subblock into two halves. (Each half can have one discontinuity; however, this fact is immaterial to our analysis.) To enable full decoding within the ball, we take the lightest error patterns from the balls taken on either half. One can readily verify that the number of required error patterns declines as $s$ gets smaller. Therefore, we can substantially reduce the former complexity by taking $s < n$. In our third step, we run through the sorted set of records and examine each subset with a common syndrome. Here, however, we substantially diverge from the former algorithm: namely, we take each matching pair.
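To fix ideas, here is a compact, self-contained sketch of the syndrome-matching step of $\Psi_2$ in the binary hard-decision case. The toy $(7,4)$ Hamming code, the function names, and the omission of the sliding starting points and closest-pair selection are all ours; this illustrates only the matching mechanism described above.

```python
from itertools import combinations

def syndrome(H, x):
    """Syndrome Hx^T over GF(2); H is a list of rows, x a tuple of bits."""
    return tuple(sum(h[i] & x[i] for i in range(len(x))) % 2 for h in H)

def half_patterns(y_half, offset, n, t):
    """All vectors within Hamming distance t of y_half, embedded into
    length n at the given offset (the other half is set to zero)."""
    m = len(y_half)
    out = []
    for w in range(t + 1):
        for pos in combinations(range(m), w):
            x = [0] * n
            for i in range(m):
                x[offset + i] = y_half[i]
            for p in pos:
                x[offset + p] ^= 1
            out.append(tuple(x))
    return out

def match_halves(H, y, r):
    """Return codewords within distance r of y found by syndrome matching."""
    n = len(y)
    left = half_patterns(y[: n // 2], 0, n, r // 2)
    right = half_patterns(y[n // 2 :], n // 2, n, r - r // 2)
    table = {}                      # records of the first kind, keyed by syndrome
    for x in left:
        table.setdefault(syndrome(H, x), []).append(x)
    found = []
    for x2 in right:
        # A matching pair has equal syndromes, since H(x1 + x2)^T = 0.
        for x1 in table.get(syndrome(H, x2), []):
            found.append(tuple(a ^ b for a, b in zip(x1, x2)))
    return found

# Toy example: the (7,4) Hamming code, decoding radius r = 2.
H = [(1,0,1,0,1,0,1), (0,1,1,0,0,1,1), (0,0,0,1,1,1,1)]
y = (1,1,0,1,0,0,1)
print(match_halves(H, y, 2)[:4])
```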

It will be proven later that, for most long linear codes, any subblock of length $s$ includes an information subset. In this case, each matching pair is unambiguously re-encoded to the full length $n$. Finally, we choose the codeword closest to $y$.

At this point, we note that re-encoding does not preserve distance properties. In particular, any vector that is relatively close to the punctured output can be placed sufficiently far from $y$ after re-encoding. It is for this reason that we take all matching pairs, as opposed to the algorithm $\Psi_2$, which was free from re-encoding. So, the new problem that we need to address in the decoding design is how many codewords are left for re-encoding in Step 3. Note that Step 2 gives all the codewords of the punctured code that fall within the ball taken on the length $s$. It can be proven that, for most linear codes, the number of these codewords is upper-bounded as in (4), regardless of the output $y$ and the interval $I$. Using (4), we can readily verify the resulting upper bound on the overall complexity. To minimize it, we optimize the parameters and find the optimum $s$. In this case, all three quantities (the sorting complexity, the re-encoding complexity, and the overall complexity) have the same order. Here we omit the proof of the bound (4). In Section VIII, a similar bound is derived in a more general setting, when a ball is replaced by an arbitrary list that includes the most probable vectors.

Remark: In essence, the bound (4) specifies the number of codewords of weight $rs/n$ or less in any coset of a punctured code. Thus, we need to upper-bound the weight spectra of cosets for all intervals and lengths. However, the weight spectra of cyclic codes are yet unknown (let alone the spectra of their cosets or punctured codes). It is for this reason that we cannot extend the algorithm $\Psi_3$ to all cyclic codes, as we did with the former algorithms $\Psi_1$ and $\Psi_2$.

E. Framework for General Design

In soft-decision decoding, we need to consider arbitrary lists of the $T$ most probable inputs. In this case, different positions of the received vector are not equally reliable. Therefore, the maximum Hamming weight taken in MD decoding cannot be used for choosing the most probable error patterns. To circumvent this problem, we rank all the input vectors according to their posterior probabilities. Then we convert probabilities into soft-decision weights. In essence, we enumerate vectors with respect to their weights and wish to consider the lightest vectors. We also rank subvectors according to their probabilities.

At this point, however, our algorithm will depart from its MD version, which seeks a light subvector corrupted by relatively few errors. In general (see Section IV for details), we cannot find a light subvector in our soft-decision setting even if we start with a light vector. However, it will turn out that for any input, at least one subvector can be fully decomposed into light subblocks. Some technical complications notwithstanding, we still consider the same number of vectors as we did in the hard-decision algorithm. Also, these vectors can be designed with the same complexity order. This will be done in Step 1 of our algorithm. In Step 2, we try to split any relatively light subblock into two parts. Though this subblock is yet unknown, we will see that it can be reconstructed from the error patterns taken on either part. In general, however, these parts can have unequal lengths. Thus, we will generalize the splitting technique used above in the algorithm $\Psi_2$. In the final part of the paper, we will derive an upper bound on decoding complexity.

The paper is organized as follows. In Sections III and IV, we consider enumeration properties of vectors and subvectors. We also design relatively light subblocks. This part closely follows [6] and constitutes Step 1 of the algorithm. In Section V, we present further properties that allow us to execute Step 2 of our algorithm. In Section VI, we present the general algorithm. In Sections IV-VI, we also proceed with two examples that give two different decoding algorithms for a binary $(63, 30)$-code. Finally, in Sections VII and VIII and Appendix A, we estimate the decoding complexity. This estimate is mostly based on combinatorial properties of our codes. The main results are already stated in Theorems 2 and 4. Here we need to simultaneously estimate the number of codewords in each list. To do this, we reduce the huge set of all possible lists to a comparatively small subset of superlists. (Even for $q = 2$, there exist about $2^{n^2}$ lists $\Phi(y, T)$. These lists are closely related to the threshold Boolean functions or, equivalently, to the hyperplanes that produce different cuttings of the unit cube in the Euclidean space $\mathbb{R}^n$. Surprisingly, the problem of counting these hyperplanes was first raised over 150 years ago, and was recently solved in [17].) This reduction is done in such a way that each list is contained in at least one superlist. On the other hand, each superlist has the same exponential size as the original lists. This combinatorial construction is given in Appendix A. In Appendix B, we consider the algorithm for codes of length 63.

III. ENUMERATION ON VECTORS AND SUBVECTORS

A. Soft-Decision Weights

Consider a general memoryless channel with transition density $f(y \mid x)$. Given an output $y$, we consider the matrix $W$ with entries

$$w_i(a) = -\log f(y_i \mid a), \qquad (5)$$

defined over all input symbols $a$ and all positions $i = 1, \dots, n$.
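A brief numeric sketch of the weight matrix, assuming (as in our rendering of (5)) that the weights are negative log-likelihoods; any per-position shift changes all vector weights equally and keeps every ordering, as noted in Appendix A. The binary-input AWGN channel with BPSK and all parameters below are illustrative.

```python
from itertools import product

# Weight matrix for a binary-input AWGN channel with BPSK (0 -> +1, 1 -> -1).
# W[i][a] = -log f(y_i | a), dropping the Gaussian normalization constant,
# which is a common per-position shift and does not change any ordering.

def weight_matrix(y, sigma):
    W = []
    for yi in y:
        row = []
        for a in (0, 1):
            s = 1.0 if a == 0 else -1.0
            row.append((yi - s) ** 2 / (2 * sigma ** 2))
        W.append(row)
    return W

def weight(W, x):
    """Soft-decision weight (6): the sum of per-position entries."""
    return sum(W[i][xi] for i, xi in enumerate(x))

y = [0.9, -0.2, 1.4, -1.1]
W = weight_matrix(y, sigma=0.8)
# Rank all 2^4 inputs by weight: the lightest vectors are the most probable.
ranked = sorted(product((0, 1), repeat=4), key=lambda x: weight(W, x))
print(ranked[:3])  # the three most probable inputs
```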

Given a vector $x$, define its soft-decision weight

$$w(x) = \sum_{i=1}^{n} w_i(x_i). \qquad (6)$$

We then enumerate all vectors taking their weights in ascending order. Given two vectors with the same weight, we use lexicographic ordering, by considering vectors as $n$-digital $q$-ary numbers. In the sequel, the list $\Phi(y, T)$ of the $T$ most probable vectors will be regarded as the list of the $T$ lightest inputs. Below we show that the complexity order of designing the list depends on its size $T$ rather than on a specific matrix $W$; more precisely, this complexity has an almost linear order. We also generalize this design for subvectors of any length. Our final goal is to design a few punctured lists whose size is exponentially smaller than the size of the original list. Given any $y$, we use the brief notation $\Phi$ for $\Phi(y, T)$.

B. Enumeration and Its Complexity

Given any vector $x$, let $N(x)$ denote its number (rank) in the ordered set. (7)

Similarly, we take any cyclic window $I$ and order all subvectors $x_I$. Here we also use lexicographic ordering for any two subvectors with the same weights. Then any subvector $x_I$ has a unique number $N(x_I)$ in this ordering. (8)

Consider also a partition of our subset $I$ into two disjoint subsets $I_1$ and $I_2$. Similarly to (8), we consider the ranks $N(x_{I_1})$ and $N(x_{I_2})$ of the subvectors $x_{I_1}$ and $x_{I_2}$ taken on the subsets $I_1$ and $I_2$.

Lemma 5: Any vector $x_I$ satisfies the inequality

$$N(x_I) \ge N(x_{I_1})\,N(x_{I_2}). \qquad (9)$$

Proof: Given $x_I$, consider the set $D$ of vectors $x'_I$ such that $N(x'_{I_1}) \le N(x_{I_1})$ and $N(x'_{I_2}) \le N(x_{I_2})$. First, the set $D$ has size $N(x_{I_1})N(x_{I_2})$. Second, $x_I$ is the last vector of $D$ in ordering (8), since any other vector of $D$ has either a smaller weight, or the same weight and a smaller rank, due to the lexicographic ordering of subvectors with equal weights. Therefore, ordering (8) includes at least $N(x_{I_1})N(x_{I_2})$ vectors placed no later than $x_I$, and (9) follows.

Our next step is to estimate the complexity of constructing the list of the $T$ lightest inputs on a subset $I$ of size $s$. We show that this enumeration can be done with almost linear complexity in $T$. Below we use the tree-like diagrams from [6] that are similar to conventional trellises. For simplicity, we take $I = \{1, \dots, s\}$. First, in any step $i \le \log_q T$, we recursively build the full list $\Phi_i$ of all $q^i$ possible paths of length $i$, and we calculate the current weight of each path. Then, in any further step $i$, we leave only the $T$ shortest paths in the list $\Phi_i$. Given $\Phi_{i-1}$, we proceed with position $i$ by appending every symbol to every path, and then sort the resulting $qT$ paths to obtain the list $\Phi_i$ of the $T$ shortest ones. Note that here we can exclude any path eliminated prior to step $i$. To prove this, consider the $T$ paths obtained by adding the lightest suffix to the paths of the list $\Phi_i$: each of these paths is necessarily shorter than any path obtained by extending an eliminated path. Summarizing, the procedure uses $s$ steps; each step sorts out at most $qT$ entries and has complexity $O(qT\log(qT))$. Therefore, the following estimate holds.

Lemma 6 ([6]): Any list of the $T$ lightest subvectors on $s$ positions can be constructed with complexity $O(qsT\log(qT))$.

Remark: Bound (9) also shows that fewer paths need to be sorted in any step than the $qT$ paths sorted above. Indeed, consider a symbol $a$ whose weight has rank $j$ among all $q$ possible symbols in position $i$. To obtain the $T$ shortest paths in step $i$, we can append $a$ only to the first $\lfloor T/j \rfloor$ paths; otherwise, the extended path is excluded from the list $\Phi_i$ due to its higher rank.

C. Good Subvectors

Any implementation of the algorithm $\Psi_T$ allows us to consider only the $T$ lightest inputs. The list of these inputs can be constructed with almost linear complexity according to Lemma 6. However, the condition $T \ge q^k$ (needed for near-ML performance) makes this complexity excessively high. Therefore, we wish to reduce this complexity by re-encoding relatively short subblocks. Correspondingly, given vectors $x$ with ranks $N(x) \le T$, we wish to bound the ranks $N(x_I)$ of their subvectors. Our next step is to specify such a relation between the numbers $N(x)$ and $N(x_I)$.

Definition 7: Given a vector $x$ with $N(x) \le T$, we say that a subblock $I$ of length $s$ and the subvector $x_I$ are good if $x_I$ has rank $N(x_I) \le T^{s/n}$.

Note that any vector $x$ with $N(x) \le T$ is good by definition. Also, recall that when $r$ errors occur in hard-decision decoding, we consider the $V_r$ lightest error patterns. By taking $rs/n$ errors on the length $s$, we consider about $V_r^{\,s/n}$ lightest subvectors. Therefore, our good subblocks generalize the lightest error patterns for an arbitrary soft-decision setting. The main issue, however, is whether good subblocks exist for all $x$ and $s$. The following lemma and corollary show that this takes place whenever $s$ divides $n$. In the case when $s$ does not divide $n$, the answer is slightly different.
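The recursive construction behind Lemma 6 is essentially a beam search over the $s$ positions. The sketch below is a minimal Python rendering of it; the weight matrix and the parameters are illustrative, and ties are broken lexicographically exactly as in the ordering above (tuples compare weight first, then path).

```python
# Build the list of the T lightest q-ary vectors under a weight matrix W,
# as in Lemma 6: extend every retained path by each symbol and keep only
# the T lightest paths after each position.

def lightest_subvectors(W, T, q=2):
    paths = [(0.0, ())]                       # (weight, path) pairs
    for row in W:                             # one step per position
        extended = [(w + row[a], p + (a,)) for (w, p) in paths
                    for a in range(q)]
        extended.sort()                       # at most q*T entries per step
        paths = extended[:T]
    return [p for (_, p) in paths]

# Toy run: 4 positions, binary alphabet, T = 5.
W = [[0.1, 0.9], [0.4, 0.5], [0.0, 1.2], [0.3, 0.2]]
for x in lightest_subvectors(W, 5):
    print(x)
```

By Lemma 5, the rank of any prefix never exceeds the rank of the full vector, which is exactly why dropping all but the $T$ lightest prefixes at each step loses none of the $T$ lightest full vectors.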

To consider both cases, let a subset $I$ be decomposed into disjoint consecutive subsets $I_1, \dots, I_m$ of lengths $s_1, \dots, s_m$. Below we use the notation $D$ for a specific decomposition used on the interval $I$.

Lemma 8: Any good subvector $x_I$ includes at least one good subvector $x_{I_j}$ on any decomposition $D$.
Proof: Assume that all subblocks $x_{I_j}$ are not good, so that $N(x_{I_j}) > T^{s_j/n}$ for all $j$. Then, by Lemma 5, $N(x_I) > T^{(s_1 + \cdots + s_m)/n} = T^{s/n}$. This contradicts our assumption that $x_I$ is a good subvector.

Now assume that $s$ divides $n$, and decompose the whole set of $n$ positions into $n/s$ consecutive subsets of length $s$. Then Lemma 8 leads to the following corollary.

Corollary 9: If $s$ divides $n$, then any vector $x$ with $N(x) \le T$ contains at least one good subvector of length $s$.

IV. RELATIVELY LIGHT SUBVECTORS

To consider arbitrary $s$ and $n$, we generalize Definition 7 in the following way.

Definition 10: We say that the subvector $x_I$ is relatively light on a decomposition $D$ if all the subvectors $x_{I_j}$ are good. (10)

The main result of this subsection is summarized below in Lemma 11: for any vector $x$ and any $s$, there exists an interval $I$ of length $s$ and its decomposition $D$ that form a relatively light subvector. Given $n$ and $s$, we use representation (11) to get an irreducible fraction $n'/s'$, where $n' = n/d$, $s' = s/d$, and $d = \gcd(n, s)$. These parameters will be of major importance in our design.

Lemma 11 [6]: For any $n$ and $s$, there exists a set $\mathcal{D}$ of decompositions, of size bounded by a polynomial in $n$, such that any vector $x$ with $N(x) \le T$ has a relatively light subvector on some decomposition $D \in \mathcal{D}$.

Lemma 11 is discussed in detail in [6] and [4], where the set $\mathcal{D}$ is written out explicitly. Here we omit the lengthy proof. Instead, we consider two examples that illuminate the general outline. We also use these examples in the sequel to design two different implementations for a binary $(63, 30)$-code. Let a decomposition of the subblock start in position $i$ with $m_1$ consecutive subblocks of length $\ell_1$, followed by $m_2$ subblocks of length $\ell_2$, and so on, until $m_t$ subblocks of length $\ell_t$ complete the decomposition.

Example 1: Let $n = 63$ and $s = 42$. Consider three decompositions $D_1$, $D_2$, and $D_3$, all of length 42. Here $D_1$ and $D_2$ each include a single block, while $D_3$ includes two blocks of length 21. We prove that these three suffice to find a relatively light subblock for any vector. Indeed, if $D_1$ is a good subblock, then we are done. Otherwise, $D_1$ is bad and the remaining subinterval of length 21 is good, according to Lemma 8. We proceed and suppose that $D_2$ is also bad. Then its complementary subinterval of length 21 is good, and we obtain two good subblocks of length 21 that form the third decomposition $D_3$. Note that in the hard-decision case we also need to consider only three sliding windows.

Example 2: Let $n = 63$ and let $s$ be such that $\gcd(n, s) = 1$. We use the Euclidean algorithm to find the decompositions; in other words, we take the continued fraction of $n/s$. (12) First, consider the decompositions built from the largest subblock length. If at least one such subblock is good, we are done. Otherwise, all the complementary subblocks of the next length are good, according to Lemma 8; then we need to add only one good subblock of the first length to obtain at least one relatively light subblock. If all those subblocks are bad, we proceed to the next length of the chain, and so on. Having bad subblocks on every length, we finally conclude that all subblocks of length 1 are good. The latter means that our vector has the lightest symbols in all 63 positions; then all the subblocks are good. This contradiction implies that at least one subblock is good on some decomposition.

Outline of the General Design: For arbitrary $n$ and $s$, we also apply the Euclidean algorithm starting with the entries $n$ and $s$. The subblock lengths then run through the successive remainders, ending with $d = \gcd(n, s)$; it is also known [12] that the number of these remainders is at most logarithmic in $n$. In the general design described in [6], all decompositions form a few different types and can start only in positions that are multiples of $d$. It is for this reason that the overall number of different decompositions is bounded by a polynomial in $n$. Another important observation is that the subblock lengths are multiples of $d$; therefore, the number of subintervals in each decomposition cannot exceed $s/d$.

To proceed with the list of the $T$ lightest vectors, we wish to reconstruct their relatively light subblocks on the length $s$. In the case when $s$ divides $n$, we construct good subblocks with almost linear complexity of order $T^{s/n}$. Now we show that relatively light subblocks can be recovered with the same complexity. Given $n$ and $s$, we first find the set of decompositions satisfying Lemma 11.
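The chain of subblock lengths in Example 2 and in the outline above comes directly from the Euclidean algorithm. A small sketch follows; the value $s = 40$ is chosen purely for illustration, since the text leaves the exact value to the examples.

```python
# The subblock lengths available on the interval are the successive
# remainders of gcd(n, s); the partial quotients give the continued
# fraction of n/s, as in (12).

def remainder_chain(n: int, s: int):
    """Successive Euclidean remainders of (n, s) and their gcd."""
    chain, a, b = [s], n, s
    while b:
        a, b = b, a % b
        if b:
            chain.append(b)
    return chain, a                    # a = gcd(n, s)

def continued_fraction(n: int, s: int):
    """Partial quotients of n/s."""
    cf, a, b = [], n, s
    while b:
        cf.append(a // b)
        a, b = b, a % b
    return cf

print(remainder_chain(63, 40))         # subblock lengths, down to gcd = 1
print(continued_fraction(63, 40))      # quotients of the continued fraction
```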

For each decomposition $D$, we need to consider only good subvectors. Therefore, we take the $T^{s_j/n}$ lightest vectors on each subinterval $I_j$. According to Lemma 6, these vectors can be designed with almost linear complexity. Then we form the product list

$$\Phi_D = \Phi_1 \times \cdots \times \Phi_m. \qquad (13)$$

Note that our list $\Phi_D$ has size $T^{s/n}$, since $s_1 + \cdots + s_m = s$. Now Lemmas 6 and 11 give the following statement.

Lemma 12: For any $n$ and $s$, there exist polynomially many lists $\Phi_D$ such that: each sublist $\Phi_j$ includes at most $T^{s_j/n}$ vectors; and any $x$ with $N(x) \le T$ has at least one subvector $x_I$ on some $\Phi_D$.

Remark: Each sublist includes the shortest subpaths constructed on cyclically consecutive positions. At this point, our trellis-like design is similar to tail-biting trellises. In the general soft-decision setting, we replace good subblocks by their relatively light concatenations. Therefore, the product list can include some longer paths whose ranks exceed $T^{s/n}$. Such a disparity with the hard-decision case is due to inequality (9), which replaces the Hamming weights, the latter being strictly additive. However, it is yet an open problem whether good subblocks exist for all $n$, $s$, and $x$.

V. TWO EVENLY CORRUPTED SUBBLOCKS

For any vector $x$ with $N(x) \le T$, we now consider the set of lists $\Phi_D$ defined in (13). Accordingly, at least one $\Phi_D$ includes a relatively light subvector $x_I$ of length $s$. In this section, we wish to split such an $x_I$ into two evenly corrupted parts. More precisely, these parts will be taken from sublists $\Phi'$ and $\Phi''$ of size about $T^{s/2n}$ each. Again, we depart from the hard-decision algorithm and use subintervals $I'$ and $I''$ that can have unequal lengths. We first consider a decomposition of one interval, in which $I'$ and $I''$ are separated by some position $p$.

Lemma 13: For any subvector $x_I$ and any target size $\tau$, at least one separation point gives two parts that satisfy the two conditions

$$N(x_{I'}) \le q\tau, \qquad N(x_{I''}) \le N(x_I)/\tau. \qquad (14)$$

Proof: Consider the subinterval of $I$ with the same starting point and varying length, and let $N(\ell)$ denote the rank of the corresponding prefix of $x_I$. By Lemma 5, adding the next position does not decrease the rank; also, one extra position increases the rank at most $q$ times. The latter means that $N(\ell)$ is a stepwise nondecreasing function of the length. We now choose $p$ as the rightmost position at which $N(p) \le \tau$. Then the prefix $x_{I'}$ taken up to the next position still satisfies $N(x_{I'}) \le qN(p) \le q\tau$, while the remaining part satisfies $N(x_{I''}) \le N(x_I)/\tau$, according to (9). Therefore, both inequalities (14) hold.

Now we take a relatively light subvector from the list (13), which is split by a decomposition $D$. Our further splitting is done in two steps. First, we find the rightmost subblock of $D$ that begins in the left half of $I$. Then we split $D$ into two smaller decompositions. (15) If the separation point falls on a boundary between subblocks, then our splitting is completed: the subvectors $x_{I'}$ and $x_{I''}$ belong to sublists $\Phi'$ and $\Phi''$ that include at most $qT^{s/2n}$ subvectors each. Otherwise, our two decompositions (15) have different lengths, and we consider the new parameter defined in (16). Next, we take any splitting with the separation point $p$ running through the corresponding interval. According to Lemma 13, at least one splitting satisfies conditions (14) for the given vector. For any $p$, we relocate the subinterval between the boundary and $p$ from the left part to the right part. As a result, we redefine our decompositions as in (17) and (18). We can also include our specific case (15) in the general design (17), (18). As a result, we consider the coupled decomposition $(D', D'')$.

First, we note that any coupled decomposition is fully defined by the original decomposition $D$ and the separating point $p$. Therefore, since $p$ takes on at most $s$ different positions, the number of coupled decompositions is at most $s$ times the former number of decompositions; for simplicity, below we use this upper bound. Next, we see that there exists a coupled decomposition such that the subvector $x_{I'}$ belongs to the sublist $\Phi'$; here we take all values of the symbol in the separating position. Therefore, $\Phi'$ has the size given in (19). Also, the subvector $x_{I''}$ belongs to the sublist $\Phi''$ given in (20), and this list also has size $qT^{s/2n}$ or less. In turn, this observation shows that both lists are designed with the same complexity order that we obtained in the hard-decision case. For odd parameters, it can be shown that the lists have size bounded by the same order. Thus, we arrive at the following lemma.

Lemma 14: For any $n$ and $s$, there exist at most polynomially many coupled decompositions such that, for some coupled decomposition, both subvectors $x_{I'}$ and $x_{I''}$ belong to the sublists $\Phi'$ and $\Phi''$ of size $qT^{s/2n}$ or less.
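The proof of Lemma 13 turns on the monotone prefix rank $N(\ell)$. Below is a toy sketch of the resulting split-point search; the rank profile is synthetic, and the factor-$q$ step bound is assumed exactly as stated in the text.

```python
# Find the rightmost separation point whose prefix rank stays within the
# target tau = T**(s/(2*n)); the right part is then bounded via (9).

def split_point(prefix_rank, s: int, tau: float):
    """Rightmost p in 1..s-1 with prefix_rank(p) <= tau."""
    p = 0
    while p + 1 < s and prefix_rank(p + 1) <= tau:
        p += 1
    return p

# Synthetic nondecreasing rank profile (each step grows at most q = 2 times).
ranks = [1, 1, 2, 4, 4, 8, 16, 32, 64, 128]
p = split_point(lambda i: ranks[i - 1], s=len(ranks), tau=10)
print(p, ranks[p - 1])   # left part keeps rank <= tau
```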

Remark: Lemma 13 is yet another deviation from the hard-decision case. First, the subvectors $x_{I'}$ and $x_{I''}$ taken in (14) are not necessarily good; as a result, they are not necessarily light. However, each subinterval in (17) and (18) includes only the lightest subvectors. Also, the product lists $\Phi'$ and $\Phi''$ still have the size used in hard-decision decoding. It is for these two reasons that we can apply Lemma 6 and keep the former complexity order.

VI. GENERAL ALGORITHM

To present the general algorithm, we need the following well-known lemma (e.g., see [6]).

Lemma 15: All sliding intervals of length $s$ form information subsets for most linear codes if $s - k - \log_q n \to \infty$.
Proof: Let $G$ be a generator matrix of the code $C$. An interval $I$ is not an information subset if the submatrix $G_I$ has rank less than $k$. We count the fraction of such (deficient) submatrices and show that it is at most of order $q^{k-s}$ for any $I$. So, the fraction of matrices with at least one deficient submatrix is bounded from above by $nq^{k-s}$, and this bound vanishes for any $s$ used in the statement. Thus, our lemma holds for most linear codes.

General Algorithm $\Psi_T(s)$

Preliminary Step: Given $n$ and $s$, we choose the parameter $T$ and take a linear code that satisfies Lemma 15; in particular, one can choose any cyclic code. Given $n$ and $s$, we find the decompositions of length $s$ that satisfy Lemma 12. For each decomposition $D$, we take the rightmost interval that begins on the left half of $I$ and find the corresponding parameter from (16). We take every separation point in turn and further split $D$ into two parts. Now we have the coupled decompositions defined in (17) and (18). Also, the punctured codes have dimension $k$ on any interval $I$.

Our decoding begins when we receive an output $y$. Then we define the weight of any input symbol in each position and form the corresponding matrix $W$. Decoding includes three steps.

Step 1: Splitting. We choose any coupled decomposition found in the preliminary step.

Step 2: Designing Plausible Candidates. Given a decomposition, we construct the product lists $\Phi'$ and $\Phi''$ defined in (19) and (20). Recall that only the lightest subvectors are taken on each subinterval. From this point, our design follows its hard-decision counterpart presented in Section II. Namely, we calculate the syndromes for all subvectors of both lists and form the corresponding records.

Step 3: Matching and Re-Encoding. We form the joint set of records and sort its elements. Then we run through the sorted set and find all matching pairs that agree in the rightmost (syndrome) digits and disagree in the indicator symbol. Each matching pair yields a codeword of the punctured code, which is re-encoded onto the codeword of length $n$; then the weight of this codeword is found. Finally, we choose the lightest codeword after all decompositions are considered.

We now apply the algorithm to a binary $(63, 30)$-code. We take $T = 2^{30}$ and obtain a decoding error probability that at most doubles the error probability of ML decoding, according to (1). We complete Examples 1 and 2 by taking the corresponding values of $s$. According to Lemma 14, we use at most a few thousand subvectors on either part in each example. This compares favorably with all other near-ML algorithms known to date. In particular, trellis complexity is upper-bounded by $2^{30}$ states, while the near-ML decoding from [6] requires substantially more states than our design.

Example 1 (Continued): We consider the three decompositions $D_1$, $D_2$, and $D_3$ of length 42. First suppose that both intervals $D_1$ and $D_2$ are bad, in which case $D_3$ consists of good halves of length 21. On either half, we design the lightest vectors of length 21 using Lemma 6. Then we calculate their syndromes and sort the records. Each matching codeword of length 42 is re-encoded to the full length, and the closest codeword is chosen. In the case of decompositions $D_1$ (or $D_2$), our design is similar. According to Lemma 13, both parts should include about $2^{10}$ vectors. This allows us to use 19 different decompositions, taking any admissible separation point: indeed, we cannot construct this many vectors on a part that is too short, which restricts the separation point on either side. Then we follow the previous design performed on $D_3$.

Example 2 (Continued): We use the decompositions found by the Euclidean algorithm. Each decomposition is split into two parts, with about the same number of subvectors to be considered on either part. When using the first decomposition, we decompose a single block into two parts with a sliding separation point and then execute the matching and re-encoding procedures. For the remaining decompositions, we see that the first subblock includes the left half of the entire interval. This subblock is split into two parts, with the separation point taking the 12 rightmost positions on it. On the right part, we take the product of multiple lists (20). Then we match the syndromes obtained on the left interval with the syndromes found on the remaining positions. Finally, we proceed with re-encoding.

Examples 1 and 2 (Completed): In both examples, we can consecutively store and reuse the same trellis diagrams for different decompositions with a sliding separation point. In particular, every time the separation point moves one step, we can proceed forward while constructing the list $\Phi'$; similarly, we proceed backward while constructing $\Phi''$. This allows us to use the already presorted lists. Therefore, we concurrently form two trellis diagrams obtained on the length $s$ by moving in two opposite directions.
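Step 3's re-encoding is ordinary linear algebra over GF(2): if the interval $I$ contains an information subset, the candidate subvector determines the message, and hence the full codeword, uniquely. A self-contained sketch follows; the toy generator matrix and all function names are illustrative.

```python
# Lift a punctured word x_I on positions I to the unique full-length
# codeword: solve u * G_I = x_I over GF(2), then output u * G.

def solve_gf2(A, b):
    """Solve u*A = b over GF(2); A has k rows of length m, b has length m."""
    k, m = len(A), len(b)
    M = [row[:] + [0] * k for row in A]        # augment with identity tracker
    for i in range(k):
        M[i][m + i] = 1
    rank, piv = 0, []
    for col in range(m):
        for r in range(rank, k):
            if M[r][col]:
                M[rank], M[r] = M[r], M[rank]
                for rr in range(k):            # full reduction to RREF
                    if rr != rank and M[rr][col]:
                        M[rr] = [a ^ c for a, c in zip(M[rr], M[rank])]
                piv.append(col)
                rank += 1
                break
    u = [0] * k
    for r, col in enumerate(piv):              # combine pivot-row trackers
        if b[col]:
            for j in range(k):
                u[j] ^= M[r][m + j]
    return u

def reencode(G, I, x_I):
    G_I = [[row[i] for i in I] for row in G]
    u = solve_gf2(G_I, x_I)
    n = len(G[0])
    return [sum(u[j] & G[j][i] for j in range(len(G))) % 2 for i in range(n)]

# Toy (5,2) code; the interval I = [0, 3] is an information subset.
G = [[1,0,1,1,0], [0,1,0,1,1]]
print(reencode(G, [0, 3], [1, 0]))
```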

Fig. 1. Output bit error rate for the (63, 30) BCH code with decoding list of size T.

Our overhead includes the sorting procedures performed on each step to reduce by half the number of shortest paths presorted in the previous step. The critical advantage is that we use a few thousand paths on each step instead of up to $2^{30}$ trellis paths used in conventional design.

Note that in the asymptotic setting, the number of coupled decompositions is immaterial. However, the above examples show that this number can be significant on short lengths. In particular, our design on the length 63 uses only three decompositions instead of the 63 decompositions employed for the design of [6]. On the other hand, we increase our lists only a few times. Therefore, Example 1 is a better choice. Similar improvements can also be obtained for other codes of length 63; these are discussed in Appendix B. Example 1 becomes even more favorable when compared with the near-ML decoding of [6]: the latter requires that the lightest vectors be constructed on each of 63 different decompositions. It is also important that the practical design can be simplified further. For example, computer simulation has shown that the decoding error probability is left almost unchanged if we use only a fraction of the subvectors employed above. These results are shown in Fig. 1.

VII. DECODING COMPLEXITY

A. Preliminary Remarks

To proceed with the asymptotic setting, we recall that our algorithm gives near-ML decoding if it recovers any codeword of the list $\Phi(y, T)$. The following lemma shows that $\Psi_T(s)$ does so.

Lemma 16: Algorithm $\Psi_T(s)$ recovers each codeword from the list $\Phi(y,T)$ of the $T$ most probable input vectors.
Proof: Consider any codeword $c \in \Phi(y,T)$. According to Lemma 14, at least one coupled decomposition places both subvectors $c_{I'}$ and $c_{I''}$ into the sublists $\Phi'$ and $\Phi''$. In turn, $c_{I'}$ and $c_{I''}$ are matched in Step 3, since their combination belongs to the punctured code. Finally, the vector $c$ is obtained by re-encoding.

To proceed with decoding complexity, we need to upper-bound the number of codewords obtained in Step 3. These codewords belong to the combined list obtained by linking the two parts $\Phi'$ and $\Phi''$. Recall that we split one interval into subintervals and insert them into our original decomposition. Then the combined list can be presented in the product form (21). It is important that all the sublist sizes are fully defined by our original decomposition. Also, we will use the fact that the combined list has size of exponential order $T^{s/n}$. (22)

Now we use the restriction (11) and consider the asymptotic setting in which $n \to \infty$ while the ratios $k/n$ and $s/n$ are kept fixed. Then we consider all generator matrices of size $k \times n$, and the ensemble of codes is defined by the uniform distribution taken on these matrices. Finally, we define the maximum number $N_{\max}$ of codewords obtained in Step 3. Recall that every list depends on the output $y$ (or the corresponding matrix $W$); therefore, the above maximum is taken over all decompositions of length $s$ and all matrices $W$. Here we consider $N_{\max}$ as a function of $s$ that will be optimized later. Also, we use the fact that the lists have size of order $T^{s/n}$ or less.

B. General Estimate

Given parameters $s$ and $T$, we first give a general estimate that depends on $N_{\max}$.

Lemma 17: Algorithm $\Psi_T(s)$ has complexity of order $C_1(C_2 + C_3)$, where $C_1$ is the number of decompositions used in the first step, $C_2$ is the full complexity of the second step, and $C_3$ is the re-encoding complexity of the third step.
Proof: According to Lemma 14, the number of coupled decompositions is at most polynomial in $n$; since the ratio $s/n$ is fixed, $C_1$ does not affect the exponential order. In Step 2, we form the sublists and link them into the two lists (19) and (20) of size $qT^{s/2n}$ or less. Each sublist includes only the shortest vectors and can be designed with almost linear complexity, according to Lemma 6. Therefore, the overall complexity taken over all sublists is upper-bounded by the order of $T^{s/2n}$, and the same order suffices to calculate all syndromes. Thus, Step 2 has complexity $C_2$ of exponential order $T^{s/2n}$. In Step 3, we sort out all the records of the combined list. Following Section II, we use a parallel circuit of almost linear size and logarithmic depth, which keeps the same order. Then we run through the presorted set and find at most $N_{\max}$ codewords of the punctured code; their re-encoding gives $C_3$ of order $N_{\max}$ up to polynomial factors. We also take into account that polynomially many trials are executed in Steps 2 and 3.

Proof of Theorem 2: Below, in Lemma 22, we will prove that the upper bound (23) on $N_{\max}$ holds for most long linear codes, with the exception of an immaterial fraction vanishing as $n \to \infty$. Substituting this bound into Lemma 17, we readily obtain the complexity of Theorem 2: the minimum is achieved when the sorting and re-encoding complexities have the same exponential order, which yields the optimum length $s$. This proves Theorem 2.

To obtain Corollary 3, we choose a sequence $T/q^k$ that slowly tends to infinity. In this case, the complexity order of Theorem 2 becomes $q^{k(n-k)/(n+k)}$, while the error probability approaches $P_{ML}$ by (1). We conclude that the same upper bound can be used for the number of the lightest subvectors taken in Step 2 and the number of codewords re-encoded in Step 3. Also, the overall fraction of bad codes is defined only by Lemma 15, due to the immaterial fraction eliminated in Lemma 22. For the optimum $s$, it can be readily verified that this fraction has an exponentially small order; for larger $s$ used in Theorem 2, the fraction of bad codes declines even faster.

Now we need to prove bound (23) to complete our analysis. This is done in the remaining part of the paper.

VIII. BOUNDS ON THE NUMBER OF CODEWORDS IN THE LISTS

A. Fixed Output

To find $N_{\max}$, we first fix an output $y$ and the corresponding matrix $W$. Given $W$, we consider the maximum number of codewords taken over all decompositions and all codes. We then show that the upper bound fails to hold only for a small fraction of all codes, and that this fraction declines rapidly. We then wish to use the union bound while considering all real matrices $W$ at the same time. To do this, we will replace the full set of matrices by a comparatively small discrete subset. For each matrix in this subset, we will also design a superlist that yet has the same exponential size as any original list. We then prove that each list is fully contained in at least one of the superlists. This will allow us to use the union bound taken over all superlists. As a result, the bound (23) fails to hold only for a vanishing fraction of all codes. From now on, we use the parameters defined in (24) and (25).
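The counting arguments of this section rest on a standard random-coding fact: a fixed vector belongs to a random $(n,k)$-code with probability about $q^{k-n}$, so a list of $T$ inputs contains about $Tq^{k-n}$ codewords on average. A quick Monte Carlo check for $q = 2$; the parameters are illustrative.

```python
import random

# Empirical check: a random binary vector lies in the row space of a random
# k x n generator matrix with probability close to 2**(k - n).

def xor_basis(rows):
    """GF(2) basis of bitmask vectors, keyed by the highest set bit."""
    basis = {}
    for r in rows:
        r = reduce_vec(basis, r)
        if r:
            basis[r.bit_length() - 1] = r
    return basis

def reduce_vec(basis, v):
    """Reduce v by the basis; the residue is 0 iff v is in the span."""
    while v:
        t = v.bit_length() - 1
        if t not in basis:
            return v
        v ^= basis[t]
    return 0

n, k, trials = 12, 6, 5000
hits = 0
for _ in range(trials):
    G = [random.getrandbits(n) for _ in range(k)]   # random generator rows
    hits += reduce_vec(xor_basis(G), random.getrandbits(n)) == 0
print(hits / trials, 2 ** (k - n))   # empirical vs. predicted frequency
```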

Lemma 18: Most long linear $(n,k)$-codes, with the exception of the fraction (24) of them, have at most the number (25) of codewords in any list of the $T$ lightest inputs.
Proof: The proof is similar to that in [3, Proposition 2]. Consider all combinations of linearly independent vectors chosen from the list. The number of these combinations is bounded as in (26). Since $T$ grows exponentially in $n$, we verify that the lower bound in (26) has the same order as the upper bound. We then find the number of $(n,k)$-codes that include any specific combination. In the corresponding asymptotic equality, we also use the fact that the ratio $k/n$ is kept fixed. Therefore, any given combination belongs to the same small fraction of codes. Now we find the expected number of combinations taken over all codes. Given this expectation, we say that a code is bad if it includes at least a fixed multiple of the expected number of combinations. By the Chebyshev inequality, the fraction of these codes is small. Now we see that any remaining code includes fewer than the number (25) of vectors from the list: indeed, otherwise the number of possible combinations taken from these codewords exceeds the above threshold, by arguments similar to (26). This shows that any code including more vectors is bad. Finally, note that the threshold gives the right-hand side of inequality (25) for the given parameters from (24).

This lemma is immediately generalized for all decompositions and punctured codes.

Lemma 19: Most long linear $(n,k)$-codes satisfy inequality (27), with the exception of a small fraction of codes.
Proof: Given a coupled decomposition and an output, we specify the combined list obtained in (21) and (22); this list includes vectors of order $T^{s/n}$ or fewer. Consider the subset of codes that give a punctured subcode of some dimension on the interval $I$; note that in the asymptotic setting, this dimension equals $k$ for most codes. Given a list, we then apply the same arguments that were used in Lemma 18, replacing the length $n$ by $s$ and the dimension accordingly. Then we find the fraction of bad punctured subcodes, which do not satisfy inequality (27), and see that this fraction vanishes for every admissible dimension. Second, given the interval and the dimension, we note that each punctured subcode is obtained from the same number of the original codes. Therefore, the fraction of bad codes that fail to satisfy (27) is also upper-bounded by the same quantity. Now we have the fraction of bad codes for any given decomposition. The proof is completed by using the union bound taken over all decompositions.

B. Bounding the Number of Codewords: All Outputs

Now we have an exponential bound on the number of codewords obtained for all decompositions. However, we still cannot use the union bound for all matrices $W$. Therefore, we wish to replace this huge set by a comparatively small subset. In doing so, we also increase our lists a bounded number of times, as specified in (28): namely, we consider the superlists that include the lightest inputs taken with respect to the rounded matrices defined below. The main result of this step is the following theorem.

Theorem 20: There exists a discrete set $\mathcal{M}^*$ of matrices such that any list is contained in at least one superlist built on a matrix of $\mathcal{M}^*$.

This theorem, proved in Appendix A, allows us to represent all lists by the superlists of the same exponential size. Now we combine Theorem 20 with Lemma 18.

Corollary 21: Most long linear $(n,k)$-codes have at most the number (25) of codewords in each of the lists of inputs, with the exception of a small fraction of codes, as $n \to \infty$.
Proof: By taking any superlist instead of a list, we increase our upper bound from (25) a bounded number of times. On the other hand, the fraction of bad codes is multiplied by the number of superlists and still declines as $n \to \infty$. This proves our corollary for all superlists; then the same fraction of codes is obtained for their enclosures (the original lists).
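The overall shape of the argument in Sections VIII-A and VIII-B can be summarized as follows; this is a schematic reconstruction, and the displayed quantities stand in for the paper's exact expressions in (24)-(28).

```latex
% Union bound over the discrete set M* of rounded matrices: each original
% list Phi(W) is contained in some superlist Phi(W*), so
\[
\Pr_{C}\bigl[\exists\, W:\ C \text{ violates the bound on } \Phi(W)\bigr]
\;\le\; \sum_{W^{*}\in\mathcal{M}^{*}}
\Pr_{C}\bigl[\,C \text{ violates the bound on } \Phi(W^{*})\bigr]
\;\le\; |\mathcal{M}^{*}|\cdot \epsilon_{n},
\]
% where epsilon_n is the per-matrix fraction of bad codes from Lemma 18 and
% |M*| epsilon_n -> 0 as n -> infinity.
```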

In a similar way, we can consider any decomposition that includes several subintervals. In doing so, we replace any sublist by a bigger superlist. The main problem here is that the original product list becomes several times larger; however, the number of subintervals stays bounded as $n$ grows, since the ratios of the parameters are fixed. The result is the following lemma.

Lemma 22: Most linear $(n,k)$-codes of length $n$ give at most the number (29) of punctured codewords of length $s$ in each of the product lists, with the exception of a small fraction of codes.
Proof: Given $n$ and $s$ as above, we have polynomially many decompositions. For any decomposition, we take its intervals and apply Theorem 20 on any single interval. This allows us to replace all sublists by superlists, whose number is bounded on a single interval, and is also bounded when all intervals are combined. The original lists have size of order $T^{s/n}$, according to (22). Then we consider the supersets obtained by linking these superlists; these supersets have the same exponential size. Given any superset, we can apply Lemma 19. Then the bound (29) fails to hold only for a small fraction of codes. By taking all supersets and decompositions, we still get a vanishing fraction of bad codes as $n \to \infty$. This proves our lemma for the superlists; then the same fraction of codes can be taken for their enclosures.

IX. CONCLUDING REMARKS

In this paper, we study near-ML decoding algorithms that achieve block error probability arbitrarily close to that of ML decoding. We present a near-ML decoding algorithm that decodes the original $(n,k)$-code by comparing a number of decoding results obtained on the punctured $(s,k)$-codes. The algorithm employs a trellis-like design and can be used for any linear code, regardless of specific code structure. For most long linear codes, the algorithm achieves complexity order of $q^{k(n-k)/(n+k)}$, along with a decoding error probability that is arbitrarily close to the error probability of ML decoding. Thus, we reduce up to three times the asymptotic complexity exponent of trellis design. The algorithm also provides significant complexity improvements even on relatively short lengths. In particular, for binary linear codes of length 63, the algorithm uses fewer than three thousand states and reduces the general trellis complexity by three to six decimal orders.

APPENDIX A
UPPER BOUNDS ON THE NUMBER OF SUPERLISTS

We consider all possible weight matrices $W$. For each position $i$, let $\beta_i$ be the lightest entry of the corresponding column. Consider the new weights $w_i(a) - \beta_i$. Then all vectors have their weights equally changed. This shift keeps the same ordering of vectors in any list. Therefore, we will consider only matrices whose lightest entries are zeros.

Second, we further study the matrices in the following way. Given any threshold $v$, let $m_i(v)$ be the number of entries of weight $v$ or less in the column $i$. Also, define the subset of vectors whose symbols all have weight $v$ or less. (31)

Lemma 23: All vectors of the list $\Phi$ have rescaled weight bounded in terms of the threshold.
Proof: The subset just defined includes only vectors bounded in total weight and has size $\prod_i m_i(v)$ or more, according to (31). The list $\Phi$ includes the $T$ lightest vectors. Thus, the required bound holds for any vector of $\Phi$.

For any list $\Phi$, let $v^*$ be the maximum weight of its vectors (this quantity depends on $W$ and $T$). Then we define the closure of $\Phi$ as the set of all vectors whose weight does not exceed $v^*$.

Lemma 24: Any closure has size bounded in terms of $T$ and is contained in the corresponding subset (31).
Proof: The residual subset includes only vectors of maximum weight $v^*$. For each vector from this subset, we replace the first (leftmost) symbol of weight $v^*$ by the lightest symbol. In this new set of replaced vectors, each vector is obtained from at most $qn$ original vectors; hence, the new set is smaller by at most this factor. On the other hand, the new set consists of lighter vectors and therefore belongs to our subset. Combining these two facts gives the required bound.

We now define the list with a given maximum weight $v$ and study its size as a function of $v$.

Lemma 25: Consider a list of size $T$. Then the list with the reduced maximum weight has size bounded as in (32).
Proof: Let $A_i$ be the subset of vectors in the list that have weight exactly $v$ in position $i$. The complementary set includes the vectors with all symbols of weight less than $v$; therefore, this set is contained in the set from (31), taken for the weight closest to $v$ from below. Now we take any vector of $A_i$ and replace the leftmost symbol of weight $v$ by the lightest symbol. Similarly to the previous proof, we see that the new set of replaced vectors has all symbols of weight less than $v$, and its size is proportional to that of $A_i$. We say that $v$ is a step if $m_i(v)$ increases at $v$ for some $i$; the latter means that at least one entry is equal to $v$. In total, our matrix gives at most $qn$ steps. Given $v$, we use the two adjacent steps (30) and consider the rescaled entries. Then any vector has its rescaled weight defined accordingly, and our two adjacent steps are converted into their rescaled counterparts. Thus, we obtain the inequalities (31); combining them with the reverse estimate valid for any $v$, we conclude that (32) holds.

Proof of Theorem 20: Now we design a discrete set of superlists whose number is at most the quantity already defined in (28). We consider a finite set of rounding levels. For any given matrix $W$, let the matrix $W^*$ be obtained by rounding down all entries of $W$ to the closest level. Since each of the nonzero entries of $W$ can be rounded to any of the levels, the number of distinct matrices $W^*$ is bounded by the count given in (28).
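A tiny sketch of this quantization step: every entry of the rescaled, zero-minimum weight matrix is rounded down to a fixed grid, so only a bounded number of distinct matrices $W^*$ can arise. The grid step `delta` and the sample matrix are illustrative.

```python
import math

def rescale(W):
    """Subtract the lightest entry in each position (one row per position):
    the shift changes all vector weights equally and keeps every ordering."""
    return [[w - min(row) for w in row] for row in W]

def quantize(W, delta):
    """Round every entry down to the closest multiple of delta."""
    return [[math.floor(w / delta) * delta for w in row] for row in W]

W = [[0.73, 0.12], [0.05, 0.91], [0.40, 0.44]]
W_star = quantize(rescale(W), delta=0.25)
print(W_star)   # each list built for W is contained in a superlist for W*
```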


THIS work is motivated by the goal of finding the capacity IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 53, NO. 8, AUGUST 2007 2693 Improved Lower Bounds for the Capacity of i.i.d. Deletion Duplication Channels Eleni Drinea, Member, IEEE, Michael Mitzenmacher,

More information

Robust Network Codes for Unicast Connections: A Case Study

Robust Network Codes for Unicast Connections: A Case Study Robust Network Codes for Unicast Connections: A Case Study Salim Y. El Rouayheb, Alex Sprintson, and Costas Georghiades Department of Electrical and Computer Engineering Texas A&M University College Station,

More information

Chapter 2. Error Correcting Codes. 2.1 Basic Notions

Chapter 2. Error Correcting Codes. 2.1 Basic Notions Chapter 2 Error Correcting Codes The identification number schemes we discussed in the previous chapter give us the ability to determine if an error has been made in recording or transmitting information.

More information

MATH 291T CODING THEORY

MATH 291T CODING THEORY California State University, Fresno MATH 291T CODING THEORY Fall 2011 Instructor : Stefaan Delcroix Contents 1 Introduction to Error-Correcting Codes 3 2 Basic Concepts and Properties 6 2.1 Definitions....................................

More information

Optimum Soft Decision Decoding of Linear Block Codes

Optimum Soft Decision Decoding of Linear Block Codes Optimum Soft Decision Decoding of Linear Block Codes {m i } Channel encoder C=(C n-1,,c 0 ) BPSK S(t) (n,k,d) linear modulator block code Optimal receiver AWGN Assume that [n,k,d] linear block code C is

More information

Coding on a Trellis: Convolutional Codes

Coding on a Trellis: Convolutional Codes .... Coding on a Trellis: Convolutional Codes Telecommunications Laboratory Alex Balatsoukas-Stimming Technical University of Crete November 6th, 2008 Telecommunications Laboratory (TUC) Coding on a Trellis:

More information

Error Correction Methods

Error Correction Methods Technologies and Services on igital Broadcasting (7) Error Correction Methods "Technologies and Services of igital Broadcasting" (in Japanese, ISBN4-339-06-) is published by CORONA publishing co., Ltd.

More information

Lecture 6: Expander Codes

Lecture 6: Expander Codes CS369E: Expanders May 2 & 9, 2005 Lecturer: Prahladh Harsha Lecture 6: Expander Codes Scribe: Hovav Shacham In today s lecture, we will discuss the application of expander graphs to error-correcting codes.

More information

2012 IEEE International Symposium on Information Theory Proceedings

2012 IEEE International Symposium on Information Theory Proceedings Decoding of Cyclic Codes over Symbol-Pair Read Channels Eitan Yaakobi, Jehoshua Bruck, and Paul H Siegel Electrical Engineering Department, California Institute of Technology, Pasadena, CA 9115, USA Electrical

More information

Orthogonal Arrays & Codes

Orthogonal Arrays & Codes Orthogonal Arrays & Codes Orthogonal Arrays - Redux An orthogonal array of strength t, a t-(v,k,λ)-oa, is a λv t x k array of v symbols, such that in any t columns of the array every one of the possible

More information

Group Codes Outperform Binary-Coset Codes on Nonbinary Symmetric Memoryless Channels

Group Codes Outperform Binary-Coset Codes on Nonbinary Symmetric Memoryless Channels Group Codes Outperform Binary-Coset Codes on Nonbinary Symmetric Memoryless Channels The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters.

More information

Adaptive Cut Generation for Improved Linear Programming Decoding of Binary Linear Codes

Adaptive Cut Generation for Improved Linear Programming Decoding of Binary Linear Codes Adaptive Cut Generation for Improved Linear Programming Decoding of Binary Linear Codes Xiaojie Zhang and Paul H. Siegel University of California, San Diego, La Jolla, CA 9093, U Email:{ericzhang, psiegel}@ucsd.edu

More information

MATH 433 Applied Algebra Lecture 21: Linear codes (continued). Classification of groups.

MATH 433 Applied Algebra Lecture 21: Linear codes (continued). Classification of groups. MATH 433 Applied Algebra Lecture 21: Linear codes (continued). Classification of groups. Binary codes Let us assume that a message to be transmitted is in binary form. That is, it is a word in the alphabet

More information

CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding

CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding Tim Roughgarden October 29, 2014 1 Preamble This lecture covers our final subtopic within the exact and approximate recovery part of the course.

More information

Algebraic Soft-Decision Decoding of Reed Solomon Codes

Algebraic Soft-Decision Decoding of Reed Solomon Codes IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 49, NO. 11, NOVEMBER 2003 2809 Algebraic Soft-Decision Decoding of Reed Solomon Codes Ralf Koetter, Member, IEEE, Alexer Vardy, Fellow, IEEE Abstract A polynomial-time

More information

LOW-density parity-check (LDPC) codes were invented

LOW-density parity-check (LDPC) codes were invented IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 54, NO 1, JANUARY 2008 51 Extremal Problems of Information Combining Yibo Jiang, Alexei Ashikhmin, Member, IEEE, Ralf Koetter, Senior Member, IEEE, and Andrew

More information

exercise in the previous class (1)

exercise in the previous class (1) exercise in the previous class () Consider an odd parity check code C whose codewords are (x,, x k, p) with p = x + +x k +. Is C a linear code? No. x =, x 2 =x =...=x k = p =, and... is a codeword x 2

More information

The E8 Lattice and Error Correction in Multi-Level Flash Memory

The E8 Lattice and Error Correction in Multi-Level Flash Memory The E8 Lattice and Error Correction in Multi-Level Flash Memory Brian M Kurkoski University of Electro-Communications Tokyo, Japan kurkoski@iceuecacjp Abstract A construction using the E8 lattice and Reed-Solomon

More information

Höst, Stefan; Johannesson, Rolf; Zigangirov, Kamil; Zyablov, Viktor V.

Höst, Stefan; Johannesson, Rolf; Zigangirov, Kamil; Zyablov, Viktor V. Active distances for convolutional codes Höst, Stefan; Johannesson, Rolf; Zigangirov, Kamil; Zyablov, Viktor V Published in: IEEE Transactions on Information Theory DOI: 101109/18749009 Published: 1999-01-01

More information

RON M. ROTH * GADIEL SEROUSSI **

RON M. ROTH * GADIEL SEROUSSI ** ENCODING AND DECODING OF BCH CODES USING LIGHT AND SHORT CODEWORDS RON M. ROTH * AND GADIEL SEROUSSI ** ABSTRACT It is shown that every q-ary primitive BCH code of designed distance δ and sufficiently

More information

MATH 291T CODING THEORY

MATH 291T CODING THEORY California State University, Fresno MATH 291T CODING THEORY Spring 2009 Instructor : Stefaan Delcroix Chapter 1 Introduction to Error-Correcting Codes It happens quite often that a message becomes corrupt

More information

Decoding the Tail-Biting Convolutional Codes with Pre-Decoding Circular Shift

Decoding the Tail-Biting Convolutional Codes with Pre-Decoding Circular Shift Decoding the Tail-Biting Convolutional Codes with Pre-Decoding Circular Shift Ching-Yao Su Directed by: Prof. Po-Ning Chen Department of Communications Engineering, National Chiao-Tung University July

More information

Lecture 4 Noisy Channel Coding

Lecture 4 Noisy Channel Coding Lecture 4 Noisy Channel Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 9, 2015 1 / 56 I-Hsiang Wang IT Lecture 4 The Channel Coding Problem

More information

Guess & Check Codes for Deletions, Insertions, and Synchronization

Guess & Check Codes for Deletions, Insertions, and Synchronization Guess & Chec Codes for Deletions, Insertions, and Synchronization Serge Kas Hanna, Salim El Rouayheb ECE Department, IIT, Chicago sashann@hawiitedu, salim@iitedu Abstract We consider the problem of constructing

More information

PSK bit mappings with good minimax error probability

PSK bit mappings with good minimax error probability PSK bit mappings with good minimax error probability Erik Agrell Department of Signals and Systems Chalmers University of Technology 4196 Göteborg, Sweden Email: agrell@chalmers.se Erik G. Ström Department

More information

Notes 3: Stochastic channels and noisy coding theorem bound. 1 Model of information communication and noisy channel

Notes 3: Stochastic channels and noisy coding theorem bound. 1 Model of information communication and noisy channel Introduction to Coding Theory CMU: Spring 2010 Notes 3: Stochastic channels and noisy coding theorem bound January 2010 Lecturer: Venkatesan Guruswami Scribe: Venkatesan Guruswami We now turn to the basic

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER 2011 7255 On the Performance of Sparse Recovery Via `p-minimization (0 p 1) Meng Wang, Student Member, IEEE, Weiyu Xu, and Ao Tang, Senior

More information

Lecture 2 Linear Codes

Lecture 2 Linear Codes Lecture 2 Linear Codes 2.1. Linear Codes From now on we want to identify the alphabet Σ with a finite field F q. For general codes, introduced in the last section, the description is hard. For a code of

More information

Optimal Block-Type-Decodable Encoders for Constrained Systems

Optimal Block-Type-Decodable Encoders for Constrained Systems IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 49, NO. 5, MAY 2003 1231 Optimal Block-Type-Decodable Encoders for Constrained Systems Panu Chaichanavong, Student Member, IEEE, Brian H. Marcus, Fellow, IEEE

More information

Covering a sphere with caps: Rogers bound revisited

Covering a sphere with caps: Rogers bound revisited Covering a sphere with caps: Rogers bound revisited Ilya Dumer Abstract We consider coverings of a sphere S n r of radius r with the balls of radius one in an n-dimensional Euclidean space R n. Our goal

More information

Structured Low-Density Parity-Check Codes: Algebraic Constructions

Structured Low-Density Parity-Check Codes: Algebraic Constructions Structured Low-Density Parity-Check Codes: Algebraic Constructions Shu Lin Department of Electrical and Computer Engineering University of California, Davis Davis, California 95616 Email:shulin@ece.ucdavis.edu

More information

Ahlswede Khachatrian Theorems: Weighted, Infinite, and Hamming

Ahlswede Khachatrian Theorems: Weighted, Infinite, and Hamming Ahlswede Khachatrian Theorems: Weighted, Infinite, and Hamming Yuval Filmus April 4, 2017 Abstract The seminal complete intersection theorem of Ahlswede and Khachatrian gives the maximum cardinality of

More information

Chapter 7: Channel coding:convolutional codes

Chapter 7: Channel coding:convolutional codes Chapter 7: : Convolutional codes University of Limoges meghdadi@ensil.unilim.fr Reference : Digital communications by John Proakis; Wireless communication by Andreas Goldsmith Encoder representation Communication

More information

Laplacian Integral Graphs with Maximum Degree 3

Laplacian Integral Graphs with Maximum Degree 3 Laplacian Integral Graphs with Maximum Degree Steve Kirkland Department of Mathematics and Statistics University of Regina Regina, Saskatchewan, Canada S4S 0A kirkland@math.uregina.ca Submitted: Nov 5,

More information

Practical Polar Code Construction Using Generalised Generator Matrices

Practical Polar Code Construction Using Generalised Generator Matrices Practical Polar Code Construction Using Generalised Generator Matrices Berksan Serbetci and Ali E. Pusane Department of Electrical and Electronics Engineering Bogazici University Istanbul, Turkey E-mail:

More information

Introduction to Binary Convolutional Codes [1]

Introduction to Binary Convolutional Codes [1] Introduction to Binary Convolutional Codes [1] Yunghsiang S. Han Graduate Institute of Communication Engineering, National Taipei University Taiwan E-mail: yshan@mail.ntpu.edu.tw Y. S. Han Introduction

More information

1 Introduction to information theory

1 Introduction to information theory 1 Introduction to information theory 1.1 Introduction In this chapter we present some of the basic concepts of information theory. The situations we have in mind involve the exchange of information through

More information

The Hamming Codes and Delsarte s Linear Programming Bound

The Hamming Codes and Delsarte s Linear Programming Bound The Hamming Codes and Delsarte s Linear Programming Bound by Sky McKinley Under the Astute Tutelage of Professor John S. Caughman, IV A thesis submitted in partial fulfillment of the requirements for the

More information

Algebraic Soft-Decision Decoding of Reed-Solomon Codes Using Bit-level Soft Information

Algebraic Soft-Decision Decoding of Reed-Solomon Codes Using Bit-level Soft Information 1 Algebraic Soft-Decision Decoding of Reed-Solomon Codes Using Bit-level Soft Information arxiv:cs/0611090v [cs.it] 4 Aug 008 Jing Jiang and Krishna R. Narayanan Department of Electrical and Computer Engineering,

More information

ELEC 519A Selected Topics in Digital Communications: Information Theory. Hamming Codes and Bounds on Codes

ELEC 519A Selected Topics in Digital Communications: Information Theory. Hamming Codes and Bounds on Codes ELEC 519A Selected Topics in Digital Communications: Information Theory Hamming Codes and Bounds on Codes Single Error Correcting Codes 2 Hamming Codes (7,4,3) Hamming code 1 0 0 0 0 1 1 0 1 0 0 1 0 1

More information

Introduction to Convolutional Codes, Part 1

Introduction to Convolutional Codes, Part 1 Introduction to Convolutional Codes, Part 1 Frans M.J. Willems, Eindhoven University of Technology September 29, 2009 Elias, Father of Coding Theory Textbook Encoder Encoder Properties Systematic Codes

More information

MATH3302. Coding and Cryptography. Coding Theory

MATH3302. Coding and Cryptography. Coding Theory MATH3302 Coding and Cryptography Coding Theory 2010 Contents 1 Introduction to coding theory 2 1.1 Introduction.......................................... 2 1.2 Basic definitions and assumptions..............................

More information

Efficient Decoding of Permutation Codes Obtained from Distance Preserving Maps

Efficient Decoding of Permutation Codes Obtained from Distance Preserving Maps 2012 IEEE International Symposium on Information Theory Proceedings Efficient Decoding of Permutation Codes Obtained from Distance Preserving Maps Yeow Meng Chee and Punarbasu Purkayastha Division of Mathematical

More information

Lecture 7. Union bound for reducing M-ary to binary hypothesis testing

Lecture 7. Union bound for reducing M-ary to binary hypothesis testing Lecture 7 Agenda for the lecture M-ary hypothesis testing and the MAP rule Union bound for reducing M-ary to binary hypothesis testing Introduction of the channel coding problem 7.1 M-ary hypothesis testing

More information

And for polynomials with coefficients in F 2 = Z/2 Euclidean algorithm for gcd s Concept of equality mod M(x) Extended Euclid for inverses mod M(x)

And for polynomials with coefficients in F 2 = Z/2 Euclidean algorithm for gcd s Concept of equality mod M(x) Extended Euclid for inverses mod M(x) Outline Recall: For integers Euclidean algorithm for finding gcd s Extended Euclid for finding multiplicative inverses Extended Euclid for computing Sun-Ze Test for primitive roots And for polynomials

More information

MATH/MTHE 406 Homework Assignment 2 due date: October 17, 2016

MATH/MTHE 406 Homework Assignment 2 due date: October 17, 2016 MATH/MTHE 406 Homework Assignment 2 due date: October 17, 2016 Notation: We will use the notations x 1 x 2 x n and also (x 1, x 2,, x n ) to denote a vector x F n where F is a finite field. 1. [20=6+5+9]

More information

Random Redundant Soft-In Soft-Out Decoding of Linear Block Codes

Random Redundant Soft-In Soft-Out Decoding of Linear Block Codes Random Redundant Soft-In Soft-Out Decoding of Linear Block Codes Thomas R. Halford and Keith M. Chugg Communication Sciences Institute University of Southern California Los Angeles, CA 90089-2565 Abstract

More information

Error Correcting Codes Questions Pool

Error Correcting Codes Questions Pool Error Correcting Codes Questions Pool Amnon Ta-Shma and Dean Doron January 3, 018 General guidelines The questions fall into several categories: (Know). (Mandatory). (Bonus). Make sure you know how to

More information

New Minimal Weight Representations for Left-to-Right Window Methods

New Minimal Weight Representations for Left-to-Right Window Methods New Minimal Weight Representations for Left-to-Right Window Methods James A. Muir 1 and Douglas R. Stinson 2 1 Department of Combinatorics and Optimization 2 School of Computer Science University of Waterloo

More information

Physical Layer and Coding

Physical Layer and Coding Physical Layer and Coding Muriel Médard Professor EECS Overview A variety of physical media: copper, free space, optical fiber Unified way of addressing signals at the input and the output of these media:

More information

LARGE PRIME NUMBERS (32, 42; 4) (32, 24; 2) (32, 20; 1) ( 105, 20; 0).

LARGE PRIME NUMBERS (32, 42; 4) (32, 24; 2) (32, 20; 1) ( 105, 20; 0). LARGE PRIME NUMBERS 1. Fast Modular Exponentiation Given positive integers a, e, and n, the following algorithm quickly computes the reduced power a e % n. (Here x % n denotes the element of {0,, n 1}

More information

The decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t

The decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t The decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t Wiebke S. Diestelkamp Department of Mathematics University of Dayton Dayton, OH 45469-2316 USA wiebke@udayton.edu

More information

SPARSE signal representations have gained popularity in recent

SPARSE signal representations have gained popularity in recent 6958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 10, OCTOBER 2011 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE Abstract The fundamental principle underlying

More information

An introduction to basic information theory. Hampus Wessman

An introduction to basic information theory. Hampus Wessman An introduction to basic information theory Hampus Wessman Abstract We give a short and simple introduction to basic information theory, by stripping away all the non-essentials. Theoretical bounds on

More information

2376 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 53, NO. 7, JULY Note that conic conv(c) = conic(c).

2376 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 53, NO. 7, JULY Note that conic conv(c) = conic(c). 2376 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 53, NO. 7, JULY 2007 Pseudo-Codeword Analysis of Tanner Graphs From Projective and Euclidean Planes Roxana Smarandache, Member, IEEE, and Pascal O. Vontobel,

More information

MATH 433 Applied Algebra Lecture 22: Review for Exam 2.

MATH 433 Applied Algebra Lecture 22: Review for Exam 2. MATH 433 Applied Algebra Lecture 22: Review for Exam 2. Topics for Exam 2 Permutations Cycles, transpositions Cycle decomposition of a permutation Order of a permutation Sign of a permutation Symmetric

More information

Performance of small signal sets

Performance of small signal sets 42 Chapter 5 Performance of small signal sets In this chapter, we show how to estimate the performance of small-to-moderate-sized signal constellations on the discrete-time AWGN channel. With equiprobable

More information

MATH3302 Coding Theory Problem Set The following ISBN was received with a smudge. What is the missing digit? x9139 9

MATH3302 Coding Theory Problem Set The following ISBN was received with a smudge. What is the missing digit? x9139 9 Problem Set 1 These questions are based on the material in Section 1: Introduction to coding theory. You do not need to submit your answers to any of these questions. 1. The following ISBN was received

More information

Section 3 Error Correcting Codes (ECC): Fundamentals

Section 3 Error Correcting Codes (ECC): Fundamentals Section 3 Error Correcting Codes (ECC): Fundamentals Communication systems and channel models Definition and examples of ECCs Distance For the contents relevant to distance, Lin & Xing s book, Chapter

More information

EE5139R: Problem Set 4 Assigned: 31/08/16, Due: 07/09/16

EE5139R: Problem Set 4 Assigned: 31/08/16, Due: 07/09/16 EE539R: Problem Set 4 Assigned: 3/08/6, Due: 07/09/6. Cover and Thomas: Problem 3.5 Sets defined by probabilities: Define the set C n (t = {x n : P X n(x n 2 nt } (a We have = P X n(x n P X n(x n 2 nt

More information

Chain Independence and Common Information

Chain Independence and Common Information 1 Chain Independence and Common Information Konstantin Makarychev and Yury Makarychev Abstract We present a new proof of a celebrated result of Gács and Körner that the common information is far less than

More information

SIPCom8-1: Information Theory and Coding Linear Binary Codes Ingmar Land

SIPCom8-1: Information Theory and Coding Linear Binary Codes Ingmar Land SIPCom8-1: Information Theory and Coding Linear Binary Codes Ingmar Land Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.1 Overview Basic Concepts of Channel Coding Block Codes I:

More information

Chapter 9 Fundamental Limits in Information Theory

Chapter 9 Fundamental Limits in Information Theory Chapter 9 Fundamental Limits in Information Theory Information Theory is the fundamental theory behind information manipulation, including data compression and data transmission. 9.1 Introduction o For

More information

An Alternative Proof of Channel Polarization for Channels with Arbitrary Input Alphabets

An Alternative Proof of Channel Polarization for Channels with Arbitrary Input Alphabets An Alternative Proof of Channel Polarization for Channels with Arbitrary Input Alphabets Jing Guo University of Cambridge jg582@cam.ac.uk Jossy Sayir University of Cambridge j.sayir@ieee.org Minghai Qin

More information

Vector spaces. EE 387, Notes 8, Handout #12

Vector spaces. EE 387, Notes 8, Handout #12 Vector spaces EE 387, Notes 8, Handout #12 A vector space V of vectors over a field F of scalars is a set with a binary operator + on V and a scalar-vector product satisfying these axioms: 1. (V, +) is

More information

An Efficient Algorithm for Finding Dominant Trapping Sets of LDPC Codes

An Efficient Algorithm for Finding Dominant Trapping Sets of LDPC Codes An Efficient Algorithm for Finding Dominant Trapping Sets of LDPC Codes Mehdi Karimi, Student Member, IEEE and Amir H. Banihashemi, Senior Member, IEEE Abstract arxiv:1108.4478v2 [cs.it] 13 Apr 2012 This

More information

5.0 BCH and Reed-Solomon Codes 5.1 Introduction

5.0 BCH and Reed-Solomon Codes 5.1 Introduction 5.0 BCH and Reed-Solomon Codes 5.1 Introduction A. Hocquenghem (1959), Codes correcteur d erreurs; Bose and Ray-Chaudhuri (1960), Error Correcting Binary Group Codes; First general family of algebraic

More information

Joint Coding for Flash Memory Storage

Joint Coding for Flash Memory Storage ISIT 8, Toronto, Canada, July 6-11, 8 Joint Coding for Flash Memory Storage Anxiao (Andrew) Jiang Computer Science Department Texas A&M University College Station, TX 77843, U.S.A. ajiang@cs.tamu.edu Abstract

More information

On Compression Encrypted Data part 2. Prof. Ja-Ling Wu The Graduate Institute of Networking and Multimedia National Taiwan University

On Compression Encrypted Data part 2. Prof. Ja-Ling Wu The Graduate Institute of Networking and Multimedia National Taiwan University On Compression Encrypted Data part 2 Prof. Ja-Ling Wu The Graduate Institute of Networking and Multimedia National Taiwan University 1 Brief Summary of Information-theoretic Prescription At a functional

More information

Enumeration Schemes for Words Avoiding Permutations

Enumeration Schemes for Words Avoiding Permutations Enumeration Schemes for Words Avoiding Permutations Lara Pudwell November 27, 2007 Abstract The enumeration of permutation classes has been accomplished with a variety of techniques. One wide-reaching

More information

A Polynomial-Time Algorithm for Pliable Index Coding

A Polynomial-Time Algorithm for Pliable Index Coding 1 A Polynomial-Time Algorithm for Pliable Index Coding Linqi Song and Christina Fragouli arxiv:1610.06845v [cs.it] 9 Aug 017 Abstract In pliable index coding, we consider a server with m messages and n

More information

On Locating-Dominating Codes in Binary Hamming Spaces

On Locating-Dominating Codes in Binary Hamming Spaces Discrete Mathematics and Theoretical Computer Science 6, 2004, 265 282 On Locating-Dominating Codes in Binary Hamming Spaces Iiro Honkala and Tero Laihonen and Sanna Ranto Department of Mathematics and

More information

arxiv:math/ v1 [math.mg] 31 May 2006

arxiv:math/ v1 [math.mg] 31 May 2006 Covering spheres with spheres arxiv:math/060600v1 [math.mg] 31 May 006 Ilya Dumer College of Engineering, University of California at Riverside, Riverside, CA 951, USA dumer@ee.ucr.edu Abstract Given a

More information

On the Duality between Multiple-Access Codes and Computation Codes

On the Duality between Multiple-Access Codes and Computation Codes On the Duality between Multiple-Access Codes and Computation Codes Jingge Zhu University of California, Berkeley jingge.zhu@berkeley.edu Sung Hoon Lim KIOST shlim@kiost.ac.kr Michael Gastpar EPFL michael.gastpar@epfl.ch

More information