Interleaver Properties and Their Applications to the Trellis Complexity Analysis of Turbo Codes


Interleaver Properties and Their Applications to the Trellis Complexity Analysis of Turbo Codes

Roberto Garello, Member, IEEE, Guido Montorsi, Member, IEEE, Sergio Benedetto, Fellow, IEEE, and Giovanni Cancellieri

Paper approved by M. Fossorier, the Editor for Coding and Communication Theory of the IEEE Communications Society. Manuscript received August 1, 1997; revised July 6, 1999 and April 13, 2000. This work was supported in part by Qualcomm, Inc., and by the Agenzia Spaziale Italiana. This paper was presented in part at the Sixth Communication Theory Mini-Conference, Phoenix, AZ, November 1997, and at the 2000 International Symposium on Information Theory, Sorrento, Italy, June 2000. R. Garello and G. Cancellieri are with the Dipartimento di Elettronica ed Automatica, Università di Ancona, Ancona, Italy. G. Montorsi and S. Benedetto are with the Dipartimento di Elettronica, Politecnico di Torino, Torino, Italy.

Abstract—In the first part of this paper, the basic theory of interleavers is revisited in a semi-tutorial manner and extended to encompass noncausal interleavers. The parameters that characterize the interleaver behavior (like delay, latency, and period) are clearly defined. The input/output interleaver code is introduced and its complexity studied. Connections among various interleaver parameters are explored. The classes of convolutional and block interleavers are considered, and their practical implementation discussed. In the second part, the trellis complexity of turbo codes is tied to the complexity of the constituent interleaver. A procedure of complexity reduction by coordinate permutation is presented, together with some examples of its application.

Index Terms—Concatenated codes, interleavers, trellis complexity, turbo codes.

I. INTRODUCTION

INTERLEAVERS are simple devices that permute (usually binary) sequences: they are widely used for improving the error correction capabilities of coding schemes over bursty channels. Their basic theory has received relatively limited attention in the past, apart from some classical papers [1], [2]. Since the introduction of turbo codes [3], where interleavers play a fundamental role, researchers have dedicated many efforts to interleaver design. However, misunderstanding of basic interleaver theory often causes confusion in the turbo code literature.

In this paper, interleaver theory is revisited. Our intent is twofold: first, we want to establish a clear mathematical framework which encompasses old definitions and results on causal interleavers. Second, we want to extend this theory to noncausal interleavers, which can be useful for some applications like, for example, turbo codes. We begin with a proper definition of the key quantities that characterize an interleaver, like its minimum/maximum delay, its characteristic latency, and its period. Then, interleaver equivalence and deinterleavers are carefully studied. Interleaver trellis complexity is analyzed through the introduction of a proper input/output interleaver code. Connections between interleaver quantities are then explored, especially those concerning latency, a key parameter for applications. Finally, the class of convolutional interleavers is considered, and their practical implementation discussed. Block interleavers, which are the basis of most turbo code schemes, are introduced as a special case.

A turbo code is obtained by the parallel concatenation of simple constituent convolutional codes connected through an interleaver. Turbo codes, although decoded through an iterative suboptimum decoding
algorithm, yield performance extremely close to the Shannon limit [3]. In spite of the fact that several research issues are still open, turbo codes are obtaining a large success, and their introduction in many international standards is in progress.

For decoding purposes, the trellises of the constituent convolutional encoders are almost always terminated, so that the turbo code can be considered as a block code. Every linear block code can be represented by a minimal trellis, which, in turn, can be used for soft-decision decoding with the Viterbi or the BCJR [4] algorithms. The complexity of a given trellis is usually expressed in terms of complexity parameters like the maximum state complexity (logarithm of the maximum number of states), the maximum branch complexity (logarithm of the maximum number of branches), and the average branch symbol complexity (average number of branch symbols per information bit). Moreover, for a given block code, it is well known that all complexity measures strongly depend on the ordering of the time axis, which in turn involves in principle all possible permutations of its segments.

It is a folk theorem among researchers in the field that maximum-likelihood decoding of turbo codes presents a computational complexity increasing exponentially with the interleaver length, so that it becomes prohibitively complex when the turbo code employs a large interleaver, because of the huge number of states involved. Thus, simpler, suboptimum iterative decoding strategies have been proposed and successfully applied. The previous, qualitative folk theorem may contain a reasonable amount of truth. The trellis complexity of turbo codes, however, has never been properly studied. In this paper, it is analyzed and related to that of the constituent codes and interleaver. By the introduction of the uniform interleaver, the evaluation of the average complexity of a turbo code of a given length can also be performed. Strong connections exist between the true trellis complexity parameters (minimized by coordinate permutation) and the free distance of the code, which characterizes the error probability performance at high signal-to-noise ratios.

As a matter of fact, large values of the trellis complexity quantities are a necessary (but not sufficient) condition to achieve a large free distance [5], [6]. Finally, a procedure to reduce turbo code complexity by coordinate permutation, suggested by interleaver properties, is presented. Its application is particularly interesting for the class of row-by-column block interleavers, where it allows an impressive reduction that helps to understand the interleaver properties.

In Section II, the definitions of the trellis complexity parameters are recalled. In Section III, past work on interleaver theory is reviewed. In Section IV, interleaver theory is revisited and extended to noncausal interleavers. The input/output trellis complexity of interleavers is studied in Section V. In Section VI, the connections between the various interleaver parameters are explored. Convolutional and block interleavers are considered in Section VII. In Section VIII, turbo codes are introduced and their complexity studied.

II. TRELLIS COMPLEXITY DEFINITIONS

For a linear binary block code, we will use the following notation, referred to its minimal trellis [7]: any codeword is an n-tuple; the state space at time t and its cardinality (i.e., the number of states at time t) define the state (space dimension) profile; the trellis section at time t and its cardinality (i.e., the number of branches at time t) define the branch profile. Following [7], the state space can be defined through the projections of the code over an interval and the subcode composed of the codewords with support enclosed in that interval. The trellis complexity parameters considered in this paper are as follows. The maximum state complexity is defined as the maximum of the state profile. The maximum branch complexity is defined as the maximum of the branch profile. The average branch symbol complexity is defined as the average number of branch symbols per information bit.

A. Complexity: Past Work on the Subject

The literature on the trellis complexity of block codes is very rich. The subject was introduced in some classical papers [4], [8]-[10]. Many other papers deal with the subject, covering methods to compute the complexity parameters, build the trellis, and establish bounds, or studies on complexity reduction by coordinate permutation. Among them we can cite [11]-[26]. Recent results on the complexity of convolutional codes are reported in [17] and [27]. For a complete list of references on the subject, see [28].

B. Minimal Span Generator Matrices (MSGM)

A procedure for practically computing the state and branch profiles of a code has been presented in [20] and [22]. We say that a generator matrix with the LR (left/right) property has MSGM form. Given a row of the matrix, let L be the position of its first nonzero entry and R the position of its last nonzero entry; the matrix has the LR property if no two rows share the same L and no two rows share the same R. Any generator matrix can be easily transformed into MSGM form [22]. For any time t, we say that a row is edge-active at t if t lies within its span, and state-active at t if the boundary t lies strictly inside its span. Given a matrix in MSGM form, the state space dimension at time t is the number of rows that are state-active at t, and the logarithm of the number of branches is the number of rows that are edge-active at t.

Example 1: Consider the simple block code whose generator matrix has the LR property. Its state and branch profiles follow directly from the row spans, and its minimal trellis is depicted in Fig. 1(a).

C. Complexity Minimization by Coordinate Permutation

Coordinate permutations can strongly change the complexity parameters. In other words, given a code, there can exist an equivalent code, obtained through an n-long permutation mapping any sequence into its permuted version, whose maximum state complexity, and/or maximum branch complexity, and/or average branch symbol complexity are smaller (in general, different
permutations may be needed to minimize different complexity parameters). As a consequence, a real measure of the complexity of the code can be based upon the parameters minimized over all coordinate permutations (see Section II-D for the definition that holds when more than one bit labels each branch).
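To make the Section II-B procedure concrete, the following is a small sketch (not the authors' code) that computes state and branch profiles from a generator matrix in MSGM form; the (8,4) matrix in the example is hypothetical and serves only as an illustration, and the boundary convention used in the comments is my own reading of the standard one.

```python
def spans(G):
    """(L, R) of each row: positions of its first and last nonzero entries."""
    out = []
    for row in G:
        nz = [i for i, b in enumerate(row) if b]
        out.append((nz[0], nz[-1]))
    return out

def lr_property(G):
    """True if no two rows start, or end, at the same position (MSGM form)."""
    Ls, Rs = zip(*spans(G))
    return len(set(Ls)) == len(Ls) and len(set(Rs)) == len(Rs)

def state_profile(G):
    """State dimensions at boundaries 0..n: rows whose span strictly contains t."""
    n = len(G[0])
    sp = spans(G)
    return [sum(1 for L, R in sp if L < t <= R) for t in range(n + 1)]

def branch_profile(G):
    """log2 of the branch count of sections 0..n-1: rows edge-active there."""
    n = len(G[0])
    sp = spans(G)
    return [sum(1 for L, R in sp if L <= t <= R) for t in range(n)]

if __name__ == "__main__":
    # hypothetical (8, 4) generator matrix, already in MSGM form
    G = [[1, 1, 0, 1, 1, 0, 0, 0],
         [0, 1, 1, 1, 0, 1, 0, 0],
         [0, 0, 1, 0, 1, 1, 1, 0],
         [0, 0, 0, 1, 1, 0, 1, 1]]
    assert lr_property(G)
    print("state profile :", state_profile(G))   # its maximum is the state complexity
    print("branch profile:", branch_profile(G))
```

Applying the same two functions to a column-permuted matrix (after restoring the MSGM form) shows directly how a coordinate permutation changes the profiles, which is the effect discussed in Section II-C.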

In general, the problem of finding the best permutation for complexity minimization is NP-complete [25], [26].

D. Sectionalization

So far, we have considered an atomic representation of the code, i.e., one bit at every time. For clearness, in the following we will denote by a superscript the complexity parameters obtained when a sectionalization occurs such that a group of bits corresponds to any time instant. The superscript will be omitted only when obvious from the context. As pointed out in [16], trellis sectionalization varies the state and branch complexities, up to pathological cases. The average branch symbol complexity, suitably extended, is not affected by sectionalization; moreover, it is strictly tied to the computational complexity of the decoding algorithms [22]. This suggests that it is perhaps the most appropriate parameter for complexity considerations. Optimal sectionalization of the code trellis to reduce its complexity has been studied in [29]. Several authors also allow for sectionalization into varying numbers of bits per trellis section, as in [29].

Fig. 1. Minimal trellises for the codes of (a) Example 1 and (b) Example 2.

Example 2: Given the code of Example 1 and a suitable coordinate permutation, the permuted code has a generator matrix with the LR property, and its complexity parameters change accordingly. The minimal trellis of the permuted code is depicted in Fig. 1(b).

III. INTERLEAVER THEORY: PAST WORK ON THE SUBJECT

The theory of interleavers was established in the two classical papers [1] and [2]. In [1], causal interleavers were deeply analyzed: several definitions and results, equivalent to those presented in this paper, will be referred to in the following. Moreover, many significant results on the spreading properties of interleavers (a subject that we have not considered in this paper) and on their implementation were presented. Other results were presented in [2] (see also [30]): Forney invented the periodic convolutional interleavers that are employed in many applications and international standards.

Two considerations are important. First, classical interleaver theory only concerned causal devices. As a consequence, some quantities defined in our paper (like, for example, equivalence classes of interleavers, or the interleaver code) were not strictly necessary, and were not introduced in [1] and [2]. Second, other results on causal interleavers were not explicitly presented in [1] and [2], and are poorly documented or scattered elsewhere in the literature. As a matter of fact, we think that most of the results on causal interleavers presented in this paper are to be considered as a re-interpretation of previous ones, even when an explicit reference is missing. Among the main contributions of this paper are the general mathematical framework and all results concerning noncausal interleavers, a subject which was not studied in the past.

Recently, we partially presented our results in a preliminary form in [31]. Similar definitions for causal interleavers can be found in [32], together with some results about the interleaver spread. Some other results concerning the cycle decomposition of an interleaver and its implementation are presented in [33].

Fig. 2. The delay function and some parameters of the interleaver of Example 3.

IV. INTERLEAVERS: BASIC DEFINITIONS

We begin to revisit interleaver theory by introducing some basic definitions. They can be considered an extension of the definitions introduced in [1] for causal interleavers.

An interleaver is a device characterized by a fixed permutation of the infinite time axis; it maps bi-infinite input sequences into permuted output sequences. Although irrelevant for the interleaver properties, for simplicity we will assume in the following that the alphabet is binary.

A. Delays and Period

We define the delay function of the interleaver, through which the interleaver action can be described. We also define the following quantities: the maximum delay, the minimum delay, and the characteristic latency, equal to the difference between the maximum and the minimum delay. We say that an interleaver is periodic with period p if there exists a positive integer p such that the delay function repeats with period p. In this paper, we will only consider periodic interleavers, because nonperiodic permutations are not used in practice.

The period is usually referred to, in the turbo code literature, as the interleaver length or size, and it is a crucial parameter in determining the code performance. Often, it is also directly related to the latency introduced in the transmission chain; this is not strictly correct, as we will see soon that the minimum delay introduced by the interleaver and deinterleaver pair is instead the characteristic latency. The ambiguity stems from the common suboptimal implementation of a block interleaver (see Section VII-B).

Example 3: Consider the interleaver of period four whose action on binary sequences and whose delay function are depicted in Fig. 2; its characteristic latency is equal to nine. In the following, we will describe an interleaver of period p only by its values in the fundamental interval, as is done for the interleaver of this example.
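The following sketch computes the Section IV-A parameters for a periodic interleaver described by its values on the fundamental interval. The convention is my own reading of the definitions (the output at time t is the input bit of time pi(t), and the delay function is D(t) = t - pi(t)), and the period-4 example is hypothetical.

```python
def make_interleaver(base, period):
    """Extend a fundamental-interval description periodically to the whole time axis.
    base may contain any integers, provided the extension is one-to-one."""
    def pi(t):
        k, j = divmod(t, period)
        return k * period + base[j]
    return pi

def delay_profile(pi, period):
    return [t - pi(t) for t in range(period)]

def parameters(pi, period):
    D = delay_profile(pi, period)
    d_min, d_max = min(D), max(D)
    return {"min_delay": d_min,
            "max_delay": d_max,
            "characteristic_latency": d_max - d_min,
            "causal": d_min >= 0}

if __name__ == "__main__":
    base = [2, 0, 3, 1]              # hypothetical period-4 interleaver
    pi = make_interleaver(base, 4)
    print(delay_profile(pi, 4))      # [-2, 1, -1, 2]
    print(parameters(pi, 4))
```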

B. Interleaver Equivalence

A formal definition of equivalence classes of interleavers, not introduced in classical theory, will prove useful in our framework. We say that two interleavers are equivalent if one differs from the other only by a pure delay of the input or output sequence, i.e., if there exists a pair of integers describing these delays that relates the two permutations. We will denote an equivalence class of interleavers by a single symbol. It can be easily proved that the characteristic latency and the period are invariant for all interleavers belonging to the same class. It is also useful to introduce the subclass composed of all interleavers that are equivalent to a given one by a pure delay of the output sequence only: such interleavers are said to be equivalent up to translation.

Example 4: Consider the interleaver of Example 3 and two interleavers of period four equivalent to it. Their delay functions differ from the original one by constant shifts, and the characteristic latency is still equal to nine for both.

C. Inverse Interleaver

Given an interleaver, its inverse interleaver is defined by the inverse permutation of the time axis. The delay functions of the two are tied to each other: the inverse has the same period and characteristic latency as the original interleaver, and its minimum and maximum delays are the negatives of the original maximum and minimum delays, respectively.

D. Deinterleavers and Latency

The action of an interleaver must be inverted at the receiver side by a deinterleaver, such that any sequence that enters the cascade is returned as a delayed version of the original one (in general, we admit a negative delay). This delay is the latency, i.e., the delay introduced in the transmission chain by the interleaver/deinterleaver pair. Given an interleaver, its inverse is certainly a deinterleaver for it, and it yields zero latency. Moreover, all the elements of the equivalence class of the inverse are deinterleavers for it. The following simple lemma holds.

Lemma 1: Given an interleaver with a certain period and characteristic latency, and its inverse, all deinterleavers for it are characterized by permutations that differ from the inverse by a pure delay, i.e., they belong to the equivalence class of the inverse. All of them have the same period and characteristic latency. The latency introduced by an interleaver/deinterleaver pair is equal to the corresponding delay.

Proof: If a permutation is a deinterleaver for the interleaver, its composition with the interleaver must be a pure delay; this forces the deinterleaver to be a delayed version of the inverse permutation, and the introduced latency to be equal to that delay. Clearly, all the elements of the class have the same period and characteristic latency as the inverse.

Example 5: Given the interleaver of Example 3 and its inverse, the pair introduces zero latency. Delayed versions of the inverse are also deinterleavers for it, each pair introducing a latency equal to the corresponding delay.

E. Causal Canonical Interleavers

For practical purposes, an interleaver must be physically realizable, or causal, i.e., its delay function must be nonnegative at all times. To each interleaver with a certain minimum delay (possibly negative if the interleaver is not causal), we can associate the (equivalent causal) canonical interleaver, obtained by delaying the output so that the minimum delay becomes zero; its maximum delay then equals the characteristic latency. (It is clear that an infinite number of other interleavers belonging to the same equivalence class could have been chosen as canonical but, for simplicity, we have chosen the only one in the subclass equivalent up to translation.)

Example 6: Let us consider an interleaver whose minimum delay differs from zero. Its canonical interleaver has minimum delay equal to zero and maximum delay equal to the characteristic latency.

F. Canonical Deinterleaver and Minimum Latency

Given a causal canonical interleaver and its inverse, we know that all deinterleavers belong to the equivalence class of the inverse and have the same characteristic latency. Among them, we consider the equivalent causal canonical interleaver associated with the inverse, and call it the (equivalent causal) canonical deinterleaver. By definition, it has minimum delay equal to zero. The most interesting feature of the canonical deinterleaver regards the latency, and is clarified by the following corollary.

Corollary 1: Given a causal canonical interleaver with a certain characteristic latency and its causal canonical deinterleaver, the latency introduced by the pair is minimum and is equal to the characteristic latency: the deinterleaved sequence is a replica of the input sequence delayed by the characteristic latency.

Proof: By definition, the canonical deinterleaver is the causal canonical interleaver associated with the inverse of the canonical interleaver, and is obtained from the inverse by a pure delay equal to the characteristic latency.
From Lemma 1, it then follows that the latency introduced by the pair is equal to the characteristic latency.

Example 7: Given the interleaver and the causal canonical interleaver of Example 6, the inverse of the canonical interleaver can be computed, and the causal canonical deinterleaver is its canonical version. The latency introduced by the pair is equal to the characteristic latency: for a generic sequence at the output of the cascade, every bit is a replica of the corresponding input bit delayed by the characteristic latency.
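A sketch of the Section IV-E and IV-F constructions, under the same assumed convention as before (output at time t equals input at time pi(t), delay D(t) = t - pi(t)): it builds the causal canonical interleaver of a hypothetical period-4 interleaver and its canonical deinterleaver, and checks that the cascade reproduces the input with a latency equal to the characteristic latency.

```python
def periodic(base, p):
    """pi(k*p + j) = k*p + base[j]; the residues of base mod p must form a
    permutation so that pi is a bijection of the time axis."""
    return lambda t: (t // p) * p + base[t % p]

def delays(pi, p):
    return [t - pi(t) for t in range(p)]

def canonical(pi, p):
    """Equivalent causal canonical interleaver: minimum delay shifted to zero."""
    d_min = min(delays(pi, p))
    return lambda t: pi(t + d_min)

def inverse_base(base, p):
    """Fundamental-interval description of the inverse time-axis permutation."""
    res = {base[j] % p: j for j in range(p)}
    return [u - base[res[u % p]] + res[u % p] for u in range(p)]

if __name__ == "__main__":
    base, p = [2, 0, 3, 1], 4                      # hypothetical period-4 interleaver
    pi = periodic(base, p)
    ell = max(delays(pi, p)) - min(delays(pi, p))  # characteristic latency
    pic = canonical(pi, p)                         # causal canonical interleaver
    pic_base = [pic(j) for j in range(p)]
    sigma = canonical(periodic(inverse_base(pic_base, p), p), p)  # canonical deinterleaver
    # the cascade x -> interleave -> deinterleave reproduces x delayed by ell
    assert all(pic(sigma(t)) == t - ell for t in range(-40, 40))
    print("minimum latency of the pair =", ell)
```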

Fig. 3. Interleaver code.

This corollary coincides with the results presented in [1, Section II], where it was shown that the latency introduced by the interleaver/deinterleaver (there called unscrambler) pair is equal to the maximum delay (called the encoding delay) when the minimum delay is equal to zero. It also coincides with Theorem 1 of [32] on latency (there called minimum total delay).

The latency is important for system applications having stringent delay requirements. The previous corollary shows that the minimum latency introduced by a causal interleaver/deinterleaver pair is equal to the characteristic latency. Practical motivations can lead to latencies greater than this minimum value (see Section VII-B for the two-register implementation of block interleavers). For this reason, in the turbo code literature, the minimum latency and the period are often confused.

V. THE INPUT/OUTPUT INTERLEAVER CODE AND ITS COMPLEXITY

The study of the state space of a causal interleaver is intuitive and does not need a specific theory. However, in this paper we also study noncausal interleavers, which are important for some applications, for example, turbo code schemes. For this reason, we introduce the new concept of input/output interleaver code.

The code formed by the set of all output sequences of an interleaver with unconstrained input sequences spans all possible sequences. As a consequence, it has a trivial state space at any time. Given an interleaver, we introduce instead the (input/output) interleaver code, defined as the set of all input/output interleaver sequence pairs, where the input sequences span all possible sequences (see Fig. 3). Every codeword is a bi-infinite sequence of pairs, each pair composed of the input and the output bit at a given time. A minimal trellis for this code will have two bits labeling every branch; the number of branches exiting any state will be equal to two for all times only for causal interleavers.

Example 8: Consider an interleaver with a small period. Three particular sequences, each pairing an input bit with the time at which it is output, belong to the interleaver code, and the generators of the code are all possible shifted versions of those three sequences. The minimal trellis for the interleaver code is depicted in Fig. 4. Any branch is labeled by two bits, the input and the output bit at that time. Since the interleaver is not causal, the number of branches exiting a state is equal to two only at some times.

For the connections between the state space of the interleaver code and that of a real machine able to exactly realize the interleaver input/output relationship, see Sections V-C and VII.

A. Interleaver Trellis Complexity

For causal interleavers, it is well known and intuitive that the state space size is constant, because at each time one bit comes in and one bit goes out. When more general (also noncausal) interleavers are considered, the following general theorem holds for the state profile of the interleaver code.

Theorem 1: For every interleaver code, the state space dimension at any time is the sum of the cardinalities of the two sets described below.

Proof: We can use the definition of the state space given in Section II. The sequences that are all zero with the exception of a single input bit and of the corresponding output bit form a set of generators for the interleaver code. At a given time, the only generators that contribute to the state space are those whose support straddles that time; they are linearly independent, and counting them gives the two contributions described next.

The state space of an interleaver code at time t is composed of two components: the first consists of the input bits that must be kept in memory because they will be output in the future, while the second consists of the bits that have been output in the past and must be kept in
memory until the time at which the corresponding input bit arrives.

Example 9: For the interleaver of the previous example, the two components can be computed at each time of the fundamental interval, and their sum gives the state profile of the interleaver code.
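As an illustration of the two components just described, the following sketch computes the state profile of the interleaver code of a block interleaver built from a length-N permutation (block interleavers are formally introduced in Section VII-B). The convention is my own assumption: output position i of each block carries the input bit at position perm[i]; the permutations in the example are hypothetical.

```python
def interleaver_code_state_profile(perm):
    """State dimensions at boundaries 0..N for the block interleaver built from
    the length-N permutation perm (output i of each block carries input perm[i])."""
    N = len(perm)
    out_time = [0] * N
    for i, j in enumerate(perm):
        out_time[j] = i                  # input j is released at time out_time[j]
    profile = []
    for t in range(N + 1):
        waiting     = sum(1 for j in range(t) if out_time[j] >= t)  # stored for later output
        anticipated = sum(1 for i in range(t) if perm[i] >= t)      # output before arrival
        profile.append(waiting + anticipated)
    return profile

if __name__ == "__main__":
    print(interleaver_code_state_profile([0, 1, 2, 3]))  # identity: trivial profile
    print(interleaver_code_state_profile([3, 2, 1, 0]))  # reversal: [0, 2, 4, 2, 0]
```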

Fig. 4. Minimal trellis for the interleaver code of Example 8.

B. The Trellis Complexity of a Causal Interleaver

For a causal, realizable interleaver, the delay function is always nonnegative, and the previous Theorem 1 becomes the following corollary.

Corollary 2: For a causal interleaver, the state space dimension of the interleaver code is constant with time.

Proof: By the definition of causal interleaver, no bit is output before it is received, so the second component of Theorem 1 is empty. Consider the set of input bits kept in memory at a given time. There are two possibilities: either the arriving bit is output immediately, and the set is unchanged; or the arriving bit joins the set and the bit to be output leaves it. In both cases, the cardinality of the set is constant.

A causal interleaver thus has a constant number of states at each time: we will call the constant state space dimension the constraint length of the interleaver. As a consequence, the trellis complexity parameters of the interleaver code of a causal interleaver follow immediately from the constraint length.

The interleavers belonging to an equivalence class have another invariant (other than the period and the characteristic latency).

Lemma 2: Any interleaver belonging to a given equivalence class has a canonical interleaver with the same constraint length.

Proof: The canonical interleavers associated with two equivalent interleavers have delay functions that differ only by a shift of the time axis, so that computing the sets of Theorem 1 for the two of them gives state spaces of the same size.

Example 10: Consider a noncausal interleaver of period two. Computing the state profile of its interleaver code through Theorem 1 and comparing it with that of its equivalent causal canonical interleaver shows that the latter has a constant profile, equal to the constraint length.

C. Minimal Encoder of a Causal Interleaver

Given a causal interleaver, we are interested in the construction of a finite-state machine able to realize its input/output relationship. In analogy with the code terminology, we will call it an encoder for the interleaver. If the interleaver is not causal, an encoder can be built for a causal equivalent. Following [34, Section III-C], an encoder can be viewed as the set of all the admissible input and output sequence pairs of the interleaver, i.e., as a code over the product alphabet. Formally, we could then build the minimal state space of this code (as in [7]) and a minimal (systematic) encoder for it (as in [34]), which exactly realizes the input/output relationship. This means that a minimal (with the minimum possible number of states) encoder for a causal interleaver has a state space in one-to-one correspondence with the state space of the interleaver code.
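The minimal encoder idea can be simulated directly. The sketch below (not the paper's encoder) runs a causal periodic interleaver with a single buffer, prefilled with the bits pending at time zero, and shows that the buffer occupancy stays constant and equal to the constraint length, which also equals the average delay; the period-4 causal interleaver is hypothetical, and the convention is the same as in the earlier sketches.

```python
def run_causal(base, p, n_steps):
    """Simulate pi(k*p + j) = k*p + base[j] (assumed causal: all delays >= 0)
    with a single buffer; return the occupancy after each step and the average delay."""
    pi = lambda t: (t // p) * p + base[t % p]
    delay = [t - pi(t) for t in range(p)]
    assert min(delay) >= 0, "interleaver must be causal"
    d_max = max(delay)
    # steady-state fill: inputs from before time 0 still waiting to be output
    buf = {pi(t): 0 for t in range(d_max) if pi(t) < 0}
    occupancy = []
    for t in range(n_steps):
        buf[t] = t               # the bit arriving now (tagged with its time index)
        buf.pop(pi(t))           # the bit scheduled for output at time t
        occupancy.append(len(buf))
    return occupancy, sum(delay) / p

if __name__ == "__main__":
    base = [-1, -3, 2, 0]        # hypothetical causal period-4 interleaver
    occ, avg_delay = run_causal(base, 4, 24)
    print(set(occ), "average delay =", avg_delay)  # constant occupancy, equal to 2
```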

An important consequence of Corollary 2 is that a minimal encoder for a causal interleaver does not need to keep memory of all past bits, but only of a constant number of them at each time. They are the input bits that must be kept in memory to be released in the future, and their number is constant because, in a causal interleaver, at any time an input bit arrives and one of the input bits in memory, or the arriving bit itself, is output. See Section VII for more details on the practical implementation of the encoder.

VI. CONNECTIONS AMONG INTERLEAVER PARAMETERS

In this section, we explore the relations between the various interleaver parameters. The connection between the state spaces of an interleaver and its inverse is clarified by the following lemma.

Lemma 3: Given an interleaver and its inverse, the state space profiles of their interleaver codes are equal.

Proof: At any time, there is a one-to-one correspondence between the two components of Theorem 1 computed for the interleaver and those computed for its inverse: a bit stored by the interleaver because it will be output in the future corresponds to a bit anticipated by the inverse, and vice versa.

A strong relation exists between the delay function of an interleaver and its state space.

Theorem 2: Given an interleaver of a given period, the average of the absolute values of the delays is equal to the average of the state profile.

Proof: Consider a position with a certain delay. If the delay is negative, the position belongs to the second component of Theorem 1 for all the following positions up to the arrival of the corresponding input bit. If the delay is positive, the corresponding input bit belongs to the first component for all the previous positions back to its arrival. In both cases, the position increases the sum of the state space dimensions over one period by the absolute value of its delay, and the claim follows.

A. Causal Interleavers

We have seen that for a causal interleaver the state profile is constant and equal to the constraint length. The previous Theorem 2 becomes the following corollary.

Corollary 3: For a causal interleaver, the average delay is equal to the constraint length.

Example 11: Given the interleaver of Example 10, the average absolute delay is 2 and, by Theorem 2, it coincides with the average of the state profile. Given the causal canonical interleaver of the same example, the average absolute delay is again 2 and coincides with the constraint length. Finally, for the interleaver of Example 9, the average absolute delay also coincides with the average of the state profile.

Given a causal interleaver and a causal deinterleaver, the sum of the average delays through the cascade is clearly equal to the introduced latency. The following lemma then follows from Corollary 3.

Lemma 4: The sum of the constraint lengths of a causal interleaver/deinterleaver pair is equal to the latency.

When the pair is the canonical causal interleaver/deinterleaver pair, the introduced latency is equal to the characteristic latency, and we have the following key property.

Corollary 4: The sum of the constraint lengths of a causal canonical interleaver/deinterleaver pair is equal to the characteristic latency.

This corollary coincides with [1, Theorem 4], where it was shown that the sum of the constraint lengths of the interleaving and deinterleaving devices (called minimum storage capacities) is equal to the latency.

Corollary 4 completely clarifies the fundamental role of the characteristic latency of an interleaver. In fact, given an interleaver, its characteristic latency is equal to the following: the difference between the maximum and the minimum delay of the interleaver; the difference between the maximum and the minimum delay of any interleaver equivalent to it; the difference between the maximum and the minimum delay of any deinterleaver for it; the maximum delay of the canonical interleaver associated with it; the maximum delay of the canonical deinterleaver associated with it; the sum of the constraint lengths of the canonical interleaver/deinterleaver pair; and the minimum latency introduced in the transmission chain by an interleaver/deinterleaver pair involving the interleaver or any interleaver equivalent to it.
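Under the same assumed convention as in the previous sketches, the following quick numerical check illustrates Theorem 2 and Corollary 4 on a hypothetical length-5 block interleaver; the canonical constraint lengths are obtained through Corollary 3 (constraint length equal to the average delay of the causal interleaver) rather than by building the encoders explicitly.

```python
from statistics import mean

def checks(perm):
    """Numerically verify Theorem 2 and Corollary 4 for a block interleaver."""
    N = len(perm)
    out_time = [0] * N
    for i, j in enumerate(perm):
        out_time[j] = i
    delay = [t - perm[t] for t in range(N)]
    # state dimensions at the N boundaries of one period (Theorem 1 components)
    dims = [sum(1 for j in range(t) if out_time[j] >= t) +
            sum(1 for i in range(t) if perm[i] >= t) for t in range(N)]
    ell = max(delay) - min(delay)               # characteristic latency
    nu_int   = mean(delay) - min(delay)         # canonical interleaver (Corollary 3)
    nu_deint = max(delay) - mean(delay)         # canonical deinterleaver
    print("average |delay|        :", mean(abs(d) for d in delay))
    print("average state dimension:", mean(dims))
    print("nu_I + nu_D =", nu_int + nu_deint, "  characteristic latency =", ell)

checks([4, 2, 0, 3, 1])   # hypothetical length-5 block interleaver
```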
VII. CONVOLUTIONAL AND BLOCK INTERLEAVERS

The interleaver parameters and properties previously introduced hold for all periodic interleavers. In the following, we will introduce a way to describe general periodic interleavers, which will be called convolutional interleavers, as opposed to the special and important subclass of block interleavers.

A. Convolutional Interleavers

Given a periodic interleaver of period p, a convenient and completely general way to represent the permutation is given in (1).

Fig. 5. Structure of a convolutional interleaver.

In (1), the permutation is expressed through a finite basic permutation of length p and a shift vector of p elements taking integer values. A visual implementation of the convolutional interleaver described in (1) is shown in Fig. 5, where the reader can recognize the similarities with the familiar structure of convolutional interleavers described by Ramsey and Forney in [1] and [2]. The input stream is permuted according to the basic permutation; the jth output of the permutation register is sent to a delay line whose delay is determined by the jth element of the shift vector, and whose output is read and sent out. Notice that in the realization of Fig. 5 the permutation block is always noncausal, except for the case of the identity permutation. As a consequence, the realization is in general not physically realizable. In the next subsection, we will describe a minimal realization of any periodic causal interleaver.

Example 12: A simple class of interleavers stems from the choice of the identity basic permutation; choosing a particular shift vector then yields a classical periodic convolutional interleaver.

Fig. 6. Minimal memory implementation of an interleaver.

1) Implementation of Convolutional Interleavers with Minimal Memory Requirement: We have already proved in Corollary 2 that the minimal memory requirement for the implementation of a causal interleaver is the constraint length, which in turn is equal to the average of the delay profile. An important property that relates the shift vector to the constraint length of causal interleavers is given in the following lemma.

Lemma 5: The constraint length of a causal interleaver of period p can be computed directly from its shift vector.

Proof: For a causal interleaver, we proved in Corollary 3 that the average delay is equal to the constraint length; expressing the average delay through the representation (1) gives the result.

In this section, we describe an implementation with minimal memory requirements of the encoder for a generic periodic causal interleaver. The encoder is realized through a single memory with size equal to the constraint length. The input and output bits at a given time are read and stored with a single Read-Modify-Write operation acting on a single memory cell, whose address is derived as follows (see Fig. 6). At a given time, we read the output bit from a memory cell and write the incoming input bit into it. As a consequence, the successive contents of a given cell are the input bits of time instants forming a chain under the permutation: the permutation induces a partition of the set of integers into equivalence classes, and the constraint length is also the cardinality of this partition. With the previous considerations in mind, we can devise the following algorithm to compute the addresses to be used in the minimal memory interleaver of Fig. 6.

The algorithm proceeds as follows. A running variable is initialized; then, for all time instants of the fundamental interval, if the input symbol is directly transferred to the output, the memory contents are not modified (the address will be conventionally set to 0); otherwise, the address of the cell to be used is determined by the two remaining cases of the algorithm.

Since the interleaver is periodic, the address sequence is also periodic. Its period can be obtained by deriving the expression of the powers of the permutation in the fundamental interval, from which the address sequence follows. To derive the period of the sequence, we must use the cycle decomposition of the basic permutation. For each cycle of the basic permutation, we can define a basic period involving the length of the cycle and its elements; the period of a cycle corresponds to the sum of all the shift vector elements associated with the cycle elements. Finally, it can be proved that the period of the address sequence is obtained from the least common multiple of all the basic periods.

Example 13: As an example, consider a causal periodic permutation whose basic permutation can be decomposed into cycles; computing the basic periods and their least common multiple, and applying the algorithm previously described, we find in fact that the address sequence is periodic with period 12.

Example 14: The parameters of a simple delay permutation can be computed in the same way.

Example 15: The equivalent causal canonical interleaver of Example 10 has parameters from which the least common multiple, and hence the address sequence, follow directly.

B. Block Interleavers

Block interleavers, a particular case of the general class of periodic interleavers previously described, are very important for applications and form the basis of most turbo code schemes. A block interleaver is generated by a permutation of length N, made periodic over the whole time axis, so yielding an infinite permutation as in (2). The period of such an interleaver is clearly equal to the length N of the generating permutation. A block interleaver (apart from the case when the generating permutation is the identity) is not causal and has a nonpositive minimum delay; the maximum delay is nonnegative, and the characteristic latency is their difference. Block interleavers are a particular case of convolutional interleavers, obtained by a specific choice of the parameters in the general representation (1). More generally, a convolutional interleaver with period N is equivalent to a block interleaver if there exists a time shift such that the corresponding set of permuted positions is a set of adjacent numbers; as an example, this happens for a particular form of the shift vector in the representation (1).

Among all possible choices to construct a causal interleaver equivalent to a given block interleaver, two are particularly interesting and will be described in the following.

1) The Canonical Causal Interleaver of a Block Interleaver: For a block interleaver generated by an N-length permutation with a certain minimum delay, we can consider its equivalent causal canonical interleaver, obtained as in Section IV-E. By definition of canonical interleaver, it is the best choice in terms of latency: by using the canonical deinterleaver, the latency is equal to the characteristic latency.

The constraint length of the canonical interleaver is characterized by the following lemma.

Lemma 6: The canonical causal interleaver of a block interleaver with a given minimum delay has constraint length equal to the magnitude of that minimum delay.

Proof: By definition of the canonical interleaver, its delays are those of the block interleaver shifted so that the minimum becomes zero. Computing the set of Theorem 1 for the canonical interleaver then gives the result.

Lemma 6 shows that a minimal encoder for the canonical interleaver needs only as many memory cells as its constraint length. The algorithm previously presented in Section VII-A-1 can be applied to realize it.

2) The Two-Register Causal Interleaver of a Block Interleaver: In practice, to make a block interleaver causal, a different causal interleaver is often used instead of the canonical one. It corresponds to an encoder implementation largely used in practice, which consists of two registers of length N used alternately: one for writing the input bits of the current block, the other for reading the output permuted bits of the preceding block. Clearly, in this case the interleaver has a maximum delay usually larger than the characteristic latency, and then it leads to a nonminimum latency. If the deinterleaver is also realized by the same two-register strategy, the introduced latency is equal to twice the block length.

Example 16: Consider a given block interleaver. Its canonical interleaver has the minimum possible latency, while the two-register causal interleaver has a larger maximum delay. By using the canonical deinterleaver of the canonical interleaver, the pair introduces a latency equal to 7. By using the two-register deinterleaver for the two-register interleaver, the pair introduces a latency equal to 16.

3) The Trellis Complexity of a Block Interleaver: The state profile of noncausal interleavers has been characterized in Theorem 1. For a block interleaver, the minimal trellis has a particular form: it has only one state at the beginning and at the end of its period.

Corollary 5: For a block interleaver, the state space dimension of the interleaver code is zero at the beginning and at the end of each period.

4) Row-by-Column Block Interleavers: Row-by-column interleavers are a class of block interleavers widely used in practice. Given the numbers of rows and columns, the input bits are written row-wise into a matrix and read column-wise, and the corresponding permutation follows. The parameters of a row-by-column interleaver (period, delays, and characteristic latency) are easily computed. The minimal trellis of a row-by-column interleaver is very regular. In particular, for power-of-two dimensions, it is easy to prove that the maximum state complexity is reached in the middle of the permutation, where a known number of bits have been anticipated and a known number of bits will be released in the future.

Example 17: A small row-by-column case yields a permutation of moderate length whose state profile, computed by applying Theorem 1, exhibits this regular behavior.
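A sketch of the row-by-column construction and of its very regular state profile, computed from the two components of Theorem 1 under my assumed convention (output position k of each block carries the input bit written in row k mod M, column k div M); the 4-by-4 case below is only an example.

```python
def row_by_column(M, N):
    """Length M*N permutation: output position k carries the input bit written
    in row k % M, column k // M of an M-by-N array filled row-wise."""
    return [(k % M) * N + (k // M) for k in range(M * N)]

def state_profile(perm):
    n = len(perm)
    out_time = [0] * n
    for i, j in enumerate(perm):
        out_time[j] = i
    return [sum(1 for j in range(t) if out_time[j] >= t) +
            sum(1 for i in range(t) if perm[i] >= t) for t in range(n + 1)]

if __name__ == "__main__":
    perm = row_by_column(4, 4)
    print(perm)
    print(state_profile(perm))   # symmetric, dimension zero at the block edges
```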
VIII. APPLICATION TO TURBO CODES

For simplicity, in this paper we focus on turbo codes, or parallel concatenated convolutional codes (PCCC), of rate 1/3, obtained from the following: two equal binary systematic convolutional encoders of rate 1/2 and a given constraint length, and a block interleaver of a given length. Every information frame is read out in two different orders for the two encoders: the input bits of the first encoder are not permuted, whereas those of the second encoder are permuted by the interleaver. The two encoders are systematic and produce two parity check bits per information bit. The input bits of the second encoder are not transmitted, while the information bits and the two parity check bits are multiplexed from top to bottom. At the end of the information frame, the two encoders are terminated, i.e., driven back to the zero state. As explained in [35], this can be done in a small number of bit times; in this way, a few additional triplets are generated. By termination, the turbo code becomes a block code whose parameters follow from the interleaver length and the termination overhead.

A. The Trellis Complexity of Turbo Codes

To compute the minimal trellis of a turbo code and its complexity measures, one can directly apply the procedure based on the MSGM form of a generator matrix described in Section II. We will consider a sectionalization with one trellis section per transmitted triplet. The following theorem ties the state space of the turbo code to the state profile of the constituent block interleaver.

Theorem 3: Given a turbo code constituted by two constituent encoders of a given constraint length and a block interleaver with a given state profile, the state profile of the turbo code is equal to the state profile of the interleaver code increased by a term accounting for the states of the two constituent encoders.

Proof: Denote by an elementary input sequence a frame with all information bits equal to zero except one, and by the corresponding codeword its turbo-encoded version. These codewords form a set of generators for the turbo code. At a given time, the state space is generated by the projections of the generators whose support straddles that time; counting the maximum number of independent projections, and separating the contribution of the interleaver from that of the two constituent encoders, gives the result.
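The following sketch implements the rate-1/3 PCCC construction described at the beginning of this section. The constituent code here is the common four-state (1, 5/7) recursive systematic encoder, chosen only as a placeholder (the paper does not reproduce the generator polynomials), the length-8 interleaver is hypothetical, and trellis termination (handled in the paper as in [35]) is omitted for brevity.

```python
def rsc_5_7(bits):
    """Rate-1/2 recursive systematic encoder with feedback 1+D+D^2 (octal 7) and
    feedforward 1+D^2 (octal 5). Returns the parity sequence only; the systematic
    output is the input itself."""
    s1 = s2 = 0
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2          # feedback bit
        parity.append(a ^ s2)    # feedforward 1 + D^2
        s1, s2 = a, s1
    return parity

def pccc_encode(info, perm):
    """info: list of N information bits; perm: length-N interleaver permutation.
    Returns the multiplexed (systematic, parity1, parity2) stream of length 3N."""
    assert len(info) == len(perm)
    p1 = rsc_5_7(info)
    p2 = rsc_5_7([info[perm[i]] for i in range(len(perm))])  # permuted input
    out = []
    for u, a, b in zip(info, p1, p2):
        out.extend([u, a, b])
    return out

if __name__ == "__main__":
    info = [1, 0, 1, 1, 0, 0, 1, 0]
    perm = [2, 5, 0, 7, 3, 1, 6, 4]      # hypothetical length-8 block interleaver
    print(pccc_encode(info, perm))
```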

Fig. 7. State profiles of the interleaver code and of the turbo code of Example 18.

Example 18: Consider a turbo code composed of two equal four-state, rate-1/2 convolutional encoders and a block rectangular interleaver. In Fig. 7 we report the state profile of the turbo code and that of the constituent interleaver. The largest difference between the two profiles agrees with Theorem 3.

B. The Average Complexity of Turbo Codes

A uniform interleaver of a given length, introduced in [36], is a probabilistic device that acts as the average of all possible block interleavers of that length. Formally, given a binary input frame of a certain Hamming weight, it maps the frame into each of the output frames of the same weight with equal probability. The following theorem allows one to compute the state profile of a uniform block interleaver of a given length, which coincides with the average of the state profile over all block interleavers of that length.

Theorem 4: A uniform block interleaver of a given length has a state profile that can be expressed in closed form as a function of the time index and of the interleaver length.

Proof: At each time, the profile is the average of the state space dimension over all block interleavers of the given length, and for any block interleaver Theorem 1 holds. Fixing the time index, we first compute the sum of the first component over all permutations: each position contributes a number of times equal to the number of permutations for which it belongs to the corresponding set, and this number is easily counted. The sum of the second component is computed in the same way, by considering the contribution of each position. Adding the two sums and dividing by the number of permutations gives the result.

Note that the state profile can be a noninteger number, because it corresponds to the average of the state profiles of all interleavers of the given length. Even if a uniform interleaver does not really exist, all the theorems of this section are referred to it (instead of to the average over all the interleavers); this way, the theorems look more immediate. This is in fact the reason why the uniform interleaver has become a standard tool for turbo code analysis.

The average of the maximum state complexity over all block interleavers of a given length is easily computed.

Corollary 6: A uniform block interleaver of a given length has a maximum state complexity obtained by evaluating the profile of Theorem 4 at its middle point.

Proof: The maximum of the state profile is reached in the middle point.

Given Theorem 3 and Theorem 4, we can now establish the average of the state profile and the average of the maximum state complexity over all turbo codes employing a block interleaver of a given length.

Corollary 7: The state profile of a turbo code formed by two constituent encoders of a given constraint length and a uniform block interleaver of a given length is obtained by adding the contribution of the constituent encoders to the profile of Theorem 4, and its maximum state complexity follows accordingly.
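Since the uniform interleaver is by definition the average over all length-N block interleavers, its state profile can be estimated by averaging the Theorem 1 profile over randomly drawn permutations. The Monte Carlo sketch below is my own (with hypothetical parameters); its output approaches the closed form of Theorem 4 as the number of trials grows.

```python
import random

def state_profile(perm):
    n = len(perm)
    out_time = [0] * n
    for i, j in enumerate(perm):
        out_time[j] = i
    return [sum(1 for j in range(t) if out_time[j] >= t) +
            sum(1 for i in range(t) if perm[i] >= t) for t in range(n + 1)]

def average_profile(N, trials=2000, seed=1):
    """Monte Carlo estimate of the uniform-interleaver state profile."""
    rng = random.Random(seed)
    acc = [0.0] * (N + 1)
    for _ in range(trials):
        perm = list(range(N))
        rng.shuffle(perm)
        for t, d in enumerate(state_profile(perm)):
            acc[t] += d
    return [a / trials for a in acc]

if __name__ == "__main__":
    prof = average_profile(16)
    print([round(v, 2) for v in prof])   # peaks at mid-frame, zero at the ends
```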
C. Reducing the Trellis Complexity of Turbo Codes

A terminated turbo code is a block code, and its complexity parameters strongly depend on the constituent interleaver, as explained by Theorem 3. It would be of great interest to find a permutation of the code sequences that minimizes the complexity parameters. Finding the best permutation for minimizing turbo code trellis complexity is still an open research problem and, at this stage, we cannot offer the final solution. (Note that, in general, the problem of finding the best permutation is NP-complete [25], [26].) Instead, we will present a permutation choice, based on a result on interleaver code complexity minimization, and show through examples how it can substantially decrease the trellis complexity of the turbo code.

1) Minimizing the Complexity of an Interleaver Code: In this section, we will use coordinate permutations of the interleaver code sequences. Given a sequence of input/output pairs, a permutation acting on it will be denoted by a pair of infinite permutations acting on the input and output components, respectively.

Fig. 8. State and branch profiles for the turbo code of Example 19, evaluated directly and after applying the permutation.

Fig. 9. State and branch profiles for the turbo code of Example 20, evaluated directly and after applying the permutation.

Note that for two equivalent interleavers, there always exists a permutation pair, made of pure delay permutations, relating the corresponding interleaver codes. Given an interleaver and its code, we can look for a permutation pair that minimizes the complexity parameters of the code. The obvious solution for any interleaver is the pair that aligns every output bit with the corresponding input bit: in this case the permuted code is composed of all pairs in which the two components coincide, for any input sequence, and the code has trivial dynamics. Obviously, the symmetric choice yields the same minimization result.

2) Application to the Reduction of the State Complexity of Turbo Codes: Given a turbo code and a code sequence, we will consider permutations of the codeword coordinates that act separately on its components. To reduce the complexity of a turbo code employing a block interleaver, we consider two permutations. They derive directly from those minimizing the complexity of the interleaver code, which were shown above to align the permuted and nonpermuted input bits. (Since the interleaver permutation has length equal to the interleaver size, while the turbo codewords are longer after termination, the permutations are extended by acting as the identity on the terminating positions.)

Example 19: Consider a turbo code composed of the same constituent convolutional encoders of Example 18 and a palindrome interleaver. In Fig. 8 we report the state and branch profiles of the turbo code evaluated directly and after applying the permutation, using the MSGM method of [22] recalled in Section II. The curves obtained after applying the permutation show a significant complexity reduction: the maximum state complexity passes from 16 (65536 states) to 4 (16 states). The same profiles are obtained by applying the second permutation.

Fig. 10. State profiles for the turbo codes of Example 21, evaluated directly and after applying the permutation.

D. The Trellis Complexity of Turbo Codes with Rectangular Interleavers

Impressive results in terms of reduction of turbo code trellis complexity through the application of the two previously introduced permutations can be obtained for the class of row-by-column block interleavers. By using the results obtained in Section VII-B-4 and Theorem 3, we know the maximum state complexity of the turbo code for power-of-two dimensions; by applying either of the two permutations, we obtain a consistent reduction.

Example 20: Consider a turbo code composed of the same constituent convolutional encoders of Example 18 and a block rectangular interleaver. In Fig. 9 we report the state and branch profiles of the turbo code evaluated directly and after applying the permutation, using the MSGM method. The curves obtained after applying the permutation show a significant complexity reduction.

The average branch symbol complexity decreases accordingly, from a large number of branch symbols per information bit to a much smaller one.

Example 21: Consider turbo codes with the same constituent convolutional encoders of Example 18 and rectangular interleavers with different lengths but an equal number of columns. In Fig. 10 we report the state profiles of the two turbo codes, evaluated directly and after applying the permutation. The result is striking: the two curves obtained directly show a dependence of the maximum state complexity on the interleaver length and, in particular, values equal to 18 for the shorter interleaver and to 130 for the longer one. Instead, the two curves obtained after applying the permutation yield a significant complexity reduction and, more important, a maximum state complexity that is independent of the interleaver length. Using the algorithm described in [37], we have computed the free distance of the two turbo codes and verified that it is also independent of the interleaver length; this suggests that a simple increase of the interleaver length may not be beneficial and that, instead, the interleaver in a PCCC construction should be chosen aiming at maximizing the true trellis complexity parameters.

IX. CONCLUSION

In this paper, we have defined the main parameters that characterize an interleaver, extending classical concepts to noncausal interleavers. After introducing the input/output interleaver code, we have evaluated its complexity and explored the connections between the various parameters. We have then focused on causal interleavers and on block interleavers, which are both important for applications. These concepts have been applied to the evaluation of the trellis complexity of turbo codes. We have also faced the problem of finding a time axis permutation that reduces the complexity of turbo codes.

ACKNOWLEDGMENT

The authors wish to thank the Editor and the reviewers for their valuable comments, which helped to refocus and improve the original manuscript. They are also grateful to F. Chiaraluce and G. D. Forney, Jr., for their appropriate comments.

REFERENCES

[1] J. L. Ramsey, "Realization of optimum interleavers," IEEE Trans. Inform. Theory, vol. IT-16, May 1970.
[2] G. D. Forney, Jr., "Burst-correcting codes for the classic bursty channel," IEEE Trans. Commun. Technol., vol. COM-19, Oct. 1971.
[3] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo-codes," in Proc. ICC '93, Geneva, Switzerland, May 1993.
[4] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. Inform. Theory, vol. IT-20, Mar. 1974.
[5] A. Lafourcade and A. Vardy, "Lower bounds on trellis complexity of block codes," IEEE Trans. Inform. Theory, vol. 41, Nov. 1995.
[6] A. Lafourcade and A. Vardy, "Asymptotically good codes have infinite trellis complexity," IEEE Trans. Inform. Theory, vol. 41, Mar. 1995.
[7] G. D. Forney, Jr., and M. D. Trott, "The dynamics of group codes: State spaces, trellis diagrams and canonical encoders," IEEE Trans. Inform. Theory, vol. 39, Sept. 1993.
[8] J. K. Wolf, "Efficient maximum likelihood decoding of linear block codes using a trellis," IEEE Trans. Inform. Theory, vol. IT-24, pp. 76-80, Jan. 1978.
[9] J. L. Massey, "Foundation and methods of channel encoding," in Proc. Int. Conf. Information Theory and Systems, Berlin, Germany, Sept. 1978.
[10] G. D. Forney, Jr., "Coset codes-Part II: Binary lattices and related codes," IEEE Trans. Inform. Theory, vol. 34, Sept. 1988.
[11] D. J. Muder, "Minimal trellises for block codes," IEEE Trans. Inform. Theory, vol. 34, Sept. 1988.
[12] T. Kasami, T. Takata, T. Fujiwara, and S. Lin, "On the optimum bit orders with respect to the state complexity of trellis diagrams for binary linear codes," IEEE Trans. Inform. Theory, vol. 39, Jan. 1993.
[13] T. Kasami, T. Takata, T. Fujiwara, and S. Lin, "On complexity of trellis structure of linear block codes," IEEE Trans. Inform. Theory, vol. 39, May 1993.
[14] A. D. Kot and C. Leung, "On the construction and dimensionality of linear block code trellises," in Proc. 1993 IEEE Int. Symp. Information Theory, p. 291, Jan. 1993.
[15] Y. Berger and Y. Be'ery, "Bounds on the trellis size of linear block codes," IEEE Trans. Inform. Theory, vol. 39, Jan. 1993.
[16] G. D. Forney, Jr., "Dimension/length profiles and trellis complexity of linear block codes," IEEE Trans. Inform. Theory, vol. 40, Nov. 1994.
[17] V. Sidorenko and V. V. Zyablov, "Decoding of convolutional codes using a syndrome trellis," IEEE Trans. Inform. Theory, vol. 40, Sept. 1994.
[18] F. R. Kschischang and G. B. Horn, "A heuristic for ordering a linear block code to minimize trellis state complexity," in Proc. 32nd Annu. Allerton Conf. Communication, Control, and Computing, Sept. 1994.
[19] S. J. Dolinar, Jr., L. L. Ekroot, A. B. Kiely, R. J. McEliece, and W. Lin, "The permutation trellis complexity of linear block codes," in Proc. 32nd Annu. Allerton Conf. Communication, Control, and Computing, Sept. 1994.
[20] F. R. Kschischang and V. Sorokine, "On the trellis structure of block codes," IEEE Trans. Inform. Theory, vol. 41, Nov. 1995.
[21] C. C. Lu and S. H. Huang, "On bit-level trellis complexity of Reed-Muller codes," IEEE Trans. Inform. Theory, vol. 41, Nov. 1995.
[22] R. J. McEliece, "On the BCJR trellis for linear block codes," IEEE Trans. Inform. Theory, vol. 42, July 1996.
[23] A. B. Kiely, S. J. Dolinar, Jr., R. J. McEliece, L. L. Ekroot, and W. Lin, "Trellis decoding complexity of linear block codes," IEEE Trans. Inform. Theory, vol. 42, Nov. 1996.
[24] A. Engelhart, J. Maucher, and V. Sidorenko, "Heuristic algorithms for ordering a linear block code to reduce the number of nodes of the minimal trellis," in Proc. 1998 IEEE Int. Symp. Information Theory, p. 206, Aug. 1998.
[25] G. B. Horn and F. R. Kschischang, "On the intractability of permuting a block code to minimize trellis complexity," IEEE Trans. Inform. Theory, vol. 42, Nov. 1996.
[26] A. Vardy, "The intractability of computing the minimum distance of a code," IEEE Trans. Inform. Theory, vol. 43, Nov. 1997.
[27] R. J. McEliece and W. Lin, "The trellis complexity of convolutional codes," IEEE Trans. Inform. Theory, vol. 42, Nov. 1996.
[28] S. Lin, T. Kasami, T. Fujiwara, and M. Fossorier, Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Norwell, MA: Kluwer, 1998.
[29] A. Lafourcade and A. Vardy, "Optimal sectionalization of a trellis," IEEE Trans. Inform. Theory, vol. 42, May 1996.
[30] G. D. Forney, Jr., "Interleavers," U.S. Patent, Mar. 1972.
[31] S. Benedetto, R. Garello, and G. Montorsi, "The trellis complexity of turbo codes," in Proc. Sixth Communication Theory Mini-Conf., GLOBECOM '97, Phoenix, AZ, Nov. 1997.
[32] K. Andrews, C. Heegard, and D. Kozen, "A theory of interleavers," presented at the 1997 IEEE Int. Symp. Information Theory (Recent Results); also Tech. Rep., Comput. Sci. Dept., Cornell Univ., 1997.
[33] M. Mondin and F. Daneshgaran, "Realization of permutations with minimal delay with applications to turbo coding," in Proc. 1996 IEEE Int. Symp. Information Theory and Its Applications (ISITA '96), 1996.
[34] M. D. Trott, S. Benedetto, R. Garello, and M. Mondin, "Rotational invariance of trellis codes-Part I: Encoders and precoders," IEEE Trans. Inform. Theory, vol. 42, May 1996.
[35] D. Divsalar and F. Pollara, "Turbo codes for deep-space communications," JPL TDA Progress Report 42-120, Feb. 1995.
[36] S. Benedetto and G. Montorsi, "Unveiling turbo codes: Some results on parallel concatenated coding schemes," IEEE Trans. Inform. Theory, vol. 42, Mar. 1996.


More information

SOFT DECISION FANO DECODING OF BLOCK CODES OVER DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM

SOFT DECISION FANO DECODING OF BLOCK CODES OVER DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM Journal of ELECTRICAL ENGINEERING, VOL. 63, NO. 1, 2012, 59 64 SOFT DECISION FANO DECODING OF BLOCK CODES OVER DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM H. Prashantha Kumar Udupi Sripati K. Rajesh

More information

Maximum Likelihood Decoding of Codes on the Asymmetric Z-channel

Maximum Likelihood Decoding of Codes on the Asymmetric Z-channel Maximum Likelihood Decoding of Codes on the Asymmetric Z-channel Pål Ellingsen paale@ii.uib.no Susanna Spinsante s.spinsante@univpm.it Angela Barbero angbar@wmatem.eis.uva.es May 31, 2005 Øyvind Ytrehus

More information

High rate soft output Viterbi decoder

High rate soft output Viterbi decoder High rate soft output Viterbi decoder Eric Lüthi, Emmanuel Casseau Integrated Circuits for Telecommunications Laboratory Ecole Nationale Supérieure des Télécomunications de Bretagne BP 83-985 Brest Cedex

More information

Convolutional Codes. Telecommunications Laboratory. Alex Balatsoukas-Stimming. Technical University of Crete. November 6th, 2008

Convolutional Codes. Telecommunications Laboratory. Alex Balatsoukas-Stimming. Technical University of Crete. November 6th, 2008 Convolutional Codes Telecommunications Laboratory Alex Balatsoukas-Stimming Technical University of Crete November 6th, 2008 Telecommunications Laboratory (TUC) Convolutional Codes November 6th, 2008 1

More information

Exponential Error Bounds for Block Concatenated Codes with Tail Biting Trellis Inner Codes

Exponential Error Bounds for Block Concatenated Codes with Tail Biting Trellis Inner Codes International Symposium on Information Theory and its Applications, ISITA004 Parma, Italy, October 10 13, 004 Exponential Error Bounds for Block Concatenated Codes with Tail Biting Trellis Inner Codes

More information

Turbo Codes are Low Density Parity Check Codes

Turbo Codes are Low Density Parity Check Codes Turbo Codes are Low Density Parity Check Codes David J. C. MacKay July 5, 00 Draft 0., not for distribution! (First draft written July 5, 998) Abstract Turbo codes and Gallager codes (also known as low

More information

Convolutional Codes ddd, Houshou Chen. May 28, 2012

Convolutional Codes ddd, Houshou Chen. May 28, 2012 Representation I, II Representation III, IV trellis of Viterbi decoding Turbo codes Convolutional Codes ddd, Houshou Chen Department of Electrical Engineering National Chung Hsing University Taichung,

More information

Error control of line codes generated by finite Coxeter groups

Error control of line codes generated by finite Coxeter groups Error control of line codes generated by finite Coxeter groups Ezio Biglieri Universitat Pompeu Fabra, Barcelona, Spain Email: e.biglieri@ieee.org Emanuele Viterbo Monash University, Melbourne, Australia

More information

Decomposing Bent Functions

Decomposing Bent Functions 2004 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 49, NO. 8, AUGUST 2003 Decomposing Bent Functions Anne Canteaut and Pascale Charpin Abstract In a recent paper [1], it is shown that the restrictions

More information

On Linear Subspace Codes Closed under Intersection

On Linear Subspace Codes Closed under Intersection On Linear Subspace Codes Closed under Intersection Pranab Basu Navin Kashyap Abstract Subspace codes are subsets of the projective space P q(n), which is the set of all subspaces of the vector space F

More information

An Introduction to Low Density Parity Check (LDPC) Codes

An Introduction to Low Density Parity Check (LDPC) Codes An Introduction to Low Density Parity Check (LDPC) Codes Jian Sun jian@csee.wvu.edu Wireless Communication Research Laboratory Lane Dept. of Comp. Sci. and Elec. Engr. West Virginia University June 3,

More information

ON DISTRIBUTED ARITHMETIC CODES AND SYNDROME BASED TURBO CODES FOR SLEPIAN-WOLF CODING OF NON UNIFORM SOURCES

ON DISTRIBUTED ARITHMETIC CODES AND SYNDROME BASED TURBO CODES FOR SLEPIAN-WOLF CODING OF NON UNIFORM SOURCES 7th European Signal Processing Conference (EUSIPCO 2009) Glasgow, Scotland, August 24-28, 2009 ON DISTRIBUTED ARITHMETIC CODES AND SYNDROME BASED TURBO CODES FOR SLEPIAN-WOLF CODING OF NON UNIFORM SOURCES

More information

arxiv:cs/ v2 [cs.it] 1 Oct 2006

arxiv:cs/ v2 [cs.it] 1 Oct 2006 A General Computation Rule for Lossy Summaries/Messages with Examples from Equalization Junli Hu, Hans-Andrea Loeliger, Justin Dauwels, and Frank Kschischang arxiv:cs/060707v [cs.it] 1 Oct 006 Abstract

More information

Construction of low complexity Array based Quasi Cyclic Low density parity check (QC-LDPC) codes with low error floor

Construction of low complexity Array based Quasi Cyclic Low density parity check (QC-LDPC) codes with low error floor Construction of low complexity Array based Quasi Cyclic Low density parity check (QC-LDPC) codes with low error floor Pravin Salunkhe, Prof D.P Rathod Department of Electrical Engineering, Veermata Jijabai

More information

Structured Low-Density Parity-Check Codes: Algebraic Constructions

Structured Low-Density Parity-Check Codes: Algebraic Constructions Structured Low-Density Parity-Check Codes: Algebraic Constructions Shu Lin Department of Electrical and Computer Engineering University of California, Davis Davis, California 95616 Email:shulin@ece.ucdavis.edu

More information

of the (48,24,12) Quadratic Residue Code? Dept. of Math. Sciences, Isfahan University oftechnology, Isfahan, Iran

of the (48,24,12) Quadratic Residue Code? Dept. of Math. Sciences, Isfahan University oftechnology, Isfahan, Iran On the Pless-Construction and ML Decoding of the (8,,1) Quadratic Residue Code M. Esmaeili?, T. A. Gulliver y, and A. K. Khandani z? Dept. of Math. Sciences, Isfahan University oftechnology, Isfahan, Iran

More information

Performance of Multi Binary Turbo-Codes on Nakagami Flat Fading Channels

Performance of Multi Binary Turbo-Codes on Nakagami Flat Fading Channels Buletinul Ştiinţific al Universităţii "Politehnica" din Timişoara Seria ELECTRONICĂ şi TELECOMUNICAŢII TRANSACTIONS on ELECTRONICS and COMMUNICATIONS Tom 5(65), Fascicola -2, 26 Performance of Multi Binary

More information

On Quadratic Inverses for Quadratic Permutation Polynomials over Integer Rings

On Quadratic Inverses for Quadratic Permutation Polynomials over Integer Rings arxiv:cs/0511060v1 [cs.it] 16 Nov 005 On Quadratic Inverses for Quadratic Permutation Polynomials over Integer Rings Jonghoon Ryu and Oscar Y. Takeshita Dept. of Electrical and Computer Engineering 015

More information

Research on Unequal Error Protection with Punctured Turbo Codes in JPEG Image Transmission System

Research on Unequal Error Protection with Punctured Turbo Codes in JPEG Image Transmission System SERBIAN JOURNAL OF ELECTRICAL ENGINEERING Vol. 4, No. 1, June 007, 95-108 Research on Unequal Error Protection with Punctured Turbo Codes in JPEG Image Transmission System A. Moulay Lakhdar 1, R. Méliani,

More information

Error Correction Methods

Error Correction Methods Technologies and Services on igital Broadcasting (7) Error Correction Methods "Technologies and Services of igital Broadcasting" (in Japanese, ISBN4-339-06-) is published by CORONA publishing co., Ltd.

More information

Binary Convolutional Codes of High Rate Øyvind Ytrehus

Binary Convolutional Codes of High Rate Øyvind Ytrehus Binary Convolutional Codes of High Rate Øyvind Ytrehus Abstract The function N(r; ; d free ), defined as the maximum n such that there exists a binary convolutional code of block length n, dimension n

More information

Soft-Output Trellis Waveform Coding

Soft-Output Trellis Waveform Coding Soft-Output Trellis Waveform Coding Tariq Haddad and Abbas Yongaçoḡlu School of Information Technology and Engineering, University of Ottawa Ottawa, Ontario, K1N 6N5, Canada Fax: +1 (613) 562 5175 thaddad@site.uottawa.ca

More information

Time-invariant LDPC convolutional codes

Time-invariant LDPC convolutional codes Time-invariant LDPC convolutional codes Dimitris Achlioptas, Hamed Hassani, Wei Liu, and Rüdiger Urbanke Department of Computer Science, UC Santa Cruz, USA Email: achlioptas@csucscedu Department of Computer

More information

The Capacity of Finite Abelian Group Codes Over Symmetric Memoryless Channels Giacomo Como and Fabio Fagnani

The Capacity of Finite Abelian Group Codes Over Symmetric Memoryless Channels Giacomo Como and Fabio Fagnani IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 5, MAY 2009 2037 The Capacity of Finite Abelian Group Codes Over Symmetric Memoryless Channels Giacomo Como and Fabio Fagnani Abstract The capacity

More information

New Puncturing Pattern for Bad Interleavers in Turbo-Codes

New Puncturing Pattern for Bad Interleavers in Turbo-Codes SERBIAN JOURNAL OF ELECTRICAL ENGINEERING Vol. 6, No. 2, November 2009, 351-358 UDK: 621.391.7:004.052.4 New Puncturing Pattern for Bad Interleavers in Turbo-Codes Abdelmounaim Moulay Lakhdar 1, Malika

More information

Random Redundant Soft-In Soft-Out Decoding of Linear Block Codes

Random Redundant Soft-In Soft-Out Decoding of Linear Block Codes Random Redundant Soft-In Soft-Out Decoding of Linear Block Codes Thomas R. Halford and Keith M. Chugg Communication Sciences Institute University of Southern California Los Angeles, CA 90089-2565 Abstract

More information

Turbo Compression. Andrej Rikovsky, Advisor: Pavol Hanus

Turbo Compression. Andrej Rikovsky, Advisor: Pavol Hanus Turbo Compression Andrej Rikovsky, Advisor: Pavol Hanus Abstract Turbo codes which performs very close to channel capacity in channel coding can be also used to obtain very efficient source coding schemes.

More information

The Super-Trellis Structure of Turbo Codes

The Super-Trellis Structure of Turbo Codes 2212 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 46, NO 6, SEPTEMBER 2000 The Super-Trellis Structure of Turbo Codes Marco Breiling, Student Member, IEEE, and Lajos Hanzo, Senior Member, IEEE Abstract

More information

Distributed Arithmetic Coding

Distributed Arithmetic Coding Distributed Arithmetic Coding Marco Grangetto, Member, IEEE, Enrico Magli, Member, IEEE, Gabriella Olmo, Senior Member, IEEE Abstract We propose a distributed binary arithmetic coder for Slepian-Wolf coding

More information

A t super-channel. trellis code and the channel. inner X t. Y t. S t-1. S t. S t+1. stages into. group two. one stage P 12 / 0,-2 P 21 / 0,2

A t super-channel. trellis code and the channel. inner X t. Y t. S t-1. S t. S t+1. stages into. group two. one stage P 12 / 0,-2 P 21 / 0,2 Capacity Approaching Signal Constellations for Channels with Memory Λ Aleksandar Kav»cić, Xiao Ma, Michael Mitzenmacher, and Nedeljko Varnica Division of Engineering and Applied Sciences Harvard University

More information

OPTIMUM fixed-rate scalar quantizers, introduced by Max

OPTIMUM fixed-rate scalar quantizers, introduced by Max IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, VOL 54, NO 2, MARCH 2005 495 Quantizer Design for Channel Codes With Soft-Output Decoding Jan Bakus and Amir K Khandani, Member, IEEE Abstract A new method of

More information

Practical Polar Code Construction Using Generalised Generator Matrices

Practical Polar Code Construction Using Generalised Generator Matrices Practical Polar Code Construction Using Generalised Generator Matrices Berksan Serbetci and Ali E. Pusane Department of Electrical and Electronics Engineering Bogazici University Istanbul, Turkey E-mail:

More information

Shannon meets Wiener II: On MMSE estimation in successive decoding schemes

Shannon meets Wiener II: On MMSE estimation in successive decoding schemes Shannon meets Wiener II: On MMSE estimation in successive decoding schemes G. David Forney, Jr. MIT Cambridge, MA 0239 USA forneyd@comcast.net Abstract We continue to discuss why MMSE estimation arises

More information

Construction of coset-based low rate convolutional codes and their application to low rate turbo-like code design

Construction of coset-based low rate convolutional codes and their application to low rate turbo-like code design Construction of coset-based low rate convolutional codes and their application to low rate turbo-like code design Durai Thirupathi and Keith M Chugg Communication Sciences Institute Dept of Electrical

More information

ANALYSIS OF CONTROL PROPERTIES OF CONCATENATED CONVOLUTIONAL CODES

ANALYSIS OF CONTROL PROPERTIES OF CONCATENATED CONVOLUTIONAL CODES CYBERNETICS AND PHYSICS, VOL 1, NO 4, 2012, 252 257 ANALYSIS OF CONTROL PROPERTIES OF CONCATENATED CONVOLUTIONAL CODES M I García-Planas Matemàtica Aplicada I Universitat Politècnica de Catalunya Spain

More information

Computation of Information Rates from Finite-State Source/Channel Models

Computation of Information Rates from Finite-State Source/Channel Models Allerton 2002 Computation of Information Rates from Finite-State Source/Channel Models Dieter Arnold arnold@isi.ee.ethz.ch Hans-Andrea Loeliger loeliger@isi.ee.ethz.ch Pascal O. Vontobel vontobel@isi.ee.ethz.ch

More information

Successive Cancellation Decoding of Single Parity-Check Product Codes

Successive Cancellation Decoding of Single Parity-Check Product Codes Successive Cancellation Decoding of Single Parity-Check Product Codes Mustafa Cemil Coşkun, Gianluigi Liva, Alexandre Graell i Amat and Michael Lentmaier Institute of Communications and Navigation, German

More information

Algebraic Soft-Decision Decoding of Reed Solomon Codes

Algebraic Soft-Decision Decoding of Reed Solomon Codes IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 49, NO. 11, NOVEMBER 2003 2809 Algebraic Soft-Decision Decoding of Reed Solomon Codes Ralf Koetter, Member, IEEE, Alexer Vardy, Fellow, IEEE Abstract A polynomial-time

More information

1e

1e B. J. Frey and D. J. C. MacKay (1998) In Proceedings of the 35 th Allerton Conference on Communication, Control and Computing 1997, Champaign-Urbana, Illinois. Trellis-Constrained Codes Brendan J. Frey

More information

Low-complexity error correction in LDPC codes with constituent RS codes 1

Low-complexity error correction in LDPC codes with constituent RS codes 1 Eleventh International Workshop on Algebraic and Combinatorial Coding Theory June 16-22, 2008, Pamporovo, Bulgaria pp. 348-353 Low-complexity error correction in LDPC codes with constituent RS codes 1

More information

Turbo Code Design for Short Blocks

Turbo Code Design for Short Blocks Turbo Code Design for Short Blocks Thomas Jerkovits, Balázs Matuz Abstract This work considers the design of short parallel turbo codes (PTCs) with block lengths in the order of (a few) hundred code bits.

More information

Minimum Distance and Convergence Analysis of Hamming-Accumulate-Acccumulate Codes

Minimum Distance and Convergence Analysis of Hamming-Accumulate-Acccumulate Codes 1 Minimum Distance and Convergence Analysis of Hamming-Accumulate-Acccumulate Codes Alexandre Graell i Amat and Raphaël Le Bidan arxiv:0905.4545v1 [cs.it] 28 May 2009 Abstract In this letter we consider

More information

CODE DECOMPOSITION IN THE ANALYSIS OF A CONVOLUTIONAL CODE

CODE DECOMPOSITION IN THE ANALYSIS OF A CONVOLUTIONAL CODE Bol. Soc. Esp. Mat. Apl. n o 42(2008), 183 193 CODE DECOMPOSITION IN THE ANALYSIS OF A CONVOLUTIONAL CODE E. FORNASINI, R. PINTO Department of Information Engineering, University of Padua, 35131 Padova,

More information

Capacity of Memoryless Channels and Block-Fading Channels With Designable Cardinality-Constrained Channel State Feedback

Capacity of Memoryless Channels and Block-Fading Channels With Designable Cardinality-Constrained Channel State Feedback 2038 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 9, SEPTEMBER 2004 Capacity of Memoryless Channels and Block-Fading Channels With Designable Cardinality-Constrained Channel State Feedback Vincent

More information

1 Introduction to information theory

1 Introduction to information theory 1 Introduction to information theory 1.1 Introduction In this chapter we present some of the basic concepts of information theory. The situations we have in mind involve the exchange of information through

More information

Tail-Biting Trellis Realizations and Local Reductions

Tail-Biting Trellis Realizations and Local Reductions Noname manuscript No. (will be inserted by the editor) Tail-Biting Trellis Realizations and Local Reductions Heide Gluesing-Luerssen Received: date / Accepted: date Abstract This paper investigates tail-biting

More information

The Distribution of Cycle Lengths in Graphical Models for Turbo Decoding

The Distribution of Cycle Lengths in Graphical Models for Turbo Decoding The Distribution of Cycle Lengths in Graphical Models for Turbo Decoding Technical Report UCI-ICS 99-10 Department of Information and Computer Science University of California, Irvine Xianping Ge, David

More information

A Hyper-Trellis based Turbo Decoder for Wyner-Ziv Video Coding

A Hyper-Trellis based Turbo Decoder for Wyner-Ziv Video Coding A Hyper-Trellis based Turbo Decoder for Wyner-Ziv Video Coding Arun Avudainayagam, John M. Shea and Dapeng Wu Wireless Information Networking Group (WING) Department of Electrical and Computer Engineering

More information

1.6: Solutions 17. Solution to exercise 1.6 (p.13).

1.6: Solutions 17. Solution to exercise 1.6 (p.13). 1.6: Solutions 17 A slightly more careful answer (short of explicit computation) goes as follows. Taking the approximation for ( N K) to the next order, we find: ( N N/2 ) 2 N 1 2πN/4. (1.40) This approximation

More information

Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information

Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information 204 IEEE International Symposium on Information Theory Capacity of the Discrete Memoryless Energy Harvesting Channel with Side Information Omur Ozel, Kaya Tutuncuoglu 2, Sennur Ulukus, and Aylin Yener

More information

Construction of Protographs for QC LDPC Codes With Girth Larger Than 12 1

Construction of Protographs for QC LDPC Codes With Girth Larger Than 12 1 Construction of Protographs for QC LDPC Codes With Girth Larger Than 12 1 Sunghwan Kim, Jong-Seon No School of Electrical Eng. & Com. Sci. Seoul National University, Seoul, Korea Email: {nodoubt, jsno}@snu.ac.kr

More information

ONE approach to improving the performance of a quantizer

ONE approach to improving the performance of a quantizer 640 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY 2006 Quantizers With Unim Decoders Channel-Optimized Encoders Benjamin Farber Kenneth Zeger, Fellow, IEEE Abstract Scalar quantizers

More information

Factor Graphs and Message Passing Algorithms Part 1: Introduction

Factor Graphs and Message Passing Algorithms Part 1: Introduction Factor Graphs and Message Passing Algorithms Part 1: Introduction Hans-Andrea Loeliger December 2007 1 The Two Basic Problems 1. Marginalization: Compute f k (x k ) f(x 1,..., x n ) x 1,..., x n except

More information

Codes on graphs. Chapter Elementary realizations of linear block codes

Codes on graphs. Chapter Elementary realizations of linear block codes Chapter 11 Codes on graphs In this chapter we will introduce the subject of codes on graphs. This subject forms an intellectual foundation for all known classes of capacity-approaching codes, including

More information

CS6304 / Analog and Digital Communication UNIT IV - SOURCE AND ERROR CONTROL CODING PART A 1. What is the use of error control coding? The main use of error control coding is to reduce the overall probability

More information

Extended MinSum Algorithm for Decoding LDPC Codes over GF (q)

Extended MinSum Algorithm for Decoding LDPC Codes over GF (q) Extended MinSum Algorithm for Decoding LDPC Codes over GF (q) David Declercq ETIS ENSEA/UCP/CNRS UMR-8051, 95014 Cergy-Pontoise, (France), declercq@ensea.fr Marc Fossorier Dept. Electrical Engineering,

More information

Codes for Partially Stuck-at Memory Cells

Codes for Partially Stuck-at Memory Cells 1 Codes for Partially Stuck-at Memory Cells Antonia Wachter-Zeh and Eitan Yaakobi Department of Computer Science Technion Israel Institute of Technology, Haifa, Israel Email: {antonia, yaakobi@cs.technion.ac.il

More information

Example of Convolutional Codec

Example of Convolutional Codec Example of Convolutional Codec Convolutional Code tructure K k bits k k k n- n Output Convolutional codes Convoltuional Code k = number of bits shifted into the encoder at one time k= is usually used!!

More information

SIPCom8-1: Information Theory and Coding Linear Binary Codes Ingmar Land

SIPCom8-1: Information Theory and Coding Linear Binary Codes Ingmar Land SIPCom8-1: Information Theory and Coding Linear Binary Codes Ingmar Land Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.1 Overview Basic Concepts of Channel Coding Block Codes I:

More information

TURBO codes, which were introduced in 1993 by Berrou

TURBO codes, which were introduced in 1993 by Berrou 140 IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 16, NO. 2, FEBRUARY 1998 Turbo Decoding as an Instance of Pearl s Belief Propagation Algorithm Robert J. McEliece, Fellow, IEEE, David J. C. MacKay,

More information

CHAPTER 8 Viterbi Decoding of Convolutional Codes

CHAPTER 8 Viterbi Decoding of Convolutional Codes MIT 6.02 DRAFT Lecture Notes Fall 2011 (Last update: October 9, 2011) Comments, questions or bug reports? Please contact hari at mit.edu CHAPTER 8 Viterbi Decoding of Convolutional Codes This chapter describes

More information

Modern Coding Theory. Daniel J. Costello, Jr School of Information Theory Northwestern University August 10, 2009

Modern Coding Theory. Daniel J. Costello, Jr School of Information Theory Northwestern University August 10, 2009 Modern Coding Theory Daniel J. Costello, Jr. Coding Research Group Department of Electrical Engineering University of Notre Dame Notre Dame, IN 46556 2009 School of Information Theory Northwestern University

More information

Code Concatenation and Advanced Codes

Code Concatenation and Advanced Codes Contents 11 Code Concatenation and Advanced Codes 58 11.1 Code Concatenation....................................... 60 11.1.1 Serial Concatenation................................... 60 11.1. Parallel Concatenation..................................

More information

Binary Convolutional Codes

Binary Convolutional Codes Binary Convolutional Codes A convolutional code has memory over a short block length. This memory results in encoded output symbols that depend not only on the present input, but also on past inputs. An

More information

Minimum Distance Bounds for Multiple-Serially Concatenated Code Ensembles

Minimum Distance Bounds for Multiple-Serially Concatenated Code Ensembles Minimum Distance Bounds for Multiple-Serially Concatenated Code Ensembles Christian Koller,Jörg Kliewer, Kamil S. Zigangirov,DanielJ.Costello,Jr. ISIT 28, Toronto, Canada, July 6 -, 28 Department of Electrical

More information

Upper Bounds on the Capacity of Binary Intermittent Communication

Upper Bounds on the Capacity of Binary Intermittent Communication Upper Bounds on the Capacity of Binary Intermittent Communication Mostafa Khoshnevisan and J. Nicholas Laneman Department of Electrical Engineering University of Notre Dame Notre Dame, Indiana 46556 Email:{mhoshne,

More information

Encoder. Encoder 2. ,...,u N-1. 0,v (0) ,u 1. ] v (0) =[v (0) 0,v (1) v (1) =[v (1) 0,v (2) v (2) =[v (2) (a) u v (0) v (1) v (2) (b) N-1] 1,...

Encoder. Encoder 2. ,...,u N-1. 0,v (0) ,u 1. ] v (0) =[v (0) 0,v (1) v (1) =[v (1) 0,v (2) v (2) =[v (2) (a) u v (0) v (1) v (2) (b) N-1] 1,... Chapter 16 Turbo Coding As noted in Chapter 1, Shannon's noisy channel coding theorem implies that arbitrarily low decoding error probabilities can be achieved at any transmission rate R less than the

More information

Tackling Intracell Variability in TLC Flash Through Tensor Product Codes

Tackling Intracell Variability in TLC Flash Through Tensor Product Codes Tackling Intracell Variability in TLC Flash Through Tensor Product Codes Ryan Gabrys, Eitan Yaakobi, Laura Grupp, Steven Swanson, Lara Dolecek University of California, Los Angeles University of California,

More information

Introduction to Binary Convolutional Codes [1]

Introduction to Binary Convolutional Codes [1] Introduction to Binary Convolutional Codes [1] Yunghsiang S. Han Graduate Institute of Communication Engineering, National Taipei University Taiwan E-mail: yshan@mail.ntpu.edu.tw Y. S. Han Introduction

More information

Security in Locally Repairable Storage

Security in Locally Repairable Storage 1 Security in Locally Repairable Storage Abhishek Agarwal and Arya Mazumdar Abstract In this paper we extend the notion of locally repairable codes to secret sharing schemes. The main problem we consider

More information

Convolutional Coding LECTURE Overview

Convolutional Coding LECTURE Overview MIT 6.02 DRAFT Lecture Notes Spring 2010 (Last update: March 6, 2010) Comments, questions or bug reports? Please contact 6.02-staff@mit.edu LECTURE 8 Convolutional Coding This lecture introduces a powerful

More information

Optimum Binary-Constrained Homophonic Coding

Optimum Binary-Constrained Homophonic Coding Optimum Binary-Constrained Homophonic Coding Valdemar C. da Rocha Jr. and Cecilio Pimentel Communications Research Group - CODEC Department of Electronics and Systems, P.O. Box 7800 Federal University

More information

Lecture 12. Block Diagram

Lecture 12. Block Diagram Lecture 12 Goals Be able to encode using a linear block code Be able to decode a linear block code received over a binary symmetric channel or an additive white Gaussian channel XII-1 Block Diagram Data

More information

Optimal Block-Type-Decodable Encoders for Constrained Systems

Optimal Block-Type-Decodable Encoders for Constrained Systems IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 49, NO. 5, MAY 2003 1231 Optimal Block-Type-Decodable Encoders for Constrained Systems Panu Chaichanavong, Student Member, IEEE, Brian H. Marcus, Fellow, IEEE

More information

Physical Layer and Coding

Physical Layer and Coding Physical Layer and Coding Muriel Médard Professor EECS Overview A variety of physical media: copper, free space, optical fiber Unified way of addressing signals at the input and the output of these media:

More information

Bounds on Mutual Information for Simple Codes Using Information Combining

Bounds on Mutual Information for Simple Codes Using Information Combining ACCEPTED FOR PUBLICATION IN ANNALS OF TELECOMM., SPECIAL ISSUE 3RD INT. SYMP. TURBO CODES, 003. FINAL VERSION, AUGUST 004. Bounds on Mutual Information for Simple Codes Using Information Combining Ingmar

More information

Code Concatenation and Low-Density Parity Check Codes

Code Concatenation and Low-Density Parity Check Codes Contents 11 Code Concatenation and Low-Density Parity Check Codes 254 11.1CodeConcatenation...256 11.1.1 SerialConcatenation...256 11.1.2 ParallelConcatenation...256 11.1.3 ultilevelcoding...257 11.2Interleaving...258

More information

Reed-Solomon codes. Chapter Linear codes over finite fields

Reed-Solomon codes. Chapter Linear codes over finite fields Chapter 8 Reed-Solomon codes In the previous chapter we discussed the properties of finite fields, and showed that there exists an essentially unique finite field F q with q = p m elements for any prime

More information

Punctured Convolutional Codes Revisited: the Exact State Diagram and Its Implications

Punctured Convolutional Codes Revisited: the Exact State Diagram and Its Implications Punctured Convolutional Codes Revisited: the Exact State iagram and Its Implications Jing Li Tiffany) Erozan Kurtas epartment of Electrical and Computer Engineering Seagate Research Lehigh University Bethlehem

More information