On the exact bit error probability for Viterbi decoding of convolutional codes
Irina E. Bocharova, Florian Hug, Rolf Johannesson, and Boris D. Kudryashov

Dept. of Information Systems, St. Petersburg Univ. of Information Technologies, Mechanics and Optics, St. Petersburg, Russia
Dept. of Electrical and Information Technology, Lund University, P.O. Box 118, SE-221 00 Lund, Sweden

Abstract—Forty years ago, Viterbi published upper bounds on both the first-event (burst) error and bit error probabilities for Viterbi decoding of convolutional codes. These bounds were derived using a signal flow chart technique for convolutional encoders. In 1995, Best et al. published a formula for the exact bit error probability for Viterbi decoding of the rate R = 1/2, memory m = 1 convolutional encoder with generator matrix G(D) = (1, 1 + D) when used to communicate over the binary symmetric channel. Their method was later extended to the rate R = 1/2, memory m = 2 generator matrix G(D) = (1 + D + D^2, 1 + D^2) by Lentmaier et al. In this paper, we shall use a different approach to derive the exact bit error probability. We derive and solve a general matrix recurrent equation connecting the average information weights at the current and previous steps of the Viterbi decoding. A closed-form expression for the exact bit error probability is given. Our general solution yields the expressions for the exact bit error probability obtained by Best et al. (m = 1) and Lentmaier et al. (m = 2) as special cases.

I. INTRODUCTION

In 1971, Viterbi [1] published a now classical upper bound on the bit error probability P_b for Viterbi decoding when convolutional codes are used to communicate over the binary symmetric channel (BSC). This bound was derived from the extended path weight enumerators obtained using a signal flow chart technique for convolutional encoders. Van de Meeberg [2] used a clever observation to tighten Viterbi's bound.
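Before turning to exact expressions, note that for any fixed encoder the bit error probability over the BSC can always be estimated by direct simulation, which provides a sanity check on closed-form results. The sketch below is our own illustration, not part of the paper: it runs hard-decision Viterbi decoding of the memory m = 1 encoder G(D) = (1, 1 + D), transmits the all-zero sequence, resolves metric ties by coin-flipping (as in the analysis below), and counts every decoded information bit equal to 1 as a bit error; the function name and parameters are hypothetical.

```python
import random

def viterbi_ber_bsc(p, n_bits, seed=1):
    """Estimate the Viterbi-decoding bit error probability for the
    rate-1/2, memory-1 encoder G(D) = (1, 1 + D) over a BSC(p)."""
    rng = random.Random(seed)
    # All-zero code sequence transmitted; the received 2-tuples are
    # therefore i.i.d. BSC error patterns.
    received = [(rng.random() < p, rng.random() < p) for _ in range(n_bits)]
    INF = float("inf")
    metric = [0, INF]            # cumulative metrics; start in state 0
    paths = [[], []]             # surviving information sequences
    for r1, r2 in received:
        new_metric, new_paths = [0, 0], [None, None]
        for nxt in (0, 1):       # next state equals the input bit u
            u = nxt
            cands = []
            for s in (0, 1):     # predecessor state s = previous input
                # branch output is (u, u XOR s); Hamming branch metric
                d = (u != r1) + ((u ^ s) != r2)
                cands.append((metric[s] + d, s))
            best = min(m for m, _ in cands)
            m, s = rng.choice([c for c in cands if c[0] == best])  # coin-flip ties
            new_metric[nxt] = m
            new_paths[nxt] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    decoded = paths[0] if metric[0] <= metric[1] else paths[1]
    return sum(decoded) / n_bits  # every decoded 1 is a bit error
```

Since the all-zero sequence is transmitted, the returned fraction of decoded ones directly estimates P_b; it vanishes for p = 0 and grows with the crossover probability p.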
The challenging problem of deriving an expression for the exact bit error probability was first addressed by Morrissey in 1970 [3] for a suboptimum feedback decoding technique. For the memory m = 1 convolutional encoder with generator matrix G(D) = (1, 1 + D), he obtained an expression which coincides with the Viterbi decoding bit error probability published in 1995 by Best et al. [4]. They used a more general approach based on considering a Markov chain of the so-called metric states of the Viterbi decoder [5]. Their new method looked simpler to generalize to larger memory encoders and to other channel models, which was later done in a number of papers. In particular, the extension to the memory m = 2 convolutional encoder with generator matrix G(D) = (1 + D + D^2, 1 + D^2) was given by Lentmaier et al. [6].
We use a different approach to derive the exact bit error probability for Viterbi decoding of minimal convolutional encoders when used to communicate over the BSC. A matrix recurrent equation will be derived and solved for the average information weights at the current and previous states that are connected by the branches decided by the Viterbi decoder during the current step. In this presentation we consider, for notational convenience, only rate R = 1/2 minimal convolutional feed-forward encoders realized in controller canonical form. The extension to rate R = 1/c is trivial, and to rate R = b/c as well as to feedback encoders straightforward. Before proceeding we would like to emphasize that the bit error probability is an encoder property, not a code property. Assume that the all-zero sequence is transmitted over the BSC. Let W_t(σ) denote the weight of the information sequence corresponding to the code sequence decided by the Viterbi decoder at state σ at time t. If its initial value W_0(σ) is known, then the random process W_t(σ) is a function of the random process of the received c-tuples r_i, i = 0, 1, ..., t − 1.
Thus, the ensemble {r_i, i = 0, 1, ..., t − 1} determines the ensemble {W_i(σ), i = 1, 2, ..., t}. Our goal is to determine the mathematical expectation of the random variable W_t(σ) over this ensemble, since for minimal convolutional encoders the bit error probability can be computed as the limit

P_b = lim_{t→∞} E[W_t(σ = 0)] / t   (1)

assuming that this limit exists.

II. A RECURRENT EQUATION FOR THE INFORMATION WEIGHTS

Since we have chosen realizations in controller canonical form, the encoder states can be represented by the m-tuples of the inputs of the shift register, that is, σ_t = u_{t−1} u_{t−2} ... u_{t−m}. In the sequel we usually denote these encoder states σ, σ ∈ {0, 1, ..., 2^m − 1}. During the decoding step at time t + 1 the Viterbi algorithm computes the vector of cumulative Viterbi branch metrics μ_{t+1} = (μ_{t+1}(0) μ_{t+1}(1) ... μ_{t+1}(2^m − 1)) using the vector μ_t at time t and the received c-tuple r_t. In our analysis it is convenient to normalize the metrics such that the cumulative metric at the all-zero state is zero at every time instant, that is,
we subtract the value μ_t(0) from μ_t(0), μ_t(1), ..., μ_t(2^m − 1) and introduce the cumulative normalized branch metric vector

φ_t = (φ_t(1) φ_t(2) ... φ_t(2^m − 1)) = (μ_t(1) − μ_t(0)  μ_t(2) − μ_t(0) ... μ_t(2^m − 1) − μ_t(0))   (2)

For a memory m = 1 encoder we obtain the scalar

φ_t = φ_t(1)   (3)

while for a memory m = 2 encoder we have the vector

φ_t = (φ_t(1) φ_t(2) φ_t(3))   (4)

Fig. 1. The different trellis sections for the G(D) = (1, 1 + D) generator matrix.

First we consider the rate R = 1/2, memory m = 1 minimal encoder with generator matrix G(D) = (1, 1 + D). In Fig. 1 we show the different trellis sections corresponding to the M = 5 different normalized cumulative metrics φ_t ∈ {−2, −1, 0, 1, 2} and the four different received tuples r_t = 00, 01, 10, 11. The bold branches correspond to the branches decided by the Viterbi decoder at time t + 1. When two branches entering the same state have the same state metric we have a tie, which we, in our analysis, resolve by coin-flipping. The normalized cumulative metric Φ_t is a 5-state Markov
chain with transition probability matrix Φ = (φ_jk), where

φ_jk = Pr(φ_{t+1} = φ^(k) | φ_t = φ^(j))   (5)

From the trellis sections in Fig. 1 we obtain the transition probability matrix Φ (6), whose rows and columns are indexed by the metric states φ^(j) and whose entries are polynomials in the BSC crossover probability p and q = 1 − p. Let p_t denote the probability distribution of the M different metric values of Φ_t, that is, φ_t ∈ {φ^(1), φ^(2), ..., φ^(M)}. The stationary distribution of the normalized cumulative metrics Φ_t is denoted p = (p_1 p_2 ... p_M) and is determined as the solution of, for example, the first M − 1 equations of (7) together with (8):

p Φ = p   (7)

Σ_{i=1}^{M} p_i = 1   (8)

For our m = 1 convolutional encoder, solving (7) and (8) yields the stationary distribution p as a vector of rational functions in p (9).
Now we return to the information weight W_t(σ). From the trellis sections in Fig. 1 it is easily seen how the information weights are transformed during one step of the Viterbi decoding. Transitions from state 0 or state 1 to state 0 decided by the Viterbi decoder without tiebreaking do not cause an increment of the information weights; we simply copy the information weight from the state at the root of the branch to the state at the terminus of the branch, since such a transition corresponds to û_t = 0. Having a transition from state 0 to state 1 decided by the Viterbi decoder without tiebreaking, we obtain the information weight at state 1 and time t + 1 by incrementing the information weight at state 0 and time t, since such a transition corresponds to û_t = 1. Similarly, coming from state 1 we obtain the information weight at state 1 and time t + 1 by incrementing the information weight at state 1 and time t. If we have tiebreaking, we use the arithmetic average of the information weights at the two states 0 and 1 at time t in our updating procedure.
Now we introduce some notation for rate R = 1/c, memory m encoders. The values of the random variable W_t(σ) are distributed over the cumulative metrics φ_t according to the vector p_t.
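Computing the stationary distribution from (7) and (8) is routine linear algebra once Φ is known. As a numerical sketch, with a small hypothetical 3-state chain standing in for the 5-state chain of the m = 1 example (the matrix entries below are ours, chosen only for illustration):

```python
import numpy as np

# Hypothetical 3-state transition matrix standing in for Phi (rows sum to 1).
Phi = np.array([[0.5, 0.3, 0.2],
                [0.1, 0.6, 0.3],
                [0.2, 0.2, 0.6]])
M = Phi.shape[0]

# Solve p Phi = p together with sum(p) = 1: keep M - 1 balance equations
# (Phi^T - I) p = 0 and replace the last one by the normalization constraint.
Aeq = Phi.T - np.eye(M)
Aeq[-1, :] = 1.0
rhs = np.zeros(M)
rhs[-1] = 1.0
p = np.linalg.solve(Aeq, rhs)
```

For the paper's encoders the same system, solved symbolically in the crossover probability, yields the rational functions referred to above.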
Let w_t be the vector of the information weights at time t, split both on the 2^m states σ_t and the M metric values φ_t; that is, we can write w_t as the following vector of M 2^m entries:

w_t = (w_t(φ^(1), σ = 0) ... w_t(φ^(M), σ = 0)  w_t(φ^(1), σ = 1) ... w_t(φ^(M), σ = 1) ... w_t(φ^(1), σ = 2^m − 1) ... w_t(φ^(M), σ = 2^m − 1))   (10)

The vector w_t describes the dynamics of the information weights when we proceed along the trellis. It satisfies the following recurrent equation

w_{t+1} = w_t A + b_t,   b_{t+1} = b_t Π   (11)

where A is an M 2^m × M 2^m nonnegative matrix and Π is an M 2^m × M 2^m stochastic matrix. The matrix A is the linear part of the affine transformation of the information weights, and it can be determined from the trellis sections in Fig. 1. The vector b_t of length M 2^m describes the increments of the information weights. For simplicity, we choose the initial values

w_0 = 0   (12)

and

b_0 = (0_{1,M} 0_{1,M} ... 0_{1,M} p p ... p)   (13)

with 2^{m−1} all-zero blocks followed by 2^{m−1} copies of p, where 0_{1,M} denotes the all-zero row vector of length M and p is the stationary probability distribution of the normalized cumulative metrics Φ_t.
The following two examples illustrate how A can be obtained from the trellis sections in Fig. 1. Consider first a situation without tiebreaking; for example, the trellis section in the upper left corner of Fig. 1, with received tuple r_t = 00. Following the bold branches, we first copy with probability Pr(r_t = 00) = q^2 the information weight from state σ_t = 0 to state σ_{t+1} = 0, and obtain the information weight at σ_{t+1} = 1 as the information weight at σ_t = 0 plus 1, since û_t = 1 for this branch. We have now determined four of the entries in A, namely, the two entries for σ_t = 0 and the metric pair (φ_t, φ_{t+1}) of this trellis section, which both are q^2, and the two corresponding entries for σ_t = 1, which both are 0. Notice that, when we determine an entry of A for a given value of φ_{t+1}, we have to add the probabilities of all trellis sections leading to that value of φ_{t+1}. Next we include tiebreaking and choose a trellis section in which a tie has to be resolved at σ_{t+1} = 1.
By following the bold branch from σ_t = 0 to σ_{t+1} = 0 we conclude that the information weight at state σ_{t+1} = 0 is a copy of the information weight at state σ_t = 0. Then we follow the two bold branches to state σ_{t+1} = 1, where the information weight is the arithmetic average of the information weights at states σ_t = 0 and σ_t = 1, plus 1. We have now determined another four entries of A, namely, the
entry for σ_t = 0 and σ_{t+1} = 0, which equals the probability of the received tuple of this trellis section; the two entries for σ_{t+1} = 1, which are both half that probability (the tie is resolved by coin-flipping); and, finally, the entry for σ_t = 1 and σ_{t+1} = 0, which is 0 since there is no bold branch between σ_t = 1 and σ_{t+1} = 0 in this trellis section. Proceeding in this manner yields the matrix A (14) for the memory m = 1 convolutional encoder with generator matrix G(D) = (1, 1 + D). This matrix A is specified at the bottom of this page.
The second equation in (11) determines the dynamics of the information weight increments. We notice that

Σ_{i=0}^{2^m − 1} A_ij = Φ,   j = 0, 1, ..., 2^m − 1   (15)

Moreover, we should only have increments when entering the states σ_{t+1} whose first digit is a 1. Thus, the first half of the entries of b_t are 0s for t = 0, 1, ..., and it follows that we can choose Π to be the block-diagonal matrix

Π = diag(Φ, Φ, ..., Φ)   (16)

with 2^m copies of Φ on the diagonal. In the next section we shall solve the recurrent matrix equation.

III. SOLVING THE RECURRENT EQUATION

By iterating the recurrent equation (11) and using the initial values (12) and (13) we obtain

w_{t+1} = b_0 A^t + b_0 Π A^{t−1} + ... + b_0 Π^t   (17)

From (17) it follows that

lim_{t→∞} w_t / t = lim_{t→∞} (1/t) Σ_{j=0}^{t−1} b_0 Π^j A^{t−1−j} = b_0 Π^∞ A^∞   (18)

where Π^∞ and A^∞ denote the limits of the sequences Π^t and A^t when t tends to infinity. We also used that, if a sequence is convergent to a finite limit, then it is Cesàro-summable to the same limit. The limit Π^∞ can be written as

Π^∞ = diag(Φ^∞, Φ^∞, ..., Φ^∞)   (19)

where Φ^∞ is a matrix whose rows are identical and equal to the stationary distribution p. Then (18) can be written as

lim_{t→∞} w_t / t = b_∞ A^∞   (20)

where

b_∞ = b_0 Π^∞ = (0_{1,M} 0_{1,M} ... 0_{1,M} p p ... p)   (21)

We have the following important properties of A = (a_ij): nonnegativity, that is, a_ij ≥ 0, 1 ≤ i, j ≤ M 2^m; and, for any convolutional encoder, a block structure A = (A_ij), i, j = 0, 1, ..., 2^m − 1, where the block A_ij corresponds to the transitions from σ_t = i to σ_{t+1} = j. Summing over the blocks columnwise yields

Σ_{i=0}^{2^m − 1} A_ij = Φ,   j = 0, 1, ..., 2^m − 1   (22)

From (22) it follows that
the vector

e_L = (p p ... p)   (23)

with 2^m copies of p satisfies

e_L A = e_L   (24)

and, hence, e_L is a left eigenvector of A with eigenvalue λ = 1. From the nonnegativity of A it follows [7] that λ = 1 is a maximal eigenvalue of A. For the memory m = 1 encoder, the matrix A in (14) has the block structure

A = (A_00 A_01; A_10 A_11)

where the entries of the four M × M blocks, indexed by the metric pairs (φ_t, φ_{t+1}), are polynomials in p and q determined by the bold branches of the trellis sections in Fig. 1. Let e_R be the right eigenvector corresponding to the eigenvalue λ = 1 and let
e_R be normalized such that e_L e_R = 1. If e_L is unique up to normalization, then it follows [7] that

lim_{t→∞} A^t = e_R e_L   (25)

Combining (20), (21), (23), and (25) yields

lim_{t→∞} w_t / t = (b_∞ e_R) (p p ... p)   (26)

From (1) it follows that the expression for the exact bit error probability can be written as

P_b = lim_{t→∞} E[W_t(σ = 0)] / t = lim_{t→∞} (1/t) Σ_{i=1}^{M} w_t(φ^(i), σ = 0) = lim_{t→∞} (1/t) w_t(σ = 0) 1_{1,M}^T   (27)

where w_t(σ = 0) denotes the subvector of w_t corresponding to σ = 0 and 1_{1,M} is the all-one row vector of length M. In other words, to get the expression for P_b we sum up the first M components of the vector on the right side of (26) or, equivalently, we multiply this vector by the vector (1_{1,M} 0_{1,M} ... 0_{1,M})^T. Then we obtain

P_b = (0_{1,M} 0_{1,M} ... 0_{1,M} p p ... p) e_R   (28)

In summary, for rate R = 1/2, memory m convolutional encoders we can determine the exact bit error probability P_b for Viterbi decoding, when communicating over the BSC, as follows: Construct the set of metric states and find the stationary probability distribution p. Construct the matrix A analogously to the memory m = 1 example given above and compute its right eigenvector e_R, normalized according to (p p ... p) e_R = 1. Compute P_b using (28).

IV. EXAMPLES

First we consider the rate R = 1/2, memory m = 1 convolutional code with generator matrix G(D) = (1, 1 + D). Its set of metric states is {−2, −1, 0, 1, 2} and the stationary probability distribution p is given by (9). From the trellis sections in Fig. 1 we obtain the matrix A in (14). Its normalized right eigenvector e_R is a vector of rational functions in p and q. Finally, inserting p and e_R into (28) yields a closed-form expression for P_b which coincides with the bit error probability formula in [4].
Next we consider the rate R = 1/2, memory m = 2 convolutional encoder with generator matrix G(D) = (1 + D + D^2, 1 + D^2). In Fig. 2 we show the four trellis sections for the all-zero metric state φ_t = (0 0 0), together with the corresponding metric states φ_{t+1} at time t + 1.

Fig. 2. Four of the in total 124 trellis sections for the G(D) = (1 + D + D^2, 1 + D^2) generator matrix.
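The summary procedure above can be exercised end-to-end on toy matrices that satisfy the structural properties used in the derivation (block-column sums of A equal to Φ, left eigenvector (p p ... p)). The matrices below are our own stand-ins, not the paper's, and the resulting number only checks the mechanics of the recipe, not any actual bit error probability:

```python
import numpy as np

# Toy stand-ins with M = 3 metric states and m = 1 (so two encoder states).
Phi = np.array([[0.5, 0.3, 0.2],
                [0.1, 0.6, 0.3],
                [0.2, 0.2, 0.6]])
p = np.array([10/43, 16/43, 17/43])      # stationary distribution: p @ Phi = p
A = 0.5 * np.kron(np.ones((2, 2)), Phi)  # every block A_ij = Phi / 2, so the
                                         # blocks in each block-column sum to Phi

eL = np.concatenate([p, p])              # left eigenvector as in (23)
vals, vecs = np.linalg.eig(A)
k = np.argmin(np.abs(vals - 1.0))        # maximal eigenvalue lambda = 1
eR = np.real(vecs[:, k])
eR = eR / (eL @ eR)                      # normalize so that e_L e_R = 1
b_inf = np.concatenate([np.zeros(3), p]) # zero blocks, then copies of p
Pb = b_inf @ eR                          # the closed form (28)
```

Here the blocks of A are chosen so symmetrically that e_R is proportional to the all-one vector and the final scalar equals 1/2; with the paper's A and p, the same steps reproduce the closed-form P_b.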
Completing the set of trellis sections yields 31 different normalized metric states. Thus, we have 124 different trellis sections. The matrix A is a block matrix that consists of eight different nontrivial blocks, corresponding to the eight branches in each trellis section. We obtain the 124 × 124 matrix

A = (A_00 0_{31,31} A_02 0_{31,31}; A_10 0_{31,31} A_12 0_{31,31}; 0_{31,31} A_21 0_{31,31} A_23; 0_{31,31} A_31 0_{31,31} A_33)   (31)

where 0_{31,31} denotes the all-zero 31 × 31 matrix.
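Alternatively to the eigenvector computation, the recurrent equation (11) itself can be iterated numerically; the average information weight per step then converges to the same value as the closed form of Section III. A sketch with small toy matrices of the required structure (our own illustrative choices, not the paper's A, Π, and b_0):

```python
import numpy as np

# Toy stand-ins with M = 3 metric states and m = 1: A and Pi are 6 x 6,
# and the two blocks in each block-column of A sum to Phi, as in (15).
Phi = np.array([[0.5, 0.3, 0.2],
                [0.1, 0.6, 0.3],
                [0.2, 0.2, 0.6]])
p = np.array([10/43, 16/43, 17/43])     # stationary distribution: p @ Phi = p

A = 0.5 * np.kron(np.ones((2, 2)), Phi) # every block A_ij = Phi / 2
Pi = np.kron(np.eye(2), Phi)            # block-diagonal, as in (16)
b = np.concatenate([np.zeros(3), p])    # increments only for states starting with 1
w = np.zeros(6)

T = 400
for _ in range(T):                      # the recursion (11)
    w, b = w @ A + b, b @ Pi

rate = w[:3].sum() / T                  # information weight at state 0, per step
```

With these matrices the per-step rate approaches 1/2, matching what the closed-form recipe gives for them; with the true A, Π, and b_0, the limit is the exact P_b.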
Fig. 3. Exact bit error probability for rate R = 1/2: memory m = 1, G(D) = (1, 1 + D); memory m = 2, G(D) = (1 + D + D^2, 1 + D^2); and memory m = 3, G(D) = (1 + D^2 + D^3, 1 + D + D^2 + D^3), as functions of the BSC crossover probability p.

Similarly to (16) and (13) we have

Π = diag(Φ, Φ, Φ, Φ)   (32)

and

b_∞ = (0_{1,31} 0_{1,31} p p)   (33)

where Φ is the 31 × 31 transition probability matrix for the 31-state Markov chain of the normalized cumulative metrics Φ_t and p is the stationary probability distribution for Φ_t. Following the method of calculating the exact bit error probability in Section III, we obtain a closed-form rational expression for P_b which coincides with the result previously obtained by Lentmaier et al. [6].
Finally, we consider the rate R = 1/2, memory m = 3 convolutional encoder with generator matrix G(D) = (1 + D^2 + D^3, 1 + D + D^2 + D^3). If we consider all trellis sections, we find that the number of normalized metric states is 433. Then we obtain the matrix A as an 8 × 8 block matrix with 433 × 433 blocks, of which only the 16 blocks A_ij corresponding to the branches σ_t = i → σ_{t+1} = j of the trellis are nonzero; all remaining blocks are all-zero 433 × 433 matrices. Since this example is considerably more complex, we computed the exact bit error probability following the method in Section III only numerically. The result is shown in Fig. 3 and compared with the curves for the previously discussed memory m = 1 and m = 2 encoders.

ACKNOWLEDGMENT

This work was supported in part by the Swedish Research Council.

REFERENCES

[1] A. J. Viterbi, "Convolutional codes and their performance in communication systems," IEEE Trans. Inf. Theory, vol. IT-19, no. 5, Oct. 1971.
[2] L. Van de Meeberg, "A tightened upper bound on the error probability of binary convolutional codes with Viterbi decoding," IEEE Trans. Inf. Theory, vol. IT-20, no. 3, May 1974.
[3] T. N. Morrissey, Jr., "Analysis of decoders for convolutional codes by stochastic sequential machine methods," IEEE Trans. Inf. Theory, vol. IT-16, no. 4, Jul. 1970.
[4] M. R. Best, M. V. Burnashev, Y. Levy, A. Rabinovich, P. C. Fishburn, A. R.
Calderbank, and D. J. Costello, Jr., "On a technique to calculate the exact performance of a convolutional code," IEEE Trans. Inf. Theory, vol. 41, no. 2, Mar. 1995.
[5] M. V. Burnashev and D. L. Kon, "Symbol error probability for convolutional codes," Problems of Information Transmission, vol. 26, no. 4, 1990.
[6] M. Lentmaier, D. V. Truhachev, and K. S. Zigangirov, "Analytic expressions for the bit error probabilities of rate-1/2 memory 2 convolutional encoders," IEEE Trans. Inf. Theory, vol. 50, no. 6, Jun. 2004.
[7] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge, U.K.: Cambridge University Press, 1990.
More informationLogic. Combinational. inputs. outputs. the result. system can
Digital Electronics Combinational Logic Functions Digital logic circuits can be classified as either combinational or sequential circuits. A combinational circuit is one where the output at any time depends
More informationOn the Block Error Probability of LP Decoding of LDPC Codes
On the Block Error Probability of LP Decoding of LDPC Codes Ralf Koetter CSL and Dept. of ECE University of Illinois at Urbana-Champaign Urbana, IL 680, USA koetter@uiuc.edu Pascal O. Vontobel Dept. of
More informationSTA 4273H: Statistical Machine Learning
STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 11 Project
More informationDecoding the Tail-Biting Convolutional Codes with Pre-Decoding Circular Shift
Decoding the Tail-Biting Convolutional Codes with Pre-Decoding Circular Shift Ching-Yao Su Directed by: Prof. Po-Ning Chen Department of Communications Engineering, National Chiao-Tung University July
More informationLecture 4 : Introduction to Low-density Parity-check Codes
Lecture 4 : Introduction to Low-density Parity-check Codes LDPC codes are a class of linear block codes with implementable decoders, which provide near-capacity performance. History: 1. LDPC codes were
More informationSearching for Voltage Graph-Based LDPC Tailbiting Codes with Large Girth
1 Searching for Voltage Graph-Based LDPC Tailbiting Codes with Large Girth Irina E. Bocharova, Florian Hug, Student Member, IEEE, Rolf Johannesson, Fellow, IEEE, Boris D. Kudryashov, and Roman V. Satyukov
More information4488 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 10, OCTOBER /$ IEEE
4488 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 10, OCTOBER 2008 List Decoding of Biorthogonal Codes the Hadamard Transform With Linear Complexity Ilya Dumer, Fellow, IEEE, Grigory Kabatiansky,
More informationTitle. Author(s)Tsai, Shang-Ho. Issue Date Doc URL. Type. Note. File Information. Equal Gain Beamforming in Rayleigh Fading Channels
Title Equal Gain Beamforming in Rayleigh Fading Channels Author(s)Tsai, Shang-Ho Proceedings : APSIPA ASC 29 : Asia-Pacific Signal Citationand Conference: 688-691 Issue Date 29-1-4 Doc URL http://hdl.handle.net/2115/39789
More informationIN this paper, we consider the capacity of sticky channels, a
72 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 1, JANUARY 2008 Capacity Bounds for Sticky Channels Michael Mitzenmacher, Member, IEEE Abstract The capacity of sticky channels, a subclass of insertion
More informationNon-binary LDPC decoding using truncated messages in the Walsh-Hadamard domain
Non-binary LDPC decoding using truncated messages in the Walsh-Hadamard domain Jossy Sayir University of Cambridge Abstract The Extended Min-Sum EMS algorithm for nonbinary low-density parity-check LDPC
More informationEE 229B ERROR CONTROL CODING Spring 2005
EE 229B ERROR CONTROL CODING Spring 2005 Solutions for Homework 1 1. Is there room? Prove or disprove : There is a (12,7) binary linear code with d min = 5. If there were a (12,7) binary linear code with
More informationDigital Communication Systems ECS 452. Asst. Prof. Dr. Prapun Suksompong 5.2 Binary Convolutional Codes
Digital Communication Systems ECS 452 Asst. Prof. Dr. Prapun Suksompong prapun@siit.tu.ac.th 5.2 Binary Convolutional Codes 35 Binary Convolutional Codes Introduced by Elias in 1955 There, it is referred
More informationQuasi-Cyclic Asymptotically Regular LDPC Codes
2010 IEEE Information Theory Workshop - ITW 2010 Dublin Quasi-Cyclic Asymptotically Regular LDPC Codes David G. M. Mitchell, Roxana Smarandache, Michael Lentmaier, and Daniel J. Costello, Jr. Dept. of
More informationMismatched Multi-letter Successive Decoding for the Multiple-Access Channel
Mismatched Multi-letter Successive Decoding for the Multiple-Access Channel Jonathan Scarlett University of Cambridge jms265@cam.ac.uk Alfonso Martinez Universitat Pompeu Fabra alfonso.martinez@ieee.org
More informationPunctured Convolutional Codes Revisited: the Exact State Diagram and Its Implications
Punctured Convolutional Codes Revisited: the Exact State iagram and Its Implications Jing Li Tiffany) Erozan Kurtas epartment of Electrical and Computer Engineering Seagate Research Lehigh University Bethlehem
More informationA Proof of the Converse for the Capacity of Gaussian MIMO Broadcast Channels
A Proof of the Converse for the Capacity of Gaussian MIMO Broadcast Channels Mehdi Mohseni Department of Electrical Engineering Stanford University Stanford, CA 94305, USA Email: mmohseni@stanford.edu
More informationVariable-Rate Universal Slepian-Wolf Coding with Feedback
Variable-Rate Universal Slepian-Wolf Coding with Feedback Shriram Sarvotham, Dror Baron, and Richard G. Baraniuk Dept. of Electrical and Computer Engineering Rice University, Houston, TX 77005 Abstract
More informationInformation and Entropy
Information and Entropy Shannon s Separation Principle Source Coding Principles Entropy Variable Length Codes Huffman Codes Joint Sources Arithmetic Codes Adaptive Codes Thomas Wiegand: Digital Image Communication
More informationApproximately achieving the feedback interference channel capacity with point-to-point codes
Approximately achieving the feedback interference channel capacity with point-to-point codes Joyson Sebastian*, Can Karakus*, Suhas Diggavi* Abstract Superposition codes with rate-splitting have been used
More informationMaximum Likelihood Sequence Detection
1 The Channel... 1.1 Delay Spread... 1. Channel Model... 1.3 Matched Filter as Receiver Front End... 4 Detection... 5.1 Terms... 5. Maximum Lielihood Detection of a Single Symbol... 6.3 Maximum Lielihood
More informationUNIT MEMORY CONVOLUTIONAL CODES WITH MAXIMUM DISTANCE
UNIT MEMORY CONVOLUTIONAL CODES WITH MAXIMUM DISTANCE ROXANA SMARANDACHE Abstract. Unit memory codes and in particular, partial unit memory codes are reviewed. Conditions for the optimality of partial
More informationAnalysis of Soft Decision Decoding of Interleaved Convolutional Codes over Burst Channels
nalysis of Soft Decision Decoding of nterleaved Convolutional Codes over Burst Channels Cecilio Pimentel and Leandro Chaves Rego Communications Research Group - CODEC Department of Electronics and Systems
More informationFeedback Capacity of a Class of Symmetric Finite-State Markov Channels
Feedback Capacity of a Class of Symmetric Finite-State Markov Channels Nevroz Şen, Fady Alajaji and Serdar Yüksel Department of Mathematics and Statistics Queen s University Kingston, ON K7L 3N6, Canada
More informationBoolean Algebra and Digital Logic 2009, University of Colombo School of Computing
IT 204 Section 3.0 Boolean Algebra and Digital Logic Boolean Algebra 2 Logic Equations to Truth Tables X = A. B + A. B + AB A B X 0 0 0 0 3 Sum of Products The OR operation performed on the products of
More informationChannel Coding and Interleaving
Lecture 6 Channel Coding and Interleaving 1 LORA: Future by Lund www.futurebylund.se The network will be free for those who want to try their products, services and solutions in a precommercial stage.
More informationConstruction of LDPC codes
Construction of LDPC codes Telecommunications Laboratory Alex Balatsoukas-Stimming Technical University of Crete July 1, 2009 Telecommunications Laboratory (TUC) Construction of LDPC codes July 1, 2009
More informationTurbo Codes for Deep-Space Communications
TDA Progress Report 42-120 February 15, 1995 Turbo Codes for Deep-Space Communications D. Divsalar and F. Pollara Communications Systems Research Section Turbo codes were recently proposed by Berrou, Glavieux,
More informationEfficient Decoding of Permutation Codes Obtained from Distance Preserving Maps
2012 IEEE International Symposium on Information Theory Proceedings Efficient Decoding of Permutation Codes Obtained from Distance Preserving Maps Yeow Meng Chee and Punarbasu Purkayastha Division of Mathematical
More informationLecture 4: Linear Codes. Copyright G. Caire 88
Lecture 4: Linear Codes Copyright G. Caire 88 Linear codes over F q We let X = F q for some prime power q. Most important case: q =2(binary codes). Without loss of generality, we may represent the information
More informationDefinition 2.1. Let w be a word. Then the coset C + w of w is the set {c + w : c C}.
2.4. Coset Decoding i 2.4 Coset Decoding To apply MLD decoding, what we must do, given a received word w, is search through all the codewords to find the codeword c closest to w. This can be a slow and
More information6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011
6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 On the Structure of Real-Time Encoding and Decoding Functions in a Multiterminal Communication System Ashutosh Nayyar, Student
More informationCut-Set Bound and Dependence Balance Bound
Cut-Set Bound and Dependence Balance Bound Lei Xiao lxiao@nd.edu 1 Date: 4 October, 2006 Reading: Elements of information theory by Cover and Thomas [1, Section 14.10], and the paper by Hekstra and Willems
More informationAlternative Characterization of Ergodicity for Doubly Stochastic Chains
Alternative Characterization of Ergodicity for Doubly Stochastic Chains Behrouz Touri and Angelia Nedić Abstract In this paper we discuss the ergodicity of stochastic and doubly stochastic chains. We define
More informationCombinational Logic. By : Ali Mustafa
Combinational Logic By : Ali Mustafa Contents Adder Subtractor Multiplier Comparator Decoder Encoder Multiplexer How to Analyze any combinational circuit like this? Analysis Procedure To obtain the output
More informationSimplified Implementation of the MAP Decoder. Shouvik Ganguly. ECE 259B Final Project Presentation
Simplified Implementation of the MAP Decoder Shouvik Ganguly ECE 259B Final Project Presentation Introduction : MAP Decoder û k = arg max i {0,1} Pr[u k = i R N 1 ] LAPPR Λ k = log Pr[u k = 1 R N 1 ] Pr[u
More informationThe simple ideal cipher system
The simple ideal cipher system Boris Ryabko February 19, 2001 1 Prof. and Head of Department of appl. math and cybernetics Siberian State University of Telecommunication and Computer Science Head of Laboratory
More informationChapter 2. Error Correcting Codes. 2.1 Basic Notions
Chapter 2 Error Correcting Codes The identification number schemes we discussed in the previous chapter give us the ability to determine if an error has been made in recording or transmitting information.
More informationLecture notes for Analysis of Algorithms : Markov decision processes
Lecture notes for Analysis of Algorithms : Markov decision processes Lecturer: Thomas Dueholm Hansen June 6, 013 Abstract We give an introduction to infinite-horizon Markov decision processes (MDPs) with
More informationCommunication over Finite-Ring Matrix Channels
Communication over Finite-Ring Matrix Channels Chen Feng 1 Roberto W. Nóbrega 2 Frank R. Kschischang 1 Danilo Silva 2 1 Department of Electrical and Computer Engineering University of Toronto, Canada 2
More informationConvolutional Coding LECTURE Overview
MIT 6.02 DRAFT Lecture Notes Spring 2010 (Last update: March 6, 2010) Comments, questions or bug reports? Please contact 6.02-staff@mit.edu LECTURE 8 Convolutional Coding This lecture introduces a powerful
More informationError Correction Methods
Technologies and Services on igital Broadcasting (7) Error Correction Methods "Technologies and Services of igital Broadcasting" (in Japanese, ISBN4-339-06-) is published by CORONA publishing co., Ltd.
More informationAalborg Universitet. Bounds on information combining for parity-check equations Land, Ingmar Rüdiger; Hoeher, A.; Huber, Johannes
Aalborg Universitet Bounds on information combining for parity-check equations Land, Ingmar Rüdiger; Hoeher, A.; Huber, Johannes Published in: 2004 International Seminar on Communications DOI link to publication
More informationThe Compound Capacity of Polar Codes
The Compound Capacity of Polar Codes S. Hamed Hassani, Satish Babu Korada and Rüdiger Urbanke arxiv:97.329v [cs.it] 9 Jul 29 Abstract We consider the compound capacity of polar codes under successive cancellation
More information