Introduction to Convolutional Codes, Part 1


1 Introduction to Convolutional Codes, Part 1
Frans M.J. Willems, Eindhoven University of Technology, September 29, 2009

2 Elias, Father of Coding Theory; Textbook Encoder; Encoder Properties; Systematic Codes and Different Rates; Transmission

3 Peter Elias (U.S., 1923-2001)

[Figure: P. Elias]

"Coding for Noisy Channels. Hamming had already introduced parity-check codes, but Peter went a giant step farther by showing for the binary symmetric channel that such linear codes suffice to exploit a channel to its fullest. In particular, he showed that error probability as a function of delay is bounded above and below by exponentials, whose exponents agree for a considerable range of values of the channel and the code parameters, and that these same results apply to linear codes. These exponential error bounds presaged those obtained for general channels ten years later by Gallager. In this same paper Peter introduced and named convolutional codes. His motivation was to show that it was in principle possible, by using a convolutional code with infinite constraint length, to transmit information at a rate equal to channel capacity with probability one that no decoded symbol will be in error." (by J.L. Massey)

4 Textbook Convolutional Encoder

We consider the encoder that appears in almost every elementary text on convolutional codes. It consists of two connected delay elements (a shift register) and two modulo-2 adders (EXORs). The output of a delay element at time $t$ is equal to its input at time $t-1$; the time $t$ is an integer.

[Figure: encoder with input $u(t)$, two delay elements $D$ holding the states $s_1(t)$ and $s_2(t)$, and outputs $v_1(t)$ and $v_2(t)$.]

The motivation is that shift registers can be used to produce random-looking sequences, and we have learned from Shannon that random codes reach capacity.

5 Finite-State Description

Let $T$ be the number of symbols that is to be encoded. For the input $u(t)$ of the encoder we assume that

$$u(t) \in \{0, 1\}, \text{ for } t = 1, 2, \ldots, T, \quad \text{and} \quad u(T+1) = u(T+2) = 0. \qquad (1)$$

For the outputs $v_1(t)$ and $v_2(t)$ of the encoder we then have

$$v_1(t) = u(t) \oplus s_2(t), \quad v_2(t) = u(t) \oplus s_1(t) \oplus s_2(t), \quad \text{for } t = 1, 2, \ldots, T+2, \qquad (2)$$

while the states $s_1(t)$ and $s_2(t)$ satisfy

$$s_1(1) = s_2(1) = 0, \quad s_1(t) = u(t-1), \quad s_2(t) = s_1(t-1), \quad \text{for } t = 2, 3, \ldots, T+3. \qquad (3)$$

Note that the encoder starts and stops in the all-zero state $(s_1, s_2) = (0, 0)$.
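As a concrete companion to equations (1)-(3), here is a minimal Python sketch of this encoder; the function name and the list-based interface are our own choices, not part of the lecture.

```python
def encode(u):
    """Textbook rate-1/2 convolutional encoder.

    Implements v1(t) = u(t) XOR s2(t), v2(t) = u(t) XOR s1(t) XOR s2(t),
    with state updates s1(t+1) = u(t), s2(t+1) = s1(t).
    """
    u = list(u) + [0, 0]        # tail bits u(T+1) = u(T+2) = 0
    s1 = s2 = 0                 # start in the all-zero state
    out = []
    for ut in u:
        out += [ut ^ s2, ut ^ s1 ^ s2]
        s1, s2 = ut, s1         # shift-register update
    return out

# The T = 6 input word used later in the lecture:
print(encode([0, 1, 0, 0, 1, 0]))
# -> [0,0, 1,1, 0,1, 1,1, 1,1, 0,1, 1,1, 0,0], i.e. the pairs 00 11 01 11 11 01 11 00
```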

6 Rate, Memory, Constraint Length

Codewords are now created as follows:

$$\text{input word} = u(1), u(2), \ldots, u(T), \qquad \text{codeword} = v_1(1), v_2(1), v_1(2), v_2(2), \ldots, v_1(T+2), v_2(T+2). \qquad (4)$$

The length of the input words is $T$, hence there are $2^T$ codewords. We assume that they all have probability $2^{-T}$, i.e., $U(1), U(2), \ldots, U(T)$ are all uniform. The length of the codewords is $2(T+2)$, therefore the rate

$$R_T = \frac{\log_2 2^T}{2(T+2)} = \frac{T}{2(T+2)}. \qquad (5)$$

Note that $R = \lim_{T \to \infty} R_T = 1/2$. We therefore call our encoder a rate-1/2 encoder.

The memory $M$ associated with our code is 2. In general the encoder uses the past inputs $u(t-M), \ldots, u(t-1)$ and the current input $u(t)$ to construct the outputs $v_1(t), v_2(t)$. Related to this is the constraint length $K = M + 1$, since a new pair of outputs is determined by the $M$ previous input symbols and the current one.
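For instance, for the word length $T = 6$ that is used in the transmission example later on:

$$R_6 = \frac{6}{2(6+2)} = \frac{6}{16} = 0.375, \qquad \lim_{T \to \infty} R_T = \frac{1}{2}.$$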

7 Convolution, Linearity

Why do we call this a convolutional encoder? To see why, note that $s_1(t) = u(t-1)$ and $s_2(t) = s_1(t-1) = u(t-2)$. Therefore

$$v_1(t) = u(t) \oplus s_2(t) = u(t) \oplus u(t-2), \qquad v_2(t) = u(t) \oplus s_1(t) \oplus s_2(t) = u(t) \oplus u(t-1) \oplus u(t-2). \qquad (6)$$

Define the impulse responses $h_1(t)$ and $h_2(t)$ with coefficients

$$h_1(0), h_1(1), h_1(2) = 1, 0, 1, \qquad h_2(0), h_2(1), h_2(2) = 1, 1, 1, \qquad (7)$$

and the other coefficients equal to zero. Then

$$v_1(t) = \bigoplus_{\tau = 0, 1, \ldots} u(t-\tau)\, h_1(\tau) = (u * h_1)(t), \qquad v_2(t) = \bigoplus_{\tau = 0, 1, \ldots} u(t-\tau)\, h_2(\tau) = (u * h_2)(t), \qquad (8)$$

i.e., the outputs result from convolving $u$ with $h_1$ and $h_2$. This makes our convolutional code linear.
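The equivalence of the shift-register description (2)-(3) and the convolution description (8) is easy to check numerically. Below is a small self-contained sketch (our own naming) that encodes by modulo-2 convolution with the taps from (7) and produces the same codeword as the shift-register encoder above.

```python
def conv_encode(u, h1=(1, 0, 1), h2=(1, 1, 1)):
    """Encode by modulo-2 convolution of u with the impulse responses h1, h2."""
    u = list(u) + [0, 0]                 # tail bits terminate the code
    out = []
    for t in range(len(u)):
        v1 = v2 = 0
        for tau in (0, 1, 2):            # the only nonzero taps of h1 and h2
            if t >= tau:
                v1 ^= u[t - tau] & h1[tau]
                v2 ^= u[t - tau] & h2[tau]
        out += [v1, v2]
    return out

# Same codeword as the shift-register encoder produces:
print(conv_encode([0, 1, 0, 0, 1, 0]))
```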

8 A Systematic Convolutional Code

[Figure: encoder with input $u(t)$, three delay elements $D$, output $v_1(t)$ tapped directly from the input, and output $v_2(t)$.]

The encoder in the figure above is systematic since one of its outputs is equal to the input, i.e., $v_1(t) = u(t)$. The rate $R$ of this code is 1/2; its memory is $M = 3$.

9 A Rate-2/3 Convolutional Code

[Figure: encoder with inputs $u_1(t)$ and $u_2(t)$, two delay elements $D$, and outputs $v_1(t)$, $v_2(t)$, $v_3(t)$.]

For every $k = 2$ binary input symbols the encoder in the figure above produces $n = 3$ binary output symbols. Therefore its rate is $R = k/n = 2/3$. The memory $M$ of this encoder is 1, since only $u_1(t-1)$ and $u_2(t-1)$ are used to produce $v_1(t)$, $v_2(t)$, and $v_3(t)$.

10 Transmission via a BSC

Suppose that we use our example encoder and take the length $T$ of the input words $u = (u_1, u_2, \ldots, u_T)$ equal to 6. We then get codewords $x = (x_1, x_2, \ldots, x_{2(T+2)}) = (v_1(1), v_2(1), v_1(2), \ldots, v_2(T+2))$ with codeword length equal to $2(T+2) = 16$. We assume that all codewords are equiprobable. These codewords are transmitted over a binary symmetric channel (BSC), see the figure below, with cross-over probability $0 \le p < 1/2$.

[Figure: BSC with input $x$ and output $y$; a symbol is received correctly with probability $1-p$ and flipped with probability $p$.]

Now suppose that we receive

$$y = (y_1, y_2, \ldots, y_{2(T+2)}) = (10, 11, 00, 11, 10, 11, 11, 00). \qquad (9)$$

How should we efficiently decode this received sequence?

11 Communicating a Message, Error Probability

[Figure: message source $P(m)$, message $m$, encoder $e(m)$, channel input $x$, channel $P(y \mid x)$, channel output $y$, decoder $d(y)$, estimate $\hat{m}$.]

Consider the communication system in the figure. A message source produces message $m \in \mathcal{M}$ with a-priori probability $\Pr\{M = m\}$. An encoder transforms the message into a channel input$^1$ $x \in \mathcal{X}$, hence $x = e(m)$. Now the channel output$^2$ $y \in \mathcal{Y}$ is received with probability $\Pr\{Y = y \mid X = x\}$. The decoder observes $y$ and produces an estimate $\hat{m} \in \mathcal{M}$ of the transmitted message, hence $\hat{m} = d(y)$. How should the decoding rule $d(\cdot)$ be chosen such that the error probability

$$P_e = \Pr\{\hat{M} \neq M\} \qquad (10)$$

is minimized?

$^1$ This is in general a sequence. $^2$ Typically a sequence.

12 The Maximum A-Posteriori Probability (MAP) Rule

First we form an upper bound for the probability that no error occurs:

$$1 - P_e = \sum_y \Pr\{M = d(y), Y = y\} = \sum_y \Pr\{Y = y\} \Pr\{M = d(y) \mid Y = y\} \le \sum_y \Pr\{Y = y\} \max_m \Pr\{M = m \mid Y = y\}. \qquad (11)$$

Observe that equality is achieved if and only if$^3$

$$d(y) = \arg\max_m \Pr\{M = m \mid Y = y\}, \quad \text{for all } y \text{ that can occur.} \qquad (12)$$

Since $\{\Pr\{M = m \mid Y = y\}, m \in \mathcal{M}\}$ are the a-posteriori probabilities after having received $y$, we call this rule the maximum a-posteriori probability rule (MAP rule).

$^3$ It is possible that the maximum is not attained by a unique $m$.

13 The Maximum-Likelihood (ML) Rule

Suppose that all message probabilities are equal. Then

$$\begin{aligned}
d(y) &= \arg\max_m \Pr\{M = m \mid Y = y\} \\
&= \arg\max_m \frac{\Pr\{M = m\} \Pr\{Y = y \mid M = m\}}{\Pr\{Y = y\}} \\
&= \arg\max_m \frac{\Pr\{Y = y \mid M = m\}}{|\mathcal{M}| \, \Pr\{Y = y\}} \\
&= \arg\max_m \Pr\{Y = y \mid M = m\} \\
&= \arg\max_m \Pr\{Y = y \mid X = e(m)\}, \quad \text{for all } y \text{ that can occur.}
\end{aligned} \qquad (13)$$

Since $\{\Pr\{Y = y \mid X = e(m)\}, m \in \mathcal{M}\}$ are the likelihoods for receiving $y$, we call this rule the maximum-likelihood rule (ML rule).

14 The Minimum-Distance (MD) Rule

Suppose that all message probabilities are equal, and that $e(m)$ is a binary codeword of length $L$, for all $m \in \mathcal{M}$. Moreover, let $y$ be the output sequence of a binary symmetric channel with cross-over probability $0 \le p < 1/2$, when $e(m)$ is its input sequence. Then

$$\Pr\{Y = y \mid X = e(m)\} = p^{d_H(e(m), y)} (1-p)^{L - d_H(e(m), y)} = (1-p)^L \left( \frac{p}{1-p} \right)^{d_H(e(m), y)}, \qquad (14)$$

where $d_H(\cdot, \cdot)$ denotes Hamming distance. Since $p < 1/2$ implies $p/(1-p) < 1$, the likelihood (14) decreases in $d_H(e(m), y)$, and therefore

$$d(y) = \arg\max_m \Pr\{Y = y \mid X = e(m)\} = \arg\min_m d_H(e(m), y), \quad \text{for all } y \text{ that can occur.} \qquad (15)$$

Since $\{d_H(e(m), y), m \in \mathcal{M}\}$ are the Hamming distances between the codewords $e(m)$ and the received sequence $y$, we call this rule the minimum (Hamming) distance rule (MD rule).

The conclusion is that minimum Hamming distance decoding should be applied to decode $y = (10, 11, 00, 11, 10, 11, 11, 00)$.

15 Complexity of Decoding

We could do an exhaustive search: using minimum-distance (MD) decoding we could search all $2^T = 64$ codewords. A serious disadvantage of this approach is that the search complexity increases exponentially in the number of input symbols $T$. We will therefore discuss an efficient method, called the Viterbi algorithm, whose complexity is linear in $T$.
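For this toy example the exhaustive search is easily written out. Below is a self-contained sketch (encoder repeated so the snippet runs on its own) that MD-decodes the received sequence (9) by trying all 64 input words.

```python
from itertools import product

def encode(u):
    """Textbook rate-1/2 encoder with two tail bits (see slide 5)."""
    u = list(u) + [0, 0]
    s1 = s2 = 0
    out = []
    for ut in u:
        out += [ut ^ s2, ut ^ s1 ^ s2]
        s1, s2 = ut, s1
    return out

# Received sequence (9), written as the pairs 10 11 00 11 10 11 11 00:
y = [1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0]

def d_H(a, b):
    """Hamming distance between two binary sequences."""
    return sum(x != z for x, z in zip(a, b))

# Exhaustive MD decoding: try all 2^T = 64 input words, complexity O(2^T).
best = min(product((0, 1), repeat=6), key=lambda u: d_H(encode(u), y))
print(best, d_H(encode(best), y))   # minimum distance 4; ties exist (slide 24)
```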

16 Transition Table

For our textbook encoder we can determine the transition table. This table contains the output pair $v_1 v_2(t)$ and next state $s_1 s_2(t+1)$ given the current state $s_1 s_2(t)$ and the input $u(t)$. See below. This table leads to the state diagram.

u(t)   s1 s2(t)   v1 v2(t)   s1 s2(t+1)
 0       0,0        0,0        0,0
 1       0,0        1,1        1,0
 0       0,1        1,1        0,0
 1       0,1        0,0        1,0
 0       1,0        0,1        0,1
 1       1,0        1,0        1,1
 0       1,1        1,0        0,1
 1       1,1        0,1        1,1
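The table follows mechanically from the encoder equations; a minimal sketch that prints it:

```python
# Enumerate all (state, input) pairs of the textbook encoder:
# v1 = u XOR s2, v2 = u XOR s1 XOR s2, next state (s1', s2') = (u, s1).
for s1 in (0, 1):
    for s2 in (0, 1):
        for u in (0, 1):
            v1, v2 = u ^ s2, u ^ s1 ^ s2
            print(f"u={u}  s1s2={s1}{s2}  v1v2={v1}{v2}  next s1s2={u}{s1}")
```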

17 State Diagram

In the state diagram, see the figure below, states are denoted by $s_1 s_2(t)$. Along the branches that lead from the current state $s_1 s_2(t)$ to the next state $s_1 s_2(t+1)$ we find the inputs/outputs $u(t)/v_1 v_2(t)$.

[Figure: state diagram with the four states 00, 10, 01, 11 and the branch labels 0/00, 1/11, 0/11, 1/00, 0/01, 1/10, 0/10, 1/01 from the transition table.]

18 Trellis Diagram

To see what sequences of states are possible as a function of the time $t$ we can take a look at the trellis (in Dutch: "hekwerk") diagram, see the figure below. The horizontal axis is the time axis. States are denoted by $s_1 s_2(t)$, and along the branches are again the inputs and outputs $u(t)/v_1 v_2(t)$.

[Figure: trellis diagram; every trellis section repeats the eight branch labels 0/00, 1/11, 0/11, 1/00, 0/01, 1/10, 0/10, 1/01 from the state diagram.]

19 Truncated Trellis

Each codeword now corresponds to a path in the trellis diagram. This path starts at $t = 1$ in state $s_1 s_2 = 00$, traverses $T + 2$ branches, and ends at $t = T + 3$ in state $s_1 s_2 = 00$ again. The truncated trellis, see the figure below, contains only the states and branches that can actually occur.

[Figure: truncated trellis from START (state 00 at $t = 1$) to STOP (state 00 at $t = T + 3$), with branch labels $u(t)/v_1 v_2(t)$.]

20 Trellis with Branch Distances

After having received the channel output sequence $y$, to do MD decoding we must be able to compute Hamming distances $d_H(x, y)$ where $x$ is a codeword. Since

$$d_H(x, y) = \sum_{t=1}^{T+2} d_H\big(v_1 v_2(t),\, y(2t-1)\, y(2t)\big), \qquad (16)$$

we first determine all branch distances $d_H(v_1 v_2(t), y(2t-1) y(2t))$, see the figure.

[Figure: truncated trellis with the received pairs $y = 10, 11, 00, 11, 10, 11, 11, 00$ written above it and each branch labeled with its branch distance.]

21 Viterbi's Principle

Let $s = s_1 s_2$ and $v = v_1 v_2$.

[Figure: states $s'(t)$ and $s''(t)$ both connect to state $s(t+1)$ through branches with outputs $v'(t)$ and $v''(t)$.]

Assume that state $s(t+1)$ at time $t+1$ can be reached only via states $s'(t)$ and $s''(t)$ at time $t$, through branches with outputs $v'(t)$ and $v''(t)$ respectively. Then (Viterbi [1967]):

best path to $s(t+1)$ = best of (all paths to $s'(t)$ extended by $v'(t)$, all paths to $s''(t)$ extended by $v''(t)$) = best of (best path to $s'(t)$ extended by $v'(t)$, best path to $s''(t)$ extended by $v''(t)$).

This principle can be used recursively. First determine the best path leading from the start state $s(1)$ to each state at time 2, then the best path to each state at time 3, etc., and finally determine the best path to the final state $s(T+3)$.

Dijkstra's shortest-path algorithm [1959] is more general and more complex than the Viterbi method.

22 The Viterbi Algorithm

Define $D_s(t)$ to be the total distance of a best path leading to state $s$ at time $t$, and let $B_s(t)$ denote a best path leading to this state. Define $d_{s',s}(t-1, t)$ to be the distance corresponding to the branch connecting state $s'$ at time $t-1$ to state $s$ at time $t$, and let $b_{s',s}(t-1, t)$ denote this branch.

1. Set $t := 1$. Also set the total distance of the starting state $D_{00}(1) := 0$ and set the best path leading to it $B_{00}(1) := \phi$, i.e., equal to the empty path.

2. Increment $t$, i.e., $t := t + 1$. For all possible states $s$ at time $t$, let $A_s(t)$ be the set of states at time $t-1$ that have a branch leading to state $s$ at time $t$. Assume that $s' \in A_s(t)$ minimizes $D_{s'}(t-1) + d_{s',s}(t-1, t)$, i.e., survives. Then set

$$D_s(t) := D_{s'}(t-1) + d_{s',s}(t-1, t), \qquad B_s(t) := B_{s'}(t-1) \circ b_{s',s}(t-1, t),$$

where $\circ$ denotes concatenation.

3. If $t = T + 3$, output the best path $B_{00}(t)$; otherwise go to step 2.
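The three steps above translate almost directly into code. Below is a minimal, self-contained Python sketch in which the dictionaries `D` and `B` mirror $D_s(t)$ and $B_s(t)$; the rest of the naming is our own. Applied to the received sequence (9) it finds a best input word at distance 4, matching the forward pass and trace-back on the next slides.

```python
def viterbi(y_pairs, T):
    """Minimum-distance Viterbi decoding of the textbook rate-1/2 code.

    The trellis starts and stops in state (s1, s2) = (0, 0); the last two
    trellis sections carry the tail bits u = 0.
    """
    INF = float("inf")
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    D = {s: INF for s in states}     # D_s(t): metric of a best path to s
    B = {}                           # B_s(t): the input bits of that path
    D[(0, 0)] = 0
    B[(0, 0)] = []

    for t in range(T + 2):                         # T + 2 trellis sections
        Dn = {s: INF for s in states}
        Bn = {}
        for (s1, s2), d in D.items():
            if d == INF:
                continue                           # state not reachable yet
            for u in ((0, 1) if t < T else (0,)):  # tail sections: u = 0 only
                v1, v2 = u ^ s2, u ^ s1 ^ s2       # branch output
                branch = (v1 != y_pairs[t][0]) + (v2 != y_pairs[t][1])
                nxt = (u, s1)                      # next state
                if d + branch < Dn[nxt]:           # add, compare, select
                    Dn[nxt] = d + branch
                    Bn[nxt] = B[(s1, s2)] + [u]
        D, B = Dn, Bn

    return B[(0, 0)][:T], D[(0, 0)]   # drop the two tail bits

y = [(1, 0), (1, 1), (0, 0), (1, 1), (1, 0), (1, 1), (1, 1), (0, 0)]
print(viterbi(y, 6))   # one of the two distance-4 input words from slide 24
```

Ties (the asterisks on the next slide) are broken here by the strict comparison, so only one of the equally good paths is returned.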

23 Forward: Add, Compare, and Select

Best-path metrics are denoted in the states. An asterisk denotes that each of the two incoming paths into a state can be chosen as survivor.

[Figure: truncated trellis for $y = 10, 11, 00, 11, 10, 11, 11, 00$ with the accumulated metric $D_s(t)$ written in each state; ties are marked with an asterisk.]

Note that there is a best path (codeword) at distance 4 from the received sequence.

24 Backward: Trace Back

Tracing back from the stop state results in the decoded path $(00, 11, 01, 11, 11, 01, 11, 00)$. The corresponding input sequence is $(0, 1, 0, 0, 1, 0)$. An equally good input sequence would be $(0, 0, 0, 1, 1, 0)$. The second and fourth input digits are therefore not so reliable...

[Figure: the trellis of the forward pass with the traced-back best path marked from START to STOP.]

25 Complexity

Fortunately the complexity of the Viterbi algorithm is linear in the length $T$ of the input word. At each time we have to add, compare, and select (ACS) in every state. The complexity is therefore also linear in the number of states at each time, which is 4 in our case. In general the number of states is $2^m$, where $m$ is the number of delay elements in the encoder. Therefore Viterbi decoding is in practice only possible (now) if $m$ is not much higher than, say, 10, i.e., if the number of states is not much more than $2^{10} = 1024$. Later we shall see that the code performance improves for increasing values of $m$.

26 Exercise

We transmit an information word $(x(1), x(2), x(3), x(4), x(5))$ over an inter-symbol-interference (ISI) channel. This information word is preceded and followed by zeroes, hence

$$x(t) = 0 \text{ for integer } t \notin \{1, 2, 3, 4, 5\}, \qquad x(t) \in \{-1, +1\} \text{ for } t \in \{1, 2, 3, 4, 5\}.$$

All 32 information words occur with equal probability. For the ISI channel we have, for integer times $t$,

$$y(t) = x(t) + x(t-1) + n(t),$$

where the probability density function of the noise $n(t)$ is given by

$$p(n) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{n^2}{2}\right),$$

thus $n(t)$ has a Gaussian density. The received sequence satisfies $y(1) = +0.3$, $y(2) = +0.2$, $y(3) = +0.1$, $y(4) = -1.1$, $y(5) = +2.5$, and $y(6) = -1.4$.

Decode the information word with a decoder that minimizes the word-error probability. Show first that the decoder should minimize (squared) Euclidean distance.
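For checking a hand-worked answer, here is a sketch of the resulting decoder, assuming you have already shown that minimizing the word-error probability reduces to minimizing squared Euclidean distance. The brute-force search over all 32 words stands in for a trellis search, and all names are our own.

```python
from itertools import product

y = [0.3, 0.2, 0.1, -1.1, 2.5, -1.4]      # received values y(1), ..., y(6)

def sq_euclid(x):
    """Squared Euclidean distance between y and the noiseless ISI output
    x(t) + x(t-1), with x(0) = x(6) = 0."""
    x = [0] + list(x) + [0]               # prepend x(0), append x(6)
    return sum((y[t - 1] - (x[t] + x[t - 1])) ** 2 for t in range(1, 7))

# Minimum-distance decoding over all 2^5 = 32 information words.
best = min(product((-1, +1), repeat=5), key=sq_euclid)
print(best, sq_euclid(best))
```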
