Convolutional Codes. Lecture Notes 8: Trellis Codes


In this lecture we discuss the construction of signals via a trellis. That is, signals are constructed by labeling the branches of an infinite trellis with signals from a small set. Because the trellis is of infinite length, this is conceptually different from the signals created in the previous chapter. When the codes generated are linear (the sum of any two sequences is also a valid sequence), the codes are known as convolutional codes. We first discuss convolutional codes, then optimum decoding of convolutional codes, then ways to evaluate the performance of convolutional codes. Finally we discuss the more general trellis codes for QAM and PSK types of modulation.

Unlike block codes, convolutional codes are not of fixed length. Instead, the encoder processes the information bit sequence with a sliding window to produce a channel bit sequence. The window operates on a number of information bits at a time to produce a number of channel bits. For example, the encoder shown below examines three consecutive information bits and produces two channel bits. The encoder then shifts in a new information bit and produces another set of two channel bits, based on the new information bit and the previous two information bits.

In general the encoder stores M information bits. Based on these stored bits and the current set of k input bits, it produces n channel bits. The memory of the encoder is M. The constraint length K is the largest number of consecutive input bits on which any particular output depends; in the example below the outputs depend on a maximum of K = 3 consecutive input bits. The rate is k/n. The operation producing the channel bits is a linear combination of the information bits held in the encoder. Because of this linearity, each output of the encoder is a convolution of the input information stream with an impulse response of the encoder, hence the name convolutional codes.

Example: K = 3, M = 2, rate 1/2 code

Figure 95: Convolutional encoder (two-stage shift register; the top modulo-2 adder forms c^(1), the bottom adder forms c^(2)).

In this example the input to the encoder is the sequence of information symbols i_0, i_1, i_2, .... The output of the top part of the encoder is

    c_l^(1) = sum_{m=0}^{M} i_{l-m} g_m^(1)  (mod 2),   with generator g^(1) = (1, 1, 1),

and the output of the bottom part of the encoder is

    c_l^(2) = sum_{m=0}^{M} i_{l-m} g_m^(2)  (mod 2),   with generator g^(2) = (1, 0, 1).

The above relations are convolutions of the input with the vectors g^(1) and g^(2), known as the generators of the code. From these equations it is easy to check that the sum of any two codewords, generated by two information sequences, is
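To make the sliding-window operation concrete, here is a minimal encoder sketch (ours, not part of the original notes), assuming the generators g^(1) = 111 and g^(2) = 101 of the example:

# Rate 1/2, K=3 (M=2) convolutional encoder sketch.
# Generators g1 = 111, g2 = 101 (octal 7 and 5), assumed from the example above.

def conv_encode(info_bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Encode info_bits, returning the channel bit sequence (c1, c2 per input bit)."""
    state = [0, 0]                # the M = 2 stored information bits
    out = []
    for i in info_bits:
        window = [i] + state      # K = 3 consecutive information bits
        c1 = sum(w * g for w, g in zip(window, g1)) % 2
        c2 = sum(w * g for w, g in zip(window, g2)) % 2
        out += [c1, c2]
        state = [i] + state[:-1]  # shift in the new information bit
    return out

print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]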

the codeword generated from the sum of the two information sequences. Thus the code is linear. Because of this we can assume in our analysis, without loss of generality, that the all-zero information sequence (and hence the all-zero codeword) is the transmitted sequence.

The operation of the encoder can be described completely by a state transition diagram. The state transition diagram is a directed graph with a node for each possible encoder content and transitions between nodes corresponding to the results of different input bits. Each transition is labeled input/output with the input bit and the resulting output bits of the code. This is shown for the previous example in Figure 96.

Figure 96: State transition diagram of the encoder (four states 00, 01, 10, 11; for example, from state 00 an input 1 produces output 11 and moves to state 10).

Trellis Section: unrolling the state diagram in time gives the trellis; one trellis section shows the states at time m, the states at time m+1, and a branch for each allowed transition.

Maximum Likelihood Sequence Detection of the States of a Markov Chain

Consider a finite-state Markov chain. Let x_m be the sequence of random variables representing the state at time m, and let x_0 be the initial state of the process, with p(x_0) = 1. Later on we will denote the states by the integers 1, 2, ..., N. Since this is a Markov process we have

    p(x_{m+1} | x_m, x_{m-1}, ..., x_0) = p(x_{m+1} | x_m).

Also,

    p(x_0, x_1, ..., x_M) = p(x_0) prod_{m=0}^{M-1} p(x_{m+1} | x_m).

Let w_m = (x_m, x_{m+1}) be the state transition at time m. There is a one-to-one correspondence between state sequences and transition sequences:

    (x_0, x_1, ..., x_M)  <->  (w_0, w_1, ..., w_{M-1}).

Observation. By some mechanism (e.g., a noisy channel) a noisy version z of the state transition sequence is observed. Based on this noisy version of w we wish to estimate the state sequence x or, equivalently, the transition sequence w. Since w and x contain the same information, p(z | x) = p(z | w). With z = (z_0, ..., z_{M-1}), x = (x_0, ..., x_M), and w = (w_0, ..., w_{M-1}), we say the channel is memoryless if

    p(z | w) = prod_{m=0}^{M-1} p(z_m | w_m).

Likelihood Calculation. Given an observation z through a memoryless channel, our goal is to find the state sequence x for which the a posteriori probability p(x | z) is largest; this minimizes the probability of choosing the wrong sequence. Thus the optimum (minimum sequence error probability) decoder chooses

    x-hat = argmax_x p(x | z) = argmax_x p(z | x) p(x) = argmin_x [ -log p(z | x) - log p(x) ].

Markov State and Memoryless Channel. Using the memoryless property of the channel we obtain

    -log p(z | x) = -sum_{m=0}^{M-1} log p(z_m | w_m),

and using the Markov property of the state sequence (with a given initial state),

    -log p(x) = -sum_{m=0}^{M-1} log p(x_{m+1} | x_m).

Define the branch metric

    lambda(w_m) = -ln p(z_m | w_m) - ln p(x_{m+1} | x_m).

Then

    x-hat = argmin_x sum_{m=0}^{M-1} lambda(w_m).

This problem formulation leads to a recursive solution. The recursive solution is called the Viterbi algorithm by communication engineers and is a form of dynamic programming as studied by control engineers; they are really the same thing.

Viterbi Algorithm (dynamic programming)

Let Gamma_m(x) be the length (the value of the optimization criterion) of the shortest (optimum) path to state x at time m, and let x-hat^m(x) denote that path. Let Gamma-hat_{m+1}(x', x) be the length of the path to state x at time m+1 that goes through state x' at time m.

Storage: the time index m, and for each state x the survivor path x-hat^m(x) and its metric Gamma_m(x).

Initialization: m = 0; Gamma_0(x_0) = 0 for the known initial state x_0; Gamma_0(x) = +infinity for all x != x_0; x-hat^0(x_0) = (x_0).

Recursion: for each state x at time m+1,

    Gamma-hat_{m+1}(x', x) = Gamma_m(x') + lambda(w_m = (x', x)),
    Gamma_{m+1}(x) = min_{x'} Gamma-hat_{m+1}(x', x),
    x-hat^{m+1}(x) = ( x-hat^m(x'*), x ),  where x'* = argmin_{x'} Gamma-hat_{m+1}(x', x).

Justification: we are interested in finding the shortest-length path through the trellis. At time m+1 we find the shortest path to each possible state by computing all possible ways of getting to state u at time m+1 from a state at time m. If the shortest path to state u at time m+1 goes through state v at time m, then the initial portion of that path must itself be the shortest path to state v at time m: if there were a shorter path to v, then appending the transition from v to u would produce a path to u shorter than what we assumed was the shortest. Stated another way, if the shortest way of getting to state u at time m+1 is through state v at time m, then the path used to get to state v must be the shortest of all paths to v at time m.

For a convolutional code we identify a state transition with the pair of channel bits transmitted; the received pair of decision statistics is our noisy information about that transition. Thus p(z_m | w_m) is just the transition probability of the channel from input to output, because knowing the state transition determines the channel input.

Example 1: Binary Symmetric Channel (BSC). If the transition w_m corresponds to the channel bits c_m and the crossover probability is p < 1/2, then up to an additive constant

    -ln p(z_m | w_m) = d_H(z_m, c_m) ln((1-p)/p) + const,

where d_H is the Hamming distance. Minimizing the metric is therefore the same as choosing the path whose output sequence is closest to the received sequence in Hamming distance.

Example 2: Additive White Gaussian Noise channel (AWGN). Here z_i = c_i + n_i, where the noise n_i is Gaussian with mean 0 and variance N_0/2. The possible inputs to the channel are a finite set of real numbers (e.g., u_i = +sqrt(E) or -sqrt(E)), obtained from the code bits by a simple mapping. The transition probability is

    p(z | u) = prod_i (1/sqrt(pi N_0)) exp( -(z_i - u_i)^2 / N_0 ) = (pi N_0)^{-n/2} exp( -d_E^2(z, u) / N_0 ),

where d_E^2(z, u) = sum_i (z_i - u_i)^2 is the squared Euclidean distance between the channel input and the channel output. Thus finding u to maximize p(z | u) is equivalent to finding u to minimize d_E^2(z, u).
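The recursion above is short to implement. The following hard-decision sketch (ours, not from the notes) decodes the K = 3 example using the Hamming-distance metric justified by the BSC example; it reuses conv_encode from the encoder sketch given earlier:

# Viterbi decoding sketch for the K=3, rate 1/2 code, hard decisions (Hamming metric).

def viterbi_decode(z, g1=(1, 1, 1), g2=(1, 0, 1)):
    """z: received channel bits (list of 0/1, length 2*L). Returns info bit estimates."""
    n_states = 4                                  # states = last M = 2 input bits
    INF = float("inf")
    Gamma = [0.0] + [INF] * (n_states - 1)        # path metrics; start in state 0
    paths = [[] for _ in range(n_states)]
    for m in range(0, len(z), 2):
        new_Gamma = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if Gamma[s] == INF:
                continue
            s0, s1 = s & 1, (s >> 1) & 1          # stored bits (s0 is the newest)
            for i in (0, 1):
                w = (i, s0, s1)
                c1 = sum(a * b for a, b in zip(w, g1)) % 2
                c2 = sum(a * b for a, b in zip(w, g2)) % 2
                metric = Gamma[s] + (c1 != z[m]) + (c2 != z[m + 1])
                nxt = (s0 << 1) | i               # shift-register update
                if metric < new_Gamma[nxt]:
                    new_Gamma[nxt] = metric       # keep only the survivor
                    new_paths[nxt] = paths[s] + [i]
        Gamma, paths = new_Gamma, new_paths
    best = min(range(n_states), key=lambda s: Gamma[s])
    return paths[best]

sent = conv_encode([1, 0, 1, 1])                  # from the encoder sketch above
sent[2] ^= 1                                      # introduce one channel bit error
print(viterbi_decode(sent))                       # -> [1, 0, 1, 1]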

Thus in these two cases we can equivalently use a distance function (between what is received and what would have been transmitted along a given path) as the function we minimize. This should be familiar from the case of block codes.

Weight Enumerator for Convolutional Codes

In this section we show how to determine the weight enumerator polynomial of a convolutional code. The weight enumerator polynomial is a method of counting the number of codewords with a given Hamming weight, a given number of input ones, and a given length. It will be useful when error probability bounds are discussed.

Consider the earlier example with four states. We would like to determine the number of paths that start in the zero state, diverge from the zero state for some time, and then remerge with the zero state, classified by the number of input ones, the number of output ones, and the length. To do this, let us split the zero state into a beginning state and an ending state. In addition we label each branch with a monomial in three variables (x, y, z): the power of x is the number of input ones on the branch, the power of y is the number of output ones, and the power of z is the length of the branch (namely one). The label of two consecutive branches is the product of their labels.

Figure 97: State transition diagram of the encoder with the zero state split into beginning and ending states and each branch labeled by its monomial in x, y, z.

Let T_1 denote the generating function of all paths from the beginning zero state to state 1 = 10, including paths that pass through intermediate states any number of times.

Similarly let T_2 be the generating function of paths from the beginning state to state 2 = 01, and T_3 of paths to state 3 = 11. Reading the labeled branches off the split diagram gives the equations

    T_1 = x y^2 z + x z T_2
    T_2 = y z T_1 + y z T_3
    T_3 = x y z T_1 + x y z T_3,

and the generating function of all paths from the beginning zero state to the ending zero state is A(x, y, z) = y^2 z T_2. Solving these equations,

    A(x, y, z) = x y^5 z^3 / (1 - x y z (1 + z)).

The following Maple code (reconstructed; the fragment in the transcription was garbled) solves this problem:

with(linalg):
m := matrix(3,3, [[1, -x*z, 0], [-y*z, 1, -y*z], [-x*y*z, 0, 1-x*y*z]]):
b := vector([x*y^2*z, 0, 0]):
T := linsolve(m, b):
fxx1 := simplify(T[2]*y^2*z):      # A(x,y,z)
fxx2 := diff(fxx1, x):             # dA/dx
fxx3 := simplify(fxx2):
fxx4 := eval(fxx3, z=1):
fxx5 := eval(fxx4, x=1):
wy := taylor(fxx5, y=0, 15);       # series expansion of w(y)

Expanding the solution as a power series,

    A(x, y, z) = x y^5 z^3 + x^2 y^6 z^4 + x^2 y^6 z^5 + x^3 y^7 z^5 + 2 x^3 y^7 z^6 + x^3 y^7 z^7 + x^4 y^8 z^6 + 3 x^4 y^8 z^7 + ...

Thus there is one path through the trellis with one input one, 5 output ones, and length 3. There is one path diverging and remerging with 2 input ones, 6 output ones, and length 4, and so on. The minimum (or free) distance of a convolutional code is the minimum number of output ones on any path that diverges from the all-zero state and then remerges. This code has d_free = 5.

To calculate the coefficients A_l used in the first event error probability bound, we evaluate A(x, y, z) at x = 1, z = 1; the coefficient of y^l is then A_l. In order to determine the bit error probability, the next section shows that the polynomial

    w(y) = dA(x, y, z)/dx evaluated at x = 1, z = 1

is needed.
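The same computation can be reproduced without Maple. This sketch (ours) solves the state equations above with SymPy and expands w(y):

# Transfer function A(x,y,z) of the K=3 example via SymPy (equivalent to the Maple above).
import sympy as sp

x, y, z = sp.symbols("x y z")
T1, T2, T3 = sp.symbols("T1 T2 T3")
sol = sp.solve(
    [sp.Eq(T1, x*y**2*z + x*z*T2),
     sp.Eq(T2, y*z*T1 + y*z*T3),
     sp.Eq(T3, x*y*z*T1 + x*y*z*T3)],
    [T1, T2, T3], dict=True)[0]
A = sp.simplify(y**2*z*sol[T2])            # A(x,y,z) = y^2 z T2 = x y^5 z^3 / (1 - x y z (1+z))
print(A)
w = sp.diff(A, x).subs({x: 1, z: 1})       # w(y) = dA/dx at x=1, z=1
print(sp.series(w, y, 0, 10))              # y**5 + 4*y**6 + 12*y**7 + 32*y**8 + 80*y**9 + O(y**10)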

a y b y y 6y 8 5y y 69y 5y 76y 8 76y y 67y 8y 56y 8 9y 795y 66y 58y 9y 6 9575y 97y 9y8 5 678y 58y 5y5 868y56 6 8y 589y y6 68 88y 8y 6656y y7 76 y 76y y8 6 6 y 6y y 6y 68y 8 59y 689y 8 a y b y 8y6 78y6 y 8 8766y 56 y 5866y 6 y 66y 68 y 7y 76 y 8y 96y8 y 885y 8y 5y y 6y 67y 6y 75y 87y 6y 8y 76y 56y 5576y 5 9y y 6 y6 y 6 c y 6y y 88y 888y 957y 77y 6675y8 y 87y 5685y56 796y 5 56778y 7769y 8y 669y 8 y 57y6 59y 89y 9y 6687y 8 8786y 958y 6 6y 57757y 5 58556595y 6 7699y6 66955y6 7896y 66 88688y 687565y 7 7895y 7 56995y 7665956y 78 57y8 6977y 8695756y 8 889y86 8568677y88 687y 9 78789y 99578y 9 55898y 98 9657y 786y 75766y 6569y 6789y 8 68558y 8 y 67565y 57y 686y 656y 6y VIII-5 VIII-6 6 977y 665y 565y8 7569y 8y 66y 87y6 5y 6y y y y 6 8 The weight enuerator polynoial for deterining the bit error probability is given by c d b d 6y y 569y 659y 6 8 y 6y 677y 76y 99y 8587y 8 Error Bounds for Convolutional Codes We are interested in deterining the probability that the decoder akes an error. We will define several types of errors. Without loss of generality we will assue that the inforation sequence is the all zeros sequence so the transitted codeword is the all zeros codeword. Furtherore we will assue the trellis starts at tie unit. Norally a code is truncated (forced back to the all zero state) after a large nuber of code sybols have been transitted but we don t require this. VIII-7 VIII-8

dn d n n d d First Event Error A first event error is said to occur at tie if the all zeros path (correct path) is eliinated for the first tie at tie, that is, if the path with the shortest distance to the state at tie is not the all zeros path and this is the first tie that the all zero path has been eliinated. At tie an incorrect path will be chosen over the correct path if incorrect path received sequence is greater than correct path received sequence. If the incorrect path has (output) weight d then the probability that the incorrect path being ore likely than the all zeros path is denoted as d. This is easily calculated for ost channels since it is ust the error probability of a repetition code of length d. For an additive white Gaussian channel it is given by d Q Ed For a binary syetric channel with crossover probability p it is given by d d n d n d d d p n p p n p The first event error probability at tie, f N dnd odd d p d p d even, can then be bounded (using the union bound) as f E l A l l where A l is the nuber of paths through the trellis with output weight l. This is a union type bound. However, it is also an upper bound since at any finite tie there will only be a finite nuber of incorrect paths that erge with the correct path at tie. We have included in the upper bound paths of all lengths as if the trellis started t at. This akes the bound independent of the tie index. (We will show later how to calculate A l for all l via a generating function). Since each ter in the infinite su is nonnegative, the su is either a finite positive nuber or. For exaple the standard code of constraint length has A l l5 so unless the pairwise error probability decreases faster than l the above bound will be infinite. The pairwise error probability will decrease fast enough for reasonably large signal-to-noise ratio or reasonably sall crossover probability. In general the sequence Al ay have a periodic coponent that is zero but otherwise is a positive increasing sequence. l for reasonable channels is a decreasing sequence. If the channel is fairly noisy then the above upper bound on first event error probability ay converge to soething larger than, even. In this case clearly is an VIII-9 VIII- Bit error probability upper bound on any probability we are interested in, so the above bound can be iproved to f E in l A l l For exaple, the well known constraint length 7, rate / convolutional code has coefficients that grow no faster than 8765 k so that provided l decreases faster than 8765 k the bound above will converge. Since l D p pthe above bound converges for 6. For soft decisions (and additive p white Gaussian noise) De E 6dB. Dl where (for hard decisions) N N and thus the bound converges provided E Below we find an upper bound on the error probability for binary convolutional codes. The generalization to nonbinary codes is straightforward. In order to calculate the bit error probability in decoding between tie and weneed to consider all paths through the trellis that are not in the all zeros state at either tie unit or. We also need to realize that not all incorrect paths will cause an error. First consider a rate n code (i.e. k) so each transition fro one state to the next is deterined by a single bit. To copute an upper bound on the bit error probability we will do a union bound on all paths that cause a bit error. We assue that the trellis started t at. We do this in two steps. First we look at each path diverging and then reerging to the all zero state. This is called an error event. 
(An error event can diverge from the all-zero state only once.) Then we sum over all possible starting times (times when the path diverges from the all-zero state) for each of these error events. So take a particular error event of length l, corresponding to a state sequence with i input ones, and let A_{i,l} be the number of such paths of output weight l. If the error event starts at time unit m, it necessarily causes a bit error, since the input bit must be a one upon starting an error event (diverging from the all-zero state). However, if the event ends at time unit m+1 in the all-zero state, no bit error is made there, since remerging with the all-zero state corresponds to an input bit of 0.

Of the phases of the error event that overlap the transition from time m to m+1, exactly i (the positions of the input ones) will cause a bit error. So for each error event we weight its probability by the number of input ones on that path. The bit error probability (for k = 1) can thus be upper bounded by

    P_b <= sum_i sum_l i A_{i,l} P_2(l),

where P_2(l) is the probability of error between two codewords that differ in l positions. As before, this bound is independent of the time index, since we have included all paths as if the trellis ran from t = -infinity to t = +infinity.

If we define the weight enumerator polynomial of the convolutional code as

    A(x, y, z) = sum_{i,l,m} A_{i,l,m} x^i y^l z^m,

where A_{i,l,m} is the number of error events with i input ones, l output ones, and length m, and if we upper bound P_2(l) by the Chernoff or Bhattacharyya bound P_2(l) <= D^l, then the upper bound on the first event error probability is just

    P_fE <= A(x, y, z) evaluated at x = 1, y = D, z = 1,

and similarly the bit error probability can be further upper bounded by

    P_b <= dA(x, y, z)/dx evaluated at x = 1, y = D, z = 1.

As mentioned previously, these bounds may be larger than one (for small signal-to-noise ratio or high crossover probability). This happens for a larger range of parameters when we use the generating function with the Bhattacharyya bound than with just the union bound. For certain codes there is a way to use the union bound for the first L terms, say, and the Bhattacharyya bound for the remaining terms, obtaining a bound tighter than the generating-function bound but without the infinite computation required by the pure union bound (see the problems). The above bounds are finite only for a certain range of the parameter D, depending on the specific code; for practical codes and reasonable signal-to-noise ratios or crossover probabilities they are finite (see the problems).

Rate k/n codes

Now consider a rate k/n convolutional code. The trellis for such a code has 2^k branches emerging from each state. Consider the bit error probability of the r-th bit in each k-bit input block (r = 1, ..., k), and let A^(r)_{i_r, l} be the number of paths through the trellis with output weight l and i_r ones in the r-th bit position of the input blocks. The bit error probability for the r-th bit is then bounded by

    P_b^(r) <= sum_{i_r} sum_l i_r A^(r)_{i_r, l} P_2(l),

and the average bit error probability over the k positions is

    P_b = (1/k) sum_{r=1}^{k} P_b^(r) <= (1/k) sum_l [ sum_r sum_{i_r} i_r A^(r)_{i_r, l} ] P_2(l) = (1/k) sum_l w_l P_2(l),

since summing the weighted path counts over the k bit positions counts each error event once for every input one it contains; that is, w_l = sum_i i A_{i,l} is the total number of input ones on all error events of output weight l.

Improved Union-Bhattacharyya Bound

With w_l = sum_i i A_{i,l} as above, the bit error probability (for k = 1) satisfies

    P_b <= sum_{l=d_f}^{infinity} w_l P_2(l) <= sum_{l=d_f}^{infinity} w_l D^l.

The first bound is the union bound. It is impossible to evaluate exactly because the summation has an infinite number of terms. Dropping all but the first N terms gives an approximation, though it may no longer be an upper bound. If the weight enumerator is known, we can get arbitrarily close to the union bound and still have a true bound, as follows:

    P_b <= sum_{l=d_f}^{N} w_l P_2(l) + sum_{l=N+1}^{infinity} w_l D^l
        = sum_{l=d_f}^{N} w_l [ P_2(l) - D^l ] + sum_{l=d_f}^{infinity} w_l D^l.

The second term is the Union-Bhattacharyya (U-B) bound. The first term is clearly less than zero (since P_2(l) <= D^l), so we get something tighter than the U-B bound. By choosing N sufficiently large we can sometimes get significant improvements over the U-B bound.

Below we show the error probability bounds and a simulation for the constraint length 3 (memory 2) convolutional code. Note that the upper bound is fairly tight when the bit error probability is less than about 10^-2.

Figure 98: Error probability of the constraint length 3 convolutional code on an additive white Gaussian noise channel with soft-decision decoding (upper bound, simulation, and lower bound versus Eb/N0).
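As an illustration (ours, not in the notes), the following evaluates the union bound and the closed-form Union-Bhattacharyya bound for the K = 3 code on the soft-decision AWGN channel, using w_l = (l-4) 2^(l-5) from the weight enumerator above:

# Union bound vs. Union-Bhattacharyya bound for the K=3, rate 1/2 code
# on the AWGN channel with soft decisions (our sketch).
import math

def Q(t):                                    # Gaussian tail probability
    return 0.5 * math.erfc(t / math.sqrt(2))

def bit_error_bounds(EbN0_dB, N=60, rate=0.5):
    EsN0 = rate * 10 ** (EbN0_dB / 10)       # energy per channel bit over N0
    D = math.exp(-EsN0)                      # Bhattacharyya parameter (soft decisions)
    w = lambda l: (l - 4) * 2 ** (l - 5)     # w_l for this code, l >= 5
    union = sum(w(l) * Q(math.sqrt(2 * l * EsN0)) for l in range(5, N))
    ub = D ** 5 / (1 - 2 * D) ** 2           # closed form of sum w_l D^l (requires 2D < 1)
    return min(1.0, union), min(1.0, ub)

for snr_dB in (3, 5, 7):
    print(snr_dB, "dB:", bit_error_bounds(snr_dB))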

Example: K = 7, M = 6, rate 1/2 code

This code is used in many commercial systems, including the IS-95 standard for digital cellular; it is also a NASA standard code.

Figure 99: Error probability of the constraint length 3 convolutional code on an additive white Gaussian noise channel with soft-decision decoding (upper bound and simulation).

Figure 100: Upper bounds on the bit error probability for the constraint length 7, rate 1/2 convolutional code on an additive white Gaussian noise channel (union bounds for hard and soft decisions, simulation, and uncoded BPSK).

Figure 101: Error probability of the constraint length 7, rate 1/2 convolutional code on an additive white Gaussian noise channel with soft-decision decoding (upper bound, simulation, and uncoded).

Figure 102: Bit error probability (bound) for the constraint length 9 convolutional code on an additive white Gaussian noise channel (hard and soft decisions).

Rate 1/2 maximum free distance codes:

Memory   Generators (octal)   d_free
2        5, 7                 5
3        15, 17               6
4        23, 35               7
5        53, 75               8
6        133, 171             10
7        247, 371             10
8        561, 753             12
9        1167, 1545           12

Rate 1/3 maximum free distance codes:

Memory   Generators (octal)   d_free
2        5, 7, 7              8
3        13, 15, 17           10
4        25, 33, 37           12
5        47, 53, 75           13
6        133, 145, 175        15
7        225, 331, 367        16
8        557, 663, 711        18
9        1117, 1365, 1633     20

Note that low-rate convolutional codes do not perform any better than a rate 1/2 convolutional code concatenated with a repetition code. There are better approaches to achieving high distance at low rates; these usually involve concatenating a convolutional code with an orthogonal code, as described below (to be added).


Error Bounds for Convolutional Codes (summary)

The performance of standard convolutional codes can be upper bounded by

    P_b <= sum_{l=d_free}^{infinity} w_l D^l,

where w_l is the average number of nonzero information bits on paths with Hamming distance l and D is a parameter that depends only on the channel. Usually the summation in the upper bound is truncated to some finite number of terms.

Example 1. Binary symmetric channel with crossover probability p: D = 2 sqrt( p (1 - p) ).

Example 2. Additive white Gaussian noise channel: D = e^{-E/N_0}, where E is the energy per channel symbol (E = R Eb at code rate R).

Example Convolutional Code 1: constraint length 7, memory 6, 64-state decoder, rate 1/2. Using the leading terms of its weight spectrum, the upper bound is

    P_b <= 36 D^10 + 211 D^12 + 1404 D^14 + 11633 D^16 + ...

There is a chip made by Qualcomm and by Stanford Telecommunications, operating at data rates on the order of Mbits/second, that performs this encoding and decoding.

Example Convolutional Code 2: constraint length 9, memory 8, 256-state decoder, rate 1/2:

    P_b <= 33 D^12 + 281 D^14 + 2179 D^16 + 15035 D^18 + ...

Example Convolutional Code 3: constraint length 9, memory 8, 256-state decoder, rate 1/3:

    P_b <= 11 D^18 + 32 D^20 + 195 D^22 + 564 D^24 + 1473 D^26 + ...

Performance examples: generally hard decisions require about 2 dB more signal energy than soft decisions for the same bit error probability. Also, unquantized soft decisions are only about 0.25 dB better than 8-level quantization.

Figures (VIII-55 through VIII-60): trellis decoding examples, showing decoded paths with no bit error and with a bit error.
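A small sketch (ours) compares these bounds for hard and soft decisions at equal Eb/N0; the K = 7 spectrum coefficients are the leading terms quoted above and should be checked against the notes:

# Evaluate the truncated spectrum bound for hard vs. soft decisions (our sketch).
import math

def pb_bound(D, spectrum):
    """Truncated P_b <= sum_l w_l D^l over the leading terms only."""
    return sum(w * D ** l for l, w in spectrum.items())

K7 = {10: 36, 12: 211, 14: 1404, 16: 11633}      # leading terms quoted above
for EbN0_dB in (4, 6, 8):
    EsN0 = 0.5 * 10 ** (EbN0_dB / 10)            # rate 1/2: E = Eb/2
    D_soft = math.exp(-EsN0)                     # AWGN, soft decisions
    p = 0.5 * math.erfc(math.sqrt(EsN0))         # BSC crossover after hard demodulation
    D_hard = 2 * math.sqrt(p * (1 - p))
    print(EbN0_dB, "dB  soft:", pb_bound(D_soft, K7), " hard:", pb_bound(D_hard, K7))

The printed gap between the two columns illustrates the roughly 2 dB hard/soft difference noted above.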


Codes for Multiamplitude Signals

In this section we consider coding for multiamplitude signals. The application of this is to bandwidth-constrained channels. We can consider as a baseline a two-dimensional modulation system transmitting 2400 symbols per second. If each symbol represents 4 bits of information, then the data rate is 9600 bits per second. We would like to have more signals per dimension in order to increase the data rate. However, we must try to keep the signals as far apart from each other as possible (in order to keep the error rate low), so increasing the size of the signal constellation at fixed minimum distance will increase the total signal energy transmitted. The signal sets constructed here are not linear in nature, so the direct application of linear block codes is not very productive.

32-ary signal sets

Consider the 32-ary cross QAM signal set shown below (a 6 x 6 grid of points at odd-integer coordinates with the four corners removed). The average energy is 20, the minimum distance is 2, and the rate is 5 bits per symbol, i.e., 2.5 bits/dimension. Clearly this is a nonlinear code in that the sum of two codewords is not a codeword.
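The average energies quoted in this section are easy to check numerically; the sketch below (ours) computes them for the 32-cross and 64-square constellations used here:

# Average energy of the 32-cross and 64-QAM square constellations (our check).
from itertools import product

def avg_energy(points):
    return sum(x * x + y * y for x, y in points) / len(points)

qam64 = list(product([-7, -5, -3, -1, 1, 3, 5, 7], repeat=2))
cross32 = [(x, y) for x, y in product([-5, -3, -1, 1, 3, 5], repeat=2)
           if not (abs(x) == 5 and abs(y) == 5)]        # 6x6 grid minus the corners
print(len(cross32), avg_energy(cross32))                # 32 points, energy 20.0
print(len(qam64), avg_energy(qam64))                    # 64 points, energy 42.0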

There are several possible ways to improve the performance of the constellation. First, one could add redundancy: for example, use a binary code, make hard decisions, and use the code to correct errors. However, this improved performance is at the expense of a lower data rate (we must transmit the redundant bits). A second possible way of adding redundancy is to increase the alphabet size of the signal set but then allow only certain subsets of all possible signal sequences. We will show that considerable gains are possible with this approach. So first we consider expanding the constellation size.

64-ary signal sets

Consider the 64-ary QAM signal set shown below, an 8 x 8 grid of points at the odd-integer coordinates +/-1, +/-3, +/-5, +/-7. The average energy is 42.

Figures (VIII-65 through VIII-68): the 64-QAM square constellation and two alternatives. The first alternative is a modified QAM constellation (used in the Paradyne 14.4 kbit modem), a rearranged 64-point set with points extending to +/-9 on the axes, which has a slightly smaller average energy than the square. The second alternative is a hexagonal constellation with a lower average energy still, but each interior point now has 6 nearest neighbors, compared to the four neighbors of the rectangular structures.

c A B A B A B A B C D C D C D C D A B A B A B A B C D C D C D C D A B A B A B A B of 6). A block code for this signal partition consists of codewords of the for (A,A,A,A) (B,B,B,B) (A,A,D,D) (B,B,C,C) (A,D,A,D) (B,C,B,C) (A,D,D,A) (B,C,C,B) (D,A,A,D) (C,B,B,C) (D,A,D,A) (C,B,C,B) (D,D,A,A) (C,C,B,B) (D,D,D,D) (C,C,C,C) C D C D C D C D A B A B A B A B C D C D C D C D The iniu distance between points in subset A is now (or a iniu distance squared That is the coponents are either all fro the sets A and D or all fro the sets C and B. The nuber of ties fro any set is even. The iniu distance of this code/odulation is deterined as follows. Two codewords of the for AA A but differing in exactly one position has squared Euclidean distance of 6. Two codewords for the for AA A and AA D have squared Euclidean distance of 8+8=6. Two codewords of the for AA A and BB B have squared Euclidean distance of +++=6. Thus it is easy to verify the iniu squared Euclidean distance of this code is 6 or ties larger than 6 QAM. The nuber of bits per diension is calculated as bits to deterine a codeword and D A B A A VIII-7 VIII-7 bits to deterine which point in a subset to take. Thus to chose the four subsets requires 6 bits. Thus we have 6+= bits in 8 diensions or a rate of.5 bits per diension. We could copare this to a QAM syste which also has.5 bits/diension. The iniu distance squared of QAM is and the signal power is (copared to for 6QAM). Thus we have increased the signal power by a factor of but have increased the squared-euclidean distance by a factor of. The net coding gain is or db. (Can you calculate the nuber of nearest neighbors?) Thus when coparing a coded syste with a certain constellation and an uncoded syste with soe other constellation the coding gain is defined as Coding Gain d E c Trellis Codes for 8-SK Suppose we want to transit bits per diensions (one I-Q sybol). This is easy with -SK. The odulation uses one of signals at four different phases. The constellation is shown below. where c u is the power (or energy) of the coded (uncoded) signal set and d E corresponding Euclidean distance. d E u u de c u is the The probability that we confuse signal with signal is signal is Q d σ where d and σn. VIII-75 VIII-76

sin π 8-SK Constellation One way to iprove the perforance is to use soe sort of coding. That is, add redunandant bits and use the distance of the code to protect fro errors. However, for a fixed bandwidth (odulation sybol duration) adding an error control code will decrease the inforation rate. We would like to keep the inforation rate constant but decrease the required energy. Suppose we add ore signal points but then only allow certain points be transitted. So, for exaple, consider doubling the nuber of points to 8 fro and then using a trellis to decide which points to transit as shown below. d 7 d 7 d 7 8 5 7 6 VIII-77 VIII-78 8-SK 6 6 6 6 6 6 Distance The iniu distance can be calculated as follows. Clearly the distance ust be less than the distance between two identical nodes via parallel paths. The distance between two identical nodes on parallel paths is always. The distance between two paths that diverge at soe point and then reerge later as shown in the previous figures is calculated as: d 5 8 5 5 5 d 5 8 7 5 7 7 5 7 7 5 7 Thus the iniu distance is. This is a factor of larger than the case of -SK but we have transission at the sae inforation rate. This is essentially a db gain (reduction of energy) at the sae inforation rate but with higher receiver coplexity. VIII-79 VIII-8

x x w α Miniu Bit Error robability Decoding for Convolutional Codes on this noisy version of we wish to deterine the following probabilities p x x z J reviously we derived the optial decoder for iniizing the codeword error probability for a convolutional code. That is iniizing the probability that the decoder chooses the wrong inforation sequence. In this section we derive an algorith for iniizing the probability of bit error. Consider a finite state Markov chain with state space. Let x be the sequence of rando variables representing the state at tie. Let x be the initial state of the process with p x and let x J be the final state. Since this is a Markov process we have that p x x x x x p x x M Let wx be the state transition at tie. There is a one-to-one correspondence between state sequences and transition sequences. That is the two sequences x x x contain the sae inforation. Let u l ku k ul denote a sequence. By soe echanis (e.g. a noisy channel) a noisy version of the state transition sequence is observed. Based z xj w w wj and p x These two quantities can be calculated fro σ λ by appropriate noralization. p x p x x z J x x z J z J z J x σ l λ l λ l k σ l k z J VIII-8 VIII-8 Now let α x γ z β x z z J x x We can calculate λ We can calculate σ as follows. λ β as follows. z J z J x x α z z J β γ x We now develop recursions for α x and β α z x. For x J we have z x z α x M M M z x x γ The boundary conditions are given as z x x z α z x z σ x z J z J x x x z J x z z x z x z x z α Here we are assuing the Markov chain starts in state and ends in state at tie J ( x J ). VIII-8 VIII-8

x x x β β w x w The recursion for β is given as follows. β z J M x M M M The boundary condition is x z J z J β x x z J γ β J x z x x x z x z x w w w x z x z z x w x w w x x x x x The first ter is the transition probability of the channel. The second ter is the output of the encoder when transitioning fro state to state. The last ter is the probability of going to state fro state. This will be either a nonzero constant (e.g. /) or zero. The algorith works as follows. First initialize α and β. After receiving the vector z zj perfor the recursion on α and β. Then cobine α and β to deterine λ and σ. Noralize to deterine the desired probabilities. Finally we can calculate γ as follows γ x z VIII-85 VIII-86 Now consider a convolutional code which is used to transit inforation. The input sequence to the encoder is u uj wherezeroshavebeenappendedto the input sequence to force the encoder to the all zero state at tie J. We wish to deterine the iniu bit error probability decision rule for bit u. The input sequence deterines a state transition sequence x J. The state sequence deterines the output code sybols c cn. The output sybols are odulated and the received and a decision statistic is derived for each coded sybol via a channel p z c. Based on observing r we wish to deterine the optiu rule for deciding if u or u. The optial decision rule is to copute the log-likelihood ratio and copare that to zero λlog log log log p u z p u z :u p x px :u :u σ :u σ :u :u α α γ γ z z Turbo Codes Inforation RSC Interleaver RSC Figure : Turbo Code Encoder VIII-87 VIII-88

Deinterleaver Decoder Interleaver Interleaver Decoder Figure : Recursive Systeatic Encoder Figure 5: Decoding Architecture VIII-9 VIII-9 VIII-89 VIII-9

VIII-9