SENS'06 Second Scientific Conference with International Participation SPACE, ECOLOGY, NANOTECHNOLOGY, SAFETY, 14-16 June 2006, Varna, Bulgaria

SIMULATION ANALYSIS OF THE VITERBI CONVOLUTIONAL DECODING ALGORITHM

Teodor Iliev
University of Rousse, Department of Communication Technique and Technologies, e-mail: tediliev@yahoo.com

Keywords: convolutional codes, Viterbi algorithm, Hamming distance

Abstract. The advantage of Viterbi decoding, compared with brute-force decoding, is that the complexity of Viterbi decoding is not a function of the number of symbols in the codeword sequence. The Viterbi algorithm removes from consideration those trellis paths that cannot be candidates for the maximum likelihood choice. The decoder continues in this way to advance deeper into the trellis, making decisions by eliminating the least likely paths. The paper presents an example of Viterbi convolutional decoding in which the goal of selecting the optimum path can be expressed, equivalently, as choosing the codeword with the maximum likelihood metric or as choosing the codeword with the minimum Hamming distance. We propose encoding and decoding structures with their trellis diagrams, and algorithms for hard- and soft-decision decoding. The results obtained from the simulation model provide the opportunity to assess the quality of decoding.

INTRODUCTION

Convolutional encoding is a powerful method for forward error correction of a binary sequence in digital communication systems. Maximum likelihood (ML) estimation of the information bits gives the best performance as far as the block error rate is concerned [1]. A convolutional code can be represented by a trellis diagram. Starting from a given initial state, a binary input sequence determines a unique path in the trellis. The Viterbi algorithm is an efficient way to find the best path in the trellis. For each state in the trellis, the algorithm recursively updates the best path ending in that state, which is called a survivor path.
The architecture of a Viterbi decoder consists of three main units: a branch metric computation unit (BMU), an add-compare-select unit (ACSU) and a trace-back unit (TBU). At each time instant there are 2^(K-1) states in the trellis, where K is the constraint length, and each state can be entered by means of two paths. Viterbi decoding consists of computing the metrics of the two paths entering each state and eliminating one of them. This computation is done for each of the 2^(K-1) nodes at time t_i; then the decoder moves to time t_(i+1) and repeats the process [2,3].

Viterbi algorithm:
1. For i = 1, 2, ..., L, computation of the path metric for each path from state a to the other states;
2. i -> i + 1: computation of the accumulated path metric by adding the branch metric;
3. For each state, selection of the path with the maximum metric (survivor);
4. Repetition of steps 2 and 3 while i < N + L.

An example of Viterbi convolutional decoding:
In the example, the code sequence X is transmitted and the sequence Z is received with an error in position 4; the bit error probability of the channel is p < 0.5, with transition probabilities p(z_u | y^_u) = 1 - p for z_u = y^_u and p(z_u | y^_u) = p for z_u != y^_u. With the abbreviations a = log(1 - p) and b = log p it holds that a > b, so every accumulated path metric takes the form (number of undisturbed bits)*a + (number of disturbed bits)*b, and the best (most likely) path is the one with the maximum metric.

Fig. 1: Decoder trellis diagram

Viterbi Algorithm with Hamming Distance Metric

For the decoder trellis it is convenient to label each trellis branch at time t_i with the Hamming distance between the received code symbols and the corresponding branch word from the encoder trellis. The decoding algorithm uses these Hamming distance metrics to find the most likely (minimum distance) path through the trellis [3].

Hamming distance: d(Z, Y^) = number of disturbed bits; M - d(Z, Y^) = number of undisturbed bits.

Transition probabilities:
In the case of a bit error p(z_u | y^_u) = p, and in the error-free case p(z_u | y^_u) = 1 - p. For a received sequence Z of M bits and a candidate codeword Y^, the log-likelihood metric is

F = log p(Z | Y^) = d(Z, Y^)*log p + (M - d(Z, Y^))*log(1 - p)
  = d(Z, Y^)*log(p / (1 - p)) + M*log(1 - p),

where M*log(1 - p) is constant and, because 0 < p < 0.5, the factor log(p / (1 - p)) is negative. Maximization of F therefore corresponds to minimization of d. This gives a simplified Viterbi algorithm with minimization of the Hamming path metric

F~ = sum_i d(Z_i, Y^_i),

for which the bit error probability need not be known.

Soft-Decision Decoding [4]

Until now we have considered hard-decision decoding according to Y = (y_1 y_2 ... y_M) with y_i in {0, 1} and Z = (z_1 z_2 ... z_M) with z_i in {0, 1}. Real transmission channels are analog: at the output of the demodulator we get analog samples w, and the elements of Z are obtained by quantizing the samples of w. The cascade of modulator, analog channel (with interference), demodulator and decision device can be regarded as a discrete channel.

Fig. 2: Transmission model (channel encoding - modulator - analog channel - demodulator - decision - channel decoding)

Fig. 3: Hard and soft decision (transition probabilities p(z_j | y_i))
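The hard-decision Viterbi algorithm with the Hamming path metric described above can be sketched in Python. This is a minimal illustrative sketch, not the paper's simulation model: the rate-1/2, constraint-length K = 3 code with generator polynomials (7, 5) in octal is an assumption, since the paper does not state the generators of its example.

```python
# Minimal hard-decision Viterbi decoder for a rate-1/2, K = 3 convolutional
# code. The generators (7, 5) octal are an illustrative assumption.

G = (0b111, 0b101)  # generator polynomials, constraint length K = 3

def encode(bits):
    """Encode a bit list, appending K-1 zero tail bits to flush the encoder."""
    state, out = 0, []
    for b in bits + [0, 0]:
        state = ((state << 1) | b) & 0b111   # last K input bits, newest in LSB
        for g in G:
            out.append(bin(state & g).count("1") % 2)
    return out

def viterbi_decode(received, n_info):
    """Hard-decision Viterbi decoding with the Hamming path metric."""
    n_states = 4                              # 2^(K-1) states
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)     # encoder starts in the zero state
    paths = [[] for _ in range(n_states)]
    for t in range(n_info + 2):               # info bits + 2 tail bits
        r = received[2 * t: 2 * t + 2]        # received branch word
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):                  # hypothesised input bit
                full = ((s << 1) | b) & 0b111
                ns = full & 0b11              # next state
                branch = [bin(full & g).count("1") % 2 for g in G]
                d = sum(x != y for x, y in zip(branch, r))   # Hamming distance
                if metric[s] + d < new_metric[ns]:           # add-compare-select
                    new_metric[ns] = metric[s] + d
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[0][:n_info]                  # trellis is terminated in state 0

info = [1, 0, 1, 1, 0]
code = encode(info)
code[4] ^= 1                                  # introduce one channel error
assert viterbi_decode(code, len(info)) == info
```

Because the metric is the accumulated Hamming distance, the survivor kept at each state in the add-compare-select step is the path with the minimum metric, which is equivalent to maximizing the log-likelihood F.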
Fig. 4: Hard and soft decoding decision

With hard decision, z_i in {0, 1}; with soft decision, z is obtained by quantizing w with Q = 8 levels, e.g. quantization within the range with 3 bits. The received sample w is disturbed by Gaussian noise, so the conditional probability densities overlap; by soft decision (quantization of w) we get additional information about the reliability of a decision z_i = 0 or z_i = 1.

The conditional probability density functions, on the premise of an additive white Gaussian noise (AWGN) channel, are

p(w | y_i) = (1 / (sqrt(2*pi)*sigma)) * exp(-(w - y_i)^2 / (2*sigma^2)).

Viterbi Algorithm with Euclidean Metric

F~ = sum_(i=1..M) (z_i - y^_i)^2,

and decoding consists in the minimization of F~. In the case of infinitely fine quantization there is a gain of about 2.2 dB in SNR over hard decision; in the case of 8-level quantization the gain becomes about 2 dB.

Example: comparison of Viterbi decoding with soft and hard decision for the GSM convolutional encoder (full-rate channel).

[Plot: bit error probability P_b versus E_b/N_0 in dB for uncoded transmission, hard-decision decoding and soft-decision decoding, showing the soft-decision gain.]
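The soft-decision quantities can be illustrated with a short Python sketch: a 3-bit (Q = 8 level) uniform quantizer for the demodulator sample w, and the squared Euclidean branch metric. The quantizer range [0, 1] and the bit-to-amplitude mapping are illustrative assumptions, not taken from the paper.

```python
# Sketch of the soft-decision chain of Fig. 4: a 3-bit uniform quantizer
# for the analog sample w, and the squared Euclidean metric used by the
# soft-decision Viterbi algorithm. The range [0, 1] is an assumption.

def quantize(w, levels=8):
    """Map an analog sample w in [0, 1] to one of `levels` values in [0, 1]."""
    w = min(max(w, 0.0), 1.0)
    step = 1.0 / (levels - 1)
    return round(w / step) * step

def euclidean_metric(z, branch):
    """Squared Euclidean distance between soft samples z and a branch word."""
    return sum((zi - bi) ** 2 for zi, bi in zip(z, branch))

# A sample near 1 keeps its reliability information after quantization,
# so the metric still clearly prefers the branch word (1, 0) over (0, 1):
z = [quantize(0.9), quantize(0.2)]
assert euclidean_metric(z, (1, 0)) < euclidean_metric(z, (0, 1))
```

A hard decision would collapse 0.9 and 0.6 to the same value 1; the soft samples preserve how close each sample was to the decision threshold, which is exactly the reliability information exploited by the Euclidean metric.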
Rate-Compatible Punctured Convolutional Codes (RCPC)

The coding rate (error protection) is modified by periodic puncturing of the mother code. Example: puncturing of a rate-1/2 code with input X = (x_0, x_1, x_2, x_3, x_4, ...).

Fig. 5: Block diagram (puncturing with period p = 4)

Without puncturing, each information symbol produces two code bits. With puncturing using the matrix A~, within one puncturing period 4 information symbols are transformed into 5 transmitted bits, i.e. code rate r = 4/5.

Puncturing matrix A_1: 5 out of 8 elements are non-zero, coding rate r = 4/5.

Puncturing matrix A_2: 6 out of 8 elements are non-zero, coding rate r = 4/6 = 2/3.
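Puncturing with period p = 4 can be sketched in Python. The particular matrix A1 below is an assumption for illustration: the text only states that 5 of its 8 elements are non-zero, which yields rate r = 4/5.

```python
# Sketch of period-4 puncturing of a rate-1/2 mother code, as in Fig. 5.
# The exact pattern of A1 is an illustrative assumption; only its number
# of non-zero elements (5 of 8) is given in the text.

A1 = [[1, 1, 1, 1],   # first coded bit: kept for all 4 symbols of the period
      [1, 0, 0, 0]]   # second coded bit: kept only for the first symbol

def puncture(coded, matrix):
    """Drop every coded bit whose puncturing-matrix entry is 0."""
    n, p = len(matrix), len(matrix[0])   # n streams (mother rate 1/n), period p
    out = []
    for i, bit in enumerate(coded):
        symbol, stream = (i // n) % p, i % n   # position within the period
        if matrix[stream][symbol]:
            out.append(bit)
    return out

# One period: 4 information symbols -> 8 mother-code bits -> 5 sent, r = 4/5
mother = [0, 1, 1, 1, 0, 0, 1, 0]
assert len(puncture(mother, A1)) == 5
```

Replacing A1 by a matrix with 6 non-zero elements transmits 6 of the 8 bits per period and gives r = 4/6 = 2/3, which is how the rate-compatible family is obtained from one mother code.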
CONCLUSIONS

By switching the puncturing scheme, unequal error protection can be achieved according to the different bit error sensitivities of the different bit classes of a coded speech frame. The major drawback of the Viterbi algorithm is that, while the error probability decreases exponentially with the constraint length, the number of code states, and consequently the decoder complexity, grows exponentially with the constraint length. On the other hand, the computational complexity of the Viterbi algorithm is independent of the channel characteristics: compared to hard-decision decoding, soft-decision decoding requires only a trivial increase in the number of computations.

REFERENCES
1. Iliev, T., Petkov, G., Digital Processing and Transmission of Signals, Printing house of the University of Rousse "Angel Kanchev", 2005 (in Bulgarian).
2. Abbasfar, A., Yao, K., Survivor memory reduction in the Viterbi algorithm, IEEE Communications Letters, vol. 9, no. 4, pp. 352-354, April 2005.
3. Sklar, B., Digital Communications, 2nd ed., Prentice Hall PTR, New Jersey, 2001.
4. Vary, P., Advanced Channel Coding and Modulation, master program Communication Engineering, RWTH Aachen, 2005.
5. Viterbi, A., Error bounds for convolutional codes and an asymptotically optimum decoding algorithm, IEEE Trans. Inform. Theory, vol. IT-13, pp. 260-269, April 1967.