MLSE in a single path channel. MLSE in a multipath channel. State model for a multipath channel

MLSE in a single path channel

MLSE - Maximum Likelihood Sequence Estimation. The optimal detector is the one which selects, from all possible transmitted bit sequences, the one with the highest probability. In an AWGN channel the sequence probability can be calculated as a product of the individual symbol probabilities:

p(y_1, y_2, ..., y_N | x_1, x_2, ..., x_N) = ∏_k p(y_k | x_k)    (AWGN, single path)

MLSE in a multipath channel

A multipath channel mixes the symbols, and therefore the probabilities of the individual symbols cannot be calculated independently:

p(y_1, y_2, ..., y_N | x_1, x_2, ..., x_N) = ∏_k p(y_k | x_1, ..., x_N) = ∏_k p(y_k | x_k, x_{k-1}, ..., x_{k-L+1})    (AWGN, L channel paths)

For applying MLSE we have to know how to
- create a state-based model for the channel
- calculate the conditional probability of the observed symbols

State model for a multipath channel

The observed value y_k is modelled as the convolution of the channel impulse response h_ch with the input symbols x_k, observed in the noise n_k:

y = h_ch * x + n

In an L-tap channel only the L most recent symbols x_k, x_{k-1}, ..., x_{k-L+1} contribute to the value of y_k. We define a state as the combination of the L-1 previous bits:

s_k = [x_{k-1}, x_{k-2}, ..., x_{k-L+1}]

For zero-mean noise the mean value of the sample y_k is defined by f(x_k, s_k, h_ch) and is calculated as

f(x_k, s_k, h_ch) = h_0 x_k + h_1 x_{k-1} + ... + h_{L-1} x_{k-L+1}

State model for a multipath channel (continued)

The symbols x_{k-1}, ..., x_{k-L+1} defining the state s_k can take values from the possible symbol set; for example, for BPSK modulation x_k ∈ {+1, -1}. Since each of these symbols can take two possible values, there are in total 2^(L-1) possible states, so at any one stage the trellis has 2^(L-1) states.

The new symbol x_k, the state s_k and h_ch define the new state s_{k+1}. The new state is the combination of the new bit and the L-2 youngest bits of the state s_k:

s_{k+1} = [x_k, x_{k-1}, ..., x_{k-(L-2)}] = [x_k, s_k excluding x_{k-L+1}]
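As a concrete illustration of the state model above (a sketch of my own, not part of the original slides), the short Python snippet below enumerates the 2^(L-1) BPSK states, forms the next state s_{k+1} from the pair (x_k, s_k), and evaluates f(x_k, s_k, h_ch); all function and variable names are my own choices.

# Sketch of the state model (helper names are my own, not the lecture's).
from itertools import product

def all_states(L):
    """All 2^(L-1) BPSK states s_k = (x_{k-1}, ..., x_{k-L+1})."""
    return list(product([+1, -1], repeat=L - 1))

def next_state(x_k, state):
    """The new symbol followed by the L-2 youngest symbols of s_k."""
    return ((x_k,) + state)[:len(state)]

def f(x_k, state, h):
    """Noiseless output f(x_k, s_k, h_ch) = h_0 x_k + ... + h_{L-1} x_{k-L+1}."""
    return sum(h_i * x_i for h_i, x_i in zip(h, (x_k,) + state))

h = [1.0, 0.3]                      # the two-tap channel of the example below
for s in all_states(len(h)):
    for x in (+1, -1):
        print(f"s_k={s}, x_k={x:+d} -> next state {next_state(x, s)}, f={f(x, s, h):+.1f}")

For this two-tap channel the sketch prints the two states (+1,) and (-1,) and the four noiseless outputs ±1.3 and ±0.7, which reappear in the trellis of the worked example further down.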

Transition probabilities

In a channel with only one path the observed value y_k is a function of only one symbol x_k:

y_k = h_0 x_k + n_k

In an AWGN channel the probability of the observed sample is

p(y_k | x_k, h_0) = 1/(√(2π) σ_n) · exp( -(y_k - h_0 x_k)² / (2σ_n²) )

In a multipath channel the mean of the observed value y_k is computed from the state s_k and the possible new bit x_k. The state transition probability is

p(y_k | x_k, s_k, h_ch) = 1/(√(2π) σ_n) · exp( -(y_k - (h_0 x_k + h_1 x_{k-1} + ... + h_{L-1} x_{k-L+1}))² / (2σ_n²) )

Probability of a symbol sequence

The probability of a possible transmitted symbol sequence, calculated from the observed symbols, is

p(y_1, y_2, ..., y_N | x_1, x_2, ..., x_N, h_ch) = ∏_k p(y_k | x_k, s_k, h_ch)

We can calculate it in the logarithmic domain:

log p(y_1, ..., y_N | x_1, ..., x_N, h_ch) = Σ_k log p(y_k | x_k, s_k, h_ch)
                                           = Σ_k [ log(1/(√(2π) σ_n)) - (1/(2σ_n²)) (y_k - f(x_k, s_k, h_ch))² ]

Calculation of the transition probabilities (continued)

The terms log(1/(√(2π) σ_n)) are common to all the sequences, so we can drop them. The scaling constant 1/(2σ_n²) is also the same for all the sequences: it scales them all equally, and since we only want to find the maximum over the sequences we can drop it as well. Because we also dropped the multiplication by -1, the maximization becomes a minimization:

log p(y_1, ..., y_N | x_1, ..., x_N)  →  Σ_k (y_k - f(x_k, s_k, h_ch))²

MLSE

The maximum likelihood sequence estimation (MLSE) thus becomes the problem of finding the sequence with the minimum weight:

max over all sequences x of p(y_1, ..., y_N | x_1, ..., x_N)  ⇔  min over all sequences x of Σ_k (y_k - f(x_k, s_k, h_ch))²

where f(x_k, s_k, h_ch) = h_0 x_k + h_1 x_{k-1} + ... + h_{L-1} x_{k-(L-1)}.
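To make the minimization concrete, the following sketch (my own code, not from the slides) evaluates the weight Σ_k (y_k - f(x_k, s_k, h_ch))² for every possible BPSK sequence and keeps the smallest. The search visits 2^N sequences, which is exactly the cost the Viterbi algorithm of the next section avoids.

# Brute-force MLSE sketch: minimize the squared-error weight over all
# 2^N BPSK sequences (names and conventions are my own assumptions).
from itertools import product

def sequence_weight(x_seq, y, h):
    """Sum over k of (y_k - f(x_k, s_k, h_ch))^2; symbols outside the burst
    are taken as 0, which covers the beginning and the end of the trellis."""
    L = len(h)
    total = 0.0
    for k, y_k in enumerate(y):
        mean = sum(h[i] * x_seq[k - i] for i in range(L) if 0 <= k - i < len(x_seq))
        total += (y_k - mean) ** 2
    return total

def mlse_bruteforce(y, h, n_symbols):
    """Try every +/-1 sequence of length n_symbols and return the best one."""
    return min(product([+1, -1], repeat=n_symbols),
               key=lambda x_seq: sequence_weight(x_seq, y, h))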

Viterbi algorithm

The decoder has to find the bit sequence that generates the state sequence nearest to the received sequence y.
- Each transition in the trellis depends only on the starting state and the end state.
- We know what the output from a state would be without noise.
- The noiseless state output gives the mean value for the observation y_k.
- The noise is Gaussian, so we can calculate the probability from the state transition (x_k, s_k, h_ch) and the received symbol y_k.
- The total probability of a state sequence is the product of all the probabilities along the path of that state sequence in the Markov chain.

Viterbi algorithm (continued)

- For every transition in the trellis, compute the sum of the metric of the initial state and the metric of the transition:
  L(s_k, s_{k+1}) = L(s_k) + L(y_k | s_k, s_{k+1})
- At each state, select among the incoming paths the one with the minimum metric (the survivor):
  L(s_{k+1}) = min over s_k of L(s_k, s_{k+1})
- At each state, remember the previous surviving state (in the previous stage) and the bit corresponding to the surviving path.
(A code sketch of this add-compare-select recursion follows the system description below.)

Example: Viterbi algorithm

Use a Viterbi decoder to estimate the transmitted bit sequence when the channel is h_ch = δ(0) + 0.3 δ(1). The received signal is y_rec = [ .3, 0. , , .4, 0.9 ]. The input is a sequence of binary bits b_k modulated to BPSK symbols x_k ∈ {+1, -1}. Estimate the most likely transmitted bit sequence.

System description

A two-path channel generates two scaled copies of the transmitted symbol sequence, and the receiver sees the sum of these copies plus noise:

y = h_ch * x + n,    i.e.    y_k = h_0 x_k + h_1 x_{k-1} + n_k

so the four transmitted symbols x_1, ..., x_4 produce the five received samples y_1, ..., y_5.
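The following Python sketch (my own implementation and naming, not the lecture's code) carries out the add-compare-select recursion above for BPSK over an arbitrary L-tap channel. It assumes that symbols before and after the burst are zero, so a burst of N symbols produces N + L - 1 received samples, matching the system description above.

# Viterbi (MLSE) equalizer sketch for BPSK over an L-tap channel.
# Convention (my own): symbols before/after the burst are 0, so a burst of
# N symbols produces len(y) = N + L - 1 received samples.

def viterbi_equalize(y, h):
    """Return the +/-1 sequence minimizing sum_k (y_k - f(x_k, s_k, h))^2
    together with its final path metric."""
    L = len(h)
    n_symbols = len(y) - (L - 1)
    start = (0,) * (L - 1)                 # "nothing transmitted yet"
    costs = {start: 0.0}                   # surviving metric per state
    paths = {start: []}                    # surviving symbol history per state
    for k, y_k in enumerate(y):
        new_costs, new_paths = {}, {}
        inputs = (+1, -1) if k < n_symbols else (0,)   # channel tail: no new symbol
        for state, cost in costs.items():
            for x in inputs:
                mean = sum(h_i * s_i for h_i, s_i in zip(h, (x,) + state))
                candidate = cost + (y_k - mean) ** 2        # add
                nxt = ((x,) + state)[:L - 1]                # next state s_{k+1}
                if nxt not in new_costs or candidate < new_costs[nxt]:   # compare
                    new_costs[nxt] = candidate                           # select
                    new_paths[nxt] = paths[state] + [x]
        costs, paths = new_costs, new_paths
    best = min(costs, key=costs.get)
    return paths[best][:n_symbols], costs[best]

Per received sample the work is proportional to the number of states times the number of inputs (2^(L-1) · 2 branch metrics), instead of the 2^N complete sequences of a brute-force search; an end-to-end usage example on made-up data is given at the end of this transcript.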

Construction of the Markov model

The received sample y_k is a function of L transmitted symbols. We can therefore construct a state model in which the state is defined by the L-1 previous symbols and a transition is defined by the new incoming symbol; L is the length of the channel response.

Problem visualisation

(Block diagram: the BPSK symbol sequence passes through a delay line with taps h_0 and h_1, giving y_k = h_0 x_k + h_1 x_{k-1} plus noise, which produces the received samples y_rec.)

System state model for the trellis middle section

In the trellis middle section the two states are x_{k-1} = +1 and x_{k-1} = -1, and the noiseless transition outputs h_0 x_k + h_1 x_{k-1} are ±1.3 (new symbol equal to the previous one) and ±0.7 (new symbol opposite to the previous one).

Trellis diagram construction

We assign a cost µ_{stage, initial state, final state} to each transition of the trellis. The trellis has stages 1 to 5, one per received sample y_1, ..., y_5.

Transition calculation

Assign the weights to each transition. The trellis middle section corresponds to the normal operational mode, in which both channel taps contribute.

Transitions in different trellis sections

The transitions differ in the three trellis sections: beginning, middle and end.
- At the beginning of the trellis not all the channel taps are accounted for: the last taps are not assigned any symbol, since the signal along those paths has not yet arrived. The noiseless transition outputs are ±h_0 = ±1.
- In the middle section both taps contribute, and the noiseless outputs are ±1.3 and ±0.7.
- At the end of the trellis the signal at the first taps has ended, and only the signal from the last channel taps has to be accounted for. The noiseless outputs are ±h_1 = ±0.3.
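The noiseless outputs listed for the three trellis sections can be tabulated directly; the few lines below are my own illustration for the example channel h_ch = δ(0) + 0.3 δ(1), with symbols that have not yet arrived, or whose burst has already ended, set to 0.

# Noiseless transition outputs for the three trellis sections of the
# example two-tap channel (h_0 = 1, h_1 = 0.3); my own tabulation.
h0, h1 = 1.0, 0.3
bpsk = (+1, -1)

beginning = {(x,): h0 * x for x in bpsk}              # previous symbol not yet sent
middle = {(x, p): h0 * x + h1 * p for x in bpsk for p in bpsk}
end = {(p,): h1 * p for p in bpsk}                    # no new symbol any more

print("beginning:", beginning)   # +/-1.0
print("middle:   ", middle)      # +/-1.3 (equal symbols), +/-0.7 (opposite symbols)
print("end:      ", end)         # +/-0.3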

Example: trellis with the weights on the transitions

(Figure: the five-stage trellis of the example, with the branch weights µ written on the transitions, computed from the received samples y_1, ..., y_5.)

Decoding: Stages 1-3

(Figures: the decoding of stages 1, 2 and 3, showing for each state the surviving incoming path and its cumulative metric after processing y_1, y_2 and y_3.)
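Because the numbers in these stage figures are not fully legible in this transcript, the loop below (my own code and made-up received samples, not the slide's y_rec) prints the surviving cumulative metric of each state after every stage, mirroring what the "Decoding: Stage ..." panels show for a two-tap channel. Only the metrics are propagated here; the traceback bookkeeping is in the fuller sketch given earlier.

# Stage-by-stage metric propagation for a two-tap channel (h_0 = 1, h_1 = 0.3).
# The received samples below are made up (noiseless) for illustration only.
h0, h1 = 1.0, 0.3
y = [1.0, 1.3, -0.7, -1.3, -0.3]           # illustrative, NOT the slide's y_rec

costs = {0: 0.0}                           # key = previous symbol; 0 = "none yet"
for stage, y_k in enumerate(y, start=1):
    new_costs = {}
    inputs = (+1, -1) if stage < len(y) else (0,)   # last sample: channel tail only
    for prev, cost in costs.items():
        for x in inputs:
            mu = (y_k - (h0 * x + h1 * prev)) ** 2  # branch weight
            new_costs[x] = min(new_costs.get(x, float("inf")), cost + mu)
    costs = new_costs
    print(f"stage {stage}: surviving metrics {costs}")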

Decoding: Stages 4-5

(Figures: the decoding of stages 4 and 5, showing the surviving paths and their cumulative metrics after processing y_4 and y_5.)

Example: finding the ML sequence

At the final stage (stage 5) we have two states. Select the one with the minimum metric value L and follow the remembered surviving states back along the trellis. The ML sequence is the bit sequence corresponding to this path.

Example: finding the ML sequence (continued)

The detected bit sequence has 5 symbols, while the transmitted sequence has only 4 symbols. The additional symbol at the end of the sequence is due to the convolution: convolving the symbol sequence with the channel prolongs the sequence by L-1 elements. In our case L = 2, which gives 1 additional element. The additional symbols are the same as the last L-1 symbols.
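The length bookkeeping in this last slide is easy to check numerically. The end-to-end illustration below uses made-up data and my own code (it reuses the viterbi_equalize sketch given earlier): N = 4 BPSK symbols convolved with the L = 2 tap example channel give N + L - 1 = 5 samples, from which the 4 transmitted symbols are recovered.

# End-to-end check of the sequence-length relation: N symbols in, N + L - 1
# samples out (made-up noiseless data; viterbi_equalize is defined above).
import numpy as np

h = [1.0, 0.3]                      # L = 2 taps
x = [+1, +1, -1, -1]                # N = 4 BPSK symbols
y = np.convolve(h, x)               # 5 samples: [ 1.   1.3 -0.7 -1.3 -0.3]

decoded, metric = viterbi_equalize(list(y), h)
print(decoded, metric)              # [1, 1, -1, -1] 0.0 on this noiseless data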