Channel Models with Memory


Hayder Radha
Electrical and Computer Engineering, Michigan State University

In many practical networking scenarios (including the Internet and wireless networks), the underlying channels are not memoryless. This is due to the fact that when a loss or error occurs, the probability that the next bit will be lost/corrupted is usually higher than if there were no previous errors or losses. Hence, losses/errors sometimes occur in bursts, i.e., more than one consecutive loss/error can be observed.

Therefore, we can conclude that the underlying channels of practical networks have memory. The memoryless model naturally cannot accurately represent such channels, and consequently, models that can capture this behavior with memory are needed. The most popular channel models used to represent channels with memory are discrete-time Markov chains. These chains represent a random process through a discrete set of states; hence, a Markov chain can be viewed as a finite-state process. In this case, the channel can be modeled as having a finite number of states.

At any given discrete time instance (n), the channel can be in a state S_n = s_n. When the channel is in a state S_n = s, its behavior is, in general, different from its behavior when it is in another state S_n = s'. It is important to think of the state S_n as a random variable that can take on a (discrete) number of possible values (possible states):

    S_n = s^(i),  i = 1, 2, 3, ...

Each of the possible values (states) of the state random variable S_n can be used to represent a channel behavior (or channel model). Traditionally, the possible states are depicted as distinct circles with the state identification in the center, one circle per state s^(i).

The set of possible states {s^(i), i = 1, 2, 3, ...} can be considered to be independent of the time index (n). However, if a channel can be in a possible state s^(i) at a given time index (n), the same channel may not be able to be in that state at another time index (n'); this simply implies that, in general,

    Pr[S_n = s^(i)] ≠ Pr[S_n' = s^(i)]

Example 1: Channel Models with Memory

A particular channel may take on three different possible states:

    S_n ∈ {s^(1), s^(2), s^(3)}

Each possible state s^(i), i = 1, 2, 3, represents a Binary Symmetric Channel (BSC) with an error probability ε_i.

Let p_i(n) be the probability that the channel is in state (i) at time index (n):

    p_i(n) = Pr[S_n = s^(i)],  i = 1, 2, 3

What is the probability of error at time index (n)? Let Pr(Y_n ≠ X_n) = Pr("error" at time index n), where X_n and Y_n are the channel input and output at time index (n), respectively. Then:

    Pr(Y_n ≠ X_n) = p_1(n)·ε_1 + p_2(n)·ε_2 + p_3(n)·ε_3 = Σ_{i=1}^{3} p_i(n)·ε_i
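To make the total-probability computation above concrete, here is a minimal Python sketch; the state pmf and per-state BSC error rates below are assumed illustrative numbers, not values from the lecture:

```python
# Error probability of a multi-state channel in which each state i is a BSC
# with crossover probability eps[i]:  Pr(error) = sum_i p_i(n) * eps_i.

def error_probability(p, eps):
    """Total probability of a bit error at one time index."""
    assert abs(sum(p) - 1.0) < 1e-9, "state pmf must sum to one"
    return sum(p_i * e_i for p_i, e_i in zip(p, eps))

# Assumed illustrative values for a 3-state channel:
p = [0.7, 0.2, 0.1]        # p_i(n): probability of being in state i at time n
eps = [0.001, 0.01, 0.2]   # eps_i: BSC error probability of state i
print(error_probability(p, eps))  # 0.7*0.001 + 0.2*0.01 + 0.1*0.2 = 0.0227
```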

So far, we have not explored the memory aspects of channels with memory; we have mainly focused on the possible-states aspects of these channels. Now, we move to exploring Markov chains in more detail to exploit the memory characteristics of such chains; then, we will look at important channel models with memory. Moving forward, we also start to simplify the notation used for representing states and other related variables and probability measures.

Markov Chains

Let S_n = s_n be the state of a Markov chain at any given time instance (n). Any Markov process, including any Markov chain, exhibits the Markov property: given the present state S_n = s_n of a Markov process, the future state S_{n+1} = s_{n+1} is independent of all of the past states S_{n-1}, S_{n-2}, ..., S_1, S_0.

Markov Chas Below, we beg to use a smlfed otato for reresetg the Markov roerty ad related codtoal robabltes of Markov chas at ay gve tme stace () whe the chael s a state (S = s ): [ S S S S ] = [ S S ] Pr,,,... Pr + + Markov Chas A Markov cha s a teger-valued Markov rocess; ad hece we ca use tegers to rereset the state of the rocess. At ay gve tme-dex () the chael could be a state (). At the ext tme stace ( + ) the chael trastos to aother state () wth a robablty () [ ] ( ) = Pr S = S = + 7 8 Markov Chas Markov Chas We assume that the trasto robabltes () are deedet of the tme dex () I ths case, the oe-ste trasto robabltes are kow as homogeeous trasto robabltes: = Pr [ S+ = S = ] 9 The homogeeous trastoal robabltes ca be ut a matrx that s kow as the oe-ste trasto robablty matrx P: P = [ ] = Pr S = S = + 5

Markov Chas ote that each row the oe-ste trasto robablty matrx P must add to oe: P = [ ] = Pr S+ = S = = = = Examle : Markov Chas Examle : 3 33 [ ] 34 = Pr S+ = S = 44 3 4 4 P 3 = 3 33 34 4 44 Markov Chas Markov Chas At ay tme-dex (), a Markov cha also has state robabltes (), whch rereset the robablty-mass-fucto of that Markov cha at tme-dex dex (): ( ) Pr[ S ] = = =,,3... ote that: () s the robablty that the Markov cha s state () at tme-dex () I geeral, the state robabltes, ad hece, the Markov-cha mf () s a fucto of the tmedex (), ad thus, the rocess may ot be statoary ( ) = Pr[ S = ] 3 4 6

Markov Chas Markov Chas Markov cha state dex () ( ) = Pr[ S = ] = 3,,3... If we deote to ( = ), as the tal tmedex of the Markov cha, the we have the tal mf or tal state robabltes: : [ ] () = Pr S = =,,3... tme-dex () The tal mf s eeded for a comlete characterzato of a Markov cha at all tme dces: > 5 6 Markov Chas Markov Chas We ow cosder the ot mf of a Markov cha at tme dces (k =,,,.): [ S S S k S ] = [ S = S = S = S = ] Pr,,...,... Pr,,...,... =,,3... k k k The ot mf of a Markov cha ca be exressed terms of the trasto robabltes: [ S S S ] = [ S S S S ] [ S S S ] Pr,,... Pr,,... Pr,,... [ S S S ] = [ S S ] [ S S S ] Pr,,... Pr Pr,,... 7 8 7

Markov Chas Markov Chas Takg advatage of the Markov roerty reeatedly leads to: Pr [ S, S,... S ] = [ S S ] [ S S ] [ S S ] [ S ] Pr Pr...Pr Pr [ S S S ] = [ S ] Pr,,...... Pr [ S S S] = ( ) Pr,,...... () Cosequetly: [ S S S] = ( ) Pr,,...... () [ ] ( ) Pr S, S,... S = () k = k k 9 3 Markov Chas Markov Chas Hece, the trasto robabltes ad the tal state (mf) robabltes () comletely characterze the statstcal roertes of a Markov cha [ ] ( ) Pr S, S,... S = () k = k k ote that the trasto robabltes: k =,,... k k are deedet of the tme-dex (k); ad hece, we are assumg homogeeous trasto robabltes [ ] ( ) Pr S, S,... S = () k k k = 3 3 8

Markov Chas We have to use a tme-dex (k) the subscrt of the state ( k ) sce the Markov cha could be dfferet states at dfferet tmes; however, ths does ot mea that the trasto robabltes chage wth tme k =,,... k k [ ] ( ) Pr S, S,... S = () k k k = 33 Markov Chas A ot mf of a artcular terest s the followg: Pr S =, S = [ ] + [ S = S = ] = [ S = S = ] [ S = ] Pr, Pr Pr + + [ ] Pr S =, S = = ( ) + 34 Markov Chas Markov Chas ote the dfferece betwee homogeeous ad statoary : [ ] Pr S =, S = = ( ) + homogeeous trasto robabltes (ot fucto of ) o-statoary; geeral, state robabltes are fuctos of 35 Havg a homogeeous (does ot chage over tme) oe-ste trasto matrx P does ot mea that the Markov cha s statoary. For the Markov cha to be statoary, the state-robabltes (.e., the mf of the rocess) must ot chage over tme: ( ) = Pr[ S = ] 36 9

Markov Chas We ca have a homogeeous statoary rocess whe: [ ] Pr S =, S = = + Markov Chas I may staces, a Markov cha s tally ot statoary. However, ad uder certa codtos, a Markov cha may become statoary after sometme of rug the rocess. Whe a Markov cha settles to a statoary t behavor, ts state robabltes do ot chage overtme. homogeeous trasto robabltes (ot fucto of ) Statoary Markov cha; state robabltes are ot fuctos of ( ) 37 38 Markov Chas I ths case, the state-robabltes coverge to steady-state (statoary) robabltes These steady-state state robabltes are deoted by: ( ) π 39 Markov Chas If the state-robabltes coverge to steady-state (statoary) robabltes, the we (evetually) have a homogeeous statoary rocess steady state : [ ] Pr S =, S = = π + homogeeous trasto robabltes (ot fucto of ) Statoary Markov cha steady state (s ot fucto of ) 4

Examle 3: Chael Models wth Memory Examle: A chael ca be oe of ossble Markov cha states: {,,...,... } S...... Examle 3: Chael Models wth Memory Each ossble state (=,,...) reresets a Bary Symmetrc Chael (BSC) wth a error robablty (ε ), for =,,.. ε ε ε ε...... ε ε ε ε ε ε ε ε 4 4 Examle 3: Chael Models wth Memory Let the chael Markov cha model has steady state robabltes (π, π,..., π,...) ad a oe-ste homogeeous trasto matrx P. ε ε ε ε ε ε ε ε...... ε ε ε ε 43 Examle 3: Chael Models wth Memory Evaluate the codtoal error robablty that the chael causes a error at tme-dex (+) gve that the chael causes a error at tme- dex (): ( Y X Y X ) Pr + + where X ad Y are the chael ut ad outut at tme-dex (), resectvely. 44

Examle 3: Chael Models wth Memory Soluto: Whe solvg ths roblem, we eed to take to cosderato the ossble states that the chael ca be at both tme dces (+) ad (): We also could use the defto of codtoal robabltes: Pr, Pr = ( Y X Y X ) Here: + + ( ) ( Y X Y+ X+ ) Pr( Y X ) Pr Y X = ( ). ε = π. ε = = Examle 3: Chael Models wth Memory To smlfy the otato let the error evet at tme dex () be: The: Ad: ( Y X ) ( e ) ( Y X Y X ) = ( e e ) Pr Pr + + + Pr( Y X ) = Pr ( e ) = π. ε = 45 46 Examle 3: Chael Models wth Memory We ow exress the ot robablty takg to cosderato the ossble states ( artto ) that the chael ca be at both tme dces (+) ad (). We also use the defto of codtoal robabltes: ( e e ) Pr, + ( e e+ S = S+ = ) ( S = S = ) Pr,, = = = Pr, + Examle 3: Chael Models wth Memory Usg the followg two equatos: ( ) Pr e, e S =, S = = ε. ε + + ( e e ) Pr, ( ) Pr S =, S = = π : + + ( e e+ S = S+ = ) ( S = S = ) Pr,, = = = Pr, + 47 48

Examle 3: Chael Models wth Memory Examle 3: Chael Models wth Memory The: Therefore: ( e e + ) = ε ε π Pr,. = = ( e e ) Pr + = ( e e+ ) Pr ( e ) Pr, Alteratvely: ( e e ) Pr, = π ε ε + = = 49 = = + = ( e e ) Pr π ε π. ε = ε 5 Examle 3: Chael Models wth Memory Chael Models wth Memory Alteratvely: = ( e+ e) = Pr π ε. α = π ε where: α = ε = Examle 3 rovdes a sght to the relatosh betwee the state robablty measures (trasto/steady state robabltes) of the Markov cha model of a chael ad the artcular chael arameters such as the error robabltes The focus of Examle 3 was o two tme dces () ad (+) 5 5 3

Chael Models wth Memory A mortat questo s the followg: What does hae whe we cosder more tha two tme dces? Ca we geeralze the soluto of Examle 3 to a more geeral scearo where multle tme dces are volved (t s a Markov cha after all)? Examle 4: Chael Models wth Memory Examle: The same chael Examle 3 ca be, as before, oe of ossble Markov cha states: {,,...,... } S...... The ext examle addresses ths questo. 53 54 Examle 4: Chael Models wth Memory Also, as before each ossble state (=,,...) reresets a Bary Symmetrc Chael (BSC) wth a error robablty (ε ), for =,,........ Examle 4: Chael Models wth Memory As before, the chael Markov cha model has steady state robabltes (π, π,..., π,...) ad a oeste homogeeous trasto matrx P....... ε ε ε ε ε ε ε ε ε ε ε ε ε ε ε ε ε ε ε ε ε ε ε ε 55 56 4

Examle 4: Chael Models wth Memory Evaluate the codtoal error robablty that the chael causes a error at tme-dex (+) gve that the chael causes a error tme-dces () ad ( ): ( ( Y+ X+ ) ( Y X ) ( Y X ) ) Pr Y X Y X, Y X where X ad Y are the chael ut ad outut at tmedex (), resectvely. I other words, we eed to fd: ( e e e ) Pr, + Examle 4: Chael Models wth Memory Soluto: We follow a smlar rocedure for solvg ths examle as we dd Examle 3. I ths case, we eed to take to cosderato the ossble states that the chael ca be at the three tme dces (+), () ad ( ). We also could use the defto of codtoal robabltes: ( e+ e e ) Pr, ( e+ e e ) Pr ( e, e ) Pr,, = 57 58 Examle 4: Chael Models wth Memory Recall that from Examle 3, we have the ot robablty of errors at the two (cosecutve) tme dces () ad (+): ( e e ) Pr, + = ε πα = where α = ε = ote that ths o robablty s deedet of the tme-dex (). Hece, the ot robablty of errors at the two (cosecutve) tme dces () ad ( ): ( e e ) Pr, = ε πα = 59 Examle 4: Chael Models wth Memory ow, we focus o the ot robablty of errors over the three tme dces (+), () ad ( ):,, k ( e e e ) Pr,, + = ( e+ e e S = S = S+ = k) Pr,,,, Pr ( S =, S =, S+ = k) 6 5

Examle 4: Chael Models wth Memory Usg ( e+ e e S = S = S+ = k) Pr,,,, = ε. ε. εk ( ) Pr S =, S =, S = k = π + k Examle 4: Chael Models wth Memory Hece, the ot robablty of errors over the three tme dces (+), () ad ( ): ( ) Pr e +, e, e = ε. ε. ε π k k,, k Pr ( e+, e, e ) = πε ε kεk k 6 6 Examle 4: Chael Models wth Memory Examle 4: Chael Models wth Memory Therefore: Pr ( e+, e, e ) = πε ε kεk k Pr ( e+, e, e ) = π ε ε. α where α = ε k k k = 63 Therefore: Pr ( e+, e, e ) = π ε ε. α Pr ( e+, e, e ) = π ε. β where β = ε. α α = ε k k k = 64 6

Examle 4: Chael Models wth Memory Chael Models wth Memory Fally: where ( e+ e e ) Pr, ( e e e ) Pr, + = β = ε. α ( e+ e e ) Pr ( e, e ) Pr,, = πε. β π εα. α = ε k k k = 65 Summary of Examles, 3 ad 4: Pr ( e+ e) = π ε. α πε = = Pr ( e+ e, e ) = π ε. β πε. α where α β = ε. α = kε k k = 66 Chael Models wth Memory Chael Models wth Memory Summary of Examles, 3 ad 4: Summary of Examles, 3 ad 4: ( e ) Pr π ε = = ( e ) Pr π ε = = Pr ( e+ e) = π ε. α πε = = Pr ( e+ e) = π ε. α πε α = kε k k = = = β = ε. α Pr ( e e, e ) = π ε. β π ε. α + 67 Pr ( e e, e ) = π ε. β π ε. α + 68 7

Chael Models wth Memory Key coclusos from Examles 3 ad 4 clude: The two codtoal robabltes are ot equal to each other sce kowledge of ast error evets do ot reveal the actual states that the chael (Markov cha) has bee durg the corresodg ast tme dces ( e+ e e ) ( e+ e) Pr, Pr Recursve methods could be used to evaluate dfferet robablty measures of error evets ad atters 69 7 Back to Markov Chas Back to Markov Chas I the revous examles, we have assumed that we have kowledge of the steady state robabltes (π, π,..., π,...) ow, we further exame how we could evaluate these steady state robabltes from our kowledge of the homogeeous trasto robabltes It s also mortat to ote that the steady state robabltes (π, π,..., π,...) may or may ot exst for a gve Markov cha (as characterzed by a trasto matrx P). I the followg dscusso/sequel, we assume that the steady state robabltes do exst. (Idetfyg the codtos for the exstece of the steady state robabltes s beyod the scoe of the curret dscusso.) Just lke the case for ay radom varable, ote that we ca evaluate the margal state robablty of the radom varable (S + ) at tme-dex (+), by usg the ot mf of the state radom varables (S ) ad (S + ): [ S+ = ] = [ S = S+ = ] Pr Pr, [ S+ = ] = [ S+ = S = ] [ S = ] Pr Pr Pr 7 7 8

Markov Chas Therefore, for a homogeeous (but ot ecessarly statoary) cha: [ ] Pr S = = + ( ) ( ) ( ) + = Markov Chas ow, for a statoary rocess steady-state : ( ) ( ) + = π 73 74 Markov Chas Markov Chas ote that the followg summato s over the elemets (row dex ) of a gve colum () of the trasto robablty matrx P Hece, matrx format, for a statoary Markov cha steady-state : P = π colum dex row dex colum π π π π = π π T 75 76 9

Therefore: π Markov Chas T P π π π π = π π π Markov Chas Ths matrx equato for the steady-state (statoary) robabltes (π, π,..., π,...), combed wth the costrat that the sum of the these robabltes must add u to oe, ca be used to solve for the steady state robabltes (f they exst): P T π π = 77 78 Two State Markov Chas Two state Markov chas are very oular models for chaels wth memory. The oularty of these Markov chas s due to: They are the least comlex models of Markov chas (we caot have less tha two ossble states f we are terested usg models wth memory) They rovde farly accurate reresetato of may ractcal chaels such as wreless lks ad Iteret routes 79 8

Two State Markov Chas A two-state Markov cha s characterzed by two trasto robabltes Two State Markov Chas ote that the robabltes of stayg the same state ca be evaluated from the robabltes of trastg from oe state to aother: = = 8 8 Two State Markov Chas Two State Markov Chas Therefore, a two-state Markov cha has a x trasto robablty matrx P: P = The steady state robabltes ca be foud from the followg costrats: π π π = π π + T 83 84

Two State Markov Chas Usg the frst costrat leads to: π π = π π π + π π + π Two State Markov Chas Ths leads to the followg equatos: = = ( ) ( ) π + π π + π 85 86 Hece: Two State Markov Chas ( ) + ( ) π + π π + π ( π π ) = ( ) Two State Markov Chas Therefore, we eed to use the secod costrat o the steady state robabltes (ther sum has to be oe), ad the costrat derved from the trasto matrx P: π + π = ( π π ) = ( ) 87 88

Two State Markov Chas Ths leads to the followg exressos for the steady state robabltes: Two State Markov Chas A mortat asect of two-state Markov chas s ther memory characterstcs. + + Oe aroach for characterzg the memory measure of these chas s to evaluate the codtos that lead to a memoryless cha. + + 89 9 Two State Markov Chas ote that a memoryless scearo occurs f the state robabltes are the same as the trasto robabltes. Two State Markov Chas Hece, we have a memoryless cha whe: I other words, ths mles that beg a artcular state t s deedet d of the revous state t (or the robablty of movg to a state s deedet of the curret state). + = + = + + + + 9 9 3

Two State Markov Chas Cosequetly, a memoryless cha has: + = Memoryless Two State Markov Chas Ths leads to the followg measure for the memory of a two state Markov cha: Memory ( ) μ = + + + + + 93 94 Postve Memory The Memory Plae : Two State Markov Chas + = μ = Memoryless ( ) μ > ( ) + < ( ) + > μ < ( ) μ = + egatve Memory 95 Process stays the tal state forever μ = + ( ) + = The Memory Plae : Two State Markov Chas μ = + = μ > ( ) + < ( ) ( ) + > μ < μ = ( ) + = Process oscllates forever 96 4

Examle 5: The Glbert-Ellott Chael The Glbert-Ellot Chael (GEC) s a two-state Markov cha wth each state reresetg a Bary-Symmetrc-Chael (BSC). Tradtoally, the two states of a GEC s referred to as the good state ad bad state. The BSC error robabltes are deoted by ε g ad ε b for the good ad bad states, resectvely. The followg codto s assumed for ay GEC: ε g < ε b 97 98 The Glbert-Ellott Chael The Glbert-Ellott Chael The trasto robabltes ad steady state robabltes are deoted usg the good ad bad desgato of the GEC chael model gb Hece: g gb bg + bg gb b gb gb + bg gg g b bb gg g b bb ε g ε g bg ε b ε b ε g ε g bg ε b ε b ε g ε b ε g ε b 99 ε g ε b ε g ε b 5

Examle 6: The Glbert Chael The Glbert chael, whch s a secal case of the Glbert- Ellot Chael (GEC), s also a two-state Markov cha wth each state reresetg a Bary-Symmetrc-Chael (BSC). Uder the Glbert chael, the two states, the good state ad bad state, rereset smlfed BSC chaels. I ths case, the BSC error robabltes are ε g = ad ε b = for the good ad bad states, resectvely: ε = ε = g b The Glbert Chael Therefore, the Glbert chael does ot cause ay errors whe t s the good state; Meawhle, the chael causes a error wth robablty oe whe t s the bad state Hece, ths case, the error rocess s syoymous to the state rocess : whe the chael s good there s o error, ad whe the chael s bad there s a error ε = ε = g b The Glbert Chael The Glbert Chael Therefore, the Glbert chael could be rereseted as follows: gb Hece: Pr[ error] = b gb gb + bg gg g bg b bb Pr[ o error] = g gb bg + bg 3 4 6