EE401: Advanced Communication Theory
Professor A. Manikas
Chair of Communications and Array Processing
Imperial College London

Introductory Concepts

Prof. A. Manikas (Imperial College), EE.401: Introductory Concepts, v.17
Table of Contents
1 Introduction
2 Communication Channels: Continuous and Discrete
   Continuous Channels
   Discrete Channels
3 Communication Systems - Block Diagrams
4 Digital Modulators/Demodulators
5 Appendices
   A: Comm Systems: Basic Performance Criteria
   B: Additive Noise
      Additive White Gaussian Noise (AWGN)
      Bandlimited AWGN
      "I" and "Q" Noise Components
      Tail function (or Q-function) for Gaussian Signals
   C: Tail Function Graph
   D: Fourier Transform Tables
Introduction

With reference to the following block structure of a Digital Communication System (DCS), this topic is concerned with the basics of both continuous and discrete communication channels.

[figure: block structure of a DCS]
Just as with sources, communication channels are either discrete channels or continuous channels, and are also classified as
1. wireless channels (in this case the whole DCS is known as a Wireless DCS), or
2. wireline channels (in this case the whole DCS is known as a Wireline DCS).

Note that a continuous channel is converted into (becomes) a discrete channel when a digital modulator is used to feed the channel and a digital demodulator provides the channel output.

Examples of channels, with reference to the DCS shown on the previous page:
discrete channels:
- input: A2, output: Â2 (alphabet: levels of the quantiser, Volts)
- input: B2, output: B̂2 (alphabet: binary digits or binary codewords)
continuous channels:
- input: A1, output: Â1 (Volts) - continuous channel (baseband)
- input: T, output: T̂ (Volts) - continuous channel (baseband)
- input: T1, output: T̂1 (Volts) - continuous channel (bandpass)
Continuous Channels

A continuous communication channel (which can be regarded as an analogue channel) is described by:
- an input ensemble (s(t), pdf_s(s)) and PSD_s(f),
- an output ensemble (r(t), pdf_r(r)),
- the channel noise (AWGN) n_i(t) and the channel attenuation factor β,
- the channel bandwidth B and the channel capacity C.
Discrete Channels

A discrete communication channel has a discrete input and a discrete output. It is described by:
- an input ensemble (X, p), where the symbols applied to the channel input for transmission are drawn from a finite alphabet,
- an output ensemble (Y, q), where the symbols appearing at the channel output are also drawn from a finite alphabet,
- the channel transition probability matrix F.
In many situations the input and output alphabets X and Y are identical, but in the general case they are different. Instead of X and Y, it is common practice to use the symbols H and D, and thus to define the two alphabets and the associated probabilities as

input:  H = {H_1, H_2, ..., H_M},  p = [Pr(H_1), Pr(H_2), ..., Pr(H_M)]^T = [p_1, p_2, ..., p_M]^T
output: D = {D_1, D_2, ..., D_K},  q = [Pr(D_1), Pr(D_2), ..., Pr(D_K)]^T = [q_1, q_2, ..., q_K]^T

where p_m abbreviates the probability Pr(H_m) that the symbol H_m may appear at the input, while q_k abbreviates the probability Pr(D_k) that the symbol D_k may appear at the output of the channel.
The probabilistic relationship between input symbols H and output symbols D is described by the so-called channel transition probability matrix F, which is defined as follows:

    F = [ Pr(D_1|H_1), Pr(D_1|H_2), ..., Pr(D_1|H_M)
          Pr(D_2|H_1), Pr(D_2|H_2), ..., Pr(D_2|H_M)
          ...
          Pr(D_K|H_1), Pr(D_K|H_2), ..., Pr(D_K|H_M) ]    (1)
Pr(D_k|H_m) denotes the probability that symbol D_k ∈ D will appear at the channel output, given that H_m ∈ H was applied to the input. The input ensemble (H, p), the output ensemble (D, q) and the matrix F fully describe the functional properties of the channel.

The following expression describes the relationship between q and p:

    q = F · p    (2)

Note that in a noiseless channel

    D = H    (3)
    q = p, i.e. the matrix F is an identity matrix: F = I_M    (4)
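Equations (2)-(4) can be checked numerically. The sketch below assumes, purely for illustration, a binary symmetric channel with crossover probability eps (this specific channel is not taken from the notes); each column of F is the distribution of D given one input symbol, so the columns sum to one.

```python
import numpy as np

# Hypothetical binary symmetric channel (M = K = 2), crossover eps:
# F[k, m] = Pr(D_k | H_m), so every column of F sums to 1.
eps = 0.1
F = np.array([[1 - eps, eps],
              [eps, 1 - eps]])
p = np.array([0.7, 0.3])                  # input probabilities Pr(H_m)

assert np.allclose(F.sum(axis=0), 1.0)    # columns are distributions

q = F @ p                                 # Eq. (2): q = F.p, gives ~[0.66, 0.34]

# Noiseless channel: F = I_M reproduces the input distribution, Eqs. (3)-(4)
assert np.allclose(np.eye(2) @ p, p)
```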
Joint Transition Probability Matrix

The joint probabilistic relationship between input channel symbols H = {H_1, H_2, ..., H_M} and output channel symbols D = {D_1, D_2, ..., D_K} is described by the so-called joint-probability matrix

    J ≜ [ Pr(H_1, D_1), Pr(H_1, D_2), ..., Pr(H_1, D_K)
          Pr(H_2, D_1), Pr(H_2, D_2), ..., Pr(H_2, D_K)
          ...
          Pr(H_M, D_1), Pr(H_M, D_2), ..., Pr(H_M, D_K) ]^T    (5)

J is related to the forward transition probabilities of the channel by the following expression (a compact form of Bayes' theorem):

    J = F · [ p_1  0   ...  0
              0    p_2 ...  0
              ...
              0    0   ...  p_M ] = F · diag(p)    (6)
Note: this is equivalent to a new (joint) source having alphabet {(H_1, D_1), (H_1, D_2), ..., (H_M, D_K)} and ensemble (joint ensemble) defined as follows:

    (H × D, J) = { (H_m, D_k), Pr(H_m, D_k) : 1 ≤ m ≤ M, 1 ≤ k ≤ K }    (7)

where Pr(H_m, D_k) = J_km.
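Equation (6) and the marginals of the joint ensemble can also be verified numerically. The sketch reuses the illustrative binary symmetric channel assumed above (not a channel from the notes): summing J over the output index must recover p, summing over the input index must recover q = F·p, and all entries must sum to one.

```python
import numpy as np

# Same hypothetical BSC; J[k, m] = Pr(H_m, D_k) as in Eqs. (5)-(7).
eps = 0.1
F = np.array([[1 - eps, eps],
              [eps, 1 - eps]])
p = np.array([0.7, 0.3])

J = F @ np.diag(p)                        # Eq. (6): compact Bayes theorem

assert np.isclose(J.sum(), 1.0)           # a joint ensemble sums to 1
assert np.allclose(J.sum(axis=0), p)      # marginalising over D recovers p
assert np.allclose(J.sum(axis=1), F @ p)  # marginalising over H recovers q
```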
Communication Systems - Block Diagrams

Block Structure of a Digital Comm System

[figure: block diagram of a digital communication system]
Block Structure of a Spread Spectrum Comm System

SSS: [figure]
CDMA: [figure]
Digital Modulators/Demodulators

A digital modulator is described by M different channel symbols. These channel symbols are ENERGY SIGNALS of duration T_cs.

Digital Modulator: [figure]
Digital Demodulator: [figure]

Note: it is common practice to ignore the up/down conversion and to work in baseband.
If M = 2: Binary Digital Modulator → Binary Comm. System
If M > 2: M-ary Digital Modulator → M-ary Comm. System

Note: a continuous channel is converted into (becomes) a discrete channel when a digital modulator is used to feed the channel and a digital demodulator provides the channel output.
In this topic we will focus on the following block of a digital communication system (β = 1 is assumed):

[figure: modulator-channel-demodulator block]

This is the heart of a communication system, and some elements of detection/decision theory will be employed for its investigation.
Appendices - A: Basic Performance Criteria

    SNR_in = (power of signal at T̂) / (power of noise at T̂)
           = E{(βs(t))²} / E{n(t)²}
           = β² P_s / (N_0 B),   where N_0 B = P_n    (8)

    p_e = BER at point B̂    (9)

    SNR_out = (power of signal at Â) / (power of noise at Â) = f{p_e}    (10)

where f{·} denotes a function of p_e.
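A minimal numeric sketch of Eq. (8), with P_n taken from Eq. (14) of Appendix B. All parameter values below are illustrative assumptions, not values from the notes.

```python
import math

beta = 0.5      # channel attenuation factor (assumed)
P_s = 2.0       # signal power at the channel input, Watts (assumed)
N0 = 1e-6       # noise level, so the AWGN PSD is N0/2, W/Hz (assumed)
B = 1e5         # channel bandwidth, Hz (assumed)

P_n = N0 * B                           # noise power, Eq. (14)
snr_in = (beta**2 * P_s) / P_n         # Eq. (8), linear scale
snr_in_dB = 10 * math.log10(snr_in)    # same quantity in dB

print(round(snr_in, 6))      # 5.0
print(round(snr_in_dB, 2))   # 6.99
```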
Appendices - B: Additive Noise

Types of channel signals:
- s(t), r(t), n(t): bandpass
- n_i(t) = AWGN: allpass
AWGN n_i(t):
- it is a random all-pass signal
- its Power Spectral Density is "white", i.e. "flat":

    PSD_{n_i}(f) = N_0 / 2    (11)

- its amplitude probability density function is Gaussian
- its autocorrelation function (i.e. FT^{-1}{PSD(f)}) is:

    R_{n_i n_i}(τ) = (N_0 / 2) δ(τ)    (12)
Bandlimited AWGN n(t):
- it is a random band-pass signal of bandwidth B (equal to the channel bandwidth)
- its Power Spectral Density is "bandlimited white":

    PSD_n(f) = (N_0 / 2) ( rect{(f + F_c)/B} + rect{(f − F_c)/B} )    (13)

- its power is:

    P_n = σ_n² = ∫ PSD_n(f) df = (N_0 / 2) · B · 2  ⇒  P_n = N_0 B    (14)
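Equation (14) can be checked by integrating the two-sided PSD of Eq. (13) numerically (a simple Riemann sum on a fine grid). The values of N_0, F_c and B are illustrative assumptions.

```python
import numpy as np

N0, Fc, B = 2e-6, 1e4, 2e3      # illustrative parameter values (assumed)

def rect(x):
    # Unit-width rectangular pulse: 1 for |x| < 1/2, else 0.
    return np.where(np.abs(x) < 0.5, 1.0, 0.0)

f = np.linspace(-2 * Fc, 2 * Fc, 400001)    # frequency grid, 0.1 Hz step
psd = (N0 / 2) * (rect((f + Fc) / B) + rect((f - Fc) / B))   # Eq. (13)
P_n = psd.sum() * (f[1] - f[0])             # Riemann-sum integral of the PSD

# Eq. (14): the two bands of width B, each at height N0/2, give N0 * B.
assert np.isclose(P_n, N0 * B, rtol=1e-3)
```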
More on n(t):
- its amplitude probability density function is Gaussian:

    pdf_n = N(0, σ_n² = N_0 B)    (15)

- it is also known as bandlimited AWGN
- it can be written as follows:

    n(t) = n_c(t) cos(2πF_c t) − n_s(t) sin(2πF_c t)    (16)
         = r_n(t) cos(2πF_c t + φ_n(t)),  where r_n(t) ≜ sqrt(n_c²(t) + n_s²(t))    (17)

where
- n_c(t) and n_s(t) are random signals with Gaussian pdf,
- r_n(t) is a random signal with Rayleigh pdf,
- φ_n(t) is a random signal with uniform pdf on [0, 2π).

N.B.: all the above are lowpass signals and appear at the Rx output. Equation (16) is known as the Quadrature Noise Representation.
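The equality of Eqs. (16) and (17) is a deterministic trigonometric identity and can be verified numerically for arbitrary n_c, n_s. In this sketch the Gaussian samples simply stand in for the slowly varying I/Q components; F_c and the time grid are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
Fc = 1e3                                   # illustrative carrier, Hz
t = np.linspace(0.0, 0.01, 1000)
n_c = rng.standard_normal(t.size)          # stand-in for n_c(t), Eq. (16)
n_s = rng.standard_normal(t.size)          # stand-in for n_s(t)

# Eq. (16): quadrature form
lhs = n_c * np.cos(2 * np.pi * Fc * t) - n_s * np.sin(2 * np.pi * Fc * t)

# Eq. (17): envelope-phase form, r_n = sqrt(n_c^2 + n_s^2), phi_n = atan2(n_s, n_c)
r_n = np.hypot(n_c, n_s)
phi_n = np.arctan2(n_s, n_c)
rhs = r_n * np.cos(2 * np.pi * Fc * t + phi_n)

assert np.allclose(lhs, rhs)               # the two forms agree pointwise
```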
"I" and "Q" Noise Components

n_c(t) (i.e. "I") and n_s(t) (i.e. "Q"):
- their Power Spectral Densities are:

    PSD_{n_c}(f) = PSD_{n_s}(f) = N_0 rect{f / B}    (18)

- their powers are:

    P_{n_c} = σ²_{n_c} = ∫ PSD_{n_c}(f) df = N_0 B,  i.e.  P_{n_c} = P_{n_s} = P_n = N_0 B    (19)

- their amplitude probability density functions are Gaussian:

    pdf_{n_c} = pdf_{n_s} = N(0, N_0 B)    (20)

- they are uncorrelated, i.e. E{n_c(t) · n_s(t)} = 0
Tail function (or Q-function) for Gaussian Signals

Probability and Probability Density Function (pdf)

Consider a random signal x(t) with a known amplitude probability density function pdf_x(x), not necessarily Gaussian. Then the probability that the amplitude of x(t) is greater than A Volts (say) is given as follows:

    Pr(x(t) > A) = ∫_A^∞ pdf_x(x) dx    (21)

e.g. if A = 3 V:  Pr(x(t) > 3V) = ∫_3^∞ pdf_x(x) dx = highlighted area
Gaussian pdf and Tail Function

If pdf_x(x) is Gaussian with mean µ_x and standard deviation σ_x (notation used: pdf_x(x) = N(µ_x, σ_x²)), then the above area is given by the Tail-function (or Q-function):

    Pr(x(t) > A) = ∫_A^∞ pdf_x(x) dx ≜ T{ (A − µ_x) / σ_x }    (22)

e.g.
- if pdf_x(x) = N(1, 4), i.e. µ_x = 1, σ_x = 2, and A = 3 V, then Pr(x(t) > 3V) = T{(3 − 1)/2} = T{1}
- if pdf_x(x) = N(0, 1) and A = 3 V, then Pr(x(t) > 3V) = T{3}

The Tail function graph is given on the next page.
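Instead of reading values off the graph, the Tail function can be evaluated in closed form via the complementary error function, T{x} = (1/2) erfc(x/√2); this reproduces the slide's examples numerically.

```python
import math

def tail(x: float) -> float:
    """Gaussian tail (Q) function: Pr(X > x) for X ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Slide example: for pdf_x = N(1, 4) and A = 3 V, Pr(x > 3V) = T{1}.
print(round(tail(0.0), 4))   # 0.5
print(round(tail(1.0), 4))   # 0.1587
print(round(tail(3.0), 4))   # 0.0013
```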
Appendices - C: Tail Function Graph

[figure: graph of the Tail function T{x}]
Appendices - D: Fourier Transform Tables

#   Description                Function                               Transform
1   Definition                 g(t)                                   G(f) = ∫ g(t) e^{−j2πft} dt
2   Scaling                    g(t/T)                                 T · G(fT)
3   Time shift                 g(t − T)                               G(f) · e^{−j2πfT}
4   Frequency shift            g(t) · e^{j2πFt}                       G(f − F)
5   Complex conjugate          g*(t)                                  G*(−f)
6   Temporal derivative        (d^n/dt^n) g(t)                        (j2πf)^n · G(f)
7   Spectral derivative        (−j2πt)^n · g(t)                       (d^n/df^n) G(f)
8   Reciprocity                G(t)                                   g(−f)
9   Linearity                  A·g(t) + B·h(t)                        A·G(f) + B·H(f)
10  Multiplication             g(t) · h(t)                            G(f) * H(f)
11  Convolution                g(t) * h(t)                            G(f) · H(f)
12  Delta function             δ(t)                                   1
13  Constant                   1                                      δ(f)
14  Rectangular function       rect{t} = 1 if |t| < 1/2, else 0       sinc{f} = sin(πf) / (πf)
15  Sinc function              sinc(t)                                rect{f}
16  Unit step function         u(t) = 1 if t > 0, 0 if t < 0          (1/2)δ(f) − j/(2πf)
17  Signum function            sgn(t) = 1 if t > 0, −1 if t < 0       −j/(πf)
18  Decaying exp (two-sided)   e^{−|t|}                               2 / (1 + (2πf)²)
19  Decaying exp (one-sided)   e^{−t} · u(t)                          (1 − j2πf) / (1 + (2πf)²)
20  Gaussian function          e^{−πt²}                               e^{−πf²}
21  Lambda function            Λ{t} = 1 − |t| if |t| ≤ 1, else 0      sinc²{f}
22  Repeated function          rep_T{g(t)} = g(t) * rep_T{δ(t)}       (1/T) comb_{1/T}{G(f)}
23  Sampled function           comb_T{g(t)} = g(t) · rep_T{δ(t)}      (1/T) rep_{1/T}{G(f)}