EE401: Advanced Communication Theory


Professor A. Manikas, Chair of Communications and Array Processing, Imperial College London
Introductory Concepts
Prof. A. Manikas (Imperial College) EE.401: Introductory Concepts v.17 1 / 27

Table of Contents
1 Introduction
2 Communication Channels: Continuous and Discrete
   - Continuous Channels
   - Discrete Channels
3 Communication Systems - Block Diagrams
4 Digital Modulators/Demodulators
5 Appendices
   - A: Comm Systems: Basic Performance Criteria
   - B: Additive Noise
     - Additive White Gaussian Noise (AWGN)
     - Bandlimited AWGN
     - "I" and "Q" Noise Components
     - Tail function (or Q-function) for Gaussian Signals
   - C: Tail Function Graph
   - D: Fourier Transform Tables

Introduction
With reference to the following block structure of a Digital Communication System (DCS), this topic is concerned with the basics of both continuous and discrete communication channels. (figure)

Just as with sources, communication channels are either discrete channels or continuous channels, and may be
1 wireless channels (in this case the whole DCS is known as a Wireless DCS), or
2 wireline channels (in this case the whole DCS is known as a Wireline DCS).
Note that a continuous channel is converted into (becomes) a discrete channel when a digital modulator is used to feed the channel and a digital demodulator provides the channel output.
Examples of channels (with reference to the DCS shown on the previous page):
- discrete channels:
  input: A2 - output: Â2 (alphabet: levels of quantiser, Volts)
  input: B2 - output: B2 (alphabet: binary digits or binary codewords)
- continuous channels:
  input: A1 - output: Â1 (Volts) - continuous channel (baseband)
  input: T - output: T (Volts) - continuous channel (baseband)
  input: T1 - output: T1 (Volts) - continuous channel (bandpass)

Communication Channels: Continuous Channels
A continuous communication channel (which can be regarded as an analogue channel) is described by:
- an input ensemble (s(t), pdf_s(s)) and PSD_s(f),
- an output ensemble (r(t), pdf_r(r)),
- the channel noise (AWGN) n_i(t) and the attenuation β,
- the channel bandwidth B and the channel capacity C.

Communication Channels: Discrete Channels
A discrete communication channel has a discrete input and a discrete output, where:
- the symbols applied to the channel input for transmission are drawn from a finite alphabet, described by an input ensemble (X, p),
- the symbols appearing at the channel output are also drawn from a finite alphabet, described by an output ensemble (Y, q),
- the input-output behaviour is described by the channel transition probability matrix F.

In many situations the input and output alphabets X and Y are identical, but in the general case these are different. Instead of using X and Y, it is common practice to use the symbols H and D and thus define the two alphabets and the associated probabilities as

input:  H = {H_1, H_2, ..., H_M},  p = [Pr(H_1), Pr(H_2), ..., Pr(H_M)]^T = [p_1, p_2, ..., p_M]^T
output: D = {D_1, D_2, ..., D_K},  q = [Pr(D_1), Pr(D_2), ..., Pr(D_K)]^T = [q_1, q_2, ..., q_K]^T

where p_m abbreviates the probability Pr(H_m) that the symbol H_m may appear at the input, while q_k abbreviates the probability Pr(D_k) that the symbol D_k may appear at the output of the channel.

The probabilistic relationship between input symbols H and output symbols D is described by the so-called channel transition probability matrix F, which is defined as follows:

F = [ Pr(D_1|H_1)  Pr(D_1|H_2)  ...  Pr(D_1|H_M)
      Pr(D_2|H_1)  Pr(D_2|H_2)  ...  Pr(D_2|H_M)
      ...
      Pr(D_K|H_1)  Pr(D_K|H_2)  ...  Pr(D_K|H_M) ]   (1)

Pr(D_k|H_m) denotes the probability that symbol D_k ∈ D will appear at the channel output, given that H_m ∈ H was applied to the input. The input ensemble (H, p), the output ensemble (D, q) and the matrix F fully describe the functional properties of the channel. The following expression describes the relationship between q and p:

q = F.p   (2)

Note that in a noiseless channel

D = H,  q = p   (3)

i.e. the matrix F is an identity matrix:

F = I_M   (4)
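Equation (2) is a single matrix-vector product. A minimal sketch (the alphabet sizes and all probabilities below are made-up illustrations, not from the notes):

```python
import numpy as np

# Hypothetical channel with M = 2 inputs and K = 3 outputs.
# Column m of F holds Pr(D_k | H_m), so every column sums to 1.
F = np.array([[0.90, 0.05],
              [0.08, 0.15],
              [0.02, 0.80]])
p = np.array([0.6, 0.4])      # input probabilities Pr(H_m)

assert np.allclose(F.sum(axis=0), 1.0)

q = F @ p                     # Equation (2): q = F.p
print(q)                      # q = [0.56, 0.108, 0.332]: the Pr(D_k)
```

Since each column of F is itself a probability distribution, q automatically sums to one.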

Joint Transition Probability Matrix
The joint probabilistic relationship between the input channel symbols H = {H_1, H_2, ..., H_M} and the output channel symbols D = {D_1, D_2, ..., D_K} is described by the so-called joint-probability matrix J, given (in transposed form) by

J^T = [ Pr(H_1, D_1)  Pr(H_1, D_2)  ...  Pr(H_1, D_K)
        Pr(H_2, D_1)  Pr(H_2, D_2)  ...  Pr(H_2, D_K)
        ...
        Pr(H_M, D_1)  Pr(H_M, D_2)  ...  Pr(H_M, D_K) ]   (5)

J is related to the forward transition probabilities of the channel by the following expression (a compact form of Bayes' Theorem):

J = F·diag(p)   (6)

where diag(p) is the M×M diagonal matrix with p_1, p_2, ..., p_M on its diagonal.
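Equation (6) can be checked numerically, together with the fact that the column sums of J recover p and the row sums recover q = F.p (the channel below uses made-up numbers, not values from the notes):

```python
import numpy as np

# Hypothetical channel: M = 2 inputs, K = 3 outputs (illustrative values).
F = np.array([[0.90, 0.05],
              [0.08, 0.15],
              [0.02, 0.80]])    # F[k, m] = Pr(D_k | H_m)
p = np.array([0.6, 0.4])        # p[m] = Pr(H_m)

J = F @ np.diag(p)              # Equation (6): J[k, m] = Pr(H_m, D_k)

print(np.allclose(J.sum(axis=0), p))      # column sums give Pr(H_m)
print(np.allclose(J.sum(axis=1), F @ p))  # row sums give Pr(D_k)
print(J.sum())                            # all entries sum to 1
```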

Note: this is equivalent to a new (joint) source having alphabet {(H_1, D_1), (H_1, D_2), ..., (H_M, D_K)} and (joint) ensemble defined as follows:

(H × D, J) = { (H_m, D_k), Pr(H_m, D_k) :  1 ≤ m ≤ M, 1 ≤ k ≤ K }   (7)

where Pr(H_m, D_k) = J_km.

Communication Systems - Block Diagrams
Block Structure of a Digital Comm System (figure)

Block Structure of a Spread Spectrum Comm System (figures: SSS; CDMA)

Digital Modulators/Demodulators
A digital modulator is described by M different channel symbols. These channel symbols are ENERGY SIGNALS of duration T_cs. (figures: Digital Modulator; Digital Demodulator)
Note: it is common practice to ignore the up/down conversion and to work in baseband.

If M = 2: Binary Digital Modulator ⇒ Binary Comm. System
If M > 2: M-ary Digital Modulator ⇒ M-ary Comm. System
Note: a continuous channel is converted into (becomes) a discrete channel when a digital modulator is used to feed the channel and a digital demodulator provides the channel output.

In this topic we will focus on the following block of a digital communication system (β = 1 is assumed): (figure)
This is the heart of a communication system, and some elements of detection/decision theory will be employed for its investigation.

Appendices - A: Basic Performance Criteria

SNR_in = (Power of signal at T) / (Power of noise at T)
       = E{(β·s(t))²} / E{n(t)²}
       = β²·P_s / (N_0·B),  where N_0·B = P_n   (8)

p_e = BER at point B   (9)

SNR_out = (Power of signal at Â) / (Power of noise at Â) = f{p_e}   (10)

where f{p_e} denotes a function of p_e.
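Equations (8) and (14) combine into a one-line SNR computation. A sketch with made-up link numbers (β, P_s, N_0 and B below are arbitrary illustrations, not values from the notes):

```python
import math

beta = 0.5        # channel attenuation (made up)
P_s = 2.0         # signal power at the channel input, W (made up)
N0 = 1e-6         # noise PSD parameter, W/Hz (made up)
B = 1e6           # channel bandwidth, Hz (made up)

P_n = N0 * B                        # Equation (14): noise power
snr_in = beta**2 * P_s / P_n        # Equation (8)
snr_in_dB = 10 * math.log10(snr_in)
print(snr_in, snr_in_dB)            # 0.5, i.e. about -3 dB
```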

Appendices - B: Additive Noise
Types of channel signals:
- s(t), r(t), n(t): bandpass
- n_i(t) = AWGN: allpass

The AWGN n_i(t):
- is a random all-pass signal
- has a "white" (i.e. flat) Power Spectral Density:
  PSD_ni(f) = N_0/2   (11)
- has a Gaussian amplitude probability density function
- has Autocorrelation function (i.e. FT^(-1){PSD_ni(f)}):
  R_nini(τ) = (N_0/2)·δ(τ)   (12)

Bandlimited AWGN n(t):
- is a random band-pass signal of bandwidth B (equal to the channel bandwidth)
- has a "bandlimited white" Power Spectral Density:
  PSD_n(f) = (N_0/2)·( rect{(f + F_c)/B} + rect{(f − F_c)/B} )   (13)
- has power
  P_n = σ_n² = ∫ PSD_n(f) df = (N_0/2)·B·2  ⇒  P_n = N_0·B   (14)

More on n(t):
- its amplitude probability density function is Gaussian:
  pdf_n = N(0, σ_n² = N_0·B)   (15)
- it is also known as bandlimited-AWGN
- it can be written as follows:
  n(t) = n_c(t)·cos(2πF_c t) − n_s(t)·sin(2πF_c t)   (16)
       = r_n(t)·cos(2πF_c t + φ_n(t)),  where r_n(t) = √(n_c²(t) + n_s²(t))   (17)
where
- n_c(t) and n_s(t) are random signals with Gaussian pdf,
- r_n(t) is a random signal with Rayleigh pdf,
- φ_n(t) is a random signal with uniform pdf on [0, 2π).
N.B.: all of the above are lowpass signals and appear at the Rx output. Equation (16) is known as the Quadrature Noise Representation.

"I" and "Q" Noise Components: n_c(t) (i.e. "I") and n_s(t) (i.e. "Q"):
- their Power Spectral Densities are
  PSD_nc(f) = PSD_ns(f) = N_0·rect{f/B}   (18)
- their powers are
  P_nc = σ_nc² = ∫ PSD_nc(f) df = N_0·B,  i.e.  P_nc = P_ns = P_n = N_0·B   (19)
- their amplitude probability density functions are Gaussian:
  pdf_nc = pdf_ns = N(0, N_0·B)   (20)
- they are uncorrelated, i.e. E{n_c(t)·n_s(t)} = 0.
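The quadrature representation of Equations (16)-(17) and the properties listed above can be sanity-checked by Monte-Carlo simulation. A sketch (N_0 and B are made-up values chosen so that σ_n² = N_0·B = 1):

```python
import numpy as np

rng = np.random.default_rng(0)

N0, B = 1e-6, 1e6            # made-up values: sigma_n^2 = N0*B = 1
sigma2 = N0 * B
N = 200_000                  # number of sample points

nc = rng.normal(0.0, np.sqrt(sigma2), N)   # "I" component n_c
ns = rng.normal(0.0, np.sqrt(sigma2), N)   # "Q" component n_s

rn = np.sqrt(nc**2 + ns**2)  # envelope r_n, Eq. (17): Rayleigh
phi = np.arctan2(ns, nc)     # phase phi_n: uniform

print(np.mean(nc * ns))      # ~0: I and Q are uncorrelated
print(np.mean(rn**2))        # ~2*N0*B: envelope power = P_nc + P_ns
print(np.mean(rn))           # ~sqrt(pi/2)*sigma_n: the Rayleigh mean
```

Histogramming rn against the Rayleigh density would make the distributional claims visible as well.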

Tail function (or Q-function) for Gaussian Signals
Probability and Probability-Density-Function (pdf): consider a random signal x(t) with a known amplitude probability density function pdf_x(x) - not necessarily Gaussian. Then the probability that the amplitude of x(t) is greater than A Volts (say) is given as follows:

Pr(x(t) > A) = ∫_A^∞ pdf_x(x) dx   (21)

e.g. if A = 3V:  Pr(x(t) > 3V) = ∫_3^∞ pdf_x(x) dx = highlighted area (figure)

Gaussian pdf and Tail function: if pdf_x(x) is Gaussian with mean µ_x and standard deviation σ_x (notation used: pdf_x(x) = N(µ_x, σ_x²)), then the above area is given by the Tail-function (or Q-function):

Pr(x(t) > A) = ∫_A^∞ pdf_x(x) dx = T{(A − µ_x)/σ_x}   (22)

e.g.
- if pdf_x(x) = N(1, 4) - i.e. µ_x = 1, σ_x = 2 - and A = 3V, then
  Pr(x(t) > 3V) = ∫_3^∞ pdf_x(x) dx = T{(3 − 1)/2} = T{1}
- if pdf_x(x) = N(0, 1) and A = 3V, then
  Pr(x(t) > 3V) = ∫_3^∞ pdf_x(x) dx = T{3}
The Tail function graph is given on the next page.
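The Tail function has no closed form, but it is expressible through the complementary error function: T{x} = (1/2)·erfc(x/√2). A minimal sketch reproducing the two examples above:

```python
import math

def tail(x: float) -> float:
    """Tail (Q-) function: Pr(X > x) for X ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# N(1, 4) with A = 3 V, then N(0, 1) with A = 3 V.
mu, sigma, A = 1.0, 2.0, 3.0
print(tail((A - mu) / sigma))   # T{1} ~ 0.1587
print(tail(3.0))                # T{3} ~ 0.00135
```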

Appendices - C: Tail Function Graph (figure)

Appendices - D: Fourier Transform Tables

 1. Definition:            g(t)  ↔  G(f) = ∫ g(t)·e^(−j2πft) dt
 2. Scaling:               g(t/T)  ↔  T·G(fT)
 3. Time shift:            g(t − T)  ↔  G(f)·e^(−j2πfT)
 4. Frequency shift:       g(t)·e^(j2πFt)  ↔  G(f − F)
 5. Complex conjugate:     g*(t)  ↔  G*(−f)
 6. Temporal derivative:   (d^n/dt^n) g(t)  ↔  (j2πf)^n·G(f)
 7. Spectral derivative:   (−j2πt)^n·g(t)  ↔  (d^n/df^n) G(f)
 8. Reciprocity:           G(t)  ↔  g(−f)
 9. Linearity:             A·g(t) + B·h(t)  ↔  A·G(f) + B·H(f)
10. Multiplication:        g(t)·h(t)  ↔  G(f) ∗ H(f)
11. Convolution:           g(t) ∗ h(t)  ↔  G(f)·H(f)
12. Delta function:        δ(t)  ↔  1
13. Constant:              1  ↔  δ(f)
14. Rectangular function:  rect{t} = { 1 if |t| < 1/2; 0 otherwise }  ↔  sinc{f} = sin(πf)/(πf)
15. Sinc function:         sinc(t)  ↔  rect{f}
16. Unit step function:    u(t) = { 1, t > 0; 0, t < 0 }  ↔  (1/2)·δ(f) − j/(2πf)
17. Signum function:       sgn(t) = { 1, t > 0; −1, t < 0 }  ↔  −j/(πf)
18. Decaying exp (two-sided):  e^(−|t|)  ↔  2/(1 + (2πf)²)
19. Decaying exp (one-sided):  e^(−t)·u(t)  ↔  1/(1 + j2πf) = (1 − j2πf)/(1 + (2πf)²)
20. Gaussian function:     e^(−πt²)  ↔  e^(−πf²)
21. Lambda function:       Λ{t} = { 1 − t, 0 ≤ t ≤ 1; 1 + t, −1 ≤ t ≤ 0 }  ↔  sinc²{f}
22. Repeated function:     rep_T{g(t)} = g(t) ∗ rep_T{δ(t)}  ↔  (1/T)·comb_(1/T){G(f)}
23. Sampled function:      comb_T{g(t)} = g(t)·rep_T{δ(t)}  ↔  (1/T)·rep_(1/T){G(f)}
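Entry 14 of the table (rect ↔ sinc) can be spot-checked by approximating the defining integral of entry 1 with a Riemann sum. A numerical sketch (the integration span, step size and test frequencies below are arbitrary choices):

```python
import numpy as np

def ft_numeric(g, f, t_span=4.0, dt=1e-4):
    """Riemann-sum approximation of G(f) = integral of g(t)*exp(-j*2*pi*f*t) dt."""
    t = np.arange(-t_span, t_span, dt)
    return np.sum(g(t) * np.exp(-2j * np.pi * f * t)) * dt

rect = lambda t: np.where(np.abs(t) < 0.5, 1.0, 0.0)

for f in [0.0, 0.3, 1.0, 2.5]:
    G = ft_numeric(rect, f)
    print(f, G.real, np.sinc(f))  # numpy's sinc(f) = sin(pi*f)/(pi*f)
```

The real parts agree to roughly the step size dt, and the imaginary part vanishes by the symmetry of rect.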