EE4304 C-term 2007: Lecture 17 Supplemental Slides


EE4304 C-term 2007: Lecture 17 Supplemental Slides
D. Richard Brown III
Worcester Polytechnic Institute, Department of Electrical and Computer Engineering
February 5, 2007

Geometric Representation: Optimal Receiver

Geometric representation of a general digital communication system alphabet:

    s_1(t) = c_{1,1} ψ_1(t) + ⋯ + c_{1,N} ψ_N(t)
    ⋮
    s_M(t) = c_{M,1} ψ_1(t) + ⋯ + c_{M,N} ψ_N(t)

The optimal receiver uses an N-dimensional bank of matched filters: r(t) is passed through filters matched to ψ_1(t), …, ψ_N(t), each sampled at t = T to produce the outputs V_1, …, V_N, which feed a decision device that produces the symbol/bit estimates. This receiver maximizes the SNR at time t = T in all N dimensions.

Analysis of the Matched Filter Bank Receiver - 1

What does V_n look like?

    V_n = ∫_0^T r(t) ψ_n(t) dt
        = α ∫_0^T s_m(t) ψ_n(t) dt + ∫_0^T N(t) ψ_n(t) dt
        = α c_{m,n} + w_n

where the first integral is just the coordinate c_{m,n} and we call the second integral w_n. Clearly V_n is a random variable. In fact,

- c_{m,n} is random since we don't know which symbol was sent (m is unknown)
- w_n is random since the filtered, sampled noise is random

Vector notation:

    V = [V_1, …, V_N]^T = α [c_{m,1}, …, c_{m,N}]^T + [w_1, …, w_N]^T = α c_m + W
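
To make this concrete, here is a minimal numerical sketch (not from the original slides) of the matched filter bank. It assumes two simple orthonormal basis functions on [0, T] and hypothetical coordinates c_m, adds white Gaussian noise with PSD N_0/2, and computes V_n = ∫_0^T r(t) ψ_n(t) dt by numerical integration:

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed setup: two orthonormal basis functions on [0, T].
    T, fs = 1.0, 10_000                               # symbol interval, sampling rate
    t = np.arange(0, T, 1 / fs)
    dt = 1 / fs
    psi = np.vstack([
        np.ones_like(t) / np.sqrt(T),                 # psi_1(t): constant
        np.where(t < T / 2, 1.0, -1.0) / np.sqrt(T),  # psi_2(t): sign flip at T/2
    ])

    alpha = 1.0
    c_m = np.array([2.0, -1.0])                       # hypothetical symbol coordinates
    s_m = c_m @ psi                                   # s_m(t) = sum_n c_{m,n} psi_n(t)

    # Discrete-time white noise with PSD N0/2: per-sample variance N0*fs/2
    # makes the numerical integral below behave like w_n.
    N0 = 0.5
    r = alpha * s_m + rng.normal(0.0, np.sqrt(N0 * fs / 2), size=t.shape)

    V = psi @ r * dt                                  # V_n = int_0^T r(t) psi_n(t) dt
    print(V)                                          # ~ alpha * c_m plus w_n ~ N(0, N0/2)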

Analysis of the Matched Filter Bank Receiver - 2

What can we say about the noise vector W?

Each w_n is Gaussian distributed (AWGN → LTI filter → sample → GRV).

E[w_n] = 0 (AWGN → LTI filter → zero-mean GRP → zero-mean GRV).

Variance:

    var[w_n] = E[w_n^2] = E[ ( ∫_0^T N(t) ψ_n(t) dt )^2 ]
             = ∫_0^T ∫_0^T E[N(t_1) N(t_2)] ψ_n(t_1) ψ_n(t_2) dt_1 dt_2
             = (N_0/2) ∫_0^T ψ_n^2(t) dt       (E[N(t_1)N(t_2)] = (N_0/2) δ(t_1 - t_2) since the noise is white)
             = N_0/2 for all n, since the basis functions are normalized.

Analysis of the Matched Filter Bank Receiver - 3

We know everything about w_n. Is this enough? No. We also need to know how w_n and w_m (n ≠ m) are related. Since w_n and w_m are jointly Gaussian, we only need to know their covariance:

    cov[w_n, w_m] = E[w_n w_m]
                  = E[ ( ∫_0^T N(t) ψ_n(t) dt ) ( ∫_0^T N(t) ψ_m(t) dt ) ]
                  = ∫_0^T ∫_0^T E[N(t_1) N(t_2)] ψ_n(t_1) ψ_m(t_2) dt_1 dt_2
                  = (N_0/2) ∫_0^T ψ_n(t) ψ_m(t) dt      (white noise again)
                  = 0 for all n ≠ m, since the basis functions are orthogonal.

We now know everything about the noise vector W. The JPDF can be written as the product of the marginals:

    f_W(x) = ∏_{n=1}^N (π N_0)^{-1/2} exp{ -x_n^2 / N_0 }
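
A quick Monte Carlo check of these claims, continuing the sketch above (same assumed basis functions): estimate the mean and covariance of W over many noise realizations and compare against 0 and (N_0/2) I.

    import numpy as np

    rng = np.random.default_rng(1)
    T, fs = 1.0, 1_000
    t = np.arange(0, T, 1 / fs)
    dt = 1 / fs
    psi = np.vstack([
        np.ones_like(t) / np.sqrt(T),
        np.where(t < T / 2, 1.0, -1.0) / np.sqrt(T),
    ])

    N0, trials = 0.5, 10_000
    # Each row of W is one realization of the noise vector (w_1, w_2).
    noise = rng.normal(0.0, np.sqrt(N0 * fs / 2), size=(trials, t.size))
    W = noise @ psi.T * dt

    print(W.mean(axis=0))   # ~ [0, 0]      (zero mean)
    print(np.cov(W.T))      # ~ (N0/2) * I  (variance N0/2, uncorrelated entries)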

Optimal Decisions: Problem Setup

[Block diagram as before: r(t) enters the matched filter bank ψ_1(t), …, ψ_N(t); each branch is sampled at t = T to give V_1, …, V_N; a decision device outputs the symbol/bit estimates.]

Suppose a symbol was transmitted and we observe the vector v at the output of the sampled matched filter bank. Given the outcome (a.k.a. "observation") V = v, we want to determine the most likely transmitted message s_m(t), m ∈ {1, …, M}. In other words, we will use the following "Maximum Likelihood Decision Rule":

MLDR: Decide message m was sent if

    P(message m sent | V = v) ≥ P(message k sent | V = v)   ∀ k ≠ m.

Analysis - Part 1

We can simplify the decision rule by applying the appropriate form of Bayes' rule:

    P(A | X = x) = P(A) f_X(x | A) / f_X(x)

where A is an event, X is a random variable, and x is a particular outcome. We can apply this rule to both sides of our inequality. The MLDR can then be rewritten as: decide message m was sent if

    p_m f_V(v | message m sent) ≥ p_k f_V(v | message k sent)   ∀ k ≠ m

where p_m is the (a priori) probability that message m was transmitted.

To proceed further, we need an expression for f_V(v | message m sent). What do we know? Given message m sent, V = α c_m + W. Here α c_m is just a constant and W is a jointly Gaussian N-dimensional random vector. Each term in W is zero mean with a variance of N_0/2 and is independent of all other terms.

Analysis - Part 2

Hence, conditioned on message m transmitted, V is a jointly Gaussian random vector. It isn't too difficult to show that

    E[V_n] = α c_{m,n},   var[V_n] = N_0/2,   cov[V_n, V_k] = 0 if k ≠ n.

Hence, the (conditional) joint pdf of V can be written as the product of the (conditional) marginal pdfs:

    f_V(v | message m sent) = ∏_{n=1}^N f_{V_n}(v_n | message m sent)
                            = ∏_{n=1}^N (2π N_0/2)^{-1/2} exp[ -(v_n - α c_{m,n})^2 / (2 N_0/2) ]
                            = (π N_0)^{-N/2} exp[ -(1/N_0) Σ_{n=1}^N (v_n - α c_{m,n})^2 ]

Analysis - Part 3

Plug this result back into our decision rule (canceling the common terms)... Decide message m was sent if

    p_m exp[ -(1/N_0) Σ_{n=1}^N (v_n - α c_{m,n})^2 ] ≥ p_k exp[ -(1/N_0) Σ_{n=1}^N (v_n - α c_{k,n})^2 ]   ∀ k ≠ m

Take the natural log of both sides and rearrange... Decide message m was sent if

    Σ_{n=1}^N (v_n - α c_{m,n})^2 ≤ N_0 (ln(p_m) - ln(p_k)) + Σ_{n=1}^N (v_n - α c_{k,n})^2   ∀ k ≠ m

Define d(x, y) = √( (x_1 - y_1)^2 + ⋯ + (x_N - y_N)^2 ) as the Euclidean distance between the vectors x and y. Our decision rule then becomes... Decide message m was sent if

    d^2(v, α c_m) ≤ d^2(v, α c_k) + N_0 ln(p_m / p_k)   ∀ k ≠ m
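
The final rule is straightforward to implement. Below is a minimal sketch (the function name and arguments are ours, not from the slides) that picks the m minimizing d^2(v, α c_m) - N_0 ln(p_m), which is equivalent to the rule above:

    import numpy as np

    def decide(v, C, alpha=1.0, N0=1.0, priors=None):
        # Index of the most likely transmitted symbol given observation v.
        # Implements: decide m if d^2(v, alpha*c_m) <= d^2(v, alpha*c_k) + N0*ln(p_m/p_k)
        # for all k != m, i.e., m = argmin_m [d^2(v, alpha*c_m) - N0*ln(p_m)].
        C = np.atleast_2d(np.asarray(C, dtype=float))   # M x N constellation coordinates
        M = C.shape[0]
        p = np.full(M, 1.0 / M) if priors is None else np.asarray(priors, dtype=float)
        d2 = np.sum((np.asarray(v, dtype=float) - alpha * C) ** 2, axis=1)
        return int(np.argmin(d2 - N0 * np.log(p)))

    # Example: 4-PAM constellation (a = 1). Equal priors -> minimum distance.
    C = [[-3.0], [-1.0], [1.0], [3.0]]
    print(decide([0.4], C))                                            # 2 (nearest point is +1)
    print(decide([0.4], C, N0=10.0, priors=[0.85, 0.05, 0.05, 0.05]))  # 0 (strong prior flips it)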

Special Cases and Interpretation

Special case: If all symbols are transmitted with equal probability (p_m = 1/M for all m), then ln(p_m/p_k) = 0 and the MLDR can be written as: decide message m was sent if

    d(v, α c_m) ≤ d(v, α c_k)   ∀ k ≠ m

In other words, given the observation v, the most likely transmitted symbol is the one with coordinates nearest to v. This is sometimes called the minimum distance decision rule.

If the (a priori) transmission probabilities are not all identical, the decision rule is no longer minimum distance. The term N_0 ln(p_m/p_k) increases the size of the decision regions for the more likely symbols and decreases the size of the decision regions for the less likely symbols.

Error Probability Analysis - Basic Procedure Pt. 1

1. Draw the geometric representation of the communication system (this may involve Gram-Schmidt orthogonalization). Often, this is given.

2. Determine the optimum decision regions/thresholds. In the case when all symbols are equally likely and the background noise is AWGN, this is just the minimum distance rule.

3. (Optional) Take advantage of the fact that the error probability analysis is invariant to rotation and translation of the constellation points. Move the constellation around to simplify things if it makes sense.

4. Write an expression for P(error). An error in this case is the event that we decide on a different symbol than the one sent. List all of the ways that an error can occur in terms of the matched filter output vector, i.e.,

    P(error) = P(symbol 1 sent, V ∉ Ω_1) + ⋯ + P(symbol M sent, V ∉ Ω_M)

where Ω_m denotes the decision region for symbol m.

continued...

Error Probability Analysis - Basic Procedure Pt. 2

5. Rewrite the expression for P(error) in terms of the noise vector W, i.e.,

    P(error) = P(symbol 1 sent, W ∈ Λ_1) + ⋯ + P(symbol M sent, W ∈ Λ_M)

where Λ_m is the set of noise vectors that cause an error when symbol m is sent.

6. Reduce the general P(error) expression as much as possible to Q-functions using the assumptions:
   (a) the symbols and the noise are independent
   (b) the symbols are equiprobable (this is our default assumption)
   (c) the noise is Gaussian, independent, and symmetric in each dimension

7. Put the final error probability expression in terms of per-bit SNR, i.e., E_{b,av}/N_0. Make sure you use the original constellation to compute E_{b,av} and not the rotated/translated constellation.

Error Probability Analysis - Final Comments

To simplify notation, we will assume α = 1 from now on.

Energy in the mth symbol:

    E_{s_m} = ∫_0^T s_m^2(t) dt = ∫_0^T ( Σ_{n=1}^N c_{m,n} ψ_n(t) )^2 dt
            = Σ_{n=1}^N Σ_{k=1}^N c_{m,n} c_{m,k} ∫_0^T ψ_n(t) ψ_k(t) dt

but this last integral equals one only if k = n (and zero otherwise). Hence

    E_{s_m} = Σ_{n=1}^N c_{m,n}^2 = squared magnitude of the constellation point.

Average energy per symbol:

    E_{s,av} = Σ_{m=1}^M p_m E_{s_m} = (1/M) Σ_{m=1}^M E_{s_m} if equally likely.

Average energy per bit:

    E_{b,av} = E_{s,av} / log_2(M)
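
As a tiny worked example (assuming the 4-PAM constellation analyzed on the next slide, with amplitude scale a = 1):

    import numpy as np

    a = 1.0                                    # amplitude scale (assumption)
    C = np.array([[-3*a], [-a], [a], [3*a]])   # 4-PAM constellation coordinates
    E_s = np.sum(C**2, axis=1)                 # E_{s_m} = sum_n c_{m,n}^2 -> [9, 1, 1, 9]*a^2
    E_s_av = E_s.mean()                        # equally likely symbols: 5*a^2
    E_b_av = E_s_av / np.log2(len(C))          # log2(4) = 2 bits/symbol: 2.5*a^2
    print(E_s, E_s_av, E_b_av)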

4-PAM Probability of Symbol Error Analysis

Suppose

    s(t) = 1/√T for 0 ≤ t ≤ T, and 0 otherwise,

and our symbols are given as

    s_1(t) = -3a s(t),  s_2(t) = -a s(t),  s_3(t) = a s(t),  s_4(t) = 3a s(t).

Step 1 is pretty easy: ψ_1(t) = s(t), so the constellation points lie at -3a, -a, a, and 3a on the ψ_1(t) axis.

Assuming the symbols are all transmitted with equal probability, we can easily draw the decision regions in this one-dimensional system (thresholds at the midpoints -2a, 0, and 2a). Rest of analysis on blackboard...

M-PAM Symbol Error Probability Analysis

We now know that, for a 4-PAM communication system, the probability of an erroneous symbol decision can be expressed as

    P(error) = (3/2) Q( √( 4 E_{b,av} / (5 N_0) ) )

The same analysis can be repeated to show that, for general M-PAM, the probability of symbol error can be expressed as

    P(error) = (2(M-1)/M) Q( √( 6 log_2(M) E_{b,av} / ((M^2 - 1) N_0) ) )
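
Here is a sketch that evaluates the M-PAM formula and cross-checks it against a Monte Carlo simulation of a minimum distance receiver (the unit level spacing and helper names are our assumptions):

    import numpy as np
    from scipy.special import erfc

    def Q(x):
        # Gaussian tail function Q(x)
        return 0.5 * erfc(x / np.sqrt(2))

    def pam_ser(M, ebn0_db):
        # M-PAM SER from the formula above; ebn0_db is E_{b,av}/N_0 in dB
        g = 10 ** (ebn0_db / 10)
        return 2 * (M - 1) / M * Q(np.sqrt(6 * np.log2(M) * g / (M ** 2 - 1)))

    # Monte Carlo cross-check: 4-PAM at 10 dB per-bit SNR, minimum distance decisions.
    rng = np.random.default_rng(0)
    M, ebn0_db, n = 4, 10.0, 1_000_000
    levels = np.arange(-(M - 1), M, 2, dtype=float)    # -3, -1, +1, +3 (a = 1)
    Eb = np.mean(levels ** 2) / np.log2(M)             # E_{b,av} = 2.5 for these levels
    N0 = Eb / 10 ** (ebn0_db / 10)
    tx = rng.integers(M, size=n)
    v = levels[tx] + rng.normal(0.0, np.sqrt(N0 / 2), size=n)   # w ~ N(0, N0/2)
    rx = np.argmin(np.abs(v[:, None] - levels[None, :]), axis=1)
    print(pam_ser(M, ebn0_db), np.mean(rx != tx))      # both ~ 3.5e-3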

General Remarks

1. The quantity E_{b,av}/N_0 is commonly called the SNR-per-bit or the per-bit SNR. This is the only fair way to compare the error probability of two communication systems as a function of the transmit energy.

2. The probability of symbol error is commonly called the symbol error rate and is abbreviated as SER.

3. How does M affect the communication system?
   (a) Increasing M means, in general, that you transmit more bits per symbol (good) but also means, in general, that the SER will increase if the SNR-per-bit is held fixed (bad).
   (b) For a given modulation format, e.g. PAM, the bandwidth required at a fixed bit rate is roughly inversely proportional to log_2(M). For example, an 8-PAM communication system requires about 1/3 the bandwidth of a 2-PAM communication system with the same overall bit rate. This is a fundamental tradeoff in designing communication systems: bandwidth efficiency (ratio of data rate in bits/sec to required bandwidth in Hz) versus SER versus SNR-per-bit.

Tradeoff Example

Suppose we wanted to transmit 10^6 bits/sec at E_{b,av}/N_0 = 10 dB. We can use 2-PAM, 4-PAM, or 8-PAM. What are the tradeoffs?

                                              2-PAM     4-PAM     8-PAM
    required bandwidth                        2B        B         2B/3
    SER                                       5×10^-6   3×10^-3   9×10^-2
    E_{b,av}/N_0 required for SER = 5×10^-6   10 dB     14.5 dB   19 dB

8-PAM requires 1/3 the bandwidth but requires 9 dB more SNR-per-bit to get the same symbol error rate. Bottom line: the optimum choice depends on your application.
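
Using the pam_ser helper from the previous sketch, the bandwidth and SER rows can be approximately reproduced; the slide's SER entries are rounded, while the exact formula gives slightly different values:

    # Assumes pam_ser() and numpy (as np) from the previous sketch are in scope.
    for M in (2, 4, 8):
        rel_bw = 2 / np.log2(M)        # bandwidth relative to B at a fixed bit rate
        print(f"{M}-PAM: bandwidth {rel_bw:.2f}B, SER {pam_ser(M, 10.0):.1e}")
    # Prints roughly 3.9e-06, 3.5e-03, and 8.0e-02 for 2-, 4-, and 8-PAM.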

Plotting SER versus Per-Bit SNR

Almost all SER vs. per-bit-SNR plots show SER on a log scale versus E_{b,av}/N_0 in dB.

[Figure: SER (log scale, 10^0 down to 10^-6) versus per-bit SNR (0 to 30 dB), with one curve for each of M = 2, 4, 8, 16, 32.]
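
A short matplotlib sketch that generates a comparable family of curves from the M-PAM SER formula above:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.special import erfc

    def Q(x):
        return 0.5 * erfc(x / np.sqrt(2))

    ebn0_db = np.linspace(0, 30, 301)
    g = 10 ** (ebn0_db / 10)
    for M in (2, 4, 8, 16, 32):
        ser = 2 * (M - 1) / M * Q(np.sqrt(6 * np.log2(M) * g / (M ** 2 - 1)))
        plt.semilogy(ebn0_db, ser, label=f"M={M}")
    plt.ylim(1e-6, 1)
    plt.xlabel("per-bit SNR (dB)")
    plt.ylabel("SER")
    plt.grid(True, which="both")
    plt.legend()
    plt.show()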

Assigning Bits to Symbols

Basic idea: some symbol errors are more common than others. We want the most common symbol errors to cause only one bit error. A Gray code works for one-dimensional constellations. Higher-dimensional constellations are more difficult; in many cases it may not be possible to have neighboring constellation points differ by only one bit.
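
For one-dimensional constellations, the standard binary-reflected Gray code does the job; a minimal sketch:

    def gray_labels(M):
        # Binary-reflected Gray code for an M-ary (M a power of 2) one-dimensional
        # constellation: adjacent levels differ in exactly one bit.
        k = M.bit_length() - 1               # bits per symbol
        return [format(i ^ (i >> 1), f"0{k}b") for i in range(M)]

    print(gray_labels(4))   # ['00', '01', '11', '10'] for levels -3a, -a, +a, +3a
    print(gray_labels(8))   # ['000', '001', '011', '010', '110', '111', '101', '100']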

Bit-Error-Rate Versus Symbol-Error-Rate

When M = 2, BER = SER. For M > 2, an exact relationship is difficult to derive (and depends on how you assigned the bits to the symbols). Since each symbol carries log_2(M) bits, a symbol error results in at least one bit error and at most log_2(M) bit errors. Hence, the BER is bounded by

    SER / log_2(M) ≤ BER ≤ SER.

If the bits are assigned to symbols in a smart way, then most symbol errors result in only one bit error and BER ≈ SER / log_2(M).
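
A Monte Carlo sketch illustrating the approximation (reusing the 4-PAM setup and Gray labels above, both our assumptions): the measured BER lands near SER/log_2(M).

    import numpy as np

    rng = np.random.default_rng(2)
    M, n = 4, 1_000_000
    levels = np.array([-3.0, -1.0, 1.0, 3.0])
    labels = np.array([0b00, 0b01, 0b11, 0b10])   # Gray bit labels per level
    Eb = np.mean(levels ** 2) / np.log2(M)
    N0 = Eb / 10 ** (10.0 / 10)                   # 10 dB per-bit SNR
    tx = rng.integers(M, size=n)
    v = levels[tx] + rng.normal(0.0, np.sqrt(N0 / 2), size=n)
    rx = np.argmin(np.abs(v[:, None] - levels[None, :]), axis=1)

    ser = np.mean(rx != tx)
    popcount = np.array([0, 1, 1, 2])             # number of set bits in 0..3
    ber = popcount[labels[tx] ^ labels[rx]].mean() / np.log2(M)
    print(ser, ber, ser / np.log2(M))             # BER ~ SER/2 with Gray labels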