Handout 12: Error Probability Analysis of Binary Detection

ENGG 2310-B: Principles of Communication Systems, 2017-18 First Term
Handout 12: Error Probability Analysis of Binary Detection
Instructor: Wing-Kin Ma    November 4, 2017

Suggested Reading: Chapter 8 of Simon Haykin and Michael Moher, Communication Systems (5th Edition), Wiley & Sons Ltd.; or Chapter 3 of B. P. Lathi and Z. Ding, Modern Digital and Analog Communication Systems (4th Edition), Oxford University Press.

In the previous handout, we were faced with a binary detection problem. Specifically, in Section 3 of the last handout, "Detection in Noise," a concept called the correlation receiver was introduced for the detection of 2-ary PAM in the presence of noise. There, the correlation receiver first yields a signal sample

    y_0 = a_0 E_g + v_0,    (1)

where a_0 is the transmitted (or true) symbol at the 0th symbol interval, E_g is the pulse energy, and v_0 is noise. The receiver then detects a_0 by the decision rule

    \hat{a}_0 = \begin{cases} -1, & \text{if } y_0 \le 0 \\ +1, & \text{if } y_0 > 0. \end{cases}    (2)

The question that arises is how frequently the correlation receiver makes an incorrect decision in the presence of noise. This handout addresses this question by probabilistic performance analysis. Note that in this particular handout, probability concepts and properties are used extensively. Do not hesitate to ask if you have any questions.

1 Formulation

We use random variables and apply probability models to characterize (1). The formulation is described as follows. Let y be a random variable which characterizes y_0 in (1). The signal model for y is formulated as

    y = \begin{cases} +A + \nu, & a = +1 \\ -A + \nu, & a = -1, \end{cases}    (3)

where A > 0 is a constant, \nu is a random variable characterizing the noise term v_0 in (1), and a is a discrete random variable taking the value of either +1 or -1. Moreover, we make the following two probabilistic assumptions.

1. a is equiprobable; i.e., Pr(a = +1) = Pr(a = -1) = 1/2.

2. \nu follows a Gaussian distribution with mean zero and variance \sigma^2; i.e., the probability density function of \nu is given by

    p_\nu(v) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-v^2/(2\sigma^2)}.    (4)

The first assumption seems reasonable, since the distribution of 0's and 1's in binary data should usually be quite uniform (like tossing a coin). The second assumption, which takes the noise to be Gaussian, may not be as obvious to you. Simply stated, many of the noises encountered in communications are found to be Gaussian, or can be well approximated as Gaussian.
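To make the model concrete, here is a minimal Python sketch (mine, not the handout's; the function name sample_model and all parameter values are illustrative choices) that draws samples of y under the two assumptions above:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_model(A, sigma, n):
    """Return n samples of y = a*A + nu from Eq. (3), plus the true symbols a."""
    a = rng.choice([-1.0, 1.0], size=n)    # Assumption 1: equiprobable symbols
    nu = rng.normal(0.0, sigma, size=n)    # Assumption 2: zero-mean Gaussian noise, variance sigma^2
    return a * A + nu, a

y, a = sample_model(A=1.0, sigma=0.5, n=5)
print(y)
print(a)
```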

Let \hat{a} be a decision on the random variable a based on the observation y. The decision rule is to decide \hat{a} = +1 if y > 0, and \hat{a} = -1 if y \le 0, which is the same as (2). Our task is to find the error probability Pr(\hat{a} \ne a). To be specific, the probability Pr(\hat{a} \ne a) is the symbol error probability as well as the bit error probability. Before we proceed, we should note that the variance \sigma^2 physically represents the noise power. Thus, the quantity A^2/\sigma^2 describes the signal-to-noise ratio (SNR) of the model in (3).

2 Error Probability Analysis

We divide the error probability analysis into three steps.

Step 1

Consider the hypothesis a = -1. We aim at deriving the conditional probability Pr(\hat{a} = +1 | a = -1); i.e., the probability of making the decision \hat{a} = +1, given the event that the true symbol is a = -1. The expression is as follows:

    Pr(\hat{a} = +1 | a = -1) = Pr(y > 0 | a = -1)    (5a)
      = Pr(-A + \nu > 0)    (5b)
      = Pr(\nu > A)    (5c)
      = \int_A^\infty p_\nu(v)\, dv    (5d)
      = \int_A^\infty \frac{1}{\sqrt{2\pi}\,\sigma} e^{-v^2/(2\sigma^2)}\, dv.    (5e)

Note that (5b) is obtained by applying the a = -1 case of (3). By the change of variable z = v/\sigma, we can simplify (5e) to

    Pr(\hat{a} = +1 | a = -1) = \int_{A/\sigma}^\infty \frac{1}{\sqrt{2\pi}} e^{-z^2/2}\, dz.    (6)

Unfortunately, the integral on the right-hand side of (6) does not admit a simple closed-form expression. However, it can be computed numerically. For convenience, let

    Q(x) = \int_x^\infty \frac{1}{\sqrt{2\pi}} e^{-z^2/2}\, dz.    (7)

The function Q(x) is known as the Q-function. The Q-function is frequently encountered in digital communications, and software like MATLAB has specific functions for evaluating its numerical values. Eq. (6) can then be written as

    Pr(\hat{a} = +1 | a = -1) = Q(A/\sigma).    (8)
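As a concrete illustration of how (7) can be computed numerically, here is a small Python sketch of mine. It relies on the standard identity Q(x) = (1/2) erfc(x/\sqrt{2}), with erfc taken from SciPy (in MATLAB, the built-in erfc serves the same purpose):

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    """Q-function of Eq. (7), via the identity Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / np.sqrt(2.0))

# Sanity checks: Q(0) = 0.5, and Q(3) is roughly 1.35e-3.
print(Q(0.0))
print(Q(3.0))
```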

Step 2

Consider the hypothesis a = +1. Following the same spirit as in Step 1, we examine the conditional probability Pr(\hat{a} = -1 | a = +1):

    Pr(\hat{a} = -1 | a = +1) = Pr(y \le 0 | a = +1)    (9a)
      = Pr(A + \nu \le 0)    (9b)
      = Pr(\nu \le -A)    (9c)
      = \int_{-\infty}^{-A} \frac{1}{\sqrt{2\pi}\,\sigma} e^{-v^2/(2\sigma^2)}\, dv.    (9d)

By the change of variable w = -v/\sigma, Eq. (9d) is re-expressed as

    Pr(\hat{a} = -1 | a = +1) = \int_{A/\sigma}^\infty \frac{1}{\sqrt{2\pi}} e^{-w^2/2}\, dw = Q(A/\sigma).    (10)

Step 3

We are now ready to deal with the error probability Pr(\hat{a} \ne a):

    Pr(\hat{a} \ne a) = Pr(\hat{a} \ne a, a = +1) + Pr(\hat{a} \ne a, a = -1)    (11a)
      = Pr(\hat{a} = -1, a = +1) + Pr(\hat{a} = +1, a = -1)    (11b)
      = Pr(\hat{a} = -1 | a = +1) Pr(a = +1) + Pr(\hat{a} = +1 | a = -1) Pr(a = -1)    (11c)
      = Q(A/\sigma) \cdot \tfrac{1}{2} + Q(A/\sigma) \cdot \tfrac{1}{2}    (11d)
      = Q(A/\sigma),    (11e)

where (11c) is due to Bayes' rule, and (11d) is obtained by substituting (8), (10) and Pr(a = +1) = Pr(a = -1) = 1/2 into (11c). To conclude, the error probability is

    Pr(\hat{a} \ne a) = Q(A/\sigma).    (12)

Figure 1 shows the (numerically computed) error probability with respect to the SNR A^2/\sigma^2. It can be seen that the error probability decreases very significantly with the SNR.

[Figure 1: Error probability versus the SNR. The plot shows the symbol error probability on a logarithmic scale, from 10^0 down to 10^-8, against the SNR in dB, from 0 to 15 dB.]
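Result (12) is also easy to verify empirically. The sketch below (my own illustration, not part of the handout; it reuses the erfc identity from the previous snippet) simulates the model, applies the decision rule, and compares the empirical error rate with Q(A/\sigma):

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(1)
A, sigma, n = 1.0, 0.5, 1_000_000           # SNR = A^2/sigma^2 = 4 (about 6 dB)

a = rng.choice([-1.0, 1.0], size=n)         # equiprobable symbols, Assumption 1
y = a * A + rng.normal(0.0, sigma, size=n)  # signal model, Eq. (3)
a_hat = np.where(y > 0.0, 1.0, -1.0)        # decision rule: +1 if y > 0, else -1

empirical = np.mean(a_hat != a)
theory = 0.5 * erfc((A / sigma) / np.sqrt(2.0))  # Q(A/sigma), Eq. (12)
print(empirical, theory)                    # both should be near 0.0228
```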

3 Further Analysis and Implication

(Note that this is an advanced topic.) While we can evaluate the error probability by numerically computing the Q-function, we also want to analyze it. In particular, can we prove in what way the error probability decreases with the SNR A^2/\sigma^2?

We answer the above question by proving an upper bound on the error probability. Let us consider the Q-function in (7). Let u(z) be the unit step function, i.e., u(z) = 1 if z \ge 0, and u(z) = 0 otherwise. The Q-function can be equivalently expressed as

    Q(x) = \int_{-\infty}^\infty u(z - x) \frac{1}{\sqrt{2\pi}} e^{-z^2/2}\, dz.    (13)

The key trick we apply is that, for any \alpha \ge 0, it holds true that

    u(z) \le e^{\alpha z} for any z.    (14)

Substituting (14) into (13) yields

    Q(x) \le \int_{-\infty}^\infty e^{\alpha(z - x)} \frac{1}{\sqrt{2\pi}} e^{-z^2/2}\, dz    (15a)
      = e^{-\alpha x} \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}} e^{\alpha z - z^2/2}\, dz    (15b)
      = e^{-\alpha x} e^{\alpha^2/2} \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}} e^{-(z - \alpha)^2/2}\, dz    (15c)
      = e^{-\alpha x + \alpha^2/2} \underbrace{\int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}} e^{-(z - \alpha)^2/2}\, dz}_{= 1}    (15d)
      = e^{-x^2/2 + (\alpha - x)^2/2},    (15e)

where the integral in (15d) is the area under a Gaussian probability density function (with mean \alpha and variance 1), which is one. Next, we wish to choose \alpha \ge 0 such that (15e) is smallest. It is seen that if x \ge 0, then \alpha = x leads to the smallest value of (15e). Hence, we obtain the following upper bound on the Q-function:

    Q(x) \le e^{-x^2/2}, for any x \ge 0.    (16)

The above upper bound is known as the Chernoff bound of the Q-function. Now, let us apply the Chernoff bound to the error probability:

    Pr(\hat{a} \ne a) \le e^{-A^2/(2\sigma^2)}.    (17)

We see something appealing in the above equation: the error probability decreases at least exponentially with respect to the SNR, which is very good!
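To get a feel for how tight (16) is, one can tabulate Q(x) against e^{-x^2/2}; a short sketch of mine, again using the SciPy-based Q-function:

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

for x in [0.0, 1.0, 2.0, 3.0, 4.0]:
    bound = np.exp(-x**2 / 2.0)             # Chernoff bound, Eq. (16)
    print(f"x = {x:.0f}:  Q(x) = {Q(x):.3e},  bound = {bound:.3e}")
```

The bound overestimates Q(x) by a modest multiplicative factor, but it decays with the same exponent, which is precisely the point of the analysis: both the bound and the true error probability fall off exponentially in the SNR.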