6.02 Spring 2011 Lecture #6: Bad Things Happen to Good Signals. Signal-to-Noise Ratio (SNR); Definitions of Mean, Power, and Energy.


Bad Things Happen to Good Signals

Noise, broadly construed, is any change to the signal from its expected value, x[n]*h[n], when it arrives at the receiver. We'll look at additive noise and assume the noise in our systems is independent in value and timing from the nominal signal, y_nf[n], and that the noise can be described by a random variable with a known probability distribution.

Today: mean, power, energy, SNR; metrics for random processes; the normal PDF and CDF; calculating p(error) and BER vs. SNR.

We'll model the received signal as

    y[n] = y_nf[n] + noise[n]

where y_nf[n] = x[n]*h[n] is the noise-free signal at the receiver, noise[n] is independent random noise, and y[n] is the noisy signal the receiver must process.

Definition of Mean, Power, Energy

Some interesting statistical metrics for x[n]:

Mean:   \mu_x = \frac{1}{N}\sum_{n=0}^{N-1} x[n]
Power:  P_x = \frac{1}{N}\sum_{n=0}^{N-1} x[n]^2
Energy: E_x = \sum_{n=0}^{N-1} x[n]^2

In analyzing our systems, we often use metrics where the mean has been factored out:

P_x = \frac{1}{N}\sum_{n=0}^{N-1} (x[n]-\mu_x)^2,   E_x = \sum_{n=0}^{N-1} (x[n]-\mu_x)^2

(These slides are derived from 6.02 slides by Mike Perrott.)

Signal-to-Noise Ratio (SNR)

The signal-to-noise ratio (SNR) is useful in judging the impact of noise on system performance:

SNR = \frac{P_{signal}}{P_{noise}}

SNR is often measured in decibels (dB):

SNR (dB) = 10 \log_{10}\left(\frac{P_{signal}}{P_{noise}}\right)

A difference of 3 dB corresponds to a factor of 2 in the power ratio. [Table relating the power ratio X to 10 log10(X) in dB omitted.]
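As a hedged illustration (not part of the original slides), here is a minimal NumPy sketch of these metrics; the signal contents and noise level below are made-up assumptions:

    import numpy as np

    def mean_power_energy(x):
        """Return (mean, power, energy) of a discrete signal x[n]."""
        mu = np.mean(x)          # mu_x = (1/N) sum x[n]
        power = np.mean(x**2)    # P_x = (1/N) sum x[n]^2
        energy = np.sum(x**2)    # E_x = sum x[n]^2
        return mu, power, energy

    def snr_db(signal, noise):
        """SNR in dB, using mean-removed power (variance) for both."""
        return 10 * np.log10(np.var(signal) / np.var(noise))

    rng = np.random.default_rng(0)
    y_nf = rng.integers(0, 2, size=10_000).astype(float)  # hypothetical 0V/1V samples
    noise = rng.normal(0.0, 0.2, size=10_000)             # hypothetical Gaussian noise
    print(mean_power_energy(y_nf))
    print(snr_db(y_nf, noise))   # ~8 dB for these assumed values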

SNR Example

Changing the amplification factor (gain) A leads to different SNR values: a lower A gives a lower SNR, and signal quality degrades as SNR drops.

Analysis of Random Processes

Random processes, such as noise, take on different sequences for different trials. Think of trials as different measurement intervals from the same experimental setup (as in lab). For a given trial, we can apply our standard analysis tools and metrics: mean and power calculations, etc. When trying to analyze the ensemble (i.e., all trials) of possible outcomes, we find ourselves in need of new tools and metrics.

Stationary and Ergodic Random Processes

Stationary: statistical behavior is independent of shifts in time in a given trial. This implies noise[k] is statistically indistinguishable from noise[k+N].

Ergodic: statistical sampling can be performed at one sample time (i.e., n=k) across different trials, or across different sample times of the same trial, with no change in the measured result.

Experiment to See Statistical Distribution

Experiment: create histograms of sample values from trials of increasing lengths. The assumption of stationarity implies the histogram should converge to a shape known as a probability density function (PDF).
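As a small sketch of this experiment (the distribution, trial lengths, and bin count here are assumptions, not from the slides), we can watch a normalized histogram of Gaussian noise samples settle toward a fixed shape:

    import numpy as np

    rng = np.random.default_rng(1)
    for n in (100, 10_000, 1_000_000):
        trial = rng.normal(0.0, 1.0, size=n)   # one trial of noise samples
        # density=True scales the histogram so its total area is 1,
        # making it directly comparable to a PDF.
        counts, edges = np.histogram(trial, bins=50, range=(-4, 4), density=True)
        print(f"N={n:>9}: histogram peak ~ {counts.max():.3f} "
              f"(unit-normal PDF peak is 1/sqrt(2*pi) ~ 0.399)")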

Formalizing the PDF Concept

Define x as a random variable whose PDF has the same shape as the histogram we just obtained. Denote the PDF of x as f_x(x) and scale f_x(x) so that its overall area is 1:

\int_{-\infty}^{\infty} f_x(x)\,dx = 1

Formalizing Probability

The probability that the random variable x takes on a value in the range x_1 to x_2 is calculated from the PDF of x as

p(x_1 \le x \le x_2) = \int_{x_1}^{x_2} f_x(x)\,dx

Note that probability values are always in the range 0 to 1.

Example Probability Calculation

Consider f_x(x) = 0.5 for -1 \le x \le 1 (and 0 elsewhere); this shape is referred to as a uniform PDF. Verify that the overall area is 1:

\int_{-1}^{1} 0.5\,dx = 1

Probability that x takes on a value between 0.5 and 1:

p(0.5 \le x \le 1.0) = \int_{0.5}^{1} 0.5\,dx = 0.25

Examination of Sample Value Distribution

The assumption of ergodicity implies that the value occurring at a given time sample, noise[k], across many different trials has the same PDF as estimated in our previous experiment of many time samples and one trial. Thus we can model noise[k] using the random variable x.
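A quick numerical cross-check of the uniform-PDF example (a sketch, not from the slides; the sample count is arbitrary):

    import numpy as np
    from scipy.integrate import quad

    f_x = lambda x: 0.5 if -1.0 <= x <= 1.0 else 0.0

    area, _ = quad(f_x, -1, 1)    # total area under the PDF
    p, _ = quad(f_x, 0.5, 1.0)    # p(0.5 <= x <= 1.0)
    print(area)  # 1.0
    print(p)     # 0.25

    # Monte Carlo cross-check: fraction of uniform samples landing in [0.5, 1.0]
    samples = np.random.default_rng(2).uniform(-1, 1, size=1_000_000)
    print(np.mean((samples >= 0.5) & (samples <= 1.0)))  # ~0.25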

Probability Calculation

In a given trial, the probability that noise[k] takes on a value in the range x_1 to x_2 is computed as

p(x_1 \le noise[k] \le x_2) = \int_{x_1}^{x_2} f_x(x)\,dx

Mean and Variance

The mean of a random variable x, \mu_x, corresponds to its average value and is computed as

\mu_x = \int_{-\infty}^{\infty} x\, f_x(x)\,dx

The variance of a random variable x, \sigma_x^2, gives an indication of its variability and is computed as

\sigma_x^2 = \int_{-\infty}^{\infty} (x-\mu_x)^2 f_x(x)\,dx

Compare these with the power calculation for a signal.

Visualizing Mean and Variance

Changes in the mean shift the center of mass of the PDF; changes in the variance narrow or broaden the PDF (but its area is always equal to 1).

Example Mean and Variance Calculation

For the uniform PDF on [-1, 1] with f_x(x) = 0.5:

Mean: \mu_x = \int x\, f_x(x)\,dx = \int_{-1}^{1} \frac{x}{2}\,dx = \left.\frac{x^2}{4}\right|_{-1}^{1} = 0

Variance: \sigma_x^2 = \int (x-\mu_x)^2 f_x(x)\,dx = \int_{-1}^{1} \frac{x^2}{2}\,dx = \left.\frac{x^3}{6}\right|_{-1}^{1} = \frac{1}{3}
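These analytical results are easy to confirm by sampling (a sketch; the sample count is an arbitrary choice):

    import numpy as np

    samples = np.random.default_rng(3).uniform(-1, 1, size=1_000_000)
    print(samples.mean())  # ~0, matching mu_x = 0
    print(samples.var())   # ~0.333, matching sigma_x^2 = 1/3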

Noise on a Communication Channel

The net noise observed at the receiver is often the sum of many small, independent random contributions from the electronics and transmission medium. If these independent random variables have finite mean and variance, the Central Limit Theorem says their sum will be normally distributed. The figure on this slide showed histograms of the results of many trials of summing random samples drawn from [-1, 1] using two different distributions (triangular and uniform); both sums produce the same bell-shaped histogram.

The Normal Distribution

A normal or Gaussian distribution with mean \mu and variance \sigma^2 has a PDF described by

f_x(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}

The normal distribution with \mu = 0 and \sigma^2 = 1 is called the standard or unit normal.

Cumulative Distribution Function

When analyzing the effects of Gaussian noise, we'll often want to determine the probability that the noise is larger or smaller than a given value x_0:

p(x \le x_0) = \int_{-\infty}^{x_0} \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx = \Phi_{\mu,\sigma}(x_0)

where \Phi_{\mu,\sigma}(x) is the cumulative distribution function (CDF) for the normal distribution with mean \mu and variance \sigma^2. The probability that the noise is larger than x_0 is 1 - \Phi_{\mu,\sigma}(x_0). The CDF for the unit normal is usually written as just \Phi(x), and

\Phi_{\mu,\sigma}(x) = \Phi\left(\frac{x-\mu}{\sigma}\right)

CDF for Unit Normal PDF

Most math libraries don't provide \Phi(x), but they do have a related function, erf(x), the error function:

\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\,dt

so that

\Phi_{\mu,\sigma}(x) = \frac{1}{2}\left[1 + \mathrm{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right]

Limiting values: \lim_{x\to-\infty}\Phi(x) = 0, \Phi(0) = 0.5, and \lim_{x\to\infty}\Phi(x) = 1.

For Python hackers:

    from math import sqrt
    from scipy.special import erf

    def Phi(x, mu=0, sigma=1):
        # CDF of the normal distribution with mean mu and std dev sigma
        t = erf((x - mu) / (sigma * sqrt(2)))
        return 0.5 + 0.5 * t
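As a hedged sanity check (not in the original slides), the Phi helper above can be compared against scipy.stats.norm.cdf, which computes the same CDF:

    from scipy.stats import norm

    # scipy.stats.norm.cdf is the library version of Phi_{mu,sigma}(x)
    print(norm.cdf(0.0))                     # 0.5 = Phi(0)
    print(norm.cdf(1.0) - norm.cdf(-1.0))    # ~0.683, the one-sigma probability mass
    print(norm.cdf(0.5, loc=0, scale=0.25))  # Phi_{0,0.25}(0.5) ~ 0.977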
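And a small sketch of the Central Limit Theorem experiment described above (the trial count, number of summands, and source distributions are assumptions):

    import numpy as np

    rng = np.random.default_rng(4)
    trials, summands = 10_000, 100          # assumed sizes
    uniform_sums = rng.uniform(-1, 1, size=(trials, summands)).sum(axis=1)
    tri_sums = rng.triangular(-1, 0, 1, size=(trials, summands)).sum(axis=1)
    # Both sums should look normal: mean near 0, sample variance near
    # summands * var(one sample), regardless of the source distribution.
    for name, s in (("uniform", uniform_sums), ("triangular", tri_sums)):
        print(f"{name:>10}: mean ~ {s.mean():.2f}, var ~ {s.var():.1f}")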

Bit Error Rate

The bit error rate (BER), or perhaps more appropriately the bit error ratio, is the number of bits received in error divided by the total number of bits transferred. We can estimate the BER by calculating the probability that a bit will be incorrectly received due to noise.

Using our normal signaling strategy (0V for "0", 1V for "1") on a noise-free channel with no ISI, the samples at the receiver are either 0V or 1V. Assuming that "0"s and "1"s are equally probable in the transmit stream, the number of 0V samples is approximately the same as the number of 1V samples, so the mean and power of the noise-free received signal are

\mu_{y_{nf}} = \frac{1}{N}\sum_{n=0}^{N-1} y_{nf}[n] = \frac{1}{2}

P_{y_{nf}} = \frac{1}{N}\sum_{n=0}^{N-1} \left(y_{nf}[n] - \mu_{y_{nf}}\right)^2 = \frac{1}{2}\left(0 - \frac{1}{2}\right)^2 + \frac{1}{2}\left(1 - \frac{1}{2}\right)^2 = \frac{1}{4}

p(bit error)

Now assume the channel has Gaussian noise with \mu = 0 and variance \sigma^2, and assume a digitization threshold of 0.5V. We can calculate the probability that noise[k] is large enough that y[k] = y_nf[k] + noise[k] is received incorrectly:

p(error | transmitted "1") = \Phi_{1,\sigma}(0.5) = \Phi\left(\frac{0.5 - 1}{\sigma}\right) = \Phi(-0.5/\sigma)

p(error | transmitted "0") = 1 - \Phi_{0,\sigma}(0.5) = \Phi_{0,\sigma}(-0.5) = \Phi\left(\frac{-0.5 - 0}{\sigma}\right) = \Phi(-0.5/\sigma)

p(bit error) = p(transmit "0") \cdot p(error | transmitted "0") + p(transmit "1") \cdot p(error | transmitted "1")
             = 0.5\,\Phi(-0.5/\sigma) + 0.5\,\Phi(-0.5/\sigma) = \Phi(-0.5/\sigma)

[Figure: plots of the noise-free voltages plus Gaussian noise, with the 0.5V threshold marked.]

BER (no ISI) vs. SNR

We calculated the power of the noise-free signal to be 0.25, and the power of the Gaussian noise is its variance, so

SNR (dB) = 10 \log_{10}\left(\frac{P_{signal}}{P_{noise}}\right) = 10 \log_{10}\left(\frac{0.25}{\sigma^2}\right)

Given an SNR, we can use the formula above to compute \sigma and then plug that into the formula on the previous slide to compute p(bit error) = BER. [Figure: the resulting BER plotted against various SNR values.]
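A minimal sketch of that last computation (assuming the 0V/1V signaling and 0.5V threshold above; the sample SNR values are arbitrary):

    from math import sqrt
    from scipy.stats import norm

    def ber_from_snr_db(snr_db):
        """BER = Phi(-0.5/sigma) for 0V/1V signaling with a 0.5V threshold."""
        sigma2 = 0.25 / (10 ** (snr_db / 10))  # invert SNR(dB) = 10*log10(0.25/sigma^2)
        return norm.cdf(-0.5 / sqrt(sigma2))   # Phi is the unit-normal CDF

    for snr in (5, 10, 15):
        print(f"SNR = {snr} dB  ->  BER ~ {ber_from_snr_db(snr):.3e}")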